Security Monitoring: A Possible New Way to Detect Privilege Escalation

The problem most defense mechanisms have in detecting an adversary is that they focus on detecting the tools far more than detecting the results. There are reasons for this, the most obvious being that results are prone to false positives, and we don’t want our AV products to become denial-of-service tools. As it is, many of these products have caused extensive downtime in organizations by ‘detecting’ something that wasn’t bad. Unfortunately, that makes life fairly easy for attackers. It’s not hard, for instance, to recompile a publicly available attack tool so that it avoids AV detection. If you don’t believe me, read the “Detecting Mimikatz” section of this article.

This is the premise behind the Security Monitoring Management Pack in SCOM. Simply alerting for the sake of alerting generates a lot of noise, but if we can detect something unique to an attacker, then we have the ability to respond in real time (assuming, of course, the organization responds to alerts with something more than an email that no one reads)…

This is where some of the new audit capabilities of Server 2016 and Windows 10 come into play. It’s worth noting that the method I’ll describe below is not replicated on my Server 2008 system in the same domain, because this is a new feature. However, it is potentially a powerful one, as it exploits the ability to audit a basic function needed for credential theft: namely, debug privileges. Even standard administrators do not need debug privileges, and as such they are not assigned to an administrative token. For credential theft, however, they are required (side note: this is occasionally needed outside of credential theft, but elevating this permission outside of WMI is not something that should happen very often, if at all, in a production environment). That said, because administrators are God to the computer, any administrator can effectively elevate their token to grant themselves debug rights when needed. This is why tools such as WCE or Mimikatz require administrative rights: their users effectively need to assign themselves SeDebugPrivilege in order to mine the LSA for your credentials. New features, such as Credential Guard, make this much more difficult to do, but one should not rest on these new feature sets. There’s no such thing as a 100% secure environment.
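Since this token adjustment is the crux of the detection, it’s worth seeing what it actually looks like in code. The sketch below is purely illustrative (mine, not Mimikatz’s): any process already running as administrator can enable SeDebugPrivilege on its own token with a single AdjustTokenPrivileges call, and that call is precisely the basic function the auditing described next can record.

```python
# Minimal sketch: enable SeDebugPrivilege on the current process token via
# the Win32 API. Running this elevated is exactly the kind of token
# adjustment the new auditing catches.
import ctypes
from ctypes import wintypes

advapi32 = ctypes.WinDLL("advapi32", use_last_error=True)
kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)

SE_PRIVILEGE_ENABLED = 0x00000002
TOKEN_ADJUST_PRIVILEGES = 0x0020
TOKEN_QUERY = 0x0008
ERROR_NOT_ALL_ASSIGNED = 1300

class LUID(ctypes.Structure):
    _fields_ = [("LowPart", wintypes.DWORD), ("HighPart", wintypes.LONG)]

class LUID_AND_ATTRIBUTES(ctypes.Structure):
    _fields_ = [("Luid", LUID), ("Attributes", wintypes.DWORD)]

class TOKEN_PRIVILEGES(ctypes.Structure):
    _fields_ = [("PrivilegeCount", wintypes.DWORD),
                ("Privileges", LUID_AND_ATTRIBUTES * 1)]

def enable_debug_privilege():
    token = wintypes.HANDLE()
    if not advapi32.OpenProcessToken(kernel32.GetCurrentProcess(),
                                     TOKEN_ADJUST_PRIVILEGES | TOKEN_QUERY,
                                     ctypes.byref(token)):
        raise ctypes.WinError(ctypes.get_last_error())
    try:
        luid = LUID()
        if not advapi32.LookupPrivilegeValueW(None, "SeDebugPrivilege",
                                              ctypes.byref(luid)):
            raise ctypes.WinError(ctypes.get_last_error())
        tp = TOKEN_PRIVILEGES(1, (LUID_AND_ATTRIBUTES * 1)(
            LUID_AND_ATTRIBUTES(luid, SE_PRIVILEGE_ENABLED)))
        if not advapi32.AdjustTokenPrivileges(token, False, ctypes.byref(tp),
                                              0, None, None):
            raise ctypes.WinError(ctypes.get_last_error())
        # AdjustTokenPrivileges "succeeds" even when the privilege isn't held,
        # so check ERROR_NOT_ALL_ASSIGNED explicitly.
        if ctypes.get_last_error() == ERROR_NOT_ALL_ASSIGNED:
            raise PermissionError("SeDebugPrivilege is not held by this token")
    finally:
        kernel32.CloseHandle(token)
```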

Enter the new method:

First, this will require a GPO. The “Audit Token Right Adjusted” audit setting will need to be enabled. Documentation for this setting can be found here. It is part of the Advanced Audit Policy Configuration, under “Detailed Tracking”.

[Screenshot: the “Audit Token Right Adjusted” setting under Advanced Audit Policy Configuration > Detailed Tracking]

This will start generating 4703 events. It’s worth noting that 4703 will be a fairly common event once enabled. These events are generated at logon and during various operations on a system; security features such as User Account Control practically require them. So simply searching for a 4703 is a bad idea. However, this does allow us to look for events unique to the bad guy. In the screenshot below, I used Mimikatz to elevate my token from administrator to debug rights. This is accomplished directly via the Mimikatz command line.

[Screenshot: the 4703 event generated after elevating to debug rights via the Mimikatz command line]

As noted before, the process name can potentially change, but we can clearly see when a token is escalated to the privilege necessary to attempt to mine the LSA. For those following along at home, this can be accomplished via the SCOM console with a simple rule looking for Event ID 4703 and Parameter 11 = SeDebugPrivilege.  That’s it.
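If you want to sanity-check that filter outside the SCOM console, the same logic can be sketched in a few lines of script. This assumes the built-in wevtutil CLI and that SCOM’s Parameter 11 on a 4703 event corresponds to the EnabledPrivilegeList field; verify against your own events.

```python
# Rough equivalent of the SCOM rule: find recent 4703 events whose enabled
# privilege list contains SeDebugPrivilege. Run elevated so the Security
# log is readable.
import subprocess
import xml.etree.ElementTree as ET

NS = {"e": "http://schemas.microsoft.com/win/2004/08/events/event"}

def suspicious_4703_events(max_events=500):
    xml_out = subprocess.run(
        ["wevtutil", "qe", "Security", "/q:*[System[(EventID=4703)]]",
         "/f:xml", "/rd:true", f"/c:{max_events}"],
        capture_output=True, text=True, check=True).stdout
    # wevtutil emits a flat sequence of <Event> elements; wrap them so the
    # output parses as one XML document.
    root = ET.fromstring("<Events>" + xml_out + "</Events>")
    hits = []
    for event in root.findall("e:Event", NS):
        data = {d.get("Name"): (d.text or "")
                for d in event.findall(".//e:EventData/e:Data", NS)}
        # Assumption: SCOM's "Parameter 11" maps to EnabledPrivilegeList.
        if "SeDebugPrivilege" in data.get("EnabledPrivilegeList", ""):
            hits.append((data.get("ProcessName"), data.get("SubjectUserName")))
    return hits

if __name__ == "__main__":
    for process, user in suspicious_4703_events():
        print(f"{user} enabled SeDebugPrivilege via {process}")
```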

Update: this rule is disabled by default in the next Security Monitoring release, though it can be turned on. Currently, there seems to be noise generated by SCOM itself, related to SCOM 2016 as well as the new Windows Server MPs.

Using SCOM to Detect Failed Pass the Hash attacks (Part 2)

A couple of weeks back, I wrote a piece on creating some rules to potentially detect pass the hash attacks in your environment. This is the second article in that series and, if time permits, one of many more I hope to write over the next year or so on using SCOM to detect active threats in an environment. To start, I want to apologize for the delay; my lab crashed on me, and I had to spend way too much time fixing it.

Today we will discuss looking at failed attempts. When attackers compromise a machine, they can use Windows Credential Editor or Mimikatz to enumerate the users that are logged on, as well as the hashes stored inside the LSA. What they do not necessarily know is what permissions those accounts have; as such, the only thing they can do is attempt to log in with these stored credentials until they find one that works, expanding their access to the victim’s environment. They accomplish this by moving laterally through the organization, desktop to desktop or server to server, until they finally steal the credentials of a domain admin account, giving them the keys to the kingdom. That reason alone is why domain admin accounts should never sign on to a desktop, nor should they sign on to a server or run as a service. On average, it takes an attacker approximately two days to go from infiltrating a desktop to stealing the keys to the kingdom. It also takes the victim the better part of a year to realize that they’ve been owned.

As a reminder, the goal here is to avoid noise generated by normal events. I’ve seen several implementations of security monitoring in SCOM that do nothing but generate thousands of alerts that no one will look at. Alert management is the most difficult aspect of a SCOM environment, and even with good processes in place, asking a person or team to sift through hundreds, if not thousands, of alerts generated by standard, everyday activity accomplishes nothing but reinforcing the check-the-monitoring-box mentality that so many organizations already have.

As for the experiment, I’m going to use my compromised system to steal accounts, much like I did in part 1. The big difference is that instead of the account having the domain admin rights that the attacker desires, it will be nothing more than a standard account with no access to other machines in the environment. The process is pretty much the same as in part 1, and as such the rule I created to look for a credential swap flagged immediately. That’s good. And, as expected, I see the following result when launching a remote psexec against my server.

[Screenshot: the failed remote psexec attempt]

Essentially, I could not move laterally in this scenario, but how this looks surprised me. I expected it to lead to some failures (4625 events) that could be tracked. However, that turns out not to be the case. For one, the tier 1 monitor I set up in part 1 of this series flagged. I was not expecting that, and sure enough, I see 4624 events on my SCOM server for the user account whose credentials I used, each followed immediately by a 4634 event indicating the session was destroyed. There were no 4625 events on this server, nor were there any on my domain controller. I’m not quite sure why authentication is handled this way, but then again, I’ve never looked at it this closely. Unfortunately, this does me little good, as a 4624 followed by a 4634 is a very common sequence of events; but since my other rules still alerted in this scenario, I have visibility into the attempt. I tried a few other approaches with similar results: I’m still generating a 4624, with the only difference being the 4634 that follows immediately after it. Attacking a DC with a non-DA account yielded the same results. There was a 4624 followed immediately by a 4634 event, and the 4624 was picked up by my previous rules, so there’s no need to create another rule.
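For anyone who wants to play with that correlation outside of SCOM, here is a rough sketch of the pattern: a 4624 whose session is torn down by a 4634 almost immediately. Both events carry a TargetLogonId that ties the pair together; the dict shape used here is just an assumption for illustration.

```python
# Sketch: flag logon sessions (4624) that are destroyed (4634) within a couple
# of seconds, matched on the TargetLogonId shared by the two events.
from datetime import timedelta

def short_lived_logons(events, max_lifetime=timedelta(seconds=2)):
    """events: dicts with 'EventID', 'TimeCreated' (datetime), 'TargetLogonId'."""
    open_logons = {}  # TargetLogonId -> the 4624 event that opened the session
    for ev in sorted(events, key=lambda e: e["TimeCreated"]):
        if ev["EventID"] == 4624:
            open_logons[ev["TargetLogonId"]] = ev
        elif ev["EventID"] == 4634:
            logon = open_logons.pop(ev["TargetLogonId"], None)
            if logon and ev["TimeCreated"] - logon["TimeCreated"] <= max_lifetime:
                yield logon, ev
```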

Rule 2 from my previous post will likely be the noisiest. I had to turn it off in my lab for SQL servers and DCs due to standard traffic. That’s fine, as it still has some value, but I suspect that some types of normal front end to back end communication will trigger it. That doesn’t do any favors for the person or team responsible for security monitoring. An occasional false positive is acceptable, but at the end of the day, I prefer my alerts to be actionable every time. If they are not, SCOM quickly turns into that tool no one uses because everything that comes out of it is worthless. This brings me to another point: if you’re having problems with Rule 2 in particular (though Rule 3 may come into play here too), then look at Parameter 19 a bit more closely. That parameter contains the source IP address. While I wouldn’t consider it a best practice, a tier 1 environment may have service accounts and applications that connect to other systems within tier 1. A lot of that is probably not a good thing, but it is somewhat normal for a front end/back end configuration. Any type of web server/DB pairing is likely to trigger false positives. However, the anatomy of an attack is a bit different.

Attackers rarely get to start at the server or DC level. They almost always start in tier 2. Security professionals refer to this as “assumed breach.” Simply put, no matter how much you train people, roughly 10% of your environment is not going to verify the source and will instead click on the crazy cat link or whatever the popular meme of the day is. Security teams unfortunately cannot stop this. That said, this is also your attacker’s entry point into the environment. Chances are good that someone is sitting in your tier 2 environment right now, because getting there through one of the myriad Flash or Java vulnerabilities is pretty easy to do. But that also gives us a unique way to search for attackers, because SCOM can use wildcards. Since Parameter 19 contains an IP address, you can build these rules with wildcards to filter the results so that you’re alerted when these anomalies are detected from a tier 2 IP address accessing a tier 1 or tier 0 system. Other than your IT staff, this shouldn’t be happening at all. This would have to be customized to each environment, but it is not terribly difficult to do; a sketch of the idea follows.
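With the source address in hand from parameter 19, the only real work is deciding whether it belongs to tier 2. The subnet below is a placeholder; substitute your own workstation ranges.

```python
# Sketch: alert only when the event's source IP (parameter 19) originates
# from the tier 2 (workstation) address space. Subnet values are placeholders.
import ipaddress

TIER2_SUBNETS = [ipaddress.ip_network("10.2.0.0/16")]  # hypothetical tier 2 range

def from_tier2(source_ip):
    try:
        addr = ipaddress.ip_address(source_ip)
    except ValueError:  # some events carry "-" instead of an address
        return False
    return any(addr in net for net in TIER2_SUBNETS)
```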

One final note: I’m working on a management pack that contains these rules along with additional security rules and other security-related items that SCOM can provide. My only test environment is my lab, which is hardly a production-grade environment, so I welcome feedback. While I cannot support this management pack, I can provide it if you are interested in trying it out. My main goal is to keep the noise down to a minimum so that each alert is actionable. While that is not always easy to do, trying this out in other environments will go a long way toward getting it to that point. If this is something you are interested in testing, please hit me up on LinkedIn.

Using SCOM to Detect Successful Pass the Hash attacks (Part 1)

Part 2 is here.

Those who know me know I’ve been using my free time to explore the idea of using SCOM to help identify when an advanced persistent threat is active in your environment. This is a problem most IT organizations have, given that the average attacker isn’t discovered until more than 250 days after they’ve owned the environment. Many are never found. Part of the problem is the massive amount of log information that needs to be parsed in order to determine an active presence in the environment. There are products you can buy, such as Microsoft ATA or Splunk, or you can forward log information to Azure and use OMS. Products like these can be expensive, but by the same token they are much better at log analytics than a tool like SCOM. That said, my goal is to create a poor man’s solution for identifying a possible pass the hash attack in progress. I’ll do so by seeing what is generated when I reproduce the event in my lab. This entry will cover successful elevation attempts. My next entry will cover my attempt to detect an attacker who is attempting to elevate with a non-DA account.

To start, I downloaded all the necessary tools to a machine. On the same machine, I made a standard domain account a local admin. This is because a pass the hash attack requires local admin rights on the machine in order to read from the LSA. To be clear, for the average attacker, getting local access to a machine, any machine, is easy to do. Typically this starts at tier 2 with a targeted phishing attack, and despite the fact that we try to educate users never to open that email from a non-trusted source, they do it anyway, roughly to the tune of 11% of users. I’m not bothering with this piece, as I’m assuming that an attacker can get to this point fairly easily… the reality is they can.

Step 1 – Switching Credentials:

The first thing I’ve done is simply execute Mimikatz and launch a local command shell under a different set of creds than what I’m running under. The user account I’m signed in with is “test”. I have a domain admin in another session, and unbeknownst to this DA, my test account is compromised. This is straightforward:

[Screenshot: Mimikatz launching a local command shell under the stolen credentials]

I grabbed the hash and launched a command shell. It appears that this generates traffic. Using Mimikatz to launch a command line under a domain admin’s credentials generates this:

[Screenshot: the 4624 event generated by the credential swap]

Each of these items is parameterized, which makes it somewhat easy to craft a rule in SCOM. The trick is making sure that the events in question are unique to this type of attack. If they aren’t, all I’ve done is create a whole bunch of noise that will be ignored. As luck would have it, the LogonProcessName and LogonType fields are distinctly different from the average 4624 event in my environment. Let’s hold on to that thought for the moment.

Step 2 – Lateral Movement:

Now, I’m going to use those credentials to hit another machine. This is why PtH attacks are of such concern. This is easy. From the new shell that opened, I simply launched psexec against a remote system. My logged-on user has no rights to this system. However, I’m in.

[Screenshot: psexec session established on the remote system]

I’d note that I’m trying this on my SCOM server, but moving to my DC in step 3 was just as easy, though it did manage to kick off my logged-on user as an unexpected result. Same command, different machine. My generic user account now owns my environment. I found this event in my SCOM machine’s security log:

[Screenshot: the 4624 event recorded in the SCOM server’s security log]

The XML view is a bit more complex as the impersonation level for whatever reason doesn’t translate properly. Instead of seeing “Impersonation” in the XML, I simply see a code (%%1833).

[Screenshot: the XML view of the 4624 event showing the %%1833 impersonation code]

That’s fine. Unfortunately, that code is not unique to this type of movement.
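For reference, those %%-prefixed values are indexes into message-table strings that get resolved when the event is rendered. The small lookup below covers the impersonation levels that appear in this series; %%1833 is what I captured above, while the other two are commonly documented values I have not independently verified, so treat them as assumptions.

```python
# Impersonation-level codes as they appear in the raw 4624 XML.
# %%1833 observed in my lab; the other entries are commonly documented values.
IMPERSONATION_LEVELS = {
    "%%1832": "Identification",
    "%%1833": "Impersonation",
    "%%1840": "Delegation",
}
```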

Step 3 – The DC:

On the DC, I see the events pictured below. The problem is that I see a bunch of these, so at this point I’m going to have to configure some sort of alert flood protection. The other odd behavior here is that the impersonation level on the DC is set to Delegation, whereas on the member server, it was simply Impersonation.

[Screenshots: the 4624 events on the DC, showing an impersonation level of Delegation]

The other problem is that there aren’t many breadcrumbs. The impersonation level is “Delegation”, but this is hardly uncommon for 4624 events. It does, however, at least in my limited sample, appear to be unusual for a domain admin to sign on to a machine with an impersonation level of Delegation. I could be wrong, and that’s part of why I’m publishing this. Computer accounts commonly have this type of impersonation, but user accounts do not. This will (hopefully) give me something unique to create in SCOM.

Now that we can see the behavior of this attack, we can potentially monitor for it. DISCLAIMER: This is me in my lab. I’m writing this in part for my own benefit and in part because my lab is not a production environment. My goal here is to be able to monitor for an attack in progress, but to do so in a way that does not generate noise. I cannot emphasize enough that your organization will need a good alert management process in order to respond properly to these alerts. I’m hoping that some people with better lab environments and a security background can reproduce this and verify that the noise level in a quiet state is low.

So on to the rules. I want to test this in environments other than my lab to see if it holds up. It’s quite possible that it does not, and instead either generates a lot of noise or doesn’t fire in certain circumstances. Feel free to add comments with your own results. The rules are straightforward.

Rule 1: Monitoring the DC for Step 3 related events:

Target: Active Directory 2008 DC Computers

The rule type is NT Event.  Here’s a screenshot of the parameters:

[Screenshot: Rule 1’s event expression parameters]

Parameter 9 is the logon type, parameter 21 is the impersonation level, and parameter 6 is specifically ignoring these events if there’s a $ symbol in them (which is true in the case of a machine account doing impersonation). I configured alert flood protection via the source network address (parameter 19), as well as a filter to make sure it’s not catching any local authentication. I’m not sure if that’s the right answer or not, but this should keep it to one alert per IP address. If someone is hopping from destination to destination, this will show as multiple alerts. The flip side is that if they sit on one system and hit many, it shows as only one alert. I’m not sure there’s an easy way to configure alert flood protection for this, given that this event shows up multiple times on a DC with only one login attempt.
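For those who want to prototype the logic before building the rule, here is Rule 1’s filter restated as a sketch. The parameter positions are the ones just described; the concrete logon type and %% code values are my assumptions, since the screenshot, not this snippet, is the authority.

```python
# Hedged restatement of Rule 1. params maps a 4624 event's parameter position
# (as SCOM numbers them) to its string value.
ASSUMED_LOGON_TYPE = "3"    # assumption: network logon; use the value from your lab
DELEGATION_CODE = "%%1840"  # assumption: raw code for "Delegation"

def rule1_matches(params):
    return (params[9] == ASSUMED_LOGON_TYPE
            and params[21] == DELEGATION_CODE
            and "$" not in params[6]                          # skip machine accounts
            and params[19] not in ("-", "127.0.0.1", "::1"))  # skip local authentication

# Alert flood protection would key on params[19], the source network address.
```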

By the same token, we can configure something similar against the server OS to capture the events seen when an account moves side to side within a tier. If you’re forwarding security events to an event collector, you should be able to create a similar rule there.

One observation from my lab is that domain admin logons via RDP will generate this alert, while standard user logons via RDP do not. As a rule, you probably shouldn’t be using a DA account for much of anything, but this can potentially generate false positives. I’d love additional feedback on this particular rule.

Rule 2: Monitoring the Member Servers for Lateral Movement (Step 2):

Target:  Windows Server Operating System

Like the other rule, it is an alert-generating NT Event rule targeting the security log.

[Screenshot: Rule 2’s event expression parameters]

Parameter 9 is the logon type and Parameter 21 is the impersonation code. Parameter 19 filters out the local IP address. Due to noise, I had to filter out a few additional things. I excluded anything with ANONYMOUS in it, as DCs see this type of logon event for the SYSTEM account under normal conditions. I also filtered on the $ character, as local machine accounts authenticate in this manner. My SQL server also lit this one up due to normal traffic, so I created an override to turn it off for the SQL Computers group that is created by the SQL management pack. You must have the SQL MP installed in order to use this override. Unfortunately, this means you cannot detect this condition on a SQL server. However, we have plenty of other events to target. I also had to disable this against domain controllers for the same reason, though it wasn’t nearly as noisy there. I needed to include Kerberos, as RDP sessions will generate this event under an NTLM connection. As well, I configured alert suppression for this rule via parameter 19, since this event appears more than once on a targeted system.
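Restated the same way, Rule 2 looks roughly like the sketch below. Which parameter carries the authentication package is my inference from Rule 3 (parameter 11), and the logon type and %% code values are assumptions; the exclusions mirror the ones just described.

```python
ASSUMED_LOGON_TYPE = "3"       # assumption: network logon
IMPERSONATION_CODE = "%%1833"  # raw code for "Impersonation", per the XML view earlier

def rule2_matches(params, local_ip):
    user = params[6]
    return (params[9] == ASSUMED_LOGON_TYPE
            and params[21] == IMPERSONATION_CODE
            and "ANONYMOUS" not in user.upper()  # SYSTEM logons on DCs look like this
            and "$" not in user                  # skip machine accounts
            and params[11] == "Kerberos"         # assumption: auth package, drops NTLM RDP noise
            and params[19] != local_ip)          # ignore local authentication

# Alert suppression keys on params[19], since the event repeats on a target.
```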

Rule 3: Monitoring for a credential swap (step 1):

Target:  Windows Server Operating System.

As with the other rules, we are targeting the security log.

[Screenshot: Rule 3’s event expression parameters]

Parameter 9 is logon type.  Parameter 10 is the process name.  Parameter 11 is the authentication package.
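As a sketch, Rule 3 reduces to the check below. The concrete values are assumptions based on how Mimikatz’s pass the hash session is commonly reported to surface in 4624 events (logon type 9, with a logon process of seclogo and an authentication package of Negotiate); verify them against your own capture.

```python
def rule3_matches(params):
    return (params[9] == "9"                     # assumption: NewCredentials logon type
            and params[10].lower() == "seclogo"  # assumption: Mimikatz's logon process name
            and params[11] == "Negotiate")       # assumption: authentication package
```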

The end result at this point in my lab is a very quiet set of targeted monitors that can detect the crumbs left behind when an attacker penetrates the environment. This test was only in my lab, so please feel free to let me know via the comments if you can replicate it or if your production environment picks up noise that I’m not seeing. The goal is to leave users with alerts that are actionable. I can provide the MP I’m developing (though of note, I’m doing other things in it as well). If this is something you are interested in testing, please hit me up on LinkedIn.