Security Monitoring–Using SCOM to Detect Executables Run in Writeable OS Directories Part 2

You can find part 1 here. You can find part 3 here.

***Please Read This First***

I need to preface this article by saying that this is the type of thing that needs to be thought through before simply turning it on, mainly because this security monitoring solution could potentially create A LOT of objects in SCOM. I did have a brief chat with Kevin Holman about this to confirm my own concerns before publishing. It's worth noting that a large number of objects isn't necessarily a bad thing if there are no monitors attached. There are none in this case, but as a side note, that also implies that we should not be targeting the classes this MP creates with monitors. That would be bad. Likewise, larger environments should be careful when rolling something like this out. The solution (in my lab) created about 20 objects per server. In a small environment of a couple hundred servers, this isn't a big deal, but if you're monitoring 5000 servers, you just created a hundred thousand objects. More objects can mean performance issues as well as database bloat. Again, there are no monitors targeted at the class created, so performance impact should be minimal. That said, test this carefully in a big environment, or roll it out selectively to critical assets.

***Thank you***

Now on to the solution. We discussed previously that the problem with monitoring writeable critical OS directories is that they can differ for just about every OS. Fortunately, someone smarter than me already did most of the work. What I've basically done is borrow a portion of Aaron Margosis' "AaronLocker" solution for AppLocker and repurpose it as a SCOM discovery. Simply put, I've updated Security Monitoring to discover these file locations. To be clear, I do think that AppLocker is the right answer here, though it can be bypassed (we have some detection for that), and it does take a bit of effort to get set up. But in terms of actually locking down these directories, it is the right answer. That said, this is not a solution that is on by default. You'll need to do some configuration in SCOM as well as deploy a Sysinternals tool to the servers where you want to use it.

Step 1 – Download AccessChk.exe and deploy it to the servers you want to monitor. For the record, I only tested copying this to the Windows\System32 directory, though I suspect it will work from any directory that is in the PATH environment variable.
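If you're pushing the file around with PowerShell rather than SCCM or a GPO, a minimal sketch is below. The server names and source share are placeholders, and the destination assumes the System32 location I tested:

```powershell
# Hypothetical deployment sketch: the server list and source share are placeholders.
$servers = 'SERVER01', 'SERVER02'
$source  = '\\fileshare\Tools\accesschk.exe'

foreach ($server in $servers) {
    # Copy into System32 via the admin share; requires admin rights on the target.
    Copy-Item -Path $source -Destination "\\$server\C$\Windows\System32\accesschk.exe" -Force
}

# AccessChk prompts for EULA acceptance on first run; running it once with -accepteula
# on each server avoids that prompt when the discovery script calls it.
```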

Step 2 – You will need to turn this discovery on, which I've made fairly easy to do. I've written a registry discovery that is on by default. It looks for a specific registry key, HKLM\Software\SCOMSecurityMonitoringMP\DiscoverUserWriteableLocations. Once this key is created, a seed class will be discovered, which kicks off the discovery script. The registry discovery runs somewhat frequently, but the script-based discovery is set to run once a day. Simply go to the "Windows Computer" view and use the Security Monitoring tasks to create the key:

image

That part of the registry is protected, so I suspect you'll need to enter credentials to create the key. Alternatively, you could use a tool like SCCM or a GPO to do this (a scripted example follows the note below). This is what you'll see:

image

Note that if you use the Remove task, it will trigger an undiscover and effectively remove the new objects.
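For those scripting it (an SCCM package, a GPO startup script, etc.), a minimal PowerShell sketch assuming the key path described above:

```powershell
# Rough equivalent of the console tasks, for pushing via SCCM or a GPO startup script.
# The key path comes from the MP description above; everything else is illustrative.
$keyPath = 'HKLM:\SOFTWARE\SCOMSecurityMonitoringMP'

# Enable discovery of user-writeable locations on this server
New-Item -Path $keyPath -Force | Out-Null
New-Item -Path "$keyPath\DiscoverUserWriteableLocations" -Force | Out-Null

# Equivalent of the Remove task (triggers the undiscovery described above):
# Remove-Item -Path "$keyPath\DiscoverUserWriteableLocations" -Force
```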

Step 3 – At this point, you're largely done. I've set the script to use the %windir%, %ProgramFiles%, and %ProgramFiles(x86)% environment variables. This means that for a standard OS with one drive, you're done. That said, some organizations carve out multiple disks, and as such, these variables don't catch everything. I've built a solution into the discovery to address this. You'll need to locate the "Security Monitoring: Discover Writeable File Locations" discovery.

image

From here, you need to do an override. There's a box called "AdditionalLocations" that can be passed into Aaron's script. This is a comma-delimited list; all the script does is split the contents on the comma and build an array (a rough illustration follows the screenshot below). I do recommend putting quotes around your paths (e.g. 'D:\Program Files','D:\Program Files (x86)').

image
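For reference, here's roughly what happens to that override value inside the discovery; this illustrates the input format rather than the actual MP code:

```powershell
# Illustration only: roughly how a comma-delimited AdditionalLocations value
# becomes an array of paths before being handed to Aaron's script.
$AdditionalLocations = "'D:\Program Files','D:\Program Files (x86)'"

$paths = $AdditionalLocations -split ',' |
    ForEach-Object { $_.Trim(" '") } |   # strip whitespace and the surrounding quotes
    Where-Object { $_ }                  # drop any empty entries

$paths   # D:\Program Files, D:\Program Files (x86)
```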

Now you are done. Your results should look something like this:

image

In order to minimize noise, I've done a couple of other things. The rule described in part 1 of this article is disabled for all members of the seed class you defined. In its place, a new rule is turned on. It's a bit simpler: it looks for event ID 4688, a .exe in parameter 6, and requires parameter 6 to match one of the discovered FolderPath values. It will look something like this:

image
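If you want to prototype the same match outside of SCOM (for instance, to see what would alert before enabling the rule), here's a rough PowerShell equivalent; the paths in $writeablePaths are placeholders for what the discovery returns:

```powershell
# Ad-hoc illustration of the rule's logic outside SCOM: 4688 events where the new process
# is an .exe launched from one of the discovered writeable folders. $writeablePaths stands
# in for the FolderPath values the MP discovers; reading the Security log requires admin.
$writeablePaths = 'C:\Windows\Temp', 'C:\Windows\Tasks'

Get-WinEvent -FilterHashtable @{ LogName = 'Security'; Id = 4688 } -MaxEvents 500 |
    ForEach-Object {
        $proc = (([xml]$_.ToXml()).Event.EventData.Data |
                 Where-Object Name -eq 'NewProcessName').'#text'
        foreach ($path in $writeablePaths) {
            if ($proc -like "$path\*.exe") { "$($_.TimeCreated)  $proc" }
        }
    }
```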

A couple of notes. I've built some alert suppression into both of these alerts, filtering by the logging computer. This should prevent alert spam; you'll only get a high repeat count in this scenario. For the rule in this part, you can override for specific objects, so if you have an app that executes in C:\windows\temp (this is bad practice, by the way), you can override the specific object on the specific machine.

I’m also adding the rule described in part 1 to the forwarded events detection. If you’re forwarding 4688 events to a Windows Event Collector server, it will generate an alert.

Securing SCOM in a Privilege Tiered Access Model–Part 2

Previously, I discussed basic security posture and what is needed to secure a SCOM installation. The post can be found here. In summary, we discussed risks associated with malicious management packs and the use of a service account for agent action instead of the local system. This discussion will focus a bit deeper on account management.

Carefully plan Run-As account distribution

In my opinion, poor run-as account distribution poses the greatest risk to your environment, as a poorly distributed account could potentially give an attacker the keys to your environment. The first thing worth noting about run-as accounts is that they need to be able to log on locally. This effectively means that the account's credentials are sitting in memory on any server that it was distributed to. I demonstrated this particular risk in this piece, and I recommend reading it before planning a SCOM installation. Server 2016 has mitigated many risks associated with pass the hash, but older operating systems do not have the same mitigations in place, and as such, they are exposed. Keep in mind that it only takes one compromised server to compromise a tier. If you have a super account running on Server 2008, I can collect that hash and still use it to access a more secure 2016 system. The OS mitigations in place will prevent me from collecting additional hashes off that system, but once I'm on it, I can still do whatever I want with the system.

In the tiered structure, you don't want Tier 0 accounts being used on Tier 1. In short, this means no Domain Admins logging on to anything that is not a domain controller. That's simple enough. The AD MP doesn't need a DA run-as account anyway, so the only issue at hand is finding a method to patch/upgrade the agent on domain controllers.

Tier 1, however, is a bit more complicated. This is your server tier. Many organizations (and I've been guilty of this in the past as well) have a handful of super accounts that are local admins on every server in the environment. If any of those accounts is used as a run-as account and distributed anywhere, it could potentially be harvested. All an attacker needs is local admin rights on one server where this type of account is running, and your entire Tier 1 environment is compromised. This is, as far as I'm concerned, just as bad as compromising Tier 0. The attacker effectively has all of your data and access to any server in your environment. Even without domain admin rights, they will be able to go about their business. In the tiered model, there should be very few of these types of accounts, and their use should be restricted to the management network (aka the Red Forest). Other accounts in Tier 1 need to be restricted to only the machines that they need to run on.

As such, my general opinion is to stay as far away from using run-as accounts as possible. For most of our management packs, this is not an issue. However, some MPs (SQL and SharePoint, for instance) need them, and SharePoint does not even have an option for least privilege. The first thing I'd recommend is using NT Service SIDs in their place. I know this works for SQL, as Kevin Holman has a great article on how to do this (though I highly recommend using the least-privilege configuration and not SA rights). The Health Service SID effectively gives the local system's health service the minimum permissions needed to monitor a SQL environment. The health service, given that it is not a user account, cannot be mined by an attacker. I'm of the opinion that all management pack authoring needs to move in this direction, and if I were calling the shots, solutions such as Kevin's for SQL would be integrated into every one of our MPs. Unfortunately, as of this writing, this is not the case.
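For illustration, a heavily abbreviated sketch of what the Health Service SID approach looks like for SQL. The instance name and the specific grants are placeholders; Kevin's article has the complete least-privilege grant script, which is what you should actually use:

```powershell
# Sketch only: create a login for the local Health Service SID and grant it a couple of
# read-only server permissions. Requires the SqlServer module; run against each instance
# the local agent should monitor. See Kevin Holman's post for the full grant set.
Invoke-Sqlcmd -ServerInstance 'localhost' -Query @"
CREATE LOGIN [NT SERVICE\HealthService] FROM WINDOWS;
-- Grant only what the SQL MP's low-privilege configuration calls for, never sysadmin.
GRANT VIEW SERVER STATE TO [NT SERVICE\HealthService];
GRANT VIEW ANY DATABASE TO [NT SERVICE\HealthService];
"@
```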

Where run as accounts are required, an organization needs to put some intelligent controls in place.

  • Ensure the account can only log on to the machines that SCOM distributes it to.
  • NEVER use the less secure distribution option. I would personally argue that this feature should be removed from the product, as it makes it far too easy to expose yourself to massive amounts of risk.
  • Ensure the run-as account is not a high-value account.
  • Strictly control the administration of SCOM, as SCOM admins are the ones who can create and distribute these accounts.
  • Train SCOM admins so that they understand this vulnerability.
  • Regularly audit run-as account configuration and distribution (a sketch follows below).
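Here's a quick audit sketch using the OperationsManager PowerShell module, run from a management server or a machine with the console installed; the property names are from memory, so verify them in your environment:

```powershell
# List each run-as account, whether it uses more or less secure distribution,
# and which health services it is distributed to.
Import-Module OperationsManager

foreach ($account in Get-SCOMRunAsAccount) {
    $distribution = Get-SCOMRunAsDistribution -RunAsAccount $account
    [pscustomobject]@{
        Account      = $account.Name
        Distribution = $distribution.Security          # 'MoreSecure' or 'LessSecure'
        Targets      = ($distribution.SecureDistribution | ForEach-Object DisplayName) -join '; '
    }
}
```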

Least privilege service accounts

This one speaks for itself, though I've seen plenty of organizations that assign way too many rights to a SCOM service account because it's easy. You can find the official requirements here, but as you can see, several of these accounts need local admin rights (note that's admin rights on the management servers themselves, not everywhere… and most definitely NOT Domain Admins). I would further add that, because they log on to the management servers, these accounts sit in resident memory there. It would be wise to ensure they have no privileges elsewhere.

Some organizations will make the management server action account a server admin to facilitate agent deployment and upgrade. I would argue that this too is a bad practice. The account won't sit in resident memory on agents (except when in use), but it does sit in resident memory on the management servers, so by compromising a management server, you could potentially compromise this account as well, giving an attacker admin rights across the org. Restricting the Management Server Action Account does have a small pain point in that you need to manually enter account credentials for agent deployment and updates if you're using the SCOM console, but to me, that's a worthwhile trade. To be fair, managing software deployment accounts is a challenge for all organizations, though again this is where a Red Forest and Privileged Access Workstations come into play, as these accounts can be restricted via IPSEC to only run from specific locations. Personally, I prefer to outsource agent deployment and updates to SCCM anyway. It's not hard to change the IsManuallyInstalled flag in the SQL DB, and it allows for an automated solution for deploying and patching agents.
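For completeness, this is the sort of query people use for the IsManuallyInstalled flag (based on Kevin Holman's well-known version). Treat it as a hedged sketch: the instance name is a placeholder, direct database edits should be backed up and tested first, and which way you flip the flag depends on how you want SCCM-installed agents to show up in the console.

```powershell
# Sketch: mark manually (e.g. SCCM) installed agents as remotely manageable in the console.
# Back up the OperationsManager database before running anything like this.
Invoke-Sqlcmd -ServerInstance 'SQLSERVER\INSTANCE' -Database 'OperationsManager' -Query @"
UPDATE MT_HealthService
SET IsManuallyInstalled = 0
WHERE IsManuallyInstalled = 1;
"@
```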

SCOM port considerations

Microsoft publishes SCOM’s port requirements here (see the “supported firewall scenarios” section). Note that this document is applicable for both SCOM 2016 and SCOM 2012 R2. I think most of what I have to say is common sense, so I won’t elaborate, but it’s definitely worth opening ports only as described in this document.

This concludes the overview of potential security risks to consider when deploying SCOM. The next piece will cover how to architect an Operations Manager environment using Microsoft's tiered account structure.

Summary

  • Securing Privileged Access (AD Security) paper.
  • Carefully Manage RunAs Accounts
    • Avoid less secure distribution
    • Avoid using powerful accounts
    • Use IPSec to restrict RunAs accounts to only systems that need them.
  • Restrict privileges of SCOM accounts.
  • Turn on Agent Proxy only as needed

Part 3 can be found here.

Securing SCOM in a Privilege Tiered Access Model–Part 1

I’ve had a few discussions with some people internally on this subject. One thing that has been consistent in these conversations is that we (Microsoft) don’t have much in the way of good guidance on securing SCOM, and this really needs to be addressed. Since I’ve written quite a bit on Cyber Security and SCOM, have released a security monitoring solution for SCOM, and am now officially a Cyber Security Consultant at Microsoft, I figured I’d take a stab at this. It’s worth noting that this has been tossed around internally, though I wouldn’t be surprised if I have to update it at some point in the not so distant future as this is unofficial guidance.

Let's start with a quick explanation of the tiered access model. For more detail, I'd highly recommend reading the Securing Privileged Access Reference Material that Microsoft has published. In summary, Microsoft recommends isolating identities into various tiers. Identities include user accounts, computer accounts, applications, etc. Tier 0 represents the identities that can give you full access to the environment. These credentials should NEVER be used on Tier 1 or Tier 2 systems. They should only be used on Tier 0 systems (i.e. domain controllers). Tier 1 represents the server tier, where your business and application data resides. Even in this scenario, it's recommended to move away from that global server admin account, which, if compromised, is almost as bad as an attacker getting a DA account. A Tier 0 compromise certainly makes an attacker's life easier, but if they get enough of Tier 1, they still have your data. Servers and the accounts managing them need to be isolated, with various restrictions in place to prevent lateral movement and collection of these credentials. Microsoft does provide an engagement to help with this called SLAM, Securing (against) Lateral Account Movement, and I highly recommend it as a way to start locking down your organization. Tier 1 credentials should never be used in Tier 0 or Tier 2. Tier 2 is the desktop tier, with connectivity to the internet for browsing, email, and general application use. This is the assumed-breach area: no matter how hard you try, someone will click on something they shouldn't and eventually compromise a desktop. Tier 1 and Tier 0 creds should never be used on a Tier 2 device. This includes common things such as RDP to a Tier 1 server. RDP Restricted Admin settings can help in some ways, namely keeping your admin credential from being exposed on the RDP target, but the recommendation for managing your environment is to use separate Privileged Access Workstations (PAWs) in some sort of Red Forest environment, which we call ESAE.

System Center services have high privilege to many systems in the environment, including Tier 0, which makes them a prime target for attackers looking to do bad things in your environment. As John Lambert mentions in his "How InfoSec Security Controls Create Vulnerability" article, when security controls are implemented without visualizing the security dependency graph, individual risk management decisions fail to create a defensible system. As such, I'd highly recommend isolating the System Center stack. This is an application that could potentially hold the credentials to powerful accounts, making it a high-value target for attackers.

Let's start with the architecture. SCOM uses an agent to run workflows and return data to the management server for alerting, collection, etc. In and of itself, this is a fairly innocuous task, and the communication between the management server and the agents is fairly benign. The management servers send configuration information to the agents (i.e. which management packs to download), and the agents send the results of those MPs back to the management server. There are a few risks to this, with the biggest being run-as accounts. We'll talk more about them in the next part, but I'll simply note here that poor distribution of run-as accounts can expose your organization to credential theft and reuse (aka pass the hash). For now though, I want to highlight two other areas of concern.

Agent Action Account should always be the local system account

This should not be confused with the Management Server Action account. That account is the default account for things like agent updates, agent deployment (and I would argue that it's probably best not to use it for those purposes, since it runs in resident memory on the management servers), and running various workflows on the management servers. The agent action account is the account that an agent uses to execute its workflows. By default, this is the local system account, as that is what the Microsoft Monitoring Agent runs under. That said, it is configurable, and customers can have the monitoring agent run under service account credentials. This is a BAD IDEA. As mentioned in the Administrative Tools and Logon Types section of the Securing Privileged Access Reference Material, service accounts leave credentials behind on every system where the service runs. Compromising one system where such a service runs gives an attacker the ability to reuse those credentials against every other system that account can log on to. If you must use a service account, then that account needs to have its access restricted to only the machines that need it. If the account has rights across the domain, you've opened your environment up to being compromised quickly. I've written about this as well, and you can find that piece here.

Control who can import/change MPs, and from where they can do so.

While it's not obvious from the SCOM console, SCOM has extensive libraries to run PowerShell, command-line, and VBS scripts. To be fair, much of this relies on the author of those management packs following best practices, and an attacker has no such obligation. This means that someone could write a management pack that deploys malicious software, creates a back door, or even uses SCOM as a vehicle to collect key information about an environment. I could, for instance, write a management pack with a PowerShell probe or task that connects to a remote share and installs malware on a system, or use it to lower the security posture of a system. SCOM doesn't have much in the way of auditing either, meaning that we cannot trace back who did something like this. Your only clue that this could be going on would come from regularly auditing the installed management packs as well as their content (and I find that this is not done often). It would be likely in this scenario that you would see a lot of the yellow SCOM alerts (Workflow failed to run, Workflow failed to initialize, OpsManager failed to start a process, etc.), but in my experience, very few organizations spend much time looking at these alerts.
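A minimal sketch of the kind of regular MP audit I'm describing, assuming the OperationsManager module and a placeholder export path:

```powershell
# List unsealed (i.e. editable) management packs with version and last-modified time,
# then export them so the PowerShell/command-line content can be reviewed.
Import-Module OperationsManager

Get-SCOMManagementPack | Where-Object { -not $_.Sealed } |
    Sort-Object LastModified -Descending |
    Select-Object Name, Version, LastModified

$exportPath = 'C:\Temp\MPAudit'    # placeholder path
New-Item -Path $exportPath -ItemType Directory -Force | Out-Null
Get-SCOMManagementPack | Where-Object { -not $_.Sealed } |
    Export-SCOMManagementPack -Path $exportPath
```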

Out of the box, I'd add, SCOM is very vulnerable, as the BUILTIN\Administrators group is a SCOM administrator by default. This should be removed and replaced with an Active Directory group that is limited to your SCOM engineers and the appropriate SCOM service accounts (more on that in the next post). You also need to control where this type of access can be performed. This fits into Microsoft's PAW and Red Forest concepts, as administration of SCOM should not be allowed from your Tier 2 environment. Tier 2 is an assumed-breach environment, as it can be compromised easily. If your SCOM admin, for instance, has the SCOM console installed on his/her desktop and does a "run as" to use it, their SCOM administrative credential is now sitting in the LSA on their local desktop, which means an attacker can steal those credentials. If those credentials have more access, the attacker just got your Tier 1 environment. If they are just SCOM admins, the attacker could upload a malicious management pack to SCOM. This also means your SCOM admin could feasibly be the victim of a targeted phishing attack, as this could be a very quick way to compromise an environment.
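A quick way to check the current membership of that role (assuming the default English display name):

```powershell
# Sketch: confirm BUILTIN\Administrators has been removed from the SCOM administrators
# role and that only the intended AD group and service accounts remain.
Import-Module OperationsManager

Get-SCOMUserRole |
    Where-Object { $_.DisplayName -eq 'Operations Manager Administrators' } |
    Select-Object -ExpandProperty Users
```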

Because of this, SCOM administration really needs to occur through a Red Forest. A Red Forest, for the record, is a non-trusted, hardened domain that does not have internet access, email, etc. You would use IPSEC and firewalls to restrict administration of your environment to only your Red Forest. Your SCOM admin should never be administering SCOM from an internet-facing machine joined to your domain; they should be doing this from the Red Forest. If they do their administration on the management server directly, they should only be allowed to RDP to the management server from the Red Forest. This makes it very difficult for an attacker to steal your credentials.

That said, setting up a Red Forest will certainly take a lot of time. In the short term, consider enabling RDP Restricted Admin mode (instructions are here). This lowers the attack surface for lateral movement, as your credentials are never passed to the RDP target and therefore can't be harvested from its LSA. This isn't as secure as a Red Forest, but it is an easy short-term fix that can reduce your attack surface.
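For reference, a sketch of what enabling it looks like on a target server; verify the value name against the linked instructions before rolling it out broadly:

```powershell
# Enable Restricted Admin mode on an RDP target (value name from memory; confirm against
# current Microsoft guidance before pushing via GPO).
Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Lsa' `
    -Name 'DisableRestrictedAdmin' -Value 0 -Type DWord

# The admin then connects with:
#   mstsc.exe /RestrictedAdmin /v:<target>
```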

This covers the first piece in this series. In the next piece, I will cover more about least privileges, run as accounts, and other things that can be done to protect your Operations Manager environment.

Summary

  • Securing Privileged Access (AD Security) paper.
  • Agent Action Account should be the Local System Account
  • SCOM administrators should be restricted. The location of where SCOM administrators can administer SCOM should also be restricted.

Part 2 is here.

Part 3 is here.

Configuring SCOM to Monitor Dell Storage Solutions

I was asked by a customer recently to configure SCOM to monitor Dell EMC SANs. The request seemed easy enough, until I got to doing it and realized that the documentation is, well, less than stellar. As such, this will be a quick post as to how we managed to get this working. I’m not 100% sure that every step listed in here is needed, but this is what we did to accomplish that task.

First, the instructions. The best documentation we were able to find was not off of Dell’s site, sadly. It was here on systemcenter.wiki. Even that, however, appears to be a bit out of date. Dell’s instructions said the following:

  1. Install the ESI Service
  2. Configure the ESI service connection.
  3. Publish the connection.
  4. Import the Management Packs.

The “how” was missing from their documentation, and their online video that was supposed to show us how only showed us what it looked like once done. I’m sure there’s some better documentation out there, or at least I’d like to hope there is, but I was unable to find it and as such I’m publishing this article.

Installing the ESI Service

First, we needed to find it, and that wasn't easy either. There was no standalone ESI service download on Dell's site. We did find it eventually, as it's included in the ESI PowerShell Setup files that they made available for download; that was mentioned in the description of the download. There were several versions available, but for what it's worth, we did not get the latest version to work and had to settle on 5.1.0.3. I suspect in retrospect that this is because the ESI service may be dependent on the Unisphere CLI component; however, the installer did not call this out as a dependency and let us proceed without it. Also, I will note that one of the big problems we had is that we installed the PowerShell patch that they provided. This did not uninstall, making rollback impossible. I would advise against installing it without a snapshot taken in advance and an understanding of what the patch does and whether you need it.

On to the steps.

  • Download the files. As mentioned before, we only got 5.1.0.3 working. These are the files as they are named on the Dell portal. We needed the ESI.5.1.0.3PowerShell.Setup, ESI.5.1.0.3.GUI.Setup, UnisphereCLI-Win32-x86-en_US-3.0.0.1.16-1, and the ESI.SCOM.ManagementPacks.5.1.0.3.Setup. 
    image

    I would note that we never got 5.2.0.9 working. There’s no GUI for it, which we needed, but there could have been other factors there, so I don’t want to say it won’t work. I will state that the GUI MMC kept crashing every time we launched it in this configuration.

  • We installed all of these from an administrative command prompt to get around UAC issues. The first piece we installed was the UnisphereCLI file. This is supposedly only needed for certain adaptors, but since we were using a Unity adaptor, it was a requirement for us. It may not be necessary depending on your SAN's adaptor, though I'm not convinced it isn't a component that ESI needs as well.
  • Next, we installed the 5.1.0.3 PowerShell setup and then the GUI setup. One other issue we had (though it was with 5.2.0.9) was that the AD publishing piece kept crashing during install on a Server 2016 build; as such, we decided to use the option to publish locally for this piece. Everything else was left at the defaults.
  • Last, we ran the management pack setup. This is a typical MP extraction; we will import the MPs later.

Configuring the ESI Service Connection

Once everything is installed and working, you should be able to successfully launch the EMC Storage Integrator MMC icon created during the install. What we need to do here is tell the ESI service to talk to the SAN devices that you want to monitor. This part is fairly straightforward, though you will need storage credentials of some sort, so someone from your storage team may need to be involved.

  • Launch the EMC Storage Integrator.
  • Right click on “Storage Systems” and choose “Add Storage System”.
    image
  • Choose your adaptor type and fill out the appropriate information (note that each type may have a different set of fields; this screenshot is for Unity only). Definitely test the connection before clicking Add.
    image
  • Repeat until all connections are added.

Publish the Connections to ESI

At this point, we need to publish this information. We chose the local host during setup, so effectively that means this info is stored locally. Active Directory is an option, though as I mentioned earlier, that piece kept crashing during install. This could have been our lab, a bug in their software, or who knows; we didn't spend a lot of time figuring it out, as a local connection was acceptable. This too is done from the EMC Storage Integrator.

  • Right click on the EMC Storage Integrator MMC icon (from within the MMC) and choose the option to Publish Connection.
    image
  • This box appears:
    image
  • You'll need to change the Publish to Target option to the ESI Service. Strangely enough, this would crash if we added any value in the "Target Host" field, so we left it blank. We kept the defaults during the various setup routines, so if you configured custom ports, no SSL, etc., you'll need to change that accordingly.
  • The next step is to click "Refresh". That will display your targets in the left pane; they did not appear automatically for us. Select the appropriate targets and click "Add".
  • After that's done, the Publish button will no longer be grayed out, and you can click "Publish".

Import the Management Packs

At this point, we were in the home stretch… or so we thought. We went ahead and imported the 5.1.0.3 management packs that we extracted earlier. The ESI service was discovered as expected. Nothing else was. So, more digging. The problem is that the next discovery is disabled by default. That isn't bad practice per se, but it would be nice if there were some documentation on it, as you'll need to enable it in order to actually get good data into SCOM.

  • From the Authoring workspace, expand “Management Pack Objects,” and select “Object Discoveries”.
  • Do a find for “EMC SI Service Discovery”.
    image
  • You can see that this is disabled by default. I’m not exactly sure why this is targeted at Windows Computer and not the ESI Service that was already discovered, but I didn’t have access to the people that wrote this. Go ahead and perform an override. You can likely override just for the object that hosts the service, though we did all objects of the class. It’s worth noting that you do need to specify the machine name in the override settings (or at least we did).
  • This appears:
    image
  • It’s worth noting that there are lots of options here that can be overridden. We did the Enabled and ESI Service Host options, though if you have customizations, proxy, ports, accounts, etc., that should be included here. I would note that I don’t think this is being configured with SCOM run as accounts, so I’m not certain how secure it would be to put a username and password in this field.

That did it. A short time later, all of our storage pools were showing up in SCOM and monitoring was working.

In Place Upgrading the SSRS for SCOM

I ran into an odd issue worth noting today while doing an in-place upgrade of SQL 2012 SP3 to SQL 2016 in prep for a SCOM upgrade. My customer had separate instances for the DB/DW, and that upgrade went fine. However, when doing an in-place upgrade of SSRS, we got the following failure during the SQL upgrade:

image

There isn't much out there on the matter, other than noting that the SQL installer doesn't support custom extensions. The problem, as I see it, is that when SCOM reporting is installed over SSRS, it does a whole lot of crap to the RSReportServer.config file. In this case, the following lines appear to be the culprit.

image

To fix, we did the following.

1) Locate and backup the RSReportServer.config file

2) Open the original copy with Notepad and do a search for <Authentication>. The second <Authentication> tag will take you to the place where these extensions are located.

3) Remove the extensions (the entire contents between the opening and closing tags). You can leave the open and close tags for <Security> and <Authentication> (note we recycled the Report Server service as well, but I'm not sure this is necessary). We simply pasted these items into a separate Notepad document (a sketch for capturing them follows these steps), then saved the RSReportServer.config file. SCOM reporting is broken at this point.

4) Perform an in-place upgrade of the SQL server.

5) Re-add the extensions. It is worth noting that the RSReportServer.config file moves to another directory during a SQL in-place upgrade. You'll need to find that new file and add the custom extensions back in there.

6) Last, but not least, there are some SCOM-related files that get left behind in the old report server bin directory. Copy those files into the new SSRS location. Do not replace anything that's already there; just copy over the files that were left behind. After this, recycle the SSRS service. This should allow reports to load. SCOM reporting was functional again at this point.
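A small sketch that can help with steps 2 and 3: back up the config and list the registered Security and Authentication extensions before you remove them. The path assumes a default SQL 2012 instance; adjust for yours.

```powershell
# Back up RSReportServer.config and list the Security/Authentication extensions currently
# registered, so you know exactly what was removed and needs to be re-added post-upgrade.
$config = 'C:\Program Files\Microsoft SQL Server\MSRS11.MSSQLSERVER\Reporting Services\ReportServer\RSReportServer.config'
Copy-Item -Path $config -Destination "$config.bak"

[xml]$rs = Get-Content -Path $config -Raw
$rs.Configuration.Extensions.Security.Extension       | Select-Object Name, Type
$rs.Configuration.Extensions.Authentication.Extension | Select-Object Name, Type
```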

Update: Management group information is stored in a separate SSRS configuration file. Reporting will continue to work, but scheduled reports from SCOM could be affected. That file is:

<Drive>:\Program Files\Microsoft SQL Server\MSRS11.MSSQLSERVER\Reporting Services\ReportServer\bin\ReportingServicesService.exe.config. You'll need to update the new SSRS file with the information from the old SSRS file.

image

Security Monitoring: A Possible New Way to Detect Privilege Escalation

The problem that most defense mechanisms have in detecting the adversary is that they tend to focus on detecting the tools far more than on detecting the results. There are reasons for this, the most obvious being that it is very easy for there to be false positives within the results, and we don't want our AV products to become denial-of-service tools. As it is, many of these products have caused extensive downtime in organizations due to 'detecting' something that wasn't bad. Unfortunately, that makes life for attackers fairly easy. It's not hard, for instance, to recompile a publicly available attack tool so that it avoids AV detection. If you don't believe me, read the "Detecting Mimikatz" section of this article.

This is the premise behind the Security Monitoring Management Pack in SCOM. Simply alerting for the sake of alerting generates a lot of noise, but if it's possible to detect something unique to an attacker, then we have the ability to respond in real time (assuming, of course, the organization responds to alerts with something more than an email that no one reads)…

This is where some of the new audit capabilities of Server 2016 and Windows 10 come into play. It's worth noting that the method I'll describe below cannot be replicated on my Server 2008 system in the same domain, because this is a new feature. However, it is potentially a powerful one, as it exploits the ability to audit a basic function needed for credential theft: namely, debug privileges. Even for standard administrators, debug privileges are not needed, and as such they are not enabled in an administrative token by default. For credential theft, however, they are required (side note: this is something that is occasionally needed outside of credential theft, but elevating this permission outside of WMI is not something that should happen very often, if at all, in a production environment). That said, because administrators are God to the computer, any administrator can effectively elevate their token to grant themselves debug rights when needed. This is why tools such as WCE or Mimikatz require administrative rights: their users effectively need to enable SeDebugPrivilege in order to mine the LSA for your credentials. New features, such as Credential Guard, make this much more difficult to do, but one should not rest on these new feature sets. There's no such thing as a 100% secure environment.

Enter the new method:

First, this will require a GPO. The "Audit Token Right Adjusted" audit setting will need to be enabled. Documentation for this setting can be found here. This is part of the Advanced Audit Policy Configuration, under "Detailed Tracking".

image
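If you want to test this locally before building the GPO, auditpol can flip the same subcategory; the exact subcategory name below is from my notes, so confirm it on your build first:

```powershell
# List the subcategories to confirm the exact name, then enable success auditing for it.
auditpol /list /subcategory:* | findstr /i "Token"
auditpol /set /subcategory:"Token Right Adjusted Events" /success:enable
```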

This will start generating 4703 events. It's worth noting that 4703 will be a fairly common event once enabled. These events are generated at logon and during various routine operations on a system; security features such as User Account Control practically require them. So simply searching for a 4703 is a bad idea. However, this does allow us to look for events unique to the bad guy. In the screenshot below, I used Mimikatz to elevate my token from an administrator to debug rights. This is accomplished directly via the Mimikatz command line.

image

As noted before, the process name can potentially change, but we can clearly see when a token is escalated to the privilege necessary to attempt to mine the LSA. For those following along at home, this can be accomplished via the SCOM console with a simple rule looking for Event ID 4703 and Parameter 11 = SeDebugPrivilege.  That’s it.
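If you'd like to validate the data before building the SCOM rule, here is a rough PowerShell equivalent of the same check; the field names come from the 4703 event schema, and presumably EnabledPrivilegeList is what the Parameter 11 criteria keys on:

```powershell
# Ad-hoc version of the rule: 4703 events where SeDebugPrivilege was enabled.
# Requires the audit setting above and rights to read the Security log.
Get-WinEvent -FilterHashtable @{ LogName = 'Security'; Id = 4703 } -MaxEvents 1000 |
    ForEach-Object {
        $data    = ([xml]$_.ToXml()).Event.EventData.Data
        $enabled = ($data | Where-Object Name -eq 'EnabledPrivilegeList').'#text'
        if ($enabled -match 'SeDebugPrivilege') {
            $process = ($data | Where-Object Name -eq 'ProcessName').'#text'
            "$($_.TimeCreated)  $process  enabled SeDebugPrivilege"
        }
    }
```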

Update: This rule is off by default in the next Security Monitoring release, though it is available to be turned on. Currently, there seems to be some noise from SCOM itself, related to SCOM 2016 as well as the new Windows Server MPs.