Securing SCOM in a Privilege Tiered Access Model–Part 2

Previously, I discussed basic security posture and what is needed to secure a SCOM installation. The post can be found here. In summary, we discussed risks associated with malicious management packs and the use of a service account for agent action instead of the local system. This discussion will focus a bit deeper on account management.

Carefully plan Run-As account distribution

In my opinion, poor run as account distribution practices pose the greatest risk to your environment, as a poorly distributed account could hand an attacker the keys to your environment. The first thing worth noting about run as accounts is that they need to be able to log on locally. This effectively means that the account’s credentials sit in memory on every server the account was distributed to. I demonstrated this particular risk in this piece, and I recommend reading it before planning a SCOM installation.  Server 2016 has mitigated many risks associated with pass the hash, but older operating systems do not have the same mitigations in place, and as such, they are exposed. Keep in mind that it only takes one compromised server to compromise a tier. If you have a super account running on Server 2008, I can collect that hash and still use it to access a more secure 2016 system. The OS mitigations will prevent me from collecting additional hashes off that system, but once I’m on it, I can still do whatever I want with the system.

In the tiered structure, you don’t want Tier 0 accounts being used on Tier 1. In short, this means no Domain Admins logging on to anything that is not a domain controller. That’s simple enough. The AD MP doesn’t need a DA run as account anyway, so the only issue at hand is finding a method to patch/upgrade the agent on domain controllers.

Tier 1, however, is a bit more complicated. This is your server tier. Many organizations (and I’ve been guilty of this in the past as well) have a handful of super accounts that are local admins on every server in the environment. If any of those accounts is used as a run as account and distributed anywhere, that account could potentially be harvested. All an attacker needs is local admin rights on one server where this type of account is in use, and your entire Tier 1 environment is compromised. This is, as far as I’m concerned, just as bad as compromising Tier 0. The attacker effectively has all of your data and access to any server in your environment. Even without domain admin rights, they will be able to go about their business. In the tiered model, there should be very few of these types of accounts, and their use should be restricted to the management network (aka Red Forest). Other accounts in Tier 1 need to be restricted to only the machines that they need to run on.

As such, my general opinion is to stay as far away from using run as accounts as possible. For most of our management packs, this is not an issue. However, some MPs (SQL and SharePoint for instance) need them, and SharePoint does not even have an option for least privilege.  The first thing I’d recommend is using NT Service SIDs in their place. I know this works for SQL, as Kevin Holman has a great article on how to do this (though I highly recommend using the least privilege configuration and not SA rights). The Health Service SID effectively gives the local system’s health service the minimum permissions needed to monitor a SQL environment. The health service, given that it is not a user account, cannot be mined by an attacker. I’m of the opinion that all management pack authoring needs to move in this direction, and if I were calling the shots, solutions such as Kevin’s SQL approach would be integrated into every one of our MPs. Unfortunately, as of this writing, this is not the case.
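For reference, the core of that approach looks something like this. This is a hedged sketch only; Kevin’s article documents the complete least-privilege permission list, so treat these grants as illustrative of the pattern, not as the full configuration:

```sql
-- Sketch: let the local Health Service monitor SQL without a Run As account.
-- The full least-privilege grant list is in Kevin Holman's article; these
-- lines only show the pattern.
USE [master];
CREATE LOGIN [NT SERVICE\HealthService] FROM WINDOWS;
GRANT VIEW SERVER STATE TO [NT SERVICE\HealthService];

-- Map the login into each database the management pack needs to see:
USE [msdb];
CREATE USER [HealthService] FOR LOGIN [NT SERVICE\HealthService];
```

Because `NT SERVICE\HealthService` is a machine-local virtual account, there is no password to distribute and nothing for an attacker to harvest from memory.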

Where run as accounts are required, an organization needs to put some intelligent controls in place.

  • Ensure this account can only log on to the machines that SCOM distributes it to.
  • NEVER use the less secure distribution option. I personally would argue that this feature should be removed from the product, as it makes it way too easy to expose yourself to massive amounts of risk.
  • Ensure the run-as account is not a high value account.
  • Strictly control the administration of SCOM, as SCOM admins are the ones who can create and distribute these accounts.
  • Train SCOM admins so that they understand this vulnerability.
  • Regularly audit run as account configuration and distribution.
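To make that last bullet practical, the OperationsManager PowerShell module can enumerate every Run As account and where it is distributed. This is a sketch run from a management server; property names are per the SCOM 2012 R2/2016 module and may vary slightly between versions:

```powershell
# Sketch: audit Run As account distribution from a management server.
Import-Module OperationsManager
Get-SCOMRunAsAccount | ForEach-Object {
    $dist = Get-SCOMRunAsDistribution -RunAsAccount $_
    [pscustomobject]@{
        Account  = $_.Name
        # 'LessSecure' here is the red flag called out above
        Security = $dist.Security
        Targets  = ($dist.SecureDistribution |
                    ForEach-Object DisplayName) -join ', '
    }
} | Format-Table -AutoSize
```

Anything reporting a less secure distribution, or a distribution list longer than the machines the account actually needs, is a candidate for remediation.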

Least privilege service accounts

This one speaks for itself, though I’ve seen plenty of organizations that assign way too many rights to a SCOM service account because it’s easy. You can find official requirements here, but as you can see, several of these accounts need local admin rights (note that’s admin rights on the management servers themselves, not everywhere… and most definitely NOT Domain Admins). Because these accounts sit in resident memory on the management servers, it would be wise to ensure they have no privileges elsewhere.

Some organizations will make the management server action account a server admin to facilitate agent deployment and upgrade. I would argue that this too is a bad practice. The account won’t sit in resident memory on agents (except when in use), but it does sit in resident memory on management servers, so by compromising a management server, you could potentially compromise this account as well, giving an attacker admin rights across the org.  Restricting the Management Server Action Account does have a small pain point in that you need to manually enter account credentials for agent deployment and update if you’re using the SCOM console, but to me, that’s a worthwhile trade. To be fair, managing software deployment accounts is a challenge for all organizations, though again this is where a Red Forest/Privileged Access Workstations come into play, as these accounts can be restricted via IPSEC to only run from specific locations. Personally, I prefer to outsource agent deployments and updates to SCCM anyway. It’s not hard to change the IsManuallyInstalled flag in the SQL DB, and it allows for an automated solution to deploying agents and patching.
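If you do hand agent deployment to SCCM, the flag change I mentioned looks roughly like this. A hedged sketch against the OperationsManager database — direct DB edits are not formally supported, so take a backup first:

```sql
-- Sketch: clear the manually-installed flag so SCCM-deployed agents behave
-- like console-pushed ones. Run against the OperationsManager database.
UPDATE MT_HealthService
SET IsManuallyInstalled = 0
WHERE IsManuallyInstalled = 1;
```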

SCOM port considerations

Microsoft publishes SCOM’s port requirements here (see the “supported firewall scenarios” section). Note that this document is applicable for both SCOM 2016 and SCOM 2012 R2. I think most of what I have to say is common sense, so I won’t elaborate, but it’s definitely worth opening ports only as described in this document.

This concludes this look at potential security risks to consider when deploying SCOM. The next piece will cover how to architect an Operations Manager environment using Microsoft’s tiered account structure.


  • Securing Privileged Access (AD Security) paper.
  • Carefully Manage RunAs Accounts
    • Avoid less secure distribution
    • Avoid using powerful accounts
    • Use IPSec to restrict RunAs accounts to only systems that need them.
  • Restrict privileges of SCOM accounts.
  • Turn on Agent Proxy only as needed

Part 3 can be found here.

Securing SCOM in a Privilege Tiered Access Model–Part 1

I’ve had a few discussions with some people internally on this subject. One thing that has been consistent in these conversations is that we (Microsoft) don’t have much in the way of good guidance on securing SCOM, and this really needs to be addressed. Since I’ve written quite a bit on Cyber Security and SCOM, have released a security monitoring solution for SCOM, and am now officially a Cyber Security Consultant at Microsoft, I figured I’d take a stab at this. It’s worth noting that this has been tossed around internally, though I wouldn’t be surprised if I have to update it at some point in the not so distant future as this is unofficial guidance.

Let’s start by giving a quick explanation of the tiered access model. For more detail, I’d highly recommend reading the Securing Privileged Access Reference Material that Microsoft has published. In summary, Microsoft recommends isolating identities into various tiers. Identities include user accounts, computer accounts, applications, etc.  Tier 0 represents those identities that can give you full access to the environment. These credentials should NEVER be used on Tier 1 or Tier 2 systems. They should only be used on Tier 0 systems (i.e. domain controllers).

Tier 1 represents the server tier, where your business and application data resides. Even in this tier, it’s recommended to move away from the global server admin account, which if compromised is almost as bad as an attacker getting a DA account. Compromising a Tier 0 account certainly makes an attacker’s life easier, but if they compromise enough of Tier 1, they still have your data. Servers, and the accounts managing servers, need to be isolated with various restrictions in place to prevent lateral movement and collection of these credentials. Microsoft does provide an engagement to help against this called SLAM, Securing (against) Lateral Account Movement. I highly recommend that as a way to start locking down your organization. Tier 1 credentials should never be used in Tier 0 or Tier 2.

Tier 2 is the desktop tier, with connectivity to the internet for browsing, email, and general application use. This is the assumed-breach area: no matter how hard you try, someone will click on something they shouldn’t and eventually compromise a desktop. Tier 1 and Tier 0 creds should never be used on a Tier 2 device. This includes common things such as RDP to a Tier 1 server.
RDP Restricted Admin settings can help in some ways, namely keeping a Tier 1 cred off of the Tier 2 system, but the recommendation for managing your environment is to use separate Privileged Access Workstations (PAWs) in some sort of Red Forest environment, which we call ESAE.

System Center services hold high privilege across the environment, on many systems up to and including Tier 0, which makes them a prime target for attackers looking to do bad things in your environment.  As John Lambert mentions in his “How InfoSec Security Controls Create Vulnerability” article, when security controls are implemented without visualizing the security dependency graph, individual risk management decisions fail to create a defensible system. As such, I’d highly recommend isolating the System Center stack.  This is an application that could potentially hold the credentials to powerful accounts, making it a high value target to attackers.

Let’s start with the architecture. SCOM uses an agent to run workflows and return data to the management server for alerting, collection, etc. In and of itself, this is a fairly innocuous task. Communication between the management server and the agents is fairly benign. The management servers send configuration information to the agents (i.e. which management packs to download), and the agents send the results of those MPs back to the management server. There are a few risks to this, with the biggest being run as accounts. We’ll talk more about them in the next part, but I’ll simply note here that poor distribution of run as accounts can expose your organization to credential theft and reuse (aka pass the hash).  For now though, I want to highlight two other areas of concern.

Agent Action Account should always be the local system account

This should not be confused with the Management Server Action Account. That account is the default for things like agent updates, agent deployment, and running various workflows on the management servers (and I would argue it’s probably best not to use it for deployment purposes, since it runs in resident memory on the management servers). The agent action account is the account an agent uses to execute its workflows. By default, this is the Local System account, as that is what the Microsoft Monitoring Agent runs under. That said, it is configurable, and customers can have the monitoring agent run under service account credentials. This is a BAD IDEA. As mentioned in the Administrative Tools and Logon Types section of the Securing Privileged Access Reference Material, service accounts leave credentials behind on every system the service runs on.  Compromising one system where such a service runs gives an attacker the ability to reuse those credentials against every other system that allows the account. If you must use a service account, then that account needs to have access restricted to only the machine that needs it. If the account has rights across the domain, you’ve opened your environment up to being compromised quickly. I’ve written about this as well, and you can find that piece here.
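A quick way to spot-check this on any monitored server is to look at what the Health Service itself runs as — a small sketch:

```powershell
# Sketch: verify the Microsoft Monitoring Agent's service account.
# StartName should be 'LocalSystem'; a domain account here is the red flag.
Get-CimInstance Win32_Service -Filter "Name='HealthService'" |
    Select-Object Name, StartName
```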

Secure access to who can import/change MPs and from where they can import them.

While it’s not obvious from the SCOM console, SCOM has extensive libraries to run PowerShell, command line, and VBS scripts. Much of this relies on the author of a management pack following best practices, and an attacker has no such obligation. This means that someone could write a management pack that deploys malicious software, creates a back door, or even uses SCOM as a vehicle to collect key information about an environment.  I could, for instance, write a management pack with a PowerShell probe or task that connects to a remote share and installs malware on a system, or use it to lower the security posture of a system.  SCOM doesn’t have much in the way of auditing either, meaning that we cannot trace back who did something like this. Your only clue that this is going on would come from regularly auditing the installed management packs as well as their content (and I find this is not done often).  In this scenario you would likely see a lot of the yellow SCOM alerts (Workflow failed to run, Workflow failed to initialize, OpsManager failed to start a process, etc.), but in my experience, very few organizations spend much time looking at these alerts.

Out of the box, I’d add, SCOM is very vulnerable, as the BUILTIN\Administrators group is a SCOM administrator by default. This should be removed and replaced with an Active Directory group limited to your SCOM engineers and the appropriate SCOM service accounts (more on that in the next post).  You also need to control where this type of access can be performed. This fits into Microsoft’s PAW and Red Forest concepts, as administration of SCOM should not be allowed from your Tier 2 environment. Tier 2 is an assumed-breach environment, as it can be compromised easily. If your SCOM admin, for instance, has the SCOM console installed on his/her desktop and does a “run as” to use it, their SCOM administrative credential is now sitting in the LSA on that desktop, which means an attacker can steal it. If those credentials have more access, the attacker just got your Tier 1 environment. If they are just SCOM admins, the attacker can upload a malicious management pack to SCOM.  This also means your SCOM admin could feasibly be the victim of a targeted phishing attack, as this could be a very quick way to compromise an environment.
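As a quick check on the first point, the OperationsManager module can show you who currently holds the administrator role. A sketch, assuming the default role display name:

```powershell
# Sketch: review who holds the Operations Manager Administrators role so
# BUILTIN\Administrators can be confirmed gone and replaced with a scoped
# AD group.
Import-Module OperationsManager
Get-SCOMUserRole |
    Where-Object { $_.Name -eq 'Operations Manager Administrators' } |
    Select-Object -ExpandProperty Users
```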

Because of this, SCOM administration really needs to occur through a Red Forest. A Red Forest, for the record, is a non-trusted, hardened domain with no internet access, email, etc. You would use IPSEC and firewalls to restrict administration of your environment to only your Red Forest. Your SCOM admin should never be administering SCOM from an internet-facing machine joined to your production domain. If they do their administration on the management server directly, they should only be allowed to RDP to the management server from the Red Forest. This makes it very difficult for an attacker to steal your credentials.

That said, setting up a Red Forest will certainly take a lot of time. In the short term, consider enabling RDP Restricted Admin mode (instructions are here). This will lower the attack surface for lateral movement as RDP credentials will not be stored in the local machine’s LSA. Authentication will happen on the RDP target only. This isn’t as secure as a Red Forest, but it is an easy short term fix that can reduce your attack surface.

This covers the first piece in this series. In the next piece, I will cover more about least privileges, run as accounts, and other things that can be done to protect your Operations Manager environment.


  • Securing Privileged Access (AD Security) paper.
  • Agent Action Account should be the Local System Account
  • SCOM administrators should be restricted. The location of where SCOM administrators can administer SCOM should also be restricted.

Part 2 is here.

Part 3 is here.

Using SCOM to Capture Suspicious Process Creation Events

I recently had the privilege of chatting with Greg Cottingham on the Azure Security Center Analyst Team about process creation events and how to use them to detect anomalous events that need to be investigated.  It was a very interesting discussion and I was given a few real world examples of how the bad guys can move around in your networks.

To start with the basics, process creation is a fairly routine thing in the Windows world. When I execute a program, it can spawn one or many processes, which run until the program completes.  This can be audited: a process creation generates event 4688 in the security event log.  That said, this auditing is disabled by default.  Server 2012 R2 added an additional feature: with an additional GPO setting, you can also audit the command line that was executed.  So in short, you need to turn some things on to capture this functionality.

Before you run off and do that, though, you should review your environment first. This will generate a lot of security events. If you have tools such as ArcSight, Splunk, OMS, or SCOM collecting these events, you’d be wise to roll this out incrementally to ensure that you aren’t overloading those tools, and I’d add that if you don’t have a plan in place to review and respond to what you find, you should think about that before turning on auditing that won’t be looked at.  The other problem is that with command line auditing on, anyone who can read security events can read the captured command lines, which can potentially include something sensitive. So please, think this through carefully. A full write up on TechNet can be found here.

Turning this on is fairly easy.  A simple GPO will suffice.  Instructions are here.  In my lab, this was quite easy:


Enabling command line:


Within minutes of doing this, 4688 events start showing up in various event logs.
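If you want to stage the same settings on a single lab box before rolling out the GPO, this sketch flips them locally from an elevated PowerShell prompt (the GPO remains the right route for production):

```powershell
# Sketch: enable process creation auditing (event 4688) locally.
auditpol /set /subcategory:"Process Creation" /success:enable

# Enable command-line capture (Server 2012 R2+). Creates the Audit key
# if it does not already exist.
$key = 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\Audit'
New-Item -Path $key -Force | Out-Null
New-ItemProperty -Path $key -Name 'ProcessCreationIncludeCmdLine_Enabled' `
    -PropertyType DWord -Value 1 -Force | Out-Null
```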

Next up, we need to determine what to look for.  These events contain a couple of useful parameters.  The full list can be seen in this screenshot:


Parameter 6 (New Process Name) and Parameter 9 (Command Line) will generally contain the items worth triggering on, but we can use SCOM to report or filter out on other things such as the user name (parameter 2).  Again, we don’t want to simply alert on 4688 events, as that will generate significant amounts of noise, but we can target this to specific events that should be investigated every time.  Here are some examples:

AppLocker Bypass – an intruder uses JavaScript or regsvr32 to work around AppLocker rules


Command line FTP – an attacker uses the command line to download and execute a payload.


Manipulating the PowerShell Window position – an attacker executes a PowerShell command, but manipulates the window position so that the logged-on user cannot see it.


Command Execution in Folders with no executables – this folder list could be long when you think about it, but these are common destinations.


Manipulation of run key – an attacker adds a line to start a process at startup.  I’d note this is an example where you may need to add some “does not contain” options so as to avoid alerting for legitimate events.
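Under the covers, each of these is just an event collection rule whose expression filters on the event ID plus the interesting parameter. Here is a rough sketch of that pattern in management pack XML — the `regsvr32` pattern is illustrative only, not a complete detection:

```xml
<!-- Sketch: match event 4688 where the command line (Param 9) contains an
     illustrative suspicious string. -->
<Expression>
  <And>
    <Expression>
      <SimpleExpression>
        <ValueExpression>
          <XPathQuery Type="UnsignedInteger">EventDisplayNumber</XPathQuery>
        </ValueExpression>
        <Operator>Equal</Operator>
        <ValueExpression>
          <Value Type="UnsignedInteger">4688</Value>
        </ValueExpression>
      </SimpleExpression>
    </Expression>
    <Expression>
      <RegExExpression>
        <ValueExpression>
          <XPathQuery>Params/Param[9]</XPathQuery>
        </ValueExpression>
        <Operator>ContainsSubstring</Operator>
        <Pattern>regsvr32</Pattern>
      </RegExExpression>
    </Expression>
  </And>
</Expression>
```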


Happy hunting.

I’m working on a management pack that contains these monitors along with additional security rules and other security-related items that SCOM can provide. My only test environment is my lab, which is hardly a production grade environment, so I do welcome feedback. While I can only support this to the best of my ability, I can provide it if this is something you are interested in trying out. My main goal is to keep the noise down to a minimum so that each alert is actionable. While that is not always easy to do, trying this out in other environments will go a long way toward getting it to that point.  If this is something you are interested in testing, please hit me up on LinkedIn or have your TAM reach out to me internally.

Using SCOM to Detect Scheduled Task Creation

One well known thing attackers like to do is create scheduled tasks to periodically execute their payloads.  Detecting scheduled task creation is not terribly difficult.  While it does not use the standard logs, there is an operational Task Scheduler log available that generates event ID 106 whenever a scheduled task is created.  Information on that event can be found here.  Creating a rule to do this is very straightforward:



The downside is that this is not necessarily a quiet proposition.  For those who know me, noise generated by management packs is something I’m not a fan of.  Alerts should be actionable, and generating alerts that are ultimately ignored due to constant false positives does no one any good.  This brings us to a few problems, as many organizations have legitimate uses for scheduled tasks. The person responsible for security monitoring needs a way of knowing which tasks are legitimate and which are not.  One easy approach is to configure standard tasks before the SCOM agent is installed, or to put the server in maintenance mode while it is being configured.  Good process can certainly get around those alerts, but there’s another problem: some applications also create scheduled tasks.

In my own lab, this is noticeable, as an anti-malware scan runs periodically and is kicked off by a scheduled task:


If this is a common thing in a lab with half a dozen servers in it, then it will likely be a noisy thing in your environment.  But it is correctable.  Enter parameterization.  The scheduled task event has two parameters: the logging user and the task name. The task name is parameter 1, so if a task is periodically created by an application, the task name should be the same each time. I’d hesitate to filter on the user name in this case, as a user name (such as a machine account) could just as easily show up in malicious activity, depending on the context under which it was generated.

Crafting and tuning that rule is simple.


You’ll likely have some noise to start, but this is hardly something that cannot be corrected over time. Just keep adding lines for “Parameter 1” and “Does not contain” with a value that is unique to your scheduled task.
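In management pack XML, that tuning pattern looks roughly like this. A sketch only — “Contoso Scan Task” is a hypothetical task name; substitute your own known-good tasks:

```xml
<!-- Sketch: alert on event 106 unless the task name (Param 1) matches a
     known application task. 'Contoso Scan Task' is hypothetical. -->
<Expression>
  <And>
    <Expression>
      <SimpleExpression>
        <ValueExpression>
          <XPathQuery Type="UnsignedInteger">EventDisplayNumber</XPathQuery>
        </ValueExpression>
        <Operator>Equal</Operator>
        <ValueExpression>
          <Value Type="UnsignedInteger">106</Value>
        </ValueExpression>
      </SimpleExpression>
    </Expression>
    <Expression>
      <Not>
        <Expression>
          <RegExExpression>
            <ValueExpression>
              <XPathQuery>Params/Param[1]</XPathQuery>
            </ValueExpression>
            <Operator>ContainsSubstring</Operator>
            <Pattern>Contoso Scan Task</Pattern>
          </RegExExpression>
        </Expression>
      </Not>
    </Expression>
  </And>
</Expression>
```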

Using SCOM to Detect Golden Tickets


For the three people that religiously read my blog, you know by now that I’ve been writing quite a bit on using SCOM to detect some of the anomalous events that are specific to an intruder in your environment.  This is the first, of what will hopefully be a few rules needed to detect the existence of the back doors that attackers leave behind so that they can re-enter your environment at will.

As a quick matter of definition, let’s boil this down to the basics.

  • In order to obtain a golden ticket, your environment had to have already been compromised:  An attacker gets one by stealing the key of the krbtgt account — the key used to sign Kerberos tickets.  In a Windows environment, this key only exists on a domain controller, which means that if the attacker acquired it, they had already compromised your DC.
  • Once the attacker has obtained this key, they can re-enter your environment at will, even if you’ve already evicted them: This is the big danger of golden tickets: the attacker doesn’t need to steal domain admin credentials again in order to re-enter the environment. They simply create themselves a ticket under any user context and give it domain admin rights. They can re-enter the environment and steal your AD database all over again, or grab whatever else they intended to come back and get.  It’s a forged krbtgt ticket, and as such, they don’t even need an account in your environment.
  • They have much longer expiration periods than a standard ticket:  The default expiration of a ticket is 10 hours in Active Directory. This is configurable, so it may be different in every environment, but the attacker wants a long window in which they can re-enter an environment. When mimikatz, for instance, is used to generate a golden ticket, the default expiration of that ticket is 10 years.
  • They rely on having a valid Kerberos TGT key: This is the kicker to protecting yourself from them, but as long as the key used to sign any forged ticket is valid, the attacker can still re-enter your environment.
  • The krbtgt account’s password does not change automatically: The password itself is known only to DCs, and if you manually change it, the password you specify won’t be the actual password of the key, but by default this password does not reset. Raising the domain functional level from 2003 to 2008 does a one-time reset of the password as part of that process. Beyond that, however, this password does not change. Truthfully, it should be changed periodically, but most organizations do not do it. I’d note that the new AD MP for SCOM (if you configure client monitoring) will generate alerts on the krbtgt password if it has not been reset recently.  Microsoft does have instructions online, and it has PowerShell scripts which make it easier.
  • Standard eviction can often leave you exposed to this back door:  Simply removing the attacker’s malware does not protect you.  This needs to be done in conjunction with an AD recovery. You will need to recover the AD environment (i.e. change all the passwords), remove the malware, and reset the krbtgt password… twice.  The reason for the double reset is that Active Directory holds a copy of two keys:  the current krbtgt key as well as the previous key.  Both the current and previous keys need to be scrubbed to invalidate any existing golden tickets.
  • This is not a vulnerability unique to windows:  The purpose of this blog is monitoring the Windows environment, but any Kerberos environment is vulnerable to this type of an attack.  It is the price one pays if their tier 0 is compromised.
  • Following the procedures below will not prevent an attacker from entering the environment:  All this will do is break existing golden tickets. If the attacker has not been completely evicted from your environment, they can and will get in.  They can then turn around and regenerate a new one.  There is definitely a cost benefit analysis that should be considered before doing a double reset of the krbtgt password as described below. You can certainly now detect a golden ticket if it is in use in your environment. However, as I’ve noted, you’ve already been compromised and have likely lost quite a bit. The golden ticket is primarily used to re-enter an environment without needing to steal credentials all over again.
  • The double reset procedure (described below) is part of an eviction process. Good practice would involve periodically resetting the krbtgt account. A double reset will likely generate noise, but periodic reset (say once every 90 days) should generate no noise unless someone is doing something that they should not.

This brings us to the detection stage.  The easiest way to detect if one is in your environment is if it is used after you’ve done the double reset of the Kerberos password. This will generate an event ID of 4769 on a domain controller with a failure code of 0x1F.  Creating a rule in SCOM for this event is very easy to do (see screenshot below), but keeping it from generating noise is the part that needs to be planned out.


The reason for this is that plenty of legitimate tickets can generate 0x1F failures if the double reset happens too quickly, and our goal is not to be concerned about the legitimate tickets, but to respond when we find a ticket that is a golden ticket. As such, you need to do a Kerberos reset very carefully if you want to detect the bad guy.  By default, our 4769 rule is not going to generate any failures in a normal operating environment.  Once we start with the password resets though, this can get very noisy if not done carefully as it will break all existing tickets until new ones are obtained.

  • Step 1:  Obtain the default ticket lifetime from your Kerberos policy.  This same policy will give you the clock skew information (more on this in a bit). Again, by default this is 10 hours, but you may have a GPO that sets a different time frame, and you NEED to know that timeframe. 
  • Step 2:  Force all passwords to be changed and do the first krbtgt password reset. DON’T DO THIS UNTIL YOU UNDERSTAND THE EFFECT ON YOUR ENVIRONMENT.  Remember that if an attacker has compromised accounts, they don’t need a golden ticket; the ticket just lets them back in at some point after the initial compromise. If you’re dealing with an active recovery, you’re going to want to also remove internet access along with any other means the attacker may have to enter the environment.  Otherwise, they could potentially generate a new key during the waiting period. In this scenario, you may want to simply do a quick double reset, but understand that every legitimate ticket will then fail and generate a 4769 with a 0x1F failure.  That will generate a lot of noise. The purpose of golden ticket monitoring, however, is to ensure that the bad guy doesn’t re-enter at some point down the road.  If you do a rapid reset, I would not have this rule turned on.  You can turn the rule on after this has happened and you’ve waited enough time for existing tickets to regenerate.
  • Step 3:  Wait, but not long.  A couple things need to happen before this rule will work properly and not generate any noise. 
    • Active Directory needs to replicate across all Domain Controllers.
    • Once replication is complete, you need to wait for tickets signed with the old key to get new tickets issued with the new key.  You obtained that value in step 1.
    • You need to also account for time skew. This is a smaller waiting period, maybe 15 minutes or so, but this can come into play.  The default clock skew is 5 minutes, so waiting twice that will account for potential clock skew issues.
    • Please note that at this step, you are still vulnerable to a golden ticket attack.
  • Step 4:  Reset the krbtgt password again.  Remember that an attacker created a ticket that may not expire for years. They are hopefully not active in your environment when you perform this process.  Also, you need to remember that if they still have your existing credentials, they don’t need a golden ticket.  If you are not careful in following the steps, the only thing accomplished is giving yourself a false sense of security.   After this is done properly, when they attempt to use the golden ticket, the rule you created will generate an alert when a forged ticket is used.
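The reset itself (steps 2 and 4 above) can be sketched in PowerShell. This is a hedged sketch only; the PowerShell scripts Microsoft publishes for this are the safer route, since they also verify replication for you:

```powershell
# Sketch of a single krbtgt reset, run as a Tier 0 admin from a machine with
# the ActiveDirectory module. The DC discards the password you supply and
# derives a new random key, so the value here just needs to be long and random.
Import-Module ActiveDirectory
$bytes = New-Object byte[] 32
[System.Security.Cryptography.RandomNumberGenerator]::Create().GetBytes($bytes)
$newPw = ConvertTo-SecureString ([Convert]::ToBase64String($bytes)) `
    -AsPlainText -Force
Set-ADAccountPassword -Identity krbtgt -Reset -NewPassword $newPw
# Then WAIT: full AD replication plus the maximum ticket lifetime plus clock
# skew (step 3) before running the reset a second time.
```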

Unfortunately, what this is telling us is that your environment was compromised at some point. If you’re doing this because of an active recovery, in that sense you already know it. However, if you’ve done this as a preemptive means of removing an attacker, you now have further investigation to do to see what was stolen previously.

Disclaimer:  I am not an Active Directory expert, and while I've done my fair share of AD administration, this is not something I've personally done, and I would not recommend taking it lightly. I strongly recommend that something like this be done under the watchful eye of someone who knows AD far better than a standard administrator; you will want to use a trusted advisor. Microsoft does provide instructions on how to do this. Following the procedure carefully should mitigate most of the associated risks, and you will of course want to validate that your backups work beforehand. I also recommend reading up on all of this and thoroughly discussing it with Microsoft support or someone who has performed the procedure before.

As you can see, this will help you detect golden tickets, but it is not exactly a non-invasive solution. Solutions such as Advanced Threat Analytics can actively detect golden tickets without the need to reset the krbtgt password; however, that does not mean it would be wise to ignore this as a component of your security practice. My next step will be to see whether SCOM can detect this in a non-invasive manner. I'm not sure that is possible, but for now, this will work.

I’m working on a management pack that contains these monitors along with additional security rules and other security-related items that SCOM can provide. My only test environment is my lab, which is hardly a production-grade environment, so I do welcome feedback. While I can only support this to the best of my ability, I can provide it if you are interested in trying it out. My main goal is to keep the noise down to a minimum so that each alert is actionable. That is not always easy to do, and trying this out in other environments will go a long way toward getting it to that point. If you are interested in testing, please hit me up on LinkedIn or have your TAM reach out to me internally.

Using SCOM to Detect WDigest Enumeration

In a recent conversation with colleague Jessica Payne, it was noted that one of the most common forms of credential theft today involves using exposed WDigest credentials. WDigest, while not commonly used today, is still enabled by default, in large part because of legacy applications that rely on it. While this was fine back in the Windows 2000/2003 days, it is one of many protocols tied to legacy applications that is no longer secure by today’s standards.

Exposing WDigest credentials is rather easy. Kurt Falde wrote a nice article showing how simple this attack can be, and in my own lab I observed the same behavior: older operating systems will show clear-text passwords. If you can expose a clear-text password, there’s no need to do PtH, PtT, or really any other attack. Fixing this involves a hotfix (KB2871997) along with a registry key: the value “UseLogonCredential” must be created and set to 0 under the HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\WDigest key. This effectively turns off the ability to expose WDigest credentials in clear text, but it can also break legacy applications, so before you make this change, you should test it and ideally fix any older applications.
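The compliance check itself is simple to express. As a minimal, platform-independent sketch, the function below inspects the text of a registry export of the WDigest key (e.g. produced with `reg export`; the filename and the offline-export approach are assumptions for illustration — on the host itself you would query the key directly). It treats a missing value as non-compliant, because on older systems the default (value absent) leaves WDigest able to cache clear-text credentials.

```python
import re

# Expected line in a .reg export of the WDigest key, e.g. from:
#   reg export HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\WDigest wdigest.reg
VALUE_RE = re.compile(r'"UseLogonCredential"=dword:([0-9a-fA-F]{8})')

def wdigest_is_hardened(reg_export_text):
    """Return True only if UseLogonCredential exists and is set to 0.

    A missing value counts as non-compliant: absence means WDigest
    may still cache clear-text credentials on older systems.
    """
    match = VALUE_RE.search(reg_export_text)
    if match is None:
        return False  # value deleted or never created
    return int(match.group(1), 16) == 0

print(wdigest_is_hardened('"UseLogonCredential"=dword:00000000'))  # True
print(wdigest_is_hardened('"UseLogonCredential"=dword:00000001'))  # False
print(wdigest_is_hardened(''))                                     # False
```

Note that this encodes exactly the two failure modes the SCOM monitors below need to catch: the value being changed away from 0, and the value being deleted outright.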

Now, that said, this is where SCOM comes in. Because of the nature of this type of authentication, that registry key is very important, and an attacker could potentially change its value or delete it altogether. Fortunately, registry monitoring is something SCOM can do natively, and two monitors are needed. The first monitors for the existence of the key; Kevin Holman detailed how to do that here. If the registry key is not present, your server will show as unhealthy.

The second monitors the value of the same registry key; again, Kevin has shown us how to do that. An attacker could potentially change the value of the key to re-enable exposing WDigest credentials.

As for logging, I unfortunately wasn’t able to find any sort of breadcrumb left behind when testing, which means we cannot use SCOM to detect WDigest enumeration as it occurs. We can, however, use a GPO to set these keys and have SCOM monitor them for changes.


Using SCOM to Detect Pass the Ticket Attacks

Last month, I wrote a two-part series on using SCOM to detect pass the hash attacks. I’ve decided to take some time to focus on pass the ticket attacks. There isn’t a whole lot of difference between the two attack methods. Both require administrative rights on the machine (and let’s face it, that is easy for an attacker to get), and both essentially mine the LSA for authentication information. The main difference is that PtH steals NTLM hashes and uses them to authenticate, while PtT steals Kerberos tickets, which can eventually be used to forge your own tickets as if you were the KDC, making you an admin indefinitely.

It is well known that Kerberos is more secure than NTLM, in large part because tickets expire every few hours; the default is 10 hours, but that is configurable via GPO. Passwords, on the other hand, have a much longer expiration period, and as such, attackers much prefer mining credentials, either via PtH attacks or WDigest enumeration. That said, an attacker can still enumerate and use Kerberos tickets once they have obtained access to your system.

In this experiment, my unwitting domain administrator is logged on to a machine doing work while an attacker sits silently by in another session. From the attacker’s administrative session, I simply use the mimikatz feature to enumerate and export all tickets. It wasn’t hard to find the domain admin, see below:


Mimikatz was even kind enough to save those tickets, just in case I needed them, along with every other ticket in use by the system.  You can see below the four tickets that my administrator is currently using.


Now that the attacker has your tickets, he or she can move them to another machine and inject them into a session there, or simply inject a ticket into his or her current session. That’s not very hard to do:


Last but not least, we simply access what we want. In this case, I opened Explorer and connected to the C$ share of a server that I didn’t have access to. I unfortunately did not find any kind of log information on the machine from which I was transferring credentials. I did, however, find this little nugget on the machine I accessed.


The good news is that this is the same event I discovered when performing a pass the hash attack; my SCOM monitor for pass the hash was in place, and it flagged immediately. The bad news is that the credential swap rule I configured originally did not fire. That particular rule has been very reliable, in that it has only flagged when I used mimikatz to elevate credentials in my environment, but it doesn’t seem to work with pass the ticket.

My next attempt was to do the same on a domain controller. Again, I had no problems, as I generated the exact same event on the DC. Note that I had to turn this particular rule off for DCs, as this is somewhat normal behavior there and was generating too much noise in my environment. What is a bit less normal, however, is the IP address. My lab doesn’t have the segmented VLANs you find in most production environments, so I cannot demonstrate filtering this rule by IP address. It will require some customization unique to your specific environment, but in terms of raising a red flag about a possible intruder, I think it is well worth it.


The key here is parameter 19. Instead of defining the condition as “the IP address does not equal the IP address of the domain controller,” you could create a “does not match wildcard” condition. The # sign represents any digit, so if you have a server VLAN of 192.168.3.x, you could represent it as 192.168.3.###. Note that event logs drop leading zeros, so you may need to use an Or group and create additional conditions to get it working properly. With this condition in place, I’ve excluded the VLAN that this server resides on. Since an attacker will usually connect from a different network segment (they do not control where the victim system resides), you could also create additional rules targeting access from specific IP addresses.
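The logic behind that wildcard condition can be sketched outside SCOM as well. The snippet below expresses the same idea as the 192.168.3.### exclusion — flag any parameter 19 source address that falls outside the server’s own VLAN — using a subnet check instead of digit wildcards, which sidesteps the dropped-leading-zeros problem entirely. The VLAN value is an assumption for illustration; substitute your own.

```python
import ipaddress

# Illustrative server VLAN, matching the 192.168.3.### example above.
SERVER_VLAN = ipaddress.ip_network("192.168.3.0/24")

def is_suspicious_source(param19):
    """Return True when the source address in parameter 19 falls
    outside the VLAN this server lives on.

    Event logs sometimes record IPv4-mapped addresses with a
    '::ffff:' prefix, so normalize that first.
    """
    addr = param19.strip().removeprefix("::ffff:")
    try:
        ip = ipaddress.ip_address(addr)
    except ValueError:
        return True  # an unparsable source address is worth a look
    return ip not in SERVER_VLAN

print(is_suspicious_source("192.168.3.42"))        # False: same VLAN
print(is_suspicious_source("10.0.0.15"))           # True: different segment
print(is_suspicious_source("::ffff:192.168.3.7"))  # False
```

A subnet test like this is also easier to extend than a wildcard string when your “normal” sources span several ranges: you just check the address against a list of networks rather than building an Or group of digit patterns.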
