Security Monitoring 1.7.x is up

There isn’t much to this year’s update. I didn’t get a ton of feature requests, but I did get a couple and built them in. This is the change log.

  • Updated the Local Admin Change rule to account for GPO-enforced local admin settings.
  • Fixed a couple of alert replacement bugs.
  • Added more override options for some PowerShell rules.
  • Updated the Log Clearing alerts to allow for a user account override.
  • Added exclusions to PowerShell logging for an Azure path as well as the SCOM 2019 default path.
  • Fixed a bug in the alert description for the PowerShell running in memory rule.
  • Added a rule for suspicious user logons.
  • Added an exclusion for WindowsAzureNetAgent to the service creation on DC rule.

Also worth noting that I’ve moved all content off of the TechNet Gallery and onto GitHub. I’m not a GitHub expert by any means, so I’m still figuring out pull requests and the fun stuff associated with that, but this could eventually become a community project with the right volunteers. Here is a link to both the previous and current content.

Security Monitoring: Using SCOM to capture Suspicious User Activity

This is an extension of a previous rule I wrote that uses SCOM to track executables being run in user-writeable locations. The concept here is similar, and it tracks another common attacker behavior. Once attackers have compromised an account, they are going to execute a bunch of code. I’ve written rules tracking specific places in the OS where they like to operate, but they can also operate out of the user profile of the compromised account. Really, any place within that profile is a potential target, which makes this hard to track. As such, I’ve written a new rule for Suspicious User Activity. Much like the other 4688 events SCOM is tracking, this will generate an alert any time a .ps1, .psm1, or .exe is run from a user context.

Now there’s a downside to this one. It has the potential to be noisy. I know personally that I have never had a problem running PowerShell scripts off of my desktop or some location in my user share. A more organized person might do that differently, but I’m kind of lazy like that, and I’m not alone either I suspect. What that means is some admins doing normal activity will likely trigger it. I’ve made it overridable for that reason, and it’s matching the command line parameter, so really anything in the path can be overridden. I’d be careful with this obviously, as you can exclude by say a user name, entire script path, or script name. Doing something such as a user name would effectively mean that if Joe Admin’s account is compromised, you’d never know… so some planning might be wise. You could potentially exclude the path that a user uses, or just turn it off for a specific server if that’s the issue at hand. Where you should be concerned is if you see an alert from say a service account or something like that… since those accounts shouldn’t be executing anything out of their user profile.
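If you want a feel for how noisy this might be in your environment before turning the rule on, you can roughly approximate its logic against the Security log yourself. This is a sketch, not the rule’s actual filter, and it assumes process creation auditing (event 4688) is enabled:

```powershell
# Pull recent process-creation events and keep the ones where the process
# or script sits under a user profile - roughly what the rule alerts on.
Get-WinEvent -FilterHashtable @{ LogName = 'Security'; Id = 4688 } -MaxEvents 500 |
    Where-Object { $_.Message -match '\\Users\\[^\\]+\\.+\.(ps1|psm1|exe)' } |
    Select-Object TimeCreated -First 20
```

If that returns a wall of hits from your admins’ day-to-day work, plan your overrides before enabling the rule broadly.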

Security Monitoring: Update to Log Clearing Rules

A customer brought this to my attention: there are tools out there that will back up logs and then clear them as needed, and that generates unwanted noise every time the automated tool clears a log. As such, I’ve re-written the rule to allow for an account-based override. Here’s how it works.

The original rule has been disabled. It’s still there if you want to enable it for any reason; I haven’t (at least as of now) pulled it out of the XML. I’ve created and enabled a new rule that does the same thing, but with an additional expression looking for a user account, which can be overridden.

image

In the screenshot above, you can override with the specific service account that is being used to clear the logs.

This will also apply to the rule looking for the system log being cleared.
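For context, these rules key off the log-cleared events themselves, and the account recorded in the event is what the new override matches. A quick way to look at the raw events in a lab:

```powershell
# 1102 is written when the Security log is cleared; 104 when the System
# log is cleared. The user captured in each event is what you'd exclude
# via the account-based override.
Get-WinEvent -FilterHashtable @{ LogName = 'Security'; Id = 1102 } -MaxEvents 5 |
    Format-List TimeCreated, Message
Get-WinEvent -FilterHashtable @{ LogName = 'System'; Id = 104 } -MaxEvents 5 |
    Format-List TimeCreated, Message
```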

This will be in the May update to Security Monitoring.

Offline File Share Updating for Windows Defender

I ran into an interesting problem that I ended up spending way more time troubleshooting than I should have, in large part because our documentation is unfortunately incomplete. The premise is fairly simple: you have a disconnected network that requires anti-virus definitions to be updated from a file share instead of Windows Update, because the network is disconnected. It’s not a common scenario, but it’s not unheard of either. Most of our documentation on the matter can be found here, and per the doc, we need to specify the following GPO: Define Shares for Downloading Security Intelligence Updates.

The explanation on the GPO seems to agree as well, right?

image

Wrong. If all you do is set this and forget it, you won’t get updates. I did a lot of digging around and found a couple of threads that touched on the issue, but not with a complete solution.

You can find them here and here.

First, what they got right. It’s not enough to simply create a share. You also need specific folders for each processor architecture: a 64-bit system looks for an x64 folder under the share, 32-bit requires an x86 folder, and ARM requires an ARM folder. Defender checks the processor architecture of the system being updated, contacts the share, and looks for the folder matching that architecture. Your definition file needs to be in that folder. Troubleshooting this part was more painful than I’d have liked. The log files are useless; you need Process Monitor. For the record, to troubleshoot this, perform the following steps:

  • Download and install Process Monitor.
  • From the File menu, start a capture.
  • From PowerShell, run the following: Update-MpSignature -UpdateSource FileShares. It’s worth noting you’ll probably get an error here; we continued to see one even after fixing the issue, and I don’t have an explanation for that as it presently stands.
  • Go back to Process Monitor and stop the capture from the File menu.
  • Search Process Monitor for your file share in UNC format (i.e. \\servername\share).
  • Your first hit should be the process attempting to access the share. You can then filter by that process’s PID if you choose; it helps limit the noise.
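For reference, the share layout Defender expects can be stood up with a few lines of PowerShell. This is a sketch; the server and share names are placeholders:

```powershell
# Build the architecture subfolders Defender looks for under the share.
$root = '\\fileserver\DefenderDefs'   # placeholder UNC path - use your own share
foreach ($arch in 'x64', 'x86', 'ARM') {
    New-Item -ItemType Directory -Path (Join-Path $root $arch) -Force | Out-Null
}
# Drop the downloaded definition package (e.g. mpam-fe.exe for x64) into
# the folder matching each client's architecture.
```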

Process Monitor did confirm the folder structure mentioned above, but in two separate environments we saw different errors: one was access denied, the other was file not found. Neither was too helpful, since the issue was not permissions or even a missing file. The commenters in the links above were correct about the folder structure, but their take on permissions did not fit what we saw. They are right that the computer account needs access; based on our troubleshooting, it appears the network service account of the system doing the update is what is actually used. Simply giving the Domain Computers group read access to our file share seemed to be enough, for what that’s worth, but even with that setting we saw access denied errors, while the security log on both the source and destination confirmed successful access.

The fix was one other piece of GPO that is not clearly specified:

Define the order of sources for downloading security intelligence updates.

image

What we ended up finding out is that the first GPO does nothing but tell Defender which file share to go to. This second GPO sets the fallback order, and by default, FileShares is not listed. Defender will check the registry and confirm the file share source, but its next step is to follow the source order, and since FileShares isn’t in that order by default, your Defender client will keep checking Windows Update even though you defined a file share for it to use. Clear as mud? I thought so. Our doc does mention the source order, but it doesn’t really explain how this works. That said, it’s a requirement: FileShares must be listed as an option here, or file share updates will not work.

Alternatively, you can use PowerShell if you’re in a one-off situation: Set-MpPreference -SignatureFallbackOrder "MicrosoftUpdateServer|InternalDefinitionUpdateServer|MMPC|FileShares". You need the pipe to specify multiple sources, and if your network is disconnected, you can certainly remove the inappropriate values.
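Either way, it’s worth confirming what the client actually ended up with. A quick sketch using the built-in Defender cmdlets:

```powershell
# Confirm the effective settings: FileShares must appear in the fallback
# order, and the share you defined should show up as a file share source.
(Get-MpPreference).SignatureFallbackOrder
(Get-MpPreference).SignatureDefinitionUpdateFileSharesSources

# Then force an update from the share to test end to end.
Update-MpSignature -UpdateSource FileShares
```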

Hopefully that helps.

Security Monitoring: Updating Local Account Monitoring for GPO Enforced Settings

It was brought to my attention that the local admin group monitoring rule I’ve written becomes incredibly noisy if GPO enforcement is used on local admin groups. Essentially, every time a machine applies the GPO, it fires off the 4732 and 4733 events being monitored, which can lead to thousands of alerts. As such, I’ve re-written the rule, though I’d note that it gets a bit tricky. The main issue revolves around how SCOM processes events: SCOM only sees the event XML, so matching on the friendly names won’t work. I’ve attached a couple of examples from my lab to show the difference.
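You can see the same difference outside of SCOM. As a quick sketch, pull one of the monitored events and compare the friendly rendering with the raw XML that SCOM actually evaluates:

```powershell
# Grab the most recent 4732 (member added to a security-enabled local group)
# and compare the two representations of the same event.
$evt = Get-WinEvent -FilterHashtable @{ LogName = 'Security'; Id = 4732 } -MaxEvents 1
$evt.Message   # friendly view: resolved account names
$evt.ToXml()   # XML view: SIDs, which is what the rule's filter sees
```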

This first screenshot is the friendly view. As you can see, it’s pretty straightforward. I used my admin account in this case to add a test account to the local Administrators group on my SCOM server.

image

The XML view shows something completely different.

image

As you can see from the screenshots, the XML view records the SID rather than the friendly name. I looked into a couple of different ways to reduce noise for this, but unfortunately, the only workable solution was to filter the rule on the user IDs recorded in the event. Since these are SIDs, you will need to obtain them from either ADSI Edit or the Attribute Editor in Active Directory Users and Computers. I’ve baked 5 SID-based overrides into this rule, which should hopefully be enough. It looks like this if you need to override it:

image

The easiest way to obtain the SID of the account(s) in question is the Attribute Editor in Active Directory Users and Computers. This requires Advanced Features to be turned on (it’s in the View menu; there should be a check box next to Advanced Features when it’s enabled).
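If you’d rather not click through the GUI, PowerShell can hand you the SID directly. A sketch, where CONTOSO\jadmin stands in for the account you want to exclude (the first line assumes the ActiveDirectory RSAT module is installed):

```powershell
# With the ActiveDirectory module installed:
(Get-ADUser 'jadmin').SID.Value

# Or with plain .NET, no extra modules required:
([System.Security.Principal.NTAccount]'CONTOSO\jadmin').Translate(
    [System.Security.Principal.SecurityIdentifier]).Value
```

Either value can be pasted straight into the SID-based overrides.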

It will look like this:

image

Please note: for any bugs and/or feature requests, reach out to me on LinkedIn.

Security Monitoring: Partnering with Easy Tune

Tune the Security MP in a fraction of the time

Good news! I have written a Tuning Pack for my Security Management Pack, which means you can tune the pack in a fraction of the time using Easy Tune from Cookdown. My Tuning Pack is live today on the Easy Tune Community Store.

What is Easy Tune?

Easy Tune is a new (and free) way of setting overrides to tune SCOM alerting. Traditionally, tuning a management pack is painful – it’s about 10 clicks to set a single override, and some management packs contain thousands of workflows you may want to tune. Multiply this problem by multiple groups and you can see how days can be spent tuning.

Easy Tune takes the headache out of setting up overrides by letting you set them quickly with Tuning Packs (which are essentially CSV files).

clip_image002

To get you started, there is a Community Store (a GitHub repo) containing community-curated Tuning Packs you can tune from directly. If you think the available Tuning Packs could be improved or added to, you can submit a PR to change overrides, or simply create your own Tuning Packs, either by copying one from the Community Store or by generating one from the management packs installed in your SCOM environment.

Tuning Packs contain “levels” which you can tune to. A level is basically a list of overrides stored in a column of a Tuning Pack’s CSV. All Tuning Packs, including ones you create yourself, automatically get the levels “Discovery Only” and “MP Defaults” (Easy Tune can work these out from the source MPs), alongside any levels you define with your own overrides. The built-in levels are great for understanding what the MP author intended a value to be, or for turning off every workflow that isn’t a discovery (which reduces SCOM’s workload and lets you tune back up on a per-group basis as needed).

clip_image004

One of the great things about Tuning Packs is their simplicity: they are just CSV files, which is great when it comes to reviewing overrides with other teams or updating override values. They can easily be reviewed with domain experts to agree on the desired tuning without looking at SCOM at all (let’s face it, the SCOM console is not a thing of beauty).

Once you have reached alert nirvana with Easy Tune, there is a config drift tool built in to shine a light on where your effective overrides have drifted from those you set, allowing you to keep your tuning in tip-top shape.

The folks at Cookdown give all of this away for free. I think it’s an awesome tool and a must for all SCOM admins.

Easy Tune PRO

Cookdown sells a PRO version of Easy Tune too – it adds some excellent additional features:

· Time of day alert tuning – allows you to specify different override values for specific times/days. Very useful for ramping up monitoring for the 9am Monday morning logon storm where you want to make sure everything is working as it should or for disabling monitoring during the nightly backup job.

· Automation capabilities via PowerShell – allows you to script tuning and solve any unique tuning issues which aren’t supported out of the box.

· Rich override config drift detection – config drift is shown alongside each Tuning Pack wherever the effective monitoring is not what you set with Easy Tune, with tooling to see where the effective value is coming from to help you resolve conflicts.

I haven’t had a chance to play with the PRO features, but they look really cool (especially time of day alert tuning!). You can read more about them here.

Error Integrating SCOM and SCORCH

I’m not an Orchestrator guy by any means, but I do have to pretend to be one on occasion when the customer asks. I ran into an interesting issue during the initial connection of SCOM to Orchestrator that turned out to be rather painful to troubleshoot. We had the SCOM console deployed on the Runbook server and deployed/registered the Integration pack for SCOM 2016. The console itself worked fine, but when testing the connection between Orchestrator and SCOM, we kept getting the following error:

Missing sdk binaries. Install System Center 2016 Operations Manager Operations Console first

The text might be a bit off, as I don’t have a screenshot, but that was effectively the gist of it. I did a lot of scouring online and really didn’t find much. After digging around internally, I did learn a few things worth sharing on this blog:

  1. Order is important.
  2. The SCOM console isn’t strictly necessary; it depends on which activities you use.
  3. Those DLLs need to be present in the assembly folder, but this info is also not easily available.

The first thing you’ll want to do to troubleshoot is make a small tweak to the registry:

Under HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Fusion, create a DWORD named DisableCacheViewer and set its value to 1.
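If you prefer to script it, the same tweak as a PowerShell one-liner:

```powershell
# Create the DisableCacheViewer DWORD under the Fusion key and set it to 1.
New-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\Fusion' `
    -Name 'DisableCacheViewer' -PropertyType DWord -Value 1 -Force
```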

clip_image002

What this does is change how the assemblies in the C:\Windows\assembly folder are displayed. If you navigate there, what you’re supposed to see is in the screenshot below. Each of the highlighted folders represents one of the SCOM SDK DLLs; drill into any of them and you’ll see another folder indicating a version number, and inside that a copy of the DLL.

clip_image001

That’s all pretty straightforward. In my case, the issue was installation order: the SCOM console had been installed first. The instructions say nothing about it, but it apparently matters, so the fix is rather easy. Uninstall the SCOM console, uninstall and unregister the integration pack, and then reboot.

Once it’s back up, deploy and register the IP. There’s no need for the console. In my case, that was enough, though sometimes you’ll have to go more drastic. One option is to try this script (obviously changing the path as needed):

# Load the GAC publishing API, then install each SCOM SDK DLL into the GAC.
[System.Reflection.Assembly]::Load("System.EnterpriseServices, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a")

$publish = New-Object System.EnterpriseServices.Internal.Publish

$publish.GacInstall("C:\temp\OPSMGR\SDK Binaries\Microsoft.EnterpriseManagement.Core.dll")

$publish.GacInstall("C:\temp\OPSMGR\SDK Binaries\Microsoft.EnterpriseManagement.OperationsManager.dll")

$publish.GacInstall("C:\temp\OPSMGR\SDK Binaries\Microsoft.EnterpriseManagement.Runtime.dll")
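After the script runs, you can sanity-check that the assemblies actually landed in the GAC. A sketch that simply lists the machine’s assembly stores:

```powershell
# Confirm the SCOM SDK assemblies are present by listing both the legacy
# and .NET 4.x assembly stores for the EnterpriseManagement DLLs.
Get-ChildItem C:\Windows\assembly, C:\Windows\Microsoft.NET\assembly -Recurse `
    -Filter 'Microsoft.EnterpriseManagement.*.dll' -ErrorAction SilentlyContinue |
    Select-Object FullName
```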