Security Monitoring 1.7.x is up

There isn’t much to this year’s update. I didn’t get a ton of feature requests, but I did get a couple and built them in. This is the change log.

  • Updated the Local Admin Change rule to account for GPO-enforced local admin settings.
  • Fixed a couple of alert replacement bugs.
  • Added more override options for some PowerShell rules.
  • Updated Log Clearing alerts to allow for a user account override.
  • Added an exclusion to PowerShell logging for an Azure path as well as SCOM 2019 default path.
  • Fixed a bug with the alert description for the PowerShell running in memory rule.
  • Added rule for suspicious user logons.
  • Added an exclusion for WindowsAzureNetAgent on the service creation on DC rule.

Also worth noting that I’ve moved all content off of the TechNet Galleries and onto GitHub. I’m not a GitHub expert by any means, so I’m still figuring out pull requests and the fun stuff associated with that, but this could eventually become a community project with the right volunteers. Here is a link to both the previous and current content.

Security Monitoring: Using SCOM to capture Suspicious User Activity

This is an extension of a previous rule I wrote that uses SCOM to track executables being run in user-writeable locations. The concept behind this is similar, and it tracks another behavior of an attacker. Once they’ve compromised an account, they are going to execute a bunch of code. I’ve already written rules tracking specific places in the OS where attackers like to operate, but they can also operate out of the user profile of a compromised account. Really, any place within that profile is a potential target, which makes this hard to track. As such, I’ve written a new rule for Suspicious User Activity. Much like the other 4688 events SCOM is tracking, this will generate an alert any time a .ps1, .psm1, or .exe file is run from a user context…
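To picture what this rule keys on, here is a rough PowerShell sketch of the same kind of check, assuming 4688 process-creation auditing (with command-line logging) is enabled. The filtering logic and field handling here are illustrative only, not the management pack’s actual implementation:

```powershell
# Sketch: pull recent 4688 (process creation) events and flag any where the
# new process or command line points at a script/executable in a user profile.
$events = Get-WinEvent -FilterHashtable @{ LogName = 'Security'; Id = 4688 } -MaxEvents 500

$suspicious = $events | Where-Object {
    # Match .ps1/.psm1/.exe paths under \Users\ (i.e. a user profile)
    $_.Message -match '\\Users\\.*\.(ps1|psm1|exe)\b'
}

$suspicious | ForEach-Object {
    [PSCustomObject]@{
        Time    = $_.TimeCreated
        Details = (($_.Message -split "`n") | Where-Object { $_ -match '\\Users\\' }) -join '; '
    }
}
```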

Now there’s a downside to this one: it has the potential to be noisy. I know personally that I have never had a problem running PowerShell scripts off of my desktop or some location in my user share. A more organized person might do that differently, but I’m kind of lazy like that, and I suspect I’m not alone. That means some admins doing normal activity will likely trigger it. I’ve made it overridable for that reason, and since it matches on the command-line parameter, really anything in the path can be overridden. I’d be careful with this, obviously. You can exclude by, say, a user name, an entire script path, or a script name. Excluding a user name would effectively mean that if Joe Admin’s account is compromised, you’d never know… so some planning might be wise. You could potentially exclude the path that a user uses, or just turn the rule off for a specific server if that’s the issue at hand. Where you should be concerned is if you see an alert from, say, a service account… since those accounts shouldn’t be executing anything out of their user profile.

Security Monitoring: Update to Log Clearing Rules

A customer brought this to my attention: there are tools out there that will back up logs and clear them as needed, and this generates unwanted noise every time an automated tool clears the log. As such, I’ve rewritten the rule to allow for an account-based override. Here’s how it works.

The original rule has been disabled. It’s still there if you want to enable it for any reason, but I haven’t (at least as of now) pulled it out of the XML. I’ve created and enabled a new rule that does the same thing, but this one has an additional statement looking for a user account, which can be overridden.
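If you’re not sure which account your backup tool uses to clear logs, the underlying event records it. A quick sketch for checking (event ID 1102 is logged when the Security log is cleared, and its message names the account):

```powershell
# Sketch: list recent Security log clears and who performed them.
# Event ID 1102 ("The audit log was cleared") includes the subject account.
Get-WinEvent -FilterHashtable @{ LogName = 'Security'; Id = 1102 } -MaxEvents 10 |
    Select-Object TimeCreated, Message
```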


In the screenshot above, you can override with the specific service account that is being used to clear the logs.

This will also apply to the rule looking for the system log being cleared.

This will be in the May update to Security Monitoring.

Offline File Share Updating for Windows Defender

I ran into an interesting problem that I ended up spending way more time troubleshooting than I should have needed to, in large part because our documentation is unfortunately incomplete. The premise is fairly simple: you have a disconnected network that requires anti-virus definitions to be updated from a file share, as opposed to Windows Update, because the network is disconnected. I know it’s not a common scenario, but it’s not unheard of either, and sadly our documentation is not the best here. Most of our documentation on the matter can be found here, and per the doc, we need to specify the following GPO: Define Shares for Downloading Security Intelligence Updates.

The explanation on the GPO seems to agree as well, right?


Wrong. If all you do is set this and forget it, you won’t be getting updates. I did a lot of digging around and found a couple of threads that touched on the issue, but not with a complete solution.

You can find them here and here.

First, what they got right: it’s not enough to simply create a share. You will also need specific folders for processor architecture. If you have a 64-bit processor, you need an x64 folder under the share. 32-bit requires an x86 folder, while ARM architecture requires an ARM folder. Defender checks the processor architecture of the system being updated, then contacts the share and looks for the folder associated with that architecture. Your definition file needs to be in that folder. Troubleshooting this part was more painful than I’d have liked. The log files are useless; you need Process Monitor. For the record, to troubleshoot this, perform the following steps:
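As a sketch, the expected layout can be stood up like this, assuming a share named Defender backed by C:\DefenderUpdates (both names are mine, not required values):

```powershell
# Create the per-architecture folders Defender looks for under the share.
New-Item -ItemType Directory -Force -Path `
    'C:\DefenderUpdates\x64', 'C:\DefenderUpdates\x86', 'C:\DefenderUpdates\ARM'

# Publish the folder as \\servername\Defender. The definition package goes
# inside the matching architecture folder (e.g. the x64 package in \x64).
New-SmbShare -Name 'Defender' -Path 'C:\DefenderUpdates'
```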

  • Download and install Process Monitor.
  • Go to the file menu > start a capture.
  • From PowerShell, run the following: Update-MpSignature -UpdateSource FileShares. It’s worth noting you’ll probably get an error here. We continued to see an error with this even after fixing the issue, and I don’t have an explanation for that as it presently stands.
  • Go back to Process Monitor, return to the file menu and deselect “start a capture.”
  • Do a search in Process Monitor for your file share in UNC format (i.e. \\servername\share)
  • Your first hit should be the process that is attempting to access the share. You can then add a filter on the PID of that process if you so choose; it does help limit the noise.

Process Monitor did confirm the folder structure mentioned above, but in two separate environments we saw different errors: one was access denied, the other was file not found. Neither was too helpful, as the issue was not permissions or even a missing file. The commenters in the links above were correct in noting the folder structure, but their perspective on permissions did not fit what we saw. They are right that the computer account needs access. It appears, based on our troubleshooting, that the network service account of the system doing the update is what is being used. Simply giving the Domain Computers group read access to our file share seemed to be enough, for what that’s worth, but even with that setting we saw access denied errors. The security log on both the source and destination confirmed successful access, however.
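If you want to set the permissions side from PowerShell, here is a sketch along the lines of what worked for us (the share name, path, and domain are illustrative):

```powershell
# Give Domain Computers read access on the share, since the update runs
# in the computer's (network service) context rather than a user's.
Grant-SmbShareAccess -Name 'Defender' -AccountName 'CONTOSO\Domain Computers' `
    -AccessRight Read -Force

# NTFS read permissions on the underlying folder are needed as well.
icacls 'C:\DefenderUpdates' /grant 'CONTOSO\Domain Computers:(OI)(CI)R'
```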

The fix was one other piece of GPO that is not clearly specified:

Define the order of sources for downloading security intelligence updates.


What we ended up finding out is that the first GPO does nothing but tell Defender which file share to go to. This second GPO sets a fallback order, and by default, FileShares is not listed. Defender will check the registry and confirm the file share source, but its next step in updating is to follow the source order, and since FileShares is not in that order by default, your Defender client will continue to check Windows Update, even though you defined a file share for it to use. Clear as mud? I thought so. Our doc does mention the source order, but it really doesn’t explain how this works. That said, it’s a requirement: FileShares must be listed as an option here. If it is not, file share updates will not work.

Alternatively, you can use PowerShell if you’re in a one-off situation: Set-MpPreference -SignatureFallbackOrder "MicrosoftUpdateServer|InternalDefinitionUpdateServer|MMPC|FileShares". You need the pipe to specify multiple sources, and if your network is disconnected, you can certainly remove the inappropriate values.
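Either way, you can sanity-check the effective settings afterward. A quick sketch:

```powershell
# Confirm that FileShares made it into the fallback order and that the
# share source from the first GPO is actually registered.
$prefs = Get-MpPreference
$prefs.SignatureFallbackOrder                      # should contain "FileShares"
$prefs.SignatureDefinitionUpdateFileSharesSources  # the UNC path(s) you defined
```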

Hopefully that helps.

Security Monitoring Partnering with Easy Tune

Tune the Security MP in a fraction of the time

Good news! I have written a Tuning Pack for my Security Management Pack, which means you can tune the pack in a fraction of the time with Easy Tune from Cookdown. My Tuning Pack is live today on the Easy Tune Community Store.

What is Easy Tune?

Easy Tune is a new (and free) way of setting overrides to tune SCOM alerting. Traditionally, tuning a management pack is painful – it’s about 10 clicks to set a single override, and some management packs contain thousands of workflows you may want to tune. Multiply this problem across multiple groups and you can see how days can be spent tuning.

Easy Tune takes the headache out of setting overrides by allowing you to set them quickly with Tuning Packs (which are essentially CSV files).


To get you started, there is a Community Store (a GitHub repo) containing community-curated Tuning Packs which you can tune from directly. If you think the available Tuning Packs could be improved or added to, you can submit a PR to change overrides, or simply create your own Tuning Packs by copying one from the Community Store or creating one from the management packs installed in your SCOM environment.

Tuning Packs contain “levels” which you can tune to. A level is basically a list of overrides stored in a column of a Tuning Pack’s CSV. All Tuning Packs, including ones you create yourself, automatically get the levels “Discovery Only” and “MP Defaults” (Easy Tune can work these out from the source MPs automatically), and you can also specify your own overrides. The built-in levels are great for understanding what the MP author intended a value to be, or for turning off all workflows which aren’t discoveries (which reduces SCOM’s workload and lets you tune up on a per-group basis as needed).


One of the great things about Tuning Packs is their simplicity – they are just CSV files, which is great when it comes to reviewing overrides with other teams or updating override values. They can easily be reviewed with domain experts to agree on desired tuning without looking at SCOM at all (let’s face it, the SCOM console is not a thing of beauty).

Once you have reached alert nirvana with Easy Tune, there is a config drift tool built in to shine a light on where your effective overrides have drifted from those you set, allowing you to keep your tuning in tip-top shape.

The folks at Cookdown give all of this away for free. I think it is an awesome tool that is a must-have for all SCOM admins.

Easy Tune PRO

Cookdown sell a PRO version of Easy Tune too – it adds some excellent additional features:

  • Time of day alert tuning – allows you to specify different override values for specific times/days. Very useful for ramping up monitoring for the 9am Monday morning logon storm, where you want to make sure everything is working as it should, or for disabling monitoring during the nightly backup job.
  • Automation capabilities via PowerShell – allows you to script tuning and solve any unique tuning issues you have which aren’t supported out of the box.
  • Rich override config drift detection – config drift is shown alongside each Tuning Pack where the effective monitoring is not what you set with Easy Tune, with tooling to see where the effective monitoring is set to help you resolve conflicts.

I haven’t had a chance to play with the PRO features, but they look really cool (especially time of day alert tuning!). You can read more about them here.

Error Integrating SCOM and SCORCH

I’m not an Orchestrator guy by any means, but I do have to pretend to be one on occasion when the customer asks. I ran into an interesting issue during the initial connection of SCOM to Orchestrator that turned out to be rather painful to troubleshoot. We had the SCOM console deployed on the Runbook server and deployed/registered the Integration pack for SCOM 2016. The console itself worked fine, but when testing the connection between Orchestrator and SCOM, we kept getting the following error:

Missing sdk binaries. Install System Center 2016 Operations Manager Operations Console first

The text might be a bit off, as I don’t have a screenshot, but that was effectively the gist of it. I did a lot of scouring online to find something, but I really didn’t find much. After digging around internally, I did learn a few things worth sharing on this blog:

  1. Order is important.
  2. The SCOM console isn’t necessarily required. That depends on which activities you use.
  3. Those DLLs need to be present in the assembly folder, but this info is also not easily available.

The first thing you’ll want to do to troubleshoot is make a small tweak to the registry:

Create a DWORD named "DisableCacheViewer" (without quotes) under "HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Fusion" and set its hexadecimal value to "1".
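If you’d rather script it, the same tweak in PowerShell (run elevated) might look like this sketch:

```powershell
# Create the DisableCacheViewer DWORD under the Fusion key and set it to 1.
New-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\Fusion' `
    -Name 'DisableCacheViewer' -PropertyType DWord -Value 1 -Force
```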


What this does is slightly reorganize the assemblies in the C:\Windows\assembly folder. If you navigate there, what you’re supposed to see is in the screenshot below. Each of the highlighted folders represents one of the SCOM SDK DLL files. If you drill into any of them, you’ll see another folder indicating a version number, and inside of that will be a copy of the DLL.


That’s all pretty straightforward. In my case, the issue was order: the SCOM console had been installed first. While the instructions say nothing about it, that apparently matters, so the fix is rather easy. Uninstall the SCOM console. Uninstall and unregister the integration pack… and then reboot.

Once it’s back up, deploy and register the IP. There’s no need for the console. In my case, that was enough, though sometimes you’ll have to go more drastic. One option is to try this script (obviously changing the paths as needed):

[System.Reflection.Assembly]::Load("System.EnterpriseServices, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a")

$publish = New-Object System.EnterpriseServices.Internal.Publish

$publish.GacInstall("C:\temp\OPSMGR\SDK Binaries\Microsoft.EnterpriseManagement.Core.dll")

$publish.GacInstall("C:\temp\OPSMGR\SDK Binaries\Microsoft.EnterpriseManagement.OperationsManager.dll")

$publish.GacInstall("C:\temp\OPSMGR\SDK Binaries\Microsoft.EnterpriseManagement.Runtime.dll")

Installing Microsoft Identity Manager 2016–Part 2

We addressed the prerequisites here. As you can see, there was quite a bit to accomplish before even starting on MIM. Now for the good stuff. There are essentially two components in play: the Synchronization Service and the portal, which together cover the bulk of the MIM install. It’s worth noting that plenty of mistakes can be made here, so think and plan this one out carefully.

To start, I’m going to make a couple notes:

1) Log on as the MIMInstall account. I mentioned this before, but there do seem to be some ties to the account that installs it. I recommend a generic install account here that is an admin on the server in question. You can disable it later once you’ve granted all the appropriate rights and whatnot, and simply re-enable it as needed.

2) Once you have the CD/ISO mounted, create a temp folder somewhere for logging information. You may need it. Launch an elevated command prompt and run the following from the synchronization service folder in the CD (I’m using c:\temp for my temp folder):

msiexec /i “Synchronization service.msi” /L*v c:\temp\MIM_SyncService_Install.log

That will create a log file in the c:\temp folder, which can be useful if you need to troubleshoot.

Click Next through the first wizard screen, accept the license agreement, and click Next again.

This screen is also pretty straightforward; click Next:


Here’s where you have hopefully already made a choice. I’m hosting my SQL Server locally, but if you aren’t, you need to change this.


The next screen wants the creds to your MIM Service Account. This is also pretty easy.


Next are the various groups you created previously (by the way, if you haven’t already done this, add the appropriate people to the Admins group):


You probably want to open firewall ports. If you aren’t using Windows Firewall, you’ll want to do this manually:


At this point, you can click Install. I’ll save you the boring screenshot, and the install shouldn’t take too long. That said, you’ll want to launch the Synchronization service once done. If it doesn’t start, you have a problem. Go back and figure this out, because trust me, if this isn’t working right, it makes the portal portion even harder. When it’s working, you should see this screen when you launch it without error:


If you haven’t done this already, you might want to get around to those aliases I mentioned in the first piece. You’re going to install the portal next, which sits on top of the SharePoint site collection you set up previously. You can still work off of the CD you mounted earlier, but you’ll need to navigate to the service and portal folder and run the following command:

msiexec /i “Service and Portal.msi” /L*v c:\temp\MIM_Service_Install.log

It’s the same concept: you’re putting a log in c:\temp so you can troubleshoot what’s going on. Like before, you can click Next through the welcome screen and accept the licensing agreement. You can decide at this point if you want to use PAM. I chose no here, as this is a lab. It is, however, something that’s highly encouraged, as it enables just-in-time administration.


Next, you identify a database server and database name:


If you have Exchange running locally or online, do something here. I don’t at this point, so I’m going to uncheck the top two boxes and set this to localhost:


In a prod environment, I’d have a CA doing a real certificate, for what that’s worth. Here, I’ll use a self-signed:


Next, you define that service account. Note the warning here about that email. It’s kind of important.


Depending on how you configured it, you may receive a warning about the account being insecure. Go back to the guide, figure that part out, and ensure you did it right. Next, you need the Management Agent account. The server name here is the server where the Synchronization Service was previously installed. I kept these all on the same machine, but I’m also running this in a lab with one user.


Side note, but if you get this warning, cancel the install and revisit issues with the Synchronization service:


If you don’t, you’ll move on to this screen. Enter the server name hosting the SharePoint Site Collection:


And then enter the site collection URL you created earlier:


Now you’ll be prompted for the password registration portal URL… note that this does need to have the http prefix; it’s cut off in my screenshot. And for some silly reason, MSFT has * in front of the URL in their example. Don’t do that. I tried it for fun. It doesn’t work.


Now for ports: like before, open them if you have the Windows Firewall on. If not, call your firewall person and have him/her do it:


And you thought we were done. Not even close. One more service account, this time your SSPR account, which I’d add will be entered again shortly. You also need the password registration URL (without the HTTP this time), and you might want to open ports. Note that I’m doing this on 80 because I don’t have a CA, but if you do, you should issue a web cert here and use 443.


If you use 80, you’ll get a security warning. Otherwise, you need to enter the server name hosting the MIM Service. I chose to keep this internal, but you may want this on an extranet. Choose accordingly:


And for grins, you get to put your SSPR in again, this time referencing the password reset portal:


If you’re not secure here (i.e., no HTTPS), you’ll get another warning; go back and set up HTTPS. Otherwise, click Next. That brings you to this screen, much like the one above. This time, though, you need to configure the password reset URL to go with the server hosting the MIM Service installed above:


Now you’re done (sort of). Click Install. If you have no errors, you can go on to the next part.