Security Monitoring: Updating Local Account Monitoring for GPO Enforced Settings

It was brought to my attention that the local admin group monitoring rule I’ve written becomes incredibly noisy if GPO enforcement is used on local admin groups. Essentially, every time a machine applies the GPO, it fires the 4732 and 4733 events that are being monitored, which can generate thousands of alerts. As such, I’ve rewritten the rule, though I’d note that it gets a bit tricky. The main issue revolves around how SCOM processes events: SCOM only processes the XML, so using the friendly names won’t work. I’ve attached a couple of examples from my lab to show the difference.

This first screenshot is the friendly view. As you can see, it’s pretty straightforward. I used my admin account in this case to add a test account to the local Administrators group on my SCOM server.

image

The XML view shows something completely different.

image
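If you want to look at the raw XML outside of the console, PowerShell can pull it for you. This is a minimal sketch, assuming you’re on a machine that has a 4732 in its Security log (run it elevated):

# Grab the most recent 4732 (member added to a security-enabled local group)
# and dump the raw XML that SCOM actually evaluates
$event = Get-WinEvent -FilterHashtable @{ LogName = 'Security'; Id = 4732 } -MaxEvents 1
$event.ToXml()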

As you can see from the screenshots, for whatever reason, the XML view records the SID rather than the friendly name. I looked into a couple of different ways to reduce noise for this, but unfortunately, the only workable solution is to filter the rule based on the user IDs recorded in the event. Since these are SIDs, we will need to obtain the SIDs from either ADSI Edit or from the Attribute Editor in Active Directory Users and Computers. I’ve baked five SID-based overrides into this rule, which should hopefully be enough. It looks like this if you need to override it:

image

The easiest method to obtain the SID of the account(s) in question is to use the Attribute Editor in Active Directory Users and Computers. This requires Advanced Features to be turned on (it’s in the View menu; there will be a check mark next to Advanced Features if it’s enabled).

It will look like this:

image
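Alternatively, if you’d rather skip the GUI entirely, the ActiveDirectory PowerShell module can hand you the SID directly. A quick sketch — the account name below is just a placeholder:

# Requires RSAT / the ActiveDirectory module
Import-Module ActiveDirectory
(Get-ADUser -Identity 'TestAccount').SID.Value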

For any bugs and/or feature requests, please reach out to me on LinkedIn.

Security Monitoring Partnering with Easy Tune

Tune the Security MP in a fraction of the time

Good news! I have written a Tuning Pack for my Security Management Pack, which means you can tune the pack in a fraction of the time with Easy Tune from Cookdown. My Tuning Pack is live today on the Easy Tune Community Store.

What is Easy Tune?

Easy Tune is a new (and free) way of setting overrides to tune SCOM alerting. Traditionally, tuning a management pack is painful: it’s about 10 clicks to set a single override, and some management packs contain thousands of workflows you may want to tune. Multiply this problem by multiple groups and you can see how days can be spent tuning.

Easy Tune takes the headache out of setting up overrides by allowing you to set them quickly with Tuning Packs (which are essentially CSV files).

clip_image002

To get you started, there is a Community Store (a GitHub repo) containing community-curated Tuning Packs which you can tune from directly. If you think the available Tuning Packs could be improved or added to, you can submit a PR to change overrides, or simply create your own Tuning Packs, either by copying one from the Community Store or by creating one from the management packs installed in your SCOM environment.

Tuning Packs contain “levels” which you can tune to. A level is basically a list of overrides stored in a column of a Tuning Pack’s CSV. All Tuning Packs, including ones you create yourself, automatically get the levels “Discovery Only” and “MP Defaults” (Easy Tune can work these out from the source MPs automatically), and you can also specify your own overrides. The built-in levels are great for understanding what the MP author intended a value to be, or for turning off all workflows which aren’t discoveries (which will reduce SCOM’s workload and allow you to tune up on a per-group basis as needed).

clip_image004

One of the great things about Tuning Packs is their simplicity. They are just CSV files, which is great when it comes to reviewing overrides with other teams or updating override values. They can easily be reviewed with domain experts to agree on the desired tuning without looking at SCOM at all (let’s face it, the SCOM console is not a thing of beauty).

Once you have reached alert nirvana with Easy Tune, there is a config drift tool built in to shine a light on where your effective overrides have drifted from those you set, allowing you to keep your tuning in tip-top shape.

The folks at Cookdown give all of this away for free. I think it is an awesome tool and a must-have for all SCOM admins.

Easy Tune PRO

Cookdown sells a PRO version of Easy Tune too; it adds some excellent additional features:

· Time-of-day alert tuning – allows you to specify different override values for specific times/days. Very useful for ramping up monitoring for the 9 a.m. Monday morning logon storm, where you want to make sure everything is working as it should, or for disabling monitoring during the nightly backup job.

· Automation capabilities via PowerShell – allows you to script tuning and solve any unique tuning problems which aren’t supported out of the box.

· Rich override config drift detection – config drift is shown alongside each Tuning Pack wherever the effective monitoring is not what you set with Easy Tune, along with tooling to see where the effective monitoring is actually set, helping you resolve conflicts.

I haven’t had a chance to play with the PRO features yet, but they look really cool (especially time-of-day alert tuning!). You can read more about them here.

Error Integrating SCOM and SCORCH

I’m not an Orchestrator guy by any means, but I do have to pretend to be one on occasion when the customer asks. I ran into an interesting issue during the initial connection of SCOM to Orchestrator that turned out to be rather painful to troubleshoot. We had the SCOM console deployed on the Runbook server and deployed/registered the Integration pack for SCOM 2016. The console itself worked fine, but when testing the connection between Orchestrator and SCOM, we kept getting the following error:

Missing sdk binaries. Install System Center 2016 Operations Manager Operations Console first

The text might be a bit off, as I don’t have a screenshot, but that was effectively the gist of it. I did a lot of scouring online in order to find something, but I really didn’t find much. After digging around internally, I did learn a few things worth sharing on this blog:

  1. Order is important.
  2. The SCOM console isn’t strictly necessary; it depends on which activities you use.
  3. The SDK DLLs need to be present in the assembly folder, but this information is also not easily available.

The first thing you’ll want to do to troubleshoot is make a small tweak to the registry:

Create a DWORD named DisableCacheViewer under HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Fusion and set its hexadecimal value to 1.

clip_image002
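If you’d rather script the change than click through Regedit, something like this from an elevated PowerShell prompt should do it:

# Create the DisableCacheViewer DWORD and set it to 1
New-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\Fusion' -Name 'DisableCacheViewer' -PropertyType DWord -Value 1 -Force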

What this does is disable the special Explorer view of the GAC, so that navigating to the C:\Windows\assembly folder shows the actual folder structure. What you’re supposed to see is in the screenshot below. Each of the highlighted folders represents one of the SCOM SDK DLL files. If you drill into any of them, you’ll see another folder indicating a version number, and inside of that will be a copy of the DLL.

clip_image001

That’s all pretty straightforward. In my case, the issue was order: the SCOM console had been installed first. The instructions say nothing about this, but it apparently matters, so the fix is rather easy. Uninstall the SCOM console, uninstall and unregister the integration pack, and then reboot.

Once it’s back up, deploy and register the IP. There’s no need for the console. In my case, that was enough, though sometimes you’ll have to go more drastic. One option is to try this script (obviously changing the paths as needed):

[System.Reflection.Assembly]::Load("System.EnterpriseServices, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a")

$publish = New-Object System.EnterpriseServices.Internal.Publish

$publish.GacInstall("C:\temp\OPSMGR\SDK Binaries\Microsoft.EnterpriseManagement.Core.dll")
$publish.GacInstall("C:\temp\OPSMGR\SDK Binaries\Microsoft.EnterpriseManagement.OperationsManager.dll")
$publish.GacInstall("C:\temp\OPSMGR\SDK Binaries\Microsoft.EnterpriseManagement.Runtime.dll")
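To sanity-check that the assemblies actually landed in the GAC, a quick directory listing should show them. A minimal sketch, assuming the default Windows path:

# List any Operations Manager SDK assemblies registered in the GAC
Get-ChildItem -Path 'C:\Windows\assembly' -Recurse -Filter 'Microsoft.EnterpriseManagement*' | Select-Object Name, FullName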

Installing Microsoft Identity Manager 2016–Part 2

We addressed the prerequisites here. As you can see, there was quite a bit to accomplish before even starting on MIM. Now for the good stuff. There are essentially two components in play for the bulk of the install: the Synchronization Service and the Portal. It’s worth noting that plenty of mistakes can be made here, so think and plan this one out carefully.

To start, I’m going to make a couple notes:

1) Log on as the MIMInstall account. I mentioned this before, but there do seem to be some ties to the account that performs the install. I recommend a generic install account here that is an admin on the server in question. You can disable it later, once you’ve granted all the appropriate rights, and simply re-enable it as needed.

2) Once you have the CD/ISO mounted, create a temp folder somewhere for logging information; you may need it. Launch an elevated command prompt and run the following from the Synchronization Service folder on the CD (I’m using c:\temp for my temp folder):

msiexec /i "Synchronization service.msi" /L*v c:\temp\MIM_SyncService_Install.log

That will create a log file in the c:\temp folder, which can be useful if you need to troubleshoot.
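If the install does fail, the classic trick for a verbose MSI log is to search for “return value 3”, which usually appears right after the failing action:

# Find the failing action in a verbose MSI log
Select-String -Path 'C:\temp\MIM_SyncService_Install.log' -Pattern 'return value 3'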

Click Next through the first wizard screen, accept the license agreement, and click Next again.

This screen is also pretty straightforward; click Next:

image

Here’s where you have hopefully already made a choice. I’m hosting my SQL server locally, but if you aren’t, you need to change this.

image

The next screen wants the creds to your MIM Service Account. This is also pretty easy.

image

Next are the various groups you created previously (by the way, if you haven’t already done this, add the appropriate people to the Admins group):

image

You probably want to open firewall ports. If you aren’t using Windows Firewall, you’ll want to do this manually:

image

At this point, you can click Install. I’ll spare you the boring screenshot; the install shouldn’t take too long. That said, you’ll want to launch the Synchronization Service once it’s done. If it doesn’t start, you have a problem: go back and figure it out, because trust me, if this isn’t working right, it makes the portal portion even harder. When it’s working, you should see this screen, without errors, when you launch it:

image
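A quick way to confirm the service itself is running without launching the client: it still registers under its FIM-era name, at least on my lab build, so verify the exact name with Get-Service if this doesn’t match yours:

# The sync service still carries its FIM-era service name
Get-Service -Name 'FIMSynchronizationService'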

If you haven’t done this already, you might want to get around to those aliases I mentioned in the first part. You’re going to install the portal next, which sits on top of the SharePoint site collection you set up previously. You can still work off of the CD you mounted earlier, but you’ll need to navigate to the Service and Portal folder and run the following command:

msiexec /i "Service and Portal.msi" /L*v c:\temp\MIM_Service_Install.log

It’s the same concept: you’re putting a log in c:\temp so you can troubleshoot what’s going on. Like before, you can click Next through the welcome screen and accept the license agreement. At this point, you can decide whether you want to use PAM. I chose no here, as this is a lab. It is, however, highly encouraged, as it enables just-in-time administration.

image

Next, you identify a database server and database name:

image

If you have Exchange running locally or online, configure it here. I don’t at this point, so I’m going to uncheck the top two boxes and set this to localhost:

image

In a prod environment, I’d have a CA issuing a real certificate, for what that’s worth. Here, I’ll use a self-signed one:

image

Next, you define the MIM service account. Note the warning here about the email address; it’s kind of important.

image

Depending on how you configured it, you may receive a warning about the account being insecure. Go back to the guide, figure that part out, and ensure you did it right. Next, you need the management agent account. The server name here is the server where the Synchronization Service was previously installed. I kept these all on the same machine, but I’m also running this in a lab with one user.

image

Side note: if you get this warning, cancel the install and revisit the issues with the Synchronization Service:

image

If you don’t, you’ll move on to this screen. Enter the server name hosting the SharePoint Site Collection:

image

And then enter the site collection URL you created earlier:

image

Now you’ll be prompted for the password registration portal URL. Note that this does need to include the http prefix; it’s cut off in my screenshot. And for some silly reason, MSFT has * in front of the URL in their example. Don’t do that. I tried it for fun; it doesn’t work.

image

Now for ports: like before, open them if you have Windows Firewall on. If not, call your firewall person and have them do it:

image
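For reference, if you need to hand the specifics to your firewall admin or script it yourself, the MIM Service listens on TCP 5725 and 5726 by default. A sketch using the built-in cmdlet, assuming Windows Firewall:

# Allow inbound traffic to the MIM Service on its default ports
New-NetFirewallRule -DisplayName 'MIM Service' -Direction Inbound -Protocol TCP -LocalPort 5725,5726 -Action Allow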

And you thought we were done. Not even close. One more service account; this time it’s your SSPR account, which I’d add will be needed again shortly. You also need the password registration URL (without the http this time), and you might want to open ports. Note that I’m doing this on port 80 because I don’t have a CA, but if you do, you should issue a web cert here and use 443.

image

If you use 80, you’ll get a security warning. Otherwise, you need to enter the server name hosting the MIM Service. I chose to keep this internal, but you may want this on an extranet. Choose accordingly:

image

And for grins, you get to enter your SSPR account again, this time referencing the password reset portal:

image

If you’re not secure here (i.e., no https), you’ll get another warning. Go back and set up https if you can; if not, click Next. That brings you to a screen much like the one above. This time, though, you need to configure the password reset URL to go with the server hosting the MIM Service installed above:

image

Now you’re done (sort of). Click Install. If you have no errors, you can go on to the next part.

Installing Microsoft Identity Manager 2016–Part 1

I’m moving off the SCOM world a bit, as I’ve been tasked with learning a new product. My cyber mentor told me that this is one I’ll find myself redoing multiple times, and unfortunately, he wasn’t kidding. I did some looking around and didn’t find much in terms of good step-by-step guides for this, so I decided to create my own. For the most part, I’m working off of the documentation found here, but there are some gaps in it, along with a few lessons learned that aren’t clearly documented. Hopefully, someone will find this useful.

First, let’s talk about what it is.

Its main purpose is to synchronize your corporate identities off of a master copy. Let’s say, for instance, that your authoritative identity store is an HR database. MIM can be used to take the data entered in the HR database and push it to Active Directory, Exchange, a 3rd-party ticketing tool, and, well, just about anything, if you take the time to install the connectors and (if necessary) write the scripts. It’s pretty powerful and automates a lot of manual processes that are often filled with user error. It can also be used to synchronize identities between your on-premises infrastructure and your Azure infrastructure. Likewise, it comes with really cool features such as the ability to enable self-service password resets via a registration and reset portal. This allows users to quickly reset their own passwords instead of waiting on hold for a help desk representative to do it for them. It’s quicker, and it takes a load off of your call center, since password resets are one of the higher-volume calls. As a security person, I think there are better solutions than passwords in general (Microsoft is going two-factor and password-free, for the record), and I think that’s an attainable goal, but organizations don’t often change that fast, and something like MIM can help them transition.

If you want to read more, this is a good starting point. MIM is basically the next evolution of Forefront Identity Manager (FIM), so you’ll see the two names used interchangeably.

Pre-Requisites

Ok, so I’m done with the sales pitch. Sorry about all of that; I’m guessing if you’re reading this, you’re already interested. From a prerequisite standpoint, the product is somewhat complicated. You need Active Directory, which, to be fair, everyone has. The web portals sit on top of a SharePoint infrastructure, and of course there’s a SQL database involved as well. So, lots of setup. We break our documentation into the domain setup, server setup, Exchange, SharePoint, and SQL setup. I’m not going to step through all of these, but I am going to highlight the basic needs. First of note: I don’t have Exchange in my lab presently, so I’m not doing that at all.

  • First step, figure out your URLs. For my demo purposes, I’m using mim.ctm.contoso.com as the main portal, and passwordregistration.ctm.contoso.com and passwordreset.ctm.contoso.com for registering and resetting passwords. I’m going to house all of that on one SharePoint front-end server, which also doubles as my MIM server. Distributing these roles is highly recommended for a non-lab environment, but for the purposes of my lab, SQL, SharePoint, and MIM will all be housed on the same server. Since this is web based, https is highly recommended. I don’t have the certificate infrastructure to do that, so I’m going to stick with http in my lab, but I think the average reader understands that this is not a good answer for production. You can, however, start with http and change it to https later on, since this is all using IIS and SharePoint. I do highly recommend that you involve your SharePoint admin in this stage of planning so they can carve off the site collection and whatnot.
  • Second step, figure out your accounts. The scripts here are, for the most part, good. There is, however, a typo in the account creation scripts, with both the MIMInstall account and the MIMMA account having MIMMA as the name. You’ll want to correct that if you’re going to use our scripts as-is. It’s worth noting that our scripts assume a different URL than what I’m using, so you’ll probably want to drop them into an ISE window and edit them to fit your needs; you’ll probably not want to use the same password for all of these accounts either. It’s also worth noting that this is heavily dependent on Kerberos, so you’ll need to set up those SPNs (there’s a sketch of what that can look like after the server setup script below). Your SQL and SharePoint admins may have different means of configuring all of this. If, for instance, you aren’t using a SQL service account to manage SQL, then the SPN information will need to be tied to the SQL server machine name and whatnot. This one will cost you time troubleshooting if you’re not careful about it, so pay attention to all of it.
  • Third step, install SQL server. I’m not a fan of the instructions found here. It’s a simple script, and there’s nothing wrong with that, but it really doesn’t tell you much. You obviously need the database engine. You need SQL Agent set to auto-start. You also need Full-Text Search installed; the MIM install will fail if you don’t do that. Your MIMInstall account will need to be an SA on the instance. Our script assumes that the account you created in step two is being used as the SQL server service account. Don’t forget that if you install using the wizard, as there are Kerberos settings tied to it.
  • Fourth step, install SharePoint. These scripts all work, but you’ll need to modify the URLs to suit your needs.
  • Fifth step, Exchange. I don’t have anything to add here because I didn’t do it. You’re on your own here.
  • Sixth step, MIM server setup. The server setup documentation is relatively straightforward. Your MIMInstall account will need to be a local admin, and as the guide notes, your service accounts need the Log on as a service right. You may need to set this via GPO depending on your environment, and of course the SQL and SharePoint accounts will need it too, but if you’ve been working with your SharePoint admin and SQL DBA, that’s been addressed. The server install piece is straightforward, but the scripts were written for older versions of Windows, so they won’t work on a Server 2016 deployment (there is no Application Server role anymore). I’ve modified it slightly:

Import-Module ServerManager
Install-WindowsFeature Web-WebServer,Net-Framework-Features,rsat-ad-powershell,Web-Mgmt-Tools,Windows-Identity-Foundation,Server-Media-Foundation,Xps-Viewer -IncludeAllSubFeature -Restart -Source d:\sources\SxS

Note also the source location, as not all of these features are present on a standard server build, especially if you’re deploying in the cloud.
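As for the SPNs mentioned in step two, the exact commands depend on your account names and URLs. A hypothetical sketch using the URLs from my lab — the account names below are placeholders, so substitute your own:

# SPN for the MIM Service account (FIMService is the service class MIM uses)
setspn -S FIMService/mim.ctm.contoso.com CONTOSO\MIMService
# HTTP SPN for the SharePoint app pool account serving the portal
setspn -S HTTP/mim.ctm.contoso.com CONTOSO\MIMSPPool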

Anyway, that’s it for the setup. The next post will cover the Sync Service as well as the portal installs.

Referencing Username and Password Credentials in an MDT Task Sequence

I don’t usually write about MDT, and I don’t plan on making this a habit either. However, I ran into an issue doing some automation for a customer that I felt needed documenting. This is just as much for my own benefit as it is for the three people who may find it useful, but there wasn’t much written about this online, and whenever I find myself in that situation, I usually turn it into a blog post if the solution isn’t naturally intuitive.

First, to back up a bit: Microsoft Deployment Toolkit (MDT) is a free method of creating light- and zero-touch deployments of operating system images across physical and/or virtual platforms. It’s generally installed as a sub-component of SCCM (which is not free). SCCM can provide more automation for these types of tasks, and it gets you around the problem I’m describing here, but MDT can exist in a standalone environment, especially if System Center is too expensive to purchase. Even at MSFT, we have uses for MDT in certain types of engagements, as it can automate some of the solutions we bring into a customer environment without needing to set up a System Center infrastructure.

In this particular case, we needed to make use of some of the variables in the MDT scripting environment. It’s worth noting that there are a ton of variables available. A full list of what is available for you to use can be found in the variables.dat file that exists locally on the machine being built. This file is generated during deployment and is removed once deployment is complete. I’m sure there’s a place on the MDT server which houses it as well, but I never got that far, since the file was not removed while my task sequence was failing. The long and short of it is that you can open this file with Notepad and see all of the variables available for you to use in a scripting environment.

From a scripting standpoint, these variables can be referenced within the script being executed in your task sequence, allowing you some very powerful automation options. The problem, as we discovered, is that some variables are not represented in clear text. As you can guess, this is typically username and password data. Both are in the variables.dat file, but not in readable form, as they are stored in base64. To be abundantly clear, nothing about this is secure. It’s meant only to prevent prying eyes from seeing usernames and passwords in clear text. Converting from base64 to ASCII is a single line of PowerShell, so whatever credentials you choose to put in MDT need to have only the permissions required for the task at hand. Physical access to your build environment is also paramount; keep that in mind as well.

To start, we need to load the TS environment. This is easy to do and not hard to find online:

$tsenv = New-Object -ComObject Microsoft.SMS.TSEnvironment

At this point, we need to reference the variable. In my case, I’m going to use the domain join account that someone specifies during the installation wizard. You do this as such:

$Username = $tsenv.Value("OSDJOINACCOUNT")
$Password = $tsenv.Value("OSDJOINPASSWORD")

$Domain = $tsenv.Value("DOMAINNAME")

Again, that’s pretty easy. The $Domain value is in clear text, so there’s not much to this, but if you were to pull the OSDJOINACCOUNT and OSDJOINPASSWORD variables out of the variables.dat file, you’d see something non-readable. The hard work in my case was figuring out what this format was. My assumption was that it was probably hashed like a typical password. That wasn’t the case, and it took a lot of digging around to find an offhand comment on Reddit that these were actually base64. For a PowerShell professional, this is pretty easy, but for those of us who don’t breathe ISE, it’s a bit more difficult. From there, you need this step:

$Password2 = [System.Text.Encoding]::ASCII.GetString([System.Convert]::FromBase64String($Password))
$Username2 = [System.Text.Encoding]::ASCII.GetString([System.Convert]::FromBase64String($Username))

Now we have the Username and Password in a format that a Task Sequence can use. The rest is pretty standard. I’m converting the password to a secure string and creating a credential object in PowerShell that can use it.

$CredPass = ConvertTo-SecureString $Password2 -AsPlainText -Force

$Credential = New-Object System.Management.Automation.PSCredential($Username2,$CredPass)

At this point, you have a credential that works, and whatever PowerShell command you’re trying to script that uses said credential can be given the $Credential variable.
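For instance, if the command you’re scripting is a domain join, the credential plugs in like any other. A hypothetical usage sketch:

# Hypothetical example: join the machine to the domain using the decoded credential
Add-Computer -DomainName $Domain -Credential $Credential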

That’s it.

Don’t ask me how to do it in VBS. I have no plans on learning that. 

Security Monitoring Future Plans (May 2019)

The good news about this project is that we’ve been able to knock out a lot of low-hanging fruit that can be used to detect some of the breadcrumbs an attacker leaves behind, as well as to identify where legacy protocols are being used. The bad news is that most of the low-hanging fruit has been picked clean. This space will be used to help identify and track future plans.

I’m going to stick with a one-year cadence. This has been developed mostly by me on my own time, and as such, there are only so many hours to go around. My current plans are as follows:

  • I would like to develop an administrative account monitoring component targeting admin accounts. I’m not sure how easily this can be accomplished. Enumerating these against a DC is not that hard to do, but in order to alert on them, these objects would need to be created on each and every DC, which isn’t realistic from a performance standpoint. There’s currently an unhosted class and a disabled discovery in this MP, but nothing is targeted against it. The hope is to come up with a way to start tracking admin accounts in general, logons outside of business hours, etc.
  • I’m hoping to delve more into WMI monitoring with the next release.
  • There are a few rules that I could see re-writing to add overridable parameters.
  • Likely going to write some detection mechanisms around this SCOM vulnerability.

This is not a big list presently, but as time permits I hope to grow it. Any suggestions are always appreciated.