Monitors vs. Rules and How They Affect Alert Management

I’m going to do a back-to-the-basics article here, not because nothing has been written on the subject of monitors, rules, and SCOM, but because I don’t think the topic has been fleshed out well, and to non-seasoned SCOM engineers it is not exactly intuitive. As such, I want to walk through these two mechanisms that SCOM uses to monitor your environment and how they affect your alert management process, as there are some significant differences between the two that SCOM users need to be aware of (and that means users, the people who close/acknowledge alerts, not just the engineers dedicated to SCOM). Alert management can be a difficult thing to do.  It’s the hardest part of managing SCOM due to noise, internal politics, management issues, and a whole host of other things.  A colleague of mine has put together a nice chart on the subject.  There is not one right way to do alert management, but there are a lot of wrong ways.  Today, I’d like to discuss some things that can cause problems with alert management, namely monitors and rules.

So first, let’s start with the basics.  SCOM has four main data types that get stored in the data warehouse: event, state, performance, and alert data.  Of these four types, rules are responsible for collecting event and performance data.  Collection rules like these are not alert generating; they are simply mechanisms to collect and store data for the purpose of reporting. You want to know the processor utilization of server X over the last three months? It’s a rule that collects this and tosses it in the DW.  The same goes for event collection rules. Their purpose in this scenario is solely to collect.
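As a concrete (and hedged) illustration of that last point, the sketch below pulls daily-aggregated processor data for one server straight out of the data warehouse. It assumes the standard DW reporting views (Perf.vPerfDaily, vPerformanceRule, vPerformanceRuleInstance, vManagedEntity) and the SqlServer PowerShell module; the server, database, and counter names are placeholders you would adjust for your environment.

```powershell
# Minimal sketch: last 90 days of daily CPU averages for one server from the DW.
# "SQLDW01", "ServerX", and the counter name are placeholders.
Invoke-Sqlcmd -ServerInstance "SQLDW01" -Database "OperationsManagerDW" -Query @"
SELECT me.Path,
       pr.ObjectName,
       pr.CounterName,
       pd.DateTime,
       pd.AverageValue
FROM   Perf.vPerfDaily pd
JOIN   vPerformanceRuleInstance pri ON pri.PerformanceRuleInstanceRowId = pd.PerformanceRuleInstanceRowId
JOIN   vPerformanceRule pr          ON pr.RuleRowId = pri.RuleRowId
JOIN   vManagedEntity me            ON me.ManagedEntityRowId = pd.ManagedEntityRowId
WHERE  me.Path LIKE 'ServerX%'
  AND  pr.CounterName = '% Processor Time'
  AND  pd.DateTime > DATEADD(day, -90, GETUTCDATE())
ORDER  BY pd.DateTime;
"@
```

None of this is magic: the rule collected the samples, the DW aggregated them, and the report (or query) just reads them back.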

State data, however, is a different animal. This is the health of the objects monitored in SCOM, stored in the OperationsManager database and viewed through Health Explorer.  Of note, only monitors can change state, so when you view Health Explorer, you will only see monitors in a red or yellow state (or all states if you unscope it); you will never see alerts generated by a rule.  This is why, on occasion, you’ll open Health Explorer off of an alert only to find a completely healthy object underneath it: a rule generated the alert (and note that on occasion a management pack designer will create both a monitor and a rule with the same name…. yeah… just to confuse you).  I’ll get to more about this in a minute.

That brings us to alerts, as both monitors and rules generate them.  Because of this, you need to view alerts as simply alerts and recognize that SCOM has two alert generating mechanisms, both of which behave a bit differently.  Both can generate alerts, though neither is required to (monitors almost always do, but it is not a given).

So let’s start with rules:

  1. They don’t change state.  I covered this before, but a rule just tells you something happened.  It is not definitive proof that your environment is unhealthy.  A common rule would be something like “SQL DB backup failed to complete”.  There’s nothing wrong with the health of your SQL environment, but this just might be something you want to look at. If you don’t care about the backup, then turn it off (not the rule, the backup itself), as at this point it is clutter.
  2. Rules like to talk.  They tell you something needs to be looked at.  They tell you again.  They tell you again and again and again.  A poorly constructed rule can generate thousands of alerts in a short period of time… and sadly yes, I know this from experience 🙂  You can build alert flood protection into rules, but that has to be done when the rule is created, and if it’s created in a sealed management pack, the only way to add flood protection is to disable the rule and re-create it.  With alert flood protection, rules will increment a “repeat counter” in SCOM.  This counter is not visible by default in your console; right click on your columns, select “Personalize view”, and check the box that says “Repeat Count” (usually near the bottom).  A rule with a high repeat count means the condition has been detected on numerous occasions, and you might want to investigate what is going on, as closing the alert only means it will come back in a few minutes (see the sketch after this list for a quick way to surface these).
  3. Rules do not auto-close.  This means that if you’ve fixed the problem, you need to close the alert if it was generated by a rule. 
  4. Alert Generation cannot be turned off for rules in sealed management packs.  You can only disable the rule and/or recreate it if needed.
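Since rule-generated alerts don’t close themselves and rely on that repeat counter to show you they keep firing, a quick way to surface the noisiest ones is from the Operations Manager Shell. This is a minimal sketch using the standard OperationsManager cmdlets; the alert name in the commented-out clean-up line is just the backup example from above, not a real MP alert name.

```powershell
# Open (ResolutionState 0) alerts that came from rules, noisiest first.
# RepeatCount only increments when the rule has alert suppression configured.
Get-SCOMAlert -ResolutionState 0 |
    Where-Object { -not $_.IsMonitorAlert } |
    Sort-Object RepeatCount -Descending |
    Select-Object -First 20 Name, MonitoringObjectDisplayName, RepeatCount, LastModified

# Rule alerts never auto-close, so once the condition is handled, close them explicitly
# (255 = Closed). Placeholder alert name:
# Get-SCOMAlert -Name 'SQL DB backup failed to complete' -ResolutionState 0 |
#     Set-SCOMAlert -ResolutionState 255 -Comment 'Backup job fixed and re-run'
```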

Now on to monitors:

  1. Monitors do change state.  This means that when a monitor detects an unhealthy condition, the object that contains it will go unhealthy.  Not all objects roll up through Windows Computer, so be careful there; simply using Health Explorer from that view may surprise you, as you won’t necessarily see what you want.  There is no super-class in SCOM, and I don’t believe that will change in 2016.
  2. Monitors have a mechanism for detecting a healthy condition.  This can be a timer, an event ID, or a script.  The bottom line is that the premise behind monitors is health, and as such, they need to have both healthy and unhealthy conditions.
  3. Alerts generated by monitors usually auto-resolve.  This means that when the monitor returns to healthy, it closes the alert it generated.  This is an overridable parameter, so by all means check it, but it’s pretty rare to see auto-resolve turned off by default.  I cannot think of an example, though I’m sure someone has seen it.  That said, this can be a source of noise in your environment if you have a monitor that is flip-flopping back and forth between healthy and unhealthy.  Monitors doing this need to be addressed, and since the alerts can go away on their own, if you don’t have a good alert management process in place, you can miss them.  I typically like to use the ‘Most Common Alerts’ report in the Generic Report Library, as it tells you which monitors/rules are generating said alerts.  Every now and then, you’ll see items on that report that have no alerts in the console, and a flip-flopping monitor can be the cause of this.
  4. Closing an alert generated by a monitor does not reset the health.  This means that if the underlying condition is still present, the health of the object in question will not go back to green.  Therefore, DO NOT CLOSE AN ALERT GENERATED BY A MONITOR. INSTEAD, USE HEALTH EXPLORER TO RESET THE HEALTH.  Sorry that I had to be loud there. This is probably the most common mistake someone new to SCOM makes (myself included, by the way).  It is an even bigger problem in organizations that use SCOM in their NOC (network operations center), as the helpdesk/NOC tends to be staffed by people with less experience, and the natural tendency is to close the alert when you think you’ve fixed it.  Sadly, some close it because they don’t know what to do with it.  When that happens, that particular monitor will not generate alerts again until it goes healthy first.  Other monitors will continue to generate alerts, but the one in question will not until it resets.  This also causes grooming problems in the OperationsManager DB, as state data isn’t groomed while the object is unhealthy. It would be nice if the product team shipped an update that generates a health reset in this scenario, but to my knowledge that has not happened.  (A scripted way to reset health is sketched just after this list.)
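For the scenario in item 4, Health Explorer is the right tool for a one-off reset, but when a NOC has closed a pile of monitor alerts, a script is faster. This is a minimal sketch that assumes the Operations Manager Shell; the display name is a placeholder, and ResetMonitoringState() is the SDK method exposed on the returned instance objects.

```powershell
# Reset the health of an object instead of (or after) closing its monitor-generated alert.
# "WebServer01.contoso.com" is a placeholder display name.
$instances = Get-SCOMClassInstance -DisplayName "WebServer01.contoso.com"

# Called with no arguments, ResetMonitoringState() resets the monitors on the instance,
# which lets them re-evaluate and re-alert if the condition is still present.
$instances | ForEach-Object { $_.ResetMonitoringState() }
```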

OK, so that’s the scoop.  The big thing is to identify which mechanism generated the alert.  From there, you can craft a process for how to deal with it.  And to answer one question I get asked a lot: yes, the alert itself tells you which mechanism generated it.
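In the console it’s in the alert properties, and from the Operations Manager Shell the IsMonitorAlert property exposes the same thing. A minimal sketch:

```powershell
# Split currently open alerts by the mechanism that generated them.
Get-SCOMAlert -ResolutionState 0 |
    Group-Object IsMonitorAlert |
    Select-Object @{ n = 'GeneratedBy'; e = { if ($_.Name -eq 'True') { 'Monitor' } else { 'Rule' } } },
                  Count
```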

[Image: alert properties showing whether a rule or a monitor generated the alert]

The Anatomy of a Good SCOM Alert Management Process – Part 3: Completing the Alert Management Life Cycle.

This is my final article in a three-part series about Alert Management.  Part 1 is here. Part 2 is here.

In the first two parts, we have already discussed why alert management is necessary and what tends to get in the way.  The final article in this series will cover what processes need to change or be added in order to facilitate good alert management.

The information below can be found in a number of places.  It is in the health check we provide for SCOM, I’ve seen it in presentations by a number of different Microsoft PFEs, and it shows up on some blogs as well.  Simply put, there’s plenty out there that can point you in the right direction, though sometimes the WHY gets left out.

Daily Tasks

  • Check, using the Operations Manager management packs, that the Operations Manager components are healthy
  • Check that new alerts from the previous day are not still in a state of ‘New’ (see the sketch after this list)
  • Check for any unusual alert or event noise; investigate further if required (e.g. failing scripts, WMI issues, etc.)
  • Check all agents’ status for any that are in a state other than green
  • Review nightly backup jobs and database space allocation
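For the ‘still New’ and agent status checks above, here is a minimal Operations Manager Shell sketch (the one-day threshold and property choices are mine, adjust as needed):

```powershell
# Alerts raised more than a day ago that are still sitting in the 'New' state (0).
Get-SCOMAlert -ResolutionState 0 |
    Where-Object { $_.TimeRaised -lt (Get-Date).AddDays(-1).ToUniversalTime() } |
    Select-Object Name, MonitoringObjectDisplayName, TimeRaised, IsMonitorAlert

# Agents whose health state is anything other than green (Success).
Get-SCOMAgent |
    Where-Object { $_.HealthState -ne 'Success' } |
    Select-Object DisplayName, HealthState
```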

Weekly Tasks

  • Schedule a weekly meeting with IT operational stakeholders and technical staff to review the previous week’s most common alerts
  • Run the ‘Most Common Alerts’ report; investigate where necessary (see above bullet)
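If you want a quick approximation of that report ahead of the meeting, something like the following works from the Operations Manager Shell (the seven-day window is arbitrary, and pulling every alert can be slow in a large environment):

```powershell
# Top 10 alert names raised in the last 7 days, across all resolution states.
$cutoff = (Get-Date).AddDays(-7).ToUniversalTime()
Get-SCOMAlert |
    Where-Object { $_.TimeRaised -ge $cutoff } |
    Group-Object Name |
    Sort-Object Count -Descending |
    Select-Object -First 10 Count, Name
```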

Monthly Tasks

  • Check for new versions of the management packs you have installed, and check newly released management packs for suitability for your monitored environment (see the sketch after this list)
  • Run the baseline counters to assess the ongoing performance of the Operations Manager environment as new agents and new management packs are added
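For the management pack check, a minimal sketch that lists what is currently installed so you can compare against vendor release notes:

```powershell
# Installed management packs with their versions, most recently changed first.
Get-SCOMManagementPack |
    Sort-Object LastModified -Descending |
    Select-Object Name, Version, Sealed, LastModified
```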

The task list doesn’t necessarily say WHO is responsible for completing these items, but I can say with reasonable certainty that if the SCOM administrator is the only one expected to do them, he or she will fail.  Alert noise in particular is a team effort.  It needs to be handled directly by the people whose responsibility it is to maintain the systems being monitored.  That means your AD people should be watching the AD management pack, the SQL team needs to be watching for SQL alerts, and so on and so forth.  They know their products better than the SCOM administrator ever will.

Tier one (and by proxy tier two) can certainly be the eyes and ears on the alerts that come through, but they need clearly defined escalation paths to the appropriate teams so that issues that aren’t easily resolved can be sent on to the correct tier three teams.  SCOM does a lot of self-alerting, so that escalation needs to include the SCOM administrators, as issues such as WMI scripts not running, failing workflows, and various management group related alerts need to eventually make it to the SCOM administrator.  Issues such as health service heartbeats (and by proxy gray agents, when that heartbeat threshold is exceeded) need to be looked at right away, as they indicate that an agent is not being monitored (at the least).  There are a number of reasons why that could be the case, ranging from down systems (which you want to address), to bad processes, to some sort of client issue preventing communication.
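A hedged sketch for spotting those gray agents from the Operations Manager Shell; it checks the availability flag on health service instances, using the standard Microsoft.SystemCenter.HealthService class:

```powershell
# "Gray" agents: health service instances the management group considers unavailable.
$healthServiceClass = Get-SCOMClass -Name "Microsoft.SystemCenter.HealthService"
Get-SCOMClassInstance -Class $healthServiceClass |
    Where-Object { -not $_.IsAvailable } |
    Select-Object DisplayName, IsAvailable, HealthState
```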

Finally, all of this requires some sort of accountability.  Management doesn’t necessarily need to know why system X is red.  That’s usually the wrong question.  What management needs to be ensuring is that when there’s an alert from SCOM, SOMEONE is addressing it, and that someone also has a clear escalation path when they get to a point where they aren’t sure what’s going on.  To be clear, there’s going to be A LOT of this at first. That’s normal, and that also gets us into other key processes that need to be formed or adjusted in order to make this work.

  1. Server commission/decommission:  The most common cause of gray agents in SCOM is the failure to remove an agent from SCOM when its server is retired.  It’s a simple change, but it has to be worked into your organization’s current process.  On the flip side, ensuring that new servers are promptly added to SCOM is also important.  How that is managed is more organization specific.  You can auto-deploy via SCCM or AD (though don’t forget to change the remotely manageable setting if you do), you can manually deploy through the SCOM console, or you can pre-install the agent in the image and use AD assignment if that is preferred (a minimal commission/decommission sketch appears after this list).  Keep in mind that systems in a DMZ will require certificates or a gateway to authenticate, which will further affect these processes.  You may also want to think about whether or not your development systems should be monitored in the production environment (as these will usually generate more noise); consider putting them in a dev SCOM environment instead (likely at no additional cost).
  2. Development Environment:  The dev SCOM environment will have its own processes as well.  It is used mostly for testing new MP rollouts, and rather than being watched by your day-to-day support operations, it really only needs to be watched by the engineers responsible for their products and by the SCOM administrator.
  3. Maintenance:  Server maintenance will need to be adjusted as well.  This might be the biggest process change (or in most cases, a new process altogether).  Rebooting a DC during production hours (for example) is somewhat normal since it really won’t cause an outage, but if that DC is, say, the PDC emulator, every DC in SCOM will generate an alert when it goes down.  Domain controllers aren’t the only example here; any server reboot can do this.  Reboots can generate a health service heartbeat alert if the server misses its ping, or even a gray server if the reboot takes a while.  Application specific alerts can be generated as well, and SCOM specific alerts will fire when workflows are suddenly terminated.  This process is key, as it’s a direct contributor to the daily noise that SCOM typically generates.  SCOM isn’t smart enough to know which outages are acceptable to your organization and which ones aren’t; it’s up to the org to tell it.  SCOM includes a nice tool called Maintenance Mode to assist with this (though it’s worth noting that this is a workflow that the management server orders a client to execute, so it can take a few minutes to go into effect); a minimal sketch appears after this list.  System Center 2016 has also added the ability to schedule maintenance mode, so that noisy objects can be put in MM automatically when that 2:00 AM backup job is running.  If there’s a place for accountability, this one is key, as the actions of the person doing the maintenance rarely get back to him or her, since that same person is often not responsible for the alert that is generated.  Don’t assume this one will define itself organically.  It probably won’t, and it may need some sort of management oversight to get it working well.
  4. Updates:  The update process is also one that will need adjusting.  It’s a bit of a dirty little secret in the SCOM world, but simply using WSUS and/or SCCM will not suffice.  There’s a manual piece too, involving running SQL scripts and importing SCOM’s updated internal MPs.  The process hasn’t changed as long as I’ve been doing it, but if you aren’t sure, Kevin Holman writes an updated walkthrough with just about every release (such as this one).
  5. Meeting with key teams:  This is specified as a weekly task, though as the environment is tuned (see below) and better maintained, it can happen less frequently.  The bottom line is that SCOM will generate alerts.  Some are easy to fix, such as the SQL SPN alerts that usually show up in a new deployment.  Some, not so much.  If the SQL team doesn’t watch SQL alerts, they won’t know what is legitimate and what isn’t.  If they aren’t meeting with the SCOM admin on a somewhat consistent basis, then the tuning process won’t happen.  The tier 1 and 2 people start ignoring alerts when they see the same ones over and over again with no guidance or attempts to fix them.  This process is key, as that communication doesn’t always happen organically.  SCOM also gives us some very nice reports in the Generic Report Library to help facilitate these meetings.  The ‘Most Common Alerts’ report mentioned above is a great example, as you can configure it to give you a good top-down analysis of what is generating the most noise and which management packs are generating it.  Most importantly, what invariably happens is that the top 3-4 items account for 50-70% of your alert volume, so much of the tuning process can be accomplished by simply running this report and sitting down with the key teams.
  6. Tuning:  This ties into those meetings, but at the same time, the tuning process needs its own process flow.  Noise needs to be escalated by the responsible teams to the SCOM administrator so that it can be addressed, whether through threshold changes or by turning off certain rules/monitors.  To an extent, the SCOM administrator should push back on this as well.  In a highly functional team this isn’t necessary, but the default reaction that so many people have is just ‘turn it off.’  That’s not always the right answer, though it certainly can be in the right situation.  For example, SCOM will tell you that website X or app pool Y is not running, and this can be normal in a lot of organizations.  But a lot of alerts aren’t that simple, and all of them need to be investigated, as some can be caused by events such as reboots, and many (such as SQL SPN alerts) are being ignored because the owner isn’t sure what to do.  This is not always readily apparent, and some back and forth here is healthy.
  7. Documentation:  In any health check, Microsoft asks if SCOM changes are documented.  I’ve yet to see a ‘yes’ answer here.  Truthfully, most organizations don’t handle change control that well, and IT people seem to be rather averse to documentation.  I’m sure part of that is that there’s already so much of it that it rarely gets read or even makes sense.  Another part is that change management isn’t usually a daily event, while SCOM alert changes need to happen frequently.  You really don’t need a change management meeting to facilitate those types of changes, as the only people affected are the SCOM admin and whoever owns the system/process in question, and waiting for those meetings can be painful for everyone responsible for dealing with said alerts.  I’ve always used a poor man’s implementation here.  Each management pack comes with a description and a version field that is easily editable.  Each time I make a change to a customization MP, I increment the version and put the new version number in the description field with a list of the change(s) made, who made them, why, and who else was involved.  This is worthwhile for CYA, as management may occasionally ask if SCOM picked up on specific events, and you don’t want to have to explain why the alert for said event was turned off.  It’s also useful for role changes: whenever a new SCOM administrator starts, they tend to want to redo the environment because they have no clue what their predecessor(s) did and why.  That little history provides a quick rundown of the what and why that a new SCOM admin can use.  This assumes, of course, that best practices are followed for customizations (don’t use the default MP, and by all means, do not simply dump all your changes into one MP), and that this is communicated.
  8. Backups:  This can be org specific, as spinning up a new SCOM environment might be preferable to maintaining terabytes of backup space.  That is certainly reasonable, but the org needs to actually make a decision here (and this one is a management decision, in my opinion).  That said, if the other practices are being followed, those customizations suddenly become more important. Customized MPs can be backed up via a script or an MP, and they are usually the most important item to back up, as they take the most work to restore manually (a minimal export sketch appears below).
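For item 1, here is a minimal commission/decommission sketch using the standard console cmdlets. It assumes a straightforward push install (no DMZ, certificates, or gateways), and the server and management server names are placeholders.

```powershell
# Commission: push the agent to a new server from a management server.
$ms = Get-SCOMManagementServer -Name "SCOMMS01.contoso.com"
Install-SCOMAgent -DNSHostName "NEWSRV01.contoso.com" -PrimaryManagementServer $ms

# Decommission: remove a retired server's agent so it doesn't linger as a gray object.
Uninstall-SCOMAgent -Agent (Get-SCOMAgent -DNSHostName "OLDSRV01.contoso.com")
```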
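For item 3, a minimal maintenance mode sketch. The display name, duration, and reason are placeholders; remember that the workflow can take a few minutes to take effect on the agent.

```powershell
# Put a server's Windows Computer object into maintenance mode for 30 minutes before planned work.
$computerClass = Get-SCOMClass -Name "Microsoft.Windows.Computer"
$computer = Get-SCOMClassInstance -Class $computerClass |
    Where-Object { $_.DisplayName -eq "DC01.contoso.com" }

Start-SCOMMaintenanceMode -Instance $computer `
    -EndTime (Get-Date).AddMinutes(30) `
    -Reason PlannedOther `
    -Comment "Monthly patching reboot"
```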
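And for item 8, the poor man’s backup of your customizations: export every unsealed MP on a schedule. The path is a placeholder.

```powershell
# Export all unsealed (customization) management packs to a backup folder.
Get-SCOMManagementPack |
    Where-Object { -not $_.Sealed } |
    Export-SCOMManagementPack -Path "D:\SCOM\MPBackup"
```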

I hope at this point it is clear that rolling out SCOM is an org commitment.  A ‘check the box’ mentality won’t work here (though that’s probably true for all software).  There’s too much that needs to be discussed, and there are too many processes that will require change.  If anything, this should give any SCOM admin or member of management a good starting point for making these changes.