For the Sentinel people, I’m guessing that most of what I’m writing here is not really news, but for those of us whose experience is mostly in SCOM, I’m going to post some useful KQL to help validate that data is flowing properly.
From a timing perspective, the data flow is not instantaneous, but it is quick. If you have generated an alert, you should see it in Sentinel within a few minutes, and the same goes for events; the wait should not be long.
There are a number of tables in Sentinel that you can enumerate with KQL. Our data, however, is only going to two of them: the Alert table and the Event table. Alerts, hopefully, are somewhat sporadic, but the namespace in the alert name should be unique enough to confirm that these alerts are arriving. A simple KQL query such as this should enumerate those items:
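A query along these lines should do it, assuming your SCOM alerts land in the Alert table as described above. The "Contoso" name filter is a placeholder for your own alert namespace, so substitute whatever your management pack uses:

```kql
// Sketch: list recent SCOM-sourced alerts.
// "Contoso" is a placeholder for your alert namespace.
Alert
| where TimeGenerated > ago(1d)
| where SourceSystem == "OpsManager"
| where AlertName contains "Contoso"
| project TimeGenerated, AlertName, AlertSeverity, SourceSystem
| sort by TimeGenerated desc
```

If nothing comes back, widen the `ago()` window or drop the name filter before assuming the connection is broken.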
Please note that for both alerts and events, pay close attention to the Time Range setting in Sentinel. The default is 24 hours, but you can change it, and Sentinel will remember your custom configuration.
Events are a bit harder to track. They come in with event IDs just like any other event, and they also fill up a bit faster, so you should see them quickly. That said, you can filter for what is coming through SCOM: the Source System on our events will always say OpsManager, so that’s our first filter. The MP sends events 4624, 4625, and 4688 out of the box; the other events need the minimal configuration MP, which is included by default, before you start seeing them. Once you have everything configured, testing event flow should be straightforward, as 4624 events are pretty common. This is a KQL query I’d run against Sentinel to ensure you’re getting data flow:
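As a sketch of the kind of query I mean, this checks for the default event IDs coming through SCOM; the one-hour window and the grouping by computer are arbitrary choices, so adjust them to taste:

```kql
// Sketch: confirm SCOM-collected events are arriving in Sentinel.
// SourceSystem == "OpsManager" filters to events relayed by SCOM.
Event
| where TimeGenerated > ago(1h)
| where SourceSystem == "OpsManager"
| where EventID in (4624, 4625, 4688)
| summarize EventCount = count() by EventID, Computer
| sort by EventCount desc
```

Rows coming back means event flow is working; an empty result usually points to a time-range or configuration issue rather than a broken pipeline.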