Cyber-Security for the IT Professional: Part 1 Terms and Assumptions

For those of you who do not know, I’ve been given the privilege of speaking at SCOMathon later this month, focusing on SCOM and how it relates to Cyber-Security. As I was putting my presentation together for that conference, one thing that came to mind was that the average attendee is going to be a typical IT professional and not someone versed in the details of IT Security. That’s not meant to be disparaging in any way; it simply acknowledges that IT is a pretty big field and that security represents only one component of the job description. While we would all agree that security is a focus, perhaps even the primary focus, the number of compromises we’ve observed over the last decade tells us that this is not something we do well from a practical standpoint. In my opinion, this is in large part because we as professionals don’t necessarily know where to start. Sure, we understand the basics, such as “don’t click on links you don’t trust”, “reset default passwords”, and “patch your systems regularly”, but this covers only a small surface of what it means to operate in a secure manner. When it comes to specifics such as the anatomy of an attack, vulnerabilities, exploits, and where to allocate IT resources, the discussion gets much more confusing. The meanings of specific terms can blend together, and oftentimes the problem is not the technology itself, but how we designed it. And then there is a noise component. The sheer volume of opportunities an attacker can take advantage of is somewhat mind-boggling, and of course with every exploit, there’s also someone willing to sell you something designed to mitigate those specific risks, whether the solution is something your organization needs or not.

As such, with this multi-part series I’m going to attempt to take the complex subject of cyber-security and boil it down into something a bit more useful for the average IT person. My hope in doing this is to shed light on some common design flaws that are usually the root causes behind a typical breach, as good design can make up for many of the flaws that will inevitably be found in any technological solution.

While it will be somewhat boring, for this first part we’re going to start with some terms and key assumptions. I’ll be using these terms fairly frequently across this series, and as such, I think it’s important that we are all on the same page as to what they mean, or at least how I define them:

Tier Model: This is really a series of terms, but the reality is that IT organizations are generally broken into 3 tiers. Organizations can go into a bit more depth with this if need be, but for the most part, the 3-tier model holds true for all organizations. It appears throughout our formal documentation, so it might not be new to you, but it’s worth defining in case you’re not familiar with it.

  • Tier 0 – this tier is the god tier, so to speak. Users who have access to this tier effectively own all of the information technology of an organization. Logically speaking, we recognize that these are our Domain Administrators, but we often forget that any system that touches a domain controller also qualifies. The reason is that if such a system is compromised, your attacker is now a domain admin, even if they haven’t compromised a specific DA account. SCOM or SCCM, for instance, can be a Tier 0 system if the Microsoft Monitoring Agent/SCCM client is installed on a domain controller. If either of those systems were compromised, your domain controllers would be as well. The same is true for your antivirus and configuration management systems. If it touches a DC in any way, it’s a Tier 0 system, because at the end of the day, an administrator of this system has direct control over the Tier 0 environment whether they are a domain administrator or not (note: this is why we recommend separate instances of these kinds of systems if they are going to manage domain controllers). Tier 0 typically represents the ultimate goal for an attacker. Once they have this tier, they own your environment because they can get to anything. As such, a lot of effort has been (rightfully) directed toward securing Tier 0.
  • Tier 1 – this is your data tier. While getting to Tier 0 means an attacker now owns you, we need to remember that the typical attacker is actually interested in what is stored in this tier. You have trade secrets and personally identifiable information in this tier. This is where financial data is stored, as well as your email, intranet, management software, supply chain, etc. Pretty much any system that your organization uses resides in this tier. While we will prioritize Tier 0 from a command and control standpoint, it’s worth noting that if an attacker has breached this tier, they likely already have access to what they’ve come for. That said, breaching this tier means they may have fewer tools at their disposal than they would as domain admins, which may still limit them in some capacity depending on what they’re after.
  • Tier 2 – this is your user tier. Effectively, it’s your desktop computing environment. To your average cyber professional, productivity machines are the wild west. This tier has internet access and it has users. When an organization is breached, the breach usually starts here, in large part because your users are not IT professionals and productivity machines are loaded with commercial applications that each have their own unique set of vulnerabilities; as such, they represent a weak link… which brings me to another term.
  • Crossing tiers (or credential bleed or poor credential hygiene) is what happens when a single system (such as SCOM, for instance) is configured across multiple tiers. It also happens when an account from one tier is used on a system that sits in another tier. I’ll simply note from a cyber perspective that this is very bad. This series will go into a lot more detail about it, but I’m going to contend throughout this series that this remains the biggest vulnerability most organizations possess, and until it is appropriately addressed, your organization is at risk of easy compromise, no matter what security posture you take. I’ve included a rough sketch right after this list of how you might spot it in practice.
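
To make crossing tiers a bit more concrete, here is a minimal sketch in Python. It is purely illustrative: the hostnames, account names, logon data, and the find_tier_crossings helper are all my own hypothetical examples, not output from any real product. In practice you would feed something like this with logon events or agent inventory from your own tooling.

```python
# Illustrative sketch only: flag accounts that have been used outside their home tier.
# All hostnames, account names, and logon records below are hypothetical.

# Tier assignment for each machine (lower number = more privileged tier).
machine_tiers = {
    "DC01": 0,          # domain controller
    "SCOM-MS01": 0,     # management server whose agents touch DCs -> treat as Tier 0
    "SQL-APP01": 1,     # application/data server
    "DESKTOP-042": 2,   # user workstation
}

# Home tier for each privileged account.
account_tiers = {
    "CONTOSO\\da-jsmith": 0,   # domain admin
    "CONTOSO\\sql-svc": 1,     # Tier 1 service account
}

# Observed interactive logons: (account, machine).
observed_logons = [
    ("CONTOSO\\da-jsmith", "DC01"),         # fine: Tier 0 account on a Tier 0 system
    ("CONTOSO\\da-jsmith", "DESKTOP-042"),  # credential bleed: Tier 0 account on Tier 2
    ("CONTOSO\\sql-svc", "SQL-APP01"),      # fine: Tier 1 account on a Tier 1 system
]

def find_tier_crossings(logons, accounts, machines):
    """Return logons where an account was used on a machine in a lower-trust tier
    (numerically higher tier) than the account's home tier."""
    return [
        (account, machine)
        for account, machine in logons
        if machines[machine] > accounts[account]
    ]

for account, machine in find_tier_crossings(observed_logons, account_tiers, machine_tiers):
    print(f"WARNING: {account} (Tier {account_tiers[account]}) logged on to "
          f"{machine} (Tier {machine_tiers[machine]})")
```

Running this flags only the domain admin logging on to the desktop, which is exactly the kind of credential bleed an attacker sitting on that workstation is waiting for.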

PAW: PAW stands for Privileged Access Workstation. I’ll go into the setup in more detail later on, but it is one of the main premises behind all of this: a hardened workstation dedicated to administrative use. It has no business productivity applications (i.e., email), and its sole purpose is to keep administration within the same tier (i.e., a Tier 0 PAW is dedicated to administering domain controllers, a Tier 1 PAW only administers Tier 1 systems, etc.).

Assumed Breach: This is both an assumption and a term, but as a cyber professional, it’s critically important to recognize that we cannot prevent a breach at Tier 2. No amount of user education is going to change the fact that someone is going to fall for that phishing scam, which currently represents the largest vector into your organization. Someone is going to visit a site on their work computer that they have no business visiting. Someone is going to download something from a non-trusted source. I’m not saying that education in these areas is a bad thing, but it should never be a primary defense. The bottom line is that the desktop is the most unpredictable device in the environment. Users range from technically competent to completely uneducated, and oftentimes the more competent the user, the more dangerous they are. Setting the users aside, desktops also have a much larger attack surface. Unlike servers, desktops run dozens of productivity applications, all of which have vulnerabilities that can be exploited. Speaking of which…

Vulnerabilities: A vulnerability is a flaw. It can be a flaw in the design or a flaw inherent to the technology that has been implemented, and that distinction is something we need to recognize, as we often try to mitigate the technical vulnerability and not the design vulnerability. A good example of a technical vulnerability is SQL. Every SQL implementation, whether Microsoft’s or another vendor’s, is vulnerable to a SQL injection attack due to the nature of structured query language, so this vulnerability has to be mitigated. Operating systems are vulnerable to pass the hash because single sign-on allows them to store credentials. A good example of a design vulnerability is credential theft in the broader sense. If a higher-tier credential is being used on a lower-tier system, an attacker has numerous exploits at their disposal to acquire said credential. In my opinion, design vulnerabilities are typically much more dangerous than technical vulnerabilities. While this won’t be true every time, technical vulnerabilities can be patched, or the vendor will provide mitigation guidance. That doesn’t excuse poor coding by vendors or poor documentation (such as saying a service account must be a domain admin), but good architectural design can significantly mitigate technical vulnerabilities. Vulnerabilities related to a specific system will always exist, and while we should be patching and staying up to date on security practices for that system, we need to think about our design first.
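
To illustrate the SQL injection point, here is a minimal Python sketch using an in-memory SQLite database. The table, columns, and values are hypothetical and exist only to contrast a query built by string concatenation, where the flaw lives, with a parameterized query, which mitigates it.

```python
import sqlite3  # SQLite keeps the example self-contained; the point applies to any SQL engine

# Hypothetical table purely for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 1), ('bob', 0)")

user_input = "alice"  # imagine this arrived from a web form

# Vulnerable pattern: the input is concatenated straight into the statement,
# so the database cannot tell data apart from SQL.
vulnerable_query = "SELECT is_admin FROM users WHERE username = '" + user_input + "'"
print(conn.execute(vulnerable_query).fetchall())

# Mitigated pattern: the parameterized form passes the input strictly as data.
print(conn.execute(
    "SELECT is_admin FROM users WHERE username = ?", (user_input,)
).fetchall())
```

With a benign username both queries return the same row; the difference only shows up when the input is hostile, which is exactly what the next term is about.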

Exploits: This is what the bad guys do to take advantage of a vulnerability. Pass the hash is an exploit that takes advantage of that stored credential on an operating system. Pass the ticket is similar to pass the hash, except that the bad guy is stealing a Kerberos ticket instead. SQL injection is an exploit that takes advantage of the way SQL processes statements, and the list goes on. There are a lot of exploits. One of the big mistakes we make here is missing the forest for the trees. An organization cannot reasonably mitigate against every exploit. There are simply too many of them. To some extent we rely on our vendors. This is why we patch, but even the best patching strategy will still leave an attacker with a number of exploits at their disposal.
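
Continuing the SQL sketch from the vulnerabilities section, here is what exploiting that concatenated query might look like. The payload is the classic textbook tautology, and the table and names are the same hypothetical ones as before.

```python
import sqlite3

# Same hypothetical table as the earlier sketch.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 1), ('bob', 0)")

# The attacker supplies a crafted payload instead of a username.
malicious_input = "nobody' OR '1'='1"

# Concatenation turns the WHERE clause into a tautology, so every row comes back:
#   SELECT is_admin FROM users WHERE username = 'nobody' OR '1'='1'
vulnerable_query = "SELECT is_admin FROM users WHERE username = '" + malicious_input + "'"
print(conn.execute(vulnerable_query).fetchall())   # -> [(1,), (0,)]

# The parameterized form treats the entire payload as a (nonexistent) username.
print(conn.execute(
    "SELECT is_admin FROM users WHERE username = ?", (malicious_input,)
).fetchall())                                      # -> []
```

The vulnerability (the string concatenation) and the exploit (the crafted input) are two different things, which is why the mitigation lives in how the query is written rather than in chasing individual payloads.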

Risk: This is what the organization assumes with any vulnerability. Risk effectively amounts to a cost, and it should ultimately determine what you spend on security design. If system X is compromised, what is the cost to the organization? Understand that risk can be eliminated, mitigated, or assumed. That’s also pretty straightforward. You can eliminate certain types of risk through your design. In some cases, you’re only mitigating risk because you’ve reduced your exposure to it in some way, but a risk of compromise remains. When you assume risk, you’re simply acknowledging that you cannot or will not fix it… and most importantly, it’s worth noting that ignoring said risk is the same as assuming it. Too often, risk is ignored.
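
Since risk ultimately amounts to a cost, it helps to put rough numbers on it. The sketch below is one common way of framing that conversation, an annualized-loss-style estimate; the figures and the scenario are entirely made up and are not from any real assessment.

```python
# Back-of-the-envelope risk comparison. All figures are hypothetical.

def annualized_expected_loss(likelihood_per_year, cost_if_compromised):
    """Rough expected yearly cost of assuming a risk: likelihood times impact."""
    return likelihood_per_year * cost_if_compromised

# Hypothetical scenario: a Tier 1 system holding customer data.
likelihood = 0.15           # estimated 15% chance of compromise in a given year
breach_cost = 2_000_000     # estimated cost of a breach (response, fines, reputation)
mitigation_cost = 120_000   # annual cost of the proposed design change or tooling

ale = annualized_expected_loss(likelihood, breach_cost)
print(f"Expected annual loss if we assume the risk: ${ale:,.0f}")
print(f"Annual cost of mitigation:                  ${mitigation_cost:,.0f}")
print("Mitigate" if mitigation_cost < ale else "Assume the risk (knowingly, not by ignoring it)")
```

The arithmetic is trivial, but it is exactly the comparison management is making, which leads into the next point.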

These terms aren’t necessarily exciting to the average IT professional, but do understand that when speaking to managers or C-level individuals, these are the terms they care about. There are only so many dollars available in the organization, and while security budgets are finally growing due to the cost of compromise, they aren’t limitless. Management is ultimately concerned about what it will cost to mitigate the risks versus the potential cost if they don’t.

My last point for this piece is a bit more than an assumption. It’s a fact, and one that should not be forgotten. Namely this: once an attacker has control of a system, they can do pretty much anything they want to it. Keep this thought in mind when it comes to terms like assumed breach and solution design. Ultimately, the biggest flaw in an organization’s security posture is not its technology and tools, but ignoring this fact when designing the environment.

You can find part 2 here.

You can find part 3 here.

You can find part 4 here.