The Risk Model

In the last month I have talked to a number of companies looking into performing a risk assessment. Yet the typical sales response was to offer a vulnerability scan or a penetration test. Technical vulnerabilities are only part of an organization's vulnerability. Furthermore, vulnerability is only part of risk.

[Figure: Risk Venn diagram, showing risk at the intersection of vulnerability, opportunity, and threat]

What is risk? It arises when three basic elements exist at the same time: vulnerability, opportunity, and threat. All three must be present for a risk to exist, so we can mitigate the risk by removing any one of the three from the equation. Strangely, when I give talks today, I see audiences rushing to write down the risk Venn diagram. The reason is that too many organizations focus entirely on complying with some standard or performing patch management rather than on actually securing their systems. Somewhere in the creation of certifications and checklists, the theory of risk was lost.
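To make the model concrete, here is a minimal sketch in Python (the names are mine, not any standard library): risk is the conjunction of the three elements, and negating any one of them mitigates it.

```python
from dataclasses import dataclass

@dataclass
class Scenario:
    """A situation evaluated against the risk model."""
    vulnerability: bool  # a weakness exists (bug, misconfiguration, untrained user)
    opportunity: bool    # the weakness is reachable (physical or network access)
    threat: bool         # someone is motivated and able to exploit it

def at_risk(s: Scenario) -> bool:
    # Risk exists only when all three elements are present at once.
    return s.vulnerability and s.opportunity and s.threat

# An unencrypted disk in a machine anyone can walk up to, with a motivated
# attacker around: all three elements present, so the risk is real.
print(at_risk(Scenario(vulnerability=True, opportunity=True, threat=True)))   # True

# Lock the server room (remove the opportunity) and the risk is mitigated.
print(at_risk(Scenario(vulnerability=True, opportunity=False, threat=True)))  # False
```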

For example, before the introduction of encrypted drives you could mount and access a local disk without a password, as long as you had physical access to the machine. One reason for this was to let someone reset the root password if it was forgotten. This functionality, of course, introduced the risk that anyone could boot a system, reset the root password, and take full control. So how do you mitigate the risk? Remember, taking advantage of this feature required physical access to the machine. If you restrict physical access to trusted personnel, you remove the opportunity for anyone else to take advantage of it. When you remove the opportunity (or any of the three risk elements), you mitigate the risk itself.

In the security profession we always say mitigate the risk, not remove the risk. Risk never truly goes away. Attackers do not just perform a single action; they manipulate the entire situation, trying to change the circumstances that mitigated the risk. They will change the environment, modify the threat they present, or search for new vulnerabilities. Watch a user-side attack closely and you will see each step manipulate the mitigations until the final attack strikes.

The risk model is a good way to view the evolution of security. It also lets us organize our tools and understand what they are doing in our infrastructure. Security products have moved from opportunity-based to vulnerability-based to threat-based solutions, a change that has taken more than forty years.

Early computer security focused on limiting the opportunities available to the attacker. Systems were protected by preventing physical access to them, and when communications travelled through unprotected areas, end-to-end encryption and authenticated remote dial-ins kept them out of reach.

However, once the Internet was introduced, we had to give at least some access to outside entities, and as soon as we did, we introduced an opportunity for risk. To reduce this risk, we implemented firewalls. When the firewall was first created, it was intended to do more than simply reduce the number of externally accessible services. Steve Bellovin, co-author of “Firewalls and Internet Security: Repelling the Wily Hacker,” has a good explanation of the purpose of firewalls. His logic is that all code has bugs, and bugs in code that performs security functions lead to security vulnerabilities. Since externally accessible services carry a significant quantity of code, the firewall acts as an intermediary. It operates with a significantly smaller code base, which can be reviewed more diligently. By reducing both the access to the code base and the quantity of code to secure, you reduce the opportunity for someone to exploit a vulnerability in it.
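A toy illustration of that logic, with hypothetical addresses and rules of my own: the filter itself is a few lines of default-deny checking that can be audited exhaustively, while the services behind it may be millions of lines.

```python
# Default-deny packet filter: the only code exposed to the outside world
# is this small, reviewable rule check. Addresses and rules are hypothetical.
ALLOWED = {
    ("203.0.113.10", 25),   # inbound mail to the SMTP relay
    ("203.0.113.20", 80),   # inbound HTTP to the public web server
}

def permit(dst_ip: str, dst_port: int) -> bool:
    """Allow a connection only if the (host, port) pair is explicitly listed."""
    return (dst_ip, dst_port) in ALLOWED

print(permit("203.0.113.10", 25))  # True:  reaches the mail relay
print(permit("203.0.113.10", 23))  # False: telnet never reaches the host
```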

Along Came SATAN: the Switch to Vulnerability-Based Security

Cheswick and Bellovin wrote their book in 1994, but the change in security was already starting. In 1993, Dan Farmer and Wietse Venema released a landmark paper, Improving the Security of Your Site by Breaking Into It. It was accompanied by the announcement of the Security Administrator Tool for Analyzing Networks (SATAN), the first graphical vulnerability scanner. The idea was to mimic a hacker: by scanning your own network with the same tools a hacker would use, you should be able to find systems that need patching and proactively fix them. In theory, this removes vulnerability from the network and mitigates the risk. It marked a shift from opportunity-based defense to vulnerability-based defense. To this day, vulnerability-based security remains the predominant form of security. It has given us vulnerability scanning, patch management, vulnerability announcements, and the vulnerability-based signatures used by IDS and IPS.
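The first step of that approach, finding out which services are listening, takes only a few lines. Here is a minimal sketch using Python's standard socket module, with a hypothetical target address; scan only hosts you administer.

```python
import socket

def scan(host: str, ports: list[int], timeout: float = 0.5) -> list[int]:
    """Return the subset of ports on host that accept a TCP connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 on success, i.e. a service is listening.
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

# Check a handful of classic service ports on a host you own.
print(scan("192.0.2.15", [21, 22, 23, 25, 80, 110, 143, 443]))
```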

While the concept of vulnerability-based security sounds good, there is a major flaw in this approach: not all vulnerabilities are in services, or even in code. The National Vulnerability Database records over 12,000 application vulnerabilities in a given year, a far greater number than can be scanned for remotely. There are also many other types of vulnerabilities: uneducated users, misconfigurations, zero-day vulnerabilities, and general design flaws. In short, there are many more vulnerabilities than anyone could secure. Because of this, organizations that focus heavily on this model seem to be in a perpetual state of catch-up in mitigating the risk in their environment.

Web Filtering: the Start of Threat-Based Security

In 1998, a company called N2H2 started categorizing websites into blacklists so that organizations could filter by content. It was acquired by Secure Computing in 2003 and merged with other filtering technology, becoming first Webwasher and later evolving into McAfee's Global Threat Intelligence (GTI). This was neither the opportunity-based nor the vulnerability-based security that the industry loved, and at the time, IDS/IPS vendors made a significant effort to downplay the validity of threat-based security.

Threat-based security begins by defining a characteristic of an attack. This could be the attacker, their network address or site name, the files they use, or even individual code sections. Once this information has been collected, a foreign key (a designator that can be cross-referenced) is created for each threat object, ultimately building a blacklist. To improve the accuracy of threat-based detection, a foreign-key index can also be created for all acceptable objects, also known as a whitelist.
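A minimal sketch of that mechanism in Python, using a file's SHA-256 digest as the foreign key; the list contents here are placeholders, not real threat data.

```python
import hashlib

# Hypothetical reputation indexes keyed by SHA-256 digest (the foreign key).
# For demonstration, the blacklist holds the digest of the empty file.
BLACKLIST = {"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"}
WHITELIST: set[str] = set()

def classify(data: bytes) -> str:
    """Cross-reference an object's digest against the threat indexes."""
    digest = hashlib.sha256(data).hexdigest()
    if digest in BLACKLIST:
        return "block"    # known-bad object
    if digest in WHITELIST:
        return "allow"    # known-good object
    return "unknown"      # no reputation recorded yet

print(classify(b""))             # "block"   - digest matches the blacklist
print(classify(b"hello world"))  # "unknown" - not in either index
```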

Threat-based solutions have spawned a whole new industry of reputation-based security products, and web filtering is not the only technology using them. Antivirus companies started using reputation to increase detection and filtering rates, and a whole new generation of security tools has been created. There are now tools, called dynamic scanners, that run files in a sandbox, track the results of the execution, and generate reputation from what they observe. The industry has even reached the point where organizations feel comfortable filtering traffic based on threat alone.

Summary

The risk model is a core tool of the trade for security professionals. It helps us look at risk and find different ways to address the problem. Through the risk model we can see the industry's shifts, favoring one mitigation technique over another. Yet regardless of the changes in technique, the risk model itself holds true.
