LVSS: A framework to prioritize vulnerabilities

In this post I present a framework for prioritizing vulnerabilities and deciding how urgently each one needs to be fixed. The motivation is that scoring systems such as CVSS show only one part of the picture, and prioritization methods such as <impact x likelihood> are too subjective to scale across a company and to be answered consistently by different people. To be clear, I am not trying to replace CVSS, but to complement it. To reflect the local nature of this scoring system, as opposed to CVSS's global nature, I will call this framework the Local Vulnerability Scoring System, or LVSS.

The framework

The framework is as follows: For each pair of malicious actor and sensitive resource that you care about, a Local Vulnerability Score (LVS) is to be calculated as:

LVS = # of hops + # of detectors

where a lower score is more urgent/severe.

I will now explain in detail. An actor is the subject performing an access. It can be a real human (a stranger or an employee) or a service. A resource can be a server, a database, a network, customer data, or PII that you hold. While you can calculate an LVS for each individual resource, it is useful to group resources by type or classification and to start from the more important ones. Similarly, you can group actors by role or department, with the general public as another group. Because the framework operates on actor-resource pairs, you will quickly have many pairs to track unless you group as much as possible.

The number of hops is the number of steps the specified actor must take to reach the specified resource in the system. Exploiting the vulnerability is one step, but additional steps are often needed if defense in depth is practiced. For example, if a vulnerability is present only in an internal system accessible only to admins, and that vulnerability leads to sensitive data, then a non-admin actor needs 2 hops: one to become an admin, and a second to exploit the vulnerability.

The number of detectors is the number of "tripwires" that would alert between the specified actor and the specified resource, including, but not limited to, any alerts that would be triggered in the course of exploiting the vulnerability. In the example above, a suitable alert is one that fires when an actor becomes an admin, independent of the approval process to become admin. Another suitable alert is a tripwire that would be triggered should the vulnerability be exploited. Note that the detectors here assume a timely and capable incident response function, as logging alone is insufficient without a timely response.
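The scoring above can be sketched in a few lines of code. This is a minimal, hypothetical encoding for illustration (the names `Pair`, `lvs`, and `prioritize` are not part of the framework); the framework itself only prescribes LVS = hops + detectors, with lower scores being more urgent.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Pair:
    actor: str      # an actor or actor group, e.g. "general public"
    resource: str   # a resource or resource group, e.g. "customer data"
    hops: int       # steps from actor to resource, exploit included
    detectors: int  # monitored tripwires along that path

def lvs(pair: Pair) -> int:
    """Local Vulnerability Score: lower is more urgent."""
    return pair.hops + pair.detectors

def prioritize(pairs: list[Pair]) -> list[Pair]:
    """Order pairs most urgent (lowest LVS) first."""
    return sorted(pairs, key=lvs)
```

Keeping the pairs as data also makes remediation visible: adding a monitored tripwire to a path increments `detectors` and the pair immediately drops in urgency on the next sort.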

A reasonable question is whether this framework can be gamed, leading to padding numbers for the sake of numbers. I believe its incentives are aligned with meaningful security. To game this framework, one needs either to add more hops between an actor and a resource, hops that are unlikely to be exploitable by the same vulnerability and therefore force more work from an attacker, or to add more alerts, which help detect an attacker on their way to the resource. At the end of the day, this framework incentivizes improvement by putting the actor logically farther away from the resource, or by adding meaningful detection points that are monitored.

An example

Let's walk through an example scenario with log4shell. Log4shell is a vulnerability in a number of log4j versions that allows a party who can trigger a log message to perform arbitrary code execution on the log-processing server via specially crafted log messages.

Let's say log4shell is present in two places in your company: a public-facing web application and an internal, employee-facing web application. Let's further assume that the internal web application is directly exploitable and that, through remote code execution, it directly gives an attacker enough privileges to connect to a database holding customer data, a resource of interest (call it DB1). Let's say the public-facing web application contains nothing interesting (no resources of interest) but is hosted on the same network as DB1, though it does not normally connect to DB1 and holds no credentials for it. Lastly, let's assume no alerts are set up. Let us now calculate the relevant scores:

For the {employee, DB1} pair, the LVS is 1+0=1 via the internal web application. There is only one thing to exploit, the vulnerability itself, and exploiting it gives access to the internal web application server, which already has direct access to DB1.

For the {general public, DB1} pair, the LVS is 2+0=2 via the public-facing web application. It is not enough to exploit the public-facing web application; the attacker also needs to move laterally within the network to reach DB1. Even once they are on the same network, getting into DB1 likely means guessing DB1's credentials or bypassing them somehow.

(Employees can also pose as the general public and try to exploit the vulnerability through the public-facing application. Similarly, the general public can try to impersonate an employee and authenticate into the internal web application. In this example these two scenarios are no worse than the ones considered above, so I will omit them.)

Given the two scores above, it is more imperative to address the internal web application, because the {employee, DB1} pair results in the lowest score. One way to do so is by adding alerts: if employees can only access their web application over VPN, and if all VPN traffic is examined in near-real-time, then an alert could be written to detect exploit payloads in VPN traffic. On the other hand, the general public has two hops to get to DB1, and since only one of the hops is exploitable (this should be confirmed based on the impact of the vulnerability), we can depend on the additional layer of defense while we solve the more urgent LVS=1 case.
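Encoding the scenario as data makes the comparison mechanical. The tuples below are a hypothetical encoding of the two pairs discussed above, not anything prescribed by the framework:

```python
# (actor, resource, hops, detectors) for each pair in the scenario
pairs = [
    ("employee", "DB1", 1, 0),        # internal web app: exploit alone suffices
    ("general public", "DB1", 2, 0),  # public web app, then lateral movement
]

# LVS = hops + detectors; lower is more urgent
scores = {(actor, res): hops + dets for actor, res, hops, dets in pairs}
ranked = sorted(scores.items(), key=lambda item: item[1])

# {employee, DB1} ranks first with LVS 1. Adding the VPN-traffic exploit
# detector on that path would raise its score to 1 + 1 = 2.
```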

It is also possible to extend this exercise to calculate both pre- and post-vulnerability scores, which would more accurately reflect the state of multiple layers of defense when the vulnerability is exploitable at more than one hop.

Future work

Conceptually, a vulnerability is a failure in access control. By ensuring additional points of access control between an attacker and a resource, or by "lighting up" a failed path with alerts and tripwires, we compensate for individual failures and keep the resource hard to access even when one control fails.

One area I would like to explore more is applying access control models to formalize and reason about vulnerabilities. For example, if you have a system that is fully represented by an RBAC model, can you model a vulnerability simply as changing some evaluation functions to "allow", and let the model re-calculate and report what, if any, end-to-end policies have changed? This would allow vulnerabilities to be incorporated into access control as special policies.
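As a rough sketch of this idea, assume the system is modeled as a table of (role, resource) policy decisions plus role memberships; all names below are hypothetical. A vulnerability then becomes a set of policy entries forced to "allow", and its impact is the diff in end-to-end reachability before and after the flip:

```python
def reachable(policy, memberships, actor):
    """Resources the actor can reach under the given policy table."""
    roles = memberships.get(actor, set())
    return {resource for (role, resource), decision in policy.items()
            if role in roles and decision == "allow"}

def vulnerability_impact(policy, memberships, actor, forced_allow):
    """forced_allow: (role, resource) entries the vulnerability flips to allow."""
    before = reachable(policy, memberships, actor)
    exploited = {**policy, **{entry: "allow" for entry in forced_allow}}
    after = reachable(exploited, memberships, actor)
    return after - before  # resources newly accessible via the vulnerability
```

In this toy model, an empty diff means the vulnerability changes no end-to-end policy for that actor, while a non-empty diff names exactly the resources whose protection now rests on the remaining layers of defense.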