Nov 24, 2020
As we examined in our previous article “Predicting Vulnerabilities in Compiled Code”, over 1,000 new vulnerabilities are disclosed every month. Attackers find ways to exploit these vulnerabilities, and the monetary damage resulting from cyber crime keeps growing.
As we learned, to fully protect ourselves from cyber attacks, we need to keep developing tools that help us find and identify vulnerabilities before the attacker does.
This huge increase in vulnerabilities is directly related to the number of patches being released. IT and DevOps teams are collapsing under the overwhelming volume they need to handle, creating patching gaps that attackers abuse. Patching also carries real risk for the company, whether through downtime (planned or not), business process corruption, or other disruptions to stable activity. The world of application security and patching is evolving, as mentioned in our blog “Sealing the Patch Gap”. Still, the number of potential mitigation measures is not proportional to the resources a given company has.
Ideally, the best practice would be to patch or protect every piece of vulnerable software in an organization. But that is impossible given the current rate of vulnerability proliferation and the limited capacity of security teams. We need a smarter approach: tools that help us calculate the risk of each vulnerability, so we can focus on mitigating the highest risk first.
The industry standard measure of the severity of a software vulnerability is the Common Vulnerability Scoring System (CVSS), published by FIRST (the Forum of Incident Response and Security Teams). CVSS scores range from 0 to 10 and allow us to compare vulnerability risk and prioritize mitigation actions.
The CVSS is composed of three metrics: base, which represents the vulnerability characteristics that are constant over time and common across user environments; temporal, which reflects the characteristics of a vulnerability that change over time; and environmental, which represents the characteristics of a vulnerability that are unique to a user's environment. (see figure 1)
The base metric components are categorized into three sub-groups: exploitability, impact, and scope.
The exploitability group measures the ease and technical means by which the vulnerability can be exploited.
Attack Vector - reflects the context required to exploit the vulnerability: is it remotely exploitable over the network, limited at the protocol level to the adjacent network, does it require local access to the target system, or does it require physical interaction with the device?
Assuming that the number of potential attackers for a vulnerability exploitable remotely over the network is larger than for one that requires physical access to a device, remote exploitability is reflected in a higher base score.
Attack Complexity - describes the conditions beyond the attacker’s control that must exist in order to exploit the vulnerability. For example, does the exploit require collecting target information in order for it to run? The less complicated the attack, the greater score it will receive.
Privileges Required - describes the level of privileges the attacker must possess before executing the attack: administrative, local user, or none. The score is higher if no privileges are required.
User Interaction - Is user interaction required for the attack execution? If not, the score is higher.
The impact group reflects the direct impact of a successful exploit.
Confidentiality - refers to the ability to access and disclose sensitive information from a successful exploit. For example, an attacker steals the administrator's password. The greater the impact, the higher the score.
Integrity - refers to the ability to change information due to a successful exploit. For example, the attacker is able to modify files. The greater the impact, the higher the score.
Availability - refers to the ability to control the availability of the components as a result of a successful exploit. For example, denying connections to a networked service (e.g., web, database, email). The greater the impact, the higher the score.
The scope metric captures whether the exploit will affect resources beyond the security scope of the vulnerable component. Affecting other scopes results in a higher score.
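The metric groups above feed a published formula. The sketch below implements the CVSS v3.1 base equation, with the metric weights and the Roundup function taken from the FIRST specification:

```python
import math

# CVSS v3.1 metric weights (from the FIRST specification).
AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.2}   # Attack Vector
AC = {"L": 0.77, "H": 0.44}                        # Attack Complexity
UI = {"N": 0.85, "R": 0.62}                        # User Interaction
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}             # Confidentiality/Integrity/Availability
# Privileges Required weights depend on whether scope is changed.
PR = {"U": {"N": 0.85, "L": 0.62, "H": 0.27},
      "C": {"N": 0.85, "L": 0.68, "H": 0.5}}

def roundup(value):
    """Round up to one decimal place, per the spec's Roundup function."""
    scaled = int(round(value * 100000))
    if scaled % 10000 == 0:
        return scaled / 100000.0
    return (math.floor(scaled / 10000) + 1) / 10.0

def base_score(vector):
    """Compute the base score from a vector like 'AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H'."""
    m = dict(part.split(":") for part in vector.split("/"))
    scope_changed = m["S"] == "C"
    iss = 1 - (1 - CIA[m["C"]]) * (1 - CIA[m["I"]]) * (1 - CIA[m["A"]])
    if scope_changed:
        impact = 7.52 * (iss - 0.029) - 3.25 * (iss - 0.02) ** 15
    else:
        impact = 6.42 * iss
    exploitability = (8.22 * AV[m["AV"]] * AC[m["AC"]]
                      * PR["C" if scope_changed else "U"][m["PR"]] * UI[m["UI"]])
    if impact <= 0:
        return 0.0
    if scope_changed:
        return roundup(min(1.08 * (impact + exploitability), 10))
    return roundup(min(impact + exploitability, 10))

# A network-exploitable, no-privileges, high-impact vulnerability scores 9.8.
print(base_score("AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H"))  # 9.8
```

The same function reproduces other familiar scores, such as 6.1 for a typical reflected XSS vector (AV:N/AC:L/PR:N/UI:R/S:C/C:L/I:L/A:N).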
The National Vulnerability Database (NVD) provides CVSS base scores for almost all known vulnerabilities. You can search per CVE number and receive ongoing data feeds that include the calculated base score and the parameters of the vector.
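To illustrate what those NVD feeds look like, the sketch below extracts the base score and vector from a record shaped like an NVD API 2.0 response; the `record` here is a trimmed, hardcoded sample mirroring that JSON layout rather than a live query:

```python
# Trimmed, hypothetical sample mirroring the NVD API 2.0 response layout.
# Real data comes from the NVD CVE API (services.nvd.nist.gov).
record = {
    "vulnerabilities": [{
        "cve": {
            "id": "CVE-2021-44228",
            "metrics": {
                "cvssMetricV31": [{
                    "cvssData": {
                        "vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:C/C:H/I:H/A:H",
                        "baseScore": 10.0,
                        "baseSeverity": "CRITICAL",
                    }
                }]
            },
        }
    }]
}

def base_metrics(record):
    """Yield (CVE id, base score, vector string) tuples from an NVD record."""
    for vuln in record["vulnerabilities"]:
        cve = vuln["cve"]
        for metric in cve.get("metrics", {}).get("cvssMetricV31", []):
            data = metric["cvssData"]
            yield cve["id"], data["baseScore"], data["vectorString"]

for cve_id, score, vector in base_metrics(record):
    print(cve_id, score, vector)
```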
The basic method of prioritizing vulnerabilities is to start with the highest CVSS base score found in an organization. As we saw, the base score consists only of a vulnerability’s static characteristics that don’t change over time and aren't influenced by the environment. This method is fast and easy, but it is the least accurate and far from being the best practice.
The temporal group reflects the current state of a vulnerability - was an exploit found, is a patch published, etc.
Exploit Code Maturity - a measure of the likelihood of the vulnerability to be attacked. It is based on the current state of exploit techniques, exploit code availability, or active, “in-the-wild” exploitation. Public availability of easy-to-use exploit code increases the number of potential attackers by including those who are unskilled, thereby increasing the severity of the vulnerability.
Remediation Level - a measure of the availability of remediation for the vulnerability. Usually a patch doesn’t exist when a new vulnerability is discovered. Later, some workarounds might be published. The best remediation is an official, permanent fix. The lack of remediation actions increases the vulnerability’s score.
Report Confidence - a measure of the confidence in the existence of the vulnerability. Vulnerabilities are validated when detailed reports exist, functional reproduction/exploitation data is published, or the vendor confirms the vulnerability. The greater the confidence that the vulnerability exists, the higher the score.
As we mentioned, the temporal score reflects the current state of each vulnerability, and therefore is constantly changing. Unlike the base score, this data is not organized and offered to the public, so organizations need to gather it on their own.
Exploit Code Maturity - The existence of a widespread exploit and its ease of use can be gathered by searching exploit databases such as Exploit-DB. Metasploit can be useful for identifying the existence and maturity of an exploit for a vulnerability.
Remediation Level - Patch or new version availability can be gathered by searching the vendors’ official websites.
Report Confidence - Can be gathered by applying NLP methods to security reports and feeds, and from exploit database descriptions.
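Once gathered, these three values plug into the v3.1 temporal equation, which simply scales the base score. A minimal sketch, with the weights taken from the FIRST specification:

```python
import math

# Temporal metric weights from the CVSS v3.1 specification.
# "X" means "Not Defined" and leaves the score unchanged.
E  = {"X": 1.0, "H": 1.0, "F": 0.97, "P": 0.94, "U": 0.91}   # Exploit Code Maturity
RL = {"X": 1.0, "U": 1.0, "W": 0.97, "T": 0.96, "O": 0.95}   # Remediation Level
RC = {"X": 1.0, "C": 1.0, "R": 0.96, "U": 0.92}              # Report Confidence

def roundup(value):
    """Round up to one decimal place, per the spec's Roundup function."""
    scaled = int(round(value * 100000))
    if scaled % 10000 == 0:
        return scaled / 100000.0
    return (math.floor(scaled / 10000) + 1) / 10.0

def temporal_score(base, e="X", rl="X", rc="X"):
    """Temporal score = Roundup(Base * E * RL * RC)."""
    return roundup(base * E[e] * RL[rl] * RC[rc])

# A base-9.8 vulnerability with only proof-of-concept exploit code ("P"),
# an official fix available ("O"), and a confirmed report ("C") drops to 8.8.
print(temporal_score(9.8, e="P", rl="O", rc="C"))  # 8.8
```

Note that the temporal adjustment can only lower the base score, never raise it, which is why the lack of a patch or the release of a working exploit keeps a vulnerability closer to its full base severity.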
To review, the base score reflects the static characteristics that do not alter between different environments and assets. The purpose of the environmental metric is to base the vulnerability score not only on static characteristics but also on the organization’s configuration and to express the impact of an exploit from the organizational point of view.
The goal of these fields is to better understand the impact of an exploit on a specific asset. The terms of impact are, as in the base score, confidentiality, integrity, and availability.
Confidentiality is the impact as a result of an access and disclosure of sensitive information. Integrity reflects the impact as a result of a change of information. And availability measures the impact to an organization as a result of controlling the availability of the component.
For example, consider an asset that supports a business function where availability matters most. Prioritizing by the base score alone might rank a vulnerability with high integrity impact above one with high availability impact. Raising the availability requirement relative to confidentiality and integrity adjusts the vulnerability score to better reflect the impact an exploit would have on that specific asset.
Modified Base Metrics
The base score metric fields can be adjusted to better reflect the specifications and configurations of each asset in an organization. For example, suppose a vulnerability requires admin privileges in order to be exploited, but on a specific asset the admin account can be accessed without a password. Adjusting the “Privileges Required” field from high to none results in a higher risk score for this vulnerability on this specific asset and better reflects the actual risk of exploitation.
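To see how much that single adjustment matters, recall that the v3.1 exploitability sub-score is 8.22 × AV × AC × PR × UI. The sketch below recomputes it with only the Privileges Required weight changed (0.27 for High, 0.85 for None, scope unchanged; weights from the FIRST specification):

```python
# Exploitability sub-score for a network-reachable, low-complexity,
# no-user-interaction vulnerability, before and after modifying the
# Privileges Required metric for a passwordless-admin asset.
AV_NETWORK, AC_LOW, UI_NONE = 0.85, 0.77, 0.85
PR_HIGH, PR_NONE = 0.27, 0.85  # scope-unchanged weights

def exploitability(pr_weight):
    return 8.22 * AV_NETWORK * AC_LOW * pr_weight * UI_NONE

as_published = exploitability(PR_HIGH)   # vendor scoring: admin required
on_this_asset = exploitability(PR_NONE)  # this asset: admin needs no password

# The sub-score roughly triples, which propagates into the final score.
print(round(as_published, 2), round(on_this_asset, 2))
```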
As we said earlier, the easiest, but not the most accurate, method to prioritize is to use the CVSS base score. Different tools today help organizations gather the information needed for the temporal and environmental metrics, providing them with a risk-based methodology that is specific for the organization.
Some tools use the CVSS scores as a base metric but leverage it with new characteristics. For example:
Exploitability Likelihood/Time to Exploit - Several machine learning studies propose models to predict whether and when a new vulnerability will be exploited. The models draw on data sources such as the CVSS vector and publicity on Twitter. Examples include predicting when a vulnerability will be exploited based on Twitter activity; FastEmbed, which predicts exploitation possibility with an ensemble machine learning algorithm; and predicting exploitation of disclosed software vulnerabilities using open-source data.
Vulnerability Publicity - the timing and count of references on Twitter and in security feeds.
Exploit Weaponization - if the exploit is known to be used in cyber attacks.
Another prioritization method, published in 2019, is the Exploit Prediction Scoring System (EPSS). This data-driven framework predicts the probability that a vulnerability will be exploited in the wild within the first twelve months after public disclosure. The prediction is based on features such as the vendor, the reference count of the CVE, the attack vector (remote versus local access only), exploit maturity, and attack impact (denial of service, memory corruption).
Unlike the tools that extend the CVSS score with additional characteristics, EPSS aims to replace it with a new methodology for prioritizing mitigation efforts. Instead of prioritizing by the risk and impact of a vulnerability, it suggests prioritizing by the probability of exploitation.
All of the prioritization methods discussed above analyze vulnerability risk and characteristics from a single point of view. Even the advanced methods that integrate the environmental characteristics, such as asset configurations and impact, are limited to the effect a single vulnerability can cause.
There is no doubt that analyzing vulnerability characteristics is important and necessary, but in order to prioritize better and paint a more accurate picture, we should integrate an additional layer and observe the vulnerability from its surroundings. This change in perspective lets us identify how a single vulnerability, or a group of vulnerabilities, is reflected in a piece of software. By observing each user and how they actually use the software, we can then aggregate all the user data to better analyze the asset.
For example, a vulnerability that requires administrative privileges and user interaction is more likely to be exploited when it sits in heavily used software running under an admin user than in software running in the background under a non-admin user. Because the methods discussed above don't account for actual usage, the two applications receive the same risk score, when clearly their risk should differ.
Characteristics related to the surrounding point of view fall into three groups: how the application is used, the user running it, and the asset it is running on.
After gathering both the vulnerability and surrounding characteristics, combining the data enables us to provide the most accurate prioritization method.
Not only do we understand the risk—the attack vector, the exploitation and mitigation level that exists, and the potential impact—we also know which software is running with the vulnerability, which other vulnerabilities are aggregated to it, who is running this software and how often, the platform it’s running on, and the other softwares (that might contain vulnerabilities too) that are running on the same platform.
Using the data we collected, we can now provide more accurate insights on the risk of a vulnerability and therefore better prioritize the mitigation.
For example, we can now rank a vulnerability that requires a specific port for exploitation higher on assets where that port is actually in use, and lower where it is not. Furthermore, we can identify software with a medium vulnerability risk that is installed on many assets, is used often by many different users, and has an official patch published, and rank it higher in the mitigation actions list than it would have been without integrating the surrounding characteristics.
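As a purely hypothetical illustration (not TOPIA's actual algorithm, and with made-up weights), here is how a CVSS score might be folded together with surrounding characteristics such as install footprint, usage, and running privileges to produce a contextual priority:

```python
from dataclasses import dataclass

@dataclass
class VulnContext:
    cve_id: str
    cvss: float           # base or temporal score, 0-10
    install_ratio: float  # fraction of org assets running the software, 0-1
    usage_ratio: float    # fraction of time the software is actively used, 0-1
    runs_as_admin: bool

def priority(v):
    """Hypothetical contextual priority: CVSS scaled by surrounding factors."""
    context = 0.5 * v.install_ratio + 0.3 * v.usage_ratio + (0.2 if v.runs_as_admin else 0.0)
    return v.cvss * context

vulns = [
    # High base score, but rare, barely used, non-admin software.
    VulnContext("CVE-A", cvss=9.1, install_ratio=0.05, usage_ratio=0.1, runs_as_admin=False),
    # Medium base score, but installed nearly everywhere, heavily used, admin.
    VulnContext("CVE-B", cvss=6.5, install_ratio=0.9, usage_ratio=0.8, runs_as_admin=True),
]

ranked = sorted(vulns, key=priority, reverse=True)
print([v.cve_id for v in ranked])  # the medium-score but widely used CVE-B ranks first
```

The weights here are illustrative only; the point is that context can legitimately rank a medium-severity vulnerability above a high-severity one.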
The number of vulnerabilities keeps growing, but the security team's human capacity is limited; with finite resources, they cannot patch every one. Therefore, it is key to prioritize mitigation actions and focus on the highest risk.
Every organization can easily use the CVSS base score as a method to prioritize its mitigation actions. The downside of using an inaccurate method is the time investment in mitigating high base score vulnerabilities that cannot be exploited in the organization, while missing the pertinent mitigation of low base score vulnerabilities that pose a higher risk because of their specific, contextualized activity.
Vicarius’s TOPIA enables organizations to identify vulnerabilities and threats that pose the most harm to their environment. Analyzing the vulnerability characteristics (internal, exploitability, and impact), and integrating them with the surrounding characteristics, provides the organization with clear and accurate prioritization for mitigating risk.