Turn CVSS + EPSS into a risk score that actually matches your environment.
Vulnerability Risk Model (VRM) 3.0 is a context-aware approach to vulnerability management. It blends technical severity, exploit probability, and enterprise business context to reduce noise, accelerate remediation, and focus effort on the vulnerabilities that matter most.
At the core of VRM 3.0 is a simple idea: the same CVE on two different systems does not represent the same risk. We express that with a model of the form Criticality = Base × Risk + Context Modifier, where Base is CVSS, Risk is EPSS (or equivalent), and Context captures data sensitivity, system criticality, OS lifecycle, vulnerability density, and age.
Why traditional severity models fall short
Most vulnerability programs still rely on static scores like CVSS or broad assumptions such as “our EDR will catch it.” Those signals are important, but on their own they don’t answer the questions that matter most to the business: How likely is this to be exploited? and What happens to our data, operations, and compliance posture if it is?
CVSS: impact without likelihood
CVSS describes how technically severe a vulnerability can be under generic conditions. It does not know which systems in your environment are internet-facing, store PHI or PII, or sit in the middle of critical business workflows. Thousands of “critical” findings can easily hide the handful that truly threaten the business.
EPSS: likelihood without business impact
EPSS brings data-driven exploit probability, answering, “How likely is this CVE to be exploited in the next 30 days?” But it is still generic: it doesn’t know where the vulnerable asset lives, what data it holds, or what would break if it goes offline.
Risk is context: Likelihood × Impact
Enterprise risk frameworks (ISO 31000, NIST, etc.) treat risk as a function of likelihood × impact. VRM 3.0 applies the same principle to vulnerability management by combining CVSS (impact), EPSS (likelihood), and asset context into a single, dynamic risk score.
No model is perfect: the importance of intelligent remapping
VRM 3.0 dramatically reduces noise and aligns prioritization with business risk, but no algorithmic model is flawless. Outliers will always exist. A mature vulnerability management program must include a review process to handle edge cases where the model doesn't capture the full picture.
When models miss critical context
Even the best-tuned model can't predict every scenario. Consider vulnerabilities like MS08-067 or Log4Shell: widespread, actively exploited, and devastating in impact. Yet EPSS might initially score them at 1-2% exploitation likelihood because threat intelligence lags behind reality.
Your internal security team knows your threat landscape. When a vulnerability is clearly high-risk despite a low VRM score, you need the ability to remap it: manually adjust the final criticality score to reflect reality.
What is remapping?
Remapping is the manual alteration of a vulnerability’s criticality after analysis by your internal security team. It overrides the calculated score when the model does not capture the full picture. VRM 3.0 supports two modes:
- Full override: Set a fixed criticality that replaces the VRM calculation entirely. Context modifiers are not applied. Use this when the team has high confidence in a specific risk level regardless of system context.
- Partial override: Set a new base criticality but continue to apply context modifiers on a system-by-system basis. Use this when the base score is wrong but business context still matters.
Document every override with rationale. Review remapped vulnerabilities periodically and feed lessons learned back into the model. The more you use it, the better it reflects how your organization actually sees and manages risk.
Vulnerabilities without CVEs
Not every finding from your security tools carries a CVE ID. Misconfigurations, internal application flaws, and some vendor-specific advisories may lack both CVSS and EPSS data. When either input is missing, VRM's base calculation (CVSS × EPSS) cannot run.
In these cases, fall back to the criticality assigned by the tool that discovered the finding. Because the EPSS multiplier is absent, these scores are typically higher than VRM-calculated results for equivalent technical severity. This is expected: without exploit-probability data to refine the score, the model errs on the side of caution.
Handling missing data
When CVSS or EPSS data is unavailable:
- Use tool-native severity as the starting criticality. Most scanners and DAST/SAST tools assign their own severity levels.
- Apply context modifiers from Stage 1 on top of the tool severity where possible. Business context still matters even without a CVE.
- Flag for internal analysis. These findings will often need static remapping (full or partial override) after review by your vulnerability management team.
As more data becomes available (a CVE is assigned, EPSS catches up), transition the finding to full VRM scoring.
VRM makes the impossible manageable
In 2025 alone, over 48,000 new CVEs were issued. No team can patch everything. Traditional approaches, such as treating all "critical" findings equally or relying solely on vendor severity, create an unwinnable race.
VRM 3.0 doesn't promise perfection. It promises consumability: a model that narrows the field, highlights what truly matters, and empowers teams to focus their effort where it reduces real risk. With intelligent remapping for outliers, you get the best of both worlds: algorithmic efficiency and human judgment.
The Vulnerability Risk Model (VRM) 3.0
VRM 3.0 doesn’t replace CVSS or EPSS: it orchestrates them. It treats each as an input to a broader risk equation and layers on business context so that every vulnerability on every system gets a criticality score grounded in how your organization actually operates.
VRM 3.0 expresses risk as:
Criticality = Base × Risk + Context Modifier
- Base (CVSS): The technical severity of the vulnerability, typically the CVSS v3.x Base score (0-10). Where CVSS isn't available, a compatible base score (for example, from OWASP risk models) can be used instead.
- Risk (EPSS): The probability that the vulnerability will be exploited in the next 30 days, usually from EPSS (0-100%). Other threat-informed sources can be substituted when EPSS is not available.
- Context Modifier: A set of additive modifiers that reflect how this specific system is used. A flat +1 per factor is suggested as a starting point because it is simple to implement in most vulnerability management platforms, but organizations can tailor these values to their own risk tolerance. Common factors include:
  - Presence of PHI, PII, or other sensitive data
  - System business criticality and dependencies
  - OS end-of-life status
  - Vulnerability density on the asset
  - Age of the vulnerability in your environment
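Putting the pieces together, the equation can be sketched in a few lines of Python; the function name and example numbers are illustrative, and the modifiers use the suggested +1 defaults:

```python
def vrm_criticality(cvss_base: float, epss: float, context_modifier: float) -> float:
    """VRM 3.0: Criticality = Base (CVSS) x Risk (EPSS) + Context Modifier."""
    return cvss_base * epss + context_modifier

# Same CVE on two different systems: CVSS 9.8, EPSS 0.40 (40%).
internal_dev = vrm_criticality(9.8, 0.40, 0.0)   # no context factors apply
phi_external = vrm_criticality(9.8, 0.40, 2.0)   # +1 sensitive data, +1 external

print(round(internal_dev, 2))   # 3.92
print(round(phi_external, 2))   # 5.92
```

The same CVE lands two full points apart, which is exactly the behavior the model is built to produce.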
The result is a dynamic score that changes as your assets, data, and threat landscape evolve; not a static label assigned the day the CVE was published.
VRM 3.0 rolls out in three stages: Stage 0 (Base Calc), Stage 1 (+ Context), and Stage 2 (+ Age).
Stage 0: CVSS × EPSS = Base Risk
The foundation of VRM 3.0. Simply multiply CVSS Base score by EPSS likelihood to get a risk-informed criticality score.
- Easy to implement: Most vulnerability management platforms already have CVSS and EPSS data available as attributes.
- No CMDB required: Works with data you already have from your vulnerability scanners.
- Immediate value: Start prioritizing based on likelihood, not just severity.
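A Stage 0 re-ranking can be sketched like this; the CVE entries are made-up examples chosen to show how EPSS reorders a CVSS-only list:

```python
# Stage 0 sketch: re-rank findings by CVSS x EPSS instead of CVSS alone.
findings = [
    {"cve": "CVE-A", "cvss": 9.8, "epss": 0.02},   # critical, rarely exploited
    {"cve": "CVE-B", "cvss": 7.5, "epss": 0.90},   # high, actively exploited
    {"cve": "CVE-C", "cvss": 9.1, "epss": 0.45},
]

for f in findings:
    f["base_risk"] = f["cvss"] * f["epss"]

ranked = sorted(findings, key=lambda f: f["base_risk"], reverse=True)
print([f["cve"] for f in ranked])  # ['CVE-B', 'CVE-C', 'CVE-A']
```

Under CVSS alone, CVE-A would be patched first; weighted by likelihood, the actively exploited 7.5 jumps to the top.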
Stage 1: Context Modifiers
Layer in business context from your CMDB for each factor that applies. You don't need a perfect CMDB; use the data you have. Stage 1 builds on Stage 0 by adding business context.
Suggested starting point: add +1 per factor. This is the easiest approach to implement in most enterprise vulnerability management tools. Organizations can adjust these values to fit their environment, for example by raising externally facing systems a full criticality level instead of a flat +1.
- +1 (suggested) if system processes PII, PHI, or sensitive data
- +1 (suggested) if system is externally facing
- +1 (suggested) if system runs on End-of-Life OS
- +1 (suggested) if system has high vulnerability density (5 or more open findings with CVSS Base ≥ 5.0)
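The suggested +1-per-factor scheme reduces to a simple sum over CMDB attributes; the key names below are illustrative, not a required schema:

```python
def context_modifier(asset: dict) -> float:
    """Sum the suggested +1-per-factor context modifiers for an asset."""
    mod = 0.0
    if asset.get("has_sensitive_data"):            # PII, PHI, etc.
        mod += 1.0
    if asset.get("externally_facing"):
        mod += 1.0
    if asset.get("eol_os"):                        # end-of-life OS
        mod += 1.0
    if asset.get("open_cvss5_findings", 0) >= 5:   # high vulnerability density
        mod += 1.0
    return mod

web = {"has_sensitive_data": True, "externally_facing": True,
       "eol_os": False, "open_cvss5_findings": 2}
print(context_modifier(web))  # 2.0
```

Missing CMDB attributes simply contribute nothing, which matches the "use the data you have" guidance above.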
Transparency matters: Document why scores differ between systems. This incentivizes system owners to keep your CMDB accurate.
Stage 2: Age Modifiers
Add age of CVE as a modifier to create healthy pressure against "forever findings." Stage 2 builds on Stage 1 by adding time-based urgency.
- Start: +1 for every 10 years the CVE has existed in your environment
- Goal: Incrementally work toward +1 for every year
- Result: Long-lived vulnerabilities naturally rise in priority, preventing teams from ignoring persistent risk
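The age modifier is just a floor division; a sketch with a tunable cadence lets you start at 10 years per point and tighten toward the 1-year goal:

```python
def age_modifier(age_years: float, years_per_point: float = 10.0) -> float:
    """Stage 2: +1 per `years_per_point` years the vulnerability has
    existed in your environment. Start loose (10), tighten toward 1."""
    return age_years // years_per_point

print(age_modifier(22))                        # 2.0 (starting cadence)
print(age_modifier(22, years_per_point=1.0))   # 22.0 (mature goal)
```

At the mature cadence, a 22-year-old finding is effectively unignorable, which is the intended pressure.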
Why VRM 3.0 adds two levels beyond "Critical"
In most organizations, "Critical" means wake someone up on a Saturday. IT teams hear "Critical" and think incident response, all-hands-on-deck, drop everything. But traditional vulnerability scoring uses "Critical" for findings that carry a 15-day SLA, hardly an emergency. That disconnect erodes trust: when everything is "critical," nothing is.
VRM 3.0 solves this by expanding the scale with two additional levels above Critical. This lets organizations map vulnerability urgency directly to their existing business-impact tiers without redefining what "Critical" means to IT operations.
Low (0 - 3.99)
Routine remediation. Address within standard patch cycles. Monitor for changes in the threat landscape that may elevate priority.
Suggested SLA: 90 days
Medium (4.0 - 6.99)
Elevated priority. Schedule remediation within the current patch window. Evaluate compensating controls if immediate patching is not feasible.
Suggested SLA: 60 days
High (7.0 - 8.99)
Urgent remediation required. Prioritize above routine work. Implement compensating controls immediately if patching requires an extended timeline.
Suggested SLA: 30 days
Critical (9.0 - 10.0)
Immediate action required. Escalate to system owners and security leadership. Deploy patches or compensating controls within 15 days. Document any exceptions.
Suggested SLA: 15 days
Patch ASAP (10.01 - 12.0)
Elevated critical finding. Actively coordinate remediation across teams. Implement mitigations within 2 business days while permanent fixes are deployed. Executive notification recommended.
Suggested SLA: Expedited
Patch NOW (12.01+)
Emergency response. Initiate incident response procedures. All available resources should be directed toward immediate remediation. Out-of-band patching authorized. Executive and stakeholder notification mandatory.
Suggested SLA: Emergency
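The band boundaries above can be captured in a small lookup; this is a sketch of the mapping, with the SLA notes carried as comments:

```python
# The VRM 3.0 scale, extended above 10 with two emergency tiers.
BANDS = [
    (0.0,   "Low"),        # 0 - 3.99, suggested SLA 90 days
    (4.0,   "Medium"),     # 4.0 - 6.99, 60 days
    (7.0,   "High"),       # 7.0 - 8.99, 30 days
    (9.0,   "Critical"),   # 9.0 - 10.0, 15 days
    (10.01, "Patch ASAP"), # 10.01 - 12.0, expedited
    (12.01, "Patch NOW"),  # 12.01+, emergency
]

def band(score: float) -> str:
    """Return the highest band whose floor the score meets or exceeds."""
    label = BANDS[0][1]
    for floor, name in BANDS:
        if score >= floor:
            label = name
    return label

print(band(5.92))   # Medium
print(band(11.0))   # Patch ASAP
print(band(12.5))   # Patch NOW
```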
Fluctuations: risk that moves with your business
Risk is not static. A vulnerability that looked harmless last quarter may become critical today because:
- An application starts processing PHI/PII due to a new feature or AI initiative.
- An OS transitions into end-of-life and loses vendor support.
- Threat actors begin actively targeting a previously obscure vulnerability, driving EPSS sharply upward.
VRM 3.0 embraces this reality. As context and threat signals change, so does the criticality score, keeping your prioritization aligned with the current risk landscape.
Rethinking SLAs for dynamic risk
Static SLAs assume severity never changes. VRM 3.0 scores fluctuate, and so should your SLA clock. Instead of fixed deadlines, let the clock tick faster at higher criticality:
- Critical (1.0x): Clock runs at full speed: 1 calendar day = 1 SLA day
- High (0.5x): Clock runs at half speed: 2 calendar days = 1 SLA day
- Medium (0.25x): Clock runs at quarter speed: 4 calendar days = 1 SLA day
- Low (0.1x): Clock barely moves: 10 calendar days = 1 SLA day
Set a universal threshold (e.g., 20 SLA days). A vulnerability that fluctuates between Medium and Critical accumulates time proportionally: there is no retroactive punishment, and no benefit to waiting for scores to drop.
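The time-weighted clock can be sketched as a per-day sum; the multipliers come from the list above, and the 20-SLA-day threshold is the example value:

```python
# Time-weighted SLA clock: each calendar day adds a fraction of an SLA day
# determined by that day's criticality band.
CLOCK_RATE = {"Critical": 1.0, "High": 0.5, "Medium": 0.25, "Low": 0.1}
SLA_THRESHOLD = 20.0  # universal threshold in SLA days (example value)

def sla_days_elapsed(daily_bands: list[str]) -> float:
    """Sum the per-day clock rates over a finding's band history."""
    return sum(CLOCK_RATE[b] for b in daily_bands)

# 10 calendar days at Medium, then EPSS spikes: 15 days at Critical.
history = ["Medium"] * 10 + ["Critical"] * 15
elapsed = sla_days_elapsed(history)
print(elapsed)                   # 17.5
print(elapsed >= SLA_THRESHOLD)  # False (2.5 SLA days remaining)
```

The ten Medium days cost only 2.5 SLA days, so the score spike does not retroactively blow the deadline, but the clock runs at full speed from the spike onward.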
Try it: Use the SLA Clock Calculator to model both time-weighted clocks and grace-period approaches for your team. The VRM Calculator also shows traditional static SLA benchmarks for comparison.
VRM 3.0 encourages healthy vulnerability management habits
VRM 3.0 is not just better math; it’s a way to build healthier, more sustainable behaviors in security, IT, and application teams. The model is intentionally designed to reward the practices you want more of: good inventory, clean architectures, and prompt closure of long-lived risk.
Less noise, more meaningful action
By focusing on likelihood, impact, and context, VRM 3.0 helps teams move away from "patch everything" and toward reducing meaningful risk.
- Fewer “critical” findings that aren’t truly urgent
- Clearer guidance on what to fix first on each asset
- Stronger alignment with business outcomes like data protection, operational continuity, and compliance (HIPAA, PCI DSS, FedRAMP, and more)
System owners are no longer overwhelmed by static dashboards; they see a prioritized roadmap of what actually matters.
Good habits, built into the model
The context modifiers in VRM 3.0 and beyond are chosen to nudge organizations toward mature behaviors:
- Data classification: Knowing where PHI, PII, and other sensitive data lives becomes directly relevant to vulnerability risk.
- Asset and application inventory: You can’t assign meaningful criticality without knowing what a system does and who depends on it.
- Lifecycle management: End-of-life OS and high vulnerability density increase risk, reinforcing the value of modernization and rationalization.
- Timely remediation: Age-based modifiers in Stage 2 make it increasingly expensive to ignore issues that linger year after year.
Over time, these incentives help teams work smarter, not just harder, by spending energy where it meaningfully reduces risk.
Ready to implement VRM 3.0?
Start with Stage 0 today. Most vulnerability management platforms already have CVSS and EPSS data available; you just need to multiply them together. Stage 1 adds suggested context modifiers from your CMDB (use what you have, and adjust the values to fit your environment), and Stage 2 introduces age-based modifiers for mature teams.
Follow-on guidance for Stage 1 (context modifiers) and Stage 2 (age modifiers) will be published as organizations adopt and refine their implementations.