In Part 2, I shared how I use OSSTMM and RAV to give each asset a risk score. It’s a solid way to get a sense of what’s exposed and where the weak spots are. But a number by itself doesn’t tell the full story, so I need to know what that score means compared to everything else. That’s where normalization and benchmarking come in.
And because risk changes all the time, I also like to run scenarios to see how the scores shift when things change. So let me walk you through how I use these techniques in real life.
Making Scores Comparable
I deal with all kinds of assets: cloud apps, on-prem servers, endpoints, you name it. So I can’t just look at raw scores. I normalize everything to make fair comparisons.
Here’s what I use:
- Z-scores: Helps me see how far an asset is from the average risk level
- Min-Max scaling: Puts everything on a 0–100 scale which is much easier to explain
- Group-based scoring: I break things into categories (cloud, app, network, etc.) and normalize within each group
This way, an 80 on a cloud app and an 80 on a printer aren’t treated as if they mean the same thing unless they’re measured in the same risk context.
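Here’s a minimal Python sketch of what those three approaches can look like in practice; the asset names and raw scores below are made up for illustration.

```python
# Minimal sketch of the three normalization approaches, assuming raw
# scores are already collected per asset (all values are illustrative).
import pandas as pd

assets = pd.DataFrame({
    "asset":     ["crm-saas", "erp-server", "laptop-fleet", "edge-fw"],
    "group":     ["cloud", "network", "endpoint", "network"],
    "raw_score": [142.0, 97.0, 61.0, 118.0],
})

# Z-score: how many standard deviations each asset sits from the average
assets["z"] = (assets["raw_score"] - assets["raw_score"].mean()) / assets["raw_score"].std()

# Min-max: rescale everything onto 0-100, which is easier to explain
lo, hi = assets["raw_score"].min(), assets["raw_score"].max()
assets["scaled_0_100"] = (assets["raw_score"] - lo) / (hi - lo) * 100

# Group-based: min-max within each category, so cloud compares to cloud
def minmax_0_100(s):
    rng = s.max() - s.min()
    return (s - s.min()) / rng * 100 if rng else pd.Series(50.0, index=s.index)

assets["group_scaled"] = assets.groupby("group")["raw_score"].transform(minmax_0_100)
print(assets)
```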
Benchmarking: Checking How We Stack Up
I don’t just want to compare assets. I want to compare teams, departments, and even companies.
- I rank departments based on their average asset risk
- I see which teams are improving (or slipping)
- I look at open datasets or research reports when available
- I try to align with frameworks like NIST CSF or CIS where I can
The idea is to get a sense of whether we’re ahead of the curve or lagging behind.
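To make the ranking part concrete, here’s a rough sketch; the department names and scores are placeholders for whatever my asset inventory actually holds.

```python
# Toy benchmarking pass over a per-asset table with department and
# normalized-score columns (all values are placeholders).
import pandas as pd

scores = pd.DataFrame({
    "department": ["finance", "finance", "marketing", "it", "it", "it"],
    "score":      [73, 65, 88, 52, 61, 70],
})

# Rank departments by average asset risk, riskiest first
ranking = (scores.groupby("department")["score"]
                 .mean()
                 .sort_values(ascending=False))
print(ranking)
# Comparing this ranking month over month shows who is improving or slipping.
```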
Playing the “What-If” Game
This part is actually kind of fun. I use scenario modeling to test how changes would affect our risk scores.
Things I like to simulate:
- What if we rolled out MFA to all users?
- What if a zero-day hit one of our major platforms?
- What if we shut down an old, risky system?
Tools I use:
- Monte Carlo simulation: I run tons of variations to get a realistic range of outcomes (refer here if you're curious how Monte Carlo works, or read the simplest Monte Carlo explanation in this post).
- FAIR framework: Helps me translate scores into potential financial impact (great for exec conversations). The FAIR framework (Factor Analysis of Information Risk) is one of the most powerful and respected models for turning cyber risk into financial terms, something a CFO or the board will understand much more clearly than technical metrics like CVSS or raw risk scores.
These simulations help me plan ahead instead of just reacting when something breaks.
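To show what a simple what-if looks like before any heavy simulation, here’s a toy sketch; the flat 0.6 “MFA reduction factor” is an invented assumption, not a measured effect.

```python
# Toy what-if comparison: re-score assets under a hypothetical change.
baseline = {"crm-saas": 88, "erp-server": 73, "legacy-ftp": 95}

def with_mfa(scores, factor=0.6):
    """Assume MFA cuts credential-driven exposure by a flat factor (invented)."""
    return {asset: round(score * factor) for asset, score in scores.items()}

def decommissioned(scores, asset):
    """Shutting a system down removes its exposure entirely."""
    return {a: s for a, s in scores.items() if a != asset}

print(with_mfa(baseline))                      # what if we rolled out MFA?
print(decommissioned(baseline, "legacy-ftp"))  # what if we shut down the old system?
```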
What Is a Monte Carlo Simulation, and How Do I Use It for Cyber Exposure Modelling?
Monte Carlo simulation is a method that uses random sampling to simulate and predict possible outcomes when there’s uncertainty. Instead of relying on a single guess or average, it runs thousands (or millions) of scenarios based on different inputs, which gives you a more realistic view of what could happen.
Think of it like this: instead of saying “we might get breached once a year,” you simulate thousands of versions of the year, with different events happening each time based on known probabilities.
When I apply Monte Carlo simulations to exposure modeling, I’m trying to answer questions like:
- How likely is it that this asset will be attacked?
- If it is, what are the chances it gets exploited?
- What’s the range of possible financial or operational impacts?
Here's how I break it down, with a short Python sketch after step 3:
1. Define Variables with Uncertainty
For example, I might set:
- Likelihood of attack: maybe between 10% and 70%, depending on threat intel
- Chance of exploit success: maybe 30% to 80%, depending on known vulnerabilities
- Cost of breach: maybe RM50,000 to RM500,000 depending on asset value and data exposure
Instead of picking one number, I define a range or probability distribution for each.
2. Run the Simulation
Using Python or tools like @RISK, I run 10,000+ simulations. Each one picks a random value from each distribution and calculates the outcome.
Each iteration gives me a possible version of reality:
- One simulation might say: “No attack this year, cost RM0”
- Another might say: “Attack happened, exploit worked, cost RM230,000”
- Another: “Attack failed due to MFA, cost RM5,000”
3. Analyze the Results
I don’t get one answer; I get a distribution of possible outcomes:
- 90% chance the cost is below RM100,000
- 10% chance it’s above RM300,000
- Median risk exposure: RM80,000
Now I can quantify uncertainty and say things like:
“There’s a 1-in-10 chance this vulnerability could cost us over RM300k so let’s fix it before it does.”
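Here’s a minimal end-to-end sketch of all three steps in Python. The input ranges mirror the examples above; the choice of uniform and triangular distributions, and the RM5,000 cost for a blocked attack, are my own illustrative assumptions.

```python
# Monte Carlo sketch of the three steps above (all parameters illustrative).
import numpy as np

rng = np.random.default_rng(seed=42)
N = 10_000  # number of simulated "years"

# Step 1: define each uncertain input as a distribution, not a point estimate
p_attack    = rng.uniform(0.10, 0.70, N)                    # likelihood of attack
p_exploit   = rng.uniform(0.30, 0.80, N)                    # chance exploit succeeds
breach_cost = rng.triangular(50_000, 150_000, 500_000, N)   # RM, if breached

# Step 2: play out each simulated year with random draws
attacked  = rng.random(N) < p_attack
exploited = attacked & (rng.random(N) < p_exploit)
costs = np.where(exploited, breach_cost,          # breach happened
        np.where(attacked, 5_000, 0.0))           # blocked attack still costs a little

# Step 3: read off the distribution instead of a single answer
print(f"Median exposure:  RM{np.median(costs):,.0f}")
print(f"90th percentile:  RM{np.percentile(costs, 90):,.0f}")
print(f"P(cost > RM300k): {np.mean(costs > 300_000):.1%}")
```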
How I Use FAIR with My Risk Scores
Let’s say I already have an Attack Surface Score (ASS) like 130 out of 200 for a cloud system. That score tells me the system is risky, but it doesn’t say how much that risk might cost. Here’s how FAIR fills in that gap.
Step 1: Map My Score to Threat Event Frequency
- A high score (e.g. 130) might mean this asset is highly exposed, suggesting threat events are likely — maybe 10 to 15 per year
- A low score (e.g. 60) suggests maybe 1 or 2 attempts annually
FAIR calls this Threat Event Frequency (TEF)
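A sketch of how that score-to-TEF mapping might look in code; the breakpoints below are my own assumptions, not something FAIR prescribes.

```python
# Hypothetical mapping from a 0-200 exposure score to a TEF range.
def score_to_tef(score: float) -> tuple[float, float]:
    """Return a (low, high) estimate of threat events per year."""
    if score >= 120:
        return (10.0, 15.0)   # highly exposed
    if score >= 90:
        return (4.0, 8.0)     # moderately exposed
    return (1.0, 2.0)         # low exposure

print(score_to_tef(130))  # (10.0, 15.0)
print(score_to_tef(60))   # (1.0, 2.0)
```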
Step 2: Estimate Loss Event Frequency
Not all attempts are successful. So I ask:
- How likely is a threat to succeed (based on exploitability, defenses, etc.)?
This gives me Loss Event Frequency (LEF):
TEF × Probability of Success = LEF (for example, 12 threat events/year × 30% success rate ≈ 3.6 loss events/year)
Step 3: Estimate Loss Magnitude
Now I look at what would happen if it succeeds:
- Would we lose data?
- Face fines?
- Need to notify customers?
- Halt operations?
These give me Primary and Secondary Loss Magnitudes.
For example:
- Data breach of 100K records = RM250,000
- Regulatory fine = RM50,000
- Reputation impact = RM80,000
Total = RM380,000 per event
Step 4: Multiply It Out
Now I can calculate Expected Annual Loss:
Risk = LEF × Loss Magnitude
Say:
- LEF = 4 successful breaches/year
- Loss Magnitude = RM380,000
Expected Loss = RM1.52 million/year
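The whole chain fits in a few lines of Python; this sketch simply replays the numbers from the walkthrough (point estimates here, though each could just as well be a distribution fed into the Monte Carlo sketch above).

```python
# FAIR chain with the walkthrough's numbers (point estimates for clarity).
tef = 10                      # threat events per year, from the score mapping
p_success = 0.4               # probability a threat event becomes a loss event
lef = tef * p_success         # Loss Event Frequency = 4 breaches/year

primary   = 250_000           # data breach of 100K records (RM)
secondary = 50_000 + 80_000   # regulatory fine + reputation impact (RM)
loss_magnitude = primary + secondary          # RM380,000 per event

expected_annual_loss = lef * loss_magnitude   # RM1,520,000
print(f"Expected Annual Loss: RM{expected_annual_loss:,.0f}")
```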
Now I’ve turned a vague “high risk” score into a clear financial statement. Here’s how this really changes the game:
| Without FAIR | With FAIR |
| --- | --- |
| “This server is high risk” | “This could cost us RM1.5M/year” |
| “We should patch this soon” | “Spending RM50K now avoids RM1.5M” |
| “Security is important” | “Security is ROI-positive” |
Visuals That Help Me Tell the Story
I’ve learned that visuals go a long way in making risk scores useful.
- Heatmaps: I use these to spot high-risk clusters across business units
- Radar charts: These help show score breakdowns (visibility, access, trust, control)
- Trends over time: I plot scores monthly to see what’s improving
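For the trend view, a few lines of matplotlib are enough; the departments and monthly scores below are placeholders.

```python
# Placeholder monthly trend plot of average normalized risk scores.
import matplotlib.pyplot as plt

months = ["Jan", "Feb", "Mar", "Apr", "May"]
series = {
    "finance":   [73, 70, 68, 66, 65],   # improving
    "marketing": [80, 84, 86, 88, 88],   # slipping
}

for dept, scores in series.items():
    plt.plot(months, scores, marker="o", label=dept)

plt.ylabel("Average normalized risk score")
plt.title("Risk score trend by department")
plt.legend()
plt.show()
```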
Teaser for my upcoming VoronScout project
Real-Life Example: Picking Priorities
Let’s say I’ve got two systems:
- A marketing SaaS platform scoring 88/100
- A finance on-prem server scoring 73/100
Finance sounds more important, right? But in this case, the SaaS platform is wide open and used by lots of external users. That higher score tells me it’s more exposed right now, so I tackle that one first.
Why This Works for Me
With normalization, benchmarking, and modeling, I’ve gone from looking at raw numbers to making smarter, faster calls:
- I can defend my priorities
- I can show leadership what’s really at stake
- I can stay ahead of risk instead of always chasing it
What’s Next
Stay tuned for Part 4 where I will show how I actually act on all this by fixing what’s broken, deciding when to accept risk, and keeping everything up to date. It’s where the numbers finally lead to action.