Measuring Security Champion program effectiveness
"How do we know the security program is working?" This question comes from leadership, and it deserves a real answer. Vague responses like "we're more secure" don't cut it. You need concrete metrics that show progress, identify problems, and justify continued investment.
This chapter covers what to measure, how to measure it, building dashboards that tell a story, and reporting to leadership in a way that gets attention and support.
Why measurement matters
Without metrics, you're flying blind:
For you:
- Can't tell if initiatives are working
- Don't know where to focus effort
- Can't prioritize effectively
- No way to show progress
For leadership:
- Can't justify security spending
- Can't compare to industry benchmarks
- No visibility into risk posture
- Security feels like a black box
For the team:
- No goals to work toward
- No recognition for improvement
- No feedback on what matters
- Security feels thankless
The right metrics make security visible, measurable, and manageable.
What makes a good security metric
Not all metrics are useful. Good security metrics are:
| Characteristic | Why it matters | Example |
|---|---|---|
| Actionable | You can do something about it | "% of systems patched within SLA" (can improve) vs. "number of attacks attempted" (can't control) |
| Measurable | You can actually count it | "Mean time to remediate critical vulns" (measurable) vs. "security culture" (vague) |
| Relevant | Connects to real risk | "% of employees with MFA" (protects against account compromise) vs. "security training hours" (activity, not outcome) |
| Comparable | Can track over time | Same definition each quarter allows trend analysis |
| Understandable | Leadership can grasp it | "3 critical vulnerabilities in production" vs. "CVSS aggregate score of 7.2" |
Vanity metrics vs. real metrics
Vanity metrics look good but don't drive decisions:
- Number of security tools deployed
- Total phishing emails blocked
- Hours of security training delivered
- Number of policies written
Real metrics show actual security posture:
- Percentage of critical vulnerabilities fixed within SLA
- Phishing simulation click rate (and trend)
- Time to detect and respond to incidents
- Percentage of systems with current patches
Leading vs. lagging indicators
This distinction matters more than most Security Champions realize.
Lagging indicators tell you what already happened:
- Number of security incidents
- Breaches detected
- Vulnerabilities found in production
- Compliance audit failures
Leading indicators predict future problems:
- Percentage of projects with threat modeling
- Security test coverage in CI/CD
- Developers trained in secure coding
- Third-party risk assessments completed
- Security requirements in new project specs
| Type | Example | Why it matters |
|---|---|---|
| Lagging | 5 incidents this quarter | Shows past problems, but too late to prevent them |
| Leading | 80% of PRs have security review | Predicts fewer incidents next quarter |
| Lagging | 12 vulns found in production | Damage already possible |
| Leading | 95% of vulns caught in CI/CD | Problems stopped before production |
Expert insight: Most organizations measure only lagging indicators. They're easy to count but tell you about yesterday. Leading indicators require more effort to define and track, but they're the ones that actually drive improvement.
A balanced scorecard includes both:
- Lagging: to measure outcomes and prove impact
- Leading: to predict trends and course-correct early
Core metrics for Security Champions
Start with these foundational metrics. They're measurable, actionable, and meaningful to leadership.
1. Vulnerability management metrics
| Metric | What it measures | Target | How to collect |
|---|---|---|---|
| Open critical/high vulnerabilities | Current risk exposure | 0 critical, fewer than 10 high | Vulnerability scanner, issue tracker |
| Mean time to remediate (MTTR) | How fast you fix issues | Under 7 days critical, under 30 days high | Track from discovery to closure |
| Patch compliance rate | Systems up to date | >95% within SLA | Patch management tool |
| Vulnerability density | Issues per codebase size | Decreasing trend | SAST tools (findings per KLOC) |
| Age of oldest vulnerability | Neglected issues | Under 30 days for critical | Issue tracker query |
How to calculate MTTR:
MTTR = Sum of (Closure Date - Discovery Date) for all vulns / Number of vulns
Example:
- Vuln A: Discovered Jan 1, Fixed Jan 5 = 4 days
- Vuln B: Discovered Jan 3, Fixed Jan 10 = 7 days
- Vuln C: Discovered Jan 5, Fixed Jan 8 = 3 days
MTTR = (4 + 7 + 3) / 3 = 4.67 days
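The MTTR calculation above can be sketched as a small script. This is a minimal illustration using the same three hypothetical vulns; the `mean_time_to_remediate` function name and the date values are assumptions for the example, not part of any standard tool.

```python
from datetime import date

def mean_time_to_remediate(vulns):
    """Average days from discovery to closure across closed vulns."""
    total_days = sum((closed - discovered).days for discovered, closed in vulns)
    return total_days / len(vulns)

# The worked example above (year assumed for illustration):
vulns = [
    (date(2025, 1, 1), date(2025, 1, 5)),   # Vuln A: 4 days
    (date(2025, 1, 3), date(2025, 1, 10)),  # Vuln B: 7 days
    (date(2025, 1, 5), date(2025, 1, 8)),   # Vuln C: 3 days
]
print(round(mean_time_to_remediate(vulns), 2))  # 4.67
```

In practice you would pull discovery and closure dates from your issue tracker rather than hard-coding them.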
2. Access and authentication metrics
| Metric | What it measures | Target | How to collect |
|---|---|---|---|
| MFA adoption rate | Account protection | 100% for critical systems | Identity provider admin console |
| Accounts with MFA | Overall MFA coverage | >95% all accounts | IdP dashboard |
| Privileged account count | Admin sprawl | Minimum necessary | IAM audit |
| Orphaned accounts | Access cleanup | 0 | Regular access reviews |
| Failed login attempts | Potential attacks | Baseline + alerts for spikes | Auth logs |
MFA tracking dashboard example:
MFA Adoption by Service — March 2025
Service Users With MFA Rate
─────────────────────────────────────────────
Google Workspace 45 45 100%
GitHub 32 32 100%
AWS Console 12 12 100%
Slack 45 43 96%
Salesforce 18 15 83% ← Action needed
Jira 38 38 100%
─────────────────────────────────────────────
Total 190 185 97%
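A per-service report like the one above is easy to generate once you export user counts from each admin console. This sketch (function name and the two sample rows are illustrative, drawn from the table) flags any service below target:

```python
def mfa_report(services, target=0.95):
    """Return (service, rate, needs_action) rows plus the overall adoption rate."""
    rows = []
    for name, users, with_mfa in services:
        rate = with_mfa / users
        rows.append((name, rate, rate < target))
    total_users = sum(u for _, u, _ in services)
    total_mfa = sum(m for _, _, m in services)
    return rows, total_mfa / total_users

# Hypothetical numbers taken from the dashboard example above
services = [
    ("Google Workspace", 45, 45),
    ("Salesforce", 18, 15),
]
rows, overall = mfa_report(services)
for name, rate, action in rows:
    flag = " <- Action needed" if action else ""
    print(f"{name}: {rate:.0%}{flag}")
```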
3. Incident metrics
| Metric | What it measures | Target | How to collect |
|---|---|---|---|
| Number of incidents | Volume of problems | Trend down over time | Incident tracker |
| Mean time to detect (MTTD) | How fast you notice | Under 1 hour for critical | Incident logs |
| Mean time to respond | How fast you act | Under 4 hours for critical | Incident logs |
| Mean time to resolve | Full remediation | Under 24 hours for critical | Incident logs |
| Incidents by severity | Risk distribution | Fewer critical over time | Incident tracker |
| Incidents by category | Pattern identification | Identifies focus areas | Incident tracker |
Incident trends example:
Incidents by Severity — 2024
Q1 Q2 Q3 Q4 Trend
── ── ── ── ─────
Critical 2 1 1 0 ↓
High 5 4 3 2 ↓
Medium 8 10 7 5 ↓
Low 12 15 11 8 ↓
── ── ── ──
Total 27 30 22 15 ↓
Notes:
- Q2 spike due to Log4j-related incidents
- Q4 improvement reflects patch automation
4. Training and awareness metrics
| Metric | What it measures | Target | How to collect |
|---|---|---|---|
| Training completion rate | Compliance | 100% for mandatory | LMS or tracking sheet |
| Phishing simulation click rate | Awareness effectiveness | Under 5%, trending down | Phishing simulation tool |
| Phishing report rate | Employee vigilance | >50% of simulated phishes | Phishing tool |
| Security questions asked | Engagement | Trend up | Slack channel analytics |
| Policy acknowledgment rate | Policy awareness | 100% | HR system or form |
Phishing simulation tracking:
Phishing Simulation Results — 2024
Campaign Sent Clicked Rate Reported Report Rate
────────────────────────────────────────────────────────────────────
Jan: Fake DocuSign 45 8 18% 12 27%
Mar: IT Password 45 5 11% 18 40%
May: Delivery 45 4 9% 22 49%
Jul: LinkedIn 45 3 7% 25 56%
Sep: Invoice 45 2 4% 28 62%
Nov: Holiday 45 2 4% 30 67%
────────────────────────────────────────────────────────────────────
Trend ↓ 14 pts ↑ 40 pts
Goal: Click rate under 5%, Report rate over 50% ✓
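The trend row and goal check in the table above reduce to a couple of lines of arithmetic. A sketch, using the campaign rates from the table (the `trend_points` helper is illustrative):

```python
def trend_points(rates):
    """Percentage-point change from first to last campaign."""
    return rates[-1] - rates[0]

click_rates = [18, 11, 9, 7, 4, 4]      # % clicked, per campaign above
report_rates = [27, 40, 49, 56, 62, 67]  # % reported, per campaign above

print(trend_points(click_rates))   # -14
print(trend_points(report_rates))  # 40

# Goal check: click rate under 5%, report rate over 50%
print(click_rates[-1] < 5 and report_rates[-1] > 50)  # True
```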
5. Security hygiene metrics
| Metric | What it measures | Target | How to collect |
|---|---|---|---|
| Endpoint protection coverage | Device security | 100% | EDR/AV dashboard |
| Encryption at rest | Data protection | 100% for sensitive data | Cloud console audit |
| Backup success rate | Recovery capability | >99% | Backup system logs |
| Secret scanning alerts | Exposed credentials | 0 unaddressed | GitHub/GitLab security tab |
| Dependency vulnerabilities | Supply chain risk | 0 critical, under 10 high | Dependabot, Snyk |
Building a security dashboard
A dashboard turns raw metrics into a story. The right dashboard answers "how are we doing?" at a glance.
Dashboard design principles
1. Audience-appropriate:
- Executive dashboard: 5-7 high-level metrics
- Security team dashboard: Detailed operational metrics
- Developer dashboard: Code and dependency metrics
2. Trend-focused:
- Show change over time, not just current state
- Use arrows, colors, or sparklines for trends
- Compare to previous period
3. Actionable:
- Link to details for investigation
- Highlight what needs attention
- Make next steps obvious
4. Honest:
- Don't hide bad news
- Show context for anomalies
- Avoid cherry-picking good metrics
Executive dashboard template
Security Dashboard — March 2025 · Overall status: GOOD (8/10 metrics on target)
Critical metrics:
| Metric | Value | Target | Status |
|---|---|---|---|
| Open critical vulns | 0 | 0 | ✓ OK |
| MFA adoption | 97% | 95% | ✓ OK |
| Phishing click rate | 4% | under 5% | ✓ OK |
| MTTR critical | 2.5d | under 7d | ✓ OK |
| Salesforce MFA | 83% | 100% | ⚠ Action needed |
| Training completion | 88% | 100% | ⚠ Action needed |
Trends vs. last quarter: Security incidents 15 → 12 (−20%) · Vulnerabilities fixed 45 → 52 (+16%) · Mean time to remediate 8d → 5d (−38%)
Key accomplishments: Deployed automated dependency scanning · Achieved 100% MFA for all admin accounts · Reduced critical vuln MTTR from 8 to 2.5 days
Next quarter focus: Complete Salesforce MFA rollout · Achieve 100% training compliance · Implement container scanning in CI/CD
Dashboard tools
| Tool | Best for | Cost | Effort |
|---|---|---|---|
| Google Sheets/Excel | Quick start, simple metrics | Free | Low |
| Notion | Visual, easy sharing | Free tier | Low |
| Google Data Studio | Multiple data sources | Free | Medium |
| Grafana | Technical metrics, time series | Free (self-hosted) | Medium |
| Datadog | Integration with monitoring | Paid | Medium |
| Splunk | Enterprise, complex queries | Paid | High |
| Custom app | Specific needs | Dev time | High |
For small companies: Start with a spreadsheet. Seriously. A well-maintained Google Sheet beats an unmaintained Grafana instance.
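Even a spreadsheet-level dashboard needs a consistent status rule. One way to sketch the traffic-light logic (the `status` function and the sample metric rows are assumptions for illustration, mirroring the executive dashboard example):

```python
def status(value, target, higher_is_better=True):
    """Traffic-light status for one metric against its target."""
    ok = value >= target if higher_is_better else value <= target
    return "OK" if ok else "Action needed"

# (name, current value, target, higher_is_better)
metrics = [
    ("Open critical vulns", 0, 0, False),
    ("MFA adoption (%)", 97, 95, True),
    ("Phishing click rate (%)", 4, 5, False),
    ("Training completion (%)", 88, 100, True),
]
for name, value, target, hib in metrics:
    print(f"{name}: {value} (target {target}) -> {status(value, target, hib)}")
```

Encoding the rule once, instead of eyeballing each cell, keeps the "OK / Action needed" column consistent from month to month.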
Data collection automation
Manual data collection doesn't scale. Automate where possible:
# Example: Weekly metrics collection workflow (GitHub Actions)
name: Weekly Security Metrics
on:
  schedule:
    - cron: '0 9 * * 1'  # Every Monday at 9 AM UTC
jobs:
  collect-metrics:
    runs-on: ubuntu-latest
    env:
      GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
    steps:
      - name: Get open code scanning alerts
        run: |
          gh api "repos/$ORG/$REPO/code-scanning/alerts?state=open" --jq 'length'
      - name: Get open Dependabot alerts
        run: |
          gh api repos/$ORG/$REPO/dependabot/alerts --jq '[.[] | select(.state=="open")] | length'
      - name: Get MFA status
        run: |
          # Placeholder: query your IdP's API (e.g., Google Workspace Admin SDK)
          echo "MFA status collection not yet automated"
      - name: Post to Slack
        run: |
          curl -X POST -H 'Content-Type: application/json' \
            -d '{"text": "Weekly Security Metrics: ..."}' "$SLACK_WEBHOOK"
Reporting to leadership
Metrics are useless if leadership doesn't see them. Regular reporting keeps security visible and demonstrates value.
Report frequency
| Report | Frequency | Audience | Content |
|---|---|---|---|
| Dashboard | Real-time | Security team | All metrics, detailed |
| Weekly summary | Weekly | Security + IT leads | Key changes, alerts |
| Monthly report | Monthly | Department heads | Trends, focus areas |
| Quarterly review | Quarterly | Executive team | Strategic view, achievements |
| Annual report | Annually | Board (if applicable) | Year in review, roadmap |
Quarterly report template
# Security Quarterly Report — Q1 2025
## Executive Summary
Q1 was a strong quarter for security. We closed 52 vulnerabilities,
reduced our incident count by 20%, and achieved our MFA target for
critical systems. Two areas need attention: Salesforce MFA rollout
and training completion.
## Key Metrics
### Security Posture
| Metric | Q4 2024 | Q1 2025 | Target | Status |
|--------|---------|---------|--------|--------|
| Open Critical Vulns | 2 | 0 | 0 | Met |
| Open High Vulns | 12 | 8 | under 10 | Met |
| MTTR (Critical) | 8 days | 2.5 days | under 7 days | Met |
| MFA Adoption | 94% | 97% | 95% | Met |
| Phishing Click Rate | 7% | 4% | under 5% | Met |
### Trend Analysis
[Include sparklines or simple charts showing 4-quarter trends]
## Accomplishments
### Completed initiatives
1. **Automated dependency scanning** — Now running on all repositories,
catching vulnerable packages before deployment
2. **Admin MFA enforcement** — 100% of admin accounts now require
hardware security keys
3. **Incident response improvement** — Reduced critical incident
MTTR from 8 to 2.5 days through updated runbooks
### Metrics improvements
- Closed 52 vulnerabilities (up from 45 last quarter)
- Security incidents down 20% (15 vs. 19)
- Phishing resilience improved: 4% click rate (down from 7%)
## Incidents
### Summary
- Total incidents: 15 (down from 19)
- Critical: 0 | High: 2 | Medium: 5 | Low: 8
### Notable incidents
1. **Compromised contractor account (High)** — February 12
- Detected within 2 hours, contained within 4 hours
- No customer data accessed
- Root cause: Shared credentials; fixed by enforcing individual accounts
2. **Phishing campaign targeting finance (Medium)** — March 5
- 3 employees clicked, 8 reported
- No credential compromise
- Response: Additional training for finance team
## Risk Areas
### Issues requiring attention
1. **Salesforce MFA (Medium priority)**
- Current: 83% adoption
- Blocker: Some users on legacy mobile devices
- Plan: Device upgrade for 3 remaining users by April 15
2. **Training completion (Medium priority)**
- Current: 88% complete
- Blocker: New hires in onboarding queue
- Plan: Complete by April 30
### Emerging risks
- Increased targeting of our industry (per threat intel)
- New dependency vulnerabilities in React ecosystem
## Resource Utilization
| Initiative | Planned hours | Actual hours | Notes |
|------------|--------------|--------------|-------|
| Vulnerability remediation | 80 | 75 | On track |
| Security training | 20 | 25 | Additional sessions requested |
| Incident response | 40 | 35 | Fewer incidents |
| Policy updates | 20 | 15 | Completed early |
| Tool evaluation | 20 | 30 | Extended eval for SIEM |
## Next Quarter Plan
### Q2 2025 Priorities
1. **Complete MFA rollout** — 100% coverage including Salesforce
2. **Container security** — Implement image scanning in CI/CD
3. **Security Champions expansion** — Train 2 additional champions
4. **Tabletop exercise** — Quarterly IR practice
### Upcoming milestones
- April 15: Salesforce MFA complete
- April 30: Training 100% complete
- May 15: Container scanning deployed
- June 1: New Security Champions onboarded
## Budget
| Category | Q1 Budget | Q1 Actual | Variance |
|----------|-----------|-----------|----------|
| Tools | $5,000 | $4,500 | -$500 |
| Training | $2,000 | $2,200 | +$200 |
| Contractor | $3,000 | $2,500 | -$500 |
| Certifications | $1,000 | $800 | -$200 |
| **Total** | **$11,000** | **$10,000** | **-$1,000** |
## Recommendations
1. **Approve budget for SIEM pilot** — $3,000/quarter for 90-day eval
2. **Add security requirements to vendor contracts** — Legal review needed
3. **Consider bug bounty program** — Research for Q3 implementation
## Appendix
### A. Full metrics table
[Detailed metrics data]
### B. Incident list
[All incidents with classification]
### C. Tool inventory
[Current security tools and status]
Presenting to executives
When you present to leadership:
1. Lead with the headline: "Security is in good shape. Two items need attention."
Not: "Let me walk you through our 47 metrics..."
2. Use traffic light colors:
- Green: On target
- Yellow: Needs attention
- Red: Critical issue
3. Show trends, not just numbers: "Phishing clicks dropped from 18% to 4% over the year" tells a story. "4% phishing click rate" is just a number.
4. Connect to business impact:
- "This vulnerability could have exposed customer data"
- "MFA prevents the #1 cause of breaches in our industry"
- "Faster incident response = less downtime = less revenue loss"
5. Be honest about gaps: Leadership respects honesty. Hiding problems erodes trust.
6. Come with solutions: For every problem, have a plan. "Salesforce MFA is at 83%. We'll be at 100% by April 15."
7. Ask for what you need: Make the ask clear. "I need approval for a $3,000 SIEM pilot."
Handling tough questions
| Question | How to answer |
|---|---|
| "Are we secure?" | "No organization is 100% secure. We're managing our top risks effectively. Here's evidence: [metrics]" |
| "How do we compare to others?" | "Industry benchmarks show [X]. We're at [Y], which is [above/below/at] average for companies our size." |
| "Why do we need more budget?" | "Current investment gives us [X coverage]. Additional budget would address [specific gap] which represents [risk]." |
| "Can you guarantee no breaches?" | "No one can guarantee that. We can reduce likelihood and impact. Here's what we're doing: [specifics]" |
| "What's our biggest risk?" | "[Specific risk] because [reason]. We're addressing it by [action]." |
Setting targets and goals
Metrics need targets. Without goals, numbers are just numbers.
How to set realistic targets
1. Baseline first: Measure current state before setting targets. You can't improve what you don't know.
2. Industry benchmarks: Look at what similar companies achieve:
- Phishing click rates: 5-15% is typical, under 5% is good
- MTTR for critical vulns: 7-30 days is typical, under 7 is good
- MFA adoption: 85-95% is typical, aim for 100%
3. Consider constraints:
- Team size
- Budget
- Technical debt
- Competing priorities
4. Incremental improvement: If you're at 15% phishing clicks, don't target 2%. Aim for 10% first.
Industry benchmarks: where to find them
Don't set targets in a vacuum. Use industry data:
| Source | What it covers | Link | Update frequency |
|---|---|---|---|
| Verizon DBIR | Breach patterns, attack vectors, industry trends | verizon.com/dbir | Annual |
| SANS Security Awareness Report | Phishing rates, training effectiveness | sans.org/security-awareness-training | Annual |
| Ponemon Institute | Cost of breach, time to detect/contain | ponemon.org | Annual |
| Mandiant M-Trends | Dwell time, attack techniques | mandiant.com/m-trends | Annual |
| CrowdStrike Global Threat Report | Attack trends, breakout time | crowdstrike.com/reports | Annual |
| KnowBe4 Benchmarking Report | Phishing prone percentage by industry | knowbe4.com/phishing-benchmarking | Annual |
Key benchmarks to know (2024 data):
| Metric | Industry average | Good | Excellent |
|---|---|---|---|
| Phishing click rate (untrained) | 30-35% | under 15% | under 5% |
| Phishing click rate (trained) | 10-15% | under 5% | under 2% |
| MTTD (mean time to detect) | 197 days | under 30 days | under 7 days |
| MTTC (mean time to contain) | 69 days | under 14 days | under 3 days |
| Patch compliance (critical, 7 days) | 60% | >85% | >95% |
| MFA adoption | 60-70% | >90% | 100% |
Sources: Verizon DBIR 2024, Ponemon Cost of Data Breach 2024, KnowBe4 2024
Example: Annual security goals
## Security Goals — 2025
### Vulnerability Management
| Goal | Current | Target | By when |
|------|---------|--------|---------|
| Zero critical vulns | 2 avg | 0 | Ongoing |
| MTTR critical | 8 days | 3 days | Q2 |
| MTTR high | 21 days | 14 days | Q3 |
| Patch compliance | 85% | 95% | Q2 |
### Access Management
| Goal | Current | Target | By when |
|------|---------|--------|---------|
| MFA adoption | 94% | 100% | Q1 |
| Privileged accounts | 25 | 15 | Q2 |
| Access review cycle | Annual | Quarterly | Q2 |
### Awareness
| Goal | Current | Target | By when |
|------|---------|--------|---------|
| Phishing click rate | 8% | 5% | Q2 |
| Training completion | 75% | 100% | Q1 |
| Report rate | 40% | 60% | Q3 |
### Incident Response
| Goal | Current | Target | By when |
|------|---------|--------|---------|
| MTTD | 4 hours | 1 hour | Q4 |
| MTTR | 6 hours | 2 hours | Q4 |
| Tabletop exercises | 1/year | 4/year | Ongoing |
Talking money: risk-based metrics
Leadership speaks the language of money. Translating security into financial terms makes your case stronger.
The FAIR methodology (simplified)
FAIR (Factor Analysis of Information Risk) provides a framework for quantifying risk in dollars. The full methodology is complex, but you can use the core concepts:
Risk ($) = Probability of Event × Impact of Event
Example: Credential stuffing attack
- Probability: 25% per year (based on industry data, our exposure)
- Impact: $200,000 (incident response, downtime, reputation)
- Annual risk: 0.25 × $200,000 = $50,000 expected loss
If MFA costs $5,000/year and reduces probability to 2%:
- New risk: 0.02 × $200,000 = $4,000
- Risk reduction: $50,000 - $4,000 = $46,000
- ROI: ($46,000 - $5,000) / $5,000 = 820%
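The credential-stuffing example above is just two multiplications and a ratio, which makes it easy to rerun for different probabilities and control costs. A sketch (the function names are illustrative, not part of the FAIR standard):

```python
def expected_loss(probability, impact):
    """Annualized expected loss: probability of event x impact of event."""
    return probability * impact

def control_roi(risk_before, risk_after, control_cost):
    """ROI of a control, as a fraction (8.2 means 820%)."""
    reduction = risk_before - risk_after
    return (reduction - control_cost) / control_cost

# Numbers from the worked example above
before = expected_loss(0.25, 200_000)  # $50,000/year
after = expected_loss(0.02, 200_000)   # $4,000/year
print(f"{control_roi(before, after, 5_000):.0%}")  # 820%
```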
Quick risk quantification template
For each major risk, estimate:
| Factor | How to estimate | Example |
|---|---|---|
| Likelihood | Industry data + your controls | 25% per year |
| Primary loss | Direct costs: IR, recovery, fines | $50,000 |
| Secondary loss | Reputation, customer loss, legal | $150,000 |
| Total impact | Primary + Secondary | $200,000 |
| Expected loss | Likelihood × Impact | $50,000/year |
Cost of breach data points
Use these for your calculations (adjust for your company size):
| Cost category | Small company (under 100 emp) | Mid-size (100-1000) | Source |
|---|---|---|---|
| Average breach cost | $120,000-$250,000 | $500,000-$2M | Ponemon 2024 |
| Cost per record (PII) | $150-$180 | $150-$180 | Ponemon 2024 |
| Ransomware average | $150,000-$300,000 | $500,000-$1.5M | Coveware 2024 |
| Business email compromise | $50,000-$150,000 | $150,000-$500,000 | FBI IC3 2023 |
| Downtime cost/hour | $1,000-$10,000 | $10,000-$100,000 | Varies by industry |
Calculating Security ROI
When requesting budget, show the math:
Security Investment ROI Calculator
Initiative: Implement SAST in CI/CD
Cost: $15,000/year (tool + setup time)
Without SAST:
- Vulns found in production: ~20/year
- Average fix cost in production: $5,000
- Total: $100,000/year + breach risk
With SAST:
- Vulns caught pre-production: 18/20 (90%)
- Vulns reaching production: 2/year
- Average fix cost in development: $500
- Production fixes: 2 × $5,000 = $10,000
- Pre-production fixes: 18 × $500 = $9,000
- Total: $19,000
Savings: $100,000 - $19,000 = $81,000
ROI: ($81,000 - $15,000) / $15,000 = 440%
Plus: Reduced breach probability (harder to quantify but real)
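The SAST calculation above generalizes to any "catch it earlier" investment: vary the catch rate and the per-stage fix costs and the savings fall out. A sketch with the same hypothetical numbers (the `annual_fix_cost` helper is an assumption for illustration):

```python
def annual_fix_cost(vulns_per_year, catch_rate, dev_fix_cost, prod_fix_cost):
    """Yearly fix cost when catch_rate of vulns is stopped pre-production."""
    caught = round(vulns_per_year * catch_rate)
    escaped = vulns_per_year - caught
    return caught * dev_fix_cost + escaped * prod_fix_cost

without_sast = annual_fix_cost(20, 0.0, 500, 5_000)  # $100,000
with_sast = annual_fix_cost(20, 0.9, 500, 5_000)     # $19,000
savings = without_sast - with_sast                    # $81,000
roi = (savings - 15_000) / 15_000                     # 4.4 -> 440%
print(f"Savings: ${savings:,}  ROI: {roi:.0%}")
```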
Expert tip: Don't over-engineer these calculations. Order of magnitude is enough. "$50K to $100K risk" is useful. "$73,847.23 risk" is false precision that undermines credibility.
Security maturity model
Where is your organization on the maturity curve? This helps set realistic goals and communicate progress.
Five levels of security maturity
| Level | Name | Characteristics |
|---|---|---|
| 5 | Optimizing | Continuous improvement, predictive analytics, security enables business innovation |
| 4 | Managed | Metrics-driven, quantitative risk management, security integrated into all processes |
| 3 | Defined | Documented processes, consistent practices, security policies enforced, regular training |
| 2 | Developing | Some processes exist, reactive security, basic controls in place, ad-hoc training |
| 1 | Initial | No formal security program, chaos mode, react to incidents as they happen |
Maturity assessment quick check
| Area | Level 1 | Level 2 | Level 3 | Level 4-5 |
|---|---|---|---|---|
| Vulnerability management | Fix when exploited | Scan occasionally | Regular scans, SLAs | Automated, risk-based |
| Access control | Shared passwords | Individual accounts | MFA, access reviews | Zero trust, JIT access |
| Incident response | Panic mode | Basic runbooks | Practiced IR plan | Automated response |
| Training | None | One-time onboarding | Annual training | Continuous, role-based |
| Metrics | None | After incidents | Basic dashboard | Predictive analytics |
| Security in SDLC | None | Final testing only | Integrated scanning | Shift-left, threat modeling |
Metrics by maturity level
Different maturity levels need different metrics:
Level 1-2 (Getting started):
- Do we have basic controls? (yes/no checklist)
- Are critical systems protected? (binary)
- Can we detect obvious attacks? (test results)
Level 2-3 (Building foundation):
- Vulnerability counts and trends
- MFA adoption percentage
- Training completion rate
- Incident count and severity
Level 3-4 (Maturing):
- MTTR, MTTD trends
- Risk-weighted vulnerability scores
- Leading indicators (security in design)
- Cost per vulnerability remediated
Level 4-5 (Optimizing):
- Predictive risk scores
- Security ROI by initiative
- Business-aligned risk metrics
- Benchmark comparisons
Metrics gaming: what to watch for
Any metric that becomes a target will be gamed. Anticipate this and design countermeasures.
Common gaming patterns
| Gaming behavior | How it looks | The real problem |
|---|---|---|
| Severity downgrade | "This is Medium, not High" | Critical vulns slip through SLA |
| Won't fix epidemic | MTTR looks great, vulns still exist | Risk not actually reduced |
| Easy wins focus | 50 Low vulns closed, 2 Critical ignored | Wrong priorities |
| Denominator manipulation | "Only 5 systems need patches" (we excluded 50) | Coverage gaps hidden |
| Timing games | Vuln discovered on April 1, logged April 5 | MTTD looks better than reality |
| Definition drift | "That wasn't an incident, just an event" | Incident count looks good, learning lost |
How to prevent gaming
- Multiple correlated metrics: If MTTR is gamed by closing as "won't fix," also track "% of vulns accepted as risk" and require VP approval for risk acceptance.
- Outcome metrics alongside activity metrics: Track both "vulns closed" and "vulns found in production over time."
- Random audits: Periodically review a sample of closed vulns. Were they really fixed? Was severity accurate?
- Trend anomaly detection: Sudden improvements should trigger investigation. Did something really change, or did the measurement?
- Third-party validation: Periodic pentests reveal what your own metrics might hide.
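Trend anomaly detection does not need fancy tooling: a simple threshold check over quarterly values will surface improvements that look too good. A sketch, assuming a lower-is-better metric like MTTR (the function, the 50% threshold, and the quarterly values are all illustrative):

```python
def suspicious_improvements(series, threshold=0.5):
    """Flag period-over-period improvements larger than `threshold` (default 50%).

    Returns (index, previous, current) tuples worth investigating: did the
    underlying reality change, or just the measurement?
    """
    flags = []
    for i in range(1, len(series)):
        prev, cur = series[i - 1], series[i]
        if prev > 0 and (prev - cur) / prev > threshold:
            flags.append((i, prev, cur))
    return flags

mttr_by_quarter = [21, 19, 20, 6]  # hypothetical: a sudden Q4 "improvement"
print(suspicious_improvements(mttr_by_quarter))  # [(3, 20, 6)]
```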
Expert insight: The goal isn't to catch people gaming metrics—it's to design metrics that are hard to game while still measuring what matters. When gaming happens, it often reveals perverse incentives you didn't intend to create. Fix the incentives, not just the behavior.
Real stories: when metrics made a difference
Story 1: The phishing chart that got budget
A Security Champion at a 200-person company tracked phishing simulation results for six months. The first simulation: 28% click rate. After monthly simulations and targeted training: 4% click rate.
When presenting to the CEO, she didn't talk about "security awareness." She showed:
- "28% of employees would have given credentials to attackers"
- "Now it's 4%—a 7x improvement"
- "The industry average breach involving phishing costs $150K"
- "We reduced that risk for $3,000 in training investment"
Result: CEO approved budget for a proper security awareness platform and praised the initiative at the all-hands meeting.
Story 2: The MTTR that revealed a broken process
A company tracked MTTR for vulnerabilities and was proud of their 5-day average. Then a Security Champion dug deeper and found:
- Critical vulns: 3 days (good)
- High vulns: 8 days (okay)
- Medium vulns: 45 days (hidden by the average)
The averaging masked that medium vulnerabilities were being ignored. Once exposed, the team implemented SLAs by severity and the real issue got addressed.
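The fix in this story is a grouping operation: report MTTR per severity instead of one blended average. A sketch with hypothetical per-vuln remediation times (the `mttr_by_severity` helper is an assumption for illustration):

```python
from collections import defaultdict

def mttr_by_severity(vulns):
    """Average remediation days per severity; avoids a misleading overall mean."""
    buckets = defaultdict(list)
    for severity, days in vulns:
        buckets[severity].append(days)
    return {sev: sum(d) / len(d) for sev, d in buckets.items()}

# Hypothetical data echoing the story: fast criticals, neglected mediums
vulns = [("critical", 3), ("critical", 3), ("high", 8),
         ("medium", 45), ("medium", 45)]
print(mttr_by_severity(vulns))
```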
Story 3: The missing leading indicator
A startup measured incidents and vulnerabilities but nothing about their development process. They had zero incidents (good!) but also:
- No security in code reviews
- No SAST or dependency scanning
- No security training for developers
A new Security Champion added leading indicators:
- % of repos with automated security scanning: 0%
- % of developers with security training: 12%
- % of PRs with security review: 0%
This data made a compelling case that their low incident count was luck, not security. Six months later, after adding these practices, they caught their first critical vulnerability in CI/CD—before it reached production.
Story 4: When lack of metrics caused disaster
A mid-size company had no security metrics. Leadership assumed "no news is good news." Then came a ransomware attack:
- No visibility into patch status — 40% of systems were months behind
- No access inventory — couldn't identify compromised accounts
- No incident metrics — no baseline for "normal" detection time
- Recovery took 3 weeks; cost: $800K
Post-incident, the CEO's first question: "Why didn't we know we were vulnerable?" The answer: "We didn't measure it."
Storytelling: how to present metrics effectively
Data doesn't speak for itself. You need to tell a story.
The pyramid principle
Structure your presentation like a pyramid:
- Top: The conclusion — Start with the answer
- Middle: Key supporting points — 3-4 main reasons
- Bottom: Details — Data that supports each point
BAD structure:
"Let me walk you through our 47 metrics, and at the end
I'll tell you what it means..."
GOOD structure:
"Our security posture is strong, with two areas needing
attention. Here's why I say that: [3 points with data]"
The "So what?" test
For every metric, answer "So what?" before leadership asks:
| Metric | So what? |
|---|---|
| "We have 12 high vulnerabilities" | "Each one could lead to a breach. Here's our plan to close them by [date]." |
| "Phishing clicks dropped to 4%" | "That's 7x better than when we started, and below industry average. Our investment is paying off." |
| "MTTR improved from 8 to 3 days" | "We're fixing critical issues before attackers can exploit them. Faster than 80% of companies our size." |
Visualization that works
| For this data | Use this | Why |
|---|---|---|
| Trend over time | Line chart | Shows direction and velocity |
| Current status vs. target | Gauge or progress bar | Instantly shows gap |
| Comparison across categories | Horizontal bar chart | Easy to compare |
| Multiple metrics summary | Traffic light table | Quick status scan |
| Before/after | Two numbers side by side | Clear improvement story |
Timing your report
When you present matters:
- Monday morning: Low energy, high distraction. Avoid.
- Friday afternoon: Everyone mentally checked out. Avoid.
- After a security incident: High attention, but reactive mindset.
- Budget planning season: Perfect timing for investment asks.
- After a positive milestone: Build on momentum.
Expert tip: Schedule regular time (monthly or quarterly) rather than ad-hoc updates. Consistency builds credibility.
Handling skepticism
| Objection | Response |
|---|---|
| "These numbers seem too good" | "Here's the raw data and methodology. I'd rather under-claim than over-promise." |
| "Can we trust this data?" | "We cross-validate with [source]. Here's how we collect it." |
| "This seems complicated" | "The summary is simple: [one sentence]. Details are in appendix." |
| "Other priorities are more urgent" | "Understood. Here's the risk of deferring, quantified: [$$]" |
Common mistakes
- Measuring everything — Too many metrics creates noise; focus on what matters
- Vanity metrics — "Security training hours" doesn't equal "employees are secure"
- Infrequent updates — Quarterly metrics with no updates between creates surprise
- No context — Raw numbers without benchmarks or trends are meaningless
- Hiding bad news — Leadership finds out eventually; better from you
- Manual collection only — Can't scale; automate what you can
- No targets — Numbers without goals don't drive improvement
- Too technical — CVSS scores mean nothing to the CEO
- Not connecting to business — "So what?" is a valid question
- Beautiful dashboards, no action — Metrics should drive decisions
Workshop: create your security dashboard
Part 1: Define your metrics (1 hour)
1. Identify key metrics:
- Pick 5-7 metrics from the lists above
- Ensure coverage: vulnerabilities, access, incidents, awareness
- Choose metrics you can actually measure now
2. Set baselines:
- Collect current state for each metric
- Document how you're collecting data
- Note any gaps in data availability
3. Set targets:
- Research benchmarks for your industry
- Set realistic improvement goals
- Define timeline for each target
Part 2: Build the dashboard (2 hours)
1. Create structure:
- Use Google Sheets, Notion, or your tool of choice
- Build executive summary view
- Build detailed metrics view
2. Populate with data:
- Enter current metrics
- Add historical data if available
- Create trend visualizations
3. Add automation:
- Identify data sources
- Set up queries or integrations where possible
- Document manual collection steps
Part 3: Write quarterly report (1 hour)
1. Use the template:
- Customize for your company
- Fill in current quarter data
- Add narrative for key points
2. Review and refine:
- Have someone else read it
- Check for jargon
- Verify all numbers
Artifacts to produce
After this workshop, you should have:
- List of 5-7 key security metrics
- Baseline measurements for each metric
- Targets and timelines for each metric
- Executive dashboard (spreadsheet or tool)
- Detailed metrics tracking sheet
- First quarterly report draft
- Data collection schedule/automation plan
Self-check questions
- What's the difference between a vanity metric and a real metric?
- Why should you show trends rather than just current numbers?
- What are the five core metric categories for Security Champions?
- How often should you report to executive leadership?
- What should you do when metrics show bad news?
- How do you set realistic security targets?
- Why is automation important for metrics collection?
- What should an executive dashboard include?
- How do you connect security metrics to business impact?
- What's the first step before setting improvement targets?
How to explain this to leadership
The pitch: "I want to create a simple dashboard that shows our security status at a glance. You'll see where we're strong, where we need work, and how we're improving. No surprises — you'll always know where we stand."
What you need:
- Access to security tool dashboards (vulnerability scanner, IdP, etc.)
- 2-3 hours per month for data collection and reporting
- 30 minutes quarterly with leadership for review
The value:
- Visibility into security investments
- Early warning of problems
- Evidence for audit/compliance
- Data for strategic decisions
First deliverable: "I'll have a baseline dashboard ready in 2 weeks. After that, monthly updates with a quarterly review."
Conclusion
Measure to improve, not to report. A dashboard nobody looks at is just overhead. Three metrics that drive action every month are worth more than twenty metrics that get presented once a quarter and forgotten.
Start small. A spreadsheet is fine. The habit of tracking matters more than the tool you use to track it.
What's next
This completes the security culture section. Next: advanced topics and long-term strategy — risk management, compliance, vendor security, and scaling the security program.