
Measuring Security Champion program effectiveness

"How do we know the security program is working?" This question comes from leadership, and it deserves a real answer. Vague responses like "we're more secure" don't cut it. You need concrete metrics that show progress, identify problems, and justify continued investment.

This chapter covers what to measure, how to measure it, building dashboards that tell a story, and reporting to leadership in a way that gets attention and support.

Why measurement matters

Without metrics, you're flying blind:

For you:

  • Can't tell if initiatives are working
  • Don't know where to focus effort
  • Can't prioritize effectively
  • No way to show progress

For leadership:

  • Can't justify security spending
  • Can't compare to industry benchmarks
  • No visibility into risk posture
  • Security feels like a black box

For the team:

  • No goals to work toward
  • No recognition for improvement
  • No feedback on what matters
  • Security feels thankless

The right metrics make security visible, measurable, and manageable.

What makes a good security metric

Not all metrics are useful. Good security metrics are:

| Characteristic | Why it matters | Example |
|---|---|---|
| Actionable | You can do something about it | "% of systems patched within SLA" (can improve) vs. "number of attacks attempted" (can't control) |
| Measurable | You can actually count it | "Mean time to remediate critical vulns" (measurable) vs. "security culture" (vague) |
| Relevant | Connects to real risk | "% of employees with MFA" (protects against account compromise) vs. "security training hours" (activity, not outcome) |
| Comparable | Can track over time | Same definition each quarter allows trend analysis |
| Understandable | Leadership can grasp it | "3 critical vulnerabilities in production" vs. "CVSS aggregate score of 7.2" |

Vanity metrics vs. real metrics

Vanity metrics look good but don't drive decisions:

  • Number of security tools deployed
  • Total phishing emails blocked
  • Hours of security training delivered
  • Number of policies written

Real metrics show actual security posture:

  • Percentage of critical vulnerabilities fixed within SLA
  • Phishing simulation click rate (and trend)
  • Time to detect and respond to incidents
  • Percentage of systems with current patches

Leading vs. lagging indicators

This distinction matters more than most Security Champions realize.

Lagging indicators tell you what already happened:

  • Number of security incidents
  • Breaches detected
  • Vulnerabilities found in production
  • Compliance audit failures

Leading indicators predict future problems:

  • Percentage of projects with threat modeling
  • Security test coverage in CI/CD
  • Developers trained in secure coding
  • Third-party risk assessments completed
  • Security requirements in new project specs

| Type | Example | Why it matters |
|---|---|---|
| Lagging | 5 incidents this quarter | Shows past problems, but too late to prevent them |
| Leading | 80% of PRs have security review | Predicts fewer incidents next quarter |
| Lagging | 12 vulns found in production | Damage already possible |
| Leading | 95% of vulns caught in CI/CD | Problems stopped before production |

Expert insight: Most organizations measure only lagging indicators. They're easy to count but tell you about yesterday. Leading indicators require more effort to define and track, but they're the ones that actually drive improvement.

A balanced scorecard includes both:

  • Lagging: to measure outcomes and prove impact
  • Leading: to predict trends and course-correct early

Core metrics for Security Champions

Start with these foundational metrics. They're measurable, actionable, and meaningful to leadership.

1. Vulnerability management metrics

| Metric | What it measures | Target | How to collect |
|---|---|---|---|
| Open critical/high vulnerabilities | Current risk exposure | 0 critical, fewer than 10 high | Vulnerability scanner, issue tracker |
| Mean time to remediate (MTTR) | How fast you fix issues | Under 7 days critical, under 30 days high | Track from discovery to closure |
| Patch compliance rate | Systems up to date | >95% within SLA | Patch management tool |
| Vulnerability density | Issues per codebase size | Decreasing trend | SAST tools (findings per KLOC) |
| Age of oldest vulnerability | Neglected issues | Under 30 days for critical | Issue tracker query |

How to calculate MTTR:

MTTR = Sum of (Closure Date - Discovery Date) for all vulns / Number of vulns

Example:
- Vuln A: Discovered Jan 1, Fixed Jan 5 = 4 days
- Vuln B: Discovered Jan 3, Fixed Jan 10 = 7 days
- Vuln C: Discovered Jan 5, Fixed Jan 8 = 3 days

MTTR = (4 + 7 + 3) / 3 = 4.67 days
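The same calculation is easy to script against an issue-tracker export so it never has to be done by hand. A minimal sketch in Python; the `vulns` list is stand-in data mirroring the example above:

```python
from datetime import date

def mean_time_to_remediate(vulns):
    """Average days from discovery to closure across (discovered, fixed) pairs."""
    total_days = sum((fixed - discovered).days for discovered, fixed in vulns)
    return total_days / len(vulns)

vulns = [
    (date(2025, 1, 1), date(2025, 1, 5)),   # Vuln A: 4 days
    (date(2025, 1, 3), date(2025, 1, 10)),  # Vuln B: 7 days
    (date(2025, 1, 5), date(2025, 1, 8)),   # Vuln C: 3 days
]
print(round(mean_time_to_remediate(vulns), 2))  # 4.67
```

In practice you would filter the input by severity before averaging, since (as the MTTR story later in this chapter shows) a single blended average can hide neglected categories.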

2. Access and authentication metrics

| Metric | What it measures | Target | How to collect |
|---|---|---|---|
| MFA adoption rate | Account protection | 100% for critical systems | Identity provider admin console |
| Accounts with MFA | Overall MFA coverage | >95% all accounts | IdP dashboard |
| Privileged account count | Admin sprawl | Minimum necessary | IAM audit |
| Orphaned accounts | Access cleanup | 0 | Regular access reviews |
| Failed login attempts | Potential attacks | Baseline + alerts for spikes | Auth logs |

MFA tracking dashboard example:

MFA Adoption by Service — March 2025

| Service | Users | With MFA | Rate |
|---|---|---|---|
| Google Workspace | 45 | 45 | 100% |
| GitHub | 32 | 32 | 100% |
| AWS Console | 12 | 12 | 100% |
| Slack | 45 | 43 | 96% |
| Salesforce | 18 | 15 | 83% ← Action needed |
| Jira | 38 | 38 | 100% |
| **Total** | **190** | **185** | **97%** |

3. Incident metrics

| Metric | What it measures | Target | How to collect |
|---|---|---|---|
| Number of incidents | Volume of problems | Trend down over time | Incident tracker |
| Mean time to detect (MTTD) | How fast you notice | Under 1 hour for critical | Incident logs |
| Mean time to respond (MTTR) | How fast you act | Under 4 hours for critical | Incident logs |
| Mean time to resolve | Full remediation | Under 24 hours for critical | Incident logs |
| Incidents by severity | Risk distribution | Fewer critical over time | Incident tracker |
| Incidents by category | Pattern identification | Identifies focus areas | Incident tracker |

Incident trends example:

Incidents by Severity — 2024

| Severity | Q1 | Q2 | Q3 | Q4 | Trend |
|---|---|---|---|---|---|
| Critical | 2 | 1 | 1 | 0 | ↓ |
| High | 5 | 4 | 3 | 2 | ↓ |
| Medium | 8 | 10 | 7 | 5 | ↓ |
| Low | 12 | 15 | 11 | 8 | ↓ |
| **Total** | **27** | **30** | **22** | **15** | ↓ |

Notes:
- Q2 spike due to Log4j-related incidents
- Q4 improvement reflects patch automation

4. Training and awareness metrics

| Metric | What it measures | Target | How to collect |
|---|---|---|---|
| Training completion rate | Compliance | 100% for mandatory | LMS or tracking sheet |
| Phishing simulation click rate | Awareness effectiveness | Under 5%, trending down | Phishing simulation tool |
| Phishing report rate | Employee vigilance | >50% of simulated phishes | Phishing tool |
| Security questions asked | Engagement | Trend up | Slack channel analytics |
| Policy acknowledgment rate | Policy awareness | 100% | HR system or form |

Phishing simulation tracking:

Phishing Simulation Results — 2024

| Campaign | Sent | Clicked | Click rate | Reported | Report rate |
|---|---|---|---|---|---|
| Jan: Fake DocuSign | 45 | 8 | 18% | 12 | 27% |
| Mar: IT Password | 45 | 5 | 11% | 18 | 40% |
| May: Delivery | 45 | 4 | 9% | 22 | 49% |
| Jul: LinkedIn | 45 | 3 | 7% | 25 | 56% |
| Sep: Invoice | 45 | 2 | 4% | 28 | 62% |
| Nov: Holiday | 45 | 2 | 4% | 30 | 67% |
| **Trend** | | | ↓ 14 pts | | ↑ 40 pts |

Goal: Click rate under 5%, Report rate over 50% ✓
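The per-campaign percentages are just two ratios over the send count, which makes them easy to compute straight from the simulation tool's raw numbers. A small sketch; the campaign tuples are the same figures as the table above:

```python
def campaign_rates(sent, clicked, reported):
    """Percent click rate and report rate for one phishing campaign."""
    return round(100 * clicked / sent), round(100 * reported / sent)

campaigns = [
    # (name, sent, clicked, reported) — figures from the tracking table
    ("Jan: Fake DocuSign", 45, 8, 12),
    ("Nov: Holiday", 45, 2, 30),
]
for name, sent, clicked, reported in campaigns:
    click_pct, report_pct = campaign_rates(sent, clicked, reported)
    print(f"{name}: {click_pct}% clicked, {report_pct}% reported")
```

Tracking both rates matters: a falling click rate with a flat report rate means people are ignoring phishes, not catching them.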

5. Security hygiene metrics

| Metric | What it measures | Target | How to collect |
|---|---|---|---|
| Endpoint protection coverage | Device security | 100% | EDR/AV dashboard |
| Encryption at rest | Data protection | 100% for sensitive data | Cloud console audit |
| Backup success rate | Recovery capability | >99% | Backup system logs |
| Secret scanning alerts | Exposed credentials | 0 unaddressed | GitHub/GitLab security tab |
| Dependency vulnerabilities | Supply chain risk | 0 critical, under 10 high | Dependabot, Snyk |

Building a security dashboard

A dashboard turns raw metrics into a story. The right dashboard answers "how are we doing?" at a glance.

Dashboard design principles

1. Audience-appropriate:

  • Executive dashboard: 5-7 high-level metrics
  • Security team dashboard: Detailed operational metrics
  • Developer dashboard: Code and dependency metrics

2. Trend-focused:

  • Show change over time, not just current state
  • Use arrows, colors, or sparklines for trends
  • Compare to previous period

3. Actionable:

  • Link to details for investigation
  • Highlight what needs attention
  • Make next steps obvious

4. Honest:

  • Don't hide bad news
  • Show context for anomalies
  • Avoid cherry-picking good metrics

Executive dashboard template

Security Dashboard — March 2025 · Overall status: GOOD (8/10 metrics on target)

Critical metrics:

| Metric | Value | Target | Status |
|---|---|---|---|
| Open critical vulns | 0 | 0 | ✓ OK |
| MFA adoption | 97% | 95% | ✓ OK |
| Phishing click rate | 4% | under 5% | ✓ OK |
| MTTR critical | 2.5d | under 7d | ✓ OK |
| Salesforce MFA | 83% | 100% | ⚠ Action needed |
| Training completion | 88% | 100% | ⚠ Action needed |

Trends vs. last quarter: Security incidents 19 → 15 (−21%) · Vulnerabilities fixed 45 → 52 (+16%) · Mean time to remediate 8d → 5d (−38%)

Key accomplishments: Deployed automated dependency scanning · Achieved 100% MFA for all admin accounts · Reduced critical vuln MTTR from 8 to 2.5 days

Next quarter focus: Complete Salesforce MFA rollout · Achieve 100% training compliance · Implement container scanning in CI/CD

Dashboard tools

| Tool | Best for | Cost | Effort |
|---|---|---|---|
| Google Sheets/Excel | Quick start, simple metrics | Free | Low |
| Notion | Visual, easy sharing | Free tier | Low |
| Google Data Studio | Multiple data sources | Free | Medium |
| Grafana | Technical metrics, time series | Free (self-hosted) | Medium |
| Datadog | Integration with monitoring | Paid | Medium |
| Splunk | Enterprise, complex queries | Paid | High |
| Custom app | Specific needs | Dev time | High |

For small companies: Start with a spreadsheet. Seriously. A well-maintained Google Sheet beats an unmaintained Grafana instance.

Data collection automation

Manual data collection doesn't scale. Automate where possible:

# Example: Weekly metrics collection (GitHub Actions workflow)
name: Weekly Security Metrics

on:
  schedule:
    - cron: '0 9 * * 1' # Every Monday at 9 AM

jobs:
  collect-metrics:
    runs-on: ubuntu-latest
    steps:
      - name: Get open code scanning alerts
        run: |
          gh api "repos/$ORG/$REPO/code-scanning/alerts?state=open" --jq 'length'

      - name: Get open Dependabot alerts
        run: |
          gh api "repos/$ORG/$REPO/dependabot/alerts" \
            --jq '[.[] | select(.state=="open")] | length'

      - name: Get MFA status
        run: |
          # Query Google Workspace Admin API or IdP

      - name: Post to Slack
        run: |
          curl -X POST "$SLACK_WEBHOOK" \
            -d '{"text": "Weekly Security Metrics: ..."}'
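The Slack payload in the last step is easier to keep valid if you build it from a dict rather than hand-writing JSON inside a shell string. A small helper sketch; the metric names shown are placeholders:

```python
import json

def slack_metrics_payload(metrics):
    """Build the JSON body for a Slack incoming-webhook post
    from a dict of metric name -> value."""
    lines = [f"- {name}: {value}" for name, value in metrics.items()]
    return json.dumps({"text": "Weekly Security Metrics\n" + "\n".join(lines)})

payload = slack_metrics_payload({"Open Dependabot alerts": 3, "MFA adoption": "97%"})
print(payload)
```

Generating the payload programmatically also means a metric value containing quotes or newlines can't silently break the webhook call.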

Reporting to leadership

Metrics are useless if leadership doesn't see them. Regular reporting keeps security visible and demonstrates value.

Report frequency

| Report | Frequency | Audience | Content |
|---|---|---|---|
| Dashboard | Real-time | Security team | All metrics, detailed |
| Weekly summary | Weekly | Security + IT leads | Key changes, alerts |
| Monthly report | Monthly | Department heads | Trends, focus areas |
| Quarterly review | Quarterly | Executive team | Strategic view, achievements |
| Annual report | Annually | Board (if applicable) | Year in review, roadmap |

Quarterly report template

# Security Quarterly Report — Q1 2025

## Executive Summary

Q1 was a strong quarter for security. We closed 52 vulnerabilities,
reduced our incident count by 20%, and achieved our MFA target for
critical systems. Two areas need attention: Salesforce MFA rollout
and training completion.

## Key Metrics

### Security Posture

| Metric | Q4 2024 | Q1 2025 | Target | Status |
|--------|---------|---------|--------|--------|
| Open Critical Vulns | 2 | 0 | 0 | Met |
| Open High Vulns | 12 | 8 | under 10 | Met |
| MTTR (Critical) | 8 days | 2.5 days | under 7 days | Met |
| MFA Adoption | 94% | 97% | 95% | Met |
| Phishing Click Rate | 7% | 4% | under 5% | Met |

### Trend Analysis

[Include sparklines or simple charts showing 4-quarter trends]

## Accomplishments

### Completed initiatives
1. **Automated dependency scanning** — Now running on all repositories,
catching vulnerable packages before deployment
2. **Admin MFA enforcement** — 100% of admin accounts now require
hardware security keys
3. **Incident response improvement** — Reduced critical incident
MTTR from 8 to 2.5 days through updated runbooks

### Metrics improvements
- Closed 52 vulnerabilities (up from 45 last quarter)
- Security incidents down 20% (15 vs. 19)
- Phishing resilience improved: 4% click rate (down from 7%)

## Incidents

### Summary
- Total incidents: 15 (down from 19)
- Critical: 0 | High: 2 | Medium: 5 | Low: 8

### Notable incidents
1. **Compromised contractor account (High)** — February 12
- Detected within 2 hours, contained within 4 hours
- No customer data accessed
- Root cause: Shared credentials; fixed by enforcing individual accounts

2. **Phishing campaign targeting finance (Medium)** — March 5
- 3 employees clicked, 8 reported
- No credential compromise
- Response: Additional training for finance team

## Risk Areas

### Issues requiring attention

1. **Salesforce MFA (Medium priority)**
- Current: 83% adoption
- Blocker: Some users on legacy mobile devices
- Plan: Device upgrade for 3 remaining users by April 15

2. **Training completion (Medium priority)**
- Current: 88% complete
- Blocker: New hires in onboarding queue
- Plan: Complete by April 30

### Emerging risks
- Increased targeting of our industry (per threat intel)
- New dependency vulnerabilities in React ecosystem

## Resource Utilization

| Initiative | Planned hours | Actual hours | Notes |
|------------|--------------|--------------|-------|
| Vulnerability remediation | 80 | 75 | On track |
| Security training | 20 | 25 | Additional sessions requested |
| Incident response | 40 | 35 | Fewer incidents |
| Policy updates | 20 | 15 | Completed early |
| Tool evaluation | 20 | 30 | Extended eval for SIEM |

## Next Quarter Plan

### Q2 2025 Priorities

1. **Complete MFA rollout** — 100% coverage including Salesforce
2. **Container security** — Implement image scanning in CI/CD
3. **Security Champions expansion** — Train 2 additional champions
4. **Tabletop exercise** — Quarterly IR practice

### Upcoming milestones
- April 15: Salesforce MFA complete
- April 30: Training 100% complete
- May 15: Container scanning deployed
- June 1: New Security Champions onboarded

## Budget

| Category | Q1 Budget | Q1 Actual | Variance |
|----------|-----------|-----------|----------|
| Tools | $5,000 | $4,500 | -$500 |
| Training | $2,000 | $2,200 | +$200 |
| Contractor | $3,000 | $2,500 | -$500 |
| Certifications | $1,000 | $800 | -$200 |
| **Total** | **$11,000** | **$10,000** | **-$1,000** |

## Recommendations

1. **Approve budget for SIEM pilot** — $3,000/quarter for 90-day eval
2. **Add security requirements to vendor contracts** — Legal review needed
3. **Consider bug bounty program** — Research for Q3 implementation

## Appendix

### A. Full metrics table
[Detailed metrics data]

### B. Incident list
[All incidents with classification]

### C. Tool inventory
[Current security tools and status]

Presenting to executives

When you present to leadership:

1. Lead with the headline: "Security is in good shape. Two items need attention."

Not: "Let me walk you through our 47 metrics..."

2. Use traffic light colors:

  • Green: On target
  • Yellow: Needs attention
  • Red: Critical issue

3. Show trends, not just numbers: "Phishing clicks dropped from 18% to 4% over the year" tells a story. "4% phishing click rate" is just a number.

4. Connect to business impact:

  • "This vulnerability could have exposed customer data"
  • "MFA prevents the #1 cause of breaches in our industry"
  • "Faster incident response = less downtime = less revenue loss"

5. Be honest about gaps: Leadership respects honesty. Hiding problems erodes trust.

6. Come with solutions: For every problem, have a plan. "Salesforce MFA is at 83%. We'll be at 100% by April 15."

7. Ask for what you need: Make the ask clear. "I need approval for a $3,000 SIEM pilot."

Handling tough questions

| Question | How to answer |
|---|---|
| "Are we secure?" | "No organization is 100% secure. We're managing our top risks effectively. Here's evidence: [metrics]" |
| "How do we compare to others?" | "Industry benchmarks show [X]. We're at [Y], which is [above/below/at] average for companies our size." |
| "Why do we need more budget?" | "Current investment gives us [X coverage]. Additional budget would address [specific gap] which represents [risk]." |
| "Can you guarantee no breaches?" | "No one can guarantee that. We can reduce likelihood and impact. Here's what we're doing: [specifics]" |
| "What's our biggest risk?" | "[Specific risk] because [reason]. We're addressing it by [action]." |

Setting targets and goals

Metrics need targets. Without goals, numbers are just numbers.

How to set realistic targets

1. Baseline first: Measure current state before setting targets. You can't improve what you don't know.

2. Industry benchmarks: Look at what similar companies achieve:

  • Phishing click rates: 5-15% is typical, under 5% is good
  • MTTR for critical vulns: 7-30 days is typical, under 7 is good
  • MFA adoption: 85-95% is typical, aim for 100%

3. Consider constraints:

  • Team size
  • Budget
  • Technical debt
  • Competing priorities

4. Incremental improvement: If you're at 15% phishing clicks, don't target 2%. Aim for 10% first.

Industry benchmarks: where to find them

Don't set targets in a vacuum. Use industry data:

| Source | What it covers | Link | Update frequency |
|---|---|---|---|
| Verizon DBIR | Breach patterns, attack vectors, industry trends | verizon.com/dbir | Annual |
| SANS Security Awareness Report | Phishing rates, training effectiveness | sans.org/security-awareness-training | Annual |
| Ponemon Institute | Cost of breach, time to detect/contain | ponemon.org | Annual |
| Mandiant M-Trends | Dwell time, attack techniques | mandiant.com/m-trends | Annual |
| CrowdStrike Global Threat Report | Attack trends, breakout time | crowdstrike.com/reports | Annual |
| KnowBe4 Benchmarking Report | Phishing prone percentage by industry | knowbe4.com/phishing-benchmarking | Annual |

Key benchmarks to know (2024 data):

| Metric | Industry average | Good | Excellent |
|---|---|---|---|
| Phishing click rate (untrained) | 30-35% | under 15% | under 5% |
| Phishing click rate (trained) | 10-15% | under 5% | under 2% |
| MTTD (mean time to detect) | 197 days | under 30 days | under 7 days |
| MTTC (mean time to contain) | 69 days | under 14 days | under 3 days |
| Patch compliance (critical, 7 days) | 60% | >85% | >95% |
| MFA adoption | 60-70% | >90% | 100% |

Sources: Verizon DBIR 2024, Ponemon Cost of Data Breach 2024, KnowBe4 2024

Example: Annual security goals

## Security Goals — 2025

### Vulnerability Management
| Goal | Current | Target | By when |
|------|---------|--------|---------|
| Zero critical vulns | 2 avg | 0 | Ongoing |
| MTTR critical | 8 days | 3 days | Q2 |
| MTTR high | 21 days | 14 days | Q3 |
| Patch compliance | 85% | 95% | Q2 |

### Access Management
| Goal | Current | Target | By when |
|------|---------|--------|---------|
| MFA adoption | 94% | 100% | Q1 |
| Privileged accounts | 25 | 15 | Q2 |
| Access review cycle | Annual | Quarterly | Q2 |

### Awareness
| Goal | Current | Target | By when |
|------|---------|--------|---------|
| Phishing click rate | 8% | 5% | Q2 |
| Training completion | 75% | 100% | Q1 |
| Report rate | 40% | 60% | Q3 |

### Incident Response
| Goal | Current | Target | By when |
|------|---------|--------|---------|
| MTTD | 4 hours | 1 hour | Q4 |
| MTTR | 6 hours | 2 hours | Q4 |
| Tabletop exercises | 1/year | 4/year | Ongoing |

Talking money: risk-based metrics

Leadership speaks the language of money. Translating security into financial terms makes your case stronger.

The FAIR methodology (simplified)

FAIR (Factor Analysis of Information Risk) provides a framework for quantifying risk in dollars. The full methodology is complex, but you can use the core concepts:

Risk ($) = Probability of Event × Impact of Event

Example: Credential stuffing attack
- Probability: 25% per year (based on industry data, our exposure)
- Impact: $200,000 (incident response, downtime, reputation)
- Annual risk: 0.25 × $200,000 = $50,000 expected loss

If MFA costs $5,000/year and reduces probability to 2%:
- New risk: 0.02 × $200,000 = $4,000
- Risk reduction: $50,000 - $4,000 = $46,000
- ROI: ($46,000 - $5,000) / $5,000 = 820%
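The arithmetic above generalizes to any control, so it's worth capturing as two tiny functions you can reuse for each budget ask. A sketch using the same MFA numbers:

```python
def expected_loss(probability_per_year, impact):
    """Annualized expected loss: probability of the event x its impact."""
    return probability_per_year * impact

def control_roi(risk_before, risk_after, annual_cost):
    """ROI of a control as a fraction (8.2 == 820%)."""
    risk_reduction = risk_before - risk_after
    return (risk_reduction - annual_cost) / annual_cost

before = expected_loss(0.25, 200_000)  # $50,000/year without MFA
after = expected_loss(0.02, 200_000)   # $4,000/year with MFA
print(f"Risk reduction: ${before - after:,.0f}")
print(f"ROI: {control_roi(before, after, 5_000):.0%}")
```

The probability and impact inputs are still estimates; the value of scripting this is consistency across initiatives, not extra precision.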

Quick risk quantification template

For each major risk, estimate:

| Factor | How to estimate | Example |
|---|---|---|
| Likelihood | Industry data + your controls | 25% per year |
| Primary loss | Direct costs: IR, recovery, fines | $50,000 |
| Secondary loss | Reputation, customer loss, legal | $150,000 |
| Total impact | Primary + Secondary | $200,000 |
| Expected loss | Likelihood × Impact | $50,000/year |

Cost of breach data points

Use these for your calculations (adjust for your company size):

| Cost category | Small company (under 100 emp) | Mid-size (100-1000) | Source |
|---|---|---|---|
| Average breach cost | $120,000-$250,000 | $500,000-$2M | Ponemon 2024 |
| Cost per record (PII) | $150-$180 | $150-$180 | Ponemon 2024 |
| Ransomware average | $150,000-$300,000 | $500,000-$1.5M | Coveware 2024 |
| Business email compromise | $50,000-$150,000 | $150,000-$500,000 | FBI IC3 2023 |
| Downtime cost/hour | $1,000-$10,000 | $10,000-$100,000 | Varies by industry |

Calculating Security ROI

When requesting budget, show the math:

Security Investment ROI Calculator

Initiative: Implement SAST in CI/CD
Cost: $15,000/year (tool + setup time)

Without SAST:
- Vulns found in production: ~20/year
- Average fix cost in production: $5,000
- Total: $100,000/year + breach risk

With SAST:
- Vulns caught pre-production: 18/20 (90%)
- Vulns reaching production: 2/year
- Average fix cost in development: $500
- Production fixes: 2 × $5,000 = $10,000
- Pre-production fixes: 18 × $500 = $9,000
- Total: $19,000

Savings: $100,000 - $19,000 = $81,000
ROI: ($81,000 - $15,000) / $15,000 = 440%

Plus: Reduced breach probability (harder to quantify but real)

Expert tip: Don't over-engineer these calculations. Order of magnitude is enough. "$50K to $100K risk" is useful. "$73,847.23 risk" is false precision that undermines credibility.

Security maturity model

Where is your organization on the maturity curve? This helps set realistic goals and communicate progress.

Five levels of security maturity

| Level | Name | Characteristics |
|---|---|---|
| 5 | Optimizing | Continuous improvement, predictive analytics, security enables business innovation |
| 4 | Managed | Metrics-driven, quantitative risk management, security integrated into all processes |
| 3 | Defined | Documented processes, consistent practices, security policies enforced, regular training |
| 2 | Developing | Some processes exist, reactive security, basic controls in place, ad-hoc training |
| 1 | Initial | No formal security program, chaos mode, react to incidents as they happen |

Maturity assessment quick check

| Area | Level 1 | Level 2 | Level 3 | Level 4-5 |
|---|---|---|---|---|
| Vulnerability management | Fix when exploited | Scan occasionally | Regular scans, SLAs | Automated, risk-based |
| Access control | Shared passwords | Individual accounts | MFA, access reviews | Zero trust, JIT access |
| Incident response | Panic mode | Basic runbooks | Practiced IR plan | Automated response |
| Training | None | One-time onboarding | Annual training | Continuous, role-based |
| Metrics | None | After incidents | Basic dashboard | Predictive analytics |
| Security in SDLC | None | Final testing only | Integrated scanning | Shift-left, threat modeling |

Metrics by maturity level

Different maturity levels need different metrics:

Level 1-2 (Getting started):

  • Do we have basic controls? (yes/no checklist)
  • Are critical systems protected? (binary)
  • Can we detect obvious attacks? (test results)

Level 2-3 (Building foundation):

  • Vulnerability counts and trends
  • MFA adoption percentage
  • Training completion rate
  • Incident count and severity

Level 3-4 (Maturing):

  • MTTR, MTTD trends
  • Risk-weighted vulnerability scores
  • Leading indicators (security in design)
  • Cost per vulnerability remediated

Level 4-5 (Optimizing):

  • Predictive risk scores
  • Security ROI by initiative
  • Business-aligned risk metrics
  • Benchmark comparisons

Metrics gaming: what to watch for

Any metric that becomes a target will be gamed. Anticipate this and design countermeasures.

Common gaming patterns

| Gaming behavior | How it looks | The real problem |
|---|---|---|
| Severity downgrade | "This is Medium, not High" | Critical vulns slip through SLA |
| Won't fix epidemic | MTTR looks great, vulns still exist | Risk not actually reduced |
| Easy wins focus | 50 Low vulns closed, 2 Critical ignored | Wrong priorities |
| Denominator manipulation | "Only 5 systems need patches" (we excluded 50) | Coverage gaps hidden |
| Timing games | Vuln discovered on April 1, logged April 5 | MTTD looks better than reality |
| Definition drift | "That wasn't an incident, just an event" | Incident count looks good, learning lost |

How to prevent gaming

  1. Multiple correlated metrics: If MTTR is gamed by closing as "won't fix," also track "% of vulns accepted as risk" and require VP approval for risk acceptance.

  2. Outcome metrics alongside activity metrics: Track both "vulns closed" and "vulns found in production over time."

  3. Random audits: Periodically review a sample of closed vulns. Were they really fixed? Was severity accurate?

  4. Trend anomaly detection: Sudden improvements should trigger investigation. Did something really change, or did the measurement?

  5. Third-party validation: Periodic pentests reveal what your own metrics might hide.
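The trend anomaly check in point 4 doesn't need special tooling; even a simple z-score against the trailing quarters will surface suspicious jumps. A sketch with hypothetical incident counts:

```python
from statistics import mean, stdev

def is_anomalous(history, latest, threshold=2.0):
    """Flag when the latest value sits more than `threshold` standard
    deviations from the trailing history. A sudden 'improvement'
    deserves the same scrutiny as a regression."""
    if len(history) < 3 or stdev(history) == 0:
        return False  # not enough signal to judge
    return abs(latest - mean(history)) / stdev(history) > threshold

quarterly_incidents = [8, 7, 9, 8]  # hypothetical past quarters
print(is_anomalous(quarterly_incidents, 1))  # True: investigate the drop
print(is_anomalous(quarterly_incidents, 8))  # False: in line with history
```

A flagged value isn't proof of gaming; it's a prompt to ask whether the process changed or the measurement did.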

Expert insight: The goal isn't to catch people gaming metrics—it's to design metrics that are hard to game while still measuring what matters. When gaming happens, it often reveals perverse incentives you didn't intend to create. Fix the incentives, not just the behavior.

Real stories: when metrics made a difference

Story 1: The phishing chart that got budget

A Security Champion at a 200-person company tracked phishing simulation results for six months. The first simulation: 28% click rate. After monthly simulations and targeted training: 4% click rate.

When presenting to the CEO, she didn't talk about "security awareness." She showed:

  • "28% of employees would have given credentials to attackers"
  • "Now it's 4%—a 7x improvement"
  • "The industry average breach involving phishing costs $150K"
  • "We reduced that risk for $3,000 in training investment"

Result: CEO approved budget for a proper security awareness platform and praised the initiative at the all-hands meeting.

Story 2: The MTTR that revealed a broken process

A company tracked MTTR for vulnerabilities and was proud of their 5-day average. Then a Security Champion dug deeper and found:

  • Critical vulns: 3 days (good)
  • High vulns: 8 days (okay)
  • Medium vulns: 45 days (hidden by the average)

The averaging masked that medium vulnerabilities were being ignored. Once exposed, the team implemented SLAs by severity and the real issue got addressed.

Story 3: The missing leading indicator

A startup measured incidents and vulnerabilities but nothing about their development process. They had zero incidents (good!) but also:

  • No security in code reviews
  • No SAST or dependency scanning
  • No security training for developers

A new Security Champion added leading indicators:

  • % of repos with automated security scanning: 0%
  • % of developers with security training: 12%
  • % of PRs with security review: 0%

This data made a compelling case that their low incident count was luck, not security. Six months later, after adding these practices, they caught their first critical vulnerability in CI/CD—before it reached production.

Story 4: When lack of metrics caused disaster

A mid-size company had no security metrics. Leadership assumed "no news is good news." Then came a ransomware attack:

  • No visibility into patch status — 40% of systems were months behind
  • No access inventory — couldn't identify compromised accounts
  • No incident metrics — no baseline for "normal" detection time
  • Recovery took 3 weeks; cost: $800K

Post-incident, the CEO's first question: "Why didn't we know we were vulnerable?" The answer: "We didn't measure it."

Storytelling: how to present metrics effectively

Data doesn't speak for itself. You need to tell a story.

The pyramid principle

Structure your presentation like a pyramid:

  1. Top: The conclusion — Start with the answer
  2. Middle: Key supporting points — 3-4 main reasons
  3. Bottom: Details — Data that supports each point

BAD structure:
"Let me walk you through our 47 metrics, and at the end
I'll tell you what it means..."

GOOD structure:
"Our security posture is strong, with two areas needing
attention. Here's why I say that: [3 points with data]"

The "So what?" test

For every metric, answer "So what?" before leadership asks:

| Metric | So what? |
|---|---|
| "We have 12 high vulnerabilities" | "Each one could lead to a breach. Here's our plan to close them by [date]." |
| "Phishing clicks dropped to 4%" | "That's 7x better than when we started, and below industry average. Our investment is paying off." |
| "MTTR improved from 8 to 3 days" | "We're fixing critical issues before attackers can exploit them. Faster than 80% of companies our size." |

Visualization that works

| For this data | Use this | Why |
|---|---|---|
| Trend over time | Line chart | Shows direction and velocity |
| Current status vs. target | Gauge or progress bar | Instantly shows gap |
| Comparison across categories | Horizontal bar chart | Easy to compare |
| Multiple metrics summary | Traffic light table | Quick status scan |
| Before/after | Two numbers side by side | Clear improvement story |

Timing your report

When you present matters:

  • Monday morning: Low energy, high distraction. Avoid.
  • Friday afternoon: Everyone mentally checked out. Avoid.
  • After a security incident: High attention, but reactive mindset.
  • Budget planning season: Perfect timing for investment asks.
  • After a positive milestone: Build on momentum.

Expert tip: Schedule regular time (monthly or quarterly) rather than ad-hoc updates. Consistency builds credibility.

Handling skepticism

| Objection | Response |
|---|---|
| "These numbers seem too good" | "Here's the raw data and methodology. I'd rather under-claim than over-promise." |
| "Can we trust this data?" | "We cross-validate with [source]. Here's how we collect it." |
| "This seems complicated" | "The summary is simple: [one sentence]. Details are in appendix." |
| "Other priorities are more urgent" | "Understood. Here's the risk of deferring, quantified: [$$]" |

Common mistakes

  1. Measuring everything — Too many metrics creates noise; focus on what matters
  2. Vanity metrics — "Security training hours" doesn't equal "employees are secure"
  3. Infrequent updates — Quarterly metrics with no updates between creates surprise
  4. No context — Raw numbers without benchmarks or trends are meaningless
  5. Hiding bad news — Leadership finds out eventually; better from you
  6. Manual collection only — Can't scale; automate what you can
  7. No targets — Numbers without goals don't drive improvement
  8. Too technical — CVSS scores mean nothing to the CEO
  9. Not connecting to business — "So what?" is a valid question
  10. Beautiful dashboards, no action — Metrics should drive decisions

Workshop: create your security dashboard

Part 1: Define your metrics (1 hour)

  1. Identify key metrics:

    • Pick 5-7 metrics from the lists above
    • Ensure coverage: vulnerabilities, access, incidents, awareness
    • Choose metrics you can actually measure now
  2. Set baselines:

    • Collect current state for each metric
    • Document how you're collecting data
    • Note any gaps in data availability
  3. Set targets:

    • Research benchmarks for your industry
    • Set realistic improvement goals
    • Define timeline for each target

Part 2: Build the dashboard (2 hours)

  1. Create structure:

    • Use Google Sheets, Notion, or your tool of choice
    • Build executive summary view
    • Build detailed metrics view
  2. Populate with data:

    • Enter current metrics
    • Add historical data if available
    • Create trend visualizations
  3. Add automation:

    • Identify data sources
    • Set up queries or integrations where possible
    • Document manual collection steps

Part 3: Write quarterly report (1 hour)

  1. Use the template:

    • Customize for your company
    • Fill in current quarter data
    • Add narrative for key points
  2. Review and refine:

    • Have someone else read it
    • Check for jargon
    • Verify all numbers

Artifacts to produce

After this workshop, you should have:

  • List of 5-7 key security metrics
  • Baseline measurements for each metric
  • Targets and timelines for each metric
  • Executive dashboard (spreadsheet or tool)
  • Detailed metrics tracking sheet
  • First quarterly report draft
  • Data collection schedule/automation plan

Self-check questions

  1. What's the difference between a vanity metric and a real metric?
  2. Why should you show trends rather than just current numbers?
  3. What are the five core metric categories for Security Champions?
  4. How often should you report to executive leadership?
  5. What should you do when metrics show bad news?
  6. How do you set realistic security targets?
  7. Why is automation important for metrics collection?
  8. What should an executive dashboard include?
  9. How do you connect security metrics to business impact?
  10. What's the first step before setting improvement targets?

How to explain this to leadership

The pitch: "I want to create a simple dashboard that shows our security status at a glance. You'll see where we're strong, where we need work, and how we're improving. No surprises — you'll always know where we stand."

What you need:

  • Access to security tool dashboards (vulnerability scanner, IdP, etc.)
  • 2-3 hours per month for data collection and reporting
  • 30 minutes quarterly with leadership for review

The value:

  • Visibility into security investments
  • Early warning of problems
  • Evidence for audit/compliance
  • Data for strategic decisions

First deliverable: "I'll have a baseline dashboard ready in 2 weeks. After that, monthly updates with a quarterly review."

Conclusion

Measure to improve, not to report. A dashboard nobody looks at is just overhead. Three metrics that drive action every month are worth more than twenty metrics that get presented once a quarter and forgotten.

Start small. A spreadsheet is fine. The habit of tracking matters more than the tool you use to track it.

What's next

This completes the security culture section. Next: advanced topics and long-term strategy — risk management, compliance, vendor security, and scaling the security program.