Updates and vulnerability management

In 2017, Equifax got breached through an unpatched Apache Struts vulnerability. The patch had been available for two months. They just hadn't applied it. One of the largest data breaches in history — and entirely preventable with a timely update.

You're not Equifax, but the lesson applies. Most breaches don't come from sophisticated zero-day exploits. They happen because someone didn't update software with a known, published vulnerability. Attackers scan for outdated software constantly. When a critical CVE gets published, exploitation attempts start within hours.

This chapter sets up a system for knowing when your software has vulnerabilities and actually fixing them before attackers find them.

Why updates get neglected

Everyone knows updates are important. So why do they get skipped?

"It might break something." This is the big one. You update a library and suddenly half your tests fail. You patch the OS and a critical service stops working. After a few bad experiences, people get cautious. Updates pile up.

Nobody owns it. Developers assume ops handles updates. Ops assumes developers handle their dependencies. Nobody's checking the servers that "just work."

No visibility. You don't know what versions you're running. You don't know which CVEs affect you. You find out when something breaks or when a security researcher emails you.

It takes time. Running updates, testing, deploying — it's not glamorous work. When there's product work to ship, updates get pushed to "next sprint" indefinitely.

The Security Champion doesn't personally patch everything. But they create visibility and accountability so updates actually happen.

The two types of updates

Think of updates in two categories:

Operating system and infrastructure

These are your servers, containers, cloud services. Linux kernel updates, package updates, Docker base image updates.

Who typically owns this: DevOps, sysadmins, or whoever manages infrastructure.

Update strategy: Automatic where possible, scheduled windows for manual updates.

Risk if neglected: Remote code execution, privilege escalation, full system compromise.

Application dependencies

These are libraries and packages your code depends on. npm packages, Python libraries, Ruby gems, Go modules.

Who typically owns this: Developers, but often nobody specifically.

Update strategy: Automated PRs from Dependabot/Renovate, regular review cycles.

Risk if neglected: Supply chain attacks, known vulnerabilities in libraries you use.

Both matter. A fully patched OS doesn't help if your application has a vulnerable version of Log4j.

Setting up automatic updates

Automatic updates sound scary until you realize the alternative is no updates at all.

Linux servers

For Ubuntu/Debian servers, enable unattended-upgrades:

sudo apt install unattended-upgrades
sudo dpkg-reconfigure -plow unattended-upgrades

This installs security updates automatically. You can configure what gets auto-updated in /etc/apt/apt.conf.d/50unattended-upgrades:

Unattended-Upgrade::Allowed-Origins {
        "${distro_id}:${distro_codename}-security";
        "${distro_id}ESMApps:${distro_codename}-apps-security";
};

// Automatically reboot if required
Unattended-Upgrade::Automatic-Reboot "true";
Unattended-Upgrade::Automatic-Reboot-Time "03:00";

For RHEL/CentOS/Rocky:

sudo dnf install dnf-automatic
sudo systemctl enable --now dnf-automatic-install.timer

Should you auto-reboot? For non-critical servers, yes. For production, you probably want to schedule reboots during maintenance windows. But kernel updates don't take effect until reboot — if you never reboot, you're running vulnerable kernels.

Docker containers

Your containers inherit vulnerabilities from their base images. If you're running python:3.9 from two years ago, you're running with two years of unpatched vulnerabilities.

Strategy 1: Regular rebuilds

Rebuild and redeploy containers weekly, even if your code hasn't changed. This pulls fresh base images with latest patches.

Add to your CI pipeline:

# GitHub Actions example
on:
  schedule:
    - cron: '0 3 * * 0' # Every Sunday at 3 AM
  push:
    branches: [main]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build with fresh base image
        run: docker build --pull -t myapp:latest .

The --pull flag forces Docker to check for updated base images.

Strategy 2: Use minimal base images

Fewer packages = fewer vulnerabilities. Consider:

  • alpine instead of ubuntu for simple applications
  • distroless images for production (no shell, minimal attack surface)
  • Language-specific slim images (python:3.11-slim, node:20-slim)
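A multi-stage build combines both ideas: build tools stay in the build stage, and only the runtime ships. This sketch assumes a Python app with a requirements.txt; adjust the stages for your stack:

```dockerfile
# Build stage: full image with compilers and headers
FROM python:3.11 AS build
WORKDIR /app
COPY requirements.txt .
RUN pip install --prefix=/install -r requirements.txt

# Runtime stage: slim image, fewer packages, smaller attack surface
FROM python:3.11-slim
COPY --from=build /install /usr/local
WORKDIR /app
COPY . .
CMD ["python", "app.py"]
```

The runtime stage never sees gcc or build headers, so vulnerabilities in those packages never reach production.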

Cloud services

AWS, GCP, and Azure handle infrastructure updates for managed services. But you're still responsible for:

  • EC2/Compute Engine/VM instances — these are your servers, update them
  • Container images in ECS/EKS/GKE — rebuild regularly
  • Lambda/Cloud Functions runtime versions — upgrade when new runtimes release
  • Managed databases — some updates require scheduled maintenance windows

Check your cloud console for outdated resources. AWS has Trusted Advisor (free tier limited), GCP has Security Command Center.

Dependency updates with Dependabot (or Renovate)

GitHub's Dependabot is free and handles the most tedious part of dependency updates: knowing they exist. If you're on GitLab, Bitbucket, or want more configuration options, Renovate is an excellent alternative with similar functionality.

Setting up Dependabot

Create .github/dependabot.yml in your repository:

version: 2
updates:
  # JavaScript/npm
  - package-ecosystem: "npm"
    directory: "/"
    schedule:
      interval: "weekly"
    open-pull-requests-limit: 10

  # Python
  - package-ecosystem: "pip"
    directory: "/"
    schedule:
      interval: "weekly"

  # Docker
  - package-ecosystem: "docker"
    directory: "/"
    schedule:
      interval: "weekly"

  # GitHub Actions
  - package-ecosystem: "github-actions"
    directory: "/"
    schedule:
      interval: "weekly"

Dependabot will now create pull requests when dependencies have updates.

Making Dependabot useful

The problem with Dependabot is PR fatigue. It opens 15 PRs a week and people start ignoring them.

Group non-security updates:

- package-ecosystem: "npm"
  directory: "/"
  schedule:
    interval: "weekly"
  groups:
    dev-dependencies:
      patterns:
        - "*"
      update-types:
        - "minor"
        - "patch"

This groups minor and patch updates into single PRs, reducing noise.

Prioritize security updates:

Dependabot security updates are separate from version updates. Enable them in your repository settings (Settings → Security → Dependabot alerts). These create PRs specifically for known vulnerabilities.

Set up auto-merge for low-risk updates:

# In your CI workflow
- name: Auto-merge Dependabot PRs
  if: github.actor == 'dependabot[bot]'
  run: gh pr merge --auto --squash "$PR_URL"
  env:
    PR_URL: ${{ github.event.pull_request.html_url }}
    GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}

Only do this if you have good test coverage. Otherwise you're auto-deploying breakage.
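In full form, such a workflow looks roughly like this — a sketch based on GitHub's documented pattern (their dependabot/fetch-metadata action can further restrict auto-merge to patch-level bumps):

```yaml
name: Dependabot auto-merge
on: pull_request

permissions:
  contents: write
  pull-requests: write

jobs:
  automerge:
    if: github.actor == 'dependabot[bot]'
    runs-on: ubuntu-latest
    steps:
      - name: Enable auto-merge for Dependabot PRs
        run: gh pr merge --auto --squash "$PR_URL"
        env:
          PR_URL: ${{ github.event.pull_request.html_url }}
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```

`--auto` means the merge only happens once branch protection checks pass, so your test suite still gates every merge.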

Vulnerability scanning beyond Dependabot

Dependabot checks your declared dependencies. But there's more to scan.

Snyk

Snyk offers a free tier that includes:

  • Dependency scanning (like Dependabot, but with more context)
  • Container image scanning
  • Infrastructure as Code scanning

For small teams, the free tier is usually enough. Integrate it into your CI:

# GitHub Actions
- name: Run Snyk
  uses: snyk/actions/node@master # or /python, /docker, etc.
  env:
    SNYK_TOKEN: ${{ secrets.SNYK_TOKEN }}

Snyk gives you severity ratings and exploitability context that helps prioritize fixes.

Trivy for containers

Trivy is free and scans container images for OS and library vulnerabilities:

# Scan a local image
trivy image myapp:latest

# Scan in CI
trivy image --exit-code 1 --severity HIGH,CRITICAL myapp:latest

The --exit-code 1 makes your pipeline fail if high/critical vulnerabilities are found. Useful for preventing vulnerable images from deploying.
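If your CI runs on GitHub Actions, the official Trivy action wraps the same scan. The inputs shown follow the aquasecurity/trivy-action README; verify them against the version you pin:

```yaml
- name: Scan image with Trivy
  uses: aquasecurity/trivy-action@master
  with:
    image-ref: myapp:latest
    severity: HIGH,CRITICAL
    exit-code: '1'
```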

GitHub security features

GitHub offers several free security features:

  • Dependabot alerts — Notifications when dependencies have known vulnerabilities
  • Code scanning — SAST (static analysis) using CodeQL
  • Secret scanning — Finds accidentally committed credentials

Enable all of these in Settings → Security. They're free for public repos and included with GitHub Advanced Security for private repos.

Tools for vulnerability management

Here's a comprehensive list of tools, organized by category and cost.

Dependency scanning

Free/Open source:

  • Dependabot — Built into GitHub, scans dependencies and opens PRs. Zero setup for GitHub repos.
  • Renovate — More flexible than Dependabot, supports more platforms (GitLab, Bitbucket, Azure DevOps). Self-hosted or use Mend's free hosted version.
  • OWASP Dependency-Check — Scans Java, .NET, Node.js, Python, Ruby. CLI tool, good for CI/CD. Completely free.
  • Grype — Fast vulnerability scanner from Anchore. Scans container images and filesystems. Open source.
  • Safety — Python-specific dependency checker. Free CLI, checks against PyUp.io database.
  • npm audit / yarn audit — Built into npm and yarn. Run npm audit to see vulnerabilities in your node_modules.

Commercial with free tiers:

  • Snyk — Free tier: 200 tests/month, unlimited for open source. Covers dependencies, containers, IaC.
  • Socket — Focuses on supply chain attacks (typosquatting, malicious packages). Free for open source.
  • Mend (formerly WhiteSource) — Free for open source projects via Mend Bolt. Enterprise features for paid.

Enterprise:

  • Sonatype Nexus Lifecycle — Deep component intelligence, policy enforcement. Expensive but comprehensive.
  • Veracode SCA — Part of Veracode's application security platform.
  • Black Duck — From Synopsys. Full license compliance and security analysis.

Container security

Free/Open source:

  • Trivy — The go-to free scanner. Covers images, filesystems, git repos, Kubernetes. Active development.
  • Grype — Faster than Trivy in some benchmarks, good vulnerability matching.
  • Clair — From Red Hat/Quay. Designed for container registries. More complex setup.
  • Docker Scout — Built into Docker Desktop and Docker Hub. Free tier available.
  • Cosign — Signs and verifies container images. Part of Sigstore project.

Infrastructure vulnerability scanning

Free/Open source:

  • OpenVAS — Full-featured vulnerability scanner. Community edition is free. Scans networks and hosts for vulnerabilities.
  • Nuclei — Fast, template-based scanner. Great for custom checks. Very active community.
  • Nmap — Not a vulnerability scanner per se, but essential for discovery. Use with scripts for basic vuln detection.
  • Lynis — Security auditing for Linux/Unix. Checks configuration, not vulnerabilities.

Commercial:

  • Nessus — Industry standard. Essentials version (16 IPs) is free. Professional starts ~$3,000/year.
  • Qualys — Cloud-based scanning platform. Enterprise pricing.
  • Rapid7 InsightVM — Vulnerability management with remediation tracking.
  • AWS Inspector — Scans EC2 and container images in AWS. Pay per scan.

Patch management

Free/Open source:

  • Ansible — Automate patching across servers. Free and open source. Steep learning curve but powerful.
  • Puppet / Chef — Configuration management that can enforce patch levels. Open source cores.

Commercial:

  • Automox — Cloud-native patch management. Works across Windows, macOS, Linux. Pricing per endpoint.
  • ManageEngine Patch Manager Plus — On-prem or cloud. Supports many OSes and third-party apps.
  • Ivanti — Enterprise patch management.

SBOM (Software Bill of Materials)

SBOMs are becoming essential for compliance and supply chain security. They list all components in your software.

Free/Open source:

  • Syft — Generates SBOMs from container images and filesystems. Outputs SPDX and CycloneDX formats.
  • Trivy — Also generates SBOMs. Use trivy image --format spdx myimage:latest.
  • CycloneDX — SBOM standard with tools for many languages (Maven, npm, pip, etc.).
  • Dependency-Track — Open source platform for tracking SBOMs and vulnerabilities across your portfolio. Self-hosted.

Commercial:

  • Anchore Enterprise — SBOM generation plus policy enforcement and compliance.

What to choose for a small company

If you're just starting, here's a practical stack:

  1. Dependabot or Renovate for dependency updates (free)
  2. Trivy for container scanning (free)
  3. GitHub security features for code scanning and secret detection (free for public repos)
  4. OpenVAS or Nessus Essentials for infrastructure scanning (free)
  5. Ansible for patch automation if you have many servers (free)
  6. Syft for SBOM generation if you need it for compliance (free)

Add commercial tools when:

  • You need centralized dashboards across many repos
  • Compliance requires specific reporting
  • You want prioritization intelligence (which vulns are actually exploitable)
  • You're scaling beyond what free tiers support

Building a CVE monitoring process

Knowing about vulnerabilities in your specific stack requires more than automated scanning.

Know your stack

First, document what you're running. At minimum:

  • Programming languages and versions (Python 3.11, Node 20, etc.)
  • Frameworks (Django 4.2, Express 4.18, React 18)
  • Databases (PostgreSQL 15, MongoDB 7)
  • Infrastructure (Ubuntu 22.04, nginx 1.24, Redis 7)
  • Critical SaaS dependencies (Stripe SDK, AWS SDK, etc.)

This should take 30 minutes to compile. Store it somewhere everyone can access — your SaaS inventory spreadsheet is a good place.
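One lightweight option is a YAML file in a shared repo next to your other docs — the file name and structure here are just a suggestion:

```yaml
# stack.yml — a simple technology inventory (structure is illustrative)
languages:
  python: "3.11"
  node: "20"
frameworks:
  django: "4.2"
  react: "18"
databases:
  postgresql: "15"
infrastructure:
  ubuntu: "22.04"
  nginx: "1.24"
  redis: "7"
saas_sdks:
  - stripe
  - aws-sdk
```

Being machine-readable means you can later grep it automatically when a CVE names a product and version range.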

Set up alerts for your stack

Option 1: CVE databases directly

Subscribe to alerts from:

  • NVD — The National Vulnerability Database, comprehensive but noisy
  • CISA KEV — Known Exploited Vulnerabilities, smaller list of actively exploited issues

Option 2: Vendor security pages

Most major projects have security announcement lists.

Subscribe to the RSS feeds or mailing lists for your critical dependencies.

Option 3: Aggregated alerts

Services like OpenCVE let you follow only the vendors and products in your stack and alert you when new CVEs affect them — far less noisy than watching the raw NVD feed.

The weekly CVE check

Set a recurring calendar event: "CVE Review" — 30 minutes, once a week.

During this time:

  1. Check Dependabot/Snyk alerts that opened this week
  2. Scan CISA KEV for new entries matching your stack
  3. Skim security news for major vulnerabilities (a quick RSS check or Twitter/Mastodon follow of security researchers)
  4. Prioritize what needs patching this week vs. next month

This isn't about reading every CVE. It's about not being surprised by the critical ones.

Prioritizing what to update

Not every vulnerability needs immediate action. Here's a practical prioritization framework:

Patch immediately (within 24-48 hours)

  • Critical/High severity AND actively exploited in the wild
  • Affects internet-facing systems
  • CISA adds it to the KEV list

Examples: Log4Shell, ProxyLogon, recent zero-days in browsers.

Patch this week

  • Critical/High severity, not yet exploited
  • Affects internal systems with sensitive data
  • Has a working proof-of-concept exploit published

Patch this month

  • Medium severity
  • Requires specific conditions to exploit
  • Affects non-critical systems

Evaluate during regular cycles

  • Low severity
  • Theoretical vulnerabilities without practical exploits
  • Defense-in-depth concerns

When in doubt, look at the CVSS score and whether there's active exploitation. A theoretical vulnerability with no known exploit is less urgent than a moderate vulnerability with a Metasploit module.

How to read a CVE and assess real risk

When a CVE appears in your alerts, here's how to quickly assess whether to panic:

1. Check the CVSS score and vector

CVSS scores range from 0-10. But the score alone isn't enough — look at the vector:

CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H

Key parts:

  • AV (Attack Vector): N=Network (worst), A=Adjacent, L=Local, P=Physical
  • AC (Attack Complexity): L=Low (easy to exploit), H=High (needs specific conditions)
  • PR (Privileges Required): N=None (worst), L=Low, H=High
  • UI (User Interaction): N=None (worst), R=Required

A network-accessible, low-complexity, no-auth vulnerability is worse than a high-complexity local exploit requiring admin access — even if both score 8.0.
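If you triage vectors often, a tiny script can decode the key metrics into readable form. This is purely illustrative — decode_cvss is a made-up helper, and real calculators (NVD's, for example) cover every metric and compute the score:

```shell
# Decode the four triage-relevant metrics of a CVSS 3.x vector string.
decode_cvss() {
  echo "$1" | tr '/' '\n' | while IFS=: read -r metric value; do
    case "$metric:$value" in
      AV:N) echo "Attack Vector: Network" ;;
      AV:A) echo "Attack Vector: Adjacent" ;;
      AV:L) echo "Attack Vector: Local" ;;
      AV:P) echo "Attack Vector: Physical" ;;
      AC:L) echo "Attack Complexity: Low" ;;
      AC:H) echo "Attack Complexity: High" ;;
      PR:N) echo "Privileges Required: None" ;;
      PR:L) echo "Privileges Required: Low" ;;
      PR:H) echo "Privileges Required: High" ;;
      UI:N) echo "User Interaction: None" ;;
      UI:R) echo "User Interaction: Required" ;;
    esac
  done
}

decode_cvss "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H"
```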

2. Check for known exploitation

  • Is it in CISA KEV? → Attackers are actively using it
  • Is there a Metasploit module? → Script kiddies can exploit it
  • Are there reports of exploitation in the wild? → Check Twitter/Mastodon, security news

A CVE with active exploitation is urgent regardless of CVSS score.

3. Check if you're actually vulnerable

  • Do you use the affected component?
  • Is the vulnerable function/feature enabled?
  • Is it exposed to the network or only internal?

Example: A critical vulnerability in Apache's mod_proxy doesn't affect you if you use nginx.

4. Check if you're already protected

Sometimes other defenses mitigate the risk:

  • WAF rules blocking the attack
  • Network segmentation limiting exposure
  • The vulnerable service isn't internet-accessible

This doesn't mean "don't patch," but it affects priority.

5. Read the advisory, not just the headline

Security news often sensationalizes. The actual vendor advisory tells you:

  • Exactly which versions are affected
  • What the fix is
  • Workarounds if you can't update immediately
  • Whether the vulnerability requires specific configuration

Creating accountability

Updates don't happen without ownership.

Assign update responsibilities

In your SaaS inventory or a separate document, assign owners:

System | Update owner | Update frequency | Last updated
Ubuntu servers | DevOps lead | Weekly auto + monthly manual | 2024-01-15
Docker base images | Each team | Weekly rebuilds | Continuous
npm dependencies | Frontend team | Weekly Dependabot review | 2024-01-12
Python dependencies | Backend team | Weekly Dependabot review | 2024-01-12
PostgreSQL | DevOps lead | Quarterly or critical patches | 2024-01-01

Track update debt

Keep a simple log of overdue updates:

Component | Current version | Latest version | Days behind | Blocker
React | 17.0.2 | 18.2.0 | 180 | Breaking changes need testing
lodash | 4.17.15 | 4.17.21 | 90 | None, just neglected

Review this monthly. Things that sit for 90+ days need a decision: update, accept the risk, or replace the dependency.

The monthly update meeting

15 minutes, once a month. Attendees: Security Champion, DevOps lead, one dev per team.

Agenda:

  1. Review critical CVEs from the past month (2 min)
  2. Check update debt log for anything overdue (3 min)
  3. Decide what gets prioritized next month (5 min)
  4. Review any update failures or rollbacks (5 min)

That's it. Regular visibility prevents updates from falling through the cracks.

Practical tips that save time and pain

These are the tricks that aren't in the official documentation but make your life easier.

Verify auto-updates are actually working

Set up auto-updates, then actually check they're running. A common failure: unattended-upgrades installed but never executing.

# Ubuntu/Debian: check last run
tail -n 50 /var/log/unattended-upgrades/unattended-upgrades.log

# Check if it's scheduled
systemctl status apt-daily-upgrade.timer

# RHEL/CentOS: check dnf-automatic
journalctl -u dnf-automatic-install.service --since "7 days ago"

If the logs are empty or the timer is disabled, your "automatic" updates aren't happening.

Quick version audit script

Before you can update, you need to know what you're running. This one-liner gives you a quick snapshot:

# Get all installed packages with versions (Debian/Ubuntu)
dpkg-query -W -f='${Package} ${Version}\n' > installed-packages.txt

# For Python projects
pip freeze > python-deps.txt

# For Node projects
npm list --depth=0 > node-deps.txt

# For Docker base images in your codebase
grep -r "FROM " --include="Dockerfile*" . | sort -u

Store these snapshots. When a CVE drops, you can quickly grep to see if you're affected.
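For example, when an advisory lands for lodash below some fixed version, one grep against the snapshot answers "are we affected?" in seconds. The file name and contents here are illustrative — in practice the snapshot comes from the commands above:

```shell
# Build a snapshot (normally from `npm list --depth=0`), then query it
printf 'express 4.18.2\nlodash 4.17.15\n' > node-deps.txt

# Advisory says lodash below 4.17.21 is vulnerable — what are we running?
grep '^lodash' node-deps.txt
```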

The Friday rule

Never deploy updates on Friday afternoon. If something breaks, you either fix it over the weekend or leave it broken until Monday.

Best times to patch:

  • Tuesday or Wednesday morning (time to recover before weekend)
  • After a major release, not before (don't add variables before big deploys)
  • During low-traffic hours for production systems

Set your auto-update schedules accordingly. That 03:00 reboot time in the config examples isn't random.

Canary updates for production

Don't update all production servers at once. If you have multiple servers:

  1. Update one server first (the canary)
  2. Monitor for 15-30 minutes
  3. If no issues, update the rest
  4. If problems, rollback the canary before touching others

For Kubernetes, use rolling updates with readiness probes. For traditional servers, Ansible's serial parameter lets you update in batches.
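A sketch of that batched pattern as an Ansible play — the host group and task are assumptions, but serial and max_fail_percentage are standard playbook keywords:

```yaml
- hosts: webservers
  serial:
    - 1         # canary: one host first
    - "100%"    # then everything remaining
  max_fail_percentage: 0   # abort the run if any host fails
  tasks:
    - name: Apply pending updates
      ansible.builtin.apt:
        upgrade: dist
        update_cache: yes
```

If the canary batch fails, Ansible stops before touching the remaining hosts, which is exactly the manual canary process automated.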

Emergency rollback checklist

Before any significant update, know how to rollback:

For packages:

# Debian/Ubuntu: downgrade to specific version
apt install package=1.2.3-4

# See available versions
apt-cache policy package

For containers:

# Keep previous image tagged
docker tag myapp:latest myapp:previous
docker build -t myapp:latest .

# Rollback = redeploy previous tag
docker run myapp:previous

For databases: Take a snapshot before any database version upgrade. Seriously. Every time. Database rollbacks are painful without snapshots.

For cloud VMs: Snapshot the instance before major OS updates. AWS, GCP, and Azure all support this. It's cheap insurance.

Make changelogs your friend

When Dependabot or Renovate opens a PR, don't just merge blindly. Spend 60 seconds:

  1. Click the changelog link (usually in the PR description)
  2. Skim for "BREAKING CHANGES" or "deprecation"
  3. Check if it's a security fix (prioritize these)

Most updates are fine. But the one time you skip this and break production, you'll wish you'd spent the minute.

Slack/Teams alerts for critical CVEs

Set up automated alerts so you don't have to remember to check:

Option 1: RSS to Slack

Use IFTTT or Zapier to pipe security RSS feeds (vendor advisories, CISA alerts) into a Slack channel.

Option 2: GitHub to Slack

GitHub can send Dependabot alerts to Slack via webhooks. Settings → Integrations → Slack → Subscribe to security alerts.

Option 3: Snyk Slack integration

If you use Snyk, they have native Slack integration. New vulnerabilities in your projects post to your channel automatically.

The "vendor security page" bookmark folder

Create a browser bookmark folder with security pages for your critical dependencies. When a big CVE drops, you can quickly check if your vendors have updates:

  • Your Linux distro's security tracker
  • Your language's security page (Python, Node, etc.)
  • Your framework's security announcements
  • Your cloud provider's security bulletins

Takes 2 minutes to set up. Saves 20 minutes of googling during an incident.

Lock files: the unsung hero

Make sure you're committing lock files:

  • package-lock.json or yarn.lock for Node
  • Pipfile.lock or poetry.lock for Python
  • go.sum for Go
  • Gemfile.lock for Ruby

Lock files ensure everyone installs the exact same versions. Without them, npm install on your laptop might get different versions than npm install in CI.

Dependabot updates lock files automatically. If you're manually updating, always run the install command to regenerate the lock file.

Testing updates without full CI

For quick local testing of dependency updates:

# Create a branch
git checkout -b test-update

# Update the dependency
npm update lodash # or pip install --upgrade package

# Run tests locally
npm test

# If tests pass, commit and push
# If tests fail, reset
git checkout -- package.json package-lock.json

This is faster than waiting for CI for every small update. Use it for routine updates; use full CI for security patches.

The "known good" baseline

Keep a record of which versions work together. When everything is running smoothly:

# Capture working state
npm list --depth=0 > known-good-$(date +%Y%m%d).txt
pip freeze > known-good-python-$(date +%Y%m%d).txt

When debugging update issues, you can diff against your known-good state to see what changed.

Dealing with abandoned dependencies

Some dependencies stop getting security updates. Signs of abandonment:

  • No commits in 12+ months
  • No response to security issues
  • Maintainer announced end-of-life

For these:

  1. Check if there's an actively maintained fork
  2. Consider replacing with an alternative
  3. If you must keep it, audit the code yourself or isolate it

The npm ecosystem has particular problems with abandoned packages. Tools like Socket specifically flag these risks.

Workshop: set up updates and vulnerability monitoring

Block 2-3 hours for this.

Part 1: Enable automatic updates (45 minutes)

  1. Audit your servers for current update configuration
  2. Enable unattended-upgrades on Ubuntu/Debian servers
  3. Enable dnf-automatic on RHEL/CentOS/Rocky servers
  4. Verify auto-updates are working (check logs)
  5. Document any servers where you chose NOT to enable auto-updates and why

Part 2: Set up Dependabot (30 minutes)

  1. Create .github/dependabot.yml in your main repositories
  2. Configure for all relevant package ecosystems
  3. Enable Dependabot security alerts in repository settings
  4. Review any existing Dependabot PRs and merge or close them

Part 3: Add container scanning (30 minutes)

  1. Add Trivy or Snyk to one CI pipeline
  2. Configure it to fail on HIGH/CRITICAL vulnerabilities
  3. Scan your current images and document findings
  4. Fix or document the critical issues found

Part 4: Build your CVE monitoring process (30 minutes)

  1. Document your technology stack (languages, frameworks, infrastructure)
  2. Subscribe to security announcements for your 5 most critical dependencies
  3. Set up a weekly calendar reminder for CVE review
  4. Create your update ownership table

Deliverables:

  • Automatic updates enabled on all applicable servers
  • Dependabot configured for all repositories
  • Container scanning in at least one CI pipeline
  • Technology stack documented
  • Update ownership assigned
  • Weekly CVE review scheduled

Talking to leadership

If someone asks why you spent time on this:

"I set up automatic security updates for our servers and automated vulnerability scanning for our code and containers. Most breaches happen through known, patched vulnerabilities — attackers scan for outdated software. Now we'll know when we have vulnerable components and have a process to patch them before attackers find them. This took a few hours to set up and runs automatically."

Short version: "I automated our security patching so we don't get breached through known vulnerabilities."

Self-check: did you actually do it?

Before moving on, verify you've completed these items.

Automatic updates

  • Unattended-upgrades or dnf-automatic enabled on servers
  • Verified updates are actually running (checked logs)
  • Documented any servers excluded and why
  • Container images set to rebuild weekly (or manual process documented)

Dependency scanning

  • Dependabot enabled on main repositories
  • Configured for all relevant package ecosystems (npm, pip, docker, etc.)
  • Reviewed and actioned existing Dependabot PRs
  • Snyk or Trivy integrated into at least one CI pipeline

CVE monitoring

  • Technology stack documented (languages, frameworks, versions)
  • Subscribed to security announcements for critical dependencies
  • Weekly CVE review calendar reminder set
  • Checked CISA KEV against your stack

Accountability

  • Update ownership assigned for each system type
  • Update debt documented (what's behind and why)
  • Monthly update review scheduled

If you can check off at least 12 of these 15 items, you're ready to move on.

What's next

You now have visibility into your vulnerabilities and a process for patching them. The obvious holes are being closed automatically.

Next chapter: email security — SPF, DKIM, DMARC, and protecting your company from phishing and spoofing attacks.