CI/CD pipeline security

Your CI/CD pipeline is the last line of defense before code reaches production. Every commit, every pull request, every deployment passes through it. That makes it the perfect place to catch security issues — vulnerabilities in your code, dangerous dependencies, misconfigurations, leaked secrets.

The goal isn't to block every build with security warnings. It's to catch the critical issues automatically, give developers fast feedback, and stop obvious problems from shipping. A well-configured pipeline finds SQL injection before code review, flags vulnerable dependencies before merge, and blocks known-bad patterns before they reach production.

This chapter covers the five types of security scanning you need in CI/CD: static analysis (SAST), dependency scanning (SCA), software bill of materials (SBOM), infrastructure as code (IaC) scanning, and dynamic testing (DAST). All using free tools that work with GitHub and GitLab.

Why this matters for small companies

Security scanning sounds like enterprise overhead. It's not. For small teams, automated scanning is even more valuable — you don't have dedicated security reviewers, so the pipeline has to do that work.

You can't review everything manually. A team of five developers might produce dozens of commits per day. Nobody has time to security-review every change. Automated scanners catch the obvious issues so humans can focus on architecture and logic.

Vulnerabilities ship faster in small teams. No change advisory boards, no multi-stage approvals, no waiting. Code goes from laptop to production in hours. That speed is an advantage, but it means vulnerabilities move fast too. Scanning in CI/CD is your safety net.

Dependency risk is real. Your application might be 10,000 lines of code you wrote and 500,000 lines of open-source dependencies you didn't. One vulnerable package in that stack can compromise everything. Snyk's 2024 report found that 80% of applications contain at least one known vulnerability in their dependencies.

Free tools are production-ready. Semgrep, Trivy, Dependabot, npm audit — these aren't toy projects. They're used by thousands of companies and catch real vulnerabilities. You don't need a $50,000 security platform to scan your code.

What attackers look for

When attackers target an application, they often start with the easy wins:

  1. Known CVEs in dependencies — public exploits exist, just need to find a target using the vulnerable version
  2. Exposed secrets — API keys, database passwords, cloud credentials in code or CI logs
  3. Injection vulnerabilities — SQL injection, command injection, XSS that automated tools detect reliably
  4. Misconfigured infrastructure — open S3 buckets, permissive CORS, missing security headers

Automated scanning catches most of these. Attackers use automated tools to find them — you should use automated tools to find them first.

Types of security scanning

Five categories cover most of what you need:

| Type | What it does | When it runs | Examples |
|---|---|---|---|
| SAST (Static Application Security Testing) | Analyzes source code for vulnerabilities | Every commit, PR | Semgrep, CodeQL, Bandit |
| SCA (Software Composition Analysis) | Finds vulnerable dependencies | Every build | Dependabot, Snyk, Trivy |
| SBOM (Software Bill of Materials) | Inventories all components, tracks licenses | Every release | Syft, Grype, Dependency-Track |
| IaC Scanning | Finds misconfigurations in infrastructure code | Every commit | Checkov, tfsec, KICS |
| DAST (Dynamic Application Security Testing) | Tests running application for vulnerabilities | Staging deploys | OWASP ZAP, Nuclei |

Each catches different problems:

  • SAST finds issues in code you write: SQL injection, XSS, insecure crypto
  • SCA finds issues in code you import: vulnerable libraries, outdated packages
  • SBOM tracks what's in your software: component inventory, license compliance, vulnerability correlation
  • IaC finds issues in infrastructure: open S3 buckets, overly permissive IAM, unencrypted storage
  • DAST finds issues in how it all runs: misconfigured servers, exposed endpoints, authentication bypasses

You need all five. They're complementary, not alternatives.

SAST: static code analysis

SAST tools read your source code and look for patterns that indicate vulnerabilities. No runtime, no deployment — just code analysis.

What SAST catches

| Vulnerability | How SAST finds it |
|---|---|
| SQL injection | String concatenation in SQL queries |
| XSS | Unsanitized output in HTML templates |
| Command injection | User input in shell commands |
| Path traversal | User input in file operations |
| Hardcoded secrets | Patterns matching API keys, passwords |
| Insecure crypto | Use of MD5, SHA1 for passwords, weak random |
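
To make the first row concrete, here is a minimal Python sketch (a hypothetical illustration, not taken from any scanner's docs) of the exact pattern SAST tools flag — user input interpolated into a query — next to the parameterized version that passes the same scan:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])

user_id = "1 OR 1=1"  # attacker-controlled request parameter

# Flagged by SAST: the input becomes part of the SQL text,
# so "1 OR 1=1" matches every row in the table
leaked = conn.execute(f"SELECT name FROM users WHERE id = {user_id}").fetchall()

# Passes the scan: a parameterized query treats the input as data, not SQL
safe = conn.execute("SELECT name FROM users WHERE id = ?", (user_id,)).fetchall()

print(len(leaked), len(safe))  # 2 0
```

The scanner doesn't need to run this code — the string-concatenation-into-SQL pattern is visible in the source itself, which is why SAST catches it on every commit.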

Tool: Semgrep

Semgrep is the go-to SAST tool for small teams. It's fast, has good rules out of the box, and integrates with everything.

Install locally:

# macOS
brew install semgrep

# pip
pip install semgrep

Run a scan:

# Scan with default rules
semgrep --config auto .

# Scan with OWASP Top 10 rules
semgrep --config "p/owasp-top-ten" .

# Scan specific language
semgrep --config "p/python" .

GitHub Actions integration:

name: Security Scan

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

jobs:
  semgrep:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Run Semgrep
        uses: semgrep/semgrep-action@v1
        with:
          config: >-
            p/security-audit
            p/secrets

GitLab CI integration:

semgrep:
  stage: test
  image: semgrep/semgrep
  script:
    - semgrep ci --config auto
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH

Tool: CodeQL (GitHub only)

GitHub's CodeQL is powerful but only works on GitHub. It's free for public repos and included in GitHub Advanced Security for private repos.

Enable in repository settings:

  1. Go to Settings → Security → Code security and analysis
  2. Enable "Code scanning"
  3. Choose "CodeQL analysis"
  4. Select languages to scan

Or configure manually:

name: CodeQL

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]
  schedule:
    - cron: '0 6 * * 1' # Weekly Monday 6 AM

jobs:
  analyze:
    runs-on: ubuntu-latest
    permissions:
      security-events: write

    strategy:
      matrix:
        language: ['javascript', 'python']

    steps:
      - uses: actions/checkout@v4

      - name: Initialize CodeQL
        uses: github/codeql-action/init@v3
        with:
          languages: ${{ matrix.language }}

      - name: Perform CodeQL Analysis
        uses: github/codeql-action/analyze@v3

Language-specific tools

| Language | Tool | Install | Run |
|---|---|---|---|
| Python | Bandit | pip install bandit | bandit -r src/ |
| JavaScript | ESLint + security plugin | npm i eslint-plugin-security | eslint --ext .js src/ |
| Go | gosec | go install github.com/securego/gosec/v2/cmd/gosec@latest | gosec ./... |
| Ruby | Brakeman | gem install brakeman | brakeman |
| PHP | Psalm | composer require --dev vimeo/psalm | ./vendor/bin/psalm |

Handling SAST findings

Not every finding is a real vulnerability. SAST tools have false positives. Here's how to handle them:

Triage by severity:

| Severity | Action | Timeline |
|---|---|---|
| Critical/High | Block merge, fix immediately | Same day |
| Medium | Fix before release | This sprint |
| Low | Track, fix when convenient | Backlog |

Suppress false positives properly:

# Semgrep: inline ignore
password = get_from_vault() # nosemgrep: hardcoded-password

# Bandit: inline ignore
subprocess.run(cmd, shell=True) # nosec B602

Or use a configuration file:

# .semgrep.yml
rules:
  - id: my-custom-rule
    # ... rule definition

    # Ignore specific paths for this rule
    paths:
      exclude:
        - "tests/*"
        - "vendor/*"
        - "*.test.js"

Don't suppress everything. If you're suppressing more than 10% of findings, either your code has problems or you're using the wrong ruleset.

SCA: dependency scanning

Your dependencies are an attack surface. SCA tools scan your package manifests (package.json, requirements.txt, go.mod) and flag packages with known vulnerabilities.

What SCA catches

  • Known CVEs — published vulnerabilities in specific package versions
  • License issues — GPL dependencies in proprietary code, license conflicts
  • Outdated packages — old versions missing security fixes
  • Malicious packages — typosquatting, compromised maintainers
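
Under the hood, the first step of any SCA tool is mundane: parse the manifest into package/version pairs, then match them against an advisory database. A toy Python sketch of that flow — the advisory data here is invented purely for illustration:

```python
import re

def parse_requirements(text):
    """Parse pinned requirements.txt lines into (name, version) pairs."""
    pkgs = []
    for line in text.splitlines():
        line = line.split("#")[0].strip()  # drop comments and blanks
        m = re.match(r"^([A-Za-z0-9_.\-]+)==([\w.]+)$", line)
        if m:
            pkgs.append((m.group(1).lower(), m.group(2)))
    return pkgs

# Toy advisory database: package -> known-vulnerable versions (illustrative only)
ADVISORIES = {"requests": {"2.19.0"}, "flask": {"0.12.2"}}

def vulnerable(pkgs):
    """Return packages pinned to a version listed in the advisory data."""
    return [(n, v) for n, v in pkgs if v in ADVISORIES.get(n, set())]

manifest = """\
requests==2.19.0
flask==2.3.0
# dev tools
pytest==8.0.0
"""

print(vulnerable(parse_requirements(manifest)))  # [('requests', '2.19.0')]
```

Real tools do the same thing with full CVE feeds, version-range matching, and transitive dependency resolution — but the shape of the check is exactly this lookup.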

Tool: Dependabot (GitHub)

Dependabot is built into GitHub. It scans dependencies, opens PRs to update vulnerable packages, and keeps everything current.

Enable in repository:

  1. Go to Settings → Security → Code security and analysis
  2. Enable "Dependabot alerts"
  3. Enable "Dependabot security updates"

Or configure with a file:

# .github/dependabot.yml
version: 2
updates:
  - package-ecosystem: "npm"
    directory: "/"
    schedule:
      interval: "weekly"
    open-pull-requests-limit: 10

  - package-ecosystem: "pip"
    directory: "/"
    schedule:
      interval: "weekly"

  - package-ecosystem: "docker"
    directory: "/"
    schedule:
      interval: "weekly"

Group updates to reduce PR noise:

# .github/dependabot.yml
version: 2
updates:
  - package-ecosystem: "npm"
    directory: "/"
    schedule:
      interval: "weekly"
    groups:
      development-dependencies:
        dependency-type: "development"
      production-dependencies:
        dependency-type: "production"
        update-types:
          - "minor"
          - "patch"

Tool: Snyk

Snyk works with GitHub, GitLab, Bitbucket, and standalone. Free tier covers unlimited tests for open source projects and limited scans for private repos.

Install CLI:

npm install -g snyk
snyk auth

Scan locally:

# Test current project
snyk test

# Test and show remediation
snyk test --show-vulnerable-paths=all

# Monitor for new vulnerabilities
snyk monitor

GitHub Actions integration:

name: Snyk Security

on:
  push:
    branches: [main]
  pull_request:

jobs:
  snyk:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Run Snyk
        uses: snyk/actions/node@master
        env:
          SNYK_TOKEN: ${{ secrets.SNYK_TOKEN }}
        with:
          args: --severity-threshold=high

GitLab CI integration:

snyk:
  stage: test
  image: snyk/snyk:node
  script:
    - snyk auth $SNYK_TOKEN
    - snyk test --severity-threshold=high
  allow_failure: true # Report findings without failing the pipeline

Tool: Trivy

Trivy scans everything — container images, filesystems, git repos, Kubernetes manifests, Terraform. It's incredibly versatile.

Install:

# macOS
brew install trivy

# Linux
curl -sfL https://raw.githubusercontent.com/aquasecurity/trivy/main/contrib/install.sh | sh -s -- -b /usr/local/bin

Scan filesystem (dependencies):

trivy fs --severity HIGH,CRITICAL .

Scan container image:

trivy image --severity HIGH,CRITICAL myapp:latest

GitHub Actions for container scanning:

name: Container Security

on:
  push:
    branches: [main]

jobs:
  trivy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Build image
        run: docker build -t myapp:${{ github.sha }} .

      - name: Run Trivy
        uses: aquasecurity/trivy-action@master
        with:
          image-ref: myapp:${{ github.sha }}
          severity: 'CRITICAL,HIGH'
          exit-code: '1'

Native package manager audits

Most package managers have built-in vulnerability scanning:

# npm
npm audit
npm audit fix

# yarn
yarn audit

# pip
pip-audit

# composer
composer audit

# bundler
bundle audit

# go
govulncheck ./...

Add to CI pipeline:

# GitHub Actions - npm audit
- name: Security audit
  run: npm audit --audit-level=high

Handling dependency vulnerabilities

When a vulnerability has a fix:

  1. Update to the patched version
  2. Run tests
  3. Deploy

When there's no fix available:

  1. Check if you actually use the vulnerable function
  2. Assess real risk — is the vulnerable code path reachable?
  3. Consider alternative packages
  4. If acceptable risk, document and monitor

When updates break things:

  1. Check if it's a breaking change you can adapt to
  2. Pin to current version temporarily
  3. Create a ticket to properly fix
  4. Set a deadline — don't leave it forever

SBOM: Software Bill of Materials

An SBOM is a complete inventory of all components in your software — every library, framework, and dependency, with versions and origins. Think of it as an ingredients list for your application.

Why SBOM matters

Incident response speed. When Log4Shell hit in December 2021, organizations with SBOMs knew within hours which applications were affected. Those without spent days or weeks manually investigating. The U.S. Cybersecurity Executive Order 14028 now requires SBOMs for software sold to the federal government.

Supply chain visibility. Your app depends on package A, which depends on B, which depends on C. A vulnerability in C affects you, but you might not know C exists without an SBOM. Transitive dependencies are invisible without proper tooling.

License compliance. That MIT-licensed package you're using might depend on a GPL library. Without an SBOM, you won't know until legal comes knocking. License analysis requires knowing every component in your stack.

Customer requirements. Enterprise customers increasingly require SBOMs as part of vendor security assessments. Healthcare, finance, and government sectors often mandate them.

SBOM formats

Two main formats dominate:

| Format | Maintainer | Best for | Link |
|---|---|---|---|
| SPDX | Linux Foundation | License compliance, broad adoption | spdx.dev |
| CycloneDX | OWASP | Security focus, vulnerability tracking | cyclonedx.org |

Both are machine-readable (JSON, XML) and interchangeable for most purposes. CycloneDX has better vulnerability correlation; SPDX has broader industry adoption for license compliance.
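
The CycloneDX JSON shape is simple enough to sketch by hand — a top-level `bomFormat`/`specVersion` pair plus a flat `components` list. A trimmed illustration (the real schema has many more fields; the `express` entry here is just an example component):

```python
import json

# A trimmed CycloneDX-style SBOM document
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "version": 1,
    "components": [
        {
            "type": "library",
            "name": "express",
            "version": "4.18.2",
            # purl = package URL: ecosystem + name + version, the key scanners match on
            "purl": "pkg:npm/express@4.18.2",
        }
    ],
}

doc = json.dumps(sbom, indent=2)
parsed = json.loads(doc)
print(parsed["components"][0]["purl"])
```

The `purl` field is what makes SBOMs useful for vulnerability correlation: scanners like Grype match those package URLs against advisory databases without ever seeing your source tree.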

Generating SBOMs

Tool: Syft — the standard for SBOM generation

# Install
brew install syft
# or
curl -sSfL https://raw.githubusercontent.com/anchore/syft/main/install.sh | sh -s -- -b /usr/local/bin

# Generate SBOM from directory
syft dir:. -o cyclonedx-json > sbom.json

# Generate from container image
syft myapp:latest -o spdx-json > sbom.spdx.json

# Generate from package lock files
syft dir:. -o cyclonedx-json --catalogers javascript

GitHub Actions:

- name: Generate SBOM
  uses: anchore/sbom-action@v0
  with:
    image: myapp:${{ github.sha }}
    format: cyclonedx-json
    output-file: sbom.json

- name: Upload SBOM
  uses: actions/upload-artifact@v4
  with:
    name: sbom
    path: sbom.json

GitLab CI:

generate_sbom:
  stage: build
  image: anchore/syft
  script:
    - syft dir:. -o cyclonedx-json > sbom.json
  artifacts:
    paths:
      - sbom.json

Other SBOM generators:

| Tool | Type | Best for | Link |
|---|---|---|---|
| Syft | CLI | Universal generation | GitHub |
| Trivy | CLI | Combined scanning + SBOM | trivy.dev |
| cdxgen | CLI | CycloneDX native | GitHub |
| Microsoft SBOM Tool | CLI | .NET, general | GitHub |
| SPDX SBOM Generator | CLI | SPDX native | GitHub |

Analyzing SBOMs for vulnerabilities

Once you have an SBOM, scan it for known vulnerabilities:

Tool: Grype — vulnerability scanner that consumes SBOMs

# Install
brew install grype

# Scan SBOM for vulnerabilities
grype sbom:sbom.json

# Scan with severity filter
grype sbom:sbom.json --only-fixed --fail-on high

GitHub Actions:

- name: Generate SBOM
  uses: anchore/sbom-action@v0
  with:
    image: myapp:latest
    output-file: sbom.json

- name: Scan SBOM for vulnerabilities
  uses: anchore/scan-action@v3
  with:
    sbom: sbom.json
    fail-build: true
    severity-cutoff: high

Other vulnerability scanners for SBOMs:

| Tool | Price | Features | Link |
|---|---|---|---|
| Grype | Free | Fast, SBOM-native | GitHub |
| Trivy | Free | Universal scanner | trivy.dev |
| OSV-Scanner | Free | Google's OSV database | GitHub |
| Snyk | Free tier | Best remediation advice | snyk.io |
| OWASP Dependency-Track | Free (self-hosted) | Full SBOM management platform | dependencytrack.org |
| Anchore Enterprise | Paid | Policy engine, compliance | anchore.com |
| Sonatype Nexus Lifecycle | Paid | Deep analysis, policies | sonatype.com |

Analyzing SBOMs for license compliance

License issues can be as damaging as security vulnerabilities — legal exposure, forced open-sourcing of proprietary code, or inability to ship to certain customers.

Common license concerns:

| License type | Concern | Example licenses |
|---|---|---|
| Copyleft | May require open-sourcing your code | GPL, AGPL, LGPL |
| Weak copyleft | Requires attribution, some restrictions | MPL, EPL |
| Permissive | Generally safe for commercial use | MIT, Apache 2.0, BSD |
| Unknown | No license = all rights reserved | Unlicensed packages |
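
A license policy check is ultimately a lookup: classify each component's license into one of the categories above, then fail on the deny list. A hypothetical Python sketch (the category mapping is abbreviated and the package names are invented):

```python
# Map SPDX license IDs to the categories above (abbreviated, illustrative)
CATEGORY = {
    "GPL-3.0-only": "copyleft",
    "AGPL-3.0-only": "copyleft",
    "MPL-2.0": "weak-copyleft",
    "MIT": "permissive",
    "Apache-2.0": "permissive",
}

DENY = {"copyleft", "unknown"}  # example policy: block copyleft and unlicensed packages

def violations(components):
    """Return (name, license, category) for components whose category is denied."""
    out = []
    for name, license_id in components:
        # No license means all rights reserved -> treat as "unknown"
        category = CATEGORY.get(license_id, "unknown")
        if category in DENY:
            out.append((name, license_id, category))
    return out

deps = [("left-pad", "MIT"), ("readline", "GPL-3.0-only"), ("mystery-pkg", None)]
print(violations(deps))
```

Tools like FOSSA and Dependency-Track add detection (figuring out *which* license a package actually carries), but the enforcement step is this kind of policy table.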

Tool: OWASP Dependency-Track — complete SBOM management

Dependency-Track is a free, self-hosted platform that ingests SBOMs and provides:

  • Vulnerability tracking across all projects
  • License compliance analysis
  • Policy enforcement
  • Historical tracking

# Run with Docker
docker run -d -p 8080:8080 dependencytrack/bundled

Then upload SBOMs via API or UI.

Commercial license compliance tools:

| Tool | Price | Features | Link |
|---|---|---|---|
| OWASP Dependency-Track | Free (self-hosted) | Full SBOM platform | dependencytrack.org |
| FOSSA | Free tier, paid plans | Deep license analysis | fossa.com |
| Snyk | Free tier | License scanning included | snyk.io |
| Mend (WhiteSource) | Paid | Enterprise compliance | mend.io |
| Black Duck | Enterprise | Deep license intelligence | synopsys.com |
| Sonatype Nexus | Paid | License policies | sonatype.com |

Quick license scan with Trivy:

# Scan for licenses
trivy fs --scanners license .

# Check for specific problematic licenses
trivy fs --scanners license --license-full .

GitHub Actions for license compliance:

- name: License check
  uses: fossas/fossa-action@main
  with:
    api-key: ${{ secrets.FOSSA_API_KEY }}

SBOM in CI/CD workflow

Integrate SBOM generation and analysis into your pipeline:

# Complete SBOM workflow
name: SBOM Pipeline

on:
  push:
    branches: [main]
  release:
    types: [published]

jobs:
  sbom:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Build image
        run: docker build -t myapp:${{ github.sha }} .

      # Generate SBOM
      - name: Generate SBOM
        uses: anchore/sbom-action@v0
        with:
          image: myapp:${{ github.sha }}
          format: cyclonedx-json
          output-file: sbom.json

      # Scan for vulnerabilities
      - name: Vulnerability scan
        uses: anchore/scan-action@v3
        with:
          sbom: sbom.json
          fail-build: true
          severity-cutoff: high

      # Check licenses
      - name: License scan
        run: |
          trivy fs --scanners license --severity HIGH,CRITICAL --exit-code 1 .

      # Store SBOM with release
      - name: Upload SBOM to release
        if: github.event_name == 'release'
        uses: softprops/action-gh-release@v1
        with:
          files: sbom.json

SBOM best practices

  1. Generate with every build. SBOMs should be artifacts, versioned alongside your releases.

  2. Store SBOMs with releases. Attach to GitHub releases, store in artifact registry, or push to Dependency-Track.

  3. Automate vulnerability scanning. Don't just generate — analyze. Set up alerts for new CVEs affecting your components.

  4. Define license policies. Decide which licenses are acceptable for your use case. Block builds that introduce problematic licenses.

  5. Share with customers. If customers require SBOMs, automate delivery. Don't make it a manual process.

  6. Monitor continuously. New vulnerabilities are discovered daily. Scan stored SBOMs against updated vulnerability databases.

DAST: dynamic testing

DAST tools test running applications by sending requests and analyzing responses. They find issues that static analysis can't see — runtime misconfigurations, authentication bypasses, response header problems.

What DAST catches

| Issue | How DAST finds it |
|---|---|
| Missing security headers | Checks response headers |
| XSS | Injects payloads, checks for reflection |
| SQL injection | Injects SQL, checks for errors/behavior changes |
| Authentication bypass | Tests access without valid credentials |
| CORS misconfiguration | Tests cross-origin requests |
| SSL/TLS issues | Tests certificate, protocol versions |
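
The first row is the easiest to picture: a baseline scan is largely passive checks like this Python sketch, which inspects response headers and reports what's missing. The header list is trimmed for illustration — real scanners such as ZAP check far more:

```python
# Security headers a passive scan expects (a trimmed, illustrative list)
EXPECTED = [
    "strict-transport-security",
    "content-security-policy",
    "x-content-type-options",
    "x-frame-options",
]

def missing_headers(headers):
    """Return expected security headers absent from a response (case-insensitive)."""
    present = {k.lower() for k in headers}
    return [h for h in EXPECTED if h not in present]

# Headers as captured from a hypothetical staging response
response_headers = {
    "Content-Type": "text/html",
    "Strict-Transport-Security": "max-age=31536000",
}
print(missing_headers(response_headers))
```

This is why the baseline scan is safe to run on every deploy: it only reads responses your app already sends, without injecting any attack payloads.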

Tool: OWASP ZAP

ZAP (Zed Attack Proxy) is the standard open-source DAST tool. It can run automated scans or be used as an intercepting proxy for manual testing.

Docker-based CI scan:

# GitHub Actions
name: DAST Scan

on:
  push:
    branches: [main]

jobs:
  zap:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Start application
        run: |
          docker compose up -d
          sleep 30 # Wait for app to start

      - name: ZAP Baseline Scan
        uses: zaproxy/action-baseline@v0.10.0
        with:
          target: 'http://localhost:3000'
          rules_file_name: '.zap/rules.tsv'
          allow_issue_writing: false

GitLab CI integration:

dast:
  stage: test
  image: ghcr.io/zaproxy/zaproxy:stable
  script:
    - mkdir -p /zap/wrk
    - zap-baseline.py -t $STAGING_URL -r report.html -I
  artifacts:
    paths:
      - report.html
  only:
    - main

Baseline vs Full scan:

| Scan type | Duration | Coverage | Use case |
|---|---|---|---|
| Baseline | 1-5 minutes | Passive checks, no attacks | Every deploy |
| Full | 30-60+ minutes | Active attacks, thorough | Weekly, pre-release |

Tool: Nuclei

Nuclei is fast, template-based, and great for specific vulnerability checks.

Install:

# macOS/Linux
brew install nuclei

# or
go install -v github.com/projectdiscovery/nuclei/v3/cmd/nuclei@latest

Run scans:

# Default templates
nuclei -u https://staging.example.com

# Specific severity
nuclei -u https://staging.example.com -severity high,critical

# Specific templates
nuclei -u https://staging.example.com -t cves/ -t misconfigurations/

CI integration:

nuclei:
  stage: test
  image: projectdiscovery/nuclei
  script:
    - nuclei -u $STAGING_URL -severity high,critical -o results.txt
  artifacts:
    paths:
      - results.txt

When to run DAST

DAST needs a running application, which complicates CI/CD integration:

Option 1: Test staging after deploy

deploy_staging:
  stage: deploy
  script:
    - ./deploy-staging.sh

dast_scan:
  stage: test
  needs: [deploy_staging]
  script:
    - zap-baseline.py -t $STAGING_URL

Option 2: Spin up ephemeral environment

dast:
  stage: test
  services:
    - docker:dind
  script:
    - docker compose up -d
    - sleep 30
    - nuclei -u http://docker:3000 -severity critical
    - docker compose down

Option 3: Scheduled scans (not in PR pipeline)

# Run weekly, not on every commit
dast_weekly:
  stage: security
  script:
    - zap-full-scan.py -t $PRODUCTION_URL -r report.html
  only:
    - schedules

IaC scanning: infrastructure as code

If you use Terraform, CloudFormation, or Kubernetes manifests, you need IaC scanning. Misconfigurations in infrastructure code are just as dangerous as application vulnerabilities — open S3 buckets, overly permissive IAM roles, unencrypted databases.

What IaC scanning catches

| Issue | Example |
|---|---|
| Public S3 buckets | acl = "public-read" |
| Overly permissive IAM | Action: "*", Resource: "*" |
| Unencrypted storage | Missing encrypted = true |
| Open security groups | 0.0.0.0/0 ingress |
| Missing logging | CloudTrail, VPC flow logs disabled |
| Hardcoded secrets | API keys in Terraform variables |
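
Conceptually, an IaC scanner parses your configuration into a data structure and walks it with rules like this Python sketch, which flags world-open ingress. The dict layout here is invented for illustration — it is not real Terraform JSON, just the shape of the check:

```python
def open_ingress(security_groups):
    """Flag ingress rules open to the whole internet (0.0.0.0/0)."""
    findings = []
    for sg in security_groups:
        for rule in sg.get("ingress", []):
            if "0.0.0.0/0" in rule.get("cidr_blocks", []):
                findings.append((sg["name"], rule["port"]))
    return findings

# Parsed configuration (hypothetical structure)
groups = [
    {"name": "web", "ingress": [{"port": 443, "cidr_blocks": ["0.0.0.0/0"]}]},
    {"name": "db", "ingress": [{"port": 5432, "cidr_blocks": ["0.0.0.0/0"]}]},
]

# Port 443 open to the world may be intentional; 5432 almost never is
print(open_ingress(groups))
```

Note that the rule can't tell intentional from accidental — both findings come back. That's why IaC findings need the contextual triage described later in this chapter.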

Tool: Checkov

Checkov is the most comprehensive free IaC scanner. It supports Terraform, CloudFormation, Kubernetes, Helm, ARM, and more.

Install:

pip install checkov

Scan locally:

# Scan Terraform directory
checkov -d terraform/

# Scan specific file
checkov -f main.tf

# Output as JSON for CI
checkov -d terraform/ -o json

GitHub Actions:

- name: Checkov
  uses: bridgecrewio/checkov-action@master
  with:
    directory: terraform/
    soft_fail: false
    skip_check: CKV_AWS_18,CKV_AWS_21 # Skip specific checks if needed

GitLab CI:

checkov:
  stage: test
  image: bridgecrew/checkov
  script:
    - checkov -d terraform/ --output cli --output junitxml --output-file-path console,results.xml
  artifacts:
    reports:
      junit: results.xml

Tool: tfsec (Terraform-specific)

If you only use Terraform, tfsec is faster and Terraform-focused.

Install:

brew install tfsec

Scan:

tfsec terraform/
tfsec terraform/ --minimum-severity HIGH # only report HIGH and above

GitHub Actions:

- name: tfsec
  uses: aquasecurity/tfsec-action@v1.0.3
  with:
    working_directory: terraform/

Handling IaC findings

IaC findings often require context. A "public S3 bucket" might be intentional (hosting a static website) or a critical issue (storing customer data).

Suppress intentional configurations:

# tfsec:ignore:aws-s3-no-public-access
resource "aws_s3_bucket" "website" {
  # This bucket intentionally hosts a public website
  bucket = "company-website-assets"
}

# Checkov skip
resource "aws_s3_bucket" "website" {
  #checkov:skip=CKV_AWS_18:This bucket hosts public website
  bucket = "company-website-assets"
}

Putting it together

A complete security scanning setup for a typical web application:

GitHub Actions example

name: Security Pipeline

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

jobs:
  # SAST - runs on every commit
  sast:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Semgrep
        uses: semgrep/semgrep-action@v1
        with:
          config: p/security-audit p/secrets

  # SCA - runs on every commit
  sca:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Setup Node
        uses: actions/setup-node@v4
        with:
          node-version: '20'

      - name: Install dependencies
        run: npm ci

      - name: npm audit
        run: npm audit --audit-level=high

      - name: Snyk test
        uses: snyk/actions/node@master
        env:
          SNYK_TOKEN: ${{ secrets.SNYK_TOKEN }}
        with:
          args: --severity-threshold=high
        continue-on-error: true # Alert but don't block

  # Container scanning - if building images
  container:
    runs-on: ubuntu-latest
    if: github.event_name == 'push'
    steps:
      - uses: actions/checkout@v4

      - name: Build image
        run: docker build -t app:${{ github.sha }} .

      - name: Trivy scan
        uses: aquasecurity/trivy-action@master
        with:
          image-ref: app:${{ github.sha }}
          severity: 'CRITICAL,HIGH'
          exit-code: '1'

  # DAST - only on main branch after deploy
  dast:
    runs-on: ubuntu-latest
    needs: [sast, sca]
    if: github.ref == 'refs/heads/main'
    steps:
      - uses: actions/checkout@v4

      - name: Deploy to staging
        run: ./deploy-staging.sh

      - name: ZAP Baseline
        uses: zaproxy/action-baseline@v0.10.0
        with:
          target: ${{ secrets.STAGING_URL }}

GitLab CI example

stages:
- test
- build
- security
- deploy

variables:
DOCKER_IMAGE: $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA

# SAST
semgrep:
stage: test
image: semgrep/semgrep
script:
- semgrep ci --config auto
rules:
- if: $CI_PIPELINE_SOURCE == "merge_request_event"
- if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH

# SCA - dependencies
dependency_scan:
stage: test
image: node:20
script:
- npm ci
- npm audit --audit-level=high
allow_failure: true

# SCA - Snyk
snyk:
stage: test
image: snyk/snyk:node
script:
- snyk auth $SNYK_TOKEN
- snyk test --severity-threshold=high
rules:
- if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH

# Build container
build:
stage: build
image: docker:latest
services:
- docker:dind
script:
- docker build -t $DOCKER_IMAGE .
- docker push $DOCKER_IMAGE

# Container scanning
trivy:
stage: security
image: aquasec/trivy
script:
- trivy image --exit-code 1 --severity HIGH,CRITICAL $DOCKER_IMAGE
needs: [build]

# DAST on staging
dast:
stage: security
image: ghcr.io/zaproxy/zaproxy:stable
script:
- zap-baseline.py -t $STAGING_URL -I
needs: [deploy_staging]
only:
- main

When scanners find issues: Security Champion response playbook

Your pipeline blocked a merge request. Or worse — it flagged something in code that's already in production. What now?

This section provides a structured approach for Security Champions to handle security findings, from initial triage to resolution. The process follows modern vulnerability management methodologies including CVSS, SSVC, and VEX.

Step 1: initial triage (5-10 minutes)

Before diving deep, quickly assess what you're dealing with:

Gather basic information:

| Question | Where to find it |
|---|---|
| What scanner found it? | Pipeline output, tool name |
| What's the finding type? | CVE ID, CWE, rule ID |
| Where is it? | File, line number, dependency |
| What severity does the tool report? | Critical/High/Medium/Low |
| Is this new code or existing? | PR diff, git blame |

Quick classification:

Initial triage questions:

  • Is this a known false positive? → Yes: document and suppress (Step 6). No: continue to Step 2.
  • Is this in production right now? → Yes: parallel track — assess production exposure (Step 5). No: continue normal flow.
  • Is this blocking a critical deployment? → Yes: escalate immediately, get more eyes. No: continue methodically.

Step 2: verify the finding (15-30 minutes)

Not every scanner alert is a real vulnerability. Verify before investing more time.

For SAST findings (code vulnerabilities):

  1. Read the flagged code. Understand what it does.
  2. Check the data flow. Does user input actually reach this code?
  3. Trace the input source. Is it validated/sanitized upstream?
  4. Check for existing protections. Framework security features, WAF rules, input validation.

# Example: Semgrep flags this as SQL injection
query = f"SELECT * FROM users WHERE id = {user_id}"

# Questions to answer:
# 1. Where does user_id come from? (request parameter? internal service?)
# 2. Is it validated before this line? (type check? allowlist?)
# 3. Is there a WAF or parameterization layer above?

For SCA findings (vulnerable dependencies):

  1. Check if you use the vulnerable function. Most CVEs affect specific features.
  2. Read the CVE description. What's the attack vector?
  3. Check your code. Do you call the vulnerable API?
  4. Check transitive exposure. Does another dependency use the vulnerable feature?

# Find where a dependency is used
grep -r "require('vulnerable-package')" src/
grep -r "from vulnerable_package import" src/

# Check if specific vulnerable function is called
grep -r "vulnerableFunction(" src/

For container/IaC findings:

  1. Check if the configuration is intentional. Some "insecure" configs are valid for your use case.
  2. Verify the actual risk. Is the resource exposed? Is there compensating control?
  3. Check runtime vs build. Some findings only matter in specific environments.

Verification outcome:

| Result | Action |
|---|---|
| Confirmed vulnerability | Continue to Step 3 |
| False positive | Document and suppress (Step 6) |
| Uncertain | Get a second opinion, default to treating as real |

Step 3: assess exploitability (15-45 minutes)

A confirmed vulnerability isn't necessarily exploitable in your context. Assess real-world risk.

Exploitability factors (based on CVSS v4.0):

| Factor | Questions to answer |
|---|---|
| Attack Vector | Network accessible? Local only? Physical access needed? |
| Attack Complexity | Easy to exploit? Requires specific conditions? |
| Privileges Required | Anonymous? Authenticated user? Admin? |
| User Interaction | None? Victim must click link? |

Context-specific assessment:

| Factor | Questions to answer |
|---|---|
| Exposure | Is this code/component reachable from the internet? |
| Data sensitivity | What data could be accessed if exploited? |
| Existing controls | WAF, rate limiting, authentication, network segmentation? |
| Exploit availability | Public exploit? Metasploit module? Theoretical only? |

Quick exploitability check:

# Check if a public exploit exists
searchsploit CVE-2024-XXXXX

# Check ExploitDB
curl -s "https://www.exploit-db.com/search?cve=2024-XXXXX"

# Check CISA KEV (Known Exploited Vulnerabilities)
curl -s https://www.cisa.gov/sites/default/files/feeds/known_exploited_vulnerabilities.json | \
jq '.vulnerabilities[] | select(.cveID=="CVE-2024-XXXXX")'

Exploitability matrix:

| Exploit available | Externally reachable | Sensitive data at risk | Priority |
|---|---|---|---|
| Yes | Yes | Yes | CRITICAL — Immediate action |
| Yes | Yes | No | HIGH — Fix this sprint |
| Yes | No | Yes | HIGH — Fix this sprint |
| No | Yes | Yes | MEDIUM — Fix soon |
| No | No | No | LOW — Track, fix when convenient |

Step 4: determine business impact (10-20 minutes)

Technical severity ≠ business impact. A critical CVE in a dev tool matters less than a medium issue in your payment system.

Impact assessment questions:

| Category | Questions |
|---|---|
| Confidentiality | What data could leak? PII? Credentials? Business secrets? |
| Integrity | Could data be modified? Financial records? User permissions? |
| Availability | Could the system be crashed? For how long? |
| Compliance | Does this violate regulations? GDPR, PCI DSS, HIPAA? |
| Reputation | Would this make headlines if exploited? |

Calculate priority (SSVC-inspired):

Score each factor and sum the result:

  • Exploitation — Active in the wild: +2 / PoC exists: +1 / None known: +0
  • Exposure — Internet-facing unauthenticated: +2 / Authenticated: +1 / Internal only: +0
  • Impact — Full system compromise: +2 / Data access or modification: +1 / Limited: +0

Total: 5–6 → Immediate | 3–4 → Urgent | 1–2 → Scheduled
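
The scoring above is mechanical enough to automate, for example as a helper in a triage script. A Python sketch using the factors and thresholds exactly as listed (a total of 0, which the text doesn't cover, is treated here as Scheduled):

```python
# Factor scores, exactly as listed above
EXPLOITATION = {"active": 2, "poc": 1, "none": 0}
EXPOSURE = {"internet-unauthenticated": 2, "authenticated": 1, "internal": 0}
IMPACT = {"full-compromise": 2, "data-access": 1, "limited": 0}

def priority(exploitation, exposure, impact):
    """Sum the three SSVC-inspired factors and map the total to a response tier."""
    score = EXPLOITATION[exploitation] + EXPOSURE[exposure] + IMPACT[impact]
    if score >= 5:
        return "Immediate"
    if score >= 3:
        return "Urgent"
    return "Scheduled"

# Actively exploited, internet-facing, full compromise -> act now
print(priority("active", "internet-unauthenticated", "full-compromise"))  # Immediate
print(priority("poc", "authenticated", "data-access"))                    # Urgent
print(priority("none", "internal", "limited"))                            # Scheduled
```

Encoding the decision this way keeps triage consistent across Security Champions and makes the reasoning auditable after the fact.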

Step 5: check production exposure (parallel track)

If the vulnerable code/component is in production, you need to know immediately.

Questions to answer:

  1. Is the vulnerability in production?

    • Check deployed versions
    • Check release history
    • When was it introduced?
  2. Has it been exploited?

    • Check application logs for suspicious patterns
    • Check WAF logs for attack signatures
    • Check access logs for anomalies
    • Check SIEM alerts
  3. What's the blast radius?

    • Which environments are affected?
    • Which customers/users?
    • How long has it been exposed?

Production investigation checklist:

```shell
# Check if a vulnerable version is deployed
kubectl get deployments -o jsonpath='{.items[*].spec.template.spec.containers[*].image}'

# Search logs for exploit patterns (example for SQL injection)
grep -E "(UNION|SELECT.*FROM|DROP TABLE|--|;)" /var/log/app/*.log

# Check for unusual error rates
grep "500\|502\|503" /var/log/nginx/access.log | wc -l

# Check for a spike in authentication failures
grep "authentication failed" /var/log/app/*.log | \
  awk '{print $1}' | sort | uniq -c | sort -rn | head -20
```

If exploitation is suspected:

  1. Contain immediately — Disable affected feature, block IPs, rotate credentials
  2. Preserve evidence — Don't delete logs, take snapshots
  3. Engage incident response — This is now a security incident, not just a vulnerability
  4. Notify stakeholders — Leadership, legal, compliance as required

Step 6: decide and act

Based on your assessment, choose the appropriate response:

Response options:

| Priority | Response | Timeline | Who decides |
|---|---|---|---|
| Immediate | Block merge, hotfix production | Hours | Security Champion + Lead |
| Urgent | Fix in current sprint, production patch scheduled | Days | Security Champion |
| Scheduled | Add to backlog, fix in normal development | Weeks | Security Champion |
| Accept risk | Document decision, implement monitoring | N/A | Security Champion + Management |
| False positive | Suppress with documentation | Immediate | Security Champion |

For confirmed vulnerabilities — fix workflow:

```
┌─────────────┐     ┌─────────────┐     ┌─────────────┐     ┌─────────────┐
│   Create    │────►│  Implement  │────►│   Verify    │────►│   Deploy    │
│   ticket    │     │     fix     │     │     fix     │     │   + close   │
└─────────────┘     └─────────────┘     └─────────────┘     └─────────────┘
       │                   │                   │                   │
       ▼                   ▼                   ▼                   ▼
- CVE/CWE ID        - Code change       - Run scanner      - Deploy to prod
- Severity          - Dependency        - Manual verify    - Monitor for issues
- Impact analysis     update            - Regression       - Update SBOM
- Affected versions - Config fix          tests            - Close ticket
```

For false positives — suppression workflow:

```python
# 1. Document WHY it's a false positive
# semgrep: .semgrepignore or inline
def process_data(data):
    # nosemgrep: sql-injection
    # Reason: 'data' is validated UUID from internal service, not user input
    query = f"SELECT * FROM cache WHERE id = '{data}'"

# 2. Add to team knowledge base
# Create entry in security wiki explaining this pattern

# 3. Consider improving the rule
# Report to Semgrep community if rule has high false positive rate
```

For accepted risks:

## Risk Acceptance Record

**Vulnerability:** CVE-2024-XXXXX in package-name v1.2.3
**Date:** 2024-12-28
**Decided by:** [Name], approved by [Manager]

### Risk assessment
- Severity: HIGH (CVSS 8.1)
- Exploitability in our context: LOW
- Reason: Vulnerable function not used in our codebase

### Mitigating controls
- [ ] WAF rule blocks attack pattern
- [ ] Input validation in place
- [ ] Monitoring for exploitation attempts

### Review schedule
- Next review: 2025-03-28
- Trigger for re-review: New exploit published, or mitigating control removed

### Acceptance signature
Accepted by: _____________ Date: _____________

Step 7: prevent recurrence

After fixing, prevent the same issue from appearing again.

Actions:

  1. Add specific scanner rules — If custom code pattern, add Semgrep rule
  2. Update dependencies policy — Block vulnerable versions in lockfile
  3. Add to security training — Share as case study with team
  4. Improve detection — If caught late, improve scanning coverage

Example: prevent future use of vulnerable pattern:

```yaml
# .semgrep.yml - custom rule to prevent pattern
rules:
  - id: no-string-format-sql
    patterns:
      - pattern: f"SELECT ... {$VAR} ..."
    message: "Use parameterized queries, not string formatting"
    languages: [python]
    severity: ERROR
```

Response time guidelines

| Priority | Initial response | Fix implemented | Production patched |
|---|---|---|---|
| Immediate (active exploit, critical data) | 1 hour | 4 hours | 8 hours |
| Urgent (public exploit, external exposure) | 4 hours | 24 hours | 48 hours |
| Scheduled (no exploit, limited exposure) | 24 hours | 1-2 weeks | Next release |
| Low (theoretical, no exposure) | 1 week | 1-2 months | When convenient |
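These guidelines translate into concrete deadlines once a finding is logged. A minimal sketch using Python's standard datetime; the SLA values mirror the guidelines, and open-ended cells like "Next release" are modeled as None by assumption:

```python
from datetime import datetime, timedelta

# Response-time guidelines as (initial_response, fix_implemented, production_patched).
# None means no fixed deadline ("next release" / "when convenient").
SLA = {
    "Immediate": (timedelta(hours=1), timedelta(hours=4), timedelta(hours=8)),
    "Urgent": (timedelta(hours=4), timedelta(hours=24), timedelta(hours=48)),
    "Scheduled": (timedelta(hours=24), timedelta(weeks=2), None),
    "Low": (timedelta(weeks=1), timedelta(weeks=8), None),
}

def response_deadlines(priority: str, found_at: datetime) -> dict:
    respond, fix, patch = SLA[priority]
    return {
        "initial_response": found_at + respond,
        "fix_implemented": found_at + fix,
        "production_patched": found_at + patch if patch else None,
    }

d = response_deadlines("Immediate", datetime(2025, 1, 1, 9, 0))
print(d["initial_response"])  # 2025-01-01 10:00:00
```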

Escalation matrix

| Situation | Escalate to | Method |
|---|---|---|
| Active exploitation in progress | CTO, Incident Response, Legal | Phone call |
| Critical vulnerability in production | Engineering Lead, CTO | Slack + call |
| Unsure about severity/exploitability | Senior engineer, external consultant | Slack |
| Risk acceptance needed | Engineering Manager | Email with documentation |
| Vendor patch required | Vendor security contact | Email, support ticket |

Documentation template

Use this template for every significant finding:

## Security Finding Report

### Summary
- **Finding ID:** [JIRA/ticket number]
- **Scanner:** [Semgrep/Snyk/Trivy/etc]
- **Rule/CVE:** [Rule ID or CVE number]
- **Severity (tool):** [Critical/High/Medium/Low]
- **Severity (assessed):** [After your analysis]
- **Status:** [New/Investigating/Confirmed/Fixed/False Positive/Accepted]

### Technical details
- **Location:** [file:line or package:version]
- **Introduced:** [commit/date/PR]
- **In production:** [Yes/No, since when]

### Analysis
- **Verification result:** [Confirmed/False positive]
- **Exploitability:** [High/Medium/Low/None]
- **Attack vector:** [Description]
- **Existing controls:** [WAF/validation/etc]

### Impact
- **Confidentiality:** [None/Low/High]
- **Integrity:** [None/Low/High]
- **Availability:** [None/Low/High]
- **Business impact:** [Description]

### Resolution
- **Action taken:** [Fix/Accept/Suppress]
- **Fix PR:** [Link]
- **Deployed:** [Date]
- **Verified:** [How]

### Prevention
- **New scanner rule:** [Yes/No, link]
- **Training update:** [Yes/No]
- **Process change:** [Description]

Common mistakes

Blocking everything

```yaml
# BAD: Every finding blocks the build
- run: npm audit  # Fails on ANY vulnerability
```

This leads to ignored pipelines, disabled checks, or "fix later" culture.

```yaml
# GOOD: Block on critical, warn on medium
- run: npm audit --audit-level=critical
```

Not blocking anything

```yaml
# BAD: Scan runs but never fails
- run: semgrep --config auto || true
```

Security findings pile up, and nobody looks at the reports.

```yaml
# GOOD: Block on high severity
- run: semgrep --config auto --error --severity=ERROR
```

Running DAST on production

```yaml
# BAD: Attacking your own production
- run: zap-full-scan.py -t https://production.example.com
```

Full DAST scans can break things. Use staging.

```yaml
# GOOD: Attack staging only
- run: zap-full-scan.py -t $STAGING_URL
```

Ignoring tool output

Setting up scanning without a process for handling findings. Tools find issues → nobody triages → alerts get ignored → tools get disabled.

Fix: Assign ownership. Someone reviews findings weekly. Critical issues create tickets. Medium issues get reviewed monthly.

Too many tools

Running five different SAST tools, three SCA scanners, and hoping more is better. Result: noise, duplicate findings, slow pipelines.

Fix: Start with one tool per category. Add more only if you have specific gaps.

Scanning only on main

```yaml
# BAD: Issues found after merge
only:
  - main
```

Find issues in PRs, not after they're merged.

```yaml
# GOOD: Scan on every PR
on:
  push:
    branches: [main]
  pull_request:  # This is key
```

Real-world incidents

Codecov supply chain attack (2021). Attackers compromised Codecov's bash uploader script used in thousands of CI/CD pipelines. For over two months, the modified script exfiltrated environment variables — including CI secrets, API tokens, and credentials — from every pipeline that used it. Companies like Twilio, HashiCorp, and Confluent were affected.

ua-parser-js hijack (2021). A popular npm package (8 million weekly downloads) was hijacked when an attacker compromised the maintainer's npm account. Malicious versions mined cryptocurrency and stole passwords. Projects with SCA scanning detected the issue within hours; those without it shipped compromised builds.

Log4Shell (2021). CVE-2021-44228 in Log4j affected virtually every Java application. Organizations with SCA scanning knew their exposure within hours. Those without spent days figuring out which applications used Log4j and which versions. The difference between "we're patched" and "we're still investigating" was automated dependency scanning.

These incidents share a pattern: organizations with automated scanning detected and responded faster. Those without it scrambled.

Workshop: secure your pipeline

Block 2-3 hours for this exercise.

Part 1: add SAST scanning (30 minutes)

For GitHub:

  1. Create `.github/workflows/security.yml`:

```yaml
name: Security Scan

on:
  push:
    branches: [main]
  pull_request:

jobs:
  semgrep:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: semgrep/semgrep-action@v1
        with:
          config: p/security-audit p/secrets
```

  2. Commit and push
  3. Create a test PR and verify the scan runs
  4. Review any findings

For GitLab:

  1. Add to `.gitlab-ci.yml`:

```yaml
semgrep:
  stage: test
  image: semgrep/semgrep
  script:
    - semgrep ci --config auto
```

  2. Commit and push
  3. Verify the scan runs in the pipeline
  4. Review findings in the pipeline output

Deliverable: Working SAST scan in CI

Part 2: configure Dependabot or Snyk (30 minutes)

Option A — Dependabot (GitHub):

  1. Create `.github/dependabot.yml`:

```yaml
version: 2
updates:
  - package-ecosystem: "npm"  # or pip, bundler, etc.
    directory: "/"
    schedule:
      interval: "weekly"
    open-pull-requests-limit: 5
```

  2. Go to Settings → Security → Enable Dependabot alerts and security updates
  3. Review any existing alerts in the Security tab

Option B — Snyk:

  1. Sign up at snyk.io (free tier)
  2. Install CLI: npm install -g snyk
  3. Authenticate: snyk auth
  4. Run initial scan: snyk test
  5. Add to CI (see examples above)

Deliverable: Dependency scanning enabled with initial findings reviewed

Part 3: add container scanning (30 minutes)

If you use Docker:

  1. Add Trivy to your workflow:

```yaml
container-scan:
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4
    - run: docker build -t myapp:test .
    - uses: aquasecurity/trivy-action@master
      with:
        image-ref: myapp:test
        severity: 'CRITICAL,HIGH'
```

  2. Build your image locally and scan:

```shell
docker build -t myapp:test .
trivy image myapp:test
```

  3. Review findings and fix critical issues

Deliverable: Container scanning in CI with base image vulnerabilities documented

Part 4: set up DAST baseline (30 minutes)

  1. Deploy your app to a staging environment (or run locally)

  2. Run ZAP baseline scan:

```shell
docker run -t ghcr.io/zaproxy/zaproxy:stable zap-baseline.py \
  -t http://your-staging-url
```

  3. Review the output — check for:

    • Missing security headers
    • Cookie security issues
    • Information disclosure
  4. Add baseline scan to CI pipeline (staging only)

Deliverable: DAST baseline scan configured for staging

Part 5: add IaC scanning (20 minutes)

If you use Terraform, CloudFormation, or Kubernetes:

  1. Install Checkov:

```shell
pip install checkov
```

  2. Scan your infrastructure code:

```shell
checkov -d terraform/  # or cloudformation/, k8s/
```

  3. Add to CI:

```yaml
# GitHub Actions
- name: Checkov
  uses: bridgecrewio/checkov-action@master
  with:
    directory: terraform/
    soft_fail: false
```

  4. Review findings and fix critical misconfigurations

Deliverable: IaC scanning in CI with major issues documented

Part 6: set up SBOM generation (20 minutes)

  1. Install Syft:

```shell
brew install syft
# or
curl -sSfL https://raw.githubusercontent.com/anchore/syft/main/install.sh | sh -s -- -b /usr/local/bin
```

  2. Generate SBOM for your project:

```shell
syft dir:. -o cyclonedx-json > sbom.json
```

  3. Scan SBOM for vulnerabilities:

```shell
brew install grype
grype sbom:sbom.json
```

  4. Add to CI:

```yaml
# GitHub Actions
- name: Generate SBOM
  uses: anchore/sbom-action@v0
  with:
    path: .
    format: cyclonedx-json
    output-file: sbom.json

- name: Scan SBOM
  uses: anchore/scan-action@v3
  with:
    sbom: sbom.json
    fail-build: true
    severity-cutoff: high
```

  5. Review license information in the SBOM

Deliverable: SBOM generation and scanning in CI

Part 7: create triage process (30 minutes)

Create a simple document:

# Security Findings Triage Process

## Severity levels
- **Critical/High**: Block merge, fix immediately
- **Medium**: Fix in current sprint
- **Low**: Add to backlog, fix when convenient

## Who triages
- [Name] reviews findings weekly
- Critical findings notify #security-alerts channel

## False positives
- Document in `.semgrep.yml` or tool config
- Require approval from [Name] to suppress

## Metrics tracked
- Open findings by severity
- Time from finding to fix
- Findings per deployment

Deliverable: Documented triage process
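To make the tracked metrics concrete, mean time from finding to fix can be computed from simple finding records exported from your ticket tracker. An illustrative sketch; the record fields (`severity`, `found`, `fixed`) are assumptions, not a standard schema:

```python
from datetime import date
from statistics import mean

def mean_days_to_fix(findings):
    """Mean days from found to fixed, grouped by severity (fixed findings only)."""
    by_severity = {}
    for f in findings:
        if f["fixed"] is None:  # still open, excluded from this metric
            continue
        by_severity.setdefault(f["severity"], []).append((f["fixed"] - f["found"]).days)
    return {sev: mean(days) for sev, days in by_severity.items()}

findings = [
    {"severity": "high", "found": date(2025, 1, 1), "fixed": date(2025, 1, 3)},
    {"severity": "high", "found": date(2025, 1, 1), "fixed": date(2025, 1, 5)},
    {"severity": "low", "found": date(2025, 1, 1), "fixed": None},
]
print(mean_days_to_fix(findings))  # {'high': 3}
```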

Artifacts from this chapter

By the end of this chapter, you should have:

  1. SAST scanner configured — Semgrep or CodeQL running on every PR
  2. Dependabot/Snyk enabled — Dependency scanning with alerts configured
  3. Container scanning — Trivy scanning Docker images (if applicable)
  4. IaC scanning — Checkov or tfsec scanning infrastructure code (if applicable)
  5. SBOM generation — Syft generating SBOMs with each build
  6. DAST baseline — ZAP or Nuclei running on staging deploys
  7. Security workflow file — Complete CI/CD security pipeline configuration
  8. Finding response playbook — Step-by-step process for handling scanner findings
  9. Escalation matrix — Who to contact for different severity levels
  10. Triage process document — How findings are prioritized and assigned

Talking to leadership

When someone asks why you're adding security scanning to CI/CD:

"I'm adding automated security checks to our deployment pipeline. These tools scan our code for vulnerabilities, check our dependencies for known security issues, and test the running application for common attack vectors. It catches problems before they reach production — SQL injection, vulnerable libraries, misconfigured servers. The tools are free and add maybe 2-3 minutes to our pipeline. The alternative is finding these issues after a breach, which is significantly more expensive in terms of both money and reputation."

Short version: "I'm adding automated security checks so we catch vulnerabilities before attackers do."

Self-check

Static analysis (SAST)

  • Semgrep or CodeQL running on every PR
  • High-severity findings block merge
  • False positive suppression process documented
  • Team knows how to read SAST findings

Dependency scanning (SCA)

  • Dependabot or Snyk enabled
  • Alerts configured and being reviewed
  • Critical dependency vulnerabilities fixed
  • Process for handling unfixable vulnerabilities

Container scanning

  • Trivy scanning images before push to registry
  • Base images updated to fix critical vulnerabilities
  • Scanning integrated into build pipeline

Infrastructure as Code (IaC)

  • Checkov or tfsec running on infrastructure changes
  • Critical misconfigurations (public buckets, open security groups) blocked
  • Intentional exceptions documented with skip comments

SBOM (Software Bill of Materials)

  • SBOM generated for each release
  • SBOMs scanned for vulnerabilities
  • License policy defined (acceptable/blocked licenses)
  • SBOMs stored with release artifacts

Dynamic testing (DAST)

  • Baseline scan running on staging
  • Security headers checked
  • Full scan scheduled (weekly or pre-release)

Finding response process

  • Finding response playbook documented
  • Escalation matrix defined
  • Risk acceptance process established
  • Finding documentation template in use

Process

  • Someone owns security finding triage
  • Severity thresholds defined
  • Metrics being tracked

Check off at least 18 of 28 items before moving on.

Security scanning tools reference

This section provides a comprehensive list of security scanning tools, organized from free/simple to enterprise/complex. Start with the free tier — it covers most needs. Move to paid tools when you need specific features, better support, or enterprise compliance.

SAST tools (static code analysis)

| Tool | Price | Languages | Best for | Link |
|---|---|---|---|---|
| Semgrep | Free (open source) | 30+ languages | General-purpose, custom rules | semgrep.dev |
| Bandit | Free (open source) | Python | Python-specific security | GitHub |
| gosec | Free (open source) | Go | Go-specific security | GitHub |
| Brakeman | Free (open source) | Ruby/Rails | Rails applications | brakemanscanner.org |
| ESLint security | Free (open source) | JavaScript | JS/Node.js security rules | GitHub |
| Psalm | Free (open source) | PHP | PHP type checking + security | psalm.dev |
| PHPStan | Free (open source) | PHP | PHP static analysis | phpstan.org |
| CodeQL | Free for public repos | 10+ languages | Deep semantic analysis | GitHub |
| SonarQube Community | Free (self-hosted) | 30+ languages | Code quality + security | sonarsource.com |
| Semgrep Pro | Paid | 30+ languages | Advanced rules, team features | semgrep.dev |
| SonarCloud | Free for public, paid for private | 30+ languages | Cloud-hosted SonarQube | sonarcloud.io |
| Snyk Code | Free tier, paid plans | 10+ languages | AI-powered, fast | snyk.io |
| Checkmarx | Enterprise | 30+ languages | Enterprise compliance | checkmarx.com |
| Veracode | Enterprise | 30+ languages | Enterprise, binary analysis | veracode.com |
| Fortify | Enterprise | 30+ languages | Enterprise, on-premise | microfocus.com |

Recommendation for small teams: Start with Semgrep (free, fast, good rules). Add CodeQL if you're on GitHub. Consider Snyk Code if you need more coverage.

SCA tools (dependency scanning)

| Tool | Price | Ecosystems | Best for | Link |
|---|---|---|---|---|
| npm audit | Free (built-in) | npm | Node.js projects | docs.npmjs.com |
| pip-audit | Free (open source) | pip | Python projects | GitHub |
| bundle-audit | Free (open source) | bundler | Ruby projects | GitHub |
| govulncheck | Free (official) | Go modules | Go projects | pkg.go.dev |
| composer audit | Free (built-in) | Composer | PHP projects | getcomposer.org |
| Dependabot | Free (GitHub built-in) | 15+ ecosystems | GitHub repos | GitHub |
| Trivy | Free (open source) | All major + containers | Universal scanner | trivy.dev |
| Grype | Free (open source) | All major + containers | Fast, SBOM support | GitHub |
| OSV-Scanner | Free (Google) | All major | Google's OSV database | GitHub |
| OWASP Dependency-Check | Free (open source) | Java, .NET, JS, Ruby | OWASP project | owasp.org |
| Renovate | Free (open source) | 50+ ecosystems | Dependency updates | GitHub |
| Snyk Open Source | Free tier, paid plans | All major | Best fix suggestions | snyk.io |
| Mend (WhiteSource) | Paid | All major | License compliance | mend.io |
| Black Duck | Enterprise | All major | Enterprise compliance | synopsys.com |
| JFrog Xray | Paid | All major | Artifact repository integration | jfrog.com |

Recommendation for small teams: Use Dependabot (free on GitHub) or Renovate. Add Trivy for containers. Snyk free tier is great if you need more visibility.

DAST tools (dynamic testing)

| Tool | Price | Type | Best for | Link |
|---|---|---|---|---|
| OWASP ZAP | Free (open source) | Full DAST | General web app testing | zaproxy.org |
| Nuclei | Free (open source) | Template-based | Fast, specific CVE checks | nuclei.projectdiscovery.io |
| Nikto | Free (open source) | Web server scanner | Server misconfigurations | GitHub |
| wapiti | Free (open source) | Web app scanner | Black-box testing | GitHub |
| Arachni | Free (open source) | Full DAST | Feature-rich scanner | GitHub |
| sqlmap | Free (open source) | SQL injection | SQL injection testing | sqlmap.org |
| StackHawk | Free tier, paid plans | API-focused | Modern APIs, CI/CD native | stackhawk.com |
| Burp Suite Community | Free | Manual testing | Manual security testing | portswigger.net |
| Burp Suite Pro | $449/year | Full DAST | Professional pen testing | portswigger.net |
| Acunetix | Paid | Full DAST | Automated web scanning | acunetix.com |
| Invicti (Netsparker) | Enterprise | Full DAST | Enterprise web apps | invicti.com |
| Qualys WAS | Enterprise | Full DAST | Enterprise, compliance | qualys.com |

Recommendation for small teams: Start with ZAP baseline scans in CI. Add Nuclei for CVE-specific checks. Burp Suite Community for manual testing.

Container and image scanning

| Tool | Price | Features | Best for | Link |
|---|---|---|---|---|
| Trivy | Free (open source) | Images, IaC, SBOM | Universal scanner | trivy.dev |
| Grype | Free (open source) | Images, SBOM | Fast, Anchore ecosystem | GitHub |
| Clair | Free (open source) | Images | Container registries | GitHub |
| Syft | Free (open source) | SBOM generation | Software Bill of Materials | GitHub |
| Docker Scout | Free tier | Images | Docker Hub integration | docker.com |
| Snyk Container | Free tier, paid plans | Images, Kubernetes | Fix recommendations | snyk.io |
| Anchore Enterprise | Paid | Images, policies | Enterprise compliance | anchore.com |
| Sysdig Secure | Paid | Runtime + scanning | Runtime security | sysdig.com |
| Prisma Cloud | Enterprise | Full CNAPP | Cloud-native security | paloaltonetworks.com |

Recommendation for small teams: Trivy does everything you need for free. Add Docker Scout if you're on Docker Hub.

Infrastructure as Code (IaC) scanning

| Tool | Price | Supported IaC | Best for | Link |
|---|---|---|---|---|
| Checkov | Free (open source) | Terraform, CloudFormation, K8s, ARM | Comprehensive IaC scanning | checkov.io |
| tfsec | Free (open source) | Terraform | Terraform-focused | GitHub |
| Terrascan | Free (open source) | Terraform, K8s, Helm, Dockerfile | Policy as code | GitHub |
| KICS | Free (open source) | 15+ IaC platforms | Broad coverage | kics.io |
| cfn-lint | Free (AWS official) | CloudFormation | AWS CloudFormation | GitHub |
| cfn_nag | Free (open source) | CloudFormation | CloudFormation security | GitHub |
| kubesec | Free (open source) | Kubernetes | K8s manifest security | kubesec.io |
| Trivy | Free (open source) | Terraform, K8s, Dockerfile | Universal (IaC + containers) | trivy.dev |
| Snyk IaC | Free tier, paid plans | Terraform, K8s, CloudFormation | Fix recommendations | snyk.io |
| Bridgecrew | Paid (now Prisma Cloud) | All major | Enterprise, Checkov backend | paloaltonetworks.com |

Recommendation for small teams: Start with Checkov — it's comprehensive and free. tfsec is great if you only use Terraform.

IaC scanning in CI:

```yaml
# GitHub Actions - Checkov
- name: Checkov
  uses: bridgecrewio/checkov-action@master
  with:
    directory: terraform/
    framework: terraform
    soft_fail: false
```

```yaml
# GitLab CI - tfsec
tfsec:
  stage: test
  image: aquasec/tfsec:latest
  script:
    - tfsec terraform/ --minimum-severity HIGH
```

SBOM generation and analysis

| Tool | Price | Type | Best for | Link |
|---|---|---|---|---|
| Syft | Free (open source) | SBOM generation | Universal generator | GitHub |
| cdxgen | Free (open source) | SBOM generation | CycloneDX format | GitHub |
| Trivy | Free (open source) | SBOM + scanning | All-in-one | trivy.dev |
| Grype | Free (open source) | SBOM vulnerability scan | Fast scanning | GitHub |
| OSV-Scanner | Free (Google) | SBOM vulnerability scan | Google's database | GitHub |
| OWASP Dependency-Track | Free (self-hosted) | SBOM platform | Full management | dependencytrack.org |
| FOSSA | Free tier, paid plans | License compliance | Deep license analysis | fossa.com |
| Anchore Enterprise | Paid | SBOM platform | Enterprise policies | anchore.com |
| Sonatype Nexus | Paid | SBOM + compliance | Repository integration | sonatype.com |

Recommendation for small teams: Use Syft to generate SBOMs, Grype to scan for vulnerabilities. Add OWASP Dependency-Track if you need centralized tracking.

Secret scanning

| Tool | Price | Features | Best for | Link |
|---|---|---|---|---|
| gitleaks | Free (open source) | Git history, pre-commit | Fast, configurable | gitleaks.io |
| truffleHog | Free (open source) | Git, S3, GCS, verified secrets | Deep scanning | GitHub |
| git-secrets | Free (AWS) | Pre-commit hooks | AWS credentials | GitHub |
| detect-secrets | Free (open source) | Baseline approach | Yelp's tool | GitHub |
| GitHub Secret Scanning | Free (GitHub built-in) | Push protection | GitHub repos | docs.github.com |
| GitLab Secret Detection | Free (GitLab built-in) | Pipeline integration | GitLab repos | docs.gitlab.com |
| GitGuardian | Free for individuals | Real-time monitoring | Developer-friendly | gitguardian.com |
| Snyk | Free tier | Part of Snyk Code | Unified platform | snyk.io |

Recommendation for small teams: Enable GitHub/GitLab built-in scanning. Add gitleaks as pre-commit hook. See the Secrets management chapter for detailed setup.

API security testing

| Tool | Price | Type | Best for | Link |
|---|---|---|---|---|
| OWASP ZAP | Free (open source) | OpenAPI/Swagger scanning | REST APIs | zaproxy.org |
| Nuclei | Free (open source) | Template-based | API CVE checks | nuclei.projectdiscovery.io |
| Postman | Free tier | API testing platform | Manual + automated | postman.com |
| StackHawk | Free tier, paid plans | API-first DAST | Modern APIs, GraphQL | stackhawk.com |
| Akto | Free tier | API discovery + testing | API inventory | akto.io |
| 42Crunch | Paid | OpenAPI security | API security platform | 42crunch.com |
| Salt Security | Enterprise | Runtime API protection | API threat detection | salt.security |
| Noname Security | Enterprise | Full API security | Enterprise APIs | nonamesecurity.com |

Recommendation for small teams: Use ZAP with OpenAPI import. Add StackHawk if you need better API support.

All-in-one platforms

These platforms combine multiple scanning types in one solution:

| Platform | Includes | Price | Best for | Link |
|---|---|---|---|---|
| GitHub Advanced Security | SAST (CodeQL), SCA, Secrets | $49/user/month | GitHub-native teams | github.com |
| GitLab Ultimate | SAST, SCA, DAST, Secrets, Container | $99/user/month | GitLab-native teams | gitlab.com |
| Snyk | SAST, SCA, Container, IaC | Free tier, paid plans | Developer-friendly | snyk.io |
| Sonar | SAST, code quality | Free tier (Cloud), paid | Code quality focus | sonarsource.com |
| Veracode | SAST, SCA, DAST | Enterprise | Large enterprises | veracode.com |
| Checkmarx One | SAST, SCA, DAST, IaC | Enterprise | Enterprise compliance | checkmarx.com |
| Synopsys | SAST, SCA, DAST | Enterprise | Enterprise, legacy | synopsys.com |

Recommendation for small teams: If you're on GitHub, enable free features first, then consider GitHub Advanced Security. Snyk's free tier is generous and covers most needs. Avoid enterprise platforms until you actually need enterprise features.

Quick start: minimal viable security pipeline

If you're just starting, here's the minimum setup using only free tools:

| Category | Tool | Why |
|---|---|---|
| SAST | Semgrep | Fast, good rules, free |
| SCA | Dependabot or Trivy | Built-in or universal |
| Secrets | gitleaks | Pre-commit + CI |
| Containers | Trivy | Scans everything |
| IaC | Checkov | Comprehensive, free |
| SBOM | Syft + Grype | Generate + scan |
| DAST | ZAP Baseline | Standard, reliable |

Total cost: $0. Setup time: 3-4 hours.

What's next

You've integrated security scanning into your CI/CD pipeline. Next chapter: container and cloud infrastructure security — securing Docker images, cloud configurations, and Infrastructure as Code.