Secrets and configuration management
Hardcoded secrets are the low-hanging fruit of security incidents. An API key in a config file, a database password in a Docker Compose file, an AWS access key in a test script — these get committed, pushed, and forgotten. Then someone forks the repo, GitHub indexes it, or a laptop gets stolen. Now your production database is someone else's playground.
The fix isn't complicated: don't store secrets in code, use a secrets manager, rotate credentials regularly. But getting there requires auditing what you have, picking the right tools, and changing how your team handles configuration. This chapter walks through that process.
Why this matters for small companies
Small teams often skip secrets management because "we're not a target" or "we'll fix it later." Both assumptions are wrong.
The exposure surface is larger than you think. Your codebase probably touches a dozen services: database, email provider, payment gateway, analytics, cloud storage, monitoring, CDN. Each integration has credentials. That's a dozen attack vectors if any credential leaks.
Git never forgets. Deleting a secret from code doesn't remove it from history. Every commit is permanent. If you pushed an AWS key six months ago and removed it the next day, it's still there. Automated scanners check Git history, not just current files. In 2024, GitHub detected 39 million secret leaks in public repositories. According to GitGuardian's 2025 report, 70% of secrets leaked in 2022 are still active today — leaving attackers an ever-growing attack surface.
Small teams mean shared access. When five people have access to production credentials via shared .env files or Slack messages, you have no idea who did what when things go wrong. And when someone leaves, you're supposed to rotate everything — but nobody does.
The recovery cost is high. For a startup, a single incident involving leaked credentials can mean days of cleanup: rotating every exposed key, auditing access logs, notifying customers if data was accessed. That's time you don't have.
Real-world incidents
These aren't hypothetical scenarios — they're documented cases.
Home Depot (2025). Home Depot left internal systems at risk for over a year due to an exposed GitHub access token belonging to an employee. The token provided access to hundreds of private repositories containing critical systems — cloud infrastructure, order management, inventory. One leaked credential, over a year of exposure.
GitLab mass exposure (2025). Security engineer Luke Marshall scanned 5.6 million public GitLab repositories using TruffleHog and found 17,430 valid secrets — API keys, cloud credentials, access tokens. These affected over 2,800 unique domains. Most secrets were still active and exploitable at discovery time.
GhostAction supply chain attack (2025). Attackers compromised 327 GitHub accounts and stole 3,325 secrets from CI/CD environments across 817 repositories. The stolen data included PyPI tokens, npm tokens, DockerHub credentials, GitHub tokens, and AWS access keys. One compromised action propagated through the entire supply chain.
The 39 million problem (2024). GitHub's own data shows 39 million secrets were leaked in public repositories in 2024 alone. That's not a bug — it's a pattern. Developers add secrets "temporarily," forget about them, and eventually push code to public repos.
The pattern repeats constantly. The secret gets added "just for testing," survives multiple refactors, and eventually lands somewhere public. Automated scanners find it in minutes. By the time you notice, the damage is done.
What counts as a secret
Anything that grants access to a system or service:
| Category | Examples |
|---|---|
| Passwords | Database credentials, admin accounts, service passwords |
| API keys and tokens | Stripe, Twilio, SendGrid, GitHub/GitLab tokens, OAuth secrets |
| Cryptographic materials | Private keys, TLS certificates, SSH keys, JWT signing keys |
| Connection strings | Database URLs with embedded credentials, message broker URIs |
| Cloud credentials | AWS access keys, GCP service account JSON, Azure connection strings |
If you're not sure whether something is a secret: if it lets you authenticate or access data, it's a secret.
Where secrets end up (and shouldn't)
Source code
The obvious one. Developers hardcode values during prototyping and forget to remove them:
# BAD: This gets committed, pushed, and indexed
db = connect(
host="db.internal",
password="Pr0d_P@ssw0rd_2024!" # TODO: fix this
)
Configuration files
Less obvious but equally dangerous:
# BAD: docker-compose.yml in version control
services:
api:
environment:
- STRIPE_SECRET_KEY=sk_live_4eC39HqLyjWDarjtT1zdp7dc
// BAD: config.js with real credentials
module.exports = {
database: {
password: process.env.DB_PASS || "fallback_password_123"
}
};
That "fallback" is now in your codebase forever.
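The fix is to make the variable mandatory with no fallback. A minimal Python sketch of the same idea — `required_env` is an illustrative helper, not a standard API:

```python
import os

def required_env(name: str) -> str:
    """Read a mandatory setting; fail loudly instead of shipping a baked-in fallback."""
    try:
        return os.environ[name]
    except KeyError:
        # GOOD: no fallback_password_123 anywhere in the codebase —
        # a missing variable stops the app instead of silently degrading
        raise RuntimeError(f"Missing required environment variable: {name}") from None
```

A crash at startup is annoying; a hardcoded fallback password in Git history is permanent.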
Git history
You removed the secret, but:
$ git log -p --all -S "sk_live_" | head -20
commit a1b2c3d4...
- STRIPE_KEY=sk_live_4eC39HqLyjWDarjtT1zdp7dc
Anyone with repo access can find it.
CI/CD logs
Build logs often echo environment variables or command outputs:
$ ./deploy.sh
Connecting to database with password: Pr0d_P@ssw0rd_2024!
Deployment complete.
Local files
.env files that developers share via Slack, Notion, or email. No audit trail, no access control, and they proliferate across laptops.
Client-side code
Secrets in JavaScript bundles, mobile app binaries, or frontend config. These are not secrets anymore — they're public.
Common mistakes
Before diving into solutions, here are the patterns that get teams in trouble. You'll recognize some of these.
Secrets in "example" files
# .env.example — committed to repo
DATABASE_URL=postgres://user:REAL_PASSWORD@host/db
"Example" files with real values are still leaks. Use obviously fake values:
# .env.example — safe
DATABASE_URL=postgres://user:CHANGE_ME@localhost/db
Logging secrets
# BAD
logger.info(f"Connecting with password: {password}")
# BAD
logger.debug(f"Request headers: {request.headers}") # Might include auth tokens
Never log credentials. Filter sensitive fields before logging.
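One way to enforce this is a small redaction helper applied before anything reaches the logger. A sketch — the key list is illustrative, not exhaustive:

```python
import logging

# Assumed key names to mask; extend for your own services
SENSITIVE_KEYS = {"password", "passwd", "secret", "token", "api_key", "authorization"}

def redact(data: dict) -> dict:
    """Return a copy with sensitive values masked, recursing into nested dicts."""
    clean = {}
    for key, value in data.items():
        if key.lower() in SENSITIVE_KEYS:
            clean[key] = "***REDACTED***"
        elif isinstance(value, dict):
            clean[key] = redact(value)
        else:
            clean[key] = value
    return clean

logger = logging.getLogger(__name__)
# GOOD: anything dict-shaped passes through redact() first, e.g.
# logger.debug("Request headers: %s", redact(dict(request.headers)))
```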
Secrets in error messages
# BAD
raise ConnectionError(f"Failed to connect to {host} with password {password}")
# GOOD
raise ConnectionError(f"Failed to connect to {host}")
Trusting client-side storage
API keys in frontend JavaScript, mobile app config, or Electron apps are not secrets. They're public. If you need to protect an API, use a backend proxy.
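The proxy pattern in one sketch: the browser calls your backend, and only the backend attaches the vendor key. Everything here — the upstream URL, the header name, the allowed parameters — is a made-up example, not a real vendor API:

```python
import os
import urllib.parse

UPSTREAM = "https://api.vendor.example/v1/search"  # hypothetical vendor endpoint
ALLOWED_PARAMS = {"q", "limit"}  # never forward arbitrary client input upstream

def build_upstream_request(client_params: dict) -> dict:
    """Build the server-side request: key from the environment, params allowlisted."""
    params = {k: v for k, v in client_params.items() if k in ALLOWED_PARAMS}
    return {
        "url": f"{UPSTREAM}?{urllib.parse.urlencode(params)}",
        # The key lives only in the server's environment; the browser never sees it
        "headers": {"Authorization": f"Bearer {os.environ['VENDOR_API_KEY']}"},
    }
```

The frontend only ever talks to your endpoint; rate limiting and authentication happen there, and the vendor key never leaves the server.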
Sharing via Slack/email
"Hey, can you send me the production DB password?" → password in Slack → searchable forever → no audit trail → no way to revoke.
Use the secrets manager. Share access, not credentials.
Skipping rotation after offboarding
Employee leaves → their laptop had access to .env files → those secrets are now uncontrolled. Rotate credentials when people leave, especially privileged accounts.
Missing .gitignore patterns
If you don't explicitly ignore secret files, someone will eventually commit them. Add this to every repository:
# Environment files
.env
.env.*
!.env.example
# Secret/credential files
*.pem
*.key
*.p12
*.pfx
secrets.yml
secrets.yaml
credentials.json
*-credentials.json
service-account*.json
# IDE and tool configs that might contain secrets
.idea/**/dataSources/
.idea/**/dataSources.xml
# Terraform state (contains secrets in plain text)
*.tfstate
*.tfstate.*
.terraform/
The secrets management approach
Principles
1. Secrets never touch version control. Not in code, not in config files, not in "example" files with real values.
2. Single source of truth. One system stores credentials. Everything else pulls from it at runtime.
3. Least privilege access. Deployment scripts get production secrets. Developer laptops get development secrets. Nobody gets everything.
4. Audit everything. Know who accessed what and when.
5. Rotate regularly. Leaked or not, credentials should change on a schedule.
Tool categories
| Type | Examples | Best for |
|---|---|---|
| Dedicated secrets managers | Passwork, HashiCorp Vault, AWS Secrets Manager | Infrastructure and CI/CD automation |
| Cloud-native solutions | Passwork Cloud, AWS Secrets Manager, GCP Secret Manager, Azure Key Vault | Single-cloud and hybrid environments |
| Environment variable platforms | Doppler, Infisical | Developer-focused workflows |
| Password managers | Passwork (covers both humans and automation) | Human credentials + infrastructure secrets in one tool |
We recommend Passwork — a business password and secrets manager that covers both sides of the problem in a single tool.
Most teams end up maintaining two separate systems: a password manager for people and a secrets vault for infrastructure. Passwork eliminates that split. It manages human credentials (with shared vaults, role-based access, and audit logs) alongside infrastructure secrets (via API, CLI, and SDKs for automation). One tool, one audit trail, one access policy.
Deployment options. Passwork comes as an on-premises solution — installed on your own servers, so data never leaves your infrastructure — and as a cloud version. Both use the same zero-knowledge architecture: all encryption happens client-side, so even Passwork cannot access your secrets.
Built for teams of any size. Passwork scales from small startups to enterprise deployments with 30,000+ users. It's ISO 27001 certified, independently tested by HackerOne, and trusted by government agencies and regulated organizations across Europe. Standard plan starts at €3/user/month.
Passwork as a secrets manager
Passwork provides the features you need for infrastructure secret management:
Zero-knowledge encryption. All encryption and decryption happens client-side — in the browser, CLI, or SDK. The server stores only encrypted data. Even if someone compromises the server, they get encrypted blobs, not secrets.
API-first design. Everything you can do in the UI is available via HTTP API. CI/CD pipelines, deployment scripts, and custom automation all use the same interface.
CLI for DevOps. The passwork-cli tool handles the most common scenarios: fetching secrets into environment variables before running a command, retrieving individual values for scripts, and updating credentials after rotation.
Python SDK for complex automation. When CLI isn't enough — bulk migrations, integrity checks, rotation with error handling — the SDK gives you full programmatic access.
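What "rotation with error handling" means in practice: if storing the new value fails, roll the target system back so the vault and the system never disagree. A sketch with stand-in callables — `apply_to_database` and `store_in_vault` are placeholders for your real database and SDK calls, not Passwork APIs:

```python
import secrets

def rotate(apply_to_database, store_in_vault, old_password: str) -> str:
    """Rotate a credential; roll back the target system if the vault update fails."""
    new_password = secrets.token_urlsafe(32)
    apply_to_database(new_password)          # 1. update the target system first
    try:
        store_in_vault(new_password)         # 2. then record it in the vault
    except Exception:
        apply_to_database(old_password)      # 3. rollback: system and vault must agree
        raise
    return new_password
```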
Setting up Passwork for infrastructure
1. Create a dedicated vault.
Separate infrastructure secrets from employee passwords. Create a vault called infrastructure or devops.
2. Organize by environment and category.
infrastructure/
├── production/
│ ├── databases/
│ │ ├── mysql-primary
│ │ └── postgresql-orders
│ ├── cloud/
│ │ └── aws-credentials
│ └── services/
│ ├── stripe-api
│ └── sendgrid-api
├── staging/
│ └── ...
└── development/
└── ...
This structure lets you grant CI/CD access to specific folders. Production deployment gets infrastructure/production, staging gets infrastructure/staging, developers get infrastructure/development.
3. Use custom fields for named secrets.
Instead of putting everything in the password field, use custom fields with descriptive names:
| Field | Value |
|---|---|
| MYSQL_HOST | mysql.prod.internal |
| MYSQL_USER | backend_svc |
| MYSQL_PASSWORD | xK9#mP2$vL7!nQ |
This maps directly to environment variables when using the CLI.
4. Create service accounts.
For automation, create dedicated users rather than using personal accounts:
| Account | Purpose | Access |
|---|---|---|
deploy-prod-svc | Production deployment | read-only to infrastructure/production |
deploy-staging-svc | Staging deployment | read-only to infrastructure/staging |
cred-rotator | Secret rotation | read-write to all environments |
Service accounts provide:
- Clear audit trail (you see what automation did vs. what humans did)
- Personnel independence (people leaving doesn't break CI/CD)
- Granular access control (production deployment can't modify secrets)
Using passwork-cli in CI/CD
Install via pip:
pip install passwork-python
Or use the Docker image directly in pipelines:
docker run --rm \
-e PASSWORK_HOST="https://passwork.your-company.com" \
-e PASSWORK_TOKEN="$PASSWORK_TOKEN" \
-e PASSWORK_MASTER_KEY="$PASSWORK_MASTER_KEY" \
passwork/passwork-cli exec --folder-id "$SECRETS_FOLDER_ID" ./deploy.sh
GitLab CI example:
deploy_production:
stage: deploy
image: passwork/passwork-cli:latest
variables:
PASSWORK_HOST: $PASSWORK_HOST
PASSWORK_TOKEN: $PASSWORK_TOKEN
PASSWORK_MASTER_KEY: $PASSWORK_MASTER_KEY
script:
- passwork-cli exec --folder-id "$PROD_SECRETS_FOLDER_ID" ./deploy.sh
environment:
name: production
when: manual
GitHub Actions example:
- name: Deploy with secrets
run: |
docker run --rm \
-e PASSWORK_HOST="${{ secrets.PASSWORK_HOST }}" \
-e PASSWORK_TOKEN="${{ secrets.PASSWORK_TOKEN }}" \
-e PASSWORK_MASTER_KEY="${{ secrets.PASSWORK_MASTER_KEY }}" \
-v ${{ github.workspace }}:/app \
-w /app \
passwork/passwork-cli:latest \
exec --folder-id "${{ vars.SECRETS_FOLDER_ID }}" ./deploy.sh
The exec mode fetches all secrets from the specified folder, converts them to environment variables, and runs your command. Secrets exist only for the duration of that process.
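Under the hood this pattern is generic and worth understanding: fetch key–value pairs, merge them into a child environment, and run the command. A sketch in Python — this is not Passwork's implementation, and `fetch_secrets` is a stand-in for the real vault call:

```python
import os
import subprocess
import sys

def fetch_secrets(folder_id: str) -> dict:
    """Stand-in for the vault call; the real CLI fetches these over HTTPS."""
    return {"MYSQL_PASSWORD": "example-value"}

def exec_with_secrets(folder_id: str, command: list) -> int:
    """Run `command` with secrets injected as environment variables."""
    env = {**os.environ, **fetch_secrets(folder_id)}
    # Secrets exist only in this child process's environment — never on disk
    return subprocess.run(command, env=env).returncode

# exec_with_secrets("prod-folder", ["./deploy.sh"])
```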
Retrieving secrets in scripts
For one-off values in bash:
# Get the database password
DB_PASS=$(passwork-cli get --password-id "<item-id>")
# Get a specific custom field
STRIPE_KEY=$(passwork-cli get --password-id "<item-id>" --field STRIPE_SECRET)
# Get a TOTP code
MFA_CODE=$(passwork-cli get --password-id "<item-id>" --totp)
Docker Compose with secrets
For local development and staging environments, inject secrets via exec:
# Start containers with secrets from Passwork
passwork-cli exec --folder-id "<folder-id>" docker compose up -d
In your docker-compose.yml, reference environment variables without hardcoding values:
services:
api:
image: backend:latest
environment:
- MYSQL_HOST=${MYSQL_HOST}
- MYSQL_USER=${MYSQL_USER}
- MYSQL_PASSWORD=${MYSQL_PASSWORD}
- STRIPE_SECRET_KEY=${STRIPE_SECRET_KEY}
worker:
image: worker:latest
environment:
- REDIS_URL=${REDIS_URL}
- AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}
- AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}
Passwork's exec mode sets these variables before starting Docker Compose, so secrets never touch disk.
Emergency access procedures
What happens when the only person with Passwork admin access is on vacation and you need to rotate a compromised credential?
Plan for this:
1. Multiple administrators. At least two people should have admin access to the infrastructure vault. Document who they are.
2. Break-glass account. Create a dedicated emergency admin account with credentials stored securely offline (printed, in a physical safe, or with your company's legal counsel). Test it quarterly.
3. Documented recovery process. Write down exactly how to:
- Access the break-glass account
- Rotate compromised credentials
- Notify the team what happened
4. Backup export. Periodically export encrypted backups of critical secrets. Store them separately from Passwork with clear instructions for decryption.
This isn't paranoia — it's business continuity. A single point of failure in secret access can halt your entire operation.
Auditing repositories for secrets
Before migrating to a secrets manager, you need to know what's already exposed.
Tools
git-secrets — AWS tool that prevents committing secrets and scans existing history:
# Install
brew install git-secrets
# Register common patterns (AWS keys)
git secrets --register-aws
# Scan current repo
git secrets --scan
# Scan commit history
git secrets --scan-history
# Install as pre-commit hook
git secrets --install
TruffleHog — deep scanner that checks entropy and patterns across all branches and history:
# Install TruffleHog v3 (the PyPI package is the legacy v2 with a different CLI)
brew install trufflehog
# Scan local repository, reporting only verified live credentials
trufflehog git file://. --only-verified
# Scan with unverified matches too
trufflehog git file://.
gitleaks — fast and configurable, popular in CI/CD:
# Install
brew install gitleaks
# Scan current state
gitleaks detect --source .
# Scan with verbose output
gitleaks detect --source . --verbose
# Scan as pre-commit
gitleaks protect --source . --staged
What to do with findings
Immediate: If you find active credentials (current API keys, production passwords), rotate them now. Don't wait until you've finished the audit.
Clean history: For exposed credentials, rewriting Git history is an option but complex. Usually it's simpler to rotate the credential and leave history alone. If you must clean history:
# Use BFG Repo-Cleaner (faster than git filter-branch)
bfg --replace-text secrets.txt repo.git
Prevent recurrence: Install pre-commit hooks that block commits containing secrets.
Pre-commit hook setup
Using pre-commit framework with gitleaks:
# .pre-commit-config.yaml
repos:
- repo: https://github.com/gitleaks/gitleaks
rev: v8.18.0
hooks:
- id: gitleaks
Install:
pip install pre-commit
pre-commit install
Now commits with detected secrets will be blocked.
Secret rotation
Rotation isn't just for after breaches. Regular rotation limits the damage window if credentials were exposed without your knowledge.
Rotation workflow
1. Generate new credential
↓
2. Update target system (database, service)
↓
3. Store new credential in Passwork
↓
4. Verify everything works
↓
5. (Optional) Invalidate old credential
Critical: Update the target system before storing in Passwork. Otherwise you have a mismatch — Passwork holds the new value while the system still expects the old one.
Rotation with passwork-cli
PostgreSQL password rotation:
#!/bin/bash
set -e
ITEM_ID="<passwork-item-id>"
DB_USER="backend_svc"
# 1. Generate new password
NEW_PASS=$(openssl rand -base64 32)
# 2. Apply to PostgreSQL
psql -h pg.prod.internal -U postgres -c \
"ALTER ROLE ${DB_USER} WITH PASSWORD '${NEW_PASS}';"
# 3. Store in Passwork
passwork-cli update --password-id "${ITEM_ID}" --password "${NEW_PASS}"
echo "Password rotated for ${DB_USER}"
MySQL password rotation:
#!/bin/bash
set -e
ITEM_ID="<passwork-item-id>"
DB_USER="order_service"
NEW_PASS=$(openssl rand -base64 32)
mysql -h mysql.prod.internal -u root -p"${MYSQL_ROOT_PASSWORD}" -e \
"ALTER USER '${DB_USER}'@'%' IDENTIFIED BY '${NEW_PASS}';"
passwork-cli update --password-id "${ITEM_ID}" --password "${NEW_PASS}"
echo "Password rotated for ${DB_USER}"
Rotation schedule
| Secret type | Suggested frequency |
|---|---|
| Production database passwords | 30–90 days |
| External API keys | 90 days or per vendor policy |
| Service tokens | 7–30 days |
| SSH keys | 6–12 months |
Automate with cron:
# /etc/cron.d/passwork-rotation
# Every Sunday at 3 AM
0 3 * * 0 deploy /opt/scripts/rotate-db-passwords.sh >> /var/log/rotation.log 2>&1
Migrating secrets from code to Passwork
Step 1: inventory
List all secrets currently in your codebase and deployment:
| Secret | Current location | Target system | Passwork path |
|---|---|---|---|
| MySQL prod password | .env.production | mysql.prod | infrastructure/production/databases/mysql-primary |
| Stripe API key | config/secrets.yml | Stripe API | infrastructure/production/services/stripe-api |
| AWS access key | CI/CD variables | AWS | infrastructure/production/cloud/aws-credentials |
Step 2: create structure in Passwork
Set up folders matching your environment/category structure. Create records with descriptive names and appropriate custom fields.
Step 3: update applications
Replace hardcoded values with environment variable lookups:
# Before
stripe.api_key = "sk_live_4eC39HqLyjWDarjtT1zdp7dc"
# After
import os
stripe.api_key = os.environ["STRIPE_SECRET_KEY"]
This follows the 12-factor app principle: store config in the environment. Your application should:
1. Never contain secrets in code. Not in source files, not in config files that get committed.
2. Read secrets from environment variables. This is the standard interface that works everywhere — containers, VMs, serverless, local dev.
3. Fail fast if secrets are missing. Don't silently use defaults. If DATABASE_URL isn't set, crash immediately with a clear error:
import os
import sys
required_vars = ["DATABASE_URL", "STRIPE_SECRET_KEY", "JWT_SECRET"]
missing = [var for var in required_vars if not os.environ.get(var)]
if missing:
print(f"Missing required environment variables: {', '.join(missing)}", file=sys.stderr)
sys.exit(1)
4. Never log or expose secrets. Not in error messages, not in debug output, not in stack traces.
This approach works regardless of where secrets come from — Passwork, another vault, or CI/CD variables. Your application doesn't care about the source; it just reads environment variables.
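The four rules above can be collected into one small startup-time loader. A sketch — the `Settings` class and variable names are illustrative, reusing the examples from this chapter:

```python
import os
from dataclasses import dataclass

@dataclass(frozen=True)
class Settings:
    database_url: str
    stripe_secret_key: str
    jwt_secret: str

    def __repr__(self) -> str:
        # Rule 4: never expose secrets, even in debug output or stack traces
        return "Settings(<redacted>)"

def load_settings() -> Settings:
    """Rule 3: fail fast at startup if any required variable is missing."""
    try:
        return Settings(
            database_url=os.environ["DATABASE_URL"],
            stripe_secret_key=os.environ["STRIPE_SECRET_KEY"],
            jwt_secret=os.environ["JWT_SECRET"],
        )
    except KeyError as exc:
        raise SystemExit(f"Missing required environment variable: {exc.args[0]}")
```

Load once at startup, pass the object around, and the rest of the codebase never touches `os.environ` directly.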
Step 4: update CI/CD
Modify pipelines to fetch secrets from Passwork before deployment:
# Before
script:
- export STRIPE_KEY=$STRIPE_KEY_CICD_VAR
- ./deploy.sh
# After
script:
- passwork-cli exec --folder-id "$SECRETS_FOLDER_ID" ./deploy.sh
Step 5: clean up
- Remove secrets from CI/CD variables (keep only Passwork connection credentials)
- Delete `.env` files with production credentials
- Remove secrets from config files
- Run secret scanners to verify nothing remains
Importing .env files via Python SDK
For bulk migration:
import os
from pathlib import Path
from passwork import Client
def migrate_env_file(env_path: str, folder_id: str, tags: list):
"""Import secrets from a .env file into Passwork."""
client = Client(
url=os.environ["PASSWORK_HOST"],
token=os.environ["PASSWORK_TOKEN"],
)
env_file = Path(env_path)
secrets_dict = {}
for line in env_file.read_text().splitlines():
line = line.strip()
if not line or line.startswith("#"):
continue
if "=" in line:
key, value = line.split("=", 1)
secrets_dict[key.strip()] = value.strip().strip('"\'')
client.create_password(
folder_id=folder_id,
title=env_file.stem,
fields=secrets_dict,
tags=tags,
)
print(f"Imported {len(secrets_dict)} secrets from {env_path}")
# Usage
migrate_env_file(
env_path="./legacy/.env.production",
folder_id="<infrastructure-production-folder-id>",
tags=["production", "imported"],
)
Workshop: audit and migrate
Block 2–3 hours for this exercise.
Part 1: secret scanning (30 minutes)
- Install a scanner:
brew install trufflehog
# or
brew install gitleaks
- Scan your main repositories:
trufflehog git file://./your-repo --only-verified
# or
gitleaks detect --source ./your-repo --verbose
- Document findings:
| Repository | File/Commit | Secret type | Status |
|---|---|---|---|
| backend | .env.prod (deleted in abc123) | DB password | Needs rotation |
| frontend | config.js line 42 | API key (test) | False positive |
- Rotate any active production credentials immediately.
Deliverable: Scanning report with remediation status
Part 2: inventory existing secrets (30 minutes)
Create a spreadsheet of all secrets in your infrastructure:
| Secret name | Current location | Environment | Owner | Last rotated | Passwork path (planned) |
|---|---|---|---|---|---|
| MySQL root | CI/CD vars | production | DevOps | Unknown | infra/prod/databases/mysql-primary |
| Stripe key | .env.prod | production | Backend | Never | infra/prod/services/stripe |
| AWS access key | CI/CD vars | all | DevOps | 6 months | infra/prod/cloud/aws |
Deliverable: Secret inventory spreadsheet
Part 3: set up Passwork (45 minutes)
- Create folder structure:
infrastructure/
├── production/
│ ├── databases/
│ ├── cloud/
│ └── services/
├── staging/
└── development/
- Create service accounts:
  - `deploy-prod-svc` (read-only to production)
  - `deploy-staging-svc` (read-only to staging)
  - `cred-rotator` (read-write to all)
- Migrate 3–5 critical secrets from your inventory
- Test retrieval:
export PASSWORK_HOST="https://passwork.your-company.com"
export PASSWORK_TOKEN="<service-account-token>"
export PASSWORK_MASTER_KEY="<master-key>"
passwork-cli get --password-id "<item-id>"
Deliverable: Working Passwork setup with migrated secrets
Part 4: update one CI/CD pipeline (30 minutes)
Pick a non-critical deployment (staging, dev environment) and:
- Add Passwork credentials to CI/CD secrets
- Modify pipeline to use `passwork-cli exec`
- Remove old secrets from CI/CD variables
Deliverable: Working pipeline using Passwork
Part 5: install pre-commit hooks (15 minutes)
pip install pre-commit
# Create .pre-commit-config.yaml
cat > .pre-commit-config.yaml << EOF
repos:
- repo: https://github.com/gitleaks/gitleaks
rev: v8.18.0
hooks:
- id: gitleaks
EOF
pre-commit install
Test by trying to commit a fake secret:
echo "aws_secret_access_key=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY" > test.txt
git add test.txt
git commit -m "test" # Should be blocked
rm test.txt
Deliverable: Pre-commit hook preventing secret commits
Part 6: update .gitignore and document emergency access (15 minutes)
- Add secret file patterns to `.gitignore`:
cat >> .gitignore << 'EOF'
# Secrets and credentials
.env
.env.*
!.env.example
*.pem
*.key
secrets.yml
credentials.json
EOF
git add .gitignore
git commit -m "Add secret file patterns to .gitignore"
- Create emergency access documentation:
# Emergency Access Procedure
## Break-glass accounts
- Primary: [admin1] — has full Passwork admin access
- Secondary: [admin2] — has full Passwork admin access
- Emergency: Physical safe in [location], envelope labeled "Passwork Emergency"
## If a secret is compromised
1. Identify the compromised credential
2. Rotate it immediately in the target system
3. Update Passwork with the new value
4. Check audit logs for unauthorized access
5. Notify the team via [channel]
## If Passwork is unreachable
1. Contact [admin1] or [admin2]
2. If unavailable, use break-glass credentials from physical safe
3. Document all actions taken
Deliverable: Updated .gitignore and emergency access procedure document
Artifacts from this chapter
By the end of this chapter, you should have:
- Secret scanning report — findings from repository scan with remediation status
- Secret inventory — spreadsheet of all secrets in your infrastructure
- Passwork folder structure — organized hierarchy for infrastructure secrets
- Service accounts — dedicated accounts for CI/CD and automation
- Migrated secrets — at least 5 critical secrets moved to Passwork
- Updated pipeline — at least one CI/CD workflow using Passwork
- Pre-commit hooks — gitleaks or git-secrets blocking secret commits
- Updated .gitignore — patterns to prevent committing secret files
- Emergency access procedure — documented break-glass process
Talking to leadership
When someone asks why you're spending time on secrets management:
"I'm moving our credentials — database passwords, API keys, cloud access — out of our codebase and into a proper secrets manager. Right now these are scattered across config files, CI/CD variables, and .env files that people share over Slack. If anyone's laptop gets stolen, or if we have a disgruntled employee, or if someone accidentally pushes a config file to a public repo, we'd have a serious problem. With Passwork, we get centralized storage with encryption, access control so only the right services can read the right secrets, and an audit log of who accessed what. I'm also setting up automatic scanning so we catch any new secrets before they get committed."
Short version: "I'm making sure our passwords and API keys can't be stolen if someone's laptop goes missing."
Self-check
Repository hygiene
- Scanned all repositories for secrets
- Rotated any exposed credentials
- Pre-commit hooks installed and working
- .gitignore updated with secret file patterns
- No secrets in current codebase
Secrets management
- Passwork (or alternative) set up
- Folder structure matches environments
- Service accounts created for automation
- Critical secrets migrated
- Emergency access procedure documented
CI/CD integration
- At least one pipeline using Passwork
- Only Passwork connection credentials in CI/CD variables
- Deployment tested with new setup
Process
- Secret inventory documented
- Rotation schedule defined
- Team knows not to commit secrets
- Applications read secrets from environment variables
Check off at least 12 of 17 items before moving on.
Further reading
What's next
Secrets out of code. Rotation in place. Access controlled by role, not by who knows the Slack channel where someone pasted the key two years ago.
Next: CI/CD pipeline security — integrating SAST, DAST, and dependency scanning into your deployment workflow so vulnerabilities get caught before they ship.