# Creating security policies and procedures
This chapter covers creating three essential policies: Acceptable Use Policy (what employees can and can't do), Incident Response Plan (what to do when something goes wrong), and Data Classification Policy (how to handle different types of data). You'll get templates you can adapt in an afternoon, not frameworks that require a committee.
## Why policies matter for small companies
"We're only 20 people, everyone knows what to do" — this works until it doesn't. Policies matter because:
People leave and join. That tribal knowledge in everyone's head? It walks out the door when someone quits. New hires have no idea what's acceptable unless you write it down.
Incidents happen fast. When you're dealing with a breach at 2 AM, you don't have time to figure out who to call, what to preserve, or what to tell customers. You need a plan you can follow on autopilot.
Liability protection. If an employee does something stupid and you have no policy prohibiting it, good luck in court. Written policies establish expectations and provide legal cover.
Customer and partner requirements. Even small B2B companies get asked "do you have security policies?" by prospects. Having real policies (not just "yes we care about security") wins deals.
Insurance requirements. Cyber insurance increasingly requires documented policies. No policies = higher premiums or denied claims.
The goal isn't bureaucracy. It's having clear answers to common questions so you don't reinvent the wheel every time something comes up.
## Policy writing principles
Before diving into specific policies, understand what makes a policy effective:
### Keep it short
If your policy is longer than 3 pages, people won't read it. A policy nobody reads is worse than no policy — it creates false confidence.
Target lengths:
- Acceptable Use Policy: 2-3 pages
- Incident Response Plan: 3-5 pages
- Data Classification: 1-2 pages
### Use plain language
Write for your actual employees, not lawyers or auditors. If you need a lawyer to interpret your policy, rewrite it.
```
BAD:  "Employees shall refrain from utilizing corporate computing resources
      for purposes not directly related to authorized business activities."

GOOD: "Don't use work computers for personal stuff that could get us in trouble:
      pirated software, inappropriate content, or running a side business."
```
### Be specific about consequences
Vague threats don't work. "Violations may result in disciplinary action" means nothing. Be concrete:
```
BAD:  "Violations of this policy may result in disciplinary action."

GOOD: "First violation: Verbal warning and required security training.
      Second violation: Written warning in personnel file.
      Serious violations (data breach, malware infection): Immediate termination possible."
```
### Include the "why"
People follow rules they understand. Explain the reasoning:
```
BAD:  "Personal devices must not be connected to the corporate network."

GOOD: "Personal devices can't connect to our network because we can't verify
      they're secure. Infected personal devices have caused major breaches
      at other companies. Use guest WiFi for personal phones."
```
### Make exceptions clear
Every rule has exceptions. Document them or people will make up their own:
**Policy:** Software must be approved before installation.
**Exception:** Development team members may install packages via npm/pip/brew
for their projects without approval. However, packages in production
deployments must be scanned for vulnerabilities.
## Learn from companies that publish their policies
You don't have to start from scratch. Several tech companies publish their security policies and handbooks openly. Study these for structure, language, and scope — then adapt for your context.
### Public policy examples
| Company | What they publish | Why it's useful | Link |
|---|---|---|---|
| GitLab | Complete security handbook: data classification, incident response, acceptable use, on-call procedures | Extremely detailed, covers edge cases, good for tech companies | handbook.gitlab.com/handbook/security |
| PagerDuty | Incident Response documentation (open source) | The gold standard for IR process, used by thousands of companies | response.pagerduty.com |
| Atlassian | Incident management handbook and playbooks | Great severity definitions and communication templates | atlassian.com/incident-management |
| Mattermost | Security policies in public handbook | Good example for smaller tech companies | handbook.mattermost.com |
| Basecamp | Employee handbook on GitHub | Practical, plain-language policies | github.com/basecamp/handbook |
### Free policy templates
| Resource | What you get | Best for | Link |
|---|---|---|---|
| SANS Policy Templates | 60+ ready-to-use templates (AUP, IRP, Password Policy, Remote Access, etc.) | Quick start with industry-standard language | sans.org/information-security-policy |
| NIST Small Business Cybersecurity | Guides, checklists, and frameworks designed for SMB | Non-technical leadership buy-in | nist.gov/itl/smallbusinesscyber |
| CIS Controls | Prioritized security controls with implementation groups by company size | Structured approach, IG1 is perfect for small companies | cisecurity.org/controls |
| CISA Incident Response Playbooks | Federal playbooks for ransomware, vulnerability response | When you need detailed step-by-step IR guides | cisa.gov/sites/default/files/publications/Federal_Government_Cybersecurity_Incident_and_Vulnerability_Response_Playbooks_508C.pdf |
## Quick-start: minimum viable policies
If you need policies TODAY and have limited time:
1. Acceptable Use Policy (30 minutes)
- Download SANS Acceptable Use Policy template
- Replace company name and contact info
- Remove sections that don't apply
- Add your specific tools and services
- Done — iterate later
2. Incident Response Plan (1 hour)
- Study PagerDuty's response.pagerduty.com structure
- Copy their severity definitions
- Add your team's contact info
- Create basic Slack channel naming convention
- You can add playbooks later
3. Data Classification (20 minutes)
- Use 4-level scheme: Public / Internal / Confidential / Restricted
- List 3-5 examples for each level from your actual data
- Define where each level can be stored
- That's it — expand as needed
A 2-page policy you actually use beats a 20-page policy nobody reads. Ship v1.0 fast, then improve based on real questions and incidents.
## Real-world incidents: what happens without policies
These aren't scare tactics — they're lessons from companies that learned the hard way.
### No Incident Response Plan
Uber (2016): When attackers accessed data of 57 million users, Uber had no documented IR process. Instead of proper response, they paid the attackers $100,000 to delete the data and stay quiet. Result: $148 million settlement, criminal charges against the CISO, massive reputation damage. A documented IRP would have guided proper breach notification.
Equifax (2017): The breach affecting 147 million people was made worse by chaotic response. No clear roles, delayed patching process, no communication plan. The CEO testified to Congress that they didn't know who was responsible for patching. Total cost exceeded $1.4 billion.
### No Acceptable Use Policy
Twitter (2020): Attackers used phone-based social engineering on employees to gain access to internal tools, then hijacked high-profile accounts (Obama, Elon Musk, Apple). A clear AUP about credential handling and social engineering reporting might have helped employees recognize and report the attack earlier.
SolarWinds (2020): An intern reportedly used the password "solarwinds123" on a critical server. Without enforced password policies and clear acceptable use guidelines, basic security hygiene failed.
### No Data Classification
Capital One (2019): A misconfigured WAF exposed 100 million customer records. Part of the problem: no clear data classification meant sensitive data was stored alongside less critical data, increasing exposure. Cost: $80 million fine, $190 million settlement.
The pattern: companies without documented policies make worse decisions under pressure, take longer to respond, and face harsher regulatory consequences.
## Acceptable Use Policy (AUP)
The AUP defines what employees can and can't do with company resources: computers, network, email, cloud services, and data. It's the foundation of all security policies.
### What to cover
- Purpose and scope — who does this apply to, what resources does it cover
- General principles — core expectations: professionalism, no illegal activity
- Account and password requirements — password rules, MFA, no sharing credentials
- Email and communication — professional use, phishing awareness, personal email limits
- Internet and social media — what's allowed during work, representing the company online
- Software and downloads — what can be installed, approval process
- Personal devices (BYOD) — can personal devices access company data, under what rules
- Data handling — how to treat company and customer data
- Remote work — VPN requirements, public WiFi, physical security at home
- Monitoring and privacy — what the company monitors, employee privacy expectations
- Reporting violations — how to report concerns, protection against retaliation
- Consequences — what happens when policy is violated
### AUP template
Here's a ready-to-use template. Customize the bracketed sections for your company:
# [COMPANY NAME] Acceptable Use Policy
**Effective date:** [DATE]
**Last updated:** [DATE]
**Owner:** [SECURITY CHAMPION NAME]
## Purpose
This policy defines acceptable use of [COMPANY NAME] technology resources.
Following these guidelines protects you, your colleagues, and our customers.
## Scope
This policy applies to:
- All employees, contractors, and temporary workers
- All company-owned devices and accounts
- Personal devices when accessing company data
- All use of company network and cloud services
## Your responsibilities
### Accounts and passwords
- Use unique passwords of at least [12/14/16] characters for all work accounts
- Enable MFA on all accounts that support it (mandatory for [list critical services])
- Never share your credentials with anyone — including IT or your manager
- Lock your computer when stepping away (Windows+L or Cmd+Ctrl+Q)
- Report suspected account compromise immediately to [CONTACT]
### Email and communication
- Use company email for work communication only
- Don't open unexpected attachments — verify with sender first
- Report suspicious emails to [SECURITY EMAIL/CHANNEL]
- Don't use company email to sign up for personal services
- Assume all email could become public in a legal dispute
### Internet use
Personal internet use during breaks is acceptable within reason. Don't:
- Access illegal content
- Download pirated software or media
- Use excessive bandwidth (streaming during work hours)
- Access content that would embarrass the company if discovered
### Software installation
- Approved software: [LIST OR LINK TO APPROVED SOFTWARE LIST]
- To request new software: [PROCESS]
- Never install software from untrusted sources
- Browser extensions must be from official stores only
- Developers: Package managers (npm, pip, brew) are allowed for development
### Personal devices (BYOD)
If you access company data on personal devices:
- Device must have screen lock enabled
- Install security updates within 1 week of release
- Company reserves right to remote wipe company data if device is lost
- Don't store customer data on personal devices
- Use [APPROVED MDM] if accessing email on mobile
### Remote work
- Use company VPN when accessing internal resources
- Don't work on sensitive data in public places (coffee shops, airports)
- Secure your home WiFi with WPA3 or WPA2 and a strong password
- Don't let family members use work devices
### Data handling
- Treat all customer data as confidential
- Don't share company data through personal email or messaging apps
- Use approved file sharing services: [LIST]
- Don't copy company data to personal cloud storage
- See Data Classification Policy for details
## Company monitoring
[COMPANY NAME] monitors company resources to protect security. This includes:
- Email (content and metadata)
- Web browsing history
- File access logs
- Login times and locations
We don't monitor personal devices except for company apps installed on them.
## Reporting concerns
Report policy violations or security concerns to [CONTACT]. Reports can be
anonymous via [MECHANISM]. We don't retaliate against good-faith reports.
## Consequences
| Violation type | First offense | Repeat offense |
|---------------|---------------|----------------|
| Minor (e.g., unapproved software) | Warning + training | Written warning |
| Moderate (e.g., sharing credentials) | Written warning | Suspension possible |
| Severe (e.g., data breach, malware) | Termination possible | Termination |
Severity is determined by [ROLE, e.g., HR in consultation with Security Champion].
## Questions
Contact [SECURITY CHAMPION NAME] at [EMAIL] with any questions about this policy.
---
**Acknowledgment**
I have read and understand the [COMPANY NAME] Acceptable Use Policy.
Name: ____________________
Signature: ________________
Date: ____________________
### Common AUP mistakes
| Mistake | Why it's a problem | Fix |
|---|---|---|
| Copying enterprise policies | Too complex, irrelevant to your context | Write from scratch, keep it short |
| No enforcement | Policy becomes suggestion | Define clear consequences, follow through |
| Outdated technology | References fax machines, ignores cloud | Update annually, reflect actual tools |
| Too restrictive | People work around the rules | Balance security with productivity |
| No exceptions process | People ignore policy when it blocks work | Document how to request exceptions |
| Legal jargon | Nobody reads it | Write in plain language |
## Incident Response Plan (IRP)
When something goes wrong — ransomware, data breach, compromised account — you need a plan. The IRP tells everyone exactly what to do. It's not strategy; it's a checklist for a crisis.
### Why you need an IRP before an incident
During an incident, you'll be stressed, sleep-deprived, and making fast decisions. That's the worst time to figure out:
- Who should be doing what?
- Who do we call externally?
- What's the communication to customers?
- Are we preserving evidence correctly?
Write the IRP when you're calm. Follow it when you're panicking.
### IRP structure
- Incident classification — what counts as an incident, severity levels
- Response team and roles — who does what during an incident
- Detection and reporting — how do we find out about incidents
- Containment — stop the bleeding, isolate affected systems
- Investigation — understand what happened
- Eradication and recovery — remove the threat, restore normal operations
- Communication — internal, customer, regulatory, public
- Post-incident review — learn and improve
- Contact list — everyone you might need to call
### IRP template
# [COMPANY NAME] Incident Response Plan
**Version:** 1.0
**Effective date:** [DATE]
**Owner:** [SECURITY CHAMPION NAME]
**Review frequency:** Annually or after any major incident
---
## 1. What is an incident?
A security incident is any event that:
- Compromises confidentiality, integrity, or availability of company data
- Violates security policies
- Could harm customers, employees, or the company
### Severity levels
| Level | Description | Examples | Response time |
|-------|-------------|----------|---------------|
| **Critical** | Active attack, major data breach | Ransomware, customer DB exposed | Immediate (within 15 min) |
| **High** | Confirmed compromise, limited scope | Single account compromised, malware on one system | Within 1 hour |
| **Medium** | Potential compromise, needs investigation | Suspicious login, phishing clicked | Within 4 hours |
| **Low** | Policy violation, no confirmed impact | Unauthorized software, failed attack | Within 24 hours |
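If you page or alert programmatically, the response-time targets in the table can be encoded once and reused. A minimal sketch (the names and SLA values simply mirror the table above; adapt to your tooling):

```python
from datetime import datetime, timedelta

# First-response targets from the severity table (illustrative; tune to your team)
RESPONSE_SLA = {
    "critical": timedelta(minutes=15),
    "high": timedelta(hours=1),
    "medium": timedelta(hours=4),
    "low": timedelta(hours=24),
}

def respond_by(severity: str, reported_at: datetime) -> datetime:
    """Latest acceptable time for the first response to an incident."""
    return reported_at + RESPONSE_SLA[severity.lower()]

print(respond_by("High", datetime(2024, 5, 1, 14, 32)))  # 2024-05-01 15:32:00
```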
---
## 2. Response team
### Primary responders
| Role | Person | Phone | Backup |
|------|--------|-------|--------|
| **Incident Lead** | [NAME] | [PHONE] | [BACKUP NAME] |
| **Technical Lead** | [NAME] | [PHONE] | [BACKUP NAME] |
| **Communications** | [NAME] | [PHONE] | [BACKUP NAME] |
| **Executive Sponsor** | [NAME] | [PHONE] | [BACKUP NAME] |
### Role responsibilities
**Incident Lead (usually Security Champion):**
- Coordinates response activities
- Makes containment decisions
- Maintains incident timeline
- Ensures documentation
- Leads post-incident review
**Technical Lead (usually senior developer or DevOps):**
- Performs technical investigation
- Implements containment measures
- Leads eradication and recovery
- Preserves evidence
**Communications (usually CEO or Head of Customer Success):**
- Drafts customer communications
- Handles media inquiries
- Coordinates with legal
- Internal employee updates
**Executive Sponsor (CEO or CTO):**
- Authorizes major decisions (system shutdown, customer notification)
- External stakeholder communication
- Resource allocation
---
## 3. Detection and reporting
### How we detect incidents
- Employee reports suspicious activity
- Automated alerts (monitoring, SIEM, security tools)
- Customer complaints
- Third-party notification (vendor, researcher)
- Law enforcement contact
### Reporting an incident
**If you suspect an incident:**
1. Don't try to fix it yourself
2. Report immediately via:
- Slack: #security-incidents (preferred for speed)
- Email: security@[company].com
- Phone: [SECURITY CHAMPION PHONE] (after hours)
3. Include:
- What you observed
- When it happened
- Systems/accounts involved
- Any actions you've already taken
---
## 4. Initial response (first 30 minutes)
### Incident Lead actions
1. ☐ Acknowledge report, assign severity level
2. ☐ Create incident channel: #incident-[DATE]-[BRIEF-NAME]
3. ☐ Alert response team members based on severity
4. ☐ Start incident log (use template below)
5. ☐ Assess: Is containment needed immediately?
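The channel naming convention in step 2 is worth automating wherever you open incidents. A hypothetical helper (the function name and slug rules are ours, not any Slack API):

```python
import re
from datetime import date

def incident_channel(brief_name: str, on: date) -> str:
    """Build a #incident-[DATE]-[BRIEF-NAME] channel name."""
    slug = re.sub(r"[^a-z0-9]+", "-", brief_name.lower()).strip("-")
    return f"#incident-{on.isoformat()}-{slug}"

print(incident_channel("Phishing: CEO spoof", date(2024, 5, 1)))
# #incident-2024-05-01-phishing-ceo-spoof
```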
### Incident log template
**Incident Log: [INCIDENT NAME]**

- **Reported:** [DATE TIME]
- **Reported by:** [NAME]
- **Severity:** [LEVEL]
- **Status:** [INVESTIGATING / CONTAINED / RESOLVED]

**Timeline**
| Time | Action | Person | Notes |
|---|---|---|---|
| 14:32 | Incident reported | Jane | Suspicious login from Russia |
| 14:35 | Response team alerted | Security Champion | Via Slack |
| 14:42 | Account suspended | DevOps | Revoked all sessions |
| ... | ... | ... | ... |
**Affected systems**
- [LIST]

**Evidence preserved**
- [LIST]

**Key decisions**
- [DATE TIME] Decided to [DECISION] because [REASON]. Decided by [WHO].
---
## 5. Containment
**Goal:** Stop the incident from getting worse. Speed matters more than perfection.
### Containment options (choose based on situation)
| Action | When to use | How |
|--------|-------------|-----|
| **Disable account** | Compromised credentials | Admin console → Suspend user |
| **Revoke sessions** | Active unauthorized access | Force logout all sessions |
| **Isolate system** | Malware, active attacker | Disconnect from network |
| **Block IP/domain** | Known attacker infrastructure | Firewall/DNS block |
| **Disable service** | Vulnerable application | Take offline temporarily |
| **Rotate credentials** | Exposed secrets | Generate new, update everywhere |
| **Revoke API keys** | Key compromised | Regenerate in service console |
### Evidence preservation
Before destroying evidence (wiping systems, deleting accounts):
1. Take screenshots of relevant logs and screens
2. Export audit logs from affected services
3. Copy access logs from web servers
4. Note IP addresses, timestamps, user agents
5. For malware: image the disk before wiping
**Store evidence in:** [SECURE LOCATION, e.g., locked folder, separate account]
---
## 6. Investigation
### Questions to answer
- What happened? (Attack vector, timeline)
- How did they get in? (Credentials, vulnerability, social engineering)
- What did they access? (Data, systems)
- What did they take or change?
- Are they still in? (Persistence mechanisms)
- Who else might be affected?
### Investigation sources
| Source | What to check | How to access |
|--------|--------------|---------------|
| **Cloud audit logs** | Login attempts, API calls | AWS CloudTrail, GCP Audit Logs, Azure Activity |
| **Email logs** | Phishing, data exfiltration | Google Admin, Microsoft 365 Security |
| **Application logs** | Unauthorized actions | Your logging system |
| **Endpoint logs** | Malware, lateral movement | EDR if you have it, Windows Event Log |
| **Network logs** | Connections, data transfer | Firewall, VPN logs |
| **Git history** | Code tampering | git log, GitHub audit log |
### When to call for help
Call external help (incident response firm) if:
- You're in over your head technically
- Significant customer data is exposed
- Law enforcement may need to be involved
- Insurance requires professional investigation
- Attack is ongoing and you can't contain it
**Incident response contacts:**
- [IR FIRM NAME]: [PHONE] — [NOTES ON ENGAGEMENT]
- Cyber insurance hotline: [PHONE] — Policy #[NUMBER]
---
## 7. Eradication and recovery
### Eradication checklist
- ☐ Remove attacker access (all credentials rotated, backdoors removed)
- ☐ Patch vulnerability that allowed entry
- ☐ Remove malware from all affected systems
- ☐ Verify no persistence mechanisms remain
- ☐ Scan other systems for indicators of compromise
### Recovery steps
1. ☐ Restore systems from known-good backups if needed
2. ☐ Verify integrity of restored data
3. ☐ Re-enable disabled services gradually
4. ☐ Monitor closely for re-infection
5. ☐ Confirm normal operations
### Recovery validation
Before declaring "all clear":
- Run security scans on affected systems
- Review logs for 24-48 hours post-recovery
- Verify no unauthorized changes to critical files
- Test affected functionality
---
## 8. Communication
### Internal communication
| Audience | When | Who communicates | Channel |
|----------|------|-----------------|---------|
| Response team | Immediately | Incident Lead | Incident Slack channel |
| Leadership | Critical/High: immediately; others: daily | Incident Lead | Email/call |
| All employees | When contained or if they need to act | Communications Lead | All-hands email/Slack |
### Customer communication
**Decision matrix:**
| Situation | Notification required? | Timeline |
|-----------|----------------------|----------|
| Customer data accessed | Yes | Within [72 hours for GDPR, varies by law] |
| Service outage only | Consider, especially if prolonged | As appropriate |
| Internal incident, no customer impact | No | N/A |
**Customer notification template:**
Subject: Security Notification from [COMPANY]
Dear [CUSTOMER],
We're writing to inform you about a security incident that may affect your data.
What happened: [Brief, factual description]
What data was involved: [Specific data types, be precise]
What we're doing: [Actions taken and ongoing]
What you can do: [Specific recommendations: change passwords, monitor accounts, etc.]
How to contact us: [Dedicated email/phone for incident inquiries]
We take the security of your data seriously and apologize for any concern this causes. We'll provide updates as we learn more.
[SIGNATURE]
### Regulatory notification
**Check requirements for:**
- GDPR (72 hours to supervisory authority)
- State breach notification laws (varies by US state)
- Industry regulations (HIPAA, PCI DSS)
- Contractual obligations to customers
**Keep legal counsel informed** for anything involving customer data.
---
## 9. Post-incident review
### Blameless postmortem
Within 1 week of resolution, conduct a review:
**Attendees:** All responders, relevant stakeholders
**Agenda:**
1. Timeline review — what happened, when?
2. What went well?
3. What could have gone better?
4. Action items to prevent recurrence
5. Action items to improve response
### Postmortem template
```markdown
## Incident Postmortem: [INCIDENT NAME]
**Date of incident:** [DATE]
**Date of review:** [DATE]
**Severity:** [LEVEL]
**Duration:** [TIME FROM DETECTION TO RESOLUTION]
### Summary
[2-3 sentence description]
### Timeline
[From incident log]
### Root cause
[What allowed this to happen?]
### Impact
- Systems affected: [LIST]
- Data affected: [TYPE AND VOLUME]
- Customer impact: [DESCRIPTION]
- Business impact: [COST, DOWNTIME, REPUTATION]
### What went well
- [EXAMPLE]
- [EXAMPLE]
### What could improve
- [EXAMPLE]
- [EXAMPLE]
### Action items
| Action | Owner | Due date | Status |
|--------|-------|----------|--------|
| [ACTION] | [NAME] | [DATE] | ☐ |
| [ACTION] | [NAME] | [DATE] | ☐ |
### Lessons learned
[Key takeaways for the team]
```
## 10. Contact list

### Internal

| Role | Name | Phone | Email |
|---|---|---|---|
| Security Champion | [NAME] | [PHONE] | [EMAIL] |
| CTO | [NAME] | [PHONE] | [EMAIL] |
| CEO | [NAME] | [PHONE] | [EMAIL] |
| Legal Counsel | [NAME] | [PHONE] | [EMAIL] |
| HR | [NAME] | [PHONE] | [EMAIL] |
### External
| Service | Contact | Phone/URL | Account info |
|---|---|---|---|
| Cyber Insurance | [CARRIER] | [PHONE] | Policy #[NUMBER] |
| IR Firm | [FIRM] | [PHONE] | [RETAINER DETAILS] |
| Legal (external) | [FIRM] | [PHONE] | |
| FBI | Local field office | [PHONE] | For major crimes |
| AWS Support | [SUPPORT URL] | | [ACCOUNT ID] |
| Google Workspace | [SUPPORT URL] | | |
| [OTHER CRITICAL VENDORS] | | | |
## Appendix: Quick reference cards
### If you suspect your account is compromised
- Change your password immediately
- Enable MFA if not already enabled
- Report to #security-incidents
- Check recent account activity for unauthorized access
- Don't delete anything — we may need it for investigation
### If you clicked a phishing link
- Disconnect from network (turn off WiFi/unplug ethernet)
- Report to #security-incidents immediately
- Don't log into anything else
- Wait for instructions from response team
### If you see ransomware
- DON'T turn off the computer (may destroy evidence)
- Disconnect from network immediately
- Take photo of ransom message
- Report to #security-incidents
- Alert people nearby to disconnect too
## Incident playbooks for common scenarios
The IRP gives you the general process. Playbooks give you specific steps for specific incident types. Start with these four — they cover 80% of incidents small companies face.
### Playbook: Compromised employee account
**Severity:** Usually High (could be Critical if admin account)
**Indicators:**
- Impossible travel (login from two countries within hours)
- Login from unusual location or device
- Unusual activity in audit logs
- Employee reports not recognizing their own actions
- Password reset they didn't request
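Of these, impossible travel is simple enough to check yourself if your identity provider doesn't flag it. A rough sketch, assuming you can pull login events with coordinates (the 900 km/h threshold, roughly "faster than an airliner", and the event shape are our assumptions):

```python
from math import radians, sin, cos, asin, sqrt
from datetime import datetime

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def impossible_travel(a, b, max_speed_kmh=900):
    """Flag a pair of logins that would require implausibly fast travel."""
    hours = abs((b["time"] - a["time"]).total_seconds()) / 3600
    km = haversine_km(a["lat"], a["lon"], b["lat"], b["lon"])
    return km > 0 if hours == 0 else km / hours > max_speed_kmh

# Hypothetical events: New York login, then Moscow two hours later
ny = {"time": datetime(2024, 5, 1, 9, 0), "lat": 40.7, "lon": -74.0}
msk = {"time": datetime(2024, 5, 1, 11, 0), "lat": 55.8, "lon": 37.6}
print(impossible_travel(ny, msk))  # True (roughly 7,500 km in 2 hours)
```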
**Immediate actions (first 15 minutes):**
| Step | Action | Who |
|------|--------|-----|
| 1 | Suspend the account (don't delete) | IT/Security Champion |
| 2 | Revoke all active sessions | IT |
| 3 | Disable any API tokens/keys for that user | DevOps |
| 4 | Contact the employee (phone, not company email) | Incident Lead |
| 5 | Check if MFA was enabled | Security Champion |
**Investigation checklist:**
- [ ] When did the compromise happen? (First suspicious login)
- [ ] How did they get in? (Phishing? Credential stuffing? Malware?)
- [ ] What did they access? (Email, files, code, admin panels)
- [ ] What did they do? (Read, download, modify, delete)
- [ ] Did they access other accounts from this one?
- [ ] Are other accounts compromised? (Same password reuse)
**Recovery:**
1. Reset password with strong random password
2. Verify MFA is enabled (hardware key if available)
3. Review and revoke unnecessary permissions
4. Clear all browser sessions
5. Scan employee's device for malware
6. Monitor account closely for 30 days
**Communication:**
- Employee: Explain what happened, no blame, offer security training
- Team: General reminder about phishing if relevant
- Customers: Only if their data was accessed
---
### Playbook: Ransomware attack
**Severity:** Critical
**Indicators:**
- Files encrypted with unusual extensions
- Ransom note on desktop or in folders
- Systems unusually slow or unresponsive
- Antivirus alerts about encryption
- Multiple employees reporting issues simultaneously
**Immediate actions (first 5 minutes):**
| Step | Action | Who | Notes |
|------|--------|-----|-------|
| 1 | DON'T turn off affected computers | Everyone | Preserves memory evidence |
| 2 | Disconnect from network | Everyone affected | Unplug cable, disable WiFi |
| 3 | Alert all employees to disconnect | Incident Lead | Slack/SMS/phone tree |
| 4 | Call cyber insurance hotline | Executive Sponsor | They'll provide IR firm |
| 5 | Document everything (photos, screenshots) | Technical Lead | Timeline is crucial |
**Do NOT:**
- Pay the ransom (without legal/insurance consultation)
- Communicate with attackers from company email
- Delete any files or logs
- Restore from backup until you know the infection vector
- Connect backup drives to infected network
**Investigation priorities:**
1. Identify patient zero (first infected system)
2. Determine infection vector (phishing, RDP, vulnerability)
3. Scope the damage (what's encrypted, what's exfiltrated)
4. Check if backups are intact and clean
5. Identify what data might have been stolen (double extortion)
**Recovery (only after investigation):**
1. Rebuild systems from known-good images/backups
2. Patch the vulnerability that allowed entry
3. Reset ALL credentials (assume everything compromised)
4. Restore data from clean backups
5. Monitor heavily for re-infection
**External contacts:**
- FBI: ic3.gov for reporting
- CISA: cisa.gov/ransomware
- Your cyber insurance IR hotline
---
### Playbook: Customer data breach
**Severity:** Critical (regulatory and legal implications)
**Indicators:**
- Unauthorized database access in logs
- Customer data found externally (researcher, dark web, paste site)
- Unusual data exports or API activity
- Third-party notification
**Immediate actions:**
| Step | Action | Who | Timeline |
|------|--------|-----|----------|
| 1 | Confirm the breach is real | Technical Lead | First hour |
| 2 | Stop ongoing access | IT | Immediately |
| 3 | Alert legal counsel | Executive Sponsor | Within 2 hours |
| 4 | Preserve all evidence | Technical Lead | Ongoing |
| 5 | Begin regulatory clock (GDPR: 72 hours) | Legal | Document discovery time |
**Scoping questions:**
- What data types were exposed? (PII, financial, health)
- How many records/individuals affected?
- Which jurisdictions? (EU residents = GDPR)
- How long was data accessible?
- Was data exfiltrated or just accessed?
- Is the vulnerability closed?
**Notification requirements:**
| Regulation | Timeline | Who to notify |
|------------|----------|---------------|
| **GDPR** | 72 hours | Supervisory authority, then individuals |
| **CCPA** | "Expedient" | California residents |
| **HIPAA** | 60 days | HHS, individuals, media if >500 people |
| **PCI DSS** | Immediately | Card brands, acquiring bank |
| **State laws** | Varies | Check each state where customers reside |
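For the GDPR row, the clock starts when you become aware of the breach, so record that moment precisely. A trivial helper (a sketch; the function name is ours) makes the deadline explicit in the incident log:

```python
from datetime import datetime, timedelta, timezone

GDPR_WINDOW = timedelta(hours=72)  # Art. 33: notify supervisory authority within 72 hours

def gdpr_notify_by(discovered_at: datetime) -> datetime:
    """Deadline to notify the supervisory authority of a personal-data breach."""
    return discovered_at + GDPR_WINDOW

discovered = datetime(2024, 5, 1, 9, 30, tzinfo=timezone.utc)
print(gdpr_notify_by(discovered))  # 2024-05-04 09:30:00+00:00
```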
**Customer notification elements:**
1. What happened (factual, brief)
2. What data was involved (specific)
3. What you're doing about it
4. What they should do (change passwords, monitor credit)
5. How to contact you with questions
6. Apology
**Legal protection:**
- Document everything with timestamps
- Don't speculate in written communications
- Get legal review before external statements
- Preserve evidence for potential litigation
---
### Playbook: Leaked secrets in Git
**Severity:** High (could be Critical if production credentials)
**Indicators:**
- Alert from GitHub Secret Scanning
- Alert from git-secrets, truffleHog, or similar
- Notification from cloud provider (AWS, GCP auto-detect some keys)
- Security researcher report
**Immediate actions (within 15 minutes):**
| Step | Action | Who |
|------|--------|-----|
| 1 | Determine what was exposed | Security Champion |
| 2 | Rotate the credential IMMEDIATELY | DevOps/Developer |
| 3 | Check if credential was used maliciously | DevOps |
| 4 | Remove from Git history if possible | Developer |
| 5 | Check if repo is public | Security Champion |
**Credential rotation by type:**
| Secret type | Where to rotate | Additional steps |
|-------------|-----------------|------------------|
| **AWS keys** | IAM Console | Check CloudTrail for unauthorized use |
| **GCP service account** | GCP Console | Check audit logs |
| **Database password** | Database admin | Check access logs |
| **API key (third-party)** | Provider's console | Check usage logs |
| **JWT secret** | Application config | All existing tokens become invalid |
| **SSH key** | Remove from authorized_keys | Generate new key pair |
**Git history cleanup:**
```bash
# If the commit is only local: undo it, remove the secret, recommit
git reset --soft HEAD~1

# If already pushed (and the repo is private):
# use BFG Repo-Cleaner, then expire reflogs and force-push
bfg --delete-files 'filename-with-secret'
git reflog expire --expire=now --all && git gc --prune=now --aggressive
git push --force

# If the repo is public: assume the secret is compromised forever.
# Focus on rotation, not history cleanup.
```
**Prevention:**
- Install pre-commit hooks (git-secrets, detect-secrets)
- Enable GitHub Secret Scanning
- Use environment variables or secret managers
- Never commit .env files
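The pre-commit hook idea can be sketched in a few lines of shell. This is illustrative only — the regex catches AWS-style access key IDs and private-key headers; dedicated tools like git-secrets and detect-secrets cover far more patterns:

```shell
#!/bin/sh
# .git/hooks/pre-commit — minimal secret scan (illustrative sketch).
# Patterns: AWS access key IDs and common private-key headers.
PATTERN='AKIA[0-9A-Z]{16}|-----BEGIN (RSA|OPENSSH|EC) PRIVATE KEY-----'
# Scan only the staged changes, not the whole working tree
if git diff --cached -U0 | grep -E -q "$PATTERN"; then
  echo "Possible secret in staged changes — commit blocked." >&2
  exit 1
fi
exit 0
```

Committing a file containing a string like `AKIAIOSFODNN7EXAMPLE` (AWS's documented example key) would then fail until the secret is removed from the staged diff.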
**Investigation:**
- When was the secret committed?
- When was the repo made public (if applicable)?
- Who had access during that window?
- Any unusual activity using that credential?
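The first two investigation questions can usually be answered straight from Git history. A sketch — the example key string and the `.env` file path are illustrative:

```shell
# Pin down when a leaked credential entered history (run inside the repo).
# -S (the "pickaxe") finds commits that added or removed the given string.
git log -S 'AKIAIOSFODNN7EXAMPLE' --oneline

# Full history of the offending file, following renames
git log --follow --format='%h %ad %an %s' -- .env
```

The earliest commit returned by `-S`, combined with the date the repo went public, bounds the exposure window.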
## Data Classification Policy
Data classification tells employees how to handle different types of information. Without it, people either treat everything as top secret (inefficient) or nothing as sensitive (dangerous).
### Why classify data?
- Focus protection efforts — Not all data needs the same security
- Clear handling rules — People know what they can and can't do
- Compliance support — Regulations often require classification
- Incident response — Know immediately how serious a breach is
### Simple classification scheme
For small companies, three or four levels is enough:
| Level | Description | Examples |
|---|---|---|
| Public | Can be shared with anyone | Marketing materials, public blog posts, open source code |
| Internal | For employees only | Internal wiki, org chart, general policies |
| Confidential | Business-sensitive | Financial data, strategic plans, employee records, customer lists |
| Restricted | Highest sensitivity | Customer PII, credentials, encryption keys, health/payment data |
### Data Classification Policy template
# [COMPANY NAME] Data Classification Policy
**Effective date:** [DATE]
**Owner:** [SECURITY CHAMPION]
## Purpose
This policy ensures we protect data appropriately based on its sensitivity.
Not all data is equal — we apply stronger controls to more sensitive data.
## Classification levels
### Public
**Definition:** Information that can be freely shared outside the company.
**Examples:**
- Published blog posts and marketing content
- Public website content
- Open source code repositories
- Press releases
**Handling rules:**
- No special handling required
- Can be posted publicly
- Can be emailed to anyone
---
### Internal
**Definition:** Information for internal use that isn't sensitive but shouldn't
be public.
**Examples:**
- Internal documentation and wiki
- Company org chart
- Meeting notes (non-sensitive topics)
- Internal announcements
- Non-sensitive Slack conversations
**Handling rules:**
- Share only with employees and approved contractors
- Don't post publicly
- Use company email and approved tools for sharing
- No special encryption required
---
### Confidential
**Definition:** Sensitive business information that could harm the company
if disclosed.
**Examples:**
- Financial reports and projections
- Business strategies and roadmaps
- Employee compensation data
- Customer lists and contracts
- Source code (proprietary)
- Vendor contracts and pricing
- Internal security reports
**Handling rules:**
- Share only on need-to-know basis
- Use company-approved tools only (no personal email)
- Don't store on personal devices
- Password-protect if sending externally
- Don't discuss in public places
---
### Restricted
**Definition:** Highly sensitive data requiring maximum protection. Unauthorized
disclosure could cause severe harm.
**Examples:**
- Customer personally identifiable information (PII)
- Payment card data
- Health information
- Passwords and encryption keys
- Authentication tokens and API secrets
- Security vulnerabilities (before patched)
- Legal matters and investigations
**Handling rules:**
- Access strictly limited to those who need it
- Encrypt at rest and in transit
- Never in email attachments (use secure sharing)
- Never on personal devices
- Log all access where possible
- Report any suspected exposure immediately
- Special handling for disposal (secure delete)
---
## Classifying new data
When creating or receiving new data:
1. **Consider the impact of exposure:**
- Could it harm customers? → Restricted or Confidential
- Could it harm the business? → Confidential
- Is it just internal? → Internal
- Can anyone see it? → Public
2. **When in doubt, classify higher** and ask Security Champion
3. **Mark classified documents** when practical:
- Documents: Add classification to header/footer
- Files: Include classification in filename (e.g., "Q4-financials-CONFIDENTIAL.xlsx")
- Emails: Add classification to subject when sending Confidential/Restricted
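The filename convention is easy to spot-check with a one-liner. A sketch — the `docs/` directory is illustrative; point it at your actual shared folders:

```shell
# List files whose names carry a classification marker,
# e.g. before sharing a folder externally
find docs -type f \( -name '*CONFIDENTIAL*' -o -name '*RESTRICTED*' \)
```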
---
## Handling by channel
| Channel | Public | Internal | Confidential | Restricted |
|---------|--------|----------|--------------|------------|
| **Company email** | ✓ | ✓ | ✓ (internal only) | ✗ (use secure share) |
| **Personal email** | ✓ | ✗ | ✗ | ✗ |
| **Company Slack** | ✓ | ✓ | ✓ (private channels) | ✗ |
| **Google Drive (company)** | ✓ | ✓ | ✓ (restricted sharing) | ✓ (encrypted, limited) |
| **Personal cloud storage** | ✓ | ✗ | ✗ | ✗ |
| **Printed documents** | ✓ | ✓ | Secure disposal | Secure disposal, limited printing |
| **External sharing** | ✓ | ✗ | Approved recipients, encrypted | Case-by-case approval |
---
## Data retention and disposal
| Classification | Retention | Disposal method |
|---------------|-----------|-----------------|
| Public | No limit | Normal deletion |
| Internal | [X years] or as needed | Normal deletion |
| Confidential | [X years] per legal/business need | Secure delete, shred paper |
| Restricted | Minimum necessary, per regulation | Secure delete, verified shred |
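For "secure delete" on a single file, GNU coreutils `shred` is the classic tool. A hedged sketch — the filename is illustrative, and note the caveat in the comments:

```shell
# Overwrite, then remove, a Restricted file (GNU coreutils shred).
# Caveat: on SSDs and journaling/copy-on-write filesystems, overwrites
# are unreliable — there, prefer full-disk encryption plus destroying
# the key (crypto-erase) over file-level shredding.
shred -u -n 3 customer-export-RESTRICTED.csv
```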
---
## Questions
Contact [SECURITY CHAMPION] at [EMAIL] with questions about classification
or handling specific data.
## Where to store and manage policies
Policies are useless if nobody can find them. Choose a storage approach that fits your company culture.
### Storage options
| Approach | Pros | Cons | Best for |
|---|---|---|---|
| Git repository | Version control, PR review for changes, developer-friendly, audit trail | Non-technical staff can't easily edit | Engineering-heavy teams |
| Notion | Easy editing, good search, looks professional | Version history limited, export is clunky | Most small companies |
| Confluence | Integrates with Jira, good permissions | Can get messy, slower | Companies already using Atlassian |
| Google Docs | Everyone knows it, easy sharing | Gets disorganized fast, poor search | Quick start, very small teams |
| SharePoint | Microsoft ecosystem, good permissions | Complex, often hated | Microsoft shops |
| Dedicated GRC tool (Vanta, Drata, Secureframe) | Compliance-ready, templates, evidence collection | $$$, overkill for basics | SOC 2/ISO 27001 prep |
### Git-based policy management
For tech companies, storing policies in Git has real advantages:
```
policies/
├── README.md                  # Index of all policies
├── acceptable-use-policy.md
├── incident-response-plan.md
├── data-classification.md
├── playbooks/
│   ├── ransomware.md
│   ├── compromised-account.md
│   └── data-breach.md
└── templates/
    ├── incident-log.md
    └── postmortem.md
```
**Benefits:**
- Changes require PR review (audit trail)
- Diffs show exactly what changed
- Employees already know Git
- Easy to link from code repositories
- Render nicely on GitHub/GitLab
**Tooling:**
- Use Markdown for policies
- Render with MkDocs, Docusaurus, or GitBook
- Set up CODEOWNERS to require Security Champion approval
- Link from employee handbook and onboarding docs
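A CODEOWNERS entry for that setup might look like this (the team handle is a placeholder for your own):

```
# .github/CODEOWNERS — every change under policies/ requires
# Security Champion review before merge
policies/ @your-org/security-champion
```

Combined with a branch protection rule requiring code-owner review, this makes the audit trail automatic: no policy changes without an approved PR.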
### Making policies findable
Wherever you store policies:
- Single source of truth — One location, not scattered across drives
- Link from onboarding — Every new hire should know where policies live
- Bookmark in Slack — Pin the policy folder in relevant channels
- Search works — Test that people can find policies by searching keywords
- Keep URL stable — Don't move policies around; bookmarks break
### Policy versioning
Track what changed and when:
## Version History
| Version | Date | Changes | Author |
|---------|------|---------|--------|
| 1.0 | 2024-01-15 | Initial release | [Name] |
| 1.1 | 2024-03-22 | Added remote work section | [Name] |
| 2.0 | 2024-06-01 | Major revision for SOC 2 | [Name] |
If using Git, your commit history is your version history.
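If your policies live in Git, the version history table above comes for free. A sketch, assuming the repo layout suggested earlier:

```shell
# Full audit trail of one policy, following renames:
# hash, date, author, and commit message per change
git log --follow --date=short --format='%h %ad %an %s' -- acceptable-use-policy.md
```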
## Adapting policies to your company
Templates are starting points, not final products. Here's how to customize:
### Technology alignment
Replace generic tool references with your actual stack:
| Generic term | Your company's tool |
|---|---|
| "Company email" | Gmail, Outlook 365, etc. |
| "Approved file sharing" | Google Drive, Dropbox Business, etc. |
| "Company chat" | Slack, Teams, etc. |
| "VPN" | Your specific VPN solution |
| "Password manager" | Passwork, or your chosen solution |
### Culture fit
Adjust tone and strictness:
- Startup with casual culture: More conversational, fewer formal sections
- B2B with enterprise customers: More formal, detailed compliance sections
- Remote-first: More emphasis on remote work security
- Developer-heavy: Technical audience, can use technical terms
### Regulatory requirements
Add sections based on your compliance needs:
- GDPR: Data subject rights, breach notification timelines
- HIPAA: PHI handling, business associate requirements
- PCI DSS: Cardholder data protection
- SOC 2: Control documentation
### Common customizations
| Situation | Customization |
|---|---|
| Fully remote company | Expand remote work section, home security requirements |
| BYOD allowed | Detailed personal device requirements |
| Handling payment data | PCI DSS specific requirements |
| International team | Multi-jurisdiction considerations |
| Highly regulated industry | Specific compliance language |
## Rolling out policies
Writing policies is half the work. Getting people to follow them is the other half.
### Rollout checklist
- Get leadership sign-off — Policies mean nothing without executive backing
- Announce with context — Explain why, not just what
- Make policies findable — Central location, not buried in email
- Require acknowledgment — Signature or checkbox confirmation
- Train on key points — Don't expect people to read every word
- Answer questions — Office hours or Q&A channel
- Enforce consistently — how you handle the first violation sets the precedent
### Announcement template
Subject: New Security Policies — Action Required by [DATE]
Team,
We're rolling out three security policies that define how we protect company
and customer data. These aren't bureaucracy for bureaucracy's sake — they're
our answer to "what should I do when..."
**The policies:**
1. Acceptable Use Policy — What's allowed with work computers and data
2. Incident Response Plan — What to do when something goes wrong
3. Data Classification — How to handle different types of information
**What you need to do:**
1. Read the policies: [LINK TO POLICIES]
2. Sign the acknowledgment: [LINK TO FORM]
3. Complete by: [DATE]
**Key takeaways** (if you read nothing else):
- Enable MFA on everything
- Report suspicious stuff to #security-incidents
- Don't put Restricted data (customer PII, passwords) in email
- Lock your computer when you walk away
I'll hold a Q&A session on [DATE] at [TIME] — bring your questions.
Or ask anytime in #security-questions.
[YOUR NAME]
## Keeping policies alive
Policies become shelfware unless you actively maintain them:
- Annual review — Update for new tools, changed threats, lessons learned
- Incident triggers — Update IRP after every significant incident
- New hire onboarding — Include policy review in first week
- Refresher training — Annual reminder of key points
- Exception tracking — Review approved exceptions for patterns
## Common mistakes
- Copying without customizing — Policies that reference tools you don't use
- Too long — Nobody reads a 30-page AUP
- Too vague — "Use good judgment" isn't actionable
- No enforcement — Rules without consequences become suggestions
- No exception process — People ignore rules they can't follow
- Legal-speak — Employees tune out dense legalese
- Set and forget — Policies get stale fast
- No training — Assuming people will read and understand on their own
- Starting with everything — Better to have 3 solid policies than 10 drafts
- Not involving stakeholders — Policies that don't reflect how work actually happens
## Workshop: create your policies
### Part 1: Acceptable Use Policy
**Time:** 2-3 hours
1. **Gather input:**
   - Review your current tools and systems
   - Talk to IT/DevOps about current security practices
   - Ask HR about existing employee guidelines
   - Note any compliance requirements
2. **Draft the policy:**
   - Use the template above as a starting point
   - Replace all [BRACKETED] placeholders
   - Add company-specific rules
   - Remove sections that don't apply
3. **Review:**
   - Have someone outside the team read for clarity
   - Check with legal if you have counsel
   - Get leadership approval
### Part 2: Incident Response Plan
**Time:** 3-4 hours
1. **Define your team:**
   - Who is Incident Lead? Backup?
   - Who handles technical investigation?
   - Who handles communication?
   - Get phone numbers for after-hours contact
2. **Customize the plan:**
   - Update contact list with your people
   - Adjust severity levels to your context
   - Add your specific tools and access methods
   - Include your external contacts (insurance, legal)
3. **Test the basics:**
   - Does everyone know how to report an incident?
   - Can you reach response team members after hours?
   - Do you have access to necessary systems?
### Part 3: Data Classification
**Time:** 1-2 hours
1. **Inventory your data:**
   - Customer data: what types, where stored?
   - Business data: financial, strategic, HR?
   - Technical data: credentials, configs?
2. **Classify your major data types:**
   - Assign each to Public/Internal/Confidential/Restricted
   - Note where each type is stored
3. **Define handling rules:**
   - What tools can be used for each level?
   - Who can access each level?
   - How long to retain?
### Artifacts to produce
After this workshop, you should have:
- Completed Acceptable Use Policy (2-3 pages)
- Completed Incident Response Plan (3-5 pages)
- Completed Data Classification Policy (1-2 pages)
- Acknowledgment form for employees
- Rollout announcement draft
- Policy location (wiki page, shared drive folder)
## Self-check questions
- What are the three essential policies every company should have?
- Why should policies be short (under 5 pages)?
- What's the difference between Confidential and Restricted data?
- Who should be on an incident response team for a small company?
- When should you notify customers about a security incident?
- Why is a blameless postmortem important?
- How often should policies be reviewed and updated?
- What should you do if an employee refuses to follow the AUP?
- Why include "the why" in policies?
- What's the first step when you suspect an incident?
## How to explain this to leadership
**The pitch:** "We need three core documents: what employees can do with company resources, what to do when something goes wrong, and how to handle different types of data. These protect us legally, help us respond faster to incidents, and are often required by customers and insurers."
**The ROI:**
- Faster incident response (hours vs. days of confusion)
- Legal protection if employee misbehaves
- Required for most cyber insurance policies
- Asked for by B2B customers during security reviews
- Foundation for any future compliance (SOC 2, etc.)
**The ask:** "I need a few hours per policy to draft them, a leadership review, and a company announcement requiring acknowledgment. Total company time investment: maybe 30 minutes per employee to read and sign."
**The risk of not doing it:** "Right now, if we have a breach, nobody knows exactly what to do. We'd be making it up as we go at 2 AM. And if an employee does something stupid with company data, we have no documented policy showing they knew better."
## Conclusion
Policies nobody reads protect nobody. A one-page acceptable use policy that everyone has signed beats a 40-page compliance document that lives on a shared drive.
Write for the people who will follow the policy, not the auditor who will review it.
## What's next
Next: communication and security evangelism — how to talk about security in ways that get people to actually change their behavior.