The state of secrets sprawl in 2026: Key findings from GitGuardian's report

In 2025, GitGuardian detected 28.65 million new hardcoded secrets in public GitHub commits. That is a 34% increase over the previous year and the largest single-year jump the company has ever recorded. That number covers only public repositories. The full picture, once internal systems, collaboration tools, and self-hosted infrastructure are included, is considerably worse.

Three themes run through the data:

  1. AI-assisted development has moved from experiment to default, accelerating credential leakage at every layer of the stack.
  2. Internal systems are far more exposed than most organizations assume: private repositories, Slack channels, and self-hosted GitLab instances all carry significant credential risk.
  3. Remediation remains the industry's critical failure: 64% of secrets confirmed as valid in 2022 were still exploitable in January 2026, four years after they first leaked.

This article unpacks each finding with the data and context IT and security teams need to make the case for change internally.


Key takeaways

  • AI is the dominant driver of credential exposure. Eight of the ten fastest-growing leaked secret types are tied to AI services. LLM infrastructure is leaking 5× faster than core model providers.
  • Internal repositories are 6× more likely to contain a hardcoded secret than public ones. "Private" is not a security control.
  • More than a quarter of all internal incidents originate outside the codebase. Slack, Jira, and Confluence account for 28% of leaks — with a higher critical severity rate than code-based findings.
  • Remediation is the industry's limiting factor. 64% of secrets confirmed as valid in 2022 were still exploitable in January 2026.
  • Validation-only prioritization misses 46% of critical secrets. Generic credentials — private keys, custom tokens, passwords — cannot be auto-validated but drive half of all high-or-critical incidents.
  • Developer workstations and CI/CD runners are an underestimated attack surface. The Shai-Hulud 2 attack found 294,842 secret occurrences across 6,943 compromised machines; 59% were CI/CD runners.
  • Third-party contractors are an uncontrolled secrets vector. GitGuardian found 1,834 critical incidents across 13 consulting firms, potentially affecting 1,203 client organizations.
  • MCP configuration files are a new and largely unmonitored leak surface. In 2025, 24,008 unique secrets were exposed in MCP-related configs on public GitHub — 8.8% confirmed valid at the time of detection.

How big is the secrets sprawl problem in 2025?

Secrets sprawl is the uncontrolled proliferation of hardcoded credentials (API keys, passwords, tokens, and certificates) across codebases, configuration files, and collaboration tools. Since 2021, leaked secrets on public GitHub have grown 152%, while the developer population grew 98%. The gap is widening every year, and 2025 produced the largest single-year volume increase on record.

Scale by the numbers

Metric 2025 figure Change
New hardcoded secrets detected on public GitHub 28.65 million +33.9% YoY
Active GitHub developers 22.8 million +33.2% YoY
Repositories with secrets 4,012,054 +39.9% YoY
Public commits 1.94 billion +42.7% YoY
Pro Bono alert emails sent 2.5 million +47% YoY
Secrets per repository ~0.32 Stable

The scale of the problem is structural. More people are writing more code, integrating more third-party services, and generating more credentials that can leak.

One metric held steady: secrets per repository. Density stayed roughly flat, which suggests GitHub's Push Protection is doing its job of catching common credentials before they go public. But density control cannot stop volume growth. When commit volume grows by more than 40% in a single year, even a stable leak rate per repository produces a record number of exposed credentials.

Secrets detected on public GitHub, 2021–2025

2021: 11.0M · 2022: 14.0M (+27%) · 2023: 17.0M (+21%) · 2024: 21.4M (+26%) · 2025: 28.65M (+34%)
Key insight: Secrets detected on public GitHub reached 28.65 million in 2025, representing a +34% year-over-year increase and signaling accelerating credential exposure risks in public repositories.
Source: GitGuardian, State of Secrets Sprawl 2026
About the data source: GitGuardian continuously scans public GitHub commits using its proprietary secrets detection engine. This report is based on analysis of all public repositories throughout 2025, supplemented by data from enterprise deployments and analysis of compromised machines during the Shai-Hulud attack.
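The year-over-year growth rates in the chart can be reproduced from the raw totals. A quick sanity check, using the figures as reported (in millions):

```python
# Secrets detected on public GitHub, in millions (GitGuardian, 2021-2025).
totals = {2021: 11.0, 2022: 14.0, 2023: 17.0, 2024: 21.4, 2025: 28.65}

def yoy_growth(series: dict) -> dict:
    """Rounded percentage growth of each year over the previous one."""
    years = sorted(series)
    return {
        year: round((series[year] / series[prev] - 1) * 100)
        for prev, year in zip(years, years[1:])
    }

print(yoy_growth(totals))  # matches the +27%, +21%, +26%, +34% in the chart
```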

How AI is fueling a new generation of leaked secrets

AI-assisted development has reshaped which secrets leak, how fast they accumulate, and where they end up. Eight of the ten fastest-growing types of leaked secrets year-over-year are tied to AI services. The AI infrastructure boom is the dominant driver of credential exposure right now.

The AI infrastructure boom

In 2025, GitGuardian detected 1,275,105 secrets belonging to AI services — an 81% increase over 2024.

Fastest-growing specific detectors (AI-related):

Service YoY growth Category
Brave Search +1,255% Retrieval API
Firecrawl +796% Retrieval API
Perplexity +657% Retrieval API
Supabase +992% Backend / data layer
Jina +334% Embeddings / search
LangChain +108% Orchestration
Weights & Biases +114% Experiment tracking
OpenRouter +4,800% (48×) Model gateway
DeepSeek +2,300% (23×) Model provider

The more significant trend is what is leaking beyond the model providers themselves. LLM infrastructure (the orchestration, retrieval, and storage layer that surrounds core models) is leaking 5× faster than the model providers. Supabase alone now ranks in the top 20 most-leaked secrets overall, with over 248,600 occurrences.

The pattern is consistent: developers building AI-powered applications connect a model to a retrieval layer, an orchestration tool, a vector database, an experiment tracker, and a monitoring service. Each integration adds a new credential. Each credential is a potential leak.

As new AI providers emerge, there is an inevitable lag before detection coverage catches up. GitHub Push Protection focuses on known patterns. Novel providers slip through. By the time a detector is built, thousands of keys may already be public.

Real incident: In April 2026, CloudSEK analyzed 10,000 Android apps and found 32 active Google API keys across 22 applications — collectively installed over 500 million times. The keys were originally embedded for public-facing services like Maps and Firebase, but Google's silent expansion of the Gemini API meant those same keys now granted access to AI endpoints. One developer reported $15,400 in unauthorized charges within hours of key exposure. Another lost $128,000 despite having security controls in place (Infosecurity Magazine, April 2026).

Claude Code and AI-assisted commits leak secrets at 2× the baseline

Anthropic's Claude Code went from 22 co-authored commits in January 2025 to 2.16 million in December. Across the full year:

  • Claude Code-assisted commits = 0.4% of everything scanned publicly
  • Claude Code-assisted commits = 0.9% of all leaks
  • Leak rate: 3.2% vs. 1.5% baseline across all public GitHub commits

Claude Code leak rate over 2025:

Period Secrets per 1,000 commits vs. human baseline
January 2025 ~13 ~1×
August 2025 (peak) 31 ~2.4×
December 2025 ~13 ~1×

Claude Code commits were also consistently larger — approximately 2× the lines of code per commit from April onward. Larger commits mean more surface area for credential exposure in a single review.

The important nuance: the developer remains in control of every commit. AI coding assistants are tools. The elevated leak rate reflects human decisions — oversight, time pressure, or deliberate choices to bypass warnings — not autonomous AI behavior.

The takeaway for security teams: treat AI-generated change sets as higher-impact review units, maintain automated scanning in the developer workflow, and keep remediation fast enough that a leaked secret does not remain valid long enough to be exploited.

24,000 secrets in MCP configuration files

What is MCP? Model Context Protocol is the standard that emerged in early 2025 for connecting large language models to external tools and data sources. When a developer wants their AI agent to query a database, search the web, or interact with a SaaS platform, MCP handles the connection — and those connections require credentials.

Key findings:

  • 24,008 unique secrets exposed in MCP-related configuration files on public GitHub in 2025
  • 2,117 confirmed valid (8.8%) at the time of detection

Top 5 valid secret types in MCP configs:

Secret type Share of valid findings
Google API Key 19%
PostgreSQL connection string 14%
Firecrawl 12%
Perplexity 11%
Brave Search 11%

Why MCP configs keep leaking: Official MCP setup guides normalize hardcoding. Popular quickstart documentation shows API keys passed as command-line arguments inside server config files, or stored inline in JSON files that get committed to version control. When official documentation treats hardcoding as a default, sprawl follows.

The Smithery.ai case: GitGuardian's research team disclosed a critical vulnerability in one of the most widely used MCP server registries. A single path traversal bug in the platform's Docker build process exposed an overprivileged token that granted arbitrary code execution across all 3,000+ hosted MCP servers — and access to the API keys and secrets of thousands of customers across hundreds of services.

MCP credential management — minimum standards:

  • Never store secrets in MCP config files. Use environment variables managed by a dedicated secrets manager, not inline values in JSON or CLI arguments.
  • Clients, not servers, should own the secrets. MCP servers should request credentials from clients at query time rather than embedding them in server-side configuration.
  • Exclude MCP configuration directories from version control via .gitignore.
  • Only connect to remote MCP servers over TLS.
  • Scan before pushing. Pre-commit scanning tools detect secrets in MCP config files before they reach version control.
  • Require manual approval before any MCP action touching production systems, databases, or deployment pipelines.
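The "scan before pushing" step can be approximated even without dedicated tooling. Below is a minimal sketch of a check for inline values in MCP-style JSON configs; the sensitive key names and the `${VAR}` reference convention are assumptions for illustration, and a real scanner such as GitGuardian's ggshield covers far more patterns.

```python
import json
import re

# Keys whose values should never appear inline in a config (assumed names).
SENSITIVE_KEYS = re.compile(r"(key|token|secret|password)", re.IGNORECASE)
# An environment-variable reference like ${BRAVE_API_KEY} is acceptable.
ENV_REFERENCE = re.compile(r"^\$\{?[A-Z_][A-Z0-9_]*\}?$")

def find_inline_secrets(config: dict, path: str = "") -> list:
    """Return JSON paths whose value looks like a hardcoded credential."""
    findings = []
    for key, value in config.items():
        here = f"{path}/{key}"
        if isinstance(value, dict):
            findings += find_inline_secrets(value, here)
        elif isinstance(value, str) and SENSITIVE_KEYS.search(key):
            if not ENV_REFERENCE.match(value):
                findings.append(here)
    return findings

config = json.loads("""{
  "mcpServers": {
    "brave-search": {"env": {"BRAVE_API_KEY": "${BRAVE_API_KEY}"}},
    "firecrawl": {"env": {"FIRECRAWL_API_KEY": "fc-0123456789abcdef"}}
  }
}""")
print(find_inline_secrets(config))  # only the hardcoded Firecrawl key is flagged
```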

Secrets sitting in code, configs, and chat messages are a breach waiting to happen. Passwork gives your team a single, secure place to store, share, and rotate credentials — so they never end up hardcoded in a repository or pasted into a Slack channel. Explore Passwork's secrets management capabilities


Internal systems are a dangerous blind spot


The most consequential finding in the 2026 report is one that receives the least press coverage: the danger is greatest where organizations feel safest. Internal repositories, collaboration tools, and self-hosted infrastructure are treated as secure by default — but the data says otherwise.

Internal repositories leak 6× more than public ones

Repository type Share containing at least one hardcoded secret
Public repositories 5.6%
Internal repositories 32.2%
Ratio 6× more likely

The reason is the "security through obscurity" antipattern. Development teams tend to be less cautious within a closed perimeter. They assume that exposing a secret in a private repository is less harmful because it is not subject to public scrutiny. The result is a silent buildup of hardcoded credentials that are meant to be removed "later" and rarely are.

Internal repositories also hold the most valuable credentials:

  • CI/CD tokens
  • Cloud access keys
  • Database credentials
  • Internal tooling tokens

These are exactly the assets an attacker wants once they establish a foothold. A single exposed secret in a private repo can become a fast path to lateral movement across the entire infrastructure.

Industry exposure rates (public repositories):

Industry Repos with at least one secret
Oil & Natural Energy 7.2%
Aviation 7.0%
Retail & Hospitality 5.8%
Healthcare 4.4%

These figures represent only what is visible externally. Internal exposure is 6× higher across the board.

Consulting firms turn secrets sprawl into third-party risk

Contractors and consulting firms operate across multiple client environments simultaneously. They hold credentials, tokens, and configuration knowledge for every client they serve — and they often work in personal accounts or repositories outside their client's GitHub organization.

GitGuardian's analysis of 13 consulting firms:

Metric Figure
Critical / highly sensitive incidents 1,834
Average incidents per firm 141
Potentially impacted customer companies 1,203
Share of incidents from top 5 firms 72%

The Red Hat breach (October 2025): The cybercrime group "Crimson Collective" exfiltrated 570 GB of data from 28,000 repositories on Red Hat's internal consulting GitLab instance, affecting approximately 800 organizations worldwide. The leaked data contained:

  • API keys and database credentials
  • Authentication tokens and VPN configurations
  • Infrastructure details and internal architecture

Affected organizations included Bank of America, JPMorgan Chase, IBM, Cisco, the U.S. Navy, and the NSA. The attackers used the harvested credentials to pivot directly into customer infrastructure.

Any organization that uses contractors or consulting firms has a third-party secrets problem, whether or not it has been discovered yet.

Real incident: In April 2026, Vercel confirmed a breach after a threat actor (claiming to be ShinyHunters) posted on a hacking forum that they were selling stolen API keys, npm tokens, GitHub tokens, source code, and access to internal deployments. The initial access came through a compromised third-party AI tool (Context.ai), which gave the attacker a foothold in a Vercel employee's Google Workspace account. From there, the attacker enumerated environment variables that were not marked as "sensitive" — and therefore not encrypted at rest. Vercel's own CEO confirmed the chain: one vendor breach → one employee account → production environment variables (BleepingComputer, April 2026).

One in four internal leaks originates outside the codebase

Where internal incidents originate:

Source Share of incidents Critical severity rate
Source code (SCM only) 68% 43.7%
Collaboration tools (ODS only) 28% 56.7%
Both SCM and ODS 4% —

Collaboration tools (Slack, Jira, Confluence) account for 28% of incidents, with a 13 percentage-point higher critical severity rate than code-based leaks. Secrets shared through these tools tend to be production credentials shared during incident response or urgent troubleshooting, when people are moving fast and not thinking about security hygiene.

The 4% overlap between SCM and ODS findings means these are largely separate leak populations. Scanning only repositories misses roughly a quarter of an organization's total exposure.

80,000 secrets on self-hosted GitLab and Docker registries

GitGuardian identified thousands of self-hosted GitLab instances and Docker registries left publicly accessible without authentication.

Findings summary:

Platform Total secrets Valid secrets Validity rate
Self-hosted GitLab 57,000 ~6,800 12%
Docker registries 23,000 ~3,450 15%
Total 80,000 ~10,000

Validity rates by credential type (Docker vs. GitLab):

Credential type Docker GitLab
Cloud credentials 60% valid 47% valid
SCM secrets 40% valid 2% valid
Data storage 32% valid 4% valid

The closer an asset is to production, the higher the likelihood of finding a valid credential. The leak rate from self-hosted GitLab and Docker is 3–4× higher than from public GitHub.

The research also uncovered 300,000+ email addresses (including 2,000 with .gov domains) and references to internal database hosts and non-public infrastructure.

The "Russian dolls" effect: publicly exposed leaks contain valid secrets that grant access to private infrastructure, which in turn exposes more secrets, compounding the initial breach at each layer.


64% of leaked secrets from 2022 are still valid in 2026


Detection without remediation is not security — it is documentation. The longitudinal data in the 2026 report makes this case clearly.

Validity of secrets originally leaked in 2022, retested over time:

Retest date Validity rate
2022 (original leak) 100%
January 2025 ~70%
January 2026 64%

Those credentials have been sitting in public code, exploitable by anyone who finds them, for four years. The persistence is an operational signal that remediation — not detection — is the industry's limiting factor.

Why rotation rarely happens:

Credentials are not isolated strings. They are embedded across:

  • Build systems and CI/CD pipelines
  • Multiple repositories (duplicated)
  • Container images (baked in at build time)
  • CI variables and environment configs
  • Vendor and third-party integrations

The short-term choice many teams make is not the safest one — it is the one that avoids breaking anything: do nothing.
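Rotation gets easier once a team knows exactly where a credential lives. A minimal inventory sketch (the `.env` file pattern is an assumption; real estates also include CI variables and container images) that fingerprints values so duplicates can be counted before anyone touches production:

```python
import hashlib
from collections import defaultdict
from pathlib import Path

def fingerprint(value: str) -> str:
    """Stable, non-reversible identifier for a credential value."""
    return hashlib.sha256(value.encode()).hexdigest()[:12]

def inventory_env_files(root: Path) -> dict:
    """Map each credential fingerprint to every location it appears in."""
    locations = defaultdict(list)
    for env_file in root.rglob(".env*"):
        for line in env_file.read_text().splitlines():
            if "=" in line and not line.lstrip().startswith("#"):
                key, _, value = line.partition("=")
                if value.strip():
                    locations[fingerprint(value.strip())].append(
                        f"{env_file}:{key.strip()}"
                    )
    return locations

# Any fingerprint with more than one location is a duplicated secret:
# rotating it means updating every listed spot, which is exactly why
# "do nothing" so often wins.
```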

NHI policy breach distribution (GitGuardian customer data):

Issue type Share of flagged issues
Long-lived secrets past expiration 60.4%
Internal leaks 17.0%
Duplicated secrets 15.6%
Public leakage 5.2%

Creation velocity is outpacing identity maturity. AI makes it easier to scaffold projects and connect services, but also easier to reproduce insecure patterns at scale when the default move is "just add a key."


Why validation-only prioritization fails

A widespread assumption in secrets security: if a secret cannot be validated — confirmed as currently active — deprioritize it. The 2026 report challenges this directly.

46% of critical secrets are invisible to validation-only tools

Metric Figure
Critical secrets missed by validation-only tools 46%
High-and-above secrets that never get addressed 83%
Validation-only precision rate ~50%
Unvalidatable secrets classified as critical 17,000
Unvalidatable secrets classified as high risk 80,000+

The gap exists because validation coverage is always incomplete:

  • APIs change without notice
  • New services launch constantly
  • Each provider requires dedicated infrastructure to validate
  • Regional and industry-specific platforms proliferate

Ignoring unvalidatable secrets by default is not a conservative strategy. It is a systematic blind spot.

Generic secrets drive half of all high-or-critical incidents

"Generic secrets" — private keys, custom API tokens, passwords, and access mechanisms detected through entropy checks and context rather than provider-specific patterns — are routinely deprioritized because they cannot be automatically validated.

The data says this is a mistake:

  • 35% of critical incidents trace back to generic credentials
  • 51% of high-or-critical incidents trace back to generic credentials
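Generic detectors lean on entropy and context rather than provider-specific patterns. A toy version of the entropy signal (the 3.5 bits-per-character threshold and 16-character minimum are illustrative assumptions, not GitGuardian's tuning):

```python
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Average bits of information per character in the string."""
    counts = Counter(s)
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def looks_like_secret(value: str, threshold: float = 3.5) -> bool:
    # Real detectors also weigh context: variable names, file type, and
    # proximity to words like "token" or "Authorization". Entropy is a start.
    return len(value) >= 16 and shannon_entropy(value) >= threshold

print(looks_like_secret("passwordpassword"))              # repetitive, low entropy
print(looks_like_secret("9f4Kx2mQ7vLp0Rt8eWz1UaBcD3nH"))  # random-looking, flagged
```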

The Google joint research (2025): GitGuardian and Google analyzed one million leaked private keys against Certificate Transparency logs. Key findings:

  • 4.5% of leaked keys mapped to trusted X.509 certificates
  • Half had a valid certificate at the time they leaked
  • 4,000+ HTTPS certificates are compromised per year because of a leaked key
  • Affected organizations included multiple Fortune 500 companies and a trusted Certificate Authority
  • GitGuardian sent 4,000+ alert emails to 600 organizations — response rate: under 10%

Valid does not always mean dangerous

The inverse problem is equally real. Approximately 10% of valid secrets are inherently low-impact — sandbox tokens, test environment credentials, low-privilege service accounts with access only to trivial data.

Without contextual risk scoring, teams rotate credentials in detection order or validation order, not threat order.

The four capabilities effective secrets security requires:

Capability What it does
Enrichment Understands what each secret unlocks
Context Assesses privilege, scope, and exposure
Risk scoring Prioritizes based on actual business impact
Full coverage Addresses the long tail, not just the validated minority

Teams still operating on validation-only approaches are systematically exposed to exactly the threats that matter most — while burning engineering time on credentials that pose little real danger.
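A contextual score can be sketched as a weighted combination of these signals. The weights, factor scales, and type names below are illustrative assumptions, not the report's actual scoring model:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SecretFinding:
    validated: Optional[bool]  # True/False when checkable, None when unvalidatable
    privilege: int             # 0 = sandbox ... 3 = production admin
    exposure: int              # 0 = private repo ... 2 = public internet
    secret_type: str

GENERIC_TYPES = {"private_key", "generic_password", "custom_token"}

def risk_score(finding: SecretFinding) -> int:
    """Higher score = rotate first. Unknown validity is treated as live."""
    score = finding.privilege * 3 + finding.exposure * 2
    if finding.validated is False:
        score -= 4  # confirmed-dead credentials drop in priority
    if finding.secret_type in GENERIC_TYPES:
        score += 2  # unvalidatable generics drive half of high-or-critical incidents
    return score

# An unvalidatable production private key outranks a validated sandbox token,
# even though validation-only tooling would order them the other way around.
sandbox = SecretFinding(validated=True, privilege=0, exposure=2, secret_type="api_key")
prod_key = SecretFinding(validated=None, privilege=3, exposure=0, secret_type="private_key")
assert risk_score(prod_key) > risk_score(sandbox)
```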


The developer workstation as an overlooked attack surface

The Shai-Hulud 2 supply chain attack gave GitGuardian direct empirical data on what secrets actually look like on developer machines at scale. By compromising npm packages and executing at install time, the malware systematically harvested environment files and ran structured local secret scans across thousands of real machines.

Shai-Hulud 2 findings across 6,943 compromised machines:

Metric Figure
Total secret occurrences 294,842
Unique secrets identified 33,185
Still valid at time of analysis 3,760
Average locations per live secret ~8
Machines with 10+ secrets 44%
Machines with 100+ secrets 5%
CI/CD runners (vs. personal workstations) 59%

Each live secret appeared in roughly eight different locations on the same machine — dotfiles, shell profiles, build outputs, IDE configs, and tool caches. Each copy is an independent vector for theft.

GitHub tokens dominated the validated set:

  • 581 personal access tokens
  • 386 OAuth tokens
  • 104 fine-grained PATs
  • 101 GitLab tokens

Each one enables repository access, workflow manipulation, or lateral movement across the software supply chain. And because 59% of compromised machines were CI/CD runners, this exposure extends well beyond the individual developer into shared build infrastructure.

GitGuardian considers these numbers conservative. The attack only ran where the malicious package was installed. It did not reach agent memory folders, IDE caches, or the growing surface area of AI-generated artifacts that now routinely include credentials.


Passwork is available as a self-hosted solution with full control over your data. It replaces shared .env files, credentials pasted into chat, and tokens baked into CI/CD configs — with a structured vault, role-based access, and a full audit log. Explore Passwork's deployment options


The path forward: From reactive detection to NHI governance

Detection catches what already exists. The goal is to get ahead of exposure — complete visibility into every credential in the environment: who owns it, what it accesses, and whether it should still exist at all. That shift, from chasing leaks to governing non-human identities, is what NHI governance means in practice.

Step 1. Centralize secrets in vault platforms

Make the vault the source of truth. When teams can reliably retrieve credentials from a single, access-controlled location, they stop inventing fragmented storage strategies — one of the primary drivers of secrets sprawl. Passwork's vault structure is designed for exactly this: organized, encrypted, and access-controlled storage for API keys, database passwords, certificates, SSH keys, and service account credentials.

Step 2. Automate rotation

If a secret must exist, it should not live forever. Regular credential replacement shortens the window an attacker can exploit a leaked secret and forces teams to treat credentials as objects with a lifecycle, not one-time setup tasks. Rotation is far simpler once a vaulting strategy is in place — the vault becomes the single location to update, rather than hunting down every place a credential was copied.

Step 3. Fix developer workflows

Developers hardcode secrets because it is the fastest way to get code working. Remove the need for shared .env files and copied tokens by making vault-based credential retrieval equally fast. The secure path has to be easier than the insecure one.
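One low-friction pattern is a shared helper that only reads credentials injected by the vault integration and refuses to fall back to anything hardcoded. A sketch (the helper and its error message are hypothetical, and assume a vault agent or CI step populates the environment):

```python
import os

def get_secret(name: str) -> str:
    """Fetch a credential injected by the vault integration; never hardcode."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(
            f"{name} is not set. Retrieve it through the team vault "
            "(do not paste it into code, committed .env files, or chat)."
        )
    return value

# Usage: the credential exists only in the vault and the process environment.
# db_url = get_secret("DATABASE_URL")
```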

Step 4. Shift scanning earlier

Pre-commit scanning and workstation-level detection stop incidents before they land anywhere permanent. Modern scanning tools provide actionable signal with low false-positive rates — a significant improvement over the noisy tools that eroded developer trust in the past.

Step 5. Move toward identity-based authentication

The long-term exit from long-lived static secrets is short-lived, identity-driven access. Frameworks like SPIFFE, implemented by open-source SPIRE, replace shared-string authentication with strongly attested workload identity. Each workload receives just-in-time, short-lived credentials rather than a static key that can be copied, leaked, and exploited for years.
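The contrast with static keys is lifetime. A toy model of a short-lived credential follows; real SPIFFE SVIDs carry cryptographically attested identity, not just an expiry, so this only shows the shape of the idea:

```python
import time
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ShortLivedCredential:
    subject: str               # workload identity, e.g. a SPIFFE ID
    ttl_seconds: int = 300     # lifetime measured in minutes, not years
    issued_at: float = field(default_factory=time.time)

    def is_valid(self, now: Optional[float] = None) -> bool:
        now = time.time() if now is None else now
        return now < self.issued_at + self.ttl_seconds

cred = ShortLivedCredential(subject="spiffe://example.org/payments-api")
assert cred.is_valid()                              # fresh credential works
assert not cred.is_valid(now=cred.issued_at + 600)  # and expires minutes later
# Contrast: the 2022 static keys in this report were still valid four years on.
```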

Three principles for 2026

Principle What it means in practice
Treat internal repos as first-class leak sources Apply the same remediation rigor to internal findings as to public ones. The highest-value credentials live in private systems.
Extend detection beyond code Scan Slack, Jira, and Confluence. Scanning only repositories misses a quarter of total exposure.
Eliminate hardcoded secrets entirely Remove the root cause: long-lived, static credentials living in code, configs, and chat logs instead of secrets management systems.

Three governance questions every organization must answer

  1. What non-human identities exist in our environment?
  2. Who owns them?
  3. What can they access?

If any of those questions cannot be answered with confidence, AI adoption is outpacing security posture.


Key takeaways for IT and security teams


The 2026 GitGuardian report is a detailed account of an accelerating problem. Here is what the data demands in practice:

Finding Implication
28.65M new secrets in one year (+34%) Volume is structural — it scales with the codebase, not with carelessness
AI-service secrets up 81%; LLM infra leaking 5× faster than model providers Every new AI integration adds credentials that can and do leak
Internal repos 6× more likely to contain a secret "Private" is not a security control
28% of incidents originate in collaboration tools Scanning only repos misses a quarter of exposure
64% of 2022 secrets still valid in 2026 Remediation, not detection, is the bottleneck
46% of critical secrets missed by validation-only tools Risk scoring requires context, not just a validity check
59% of compromised machines in Shai-Hulud 2 were CI/CD runners The attack surface extends deep into build infrastructure

Organizations that treat credential management as a lifecycle discipline — with centralized vaults, automated rotation, and enforced access control — will be best positioned for the agentic-AI era. Those that treat it as a cleanup task will keep finding that their secrets outlast their security assumptions.


The data is clear: secrets sitting in code, configs, and chat messages are a breach waiting to happen. Passwork gives your team a single, secure place to store, share, and rotate credentials — with role-based access, a full audit log, and self-hosted deployment that keeps everything within your own infrastructure. Try Passwork in your infrastructure

Source: GitGuardian, "The State of Secrets Sprawl 2026," published 2026. All statistics cited in this article are drawn directly from that report.

FAQ


What is secrets sprawl?

Secrets sprawl is the uncontrolled proliferation of hardcoded credentials — API keys, passwords, tokens, certificates, and connection strings — across codebases, configuration files, CI/CD pipelines, and collaboration tools. It occurs when credentials are created faster than they are tracked, rotated, or revoked, leaving organizations with an expanding inventory of exploitable access paths they cannot fully account for.

How many secrets were leaked on GitHub in 2025?

GitGuardian detected 28.65 million new hardcoded secrets in public GitHub commits in 2025 — a 34% increase over 2024 and the largest single-year jump ever recorded. That figure covers only public repositories. Internal repositories, collaboration tools, and self-hosted infrastructure add substantially to the total exposure.

What percentage of leaked secrets are never revoked?

64% of secrets confirmed as valid in 2022 were still valid and exploitable as of January 2026, meaning they had been sitting in public code for four years without being rotated or revoked. The validity rate was approximately 70% when the same dataset was retested in January 2025, showing only a gradual decline despite four years of exposure.

Why are internal repositories more dangerous than public ones?

Internal repositories are 6× more likely to contain a hardcoded secret than public ones — 32.2% vs. 5.6% in 2025. Teams tend to be less cautious within a closed perimeter, assuming that a private repository is inherently safe. Internal repos also hold the most valuable credentials: CI/CD tokens, cloud access keys, and database credentials — exactly what attackers target once they establish a foothold.

How does AI-assisted coding increase the risk of secrets leaks?

AI coding assistants generate larger commits with more code per change, increasing the surface area for credential exposure. Claude Code co-authored commits leaked secrets at 3.2% — more than double the 1.5% baseline across all public GitHub commits. LLM infrastructure secrets are leaking 5× faster than core model provider secrets. Each new AI service integration adds credentials that can be hardcoded, copied, and leaked.

What are MCP configuration files and why do they leak secrets?

Model Context Protocol (MCP) is the standard for connecting large language models to external tools and data sources. MCP configuration files define how an AI agent connects to databases, APIs, and services — and those connections require credentials. In 2025, GitGuardian found 24,008 unique secrets in MCP config files on public GitHub, with 2,117 confirmed valid. Official MCP setup guides often normalize hardcoding credentials directly into config files, which propagates the problem.

What is validation-only prioritization and why does it fail?

Validation-only prioritization is the practice of deprioritizing secrets that cannot be confirmed as currently active. It fails because 46% of critical secrets cannot be validated — they belong to services without validation checkers, or to generic credential types like private keys and custom tokens. Teams using this approach miss nearly half their most dangerous leaks while spending remediation effort on low-risk valid credentials. Effective prioritization requires contextual risk scoring, not just a validity check.

What is NHI governance and why does it matter?

Non-human identity (NHI) governance is the practice of managing the full lifecycle of machine identities — service accounts, API keys, agent tokens, and other non-human credentials — with the same rigor applied to human user accounts. It answers three questions: what NHIs exist in the environment, who owns them, and what can they access. As AI-assisted development accelerates credential creation, NHI governance is the discipline that prevents creation velocity from permanently outpacing security posture.
