Vibe Coding Security Risks: What Happens When AI Writes All the Code

"Vibe coding" is the practice of describing what you want in natural language and letting an AI model generate the code, with minimal or no human review of the output. The term was coined by Andrej Karpathy in early 2025 and has since become both a cultural phenomenon and a growing security concern.

Coding agents optimize for making code run, not for making it safe. By one industry estimate, 20% of vibe-coded applications ship with serious vulnerabilities or configuration errors.

What Makes Vibe Coding Dangerous

Vibe coding is not AI-assisted development. It is AI-delegated development. The developer describes intent and accepts output without deeply reviewing the implementation. That creates a fundamentally different risk profile.

  • When developers vibe code, they describe features, not security constraints. The AI model gets no instruction to validate input, sanitize output, or avoid hardcoded credentials.
  • AI-generated code looks professional. It compiles, runs, and passes basic tests. But it can silently introduce vulnerabilities that remain invisible without security-focused review.
  • The core appeal is speed: ship fast, iterate later. Security becomes something to address after launch, if at all.
  • AI models do not know your approved dependencies, your security policies, or your compliance requirements. They generate plausible code based on training data patterns.
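The hardcoded-credentials problem is the simplest illustration of this gap. A sketch of the contrast, in Python (the function and variable names here are hypothetical, not drawn from any real incident):

```python
import os

# What a feature-only prompt tends to produce: the credential is baked into
# the source, so it ends up in version control and every build artifact.
API_KEY = "sk-live-abc123"  # hardcoded secret (the anti-pattern)

# What a security-constrained prompt should produce: the credential comes
# from the environment, and the code fails loudly if it is missing.
def get_api_key() -> str:
    key = os.environ.get("SERVICE_API_KEY")  # variable name is an assumption
    if key is None:
        raise RuntimeError("SERVICE_API_KEY is not set")
    return key
```

Both versions "work" in a demo, which is exactly why the first survives a vibe-coding workflow: nothing breaks until the key leaks.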

The Security Debt

Towards Data Science calls this the "vibe coding security debt crisis." Organizations are building products on AI-generated code with minimal review, accumulating security debt at a rate that has no historical precedent.

  • Injection vulnerabilities are the most common issue. AI models default to string concatenation for database queries and command execution.
  • Hardcoded API keys, tokens, and database passwords show up consistently in vibe-coded applications.
  • AI models suggest dependencies without verifying versions, licenses, or known vulnerabilities, creating supply chain risk.
  • The Enrichlead incident is a good example: a platform whose founder boasted "100% AI-written code" was found to allow anyone to access paid features or alter data.
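The injection pattern mentioned above is worth seeing concretely. A minimal sketch using Python's built-in sqlite3 module (the table and payload are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

def find_user_unsafe(name: str):
    # The pattern AI models default to: string concatenation. An input like
    # "' OR '1'='1" rewrites the WHERE clause to match every row.
    return conn.execute(
        "SELECT name FROM users WHERE name = '" + name + "'"
    ).fetchall()

def find_user_safe(name: str):
    # Parameterized query: the driver treats the input strictly as data.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)
    ).fetchall()

payload = "' OR '1'='1"
find_user_unsafe(payload)  # returns every row in the table
find_user_safe(payload)    # returns nothing: the payload is just a string
```

Both functions pass a happy-path test with a normal username, which is why the unsafe version sails through a vibe-coding workflow unreviewed.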

Vibe Coding in the Enterprise

Vibe coding is not limited to side projects and startups. Enterprise developers do it too, using Cursor's agent mode, Copilot chat, and Claude Code to generate large blocks of code from natural language descriptions. The difference is the blast radius.

  • A vulnerability in a weekend project affects one user. A vulnerability in enterprise production code affects millions.
  • Enterprise code must meet compliance requirements (SOC 2, HIPAA, FedRAMP, EU AI Act) that vibe coding ignores entirely.
  • Enterprise codebases have architectural decisions, approved patterns, and security policies that AI models have no awareness of.

The Fix Is Not "Stop Vibe Coding"

Vibe coding is too productive to ban. The fix is governance at the point of generation. Instead of asking developers to manually review every line of AI output (which defeats the purpose), you insert a governance layer between the AI tool and the model. That layer enriches the prompt with your security policies, validates the response against your rules before code reaches the developer, and logs everything for compliance.

This is what we built Unyform to do. Developers keep vibe coding at full speed, but every line of generated code is governed, secure, and auditable.

We have more on the security data here, and a broader primer on what AI code governance actually is.

Vibe coding is one of several risks of AI-generated code that proactive governance addresses.

Sources: Towards Data Science (2026), Lawfare (2026), Kaspersky (2025), Retool (2026), Veracode (2025), Contrast Security (2026).