Vibe Coding Has a Security Problem Nobody Is Talking About

Abhishek Gautam · 8 min read

Quick summary

Collins Dictionary just named "vibe coding" Word of the Year. Millions of people are using Cursor, Replit Agent, and GitHub Copilot to build and deploy apps without fully understanding the code. The security industry is starting to notice the results.

Collins Dictionary named "vibe coding" its Word of the Year for 2025. The definition they settled on: programming by describing what you want in natural language and having AI generate the code, with the programmer reviewing the result loosely or not at all.

The term captures something real. GitHub Copilot now has over 1.8 million paid subscribers. Cursor has become the default IDE for a significant portion of the developer community. Replit Agent will build and deploy a web application from a description in under an hour. The number of people who have shipped production software without deeply understanding the code they shipped has grown faster in the past two years than in any comparable period in computing history.

The security industry is starting to publish what it has been finding.

What AI coding tools actually do with authentication

Ask most AI coding tools to build a user authentication system and they will give you one. It will probably use a well-known library, it will probably follow the basic flow correctly, and it will probably have several subtle vulnerabilities that a non-expert would not notice.

The specific failure modes that come up repeatedly in security audits of AI-generated authentication code:

- hardcoded API keys or database credentials in files that end up in version control
- JWT tokens that are generated but not properly validated on subsequent requests
- password reset flows that are technically functional but susceptible to account takeover through timing attacks
- session tokens that are never invalidated on logout
- admin routes that are protected in the frontend but not in the API layer
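To make the timing-attack item concrete: the usual bug is comparing a password-reset token with ordinary string equality, which fails fast on the first wrong byte and leaks information through response timing. A minimal sketch of the fix, assuming tokens are stored as HMAC digests (the function names here are illustrative, not from any particular tool's output):

```python
import hmac
import hashlib

def hash_token(token: str, secret: bytes) -> str:
    # Store only the HMAC of the reset token, never the token itself.
    return hmac.new(secret, token.encode(), hashlib.sha256).hexdigest()

def verify_reset_token(supplied: str, stored_hash: str, secret: bytes) -> bool:
    # compare_digest runs in constant time, so an attacker cannot
    # recover the token byte by byte from response timing.
    return hmac.compare_digest(hash_token(supplied, secret), stored_hash)
```

The point is not the specific primitive but the habit: secrets get compared with `hmac.compare_digest`, never with `==`.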

None of these are exotic vulnerabilities. They are on the OWASP Top 10. They have been documented for decades. A developer with a few years of experience would catch most of them in code review. A developer asking Cursor to "build me a login system" and accepting the output without review would not necessarily see any of them, because the code works. The login flow accepts the correct password and rejects the incorrect one. Everything appears to function.

The vulnerabilities are in the edge cases, the flows that do not get manually tested, the endpoints that the demo never touches.

The non-developer deployment problem

Traditional software security had a natural gatekeeping mechanism built in. Building a web application required enough technical knowledge that the people doing it generally also understood, at least at a basic level, why SQL injection was dangerous, why you should not store passwords in plaintext, and why your API endpoints needed authentication.

Vibe coding tools have removed that gate. A product manager at a startup can now deploy a customer-facing web application with a database backend over a weekend. An entrepreneur can build a tool that collects payment information without ever having taken a computer science course. A freelancer can deliver a fully functional SaaS product to a client without being able to explain how it works at a security level.

This is not entirely new. WordPress plugins and no-code tools created similar dynamics years ago. The scale and the capability level are new. A vibe-coded application can be genuinely sophisticated, with complex business logic, third-party integrations, and real production load, while containing the kind of fundamental security architecture problems that no-code tools typically would not allow you to create because they handled authentication and data storage themselves.

What the security audits are finding in practice

A report circulating in developer security communities in February 2026, based on audits of applications that were either vibe-coded or heavily AI-assisted, identified several patterns.

API key exposure was the most common finding. AI tools frequently generate application code that reads secrets from environment variables correctly, but then produce example files, README documentation, or configuration samples that contain the real key values. These end up in public GitHub repositories with striking regularity.

IDOR vulnerabilities, where you can access another user's data by changing an ID in a URL or API call, appeared in a high proportion of AI-generated CRUD applications. The AI code correctly implements the basic create/read/update/delete operations but often does not add the authorization check that verifies the requesting user is allowed to touch the specific record being requested.
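The missing authorization check is usually a single line. A minimal sketch, using a plain dict as a stand-in database (the names are illustrative):

```python
def get_record(db: dict, requesting_user_id: int, record_id: int) -> dict:
    record = db.get(record_id)
    if record is None:
        raise KeyError("no such record")
    # The line AI-generated CRUD handlers routinely omit: verify that
    # the requester owns the record, not just that they are logged in.
    if record["owner_id"] != requesting_user_id:
        raise PermissionError("record belongs to another user")
    return record
```

Being authenticated answers "who are you"; this check answers "is this yours", and the two are not interchangeable.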

Mass assignment vulnerabilities, where an API endpoint accepts and processes object fields that should not be user-controllable, were common in AI-generated backends. The model builds what you describe. If you describe a user profile update endpoint, it builds one. If you do not specifically describe filtering which fields can be updated, it often does not add that filtering.
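The standard fix for mass assignment is an explicit allowlist of updatable fields. A sketch under the same dict-backed assumption (field names are hypothetical):

```python
# Only these fields may be set from a profile-update request body.
ALLOWED_PROFILE_FIELDS = {"display_name", "bio", "avatar_url"}

def apply_profile_update(profile: dict, payload: dict) -> dict:
    # Copy only explicitly allowed fields from the request body.
    # Without this allowlist, a payload like {"is_admin": true} would
    # be written straight onto the user record.
    updates = {k: v for k, v in payload.items() if k in ALLOWED_PROFILE_FIELDS}
    return {**profile, **updates}
```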

What you should actually check before deploying

If you have shipped or are planning to ship a vibe-coded application with real users, a handful of specific checks will catch the most common problems.

Search your entire codebase and any configuration files for strings that look like API keys, database passwords, or JWT secrets. If any of those values are in your repository, rotate them immediately even if the repository is private.
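That search can start as simply as a few regular expressions run over the repository text. A rough sketch; these patterns cover only a few well-known key formats, and dedicated scanners such as gitleaks or trufflehog ship far larger rule sets:

```python
import re

SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),         # AWS access key ID format
    re.compile(r"sk_live_[0-9a-zA-Z]{24}"),  # Stripe-style live secret key
    re.compile(r"(?i)(api_key|secret|password)\s*[=:]\s*\S{8,}"),
]

def find_secrets(text: str) -> list:
    # Return every substring that matches a known secret pattern.
    return [m.group(0) for pat in SECRET_PATTERNS for m in pat.finditer(text)]
```

Run it over every tracked file, including examples and docs, since those are exactly where the leaked values tend to live.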

Test every API endpoint in your application without being logged in, and then while logged in as a lower-privilege user than the action requires. If any endpoint returns data or performs an action it should not, you have an authorization problem.

Look for any place where user input is used to construct a database query. AI-generated code often uses parameterized queries correctly, but it is worth verifying every one.
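The difference is easy to see with `sqlite3` from the standard library: the parameterized form treats a classic injection payload as an ordinary, non-matching string rather than as SQL.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, email TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice@example.com')")

# Classic injection payload: with string interpolation it would turn
# the WHERE clause into a tautology and return every row.
user_input = "nobody@example.com' OR '1'='1"

# Parameterized form: the driver passes the input as a single literal
# value, so the OR clause is just part of a (non-matching) email string.
rows = conn.execute(
    "SELECT id FROM users WHERE email = ?", (user_input,)
).fetchall()
```

Any query built with an f-string or `+` concatenation around user input is the pattern to flag, even if it happens to work.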

Check your admin routes, your billing endpoints, and any endpoint that performs a destructive action. Verify that the protection exists in the backend as well as the frontend, because frontend-only protection is exactly the gap that audits keep finding.
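The backend half of that check is a few lines of server-side code, independent of whatever buttons the frontend hides. A hedged sketch with illustrative names:

```python
def delete_user(current_user: dict, target_id: int, users: dict) -> None:
    # The server must enforce this even if the frontend already hides
    # the delete button: hiding UI is not authorization.
    if current_user.get("role") != "admin":
        raise PermissionError("admin role required")
    users.pop(target_id, None)
```

The test from the checklist above is then mechanical: call the endpoint as a non-admin and confirm the request is rejected, not merely hidden.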

The nuanced version

This piece is not arguing that vibe coding is bad or that AI coding tools should not be used. The productivity gains are real, the tools are genuinely useful, and the number of people they have enabled to build things they could not have built otherwise is large and broadly positive.

The argument is that the security model of vibe-coded applications requires explicit attention precisely because the tools that make them easy to build do not make them automatically secure. The gap between a working application and a secure application is not visible when you run the demo. It is visible when someone who knows what they are looking for examines the code, or when someone malicious finds the gap before you do.

Claude Code found 500 vulnerabilities in open source codebases in a recent test run. Security scanners are good tools and worth using. But the first step is understanding that your vibe-coded application is worth auditing at all, because it looks and feels like a finished product even if it has serious gaps.

The word of the year celebrates how easy building has become. The security story is about what that ease sometimes costs when the building gets deployed to real users.


Written by

Abhishek Gautam

Full Stack Developer & Software Engineer based in Delhi, India. Building web applications and SaaS products with React, Next.js, Node.js, and TypeScript. 8+ projects deployed across 7+ countries.
