
From Vibe Coding to Insecure Keys

Not everyone who builds software should ship software.


There's a moment that happens in every vibe coding session. You type something like "build me a login system with Stripe payments" into Cursor or Lovable, the AI spits out a working app, and you feel invincible. The code runs. The buttons click. The payments go through. You're a developer now.

Except you're not. And the people who will use your app are about to find that out the hard way.

What Is Vibe Coding, Exactly?

The term was coined by Andrej Karpathy in early 2025 and quickly became Collins Dictionary's Word of the Year. The idea is simple: you describe what you want in plain English, an AI generates the code, and you ship it without reading every line. "Fully give in to the vibes," as Karpathy put it, "and forget that the code even exists."

By 2026, 92% of US developers use AI coding tools daily. GitHub reports that 46% of all new code is now AI-generated. Among Y Combinator's Winter 2025 batch, roughly a quarter of startups had codebases that were about 95% AI-generated.

The acceleration is real. The problem is who's accelerating — and where they're headed.

The People Who Get Hurt Aren't the Ones Writing the Prompts

Here's what most vibe coding discussions get wrong: they focus on the developer. "Will vibe coders learn to code properly?" is the wrong question. The right question is: what happens to the people who download the app, enter their credit card, and trust it with their data?

Because right now, the answer is: nothing good.

In January 2026, a platform called Moltbook launched as a social network for AI agents. The founder publicly stated he didn't write a single line of code — the entire thing was vibe coded. Within three days, security researchers discovered the app had exposed 1.5 million API authentication tokens, 35,000 email addresses, and thousands of private messages to the open internet. The root cause wasn't a sophisticated hack. It was a misconfigured database that the AI set up with public access during development and nobody ever changed.

This isn't an isolated case. A study by Escape.tech scanned 5,600 vibe-coded applications and found over 2,000 vulnerabilities, more than 400 exposed secrets (API keys, credentials, tokens sitting in readable code), and 175 instances of personally identifiable information leaking through app endpoints. Nearly half of all AI-generated code samples fail basic security tests.

The users of these apps didn't sign up to be beta testers for someone's weekend experiment. They signed up to use a product. And their data paid the price.

How AI Makes Code That Works But Isn't Safe

Understanding why this happens requires understanding what AI coding tools are optimized for: making things work. Not making things safe. Not making things maintainable. Just making the error message go away.

When you tell an AI "I'm getting a Permission Denied error on my database," it doesn't think about security implications. It thinks about making the error disappear. So it might generate a database policy that grants public access to everything. The error is gone. The app works. And your entire user database is now readable by anyone with a browser.

Here are the patterns that repeat across nearly every vibe coding disaster:

Hardcoded secrets. AI assistants routinely generate code with API keys, database passwords, and tokens written directly into source files. When that code hits a repository — even a private one — those secrets are one leak away from being exploited. The Moltbook breach started exactly this way.
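The fix for this particular pattern is mechanical. Here is a minimal Python sketch (the variable and key names are illustrative, not taken from any incident above): read the secret from the environment and refuse to start without it.

```python
import os

# ANTI-PATTERN (what assistants often emit): the key is written into source,
# so it lives in every clone and every commit of the repository forever.
# STRIPE_SECRET_KEY = "sk_live_..."  # one leaked repo away from fraud

# Safer: read the secret from the environment and fail loudly if it is
# missing, rather than silently falling back to a hardcoded default.
def get_stripe_key() -> str:
    key = os.environ.get("STRIPE_SECRET_KEY")
    if not key:
        raise RuntimeError("STRIPE_SECRET_KEY is not set; refusing to start")
    return key
```

Failing loudly matters: a silent fallback to a dummy key turns a configuration mistake into a production mystery.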

Missing access controls. In one documented case, AI-generated code for over 170 production applications inverted the access control logic entirely: authenticated users were blocked while unauthenticated visitors had full access. The code passed visual review. It looked correct. Only a proper security test would have caught it.
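The inverted check is easy to picture. A hypothetical Python reconstruction (not the actual code from those applications) shows why it survives visual review but fails the most basic unit test:

```python
from dataclasses import dataclass

@dataclass
class User:
    is_authenticated: bool

# Buggy version: reads almost right at a glance, but the stray "not"
# blocks logged-in users and admits everyone else.
def can_access_buggy(user: User) -> bool:
    return not user.is_authenticated

# Correct version: only authenticated users get through.
def can_access(user: User) -> bool:
    return user.is_authenticated
```

A two-line test would have caught this instantly, which is the point: security behavior needs tests, not eyeballs.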

Client-side security. A startup called Enrichlead built their entire platform with AI tools. The interface was polished and professional. But all security logic lived in the browser. Within 72 hours of launch, users discovered they could bypass the paid subscription by changing a single value in their browser's developer console. The project shut down entirely.
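The rule this breaks is simple: anything the browser sends is attacker-controlled. A hedged Python sketch (the names and data are made up) of what server-side enforcement looks like:

```python
# The only check that counts is the one the server makes. A "premium" flag
# set by the client is just input, and input can be forged in devtools.
PAID_USERS = {"alice"}  # in a real app, a database lookup

def handle_premium_request(username: str, client_claims_premium: bool) -> str:
    # WRONG (client-side security): trust client_claims_premium directly.
    # RIGHT: re-derive the entitlement server-side and ignore the claim.
    if username in PAID_USERS:
        return "premium content"
    return "upgrade required"
```

Note that the client's claim is deliberately ignored: the server recomputes the answer from its own records on every request.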

Dangerous defaults. AI models are trained on a massive corpus of code, and they gravitate toward whatever makes things work fastest. That means using dangerouslySetInnerHTML without sanitization, disabling SSL verification, or setting CORS to accept everything. These shortcuts are fine in a tutorial. They're catastrophic in production.
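Take CORS as a concrete example. The shortcut is to answer every origin with a wildcard; the boring fix is an explicit allowlist. A minimal Python sketch (the domain is hypothetical):

```python
# Instead of "Access-Control-Allow-Origin: *" (the default that makes every
# error disappear), validate the Origin header against an explicit allowlist.
ALLOWED_ORIGINS = {"https://app.example.com"}

def cors_headers_for(origin: str) -> dict[str, str]:
    if origin in ALLOWED_ORIGINS:
        return {"Access-Control-Allow-Origin": origin}
    return {}  # no header: the browser refuses the cross-origin read
```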

The Real Cost: It's Not Technical Debt

Technical debt is a term for engineers. The real cost of insecure vibe-coded apps is measured in something much more tangible: people's money, people's data, and people's trust.

When a solo founder ships a vibe-coded SaaS and someone's Stripe API key gets exposed, real credit cards get charged by real scammers. When a health app leaks patient information because the AI didn't implement proper authorization, real people face real consequences. When a B2B tool exposes its entire customer database because the admin route was "protected" by simply hiding the link in the UI, real businesses lose real competitive information.

The founder moves on to the next project. The users deal with the fallout.

A security researcher who demonstrated a zero-click vulnerability in the Orchids vibe coding platform — one that gave him full remote access to a BBC journalist's laptop — had sent twelve warning messages to the company before going public. The company said they "possibly missed" the messages because their team was "overwhelmed."

Overwhelmed by building fast. Not by building safe.

"But I'll Just Ask the AI to Fix It"

This is the most dangerous assumption in vibe coding: that the same tool that created the vulnerability can fix it. But AI coding assistants don't have security context. They don't know your threat model. They don't understand what data is sensitive in your specific application. They optimize for one thing: making the code run.

Asking an AI to "make this secure" is like asking a construction worker who only knows how to pour concrete to also do the electrical wiring. They'll do something. It might even look right. But you probably don't want to flip that switch.

The research backs this up. A December 2025 analysis of 470 open-source pull requests found that AI co-authored code contained 1.7 times more major issues than human-written code, with security vulnerabilities appearing at 2.74 times the rate. AI-generated code now compiles and runs roughly 90% of the time, but its security hasn't improved at anything like that pace.

So What Should Non-Technical Builders Actually Do?

I'm not going to tell you to stop using AI to build things. That ship has sailed, and honestly, it shouldn't come back. AI coding tools are genuinely powerful, and they're only getting better. But if you're building something that other people will use — especially if they'll trust it with their data or their money — here's the minimum:

Don't ship what you don't understand. If you can't explain what your code does with user data, you're not ready to ship. Use AI to build. Use a human to review.

Get a security scan before you launch. Tools like GitGuardian and TruffleHog can automatically detect exposed secrets in your code. They're not perfect, but they catch the most obvious disasters. This takes minutes, not days.
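To make the idea concrete, here's a toy Python version of what these scanners look for. The two patterns are illustrative only; real tools like TruffleHog ship hundreds of detectors plus entropy analysis and live verification, so use them rather than this.

```python
import re

# Toy secret scanner: a couple of illustrative detector patterns only.
PATTERNS = {
    "stripe_like_key": re.compile(r"sk_(?:live|test)_[0-9a-zA-Z]{16,}"),
    "aws_access_key_id": re.compile(r"AKIA[0-9A-Z]{16}"),
}

def scan(source: str) -> list[str]:
    """Return the names of every pattern that matches the given source text."""
    return [name for name, rx in PATTERNS.items() if rx.search(source)]
```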

Treat your AI like an intern, not a senior engineer. It writes code fast. It doesn't write code safely. Every output needs review, especially anything involving authentication, database access, or payment processing.

Pay for a security review. If your app handles user data, a professional security audit is not optional — it's the cost of doing business. It's cheaper than a breach.

Use environment variables for secrets. Never hardcode API keys. Ever. Add .env to your .gitignore before your first commit. This is basic hygiene that AI tools consistently get wrong.
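For completeness, here's roughly what a `.env` loader does, sketched with only the Python standard library (most projects just use a library such as python-dotenv, which handles quoting and edge cases this skips):

```python
import os

def load_env(text: str) -> None:
    """Parse KEY=VALUE lines and put them into the process environment."""
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue  # skip blanks, comments, and malformed lines
        key, _, value = line.partition("=")
        os.environ.setdefault(key.strip(), value.strip())
```

The file itself never leaves your machine: `.env` sits in `.gitignore`, and production gets its values from the host's secret manager instead.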

The Bottom Line

Vibe coding has democratized software creation, and that's genuinely exciting. But democratizing creation without democratizing responsibility is how you get a million apps with exposed databases, leaked credentials, and users who never consented to being part of someone's learning curve.

The AI didn't fail these users. The people who shipped unreviewed code failed these users.

Build fast. But ship responsibly. The people using your app are counting on it.


A note from the author: I believe AI is the future of software development. These tools will continue to evolve, security practices will mature, and many of the problems described here will eventually be solved by better AI, better tooling, and better education. But "eventually" doesn't help the people whose data is exposed today. Being optimistic about where AI is going doesn't mean we should ignore where it currently falls short. If anything, being honest about today's limitations is how we get to a better tomorrow faster.


Ion Anghel is the founder of TEN INVENT, a software consultancy and product studio specializing in Laravel, Next.js, AWS, and AI-powered applications. With 20 years in tech, he builds things that work — and makes sure they're safe before they ship.

Originally published on teninvent.ro