Andrej Karpathy coined the term in early 2025: describe what you want, let the AI generate the code, keep going until it works. He called it vibe coding. Within months, an entire product category followed — Lovable, Bolt, v0, Replit — all promising working software from plain-language prompts.
The pitch is not wrong. A non-technical founder can now build and deploy a functional web app in a weekend. That is a genuine change in what is possible.
The question a real business has to answer is different: what did you actually build, and what happens next?
I. What vibe coding actually produces
The output of a vibe coding session is working code. It passes the visual test. The button does what the button is supposed to do.
What it does not reliably produce is secure code, documented code, or code that is easy to maintain as the product grows.
Independent benchmarks have measured bug density in AI-generated codebases at roughly 1.7 times that of equivalent human-authored code. That gap does not stay theoretical. It surfaces as unexpected failures under load, as data exposure that only appears in specific edge cases, and as authentication logic that works in testing but carries a quiet flaw no one noticed because no one read the implementation.
The developers building serious products with AI assistance understand this. The workflow that works — describe intent, generate a draft, review the output, understand what shipped — is not vibe coding in the original sense. It is AI-assisted development with a human still accountable for every line.
The version that gets non-technical founders into trouble is closer to the original promise: generate, run, repeat, ship.

II. The three risks that matter most
Security surface you do not control. AI agents generate code that handles authentication, session management, and data input. Recent audits of AI-built applications found recurring patterns of improper password handling and Cross-Site Scripting vulnerabilities. These are not exotic attack vectors. They are the basics. If no engineer reviewed the output with security in mind, the basics are often missing.
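To make the password-handling risk concrete, here is a hypothetical illustration in Python of the gap a reviewer looks for: the fast, unsalted hash an unreviewed generation might ship versus the salted key-derivation approach (PBKDF2, from the standard library) that a security review would insist on. The function names are invented for this sketch.

```python
import hashlib
import hmac
import secrets

# The kind of code that "works" in a demo: a fast, unsalted hash.
# Logins succeed, the button does its job, and every stored password
# is open to rainbow-table and brute-force attacks.
def store_password_naive(password: str) -> str:
    return hashlib.md5(password.encode()).hexdigest()

# What a reviewer would insist on instead: a random salt per password
# and a deliberately slow key-derivation function.
def store_password(password: str, iterations: int = 600_000) -> str:
    salt = secrets.token_hex(16)
    digest = hashlib.pbkdf2_hmac(
        "sha256", password.encode(), bytes.fromhex(salt), iterations
    ).hex()
    return f"{salt}${iterations}${digest}"

def verify_password(password: str, stored: str) -> bool:
    salt, iterations, digest = stored.split("$")
    candidate = hashlib.pbkdf2_hmac(
        "sha256", password.encode(), bytes.fromhex(salt), int(iterations)
    ).hex()
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(candidate, digest)
```

Both versions pass the visual test. Only one survives a breach of the database, and nothing in a vibe coding workflow guarantees which one you got.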
No documentation, no handoff. A vibe-coded application has no architecture documentation because no one wrote it — the AI generated the structure and moved on. That is manageable when the person who built it is still running it. It becomes a serious problem during a technical audit, during a funding round with technical due diligence, or when a developer is brought in to extend the product and has to reconstruct intent from code alone.
Maintenance compounds fast. Every new feature added through an AI agent increases the complexity of the existing code without necessarily increasing anyone's understanding of it. The common trajectory: the first few months feel productive. Then something breaks in a way that is hard to isolate, and the time spent patching starts to exceed the time spent building. The codebase becomes something the founder manages around rather than with.
III. Where vibe coding is genuinely useful
The risks above apply specifically to production systems carrying real user data, payment flows, or business-critical logic.
For prototyping, vibe coding is the right tool. Building a demo to validate a hypothesis before committing engineering resources is exactly what the technology is suited for. A landing page with a waitlist form. A clickable prototype for investor conversations. An internal automation that handles low-stakes data. These are reasonable applications.
The line is crossed when a prototype becomes the product without anyone reviewing what it is actually doing underneath.
Startups often cross that line without realising it. The MVP shipped on a deadline. It got traction. New features were added through the same workflow. Six months later, the codebase is running a real business and no one with engineering accountability has ever read it end to end.
IV. The EU compliance layer
The regulatory context in Europe adds a dimension that most vibe coding guides simply ignore.
The EU AI Act categorises certain AI-assisted applications as high-risk. High-risk systems require documentation — of the system's design, its data flows, its decision logic, and the controls in place. AI agents do not generate this documentation. The systems they produce will not pass a compliance review without it.
GDPR is the more immediate concern for most SMEs. Personal data handling — who has access, where it is stored, what the retention policy is, how it is encrypted in transit and at rest — needs to be explicit. In a vibe-coded application, these decisions were made by the model, not by a human with legal accountability. That distinction matters when a data subject files a complaint or a regulator asks for documentation.
For startups targeting enterprise clients, SOC 2 readiness is often a commercial requirement. Building toward SOC 2 on top of an undocumented, AI-generated codebase is possible but expensive. Rebuilding from scratch the compliance infrastructure that comes embedded in established platforms typically costs between €30,000 and €150,000.
V. What a responsible approach looks like
The technology is not the problem. The missing layer is accountability.
A startup that wants to use AI-assisted development responsibly needs three things in place:
An engineer who reads the output. Not line by line on every iteration, but with enough regularity to catch structural problems before they compound. This can be a fractional technical resource, a part-time CTO, or an external audit at defined intervals. It cannot be no one.
A basic security review before launch. Input validation, authentication flows, data storage, API exposure. These take hours to check before launch and potentially years to repair if skipped.
Documentation of what was built. Not exhaustive. Enough that a developer new to the codebase can understand its structure, its data flows, and the reasoning behind major decisions.
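The input-validation item on that checklist can be sketched in a few lines of Python. This is a hypothetical example, not a prescription: the field names and limits are invented, and the point is the discipline of rejecting anything an endpoint does not expect rather than trusting whatever the client sends.

```python
# Hypothetical signup endpoint: the review question is whether every
# field is checked for type, length, and expectedness before use.
ALLOWED_FIELDS = {"email", "name"}
MAX_LENGTHS = {"email": 254, "name": 100}

def validate_signup(payload: dict) -> list[str]:
    """Return a list of problems; an empty list means the payload passes."""
    errors = []
    # Reject fields the endpoint never asked for (e.g. a smuggled "role").
    unexpected = set(payload) - ALLOWED_FIELDS
    if unexpected:
        errors.append(f"unexpected fields: {sorted(unexpected)}")
    for field in ALLOWED_FIELDS:
        value = payload.get(field)
        if not isinstance(value, str) or not value.strip():
            errors.append(f"{field} is missing or not a string")
        elif len(value) > MAX_LENGTHS[field]:
            errors.append(f"{field} exceeds {MAX_LENGTHS[field]} characters")
    email = payload.get("email")
    if isinstance(email, str) and email and "@" not in email:
        errors.append("email is not a valid address")
    return errors
```

Checks like this take minutes to write and read. What the review establishes is whether they exist at all, on every endpoint, not just the ones someone happened to test.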
None of this eliminates the speed advantage of AI-assisted development. It just ensures that what ships is something the business actually controls.
VI. The question founders should ask
Before a vibe-coded application handles real user data, processes real payments, or becomes the foundation a business depends on, one question is worth asking: if an engineer I trusted were to review this codebase tomorrow, what would they find?
If the honest answer is "I don't know," that is the risk the business is currently carrying.
The speed that vibe coding offers is real. The cost of carrying an unknown security and compliance posture into a growing product is also real. Getting an independent technical assessment before those costs compound is not a sign of distrust in the tools — it is standard operating practice for any software that matters.
Batista Consulting runs technical audits for AI-first and vibe-coded startups across Europe. If you want an honest assessment of what your codebase is actually doing, the calendar is open: batistaconsulting.eu/contact

