What publishers should know about “vibe coding”

There hasn’t been a bigger story in software development this year than the rise of “vibe coding”. 

While AI-assisted coding has been around in the engineering community for years, adoption has drastically increased in 2025, and consumer tools like Lovable and Bolt now allow anyone to create interactive websites or apps with natural language prompts. 

This is “vibe coding”, and the market loves it! In July, Lovable hit $100m ARR in just 8 months, making it “the fastest-growing software startup in history”. In May, Cursor, another AI coding startup, raised $900m at a $10bn valuation. Google now says “well over 30%” of its code is AI-generated. 

And for publishers, the mind boggles at the possibilities: The app you’ve always wanted, without the cost. The interactive story that would have taken months and thousands of dollars to build.  

But just as publishers worth their salt wouldn’t risk their readers’ trust on AI-generated content or images, the same caution applies to code. The risks are just harder to spot.

How Vibe Coding Works

Code is text-based, making it a natural fit for large language models, which excel at recognising and generating text patterns. LLMs also benefit from the vast troves of code publicly available on platforms like GitHub and Stack Overflow, much of which shares standard, repetitive code and frameworks.

This is why vibe coding tools can be good at creating basic apps that largely rely on boilerplate code – things like user login systems, database connections, and common interfaces that follow well-established coding patterns.

Vibe coding’s explosive growth was sparked by the emergence of advanced reasoning LLMs – models that break a larger goal into smaller, manageable steps, mimicking a logical chain of thought that makes them more effective at solving multi-step problems.

As anyone who has used Lovable to create a quick app can attest, it’s one of the “Wow” moments in AI, and a hell of a lot of fun. 

However, the temptation and danger of vibe coding is that the output looks great, and for all intents and purposes, “works”. 

But start asking for more complex features, or consider making it public to the web, and things can go pear-shaped quickly.

Security Vulnerabilities

This is the big one. The models behind vibe coding tools can be trained on outdated code or libraries with known security vulnerabilities. The tools can also skip over security measures that protect against the dangers of the open web. These can include (see the code sketch after this list):

  • Input validation to prevent injection attacks that can allow malicious actors to manipulate your database or steal information.
  • Secure data handling, such as preventing sensitive data from being logged in plaintext, stored without encryption, or transmitted over insecure channels. 
  • Robust authentication, such as avoiding predictable password reset tokens or session management that lets attackers hijack user accounts.
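To make the first and last of those measures concrete, here is a minimal TypeScript sketch (assuming Node.js with the node-postgres library; the users table and helper names are invented for illustration) contrasting the shortcuts AI tools sometimes produce with the safer equivalents:

```typescript
import { Pool } from "pg";             // node-postgres database client
import { randomBytes } from "crypto";  // Node's built-in crypto module

const pool = new Pool(); // connection details come from environment variables

// UNSAFE: user input is concatenated straight into the SQL string.
// An email like "' OR '1'='1" rewrites the query (SQL injection).
async function findUserUnsafe(email: string) {
  return pool.query(`SELECT id, name FROM users WHERE email = '${email}'`);
}

// SAFER: a parameterised query keeps user input as data, never as SQL.
async function findUserSafe(email: string) {
  return pool.query("SELECT id, name FROM users WHERE email = $1", [email]);
}

// UNSAFE: a predictable reset token (user id plus timestamp) can be guessed.
function makeResetTokenUnsafe(userId: number): string {
  return `${userId}-${Date.now()}`;
}

// SAFER: 32 bytes from a cryptographically secure random source.
function makeResetTokenSafe(): string {
  return randomBytes(32).toString("hex");
}
```

Both versions “work” in a demo, which is exactly why the unsafe ones slip through unnoticed.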

Stack Overflow’s 2025 Developer Survey found that while 80% of developers are using AI tools in their workflows, trust in their accuracy dropped more than 10% from last year.

In May, Replit, a competitor of Lovable, said it had found that 170 of the 1,645 apps featured on the Lovable website had critical security flaws. According to Semafor, the apps “allowed anyone to access information about the site’s users, including names, email addresses, financial information and secret API keys.”

“We leverage AI tools, but maintain a strict ‘assistance, not automation’ policy. Our engineers are trained to use tools like Cursor to accelerate debugging and scaffold code, but never to ‘vibe code’, which can introduce insecure code and vulnerabilities.”

Tim Sheehan, Director of Technology, The Code Company

The risks for enterprise data can come even earlier in a vibe coding journey. Customer data or proprietary code shared in chats can be used to train the models behind many vibe coding platforms. In August 2025, Anthropic updated its Consumer Terms and Privacy Policy, meaning Claude Code users would have to proactively opt out of having their chats and coding sessions used to train Anthropic’s models.

While developers are creating processes to reduce security risks, consumer vibe coding tools still pose unacceptable risks for publishers and enterprise use.

Tech Debt

The ease of vibe coding can turbocharge tech debt – the future cost and time it will take to maintain and iterate on suboptimal code and software infrastructure. 

Consumer vibe coding tools prioritise fast solutions to get features working, without the context of the wider codebase or how features fit into a product’s ecosystem. They might solve each individual request perfectly while creating a patchwork of incompatible solutions that break easily, become harder to modify, and turn into a maintenance nightmare down the line.

Professional developers constantly refactor and clean up code as they go, while AI often just adds layers: One feature uses this database approach, another uses that authentication method, and suddenly you have a digital Frankenstein.
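To picture that patchwork, here is a contrived TypeScript sketch (all names and endpoints are invented): two features in the same codebase that each “work” in isolation but solve the same kind of problem in incompatible ways:

```typescript
import { Pool } from "pg"; // node-postgres client

const pool = new Pool();

// Feature A, generated in one session: reads straight from the database.
export async function getArticleViews(articleId: number): Promise<number> {
  const res = await pool.query(
    "SELECT views FROM articles WHERE id = $1",
    [articleId],
  );
  return res.rows[0]?.views ?? 0;
}

// Feature B, generated weeks later: reinvents data access through an
// internal HTTP API, with its own error handling and its own data shape.
export async function getArticleLikes(articleId: number): Promise<number> {
  const res = await fetch(`https://api.example.com/articles/${articleId}`);
  const body = (await res.json()) as { likes?: number };
  return body.likes ?? 0;
}
```

Each function passes its own test, but the codebase now has two data-access strategies to secure, monitor, and maintain.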

As highlighted by MIT, studies showing huge productivity gains for developers using AI were “conducted in controlled environments where programmers completed isolated tasks — not in real-world settings, where software must be built atop complex existing systems.”

In short, don’t skip the fundamentals developed over decades of software engineering: careful discovery and infrastructure planning. Fast prototyping is one thing (and a real benefit of considered vibe coding), but fast code for enterprise-grade software out in the wild cannot be prioritised over long-term stability.

Agents Gone Wild

AI agents can act autonomously and access tools and data across a user’s digital environment, and they are fundamental to many vibe coding tools. The risks of this are obvious, and they are only compounded in the enterprise.

While many agents ask for permission before making changes, things can still go awry. In July 2025, Replit’s CEO apologised after its AI agent deleted a user’s live production database, apparently ignoring directions during a code freeze and creating fake data to replace the lost records of 1,206 executives and 1,196 companies. While the data was eventually restored, Replit, which raised funds at a $3B valuation in September 2025, said it would implement changes to prevent it from happening again.

This underscores the fundamental risk of AI acting in isolation, without understanding the full context of a business or the broader implications of its actions. In a worst-case scenario, an agent might “optimise” your database by deleting what it perceives as redundant data, or “fix” a security issue by removing authentication entirely.

Enterprise software has complex interdependencies such as hosting accounts, payment systems, user data and a broad range of interconnected code repositories, proprietary code and SaaS tools. Vibe coding tools are currently not equipped to take that complexity into account. 
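One safeguard worth understanding here is a human-approval gate on destructive actions, so an agent cannot touch production systems without sign-off. A minimal TypeScript sketch, with hypothetical tool names and interfaces (real agent frameworks differ):

```typescript
// A tool call as an agent might propose it (hypothetical shape).
type ToolCall = { name: string; args: Record<string, unknown> };

// Actions that must never run without explicit human approval.
const DESTRUCTIVE = new Set(["drop_table", "delete_rows", "deploy_to_prod"]);

async function executeWithGuard(
  call: ToolCall,
  run: (call: ToolCall) => Promise<string>,    // actually performs the action
  confirm: (msg: string) => Promise<boolean>,  // asks a human to approve
): Promise<string> {
  if (DESTRUCTIVE.has(call.name)) {
    const approved = await confirm(
      `Agent wants to run "${call.name}" with ${JSON.stringify(call.args)}. Allow?`,
    );
    if (!approved) return "Blocked: human approval was denied.";
  }
  return run(call);
}
```

A deny-list like this is only a starting point; incidents like Replit’s show why enterprise setups also need environment separation, so an agent never holds production credentials at all.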

STAY TUNED: Next in this series: How Publishers Can Safely Harness Vibe Coding Tools

How The Code Company Approaches AI Safety & Usage

The Code Company leverages AI tools responsibly, governed by strict data privacy and security protocols to enhance development without introducing risk.

Our AI adoption is guided by a strict data-privacy-first principle. We exclusively use enterprise-grade tools that guarantee all proprietary code and customer data are completely excluded from model training, ensuring the absolute confidentiality and security of intellectual property.

We maintain a strict ‘assistance, not automation’ policy for AI in development. Engineers are trained to use tools like Cursor to accelerate debugging and scaffold code, rather than for unassisted generation (or “vibe coding”). This approach prevents the introduction of insecure, AI-generated code and fosters genuine engineering expertise.

To ensure AI-assisted output aligns with our high standards, our engineering team has developed custom rulesets within our IDE. These proprietary rules enforce our specific coding conventions, security best practices, and performance benchmarks on any generated code, ensuring consistency and quality.
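As a simplified illustration of what such a ruleset can look like (a generic sketch, not our production rules; Cursor reads plain-text rules files such as .cursorrules from a project root):

```text
# Example project rules for AI-assisted code (hypothetical)
- Use parameterised queries for all database access; never interpolate input into SQL.
- Never log credentials, API keys, or personally identifiable information.
- Every new endpoint requires authentication middleware and input validation.
- Match the existing code style: TypeScript strict mode, repo ESLint config.
- Prefer the project's existing data-access layer over introducing new clients.
```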

All AI tool usage is subject to direct oversight to ensure compliance with our security and development policies. Engineering leadership actively monitors usage patterns, providing a layer of governance that guarantees our teams are leveraging these powerful tools responsibly and effectively.

Ben May

Ben is Managing Director of The Code Company. He is passionate about working with publishers on clever and innovative ways to solve complex problems. He works with The Code Company team on all projects, bringing his perspective and problem-solving skills to deliver great outcomes.