Thousands of Vibe-Coded Apps Exposing Corporate, Personal Data: RedAccess
2026-05-07 | Source: securityboulevard.com

Researchers with an Israeli cybersecurity startup who were examining the trend toward shadow AI reportedly discovered that AI tools developers are using to quickly develop software – known as “vibe coding” – are leaking significant amounts of personal and corporate data.

In interviews with Axios and WIRED, RedAccess CEO Dor Zvi said that the researchers found about 380,000 publicly accessible applications and other assets that developers had created with tools from Lovable, Base44, Netlify, and Replit, and that about 5,000 of them contained sensitive corporate information.

Axios verified a number of such exposed applications, including a shipping company app that laid out which vessels were going to which ports, internal financial information for a bank in Brazil, and full and unredacted customer service conversations for a cabinet supplier.

Customer data and personally identifiable information (PII) were also exposed, including patient conversations at a long-term care facility, a personal app created to help plan a Belgium vacation that included hotel and dinner details, and an app a hospital used to hold summaries of conversations between doctors and patients as well as patient complaints.

Security Worries About Vibe Coding

The research puts another spotlight on the growing security concerns regarding agentic AI and such practices as vibe coding.

On the surface, it’s easy to understand the popularity of vibe coding. As Checkmarx researchers noted in a blog post, it allows essentially anyone to create apps with a few prompts, without knowing anything about coding.

“It lowers the barrier to entry, increases development speed, and helps teams prototype and ship faster,” the Checkmarx researchers wrote. “But the same acceleration that makes vibe coding attractive also creates new security pressure. AI can introduce insecure logic, unsafe dependencies, weak access controls, or exposed secrets at a pace that quickly outstrips manual review. In other words, the more software is created at machine speed, the more important vibe coding security becomes.”

The risks vary, from unsafe configurations to reduced auditability and governance to – in this case – insecure AI-generated code.
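The leak classes Checkmarx names can be made concrete. The sketch below is hypothetical Python, not any vendor's actual generated code: it contrasts two patterns often attributed to AI-generated apps (a secret hardcoded into shipped source, and an endpoint handler that returns data with no ownership check) with safer equivalents.

```python
import os

# Pattern 1: a hardcoded secret ends up in the deployed bundle and in any
# public repository the tool creates.
API_KEY = "sk-test-1234"  # insecure: shipped with the app's source

# Safer: read the secret from the environment at runtime, so it never
# appears in generated source or a public repo.
def get_api_key() -> str:
    key = os.environ.get("API_KEY")
    if not key:
        raise RuntimeError("API_KEY not configured")
    return key

# Pattern 2: a data-fetch handler with no access control.
RECORDS = {"acct-1": {"owner": "alice", "balance": 1200}}

def fetch_record_insecure(record_id: str) -> dict:
    # insecure: anyone who can reach the URL gets the data
    return RECORDS[record_id]

def fetch_record(record_id: str, user: str) -> dict:
    # safer: verify the caller owns the record before returning it
    record = RECORDS[record_id]
    if record["owner"] != user:
        raise PermissionError("not your record")
    return record
```

The point of the contrast is that both fixes are one or two lines, but nothing in a prompt-driven workflow forces them to exist, which is the "security pressure" the researchers describe.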

‘Risks are Not Theoretical’

“Vibe coding security risks are not theoretical,” they wrote. “When AI generates code without strong guardrails, teams can inherit the same classes of issues security teams already know well, only faster and at greater scale.”

RedAccess’ findings come less than two weeks after a coding agent – Cursor running Anthropic’s Claude Opus 4.6 model – deleted PocketOS’ entire production database and all volume-level backups through a single API call to Railway, the company’s infrastructure provider. And it did so in nine seconds.

RedAccess CEO Zvi told Axios that “the concept of people just creating something that simply, and using it in production … on behalf of their company without getting any permission — there is no limit.”

He added that his mother vibe codes with Lovable and “I don’t think she will think about role-based access.”
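For readers unfamiliar with the term, role-based access control is only a few lines of logic: users map to roles, roles map to permitted actions, and every request is checked against that mapping. The sketch below is a minimal, hypothetical illustration (the names and data are invented, not drawn from any vibe-coding tool).

```python
# Minimal role-based access control (RBAC) sketch.
# Each user has one role; each role has a set of permitted actions.
ROLES = {"alice": "admin", "bob": "viewer"}
PERMISSIONS = {
    "admin": {"read", "write", "delete"},
    "viewer": {"read"},
}

def allowed(user: str, action: str) -> bool:
    """Return True only if the user's role permits the action."""
    role = ROLES.get(user)
    return role is not None and action in PERMISSIONS.get(role, set())
```

An app that consults a check like this before serving data would not expose, say, customer-service transcripts to anonymous visitors; the apps RedAccess found apparently had no such gate.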

Settings Made Apps Accessible by Default

According to Axios, RedAccess researchers found that privacy settings on some vibe-coding tools made apps publicly accessible by default, with the responsibility falling to users to manually change them to private if desired.

Zvi added that many of the applications were indexed by Google and other search engines, which means anyone browsing the internet could come across them.

“The end result is that organizations are actually leaking private data through vibe-coding applications,” Zvi told WIRED. “This is one of the biggest events ever where people are exposing corporate or other sensitive information to anyone in the world.”

Pushing Back

The vibe-coding tool makers pushed back, with executives from each arguing that they take reports about such situations seriously and that the existence of such publicly available apps doesn’t necessarily mean that there was a breach or security flaw.

“Replit allows users to choose whether apps are public or private,” Replit CEO Amjad Masad wrote in a post on X. “Public apps being accessible on the internet is expected behavior. Privacy settings can be changed at any time with a single click.”

Masad and executives with the other AI tool makers also criticized how RedAccess alerted them to the issue. He wrote that the cybersecurity firm contacted Replit less than 24 hours before talking with the media, and spokespeople for both Base44 and Lovable told Axios that RedAccess’ reports to the companies did not include the URLs that would have helped them investigate and verify the findings.

“Vibe Coding is a rapidly developing space, and we take our responsibility to both provide tools to create secure apps and educate our customers very seriously,” Masad wrote, noting that in recent weeks, Replit had released two security products, Security Agent and Auto-Protect.

Source: https://securityboulevard.com/2026/05/thousands-of-vibe-coded-apps-exposing-corporate-personal-data-redaccess/