everyone’s talking about YC W26, the demos, the funding, how fast teams are shipping
but something I don’t see being discussed much is what’s actually inside these codebases
a lot of startups are now building insanely fast with AI
last year there were reports that a big chunk of projects were heavily AI-generated
that’s not even a criticism, it’s just how things are done now
but the security side feels like it hasn’t caught up at all
some stats I came across recently:
around 45% of AI-generated code failing security tests
XSS protections failing more often than not
log injection attempts succeeding in most cases
thousands of vibe-coded apps with exposed secrets and PII sitting there
and we’re already starting to see real breaches from this, not just theory
the pattern feels pretty consistent:
AI helps teams ship fast
scanners generate loads of findings
developers ignore most of it
vulnerabilities make it to prod
problems show up later
curious how people here are thinking about this
are teams actually taking security seriously when building fast with AI, or is it mostly just ship now and deal with it later?