found this breakdown that references Radware's research on AI-generated code security.
key findings:
AI errors skew disproportionately high-severity (injection, auth bypass) versus human errors, which skew low-severity (typos, missed null checks)
"hallucinated abstractions" — AI invents fake helper functions that look professional but are fundamentally broken
"slopsquatting" — attackers registering hallucinated package names with malicious payloads
"ouroboros effect" — AI models training on AI-generated flawed code, steadily degrading the security baseline over time
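the slopsquatting point is the most directly actionable one. a minimal defensive sketch: before installing anything an AI suggested, check requested package names against a team-vetted allowlist instead of trusting the suggestion blindly. the allowlist contents and the flagged package name below are hypothetical examples, not from the research.

```python
# Sketch of a pre-install slopsquatting check: flag any requested
# package name that is not on a vetted allowlist. Allowlist and
# example names are hypothetical.

VETTED = {"requests", "numpy", "flask"}  # packages your team has reviewed

def flag_unvetted(requirements: list[str]) -> list[str]:
    """Return requested package names that are not on the allowlist."""
    suspicious = []
    for line in requirements:
        # strip version pins like "requests==2.31.0" down to the bare name
        name = line.split("==")[0].split(">=")[0].strip().lower()
        if name and name not in VETTED:
            suspicious.append(name)
    return suspicious

reqs = ["requests==2.31.0", "flask", "fastjsonutils"]  # last one is made up
print(flag_unvetted(reqs))  # the unrecognized name gets flagged
```

a real setup would resolve against an internal registry or a lockfile rather than a hardcoded set, but the shape of the check is the same.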
here's the [full case study]
the framing around maintainer burnout is interesting too — open source is getting flooded with AI PRs that take 12x longer to review than to generate.