we do security reviews for pretty much every feature that touches user data or external APIs
sounds good in theory, but in practice it's created this weird dynamic
the reviews happen for everything, so they become a bottleneck
simple stuff like "add a new field to the user profile API" gets the same review process as "integrate with a third-party payment processor"
so teams start finding ways around it
they'll break big changes into smaller PRs that individually don't trigger review requirements
or they'll implement the risky parts first without the security flag, then add the security-sensitive bits in a follow-up that looks minor
the result is that we're spending review cycles on low-risk changes while the actually dangerous stuff gets architected to avoid the review process entirely
it's like having airport security that makes everyone take off their shoes but waves through people with diplomatic passports
been thinking there has to be a better way to do this
maybe reviews should be based on actual risk factors rather than just "does this touch X system"
or maybe the review process itself needs to be way faster for obvious cases
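one way to picture the risk-factor idea as a concrete triage rule (all the factor names and weights here are made up for illustration, just a sketch of the shape):

```python
# hypothetical risk-factor weights -- these names and numbers are
# illustrative, not a real policy
RISK_FACTORS = {
    "new_external_dependency": 5,
    "handles_payment_data": 5,
    "changes_authn_or_authz": 4,
    "new_api_endpoint": 3,
    "touches_pii": 3,
    "adds_field_to_existing_api": 1,
}

def review_track(change_factors):
    """Sum the risk of a change's factors and pick a review track.

    Unknown factors score 0, so the function degrades safely if a
    change is tagged with something not in the table.
    """
    score = sum(RISK_FACTORS.get(f, 0) for f in change_factors)
    if score >= 5:
        return "full_security_review"
    if score >= 2:
        return "lightweight_checklist"
    return "auto_approve"

# the profile-field change from above sails through, the payment
# integration gets the full treatment
print(review_track(["adds_field_to_existing_api"]))   # -> auto_approve
print(review_track(["new_external_dependency",
                    "handles_payment_data"]))          # -> full_security_review
```

the nice property is that scoring is additive across a change, so splitting work into smaller PRs doesn't automatically dodge review the way a binary "touches X system" flag does, as long as the factors are tagged per logical change rather than per PR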
how do other teams handle security reviews without making them a universal slow-down?
do you have different review tracks for different risk levels, or does everything go through the same process?