Recently our team has written several papers and blogs focused on securing AI. What you will not see in these papers is anything to do with robot rebellions or other such long-term, hypothetical threats. We also don’t touch on responsible AI and AI ethics because, frankly, there are many (and I mean … MANY!) experts on those topics here, and they’re not us. However, we do cover the challenges and problems real organizations are starting to face today in their AI projects.
Below is my curated list of favorites, with quick explanations and notes on who each one is best for. Enjoy!
Papers:
- “Securing AI: Similar or Different?” (summary blog) covers the similarities and differences between securing AI and securing traditional systems (best for security leaders and their teams who are about to be tasked with securing AI use at their companies)
- “Google Cloud’s Approach to Trust in Artificial Intelligence” (summary blog) discusses some elements that make AI more trustworthy (best for those curious about trust in AI)
- “A quick guide to implementing the Secure AI Framework (SAIF)” (summary blog) describes the Secure AI Framework (SAIF) and outlines six key elements to consider when implementing AI (best for security professionals wanting to learn about SAIF)
- “Why Red Teams Play a Central Role in Helping Organizations Secure AI Systems” (summary blog) covers the new (well, not new for us here, so “new-ish”) art and practice of AI red teaming, which is very different from traditional security red team work (best for both security people who want to learn how to red team AI and AI developers learning about security testing)
- “Best Practices for Securely Deploying AI on Google Cloud” (summary blog) covers our new AI risk taxonomy (!) and a few practices for securing AI workloads on Google Cloud (best for those who plan to use Google Cloud for AI work and need to secure it)
Blogs:
- “Gen AI governance: 10 tips to level up your AI program” offers best practices for governing the use of generative AI, including identifying stakeholders and defining guiding principles.
- “Here are 5 gen AI security terms busy business leaders should know” defines five key generative AI security terms that busy business leaders should understand to navigate the potential risks and benefits of this technology.
- “From turnkey to custom: Tailor your AI risk governance to help build confidence” covers shared responsibility for securing AI across several scenarios (very useful!).
- “Spotlighting ‘shadow AI’: How to protect against risky AI practices” covers the differences between “enterprise” AI and consumer AI (but mostly gen AI) use in the organization.
- “Securing AI: What You Should Know” (a byline) touches upon the use of SAIF in the real world.