Cybersecurity Snapshot: AI Security Roundup: Best Practices, Research and Insights
November 29, 2024 | www.tenable.com

In this special edition, we’ve selected the most-read Cybersecurity Snapshot items about AI security this year. ICYMI the first time around, check out this roundup of data points, tips and trends about secure AI deployment; shadow AI; AI threat detection; AI risks; AI governance; AI cybersecurity uses — and more.

Here are six things that’ll help you better understand AI security.

1 - Best practices for secure AI system deployment

Looking for tips on how to roll out AI systems securely and responsibly? The guide “Deploying AI Systems Securely” has concrete recommendations for organizations setting up and operating AI systems on-premises or in private cloud environments. 

“Deploying AI systems securely requires careful setup and configuration that depends on the complexity of the AI system, the resources required (e.g., funding, technical expertise), and the infrastructure used (i.e., on premises, cloud, or hybrid),” reads the 11-page document, jointly published by cybersecurity agencies from the Five Eyes Alliance countries: Australia, Canada, New Zealand, the U.K. and the U.S.

The agencies recommend that organizations developing and deploying AI systems incorporate the following: 

  • Ensure a secure deployment environment: Confirm that the organization’s IT infrastructure is robust, with good governance, a solid architecture and secure configurations in place. 
  • Require a threat model: Have the primary developer of the AI system — whether it’s a vendor or an in-house team — provide a threat model that can guide the deployment team in implementing security best practices, assessing threats and planning mitigations.
  • Promote a collaborative culture: Encourage communication and collaboration among the organization’s data science, IT infrastructure and cybersecurity teams to address any risks or concerns effectively.
  • Design a robust architecture: Implement security protections at the boundaries between the IT environment and the AI system; address identified blind spots; protect proprietary data sources; and apply secure design principles, including zero trust frameworks.
  • Harden configurations: Follow best practices for the deployment environment, such as using hardened containers for running ML models; applying allowlists on firewalls; encrypting sensitive AI data; and employing strong authentication. (A minimal sketch of two of these controls follows this list.)
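To make the last item more concrete, here is a minimal Python sketch of two of those controls: encrypting an AI artifact at rest and enforcing a deny-by-default egress allowlist. The file contents, allowlist entries and key handling are illustrative assumptions for this example, not recommendations from the guide itself.

```python
# Minimal sketch, assuming the "cryptography" package is installed
# (pip install cryptography). All names and values are illustrative.
from urllib.parse import urlparse

from cryptography.fernet import Fernet

# 1. Encrypt sensitive AI data (here, a stand-in for serialized model
#    weights) before it is written to shared storage.
key = Fernet.generate_key()      # in practice, fetch from a secrets manager
cipher = Fernet(key)
model_bytes = b"\x00" * 16       # stand-in for real model weights
encrypted = cipher.encrypt(model_bytes)
assert cipher.decrypt(encrypted) == model_bytes

# 2. Deny-by-default egress: allow outbound calls only to approved hosts.
ALLOWED_HOSTS = {"api.approved-vendor.example"}  # hypothetical allowlist

def is_allowed(url: str) -> bool:
    """Return True only if the URL's host is explicitly allowlisted."""
    return urlparse(url).hostname in ALLOWED_HOSTS

print(is_allowed("https://api.approved-vendor.example/v1/chat"))  # True
print(is_allowed("https://unknown-host.example/upload"))          # False
```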

For more information about deploying AI systems securely:

2 - Dealing with the “shadow AI” problem 

As organizations scale up their AI adoption, they must closely monitor the usage of unapproved AI tools by employees — an issue known as “shadow AI.”

So how do you identify, manage and prevent shadow AI? The Cloud Security Alliance’s “AI Organizational Responsibilities: Governance, Risk Management, Compliance and Cultural Aspects” white paper offers recommendations to tackle shadow AI, including:

  • Creating a comprehensive inventory of AI systems
  • Conducting gap analyses to spot discrepancies between approved and actual AI usage
  • Implementing ways to detect unauthorized AI tools (see the detection sketch after this list)
  • Establishing effective access controls
  • Deploying monitoring techniques
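As one illustration of the detection bullet above, here is a minimal sketch that scans web-proxy logs for traffic to known generative-AI domains that aren’t sanctioned. The domain lists, log format and column names are assumptions made for the example, not part of the CSA paper.

```python
# Minimal sketch, assuming a CSV proxy log with "user" and "domain" columns.
# The domain lists below are illustrative, not authoritative.
import csv
from collections import Counter

KNOWN_AI_DOMAINS = {"chat.openai.com", "gemini.google.com", "claude.ai"}
APPROVED_AI_DOMAINS = {"chat.openai.com"}  # hypothetical sanctioned tool

def find_shadow_ai(log_path: str) -> Counter:
    """Count per-user hits to AI domains that are not on the approved list."""
    hits = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            domain = row["domain"].strip().lower()
            if domain in KNOWN_AI_DOMAINS and domain not in APPROVED_AI_DOMAINS:
                hits[(row["user"], domain)] += 1
    return hits

# Usage: print(find_shadow_ai("proxy_log.csv").most_common(10))
```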


“By focusing on these key areas, organizations can significantly reduce the risks associated with shadow AI, ensuring that all AI systems align with organizational policies, security standards, and regulatory requirements,” the white paper reads.

For example, to create an inventory that offers the required visibility into AI assets, the document explains the different elements each record should have (a minimal data-model sketch appears after the list), such as:

  • The asset’s description
  • Information about its AI models
  • Information about its data sets and data sources
  • Information about the tools used for its development and deployment
  • Detailed documentation about its lifecycle, regulatory compliance, ethical considerations and adherence to industry standards
  • Records of its access control mechanisms
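As a sketch of how those elements might be captured in a structured inventory entry, consider the following Python data model. The field names and sample values are a loose, illustrative mapping of the white paper’s list, not a schema the CSA defines.

```python
# Minimal sketch of one inventory record. Field names and sample values
# are illustrative assumptions, not a CSA-defined schema.
from dataclasses import dataclass, field

@dataclass
class AIAssetRecord:
    description: str                                          # what the asset does
    models: list[str] = field(default_factory=list)           # AI models used
    data_sources: list[str] = field(default_factory=list)     # data sets and sources
    tooling: list[str] = field(default_factory=list)          # dev/deployment tools
    lifecycle_docs: str = ""                                  # lifecycle, compliance, ethics notes
    access_controls: list[str] = field(default_factory=list)  # access-control records

record = AIAssetRecord(
    description="Customer-support chatbot (hypothetical)",
    models=["gpt-4o via vendor API"],
    data_sources=["support-tickets-2023 (internal)"],
    tooling=["LangChain", "GitHub Actions"],
    lifecycle_docs="Last review 2024-06; DPIA on file",
    access_controls=["RBAC: support-engineering group only"],
)
print(record.description)
```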

Meanwhile, the report “Oh, Behave! The Annual Cybersecurity Attitudes and Behaviors Report 2024-2025” from the National Cybersecurity Alliance (NCA) adds insight into the issue of employee AI use, finding that almost 40% of employees have fed sensitive work information to AI tools without their employers’ knowledge.

These findings, according to the NCA, highlight why organizations must urgently adopt AI usage policies and offer AI security training so employees understand the risks of using this technology.

Chart: “Have you ever shared sensitive work information without your employer’s knowledge?”

(Source: “Oh, Behave! The Annual Cybersecurity Attitudes and Behaviors Report 2024-2025” study by the National Cybersecurity Alliance, September 2024)

For more information about AI risks to cybersecurity, check out these Tenable blogs:

3 - How AI boosts real-time threat detection

AI has greatly impacted real-time threat detection by analyzing large datasets at unmatched speeds and identifying subtle, often overlooked changes in network traffic or user behavior. For example, AI can detect when a system atypically accesses sensitive data. Traditional tools may miss these nuanced anomalies, but AI systems are adept at spotting them.

“For security, GenAI can revolutionize the field if applied correctly, especially when it comes to threat detection and response. It enhances efficiency and productivity by swiftly processing and delivering critical information when it matters most,” Nicholas Weeks, a Tenable senior product marketing manager, wrote in a blog post.

One of AI's significant advantages in threat detection is its ability to be proactive. AI-powered systems continuously refine their algorithms as new malware strains and attack techniques emerge, learning from each event and integrating new insights into their threat detection mechanisms. This allows them to respond to both known and unknown threats more effectively than traditional, static, signature-based tools.
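To make the anomaly-detection idea concrete, here is a minimal sketch using scikit-learn’s IsolationForest to flag a session whose access pattern deviates from a learned baseline. The two features and the synthetic data are assumptions made for illustration, not a description of any vendor’s detector.

```python
# Minimal sketch, assuming scikit-learn is installed (pip install scikit-learn).
# Features and data below are synthetic and purely illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Features per session: [requests per hour, distinct sensitive files touched]
normal_sessions = rng.normal(loc=[50, 2], scale=[10, 1], size=(500, 2))
suspicious_session = np.array([[300.0, 40.0]])  # bulk access to sensitive data

model = IsolationForest(contamination=0.01, random_state=0).fit(normal_sessions)
print(model.predict(suspicious_session))  # -1 => flagged as anomalous
```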


"There has been automation in threat detection for a number of years, but we're also seeing more AI in general. We're seeing the large models and machine learning being applied at scale," Josh Schmidt, partner in charge of the cybersecurity assessment services team at BPM, a professional services firm, told TechTarget.

In addition to monitoring internal network behavior, AI systems can more comprehensively analyze external sources of intelligence like RSS feeds, cybersecurity forums and global threat data. This wide-reaching capability helps AI gather actionable insights and recommend defense strategies that are tailored to current attack trends. For example, AI can flag a spike in phishing attacks targeting specific industries and suggest measures to counter these emerging threats. 

Additionally, as AI-generated phishing lures become nearly impossible for humans to detect, researchers and operators are turning to AI-based systems to assess whether an email was AI-generated, looking for subtle telltale differences compared with legitimate, human-written emails.
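One simple (and admittedly crude) way to approach such detection is a supervised text classifier trained on emails labeled human-written versus AI-generated. The tiny inline dataset below is purely illustrative; real detectors rely on far larger labeled corpora and subtler stylometric signals.

```python
# Minimal sketch of a human-vs.-AI email classifier. The sample emails and
# labels are illustrative assumptions, not real training data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = [
    "hey, running late, can u push our sync to 3?",                    # human
    "quick q - did the vendor invoice ever land?",                     # human
    "I hope this message finds you well. Kindly verify your account.", # AI-like
    "Please find attached the requested documentation for your review.",  # AI-like
]
labels = [0, 0, 1, 1]  # 0 = human-written, 1 = AI-generated (assumed)

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(emails, labels)

# Probability that a new message is AI-generated
print(clf.predict_proba(["Kindly review the attached invoice promptly."])[:, 1])
```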

For more information about ways in which AI can help boost cybersecurity programs:

4 - New database aims to round up all AI risks

Finding it hard to track all the cyber risks impacting AI systems? Check out the Massachusetts Institute of Technology’s AI Risk Repository, which aims to consolidate in a single place all risks associated with the use of artificial intelligence.

To compile the database’s initial set of 700-plus risks, MIT analyzed 43 existing AI risk frameworks, and found that even the most comprehensive framework overlooked about 30% of all risks currently listed in the database.

“Since the AI risk literature is scattered across peer-reviewed journals, preprints, and industry reports, and quite varied, I worry that decision-makers may unwittingly consult incomplete overviews, miss important concerns, and develop collective blind spots,” project leader and MIT postdoctoral researcher Peter Slattery said in a statement.

The AI Risk Repository’s risk domains include:

  • AI system safety, failures, and limitations
  • Socioeconomic and environmental harms
  • Discrimination and toxicity
  • Privacy and security
  • Malicious actors and misuse

The risk domains are further subdivided into 23 subdomains. The AI Risk Repository is a “living database” that’ll be expanded and updated, according to MIT.

Meanwhile, the January publication from the U.S. National Institute of Standards and Technology (NIST) “Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations (NIST.AI.100-2)” aims to help AI developers and users understand the types of attacks their AI systems can be vulnerable to, as well as ways to mitigate these threats.

Specifically, the publication zeroes in on four attack types:

  • Evasion attacks, which focus on altering an input to trick the AI system into responding erratically to it, such as tampering with a road stop sign to confuse an autonomous vehicle
  • Poisoning attacks, in which corrupted data is fed to an AI system during its training phase, so that its output is erratic, inaccurate or inappropriate (illustrated in the sketch after this list)
  • Privacy attacks, which are launched during an AI system’s deployment and attempt to uncover confidential training data to then misuse the information
  • Abuse attacks, in which incorrect information is loaded into a legitimate but compromised source of data used by the AI system
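To illustrate the poisoning category referenced above, here is a minimal sketch that flips a fraction of training labels and compares the resulting model’s accuracy against a cleanly trained one. The dataset and model are illustrative choices for the demonstration, not part of the NIST taxonomy.

```python
# Minimal sketch of data poisoning via label flipping, using synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clean_model = LogisticRegression().fit(X_tr, y_tr)

# Attacker flips 30% of the training labels before the model is trained.
rng = np.random.default_rng(0)
poisoned_y = y_tr.copy()
idx = rng.choice(len(poisoned_y), size=int(0.3 * len(poisoned_y)), replace=False)
poisoned_y[idx] = 1 - poisoned_y[idx]
poisoned_model = LogisticRegression().fit(X_tr, poisoned_y)

print("clean accuracy:   ", clean_model.score(X_te, y_te))
print("poisoned accuracy:", poisoned_model.score(X_te, y_te))
```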

Figure: Taxonomy of attacks on generative AI systems

(Source: NIST’s “Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations (NIST.AI.100-2)” document, January 2024)

For more information about protecting AI systems from cyberattacks:

5 - Why cybersecurity pros are warming up to AI’s potential

A majority of cybersecurity professionals feel cautiously hopeful about artificial intelligence’s potential for strengthening their organizations’ cyber defenses, while also recognizing AI’s risks and adoption obstacles.

That’s according to a global survey of almost 2,500 IT and security professionals conducted by the Cloud Security Alliance (CSA).

“While there’s optimism about AI’s role in enhancing security, there’s also a clear recognition of its potential misuse and the challenges it brings,” reads the “State of AI and Security Survey Report,” which was commissioned by Google.

Specifically, 63% of respondents said AI can potentially boost their organizations’ cybersecurity processes. Only 12% felt the opposite way. The rest had no opinion.

Already, 22% of polled organizations use generative AI for security. More than half (55%) plan to use it within the next year, with the top use cases being rule creation, attack simulation and compliance monitoring. C-level and board support is driving generative AI adoption.

Furthermore, 67% have tested AI for security purposes, and 48% feel either “very” or “reasonably” confident in their organizations’ ability to use AI for security successfully.

Chart: “What are your desired outcomes when it comes to implementing AI in your security team?”

(Source: Cloud Security Alliance’s “State of AI and Security Survey Report,” April 2024)

Meanwhile, in a commissioned study conducted by Forrester Consulting on behalf of Tenable in October 2023, 44% of IT and security leaders polled said they were either “extremely confident” or “very confident” about their ability to use generative AI to enhance their organization’s cybersecurity strategy.


In addition, 68% of respondents showed some level of interest in using GenAI to align IT/security goals with business goals; and a similar number — 67% — showed interest in using it to increase or improve the way their organization practices preventive cybersecurity.


To get more details, check out the CSA report’s announcement “More Than Half of Organizations Plan to Adopt Artificial Intelligence (AI) Solutions in Coming Year” and the full 33-page report, “State of AI and Security Survey Report.”

For more information about how AI can help cybersecurity teams:

6 - A new GenAI governance guide for your org’s leaders

Here’s a guide that might interest business and tech chiefs eager to ensure their organizations develop and deploy generative AI securely and responsibly.

The Open Worldwide Application Security Project (OWASP) guide “LLM AI Cybersecurity & Governance Checklist” is aimed at business, privacy, compliance, legal and cybersecurity leaders, among others, tasked with setting guardrails for their organization’s generative AI use.


The goal: Help them stay abreast of AI developments so that their organizations will reap business success from their generative AI use while avoiding legal, security and regulatory pitfalls.

“These leaders and teams must create tactics to grab opportunities, combat challenges, and mitigate risks,” reads the document, which was created by the same OWASP team in charge of the group’s “OWASP Top 10 for LLM Applications” list.

Areas covered by the checklist include:

  • Adversarial risk
  • Threat modeling
  • Asset inventorying
  • Security and privacy training
  • Legal and regulatory considerations

For more information about using generative AI responsibly and securely:

Juan Perez

Juan has been writing about IT since the mid-1990s, first as a reporter and editor, and now as a content marketer. He spent the bulk of his journalism career at International Data Group’s IDG News Service, a tech news wire service where he held various positions over the years, including Senior Editor and News Editor. His content marketing journey began at Qualys, with stops at Moogsoft and JFrog. As a content marketer, he's helped plan, write and edit the whole gamut of content assets, including blog posts, case studies, e-books, product briefs and white papers, while supporting a wide variety of teams, including product marketing, demand generation, corporate communications, and events.


Source: https://www.tenable.com/blog/cybersecurity-snapshot-ai-security-roundup-best-practices-research-11-29-2024