As AI continues to grow in importance, ensuring the security of AI services is crucial. Our team at Sonrai attended the AWS Los Angeles Summit on May 22nd, where we saw firsthand how large a role AI will play in 2024. In fact, according to summit presentations, 70% of top executives said they are exploring generative AI solutions. With this in mind, we've compiled a list of sensitive permissions across AWS AI services. Each entry below gives the permission's description, the MITRE ATT&CK tactic it maps to, and why it is sensitive. We hope your teams can use this list to establish policies and procedures for safeguarding these permissions.
Description: Grants permission to apply a guardrail
MITRE Tactic: Impact
Why is it sensitive?
This permission allows users to set or modify boundaries on AI model behaviors. Misuse can result in improperly configured guardrails that either over-constrain the model, hindering its functionality, or under-constrain it, exposing the organization to compliance and safety risks.
Description: Grants permission to delete a guardrail or its version
MITRE Tactic: Impact
Why is it sensitive?
Deleting a guardrail can remove critical protections, leaving AI models without necessary operational boundaries. This can lead to models behaving unpredictably or violating regulatory requirements, posing significant risks to the organization. It can also expose data that the guardrail's filters previously blocked from being returned.
Description: Grants permission to update a guardrail
MITRE Tactic: Impact
Why is it sensitive?
Updating a guardrail allows modifications to the constraints and rules governing AI models. If misused, it can weaken security measures or create loopholes, leading to potential compliance violations and operational disruptions.
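As an illustration of the kind of safeguard a team might put in place for the guardrail permissions above, the sketch below attaches an inline deny for guardrail mutation actions to a role that should never touch guardrails. This is a minimal example, not a complete control: the role and policy names are hypothetical placeholders, and the bedrock: action names are assumed to correspond to the permissions described above.

```python
import json
import boto3

# Deny guardrail mutation actions for a role that has no business changing them.
# Role and policy names below are hypothetical placeholders.
GUARDRAIL_LOCKDOWN = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyGuardrailMutation",
            "Effect": "Deny",
            "Action": [
                "bedrock:ApplyGuardrail",
                "bedrock:DeleteGuardrail",
                "bedrock:UpdateGuardrail",
            ],
            "Resource": "*",
        }
    ],
}

iam = boto3.client("iam")
iam.put_role_policy(
    RoleName="data-science-sandbox-role",          # hypothetical role
    PolicyName="deny-bedrock-guardrail-mutation",  # hypothetical policy name
    PolicyDocument=json.dumps(GUARDRAIL_LOCKDOWN),
)
```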
Description: Grants permission to create a plugin
MITRE Tactic: Persistence
Why is it sensitive?
Creating a plugin can introduce new functionalities, some of which might be malicious, allowing persistent access or data exfiltration.
Description: Grants permission to create a user
MITRE Tactic: Persistence
Why is it sensitive?
Creating a user can provide an attacker with a new identity to maintain persistent access and perform unauthorized activities without detection.
Description: Grants permission to update a plugin
MITRE Tactic: Persistence
Why is it sensitive?
Updating a plugin can modify its behavior, potentially introducing malicious code or altering functionalities to bypass security measures.
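Because plugin and user creation are exactly the kinds of quiet changes that establish persistence, a lightweight complement to restricting these permissions is reviewing the corresponding CloudTrail events on a schedule. The snippet below is a minimal sketch; the event names are assumed to mirror the API calls described above, so adjust them to match what CloudTrail records in your account.

```python
from datetime import datetime, timedelta, timezone
import boto3

# Surface recent plugin and user creation events for review.
cloudtrail = boto3.client("cloudtrail")
since = datetime.now(timezone.utc) - timedelta(days=7)

for event_name in ("CreatePlugin", "UpdatePlugin", "CreateUser"):
    pages = cloudtrail.get_paginator("lookup_events").paginate(
        LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": event_name}],
        StartTime=since,
    )
    for page in pages:
        for event in page["Events"]:
            print(event_name, event.get("Username"), event["EventTime"])
```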
Description: Grants permission to create a code repository
MITRE Tactic: Persistence
Why is it sensitive?
Creating a code repository can allow an attacker to store and execute malicious code within the AI environment, maintaining persistent control.
Description: Grants permission to create a presigned domain URL
MITRE Tactic: Initial Access
Why is it sensitive?
This permission can be used to generate URLs that provide temporary access to resources, potentially allowing unauthorized users to gain entry.
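One common mitigation for presigned-URL abuse is to pin URL creation to a trusted network. The sketch below shows a deny statement conditioned on source IP; the CIDR range is a placeholder, and sagemaker:CreatePresignedDomainUrl is assumed to be the permission described above. It could be attached the same way as the guardrail example earlier in this post.

```python
# Only allow presigned Studio domain URLs to be created from the corporate
# network. The CIDR range below is a placeholder.
PRESIGNED_URL_GUARD = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyPresignedUrlOffNetwork",
            "Effect": "Deny",
            "Action": "sagemaker:CreatePresignedDomainUrl",
            "Resource": "*",
            "Condition": {
                "NotIpAddress": {"aws:SourceIp": ["203.0.113.0/24"]}  # placeholder CIDR
            },
        }
    ],
}
```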
Description: Grants permission to create a user profile
MITRE Tactic: Persistence
Why is it sensitive?
Creating a user profile can help an attacker establish and maintain a foothold within the system, enabling ongoing malicious activities.
Description: Grants permission to put a model package group policy
MITRE Tactic: Privilege Escalation
Why is it sensitive?
Setting a model package group policy can elevate privileges, allowing an attacker to gain more control over AI resources and operations.
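Since a model package group policy can quietly broaden who can register or pull models, it is worth periodically reviewing which policies exist. Below is a minimal audit sketch, assuming read access to SageMaker; it simply lists model package groups and prints any resource policy attached to them so unexpected cross-account grants can be spotted.

```python
import boto3
from botocore.exceptions import ClientError

# List model package groups and print any resource policy attached to them.
sm = boto3.client("sagemaker")

paginator = sm.get_paginator("list_model_package_groups")
for page in paginator.paginate():
    for group in page["ModelPackageGroupSummaryList"]:
        name = group["ModelPackageGroupName"]
        try:
            policy = sm.get_model_package_group_policy(ModelPackageGroupName=name)
            print(name, policy["ResourcePolicy"])
        except ClientError:
            # No resource policy attached to this group.
            print(name, "no resource policy")
```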
Description: Grants permission to create a resource policy
MITRE Tactic: Defense Evasion
Why is it sensitive?
Creating a resource policy can be used to evade detection by altering access controls and permissions, masking malicious activities.
Description: Grants permission to update a resource policy
MITRE Tactic: Defense Evasion
Why is it sensitive?
Updating a resource policy can modify access controls, potentially allowing an attacker to evade security measures and maintain undetected access.
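Created or updated resource policies are exactly what IAM Access Analyzer is designed to review, so pairing these permissions with analyzer findings gives a detection backstop. The snippet below is a minimal sketch that pulls active findings, which flag resource policies granting access outside your zone of trust; it assumes an analyzer already exists, and the ARN is a placeholder.

```python
import boto3

# Pull active IAM Access Analyzer findings for review.
# The analyzer ARN below is a placeholder for one you have already created.
analyzer_arn = "arn:aws:access-analyzer:us-east-1:111122223333:analyzer/example"

aa = boto3.client("accessanalyzer")
paginator = aa.get_paginator("list_findings")
for page in paginator.paginate(
    analyzerArn=analyzer_arn,
    filter={"status": {"eq": ["ACTIVE"]}},
):
    for finding in page["findings"]:
        print(finding["resourceType"], finding.get("resource"), finding.get("principal"))
```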
Description: Grants permission to put a project policy
MITRE Tactic: Persistence
Why is it sensitive?
Setting a project policy can control access to AI resources, allowing an attacker to maintain persistent access or disrupt normal operations.
Description: Grants permission to create an endpoint
MITRE Tactic: Persistence
Why is it sensitive?
Creating an endpoint can enable persistent access to AI services, potentially exposing sensitive data and operations.
Description: Grants permission to put a resource policy
MITRE Tactic: Persistence
Why is it sensitive?
Setting a resource policy can control access and permissions, helping an attacker maintain a foothold within the system.
Description: Grants permission to create an access control configuration
MITRE Tactic: Persistence
Why is it sensitive?
Creating an access control configuration can help an attacker establish and maintain access, potentially leading to unauthorized actions.
Description: Grants permission to update an access control configuration
MITRE Tactic: Persistence
Why is it sensitive?
Updating an access control configuration can modify permissions and controls, helping an attacker maintain undetected access.
Description: Grants permission to add a policy statement
MITRE Tactic: Lateral Movement
Why is it sensitive?
Adding a policy statement can extend permissions and access, allowing an attacker to move laterally within the network.
Description: Grants permission to delete a policy statement
MITRE Tactic: Impact
Why is it sensitive?
Deleting a policy statement can remove critical security controls, increasing the risk of unauthorized access and actions.
Description: Grants permission to put a policy
MITRE Tactic: Lateral Movement
Why is it sensitive?
Setting a policy can modify access controls, enabling an attacker to move laterally and potentially escalate their privileges within the system.
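Before locking any of the permissions above down, it helps to know which identities currently hold them. One way to check is the IAM policy simulator. The sketch below tests a single role against a few of the actions discussed in this post; the role ARN is a placeholder, and the action names are assumed to match the permissions described above. In practice you would loop this over every role and user in the account.

```python
import boto3

# Check whether a given principal can perform a handful of the sensitive
# actions covered above. The role ARN and action list are illustrative.
SENSITIVE_ACTIONS = [
    "bedrock:DeleteGuardrail",
    "sagemaker:CreatePresignedDomainUrl",
    "sagemaker:PutModelPackageGroupPolicy",
]

iam = boto3.client("iam")
response = iam.simulate_principal_policy(
    PolicySourceArn="arn:aws:iam::111122223333:role/example-role",  # placeholder
    ActionNames=SENSITIVE_ACTIONS,
)
for result in response["EvaluationResults"]:
    print(result["EvalActionName"], result["EvalDecision"])
```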
While IAM controls are just one of many ways to protect AWS AI services, permissions management will greatly reduce the chances of attackers exploiting them. Sonrai's Cloud Permissions Firewall disables AI services your organization is not using, ensuring they cannot be exploited, and creates policies restricting the most sensitive permissions associated with each service. Development is not interrupted, because identities that need the access are exempted in the policy.
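For teams building this pattern themselves, the same "restrict, but exempt the identities that need it" idea can be expressed as a deny statement with an exemption condition, for example in a service control policy. The sketch below is illustrative only, not Sonrai's implementation: the action list is a small sample of the permissions covered above, and the exempted role ARN is a hypothetical placeholder.

```python
# A deny statement that restricts a sample of sensitive AI actions for
# everyone except an exempted platform-admin role (placeholder ARN).
RESTRICT_SENSITIVE_AI_PERMISSIONS = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenySensitiveAIActionsExceptExemptRole",
            "Effect": "Deny",
            "Action": [
                "bedrock:DeleteGuardrail",
                "bedrock:UpdateGuardrail",
                "sagemaker:PutModelPackageGroupPolicy",
            ],
            "Resource": "*",
            "Condition": {
                "StringNotLike": {
                    "aws:PrincipalArn": "arn:aws:iam::*:role/ml-platform-admin"  # placeholder
                }
            },
        }
    ],
}
```

Whichever tooling you use, the goal is the same: the sensitive permissions above stay reachable only by the identities that genuinely need them.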