In the fast-paced world of technological innovation, Artificial Intelligence (AI) has emerged as a game-changer, with organizations racing to harness its potential for smarter, faster, and more sophisticated solutions. However, with great power comes great responsibility.
Karen Laughton, executive vice president of Advisory Services at Coalfire, emphasizes the significance of the recent AI Executive Order from the Biden administration. “This is a significant first step in establishing guidance for the development and use of AI, and it addresses many of the key concerns raised about AI. The use of AI has increased exponentially in the last year, and it has become a race among organizations to leverage this technology to develop and provide smarter, faster, and more sophisticated solutions.”
“As organizations embark on the AI journey, it is imperative to identify and address the risks associated with the development and deployment of AI technology.”
She concludes, “Maintaining the confidentiality, integrity, and availability of data must be paramount, and proper remediation measures should be implemented.”
AI Risk Management Act
The proposed Federal Artificial Intelligence Risk Management Act of 2023 builds upon the Executive Order, outlining specific measures to ensure responsible AI adoption across federal agencies. Here are some key provisions:
- Guidance for AI Risk Management: The Office of Management and Budget (OMB) will issue guidance requiring agencies to incorporate a comprehensive framework into their AI risk management efforts. This framework will be developed in accordance with guidelines set by the National Institute of Standards and Technology (NIST).
- Workforce Initiative for Expertise: Recognizing the diverse expertise required for effective AI implementation, OMB will establish a workforce initiative. This initiative aims to provide federal agencies with access to a broad range of skills and knowledge necessary for responsible AI development and deployment.
- Procurement Policies for AI Systems: The Administrator of Federal Procurement Policy and the Federal Acquisition Regulatory Council will take action to ensure that federal agencies procure AI systems that align with the established Framework. This measure emphasizes the importance of incorporating ethical and secure AI considerations into the procurement process.
- Test and Evaluation Capabilities: NIST will play a pivotal role in the AI landscape by developing test and evaluation capabilities specifically tailored for AI acquisitions. This step ensures that federal agencies can effectively assess the performance, security, and ethical implications of the AI systems they acquire.
Responsible AI practices
“In essence, the Executive Order and the proposed legislation underscore the United States government’s commitment to fostering innovation while safeguarding against potential risks associated with AI,” says Mike Eisenburg, vice president of Strategy, Privacy, Risk (SPR) Advisory at Coalfire. “As we navigate the future of AI, a collaborative effort between government, industry, and experts is essential to strike the right balance and ensure the responsible development and use of this transformative technology.”
Joe Stallings III, director of SPR Advisory at Coalfire, concurs and adds, “The Act immediately prioritizes the AI Risk Management Framework (RMF) as the go-to framework for managing AI risks for federal agencies and those doing business with federal agencies.”
Strengthening federal AI governance
The draft memo from the OMB aims to establish new agency requirements and guidance. It would require each federal agency to designate a Chief AI Officer within 60 days of the memorandum's issuance, convene an AI Governance Board, and submit a compliance plan within six months.
The OMB memo also prescribes annual risk assessments and ongoing management of risks arising from the use of AI, especially for safety-impacting and rights-impacting AI.
Agencies will also be required to maintain publicly available inventories of their AI use cases, except for AI used as a component of national security systems.
AI RMF assessment
Although the Executive Order, the Act, and the OMB memo currently apply only to federal agencies, adopting their guidance and key provisions would be considered best practice for private entities as well.
Coalfire is at the forefront of promoting responsible AI practices by introducing a dedicated AI Risk Management Framework (RMF) Assessment. This initiative is designed to help organizations navigate the complexities of AI development, deployment, and maintenance while adhering to the highest standards of security and privacy.
The AI RMF Assessment by Coalfire goes beyond traditional risk management approaches, specifically addressing the unique challenges posed by AI technologies.
AI RMF policy templates
In a bid to empower responsible AI practices, Coalfire is further extending its commitment by offering free AI RMF policy templates. These templates serve as a valuable resource for organizations aiming to establish robust policies and procedures governing the development and deployment of AI solutions.
Responsible AI adoption
As the demand for AI solutions continues to surge, Coalfire remains steadfast in its commitment to ensuring that innovation goes hand in hand with responsibility. Through the introduction of the AI RMF Assessment and the provision of free AI RMF policy templates, Coalfire is not only addressing the immediate needs of organizations but is also shaping the future of AI by promoting a culture of security, transparency, and ethical AI practices.
Contact Coalfire to receive free AI RMF templates and schedule an AI RMF assessment.