- (viii) in consultation with the Secretary of Commerce, the Secretary of Homeland Security, and the heads of other appropriate agencies as determined by the Director of OMB, recommendations to agencies regarding:
- (A) external testing for AI, including AI red-teaming for generative AI, to be developed in coordination with the Cybersecurity and Infrastructure Security Agency;
- (B) testing and safeguards against discriminatory, misleading, inflammatory, unsafe, or deceptive outputs, as well as against producing child sexual abuse material and against producing non-consensual intimate imagery of real individuals (including intimate digital depictions of the body or body parts of an identifiable individual), for generative AI;
- (C) reasonable steps to watermark or otherwise label output from generative AI;
- (D) application of the mandatory minimum risk-management practices defined under subsection 10.1(b)(iv) of this section to procured AI;
- (E) independent evaluation of vendors’ claims concerning both the effectiveness and the risk mitigation of their AI offerings;
- (F) documentation and oversight of procured AI;
- (G) maximizing the value to agencies when relying on contractors to use and enrich Federal Government data for the purposes of AI development and operation;
- (H) provision of incentives for the continuous improvement of procured AI; and
- (I) training on AI in accordance with the principles set out in this order and in other references related to AI listed herein; and
- (ix) requirements for public reporting on compliance with this guidance.
(c) To track agencies’ AI progress, within 60 days of the issuance of the guidance established in subsection 10.1(b) of this section, the Director of OMB shall develop, and update periodically thereafter, a method for agencies to track and assess their ability to adopt AI into their programs and operations, manage its risks, and comply with Federal policy on AI. This method should draw on existing related efforts as appropriate and should address, as appropriate and consistent with applicable law, the practices, processes, and capabilities necessary for responsible AI adoption, training, and governance across, at a minimum, the areas of information technology infrastructure, data, workforce, leadership, and risk management.
(d) To assist agencies in implementing the guidance to be established in subsection 10.1(b) of this section:
- (i) within 90 days of the issuance of the guidance, the Secretary of Commerce, acting through the Director of NIST, and in coordination with the Director of OMB and the Director of OSTP, shall develop guidelines, tools, and practices to support implementation of the minimum risk-management practices described in subsection 10.1(b)(iv) of this section; and
- (ii) within 180 days of the issuance of the guidance, the Director of OMB shall develop an initial means to ensure that agency contracts for the acquisition of AI systems and services align with the guidance described in subsection 10.1(b) of this section and advance the other aims identified in section 7224(d)(1) of the Advancing American AI Act (Public Law 117–263, div. G, title LXXII, subtitle B).
(e) To improve transparency for agencies’ use of AI, the Director of OMB shall, on an annual basis, issue instructions to agencies for the collection, reporting, and publication of agency AI use cases, pursuant to section 7225(a) of the Advancing American AI Act. Through these instructions, the Director shall, as appropriate, expand agencies’ reporting on how they are managing risks from their AI use cases and update or replace the guidance originally established in section 5 of Executive Order 13960.