A Busy Executive's Guide to the White House AI Order

January 19, 2024 Burt Wagner

In October 2023, the White House issued the “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” Along with the executive order (EO), the administration released a “fact sheet” to help explain its contents. The order addresses some areas that have long needed regulation or guidance, but it does not offer complete protection. It will have some impact on businesses that do IT-related work with the United States federal government, along with other businesses engaged with artificial intelligence (AI)-related products. According to the EO (and derived from 15 U.S.C. 9401(3)), AI is defined as “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments.”

What the EO does 

The intention of this order is threefold: first, to guide the federal government in its development and adoption of artificial intelligence within its own IT systems; second, to provide an outline that various parts of the federal government can draw on when they issue guidance regarding commercial adoption of AI; and finally, to ask Congress to enact certain laws to constrain or guide the use of AI within the United States.

General guidance 

The following is a partial list of the overarching guidance within the EO regarding AI-enabled systems: 

  • AI must be safe and secure – The EO seeks to outline what that means. 

  • Promote responsible innovation and collaboration – This involves investing in education and research, mitigating the risks of any harm that could be done by AI-enabled systems, responsibly protecting intellectual property as well as private data, and fostering a responsible marketplace for AI products.

  • Protect American workers – The administration wants workers to have “a seat at the table” regarding how their jobs and lives will be affected by AI. This might entail collective bargaining, training and education, and likely oversight by the federal government. 

  • Advance equity and civil rights, while protecting privacy – The EO claims that AI has significant potential to increase or amplify existing biases and discrimination, and the actions it outlines seek to mitigate these effects. The EO also directs agencies to be cognizant of the effects AI can have in harvesting and using individuals’ private data, and to limit this practice appropriately as specified by law.

Guidance for federal IT systems 

According to the EO, any federal IT system with AI capabilities shall adhere to a set of guidelines covering how models are trained, what data is used to train those models, and what documentation is required of any testing performed on the AI components within such systems. The EO does not lay out specifics for these requirements; instead, it directs other components within the executive branch to create this guidance (or at least to begin doing so).

One item of interest, however, is that the EO includes a number of directives around hiring AI-knowledgeable professionals in several areas. It also directs the Office of Personnel Management to “plan a national surge in AI talent in the federal government” before the end of the calendar year. Included within that effort:

  • Law enforcement professionals with AI skills, along with training in the “responsible application” of AI

  • Policy analysts to assist with creation of the policies specified within the EO 

  • Technology talent with the skills and knowledge to create and train AI models 

  • Improved pathways (e.g., excepted service) for federal hiring of data scientists, data engineers, and other AI-relevant professionals

Regulations to be set by the federal government 

The EO outlines steps that certain agencies will be required to take regarding regulation of AI-enabled products and systems. (Funding for such mandates is not discussed.) In other words, a significant portion of this executive order directs other parts of the executive branch to create further guidelines and regulations. A partial list of the agencies and mandates within the EO includes:

  • The National Institute of Standards and Technology (NIST) will set standards for testing AI algorithms prior to their public release

  • The Department of Homeland Security will apply NIST’s standards to AI tools used in critical infrastructure, and will form an AI Safety and Security Board

  • The Department of Energy will address AI systems’ chemical, biological, radiological, and nuclear (CBRN) threats, as well as cybersecurity risks

  • The Department of Defense will, under the terms of the Defense Production Act, require that developers of any AI model posing “a serious risk to national security, national economic security, or national public health and safety” (what that means is not exactly defined) notify the federal government when training the model and provide the test results

  • Health and Human Services will develop screening standards related to AI synthesis of biological materials to mitigate risks

  • The Commerce Department will develop standards for authenticating whether content was AI-generated, and for how such content shall be watermarked. This will particularly help to identify federal government communications to citizens as authentic and not AI-generated.

  • The National Security Council will produce a National Security Memorandum with further actions relating to AI and national security. 

Requests to Congress 

The executive order seeks to inform Congress of needed laws regarding AI-enabled systems and products, primarily with regard to privacy issues. For example, the administration asks Congress for: 

  • Stronger privacy laws relevant in the “AI era,” such as privacy protections required for data used to train AI models. Existing privacy laws (e.g., HIPAA) also need to be updated to make them relevant for AI. 

  • Rules governing how the federal government, as well as the commercial sector, is allowed to use commercially available data, such as data from data brokers. Commercially available data often contains personally identifiable information (PII), and laws surrounding such issues need updating.

  • Federal support for workers displaced by AI, including training and education opportunities for occupations related to AI. 

Expected impacts to business 

Most of this executive order is obviously forward-looking and is aimed at guiding the federal government as it grapples with creating guidance for this fast-developing field. There is relatively little within it that will change how business is done in the near term, either within or outside of the federal government. It is, however, a clear call to be aware that change is coming. For entities that seek to do business with the federal government and its IT systems, as well as those that will do business around AI tools that impact the public at large, it would be prudent to continue monitoring which AI-governing entities stand up in the wake of this EO.

There are some obvious items that should be given additional attention, even before guidelines emerge: 

  • What data is used in training AI models, what PII is contained therein, and how this data is protected (a minimal screening sketch follows this list)

  • What prejudices might exist within training data, and how those might be amplified in the decision guidance given by resulting AI models

  • Regarding AI systems that can have an effect on public infrastructure or items with national security implications, it’s important to ask which decisions AI tools can make without human approval or intervention, and when it is appropriate to “have a human in the loop”
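
To make the first item above concrete, here is a minimal, illustrative sketch of screening candidate training records for common PII patterns before they enter a training set. The patterns and sample records are assumptions for demonstration only; real programs rely on dedicated data-governance tooling and legal review, not a handful of regular expressions.

```python
# Illustrative only: a minimal, regex-based PII screen for candidate training records.
# The patterns below are simplistic placeholders; production pipelines use dedicated
# PII-detection tooling plus legal and privacy review.
import re
from typing import Iterable

PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "us_phone": re.compile(r"\(?\d{3}\)?[-.\s]?\d{3}[-.\s]?\d{4}"),
}

def screen_records(records: Iterable[str]) -> dict:
    """Count how many records match each PII pattern before they reach a training set."""
    hits = {name: 0 for name in PII_PATTERNS}
    for record in records:
        for name, pattern in PII_PATTERNS.items():
            if pattern.search(record):
                hits[name] += 1
    return hits

if __name__ == "__main__":
    sample = [
        "Customer asked about invoice 4821.",
        "Reach me at jane.doe@example.com or (555) 123-4567.",
    ]
    print(screen_records(sample))  # {'email': 1, 'us_ssn': 0, 'us_phone': 1}
```

Flagged records would then be reviewed, redacted, or excluded before model training, and the counts retained as part of the documentation trail the forthcoming guidance is likely to expect.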

What does it mean?

Essentially, this EO is the federal government’s first step toward trying to be relevant in the field of AI. With this EO, it is trying to “upskill” by declaring an intent to hire significant numbers of AI-skilled personnel. Given AI’s potential impact across myriad fields of endeavor, the EO is also a declaration of “all hands on deck” to bring relevant regulatory authority to bear and make AI a priority area of focus.

Prioritizing the foundational elements required to safely and securely leverage AI across the United States is clearly a must for everyone working in the public sector. While guidance will continue to evolve, it is critical to ensure that organizations are ready to responsibly create AI-enabled tools and systems. At the very least, this would include: 

  • Creating and documenting an organizational strategy for safe, secure, and trustworthy development and use of AI. Data used in training models should be, to the furthest extent possible, objective, accurate, complete, and free of bias.

  • Understanding and evaluating AI and AI-related talent inside an organization. This includes, but is not limited to, roles such as data scientists and statisticians, data engineers, database administrators, back-end software developers and testers, IT security professionals, and data owners and stewards.

  • Validating your organization’s ability to support AI development and use with its current technology implementations, while adhering to security best practices and following the most current available ethical standards. This would include a secure supply chain for AI components and models, adequate testing of AI models (with documented results; a minimal logging sketch follows below), and scalable, AI-compatible database capabilities.
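
On the testing-and-documentation point, here is a minimal, illustrative sketch of one way to retain an auditable record of AI model test runs. The evaluate() stub, the file name, and the JSON layout are assumptions for demonstration; neither the EO nor any agency has prescribed a format.

```python
# Illustrative only: a minimal pattern for recording AI model test results so they can
# be retained, audited, and shared. The evaluation stub and file layout are assumptions.
import json
import datetime

def evaluate(test_cases: list) -> dict:
    """Stand-in for a real evaluation harness; returns simple summary metrics."""
    passed = sum(1 for case in test_cases if case["expected"] == case["actual"])
    total = len(test_cases)
    return {"total": total, "passed": passed,
            "pass_rate": passed / total if total else 0.0}

def record_results(model_id: str, metrics: dict, path: str = "ai_test_log.jsonl") -> None:
    """Append a timestamped record of a test run to a local log file."""
    entry = {
        "model_id": model_id,
        "run_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "metrics": metrics,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

if __name__ == "__main__":
    cases = [
        {"expected": "approve", "actual": "approve"},
        {"expected": "deny", "actual": "approve"},
    ]
    metrics = evaluate(cases)
    record_results("loan-screening-v1", metrics)  # pass_rate: 0.5
```

An append-only log like this is deliberately simple; the point is that test evidence exists, is timestamped, and can be produced when guidance or regulators ask for it.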

Explore more

Tanzu by Broadcom has decades of experience helping organizations across the public sector adopt modern technologies and practices while adhering to various guidelines and policies. See how we are working to help organizations adopt AI-enabling technologies, create successful modernization efforts, and scale their successes with some of the resources below.

Download the Service Delivery Modernization Programs That Scale white paper.

Learn how Tanzu experts are helping organizations build AI products, and explore the powerful AI capabilities of Greenplum, in our webinars.

In addition, existing Tanzu Application Service customers can sign up for an exciting AI beta.
