President’s Executive Order Seeks to Ensure Safety of AI

A recent executive order directs federal agencies and companies to take steps to promote security and prevent harm from the growing use of artificial intelligence systems.

By Jay Landers, SWE Contributor

CREDIT: miakievy

“Extraordinary potential for both promise and peril.”

This is how artificial intelligence, known as AI, is described in the executive order on AI issued in October by President Joe Biden. Looking to foster the promise of AI and minimize its perceived peril, the executive order details a wide array of actions to be taken by federal agencies and imposes new requirements on companies working in the AI field. The U.S. Department of Commerce is charged with overseeing the initiatives and objectives outlined in the executive order.

Released by the White House on October 30, the executive order “establishes new standards for AI safety and security, protects Americans’ privacy, advances equity and civil rights, stands up for consumers and workers, promotes innovation and competition, advances American leadership around the world, and more,” according to a White House fact sheet.

The order defines AI as a “machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments.” AI systems “use machine- and human-based inputs to perceive real and virtual environments; abstract such perceptions into models through analysis in an automated manner; and use model inference to formulate options for information or action,” the order states.

“With the executive order, the White House intends for federal agencies to take the lead when it comes to identifying and rooting out biases inherent in AI.”

– Mrinal Karvir, senior AI software engineering manager for Intel Corp.

In a statement from the Department of Commerce, Kathi Vidal, the undersecretary of commerce for intellectual property and director of the U.S. Patent and Trademark Office, explained that artificial intelligence offers enormous potential for the country, including promoting inclusive innovation and entrepreneurship and solving global issues at speed and scale. “But we must move thoughtfully and carefully to mitigate risk and to ensure AI adoption is responsible,” Vidal said. “The president’s executive order strikes that balance, ensuring we incentivize innovation in critical and emerging technologies, including through appropriate intellectual property protections for inventors and creators, while also advancing national security and AI safety.”

Potential benefits and risks

In a Nov. 1 speech at the U.S. Embassy in London, Vice President Kamala Harris outlined some of the potential benefits and risks associated with these new technologies. “AI has the potential to do profound good: to develop powerful new medicines to treat and even cure the diseases that have for generations plagued humanity, to dramatically improve agricultural production to help address global food insecurity, and to save countless lives in the fight against the climate crisis,” Harris said.

At the same time, AI poses certain “existential threats” that “could endanger the very existence of humanity,” Harris said. She noted the possibility of “AI-enabled cyberattacks at a scale beyond anything we have seen before [and] AI-formulated bioweapons that could endanger the lives of millions.” The rise of AI-enabled misinformation and disinformation also threatens to undermine democracies, Harris said.

Meanwhile, the misuse of AI systems is currently causing harm to individuals on a more personal, direct basis, Harris said. For example, a “faulty AI algorithm” can cause a senior citizen to lose health insurance, and women may be “threatened by an abusive partner with explicit, deepfake photographs,” she said, referring to images that have been digitally altered or fabricated so they appear to depict a real person in situations that never occurred.

“We must manage all these dangers to make sure that AI is truly safe,” Harris said.

New requirements

Among its many requirements, the 63-page order directs the Department of Commerce, through its National Institute of Standards and Technology, or NIST, to establish “guidelines and best practices with the aim of promoting consensus industry standards for developing and deploying safe, secure, and trustworthy AI systems.” NIST was given nine months to complete this task.

According to the executive order, the Department of Commerce was required to prepare a report within eight months of the order’s release pinpointing existing ways to identify and label synthetic content produced by AI systems and to establish the “authenticity and provenance of digital content, both synthetic and not synthetic, produced by the federal government or on its behalf.”

“When it comes to artificial intelligence, there is enormous potential for our country, for supporting inclusive and equitable innovation and entrepreneurship, and for solving world problems at speed and scale.”

– Kathi Vidal, undersecretary of commerce for intellectual property and director of the U.S. Patent and Trademark Office

All federal agencies with regulatory authority over critical infrastructure must assess “potential risks related to the use of AI in critical infrastructure sectors,” the order states. To be conducted within 90 days and then annually thereafter, these assessments must evaluate “ways in which deploying AI may make critical infrastructure systems more vulnerable to critical failures, physical attacks, and cyberattacks” and “consider ways to mitigate these vulnerabilities.”

The order invokes the Defense Production Act, the 1950 law that authorizes the president to mobilize domestic industry to help ensure national defense. Under the aegis of the act, the Department of Commerce, within 90 days, must implement measures “to ensure and verify the continuous availability of safe, reliable, and effective AI.”

To this end, the Department of Commerce is requiring companies developing AI systems to report the steps they are taking to test their models and protect them from theft.

Addressing bias in AI

The executive order is a great starting point for ensuring the safe development and use of AI, said Mrinal Karvir, a senior AI software engineering manager for Intel Corp. who teaches a course on the responsible use of AI in SWE’s Advance Learning Center. However, in terms of addressing bias within AI, Karvir said, the order could go further. “I don’t see anything very specific to companies that are developing the technology in terms of bias.”

Because they reflect the data used to train them, AI systems can manifest, and sometimes amplify, biases present in society, including gender- and race-based biases. With the executive order, the White House intends for federal agencies to take the lead when it comes to identifying and rooting out biases inherent in AI, Karvir said. “I think the approach they’re taking is to first implement in-house and [then] role model those best practices.”

However, the absence of a common framework among the agencies could hamper efforts to address bias in AI, Karvir said. “Each agency or each organization will have their own way of addressing the bias, so there might be inconsistencies.”

Despite this possible limitation, the order represents progress overall, Karvir said. “It’s definitely a very positive step forward.”