The National Institute of Standards and Technology (NIST) recently released guidance on how the government should set technical and ethical standards for artificial intelligence.
While it does not contain any specific regulations or policies, the guide describes a number of initiatives that will help the U.S. government promote the responsible use of artificial intelligence, along with high-level principles to guide future technical standards.
Federal standards for artificial intelligence must be strict enough to prevent the technology from harming humans, NIST said, yet flexible enough to encourage innovation and benefit the technology industry. Without better standards for measuring the performance and trustworthiness of AI tools, the government may struggle to strike this balance.
The NIST guidelines emphasize the need to develop tools that help organizations better study and evaluate the quality of AI systems. These tools include standardized testing mechanisms and robust performance metrics that allow the government to better understand these systems and determine how to craft effective standards.
The guidelines state that those involved in developing AI standards must understand and comply with U.S. government policies and principles, including those that address social and ethical issues, governance, and privacy. While it is widely accepted that such considerations must be incorporated into AI standards, it is not clear how this should be done, or whether there is yet a sufficient scientific and technical basis for doing so.
NIST says AI standards developed in the coming years should be flexible enough to adapt to new technologies while minimizing bias and protecting privacy. While some standards will apply to the broader AI market, NIST recommends that the government also examine whether specific applications require more targeted standards and regulations.
The timing of AI standards is also important, NIST said: setting them too early could hinder innovation, but if they come too late, the industry will find it hard to voluntarily agree to them. As a result, government agencies must stay informed about the state of artificial intelligence in order to judge when federal action may be required.
The guidelines recommend that the White House designate members of the National Science and Technology Council to oversee the development of AI standards, and urge agencies to study the approaches technology companies are taking to guide their own AI development efforts.
NIST also recommends that the government invest in research focused on understanding the reliability of AI and on incorporating the resulting metrics into future standards.