NIST Researchers Suggest Historical Precedent for Ethical AI Research

The Belmont Report’s guidelines could help avoid repeating past mistakes in AI-related human subjects research.

  • A research paper suggests that a watershed report on ethical treatment of human subjects would translate well as a basis for ethical research in AI.
  • The findings of this 1979 work, the Belmont Report, are codified in federal regulations that apply to government-funded research.
  • Applying the Belmont Report’s principles to human subjects in AI research could bring us closer to trustworthy and responsible use of AI.
[Image: A person reaches out to a transparent screen with a circle of icons around the words "AI ETHICS." Credit: 3rdtimeluckystudio/Shutterstock]

If we train artificial intelligence (AI) systems on biased data, they can in turn make biased judgments that affect hiring decisions, loan applications and welfare benefits — to name just a few real-world implications. With this fast-developing technology potentially causing life-changing consequences, how can we make sure that humans train AI systems on data that reflects sound ethical principles? 

A multidisciplinary team of researchers at the National Institute of Standards and Technology (NIST) is suggesting that we already have a workable answer to this question: We should apply the same basic principles that scientists have used for decades to safeguard human subjects research. These three principles — summarized as “respect for persons, beneficence and justice” — are the core ideas of 1979’s watershed Belmont Report, a document that has influenced U.S. government policy on conducting research on human subjects.

The team has published its work in the February issue of IEEE’s Computer magazine, a peer-reviewed journal. While the paper is the authors’ own work and is not official NIST guidance, it dovetails with NIST’s larger effort to support the development of trustworthy and responsible AI. 

“We looked at existing principles of human subjects research and explored how they could apply to AI,” said Kristen Greene, a NIST social scientist and one of the paper’s authors. “There’s no need to reinvent the wheel. We can apply an established paradigm to make sure we are being transparent with research participants, as their data may be used to train AI.”

The Belmont Report arose from an effort to respond to unethical studies involving human subjects, such as the Tuskegee syphilis study. In 1974, the U.S. created the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, which identified the basic ethical principles for protecting people in research studies. These principles were later codified in a federal regulation, the 1991 Common Rule, which requires researchers to obtain informed consent from research participants. Adopted by many federal departments and agencies, the Common Rule was revised in 2017 to account for changes and developments in research.

The Belmont Report and Common Rule have a limitation, though: The regulations requiring application of the Belmont Report's principles apply only to government-funded research. Industry is not bound by them.

The NIST authors are suggesting that the concepts be applied more broadly to all research that includes human subjects. Databases used to train AI can hold information scraped from the web, but the people who are the source of this data may not have consented to its use — a violation of the “respect for persons” principle.  

“For the private sector, it is a choice whether or not to adopt ethical review principles,” Greene said. 

While the Belmont Report was largely concerned with the inappropriate inclusion of certain individuals, the NIST authors note that a major concern in AI research is inappropriate exclusion, which can bias a dataset against certain demographics. Past research has shown that face recognition algorithms trained primarily on one demographic are less capable of distinguishing individuals in other demographics.
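To make the idea of exclusion-driven bias concrete, here is a minimal sketch (not from the paper) of how a dataset audit might flag underrepresented groups before training. The group labels and the 10% threshold are hypothetical choices for illustration only:

from collections import Counter

def audit_representation(labels, min_share=0.10):
    """Flag groups whose share of the dataset falls below min_share,
    one possible signal of exclusion-driven bias."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()
            if n / total < min_share}

# Example: a face dataset heavily skewed toward one demographic group.
labels = ["group_a"] * 800 + ["group_b"] * 150 + ["group_c"] * 50
print(audit_representation(labels))  # {'group_c': 0.05}

A real audit would use demographic attributes relevant to the study and thresholds justified by the research design, but even a check this simple can surface the skew the authors warn about.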

Applying the report’s three principles to AI research could be fairly straightforward, the authors suggest. Respect for persons would require subjects to provide informed consent for what happens to them and their data; beneficence would imply that studies be designed to minimize risk to participants; and justice would require that subjects be selected fairly, with an eye toward avoiding inappropriate exclusion.

Greene said the paper is best seen as a starting point for a discussion about AI and our data, one that will help companies and the people who use their products alike. 

“We’re not advocating more government regulation. We’re advocating thoughtfulness,” she said. “We should do this because it’s the right thing to do.”


Paper: K.K. Greene, M.F. Theofanos, C. Watson, A. Andrews and E. Barron. Avoiding Past Mistakes in Unethical Human Subjects Research: Moving from AI Principles to Practice. Computer. February 2024. DOI: 10.1109/MC.2023.3327653

Released February 15, 2024, Updated February 20, 2024