
1979 Belmont Report on Ethical Treatment of Human Subjects Could Influence Modern AI Ethics

NIST researchers advocate extending the principles of 1979’s watershed Belmont Report to AI research involving human subjects and their data.


Written by: CDO Magazine Bureau

Updated 7:57 PM UTC, Tue February 20, 2024


Researchers from the National Institute of Standards and Technology (NIST) have suggested that a 1979 report on the ethical treatment of human subjects could set a precedent for ethical AI research.

“We should apply the same basic principles that scientists have used for decades to safeguard human subjects research. These three principles — summarized as ‘respect for persons, beneficence, and justice’ — are the core ideas of 1979’s watershed Belmont Report, a document that has influenced U.S. government policy on conducting research on human subjects,” an official NIST update said.

NIST researchers advocate extending these principles to AI research that involves human subjects or their data, noting that AI training databases may contain data gathered without consent, violating “respect for persons.” They also stress AI’s risk of bias when demographic groups are underrepresented in training data, as seen in facial recognition technology. Applying the Belmont principles to AI, they suggest, offers a feasible way to address these risks.

The Belmont Report was written in response to unethical research that exploited human subjects, such as the Tuskegee syphilis study.

The team’s findings are featured in the February edition of IEEE’s Computer magazine, a peer-reviewed journal. Although the paper represents the authors’ independent research and is not formal NIST guidance, it aligns closely with NIST’s broader mission to foster trustworthy and ethical AI development.

Training AI on ‘good data’

The institute also wants to ensure identity systems leveraging AI are trained on good data, according to Ryan Galluzzo, Digital Identity Program Lead in NIST’s Applied Cybersecurity Division.

The institute wants to continuously test these systems to ensure their effectiveness; however, a significant challenge lies in harnessing robust datasets and testing methodologies across a rapidly expanding range of applications.

Galluzzo pointed out that a key best practice is testing algorithms in operational scenarios with a representative user population. He suggested that organizations implement continuous monitoring of their solutions after deployment and have processes in place to address inadvertent bias or discrimination.
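As a rough illustration of the kind of post-deployment monitoring Galluzzo describes, the sketch below compares false match rates across demographic groups in an evaluation log and flags large disparities for review. The record fields, threshold, and function names are hypothetical assumptions for this example, not NIST guidance or a specific NIST tool.

```python
# Hypothetical sketch of post-deployment bias monitoring for a face-matching
# system. Assumes an evaluation log of records with a demographic "group"
# label, the system's decision, and the ground truth; all names are illustrative.
from collections import defaultdict

def false_match_rates(records):
    """Return the false match rate per demographic group.

    Each record looks like:
    {"group": "A", "predicted_match": True, "true_match": False}
    """
    impostor_pairs = defaultdict(int)  # non-matching pairs evaluated, per group
    false_matches = defaultdict(int)   # non-matching pairs wrongly accepted
    for r in records:
        if not r["true_match"]:
            impostor_pairs[r["group"]] += 1
            if r["predicted_match"]:
                false_matches[r["group"]] += 1
    return {g: false_matches[g] / impostor_pairs[g] for g in impostor_pairs}

def flag_disparity(rates, max_ratio=1.5):
    """Flag groups whose error rate exceeds the best-performing group's
    rate by more than max_ratio (an arbitrary example threshold)."""
    best = min(rates.values())
    if best == 0:
        # If the best group has zero errors, flag any group with errors at all.
        return {g: rate > 0 for g, rate in rates.items()}
    return {g: rate / best > max_ratio for g, rate in rates.items()}

# Example with made-up evaluation data
records = [
    {"group": "A", "predicted_match": True, "true_match": False},
    {"group": "A", "predicted_match": False, "true_match": False},
    {"group": "B", "predicted_match": False, "true_match": False},
    {"group": "B", "predicted_match": False, "true_match": False},
]
rates = false_match_rates(records)
print(rates)                  # {'A': 0.5, 'B': 0.0}
print(flag_disparity(rates))  # {'A': True, 'B': False}
```

In a real deployment, a check like this would run continuously on operational outcomes rather than a static list, and any flagged disparity would feed the organization's remediation process.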

