The National Institute of Standards and Technology (NIST) wants to ensure identity systems leveraging artificial intelligence are trained on good data, according to Ryan Galluzzo, Digital Identity Program Lead in NIST’s Applied Cybersecurity Division.
The institute also wants to continuously test these systems to ensure their effectiveness; however, a significant challenge lies in harnessing robust datasets and testing methodologies across a rapidly expanding range of applications.
Galluzzo pointed out that a key best practice is testing algorithms in operational scenarios with a representative user population. He also suggested that organizations should continuously monitor their solutions post-deployment and have processes in place to address inadvertent bias or discrimination.
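As a rough illustration of what that kind of post-deployment monitoring could look like in practice, the sketch below compares a biometric matcher's false non-match rate across demographic groups and flags large disparities. The group labels, score data, decision threshold, and disparity tolerance are all assumptions for the example, not values drawn from NIST guidance.

```python
import numpy as np

def fnmr_by_group(scores, labels, groups, threshold=0.5):
    """False non-match rate per group: genuine pairs the matcher rejects."""
    rates = {}
    for g in set(groups):
        mask = (groups == g) & (labels == 1)    # genuine comparisons in group g
        if mask.sum() == 0:
            continue
        rates[g] = float((scores[mask] < threshold).mean())
    return rates

# Synthetic stand-in for production match logs (illustrative only).
rng = np.random.default_rng(0)
scores = rng.uniform(0, 1, 1000)                # matcher similarity scores
labels = rng.integers(0, 2, 1000)               # 1 = genuine pair, 0 = impostor
groups = rng.choice(["A", "B", "C"], 1000)      # demographic group labels

rates = fnmr_by_group(scores, labels, groups)
worst, best = max(rates.values()), min(rates.values())
if best > 0 and worst / best > 1.5:             # assumed disparity tolerance
    print(f"Disparity alert: FNMR ratio {worst / best:.2f} exceeds 1.5x")
print(rates)
```

In a real deployment the inputs would come from logged comparison outcomes rather than synthetic data, and alerts like this would feed the remediation processes Galluzzo described.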
“We’re not going to be able to create innumerable amounts of requirements for all potential applications of AI and machine learning. There’s just too many,” Galluzzo said at an event in Washington, DC.
Galluzzo also revealed that NIST is in the middle of updating its digital identity guidelines. Earlier this month, researchers at NIST identified ways an AI model could be corrupted, along with other vulnerabilities in AI systems.
“We are providing an overview of attack techniques and methodologies that consider all types of AI systems. We also describe current mitigation strategies reported in the literature, but these available defenses currently lack robust assurances that they fully mitigate the risks. We are encouraging the community to come up with better defenses,” said Apostol Vassilev, a NIST computer scientist.
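To make the attack techniques Vassilev mentions concrete, here is a minimal sketch of one well-known evasion attack, the fast gradient sign method (FGSM), applied to a toy logistic-regression classifier. The weights, input, and perturbation budget are illustrative assumptions, not values from the NIST report.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.array([1.2, -0.8, 0.5])   # assumed trained weights (hypothetical)
b = 0.1
x = np.array([0.9, 0.4, -0.3])   # a clean input the model classifies as positive
y = 1.0                          # its true label

# Gradient of the cross-entropy loss with respect to the input: (p - y) * w
p = sigmoid(w @ x + b)
grad_x = (p - y) * w

# FGSM: nudge every feature by epsilon in the direction that increases
# the loss, then re-score the perturbed input.
epsilon = 0.3                    # assumed perturbation budget
x_adv = x + epsilon * np.sign(grad_x)

print(f"clean score:       {sigmoid(w @ x + b):.3f}")      # ~0.67, positive
print(f"adversarial score: {sigmoid(w @ x_adv + b):.3f}")  # ~0.49, flipped
```

A small, targeted perturbation flips the model's decision, which is the core failure mode the report catalogs, and the defenses Vassilev describes aim to blunt exactly this kind of manipulation.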
The study is part of NIST’s wider initiative to promote the development and deployment of trustworthy AI.