US Army to Release Guidelines on GenAI Use

The upcoming guidelines will address security concerns associated with LLMs.

The U.S. Army's Chief Information Officer revealed that the department is nearing the release of a new directive aimed at guiding the utilization of generative AI, particularly focusing on large language models (LLMs).

“We continue to see the demand signal. And though there is lots of immaturity in this space, we’re working through what that looks like from a cyber perspective and how we’re going to treat that. So we’re gonna have some initial policy coming out,” Army CIO Leo Garciga reportedly said during a webinar hosted by AFCEA NOVA.

Garciga highlighted the security challenges associated with LLMs and emphasized that the Army's forthcoming guidelines will specifically address these concerns.

Earlier this year, the U.S. Department of Defense (DoD) Chief Digital and Artificial Intelligence Office (CDAO) launched the first phase of the AI Bias Bounty program, a crowdsourced initiative designed to identify bias in AI systems.

“The goal of the first bounty exercise is specifically to identify unknown areas of risk in Large Language Models (LLMs), beginning with open source chatbots, so this work can support the thoughtful mitigation and control of such risks,” the DoD said.

Previously, Jude R. Sunderbruch, Executive Director of the DoD Cyber Crime Center, said at the Google Defense Forum that the Defense Department is starting to use AI, but so are its adversaries.

Deploying one AI technology to counter another is a likely scenario in future warfare.

“Adversaries are trying to get past our boundaries and our securities every day. They're moving at 'lightspeed.' They're on fiber optic networks. They're able to bounce from one virtual private server to another in an instant, so utilizing AI to try to get ahead of that is going to be essential," Sunderbruch said.

CDO Magazine
www.cdomagazine.tech