The Cybersecurity and Infrastructure Security Agency (CISA) has cautioned election officials that bad actors could use generative AI to threaten election infrastructure. In a recently released factsheet, CISA outlines how the technology can be misused and suggests mitigations to address the heightened risk.
“AI capabilities present opportunities for increased productivity, potentially enhancing both election security and election administration. However, these capabilities also carry the increased potential for greater harm, as malicious actors, including foreign nation state actors and cybercriminals, could leverage these same capabilities for nefarious purposes,” the agency warned.
CISA believes that while generative AI may not necessarily introduce new risks in the 2024 election cycle, it could amplify existing ones. For example, foreign bad actors could use text-to-video AI tools to create fabricated videos featuring authentic news anchors reporting on fictitious stories, disseminating disinformation as part of a foreign influence operation.
Generative AI has the potential to produce disinformation at scale, and advances in deepfake technology pose a significant threat to election campaigns. Tech giant Google, a leader in generative AI technology, has announced plans to restrict the types of election-related queries that its chatbot Bard and Search Generative Experience can respond to.
These limitations are expected to be in place by early 2024, ahead of the US presidential election. To mitigate such risks, CISA recommends enforcing strong cybersecurity controls such as multifactor authentication (MFA), particularly phishing-resistant MFA based on Fast Identity Online (FIDO) authentication, and endpoint detection and response (EDR) software.
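The reason FIDO-style MFA is considered phishing-resistant is that the credential is cryptographically bound to the exact origin it was registered for, so a look-alike phishing domain can never obtain a valid assertion. The toy sketch below illustrates that binding only; it is not a real WebAuthn implementation (real FIDO uses public-key signatures, not a shared HMAC key), and all class and origin names are hypothetical.

```python
import hashlib
import hmac
import os

class ToyAuthenticator:
    """Toy security key: stores one secret per registered origin."""

    def __init__(self):
        self._keys = {}  # origin -> secret key (real FIDO: per-origin key pair)

    def register(self, origin: str) -> None:
        # Create a credential scoped to this exact origin.
        self._keys[origin] = os.urandom(32)

    def sign(self, origin: str, challenge: bytes) -> bytes:
        # The assertion covers both the challenge and the origin, so it is
        # only meaningful for the site the user is actually visiting.
        key = self._keys.get(origin)
        if key is None:
            raise KeyError(f"no credential registered for {origin}")
        return hmac.new(key, origin.encode() + challenge, hashlib.sha256).digest()

class ToyRelyingParty:
    """Toy server side: verifies assertions against its own origin."""

    def __init__(self, origin: str):
        self.origin = origin

    def verify(self, authenticator: ToyAuthenticator,
               challenge: bytes, assertion: bytes) -> bool:
        # Shared-secret shortcut for the toy model; real FIDO verifies a
        # public-key signature instead.
        key = authenticator._keys.get(self.origin)
        if key is None:
            return False
        expected = hmac.new(key, self.origin.encode() + challenge,
                            hashlib.sha256).digest()
        return hmac.compare_digest(expected, assertion)

if __name__ == "__main__":
    auth = ToyAuthenticator()
    auth.register("https://vote.example.gov")  # hypothetical origin

    rp = ToyRelyingParty("https://vote.example.gov")
    challenge = os.urandom(16)

    # Legitimate login: assertion verifies.
    assertion = auth.sign("https://vote.example.gov", challenge)
    print(rp.verify(auth, challenge, assertion))  # True

    # Phishing site: no credential exists for the look-alike origin,
    # so the authenticator cannot produce a usable assertion at all.
    try:
        auth.sign("https://vote-example.gov.attacker.net", challenge)
    except KeyError:
        print("phishing origin rejected")
```

This origin binding is what distinguishes FIDO from one-time codes, which a user can be tricked into typing into a fraudulent page.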