AI News Bureau
Written by: CDO Magazine
Updated 5:38 PM UTC, April 15, 2026

Washington state has passed new legislation regulating artificial intelligence, requiring greater transparency from companies such as OpenAI and Anthropic about how their chatbot systems operate.
Two bills signed by Governor Bob Ferguson introduce new rules for AI-generated content and conversational AI tools. One measure focuses on misinformation, requiring content that has been significantly altered using generative AI to be traceable through watermarks or metadata. The rule applies to large AI platforms with more than one million monthly users.
“I’m confident I’m not the only Washingtonian who often sees something on my phone and wonders to myself, ‘Is that AI, or is it real?’ And I feel like I’m a reasonably discerning person,” Ferguson said. “It is virtually impossible these days.”
The second law targets AI chatbots designed to simulate human-like interaction. Platforms such as ChatGPT and Claude will be required to clearly inform users that they are not human at the beginning of conversations and periodically during ongoing interactions. The law also prohibits chatbots from presenting themselves as human.
Stricter safeguards apply to minors. Chatbots must provide more frequent disclosures when interacting with users under 18 and are barred from engaging in sexually explicit conversations with them. The legislation also prohibits "manipulative engagement techniques," including attempts to pressure minors to continue conversations or to withhold information from their parents.
“AI has incredible potential to transform society,” Ferguson said. “At the same time, of course, there are risks that we must mitigate as a state, especially to young people. So I speak partly as a governor, but also as the father of teenage twins who grapple with this as a lot of parents do every single day.”
The law further requires AI platforms to prevent chatbots from encouraging self-harm or providing guidance related to it, and to establish systems that identify such conversations and direct users to mental health resources.