AI News Bureau
Written by: CDO Magazine Bureau
Updated 1:13 PM UTC, February 6, 2026

The Cyberspace Administration of China has released draft regulations to strengthen oversight of artificial intelligence services that simulate human personalities and engage users emotionally, signaling Beijing’s intent to impose stricter safety and ethical guardrails on consumer-facing AI.
Issued for public consultation, the proposed rules would apply to AI products and services available in China that present human-like traits — such as personality, thinking patterns, and communication styles — and interact with users through text, images, audio, video, or other formats. The move reflects growing concern among regulators about the psychological and social impact of increasingly immersive AI systems.
Under the draft framework, AI service providers would be required to warn users against excessive use and step in when signs of addiction emerge. Companies would also be expected to take responsibility for safety across the entire product lifecycle, including setting up systems for algorithm reviews, safeguarding data security, and protecting personal information.
A key focus of the proposal is emotional and psychological risk. Providers would need to assess users’ emotional states and their degree of dependence on the service. If users display extreme emotions or signs of addictive behavior, the rules would require companies to take appropriate intervention measures.
The draft also sets clear content and conduct boundaries. AI systems covered by the rules would be prohibited from generating material that threatens national security, spreads rumors, or promotes violence or obscenity.