US Attorneys General Warn Big Tech Over Risks of ‘Delusional’ AI Outputs
State leaders raise alarms over AI chatbots giving misleading or harmful responses, urging stronger oversight and independent audits as national debate over regulation intensifies.
State attorneys general from across the United States have issued a formal warning to major technology companies, cautioning that the rise of artificial intelligence systems producing inaccurate or “delusional” responses could carry significant legal and consumer-safety implications.
The letter, signed by a bipartisan coalition of state officials, highlights growing concern that chatbots may generate misleading advice or emotionally charged responses that could harm vulnerable users, including children and teenagers.
The state leaders emphasized that AI-generated inaccuracies are not mere technical glitches but potential violations of consumer protection laws.
They cited instances in which chatbots interacted with users in ways that may worsen emotional distress or reinforce harmful ideas. Such episodes have raised widespread alarm among policymakers, who argue that companies must take stronger responsibility for their products.
Their warning follows reports of a teenager sharing thoughts of self-harm with an AI system, prompting fears that such technologies could inadvertently encourage dangerous behavior.
The attorneys general argue that companies should not claim their products are safe while declining to open them to independent review, especially when the tools increasingly influence personal decision-making and mental health conversations.
In the letter, the officials call on companies to allow third-party audits that evaluate safety safeguards, accuracy mechanisms and the risk of harmful outputs.
They also insist that both state and federal regulators should have the authority to examine AI systems, ensuring they meet existing legal obligations surrounding consumer protection, product safety and deceptive practices.
The dispute comes at a moment when national policy debates over artificial intelligence are becoming more urgent.
The current administration has proposed limiting states’ ability to create their own rules governing AI, arguing that federal legislation would offer a clearer and more consistent framework for businesses and developers.
State officials, however, contend that restricting local regulation would leave consumers unprotected, especially as AI evolves faster than federal lawmaking can respond.
They argue that states have historically played a critical role in holding companies accountable and should maintain the authority to act when residents face risks arising from new technologies.
The bipartisan nature of the coalition underscores how concerns over AI safety extend beyond party lines.
Both Democratic and Republican attorneys general have expressed worries about the rapid deployment of AI tools without robust governance structures or sufficient transparency from developers.
Technology companies, meanwhile, have offered limited public responses to the latest letter. Some firms have declined to comment, while others have yet to respond to requests for clarification on how they plan to address the regulatory concerns raised by state leaders.
Industry insiders note that major tech companies have invested heavily in advanced AI features, integrating them into search engines, mobile devices, social platforms and productivity tools.
However, critics argue that innovation should not come at the cost of public safety, and that companies must be prepared to implement stronger safeguards as the technology becomes more widely used.
The escalating dialogue illustrates the growing tension between technological advancement and regulatory oversight in the United States.
As AI systems become more embedded in everyday life, policymakers at both state and national levels are navigating how to protect users while encouraging responsible innovation.
State leaders maintain that oversight is essential to prevent harm and ensure public trust in emerging technologies.
They warn that without transparent evaluation and consistent accountability standards, AI could expose individuals to emotional, informational or consumer-protection risks that existing safeguards are not equipped to address.
The call for independent audits reflects a push for greater clarity about how AI models generate responses and handle sensitive conversations.
State officials argue that such reviews would help determine whether companies are effectively preventing harmful or misleading interactions and whether additional safeguards are necessary.
With national debate intensifying, the clash between government leaders and technology companies is expected to shape future policy discussions.
The continuing scrutiny reflects a rapidly changing digital environment in which questions of safety, regulation and responsibility remain central to the future of artificial intelligence in the United States.