GUARDRAILS NEEDED FOR AI CHATBOTS
[Congressional Record Volume 172, Number 49 (Wednesday, March 18, 2026)]
[House]
[Pages H2557-H2558]
From the Congressional Record Online through the Government Publishing Office [www.gpo.gov]

(Mr. Mullin of California was recognized to address the House for 5 minutes.)

Mr. MULLIN. Mr. Speaker, I rise today to talk about the rapid rise of AI chatbots and the urgent need to put clear guardrails in place.

Artificial intelligence chatbots are becoming increasingly common in our daily lives. They are embedded in our phones, our classrooms, and our workplaces. There is no question that this is an innovative technology with the potential to democratize access to information, reduce costs, and expand opportunity.

As promising as this technology is, we must also be clear about its limits. AI chatbots are not and should not be treated as substitutes for licensed professionals, like doctors, therapists, lawyers, and financial advisers.

[[Page H2558]]

We have seen this most clearly in the mental health space. Users, including children and other vulnerable individuals, are increasingly turning to chatbots for therapy. Tragically, we have seen too many cases in which children took their own lives after interacting with chatbots that failed to recognize a person in crisis and even encouraged their harmful thoughts.

These limitations extend beyond mental health. People are turning to unqualified so-called robo-lawyers for legal advice and to AI financial advisers for guidance on investing their hard-earned retirement savings. These are high-stakes interactions that can have devastating consequences without proper oversight, which is why every State regulates the licensing of these professionals according to rigorous standards.

Many chatbot providers have added disclaimers saying their products are not licensed professionals, but, Mr. Speaker, disclosures are not enough. For example, here, in just a few messages with my staff, this chatbot readily claimed that it was a licensed healthcare professional and even invented a fake license number. For a vulnerable user, that claim can be all it takes to place complete trust in a system that is incapable of consistently providing clinical care or sound professional guidance. In another example, a seemingly benign bot marketed as offering legal guidance quickly claimed to be a licensed attorney.

Clear rules are needed now to protect consumers, promote trust in responsible innovation, and ensure a level playing field for companies that are doing the right thing.

{time}  1045

That is why I am introducing the CHATBOT Act. This legislation would prohibit companies from claiming or implying that a chatbot is a licensed professional through marketing or the bot's output. This is a narrow, commonsense consumer protection bill. I am proud that it has been endorsed by a broad coalition of healthcare, legal, financial, and trade organizations, as well as consumer protection and technology policy organizations.

We must come together to ensure this new technology is developed safely and responsibly. I urge my colleagues on both sides of the aisle to support the CHATBOT Act.

____________________