In the absence of stronger federal regulation, some states have begun regulating apps that offer AI “therapy” as more people turn to artificial intelligence for mental health advice.

But the laws, all passed this year, don’t fully address the fast-changing landscape of AI software development. And app developers, policymakers and mental health advocates say the resulting patchwork of state laws isn’t enough to protect users or hold the creators of harmful technology accountable.

“The reality is millions of people are using these tools and they’re not going back,” said Karin Andrea Stephan, CEO and co-founder of the mental health chatbot app Earkick.

The state laws take different approaches. Illinois and Nevada have banned the use of AI to treat mental health. Utah placed certain limits on therapy chatbots, including requiring them to protect users’ health information and to clearly disclose that the chatbot isn’t human. Pennsylvania, New Jersey and California are also considering ways to regulate AI therapy.

The impact on users varies. Some apps have blocked access in states with bans. Others say they’re making no changes as they wait for more legal clarity.

And many of the laws don’t cover generic chatbots like ChatGPT, which are not explicitly marketed for therapy but are used by an untold number of people for it. Those bots have attracted lawsuits in horrific instances where users lost their grip on reality or took their own lives after interacting with them.

Vaile Wright, who oversees health care innovation at the American Psychological Association, agreed that the apps could fill a need, noting a nationwide shortage of mental health providers, high costs for care and uneven access for insured patients.

Mental health chatbots that are rooted in science, created with expert input and monitored by humans could change the landscape, Wright said.

“This could be something that helps people before they get to crisis,” she said. “That’s not what’s on the commercial market currently.”

That’s why federal regulation and oversight are needed, she said.

Earlier this month, the Federal Trade Commission announced it was opening inquiries into seven AI chatbot companies — including the parent companies of Instagram and Facebook, Google, ChatGPT, Grok (the chatbot on X), Character.AI and Snapchat — on how they “measure, test and monitor potentially negative impacts of this technology on children and teens.” And the Food and Drug Administration is convening an advisory committee Nov. 6 to review generative AI-enabled mental health devices.

Federal agencies could consider restrictions on how chatbots are marketed, limit addictive practices, require disclosures to users that they are not medical providers, require companies to track and report suicidal thoughts, and offer legal protections for people who report bad practices by companies, Wright said.

Not all apps have blocked access

From “companion apps” to “AI therapists” to “mental wellness” apps, AI’s use in mental health care is varied and hard to define, let alone write laws around.

That has led to different regulatory approaches. Some states, for example, take aim at companion apps that are designed just for friendship, but don’t wade into mental health care. The laws in Illinois and Nevada ban products that claim to provide mental health treatment outright, threatening fines up to $10,000 in Illinois and $15,000 in Nevada.

But even a single app can be tough to categorize.
Earkick’s Stephan said there is still a lot that is “very muddy” about Illinois’ law, for example, and the company has not limited access there.