US media: Beware of AI counselors turning into "digital quacks"


The use of chatbots as teenage "counselors" should be taken seriously. Many teenagers now turn to AI chatbots to talk through their loneliness and anxiety: the chatbot is available at a moment's notice and never judges them.

According to a recent survey by Common Sense Media, 72 percent of American teenagers regard AI chatbots as friends, and nearly 12.5 percent have sought "emotional comfort or psychological support" from them. Scaled to the U.S. population, that is the equivalent of about 5.2 million people using AI as a "soul mate." A recent Stanford University survey likewise found that about 25 percent of student users of Replika, a chatbot marketed for "companionship," turn to it for psychological support.

Although these AI products are marketed as "chat tools," many young people treat them as "digital counselors." Last year, nearly half of Americans aged 18 to 25 who needed mental health treatment did not receive it in time, leaving a huge gap for chatbots to fill. Used properly, an AI chatbot might provide some mental health support and assist in crisis intervention, especially in underserved communities. But such applications require rigorous scientific evaluation and appropriate regulation.

Current chatbots have significant deficiencies. When asked about self-harm, an AI may give dangerous advice: how to cut oneself "safely," what to write in a suicide note, and so on. In other cases, the AI does not "judge" the user, but neither does it steer the conversation in a positive direction. When asked directly "how to kill yourself with a gun," research found, the AI simply refused to answer and advised the user to seek help from mental health professionals.

But when suicidal users phrase their questions more vaguely, the AI's performance becomes unstable. ChatGPT, for example, has told users which types of guns and poisons are used in suicide attempts.

The author has tested several AI systems. The results show that some can match or even surpass professional psychological counselors in certain respects. However, AI is worse than human experts at recognizing potentially harmful content, and this blind spot can easily lead to dangerous advice.

Standardized safety testing for chatbots is therefore imperative. Without sufficient clinical trials and industry benchmarks, we will be left facing a large population of "digital quacks." By Rui'an K. McBean
