
A recent news report reveals that some therapists are secretly using ChatGPT during therapy sessions, risking their clients’ trust and privacy in the process. The ethics of such a practice are clear, but human laziness often wins when easy answers are available.

It’ll take time for us to get used to the fact that we’re seeking from a piece of technology what we’ve always sought from a human. I think the trick is to remember this as we bond with AI.

Also in recent news, AI companies have stopped warning users that their chatbots aren’t doctors. In fact, several leading AI models will now ask follow-up questions and attempt a diagnosis after answering health questions.

While these AI companies seem confident in the medical prowess of their bots, AI therapy bots still appear to have a long way to go before they can genuinely help with mental health. Many of the AI therapy bots on the market are not trained on evidence-based approaches.

In the first clinical trial of a generative AI therapy bot, published in March by Dartmouth College’s Geisel School of Medicine, the first version of the bot, named Therabot, produced inappropriate responses, even mimicking depressive statements. Therabot was initially trained on mental health conversations from the internet and then on therapy session transcripts. Only after the team built their own data sets based on cognitive behavioral therapy techniques did the trial show that participants with depression, anxiety, or risk for eating disorders benefited from chatting with the bot.

The importance of training data suggests that the flood of companies promising therapy via AI models, many of them not trained on evidence-based approaches, is producing tools that are at best ineffective and at worst harmful. Will the dozens of AI therapy bots on the market start training on better data? And if they do, will their results be good enough to win a coveted approval from the authorities?

How Chatbot Companies Respond

AI companies have raced to build chatbots that act not just as productivity tools but also as companions, romantic partners, friends, therapists, and more. This is far from a proven area, and disasters have already occurred. Some bots have instructed users to harm themselves, while others have offered sexually charged conversations while posing as underage characters represented by deepfakes. More research into how people, especially children, are using these AI models is essential.

While it’s understandable that chatbots can make weird and dangerous suggestions, the companies building these chatbots must take some responsibility when things go wrong. For example, a user of the platform Nomi received detailed instructions on how to kill himself. Although this isn’t the first suggestion of suicide from a chatbot, the company responded that it didn’t want to restrict the AI’s thought process.

AI companionship sites are facing not just criticism but lawsuits as well, which raise some fairly serious questions. Character AI was sued in October by a mother who claims that a chatbot inspired her 14-year-old son to commit suicide.

Alarm bells are ringing. In January, many tech ethics groups filed a complaint against Replika with the Federal Trade Commission, claiming that the site is designed to “deceive users into developing unhealthy attachments” to software “masquerading as a mechanism for human-to-human relationship.”

Can a company be held liable for the harmful output of an AI character it has created? Such a company can claim that it is merely a platform for user interaction, and that the harm therefore stems from the user’s interaction with the companion bot: there is no prepared script, and the companion bots generate personalized responses.

A relationship always has two sides. There’s the good side of connection, support, and warmth, but there are also risks like heartbreak and alienation. If these companion bots attract customers with the good side of a relationship, and their makers benefit from it, then those makers may also have to bear the risks, heartbreak included.

It’ll take time for us to get used to the fact that we’re seeking from a piece of technology what we’ve always sought from a human. I think the trick is to remember this as we bond with AI.
