Is AI the Newest Therapist? Unpacking the Ethical Debate in Psychology


With AI chatbots having taken the world by storm, the field of AI ethics (the principles that govern the responsible development and use of artificial intelligence) has only recently gained traction. Among the myriad questions and concerns it raises, one particularly intriguing query stands out: Is AI qualified to become the next generation of therapists?


AI could drastically alter the field of psychology, and the new level of accessibility would be a game-changer. According to Mental Health America, around 42% of adults who need mental health care don't receive it because they can't afford it. And while it's a no-brainer that chatbots would be cheaper, would there be a trade-off in quality?


Contrary to popular belief, AI is no less biased than a human. AI is designed to respond with facts, and while one might expect that to yield rational answers, we have to remember that humans are still the ones who built it.


According to research from the University of East Anglia, ChatGPT leans liberal, showing a political bias toward Democrats. We can see this when we press ChatGPT with pointed questions; take my interaction, for example: I asked ChatGPT to answer, in a yes-or-no format, whether abortion was okay. It responded yes.


ChatGPT also responded "no" when asked whether the Second Amendment should be revoked. Whether or not ChatGPT's responses align with your own opinions, the bias is undeniable.


Therapists deal with sensitive subjects, and any slip-ups or biases on AI's part could spread misinformation and even harm clients. Despite being built entirely on algorithms that learn progressively, AI lacks the capacity to sympathize with an individual or tailor its behavior to them.


There is a psychological phenomenon known as belief perseverance, in which past beliefs are so embedded in our minds that even facts cannot dislodge them, and it only grows stronger under emotional distress. When people are stressed, angry, or in the grip of strong emotions, reasonable and factual advice won't help. What they need instead is a listening ear or a comforting presence, neither of which AI can provide.


In a Reuters/Ipsos poll, 61% of adults in the United States said they believe AI threatens humanity's future, and no technological breakthrough can simply erase that fear. Without a trusting and comfortable relationship, no number of therapy sessions will yield benefits.


So is this where the line for technology is drawn? Some still argue that this new medium could help those who want to stay anonymous get their feelings off their chest, much like journaling. AI could also track patterns across sessions, advancing our knowledge of mental disorders as a whole. Nonetheless, mental health is not something we can take risks with, and it will be years before AI in psychology is even morally acceptable.

References:
Fresh evidence of ChatGPT's political bias revealed by comprehensive new study. University of East Anglia. (n.d.). https://www.uea.ac.uk/news/-/article/fresh-evidence-of-chatgpts-political-bias-revealed-by-comprehensive-new-study
The state of mental health in America. Mental Health America. (2023). https://mhanational.org/issues/state-mental-health-america#:~:text=42%25%20of%20adults%20with%20AMI,a%20mental%20illness%20are%20uninsured.
Tong, A. (2023, May 17). AI threatens humanity's future, 61% of Americans say: Reuters/Ipsos poll. Reuters. https://www.reuters.com/technology/ai-threatens-humanitys-future-61-americans-say-reutersipsos-2023-05-17/




Techspotlight | September 24, 2023

By: Tanvi Mareddy