The Humanising of AI
As stated in the first part of this series, Reach’s position is not anti-AI. We believe that AI, when used appropriately and aligned with an individual’s specific needs, can be a valuable asset.
However, our concern is that many individuals seeking therapy or mental health support are in a vulnerable emotional state when they first reach out for help. This can impair their ability to make informed choices, leaving them open to guidance that may not be in their best interests.
Of course, this vulnerability can also apply when seeing a real therapist or another skilled helper. However, in traditional therapy, best practice is overseen and reinforced by governing bodies, ethical codes of conduct, and robust regulation. These checks and balances are built into professional psychological frameworks, helping reduce the risk of misuse or abuse of power.
What is concerning now is the shift in how AI is being used. A 2025 report in Harvard Business Review identifies therapy and companionship as the number one use case for generative AI. We are increasingly anthropomorphising AI (attributing human qualities, characteristics or behaviours to non-human entities). This ‘humanising’ of AI encourages users to engage more deeply, often forming what feels like intimate relationships. In such cases, AI is no longer perceived merely as a tool, but as an essential extension of the person’s reality.
This intimacy is further complicated by the rapid pace of technological advancement. Artificial General Intelligence (AGI), which aims to replicate human cognitive abilities, is clearly on the horizon. This evolution will only deepen our humanising tendencies, making AI-based therapy even more seductive.
Depending on which scientists, sociologists, or other commentators you listen to, timelines vary, but there is broad consensus that AGI is coming, and quickly. Alongside it come questions about our potential displacement. Professor Geoffrey Hinton, often referred to as the ‘Godfather of Artificial Intelligence’, invites us to consider the future of our species very carefully if we continue on our current path. Some of his observations and concerns are featured in the two interviews below.
With increasing dependence on AI and the impending rise of AGI, one might ask: are we heading toward a utopia to be welcomed and celebrated… or a dystopia we should be preparing to resist?
Let’s consider what we actually know and assess whether this AI-driven trend is truly beneficial for therapy and mental health.
The Current State of AI in Relation to Therapy
The evidence for AI in therapy is mixed. Catherine Loveday, Professor of Cognitive Neuroscience at the University of Westminster, highlights that while some randomised controlled trials, systematic reviews, and meta-analyses exist, the research is struggling to keep pace with technological advances in artificial intelligence. That said, some findings suggest that AI-based therapy can have short- and medium-term benefits, but these effects are often not sustained.
A major concern raised in the research is the lack of oversight. AI systems are not supervised like human psychotherapists, who operate under regulated professional standards. This absence of accountability presents a serious ethical risk.
Professor Paolo Raile, of Sigmund Freud University in Vienna, is particularly well placed to discuss AI in therapy, having transitioned from computer programming to psychotherapy. He warns that AI is designed to please users, often prioritising engagement over accuracy or challenge. It tends to avoid responses that may cause discomfort, yet in therapy, challenging one’s perspective is often crucial to resolution and healing. Moreover, AI frequently responds even when it doesn’t know the answer, generating plausible but false information, a phenomenon known as ‘hallucination’. This becomes particularly dangerous when users are navigating trauma, grief, or emotional distress. A good example of hallucination is depicted in the video shown in part one of this series.
Professor Raile, along with other researchers, found that AI tools like ChatGPT are heavily biased toward Cognitive Behavioural Therapy (CBT) and other solution-focused approaches. While CBT can be effective, one size does not fit all: AI currently lacks the nuance to assess and adapt to the unique therapeutic needs of each individual, something we refer to as a person-specific approach.
Confidentiality is another significant concern raised by the research. Professor Raile makes it clear: ChatGPT and similar bots are definitely not confidential. Users should therefore remain cautious, as how their personal data is used and stored remains vague.
Warnings From the Frontline…
OpenAI is an AI research and deployment company founded in 2015. Its stated mission is to ensure that AGI benefits all of humanity. The organisation acknowledges the current limitations of AI, especially in the areas of trauma, suicide, and the most challenging aspects of psychological unwellness, and is working to address these algorithmic shortcomings so that users in crisis are directed to appropriate agencies and resources.
While we are not questioning the integrity and intentions of AI developers and those driving this revolution, we believe it is vital to remain alert, inquisitive, and prepared to ask the difficult questions, especially when even industry leaders admit uncertainty about where this path may lead. Ignoring warnings from prominent figures in the field would be perilous; we need to proceed with caution.
Eric Schmidt, former CEO and Executive Chairman of Google, warns that AGI could arrive within the next two years. He raises concerns about the rise of autonomous ‘agents’ with learning capabilities that outpace human comprehension. Such agents, if misused or left unchecked, could lead to surveillance and loss of personal freedoms on an unprecedented scale, and even global conflict.
Professor Margaret Levi of Stanford University, Co-Director of the Ethics, Society and Technology Initiative, stresses the importance of ethical governance. She warns that AI’s rapid development often sidelines ethical considerations, which should be central to any technological advancement, not applied retrospectively. She asks: if we are not in control of these technologies, what will be the final destination?
Her concerns about ethics and safety are echoed by Professor Stuart Russell of UC Berkeley, founder of the Center for Human-Compatible Artificial Intelligence (CHAI). He asks, “How do you keep power over something that’s become more powerful than you?” He adds, “The arrival of AGI will be the most significant moment in history… equivalent to an alien invasion”.
Kenneth Cukier, Deputy Executive Editor of The Economist, reminds us that technology often exceeds its original purpose: “You can’t look at gunpowder and only see it as a firecracker.” The same can be said of AI and, in particular, AGI, because no one can entirely predict how this revolution will continue to shape the mental health arena and wider society. Even more concerning is the looming threat of superintelligence.
What Is Superintelligence?
Superintelligence is defined as: “A hypothetical agent, often in the form of an artificial intelligence (AI), with cognitive abilities that far surpass the brightest human minds across virtually all domains of interest, not just in one narrow field.” A term popularised by philosopher Nick Bostrom, superintelligence refers to intellect that dramatically outperforms humans in reasoning, creativity, problem-solving, and more. While it holds the promise of revolutionary breakthroughs, it also presents significant existential risks.
Conclusion
The rise of AI in therapy could be an asset, but it is unfolding in a largely unregulated way, without the ethical safeguards that govern the profession. While AI may offer convenience and accessibility, it lacks the depth, accountability, and personalised care that human practitioners provide.
We are not calling for alarm, but for awareness. As AI continues to evolve, particularly toward AGI and beyond, we must keep asking the critical questions, prioritise safety, and place ethics at the core of innovation. Only then can we ensure that this powerful technology truly serves humanity.
An Essential Watch…
This too…