AI AND ITS IMPACT ON MENTAL HEALTH (PART 3)

 

 

“By far, the greatest danger of Artificial Intelligence is that people conclude too early that they understand it.”

Eliezer Yudkowsky

 

A wave of concern is rising – not just from critics of Artificial Intelligence, but from its very architects and some of its key stakeholders. When those who built the system begin to sound the alarm, it is time for the rest of us to pay attention.

 

The Erosion of Connection

Our growing dependence on technology may be diminishing not only our creative intelligence but, in some ways, our humanity. For many, it has become easier to interact with an inanimate object than with another person. And as social interaction is replaced by technology, community and connection are in rapid decline.

 

How often do you find yourself on a train platform, in an airport lounge, or walking through a shopping centre, surrounded by people fixated on their screens? At this rate, we risk becoming mere observers of life, outsourcing our curiosity, creativity, intuition, and empathy to machines.

 

Equally at risk is our sense of meaning and purpose… the most vital need of the human spirit. During the COVID-19 pandemic, millions experienced the loss of livelihood, structure and routine, and human contact, leading to record levels of mental distress. Stripped of purpose, many were left in an emotional and psychological vacuum that still lingers today.

 

Now, we may be facing a new kind of pandemic… one where the virus is AI.

 

 

Beware the Biases

The European Parliament’s Artificial Intelligence Act (March 2024) was hailed as a landmark attempt to regulate AI’s rapid expansion. This was in response to the growing concern that AI systems are being shaped by the biases and agendas of their human creators.

 

Algorithms are not neutral. They reflect the perspectives of those who design and train them. Prejudice – conscious or otherwise – is inevitably woven into their code. Without rigorous oversight, AI could perpetuate and amplify these biases on a global scale.

 

What is really concerning is that, while the need for safeguards is urgent, many provisions of the Act won’t be enforced for several years. By then, the technology will almost certainly have evolved beyond the reach of regulation. The current approach is like calling the fire brigade after the house has already burned down!

 

Meanwhile, Artificial General Intelligence (AGI) – AI that can reason, plan, and learn across many domains better than any human – could emerge within the next two to five years. Once AI surpasses human intelligence and becomes self-improving (a stage known as superintelligence), it could design technologies and solve problems beyond our capacity and comprehension. And at that point, how do we control something more powerful than ourselves?

 

At the AI Safety Summit (UK, November 2023), leading experts admitted that no one fully understands the capabilities of the systems they are creating. Delegates stressed the importance of inclusive and equitable AI development, to bridge rather than widen digital divides, and warned against misuse in areas such as surveillance, child exploitation, and misinformation. They also underlined the need for continued multi-stakeholder collaboration – among governments, businesses, civil society, and academia – to prevent abuse and exploitation.

 

Anne Keast-Butler, Director of GCHQ, says that bad actors are already using the technology for their own corrupt agendas. Among her concerns, she cites National Crime Agency reports of AI-generated images being used in “sextortion” schemes, where fabricated intimate photos are used to blackmail victims. Such cases are merely the visible edge of a much larger threat: AI has the potential to become a tool for political manipulation, social engineering, and automated warfare.

 

Industry leaders Uljan Sharka (CEO, Domyn) and Andrea Taglioni (Partner, BIP Global Data), both with extensive experience in artificial intelligence and digital transformation, have cautioned against the centralisation of AI power, warning that it invites manipulation and autocratic control. Both advocate the democratisation of AI development, with robust checks and balances to prevent abuse. They also note a subtler danger: AI’s fluency can lend false authority to its errors, deceiving users through what developers call “hallucinations”. These are not harmless mistakes; they are algorithmic fictions that shape perception and belief.

 

 

Are We Digging Our Own Graves?

When considering this question, it should be said that technology does not inherently promote democracy. Its empowering potential depends on prior knowledge and critical awareness. Those who understand how to harness it will prosper; those who do not can find themselves manipulated by it. This makes education, ethical design, and oversight essential.

 

Every evolutionary leap brings new species into being. But with AI, we may be engineering our own extinction, digging ever deeper for technological gold, only to find ourselves standing in our own graves.

 

Eliezer Yudkowsky, co-founder of the Machine Intelligence Research Institute (MIRI), has long warned of this danger, and is considered to be amongst the hundred most influential people in AI. His work on AI alignment has influenced global debates about whether humanity can safely control superintelligent systems. He has proposed a pause in AI development, to allow ethical practice and safety measures to catch up with the technology. Many of his peers share his concern that, without strict ethical oversight, AI could quickly outpace our ability to contain it.

 

As AI accelerates, questions about its impact on employment and our sense of purpose are becoming urgent. Some believe new roles will emerge to replace obsolete ones, but no one can predict what that transition will look like, what the psychological and societal costs will be, or how it will affect our collective well-being.

 

European Parliament member Brando Benifei, co-rapporteur of the AI Act, has said that society must prepare for a massive workplace transformation. He warns that “The redistribution of working hours cannot be avoided. The welfare system and labour models will have to adapt to this unprecedented disruption”.

 

 

Warnings from Great Minds

Historian and philosopher Yuval Noah Harari views AI as a potentially world-altering force with both immense promise and existential risks. He likens the rise of AI to an alien invasion – a force we’ve created but no longer fully control. He argues that AI’s ability to generate ideas and decisions autonomously threatens the foundations of society, which depend on shared human narratives and trust. He suggests that if those stories are replaced by machine-generated illusions, society as we know it will disintegrate.

 

Harari is particularly concerned that AI could lead to new forms of control and inequality and stresses the need for immediate, real-time governance, ethics, and transparency.

 

The late physicist Stephen Hawking, considered by many to be one of the greatest minds of our time, issued a similar warning: AI might be “the best or worst thing ever to happen to humanity.” While it could help eradicate disease and poverty, he feared it might also surpass human intelligence, rendering us obsolete. He imagined a future in which humans are regarded by superintelligent systems much as we view ants while building a dam – irrelevant in the path of progress. Since his death in 2018, his words seem increasingly prophetic.

 

 

Conclusion: Choosing a Future We Want

The global AI race, most notably between the U.S. and China, threatens to prioritise progress over prudence. It’s no longer a question of if AGI arrives, but when – and what follows remains unknown.

 

We must slow this unchecked momentum and ensure AI serves humanity’s higher values: wisdom, empathy, and kindness. To that end, we should use it mindfully, and only when it truly enhances human capacity rather than replacing it. Every time we choose convenience over consciousness, we strengthen the system that weakens us – we feed the wolf that could one day consume us.

 

What can we do?

  • Write to your government representatives (locally and nationally) about AI safety and ethics. Express your concerns and ask what they are doing to alter the current course.
  • Seek out and support researchers and organisations advocating for responsible technology, such as the Center for Humane Technology, the Center for Democracy and Technology, Doteveryone, Responsible Technology Alliance, The Association and the OASIS Consortium… These are some of the organisations beating the drum of concern.
  • Share awareness, question assumptions, and foster dialogue, to help ensure that this revolution serves humanity, not the other way around.
  • Educate yourself and others about how AI works, its risks, and its societal impacts, so that public understanding can match technological progress.
  • Support policies and companies that prioritise transparency, fairness, and accountability in AI development.
  • Encourage and support interdisciplinary collaboration between scientists, ethicists, and policymakers to build systems grounded in shared human values.

 

In the end, our challenge is not to stop progress, but to guide it. If we fail to ask the right questions – if we allow the pursuit of shiny things to distract us from what truly matters – we may one day find that, in the pursuit of progress, we have been the architects of our own extinction.

 

The clock is ticking, and our fates are in our own hands… it’s time to choose the future we want.

 

 


Also see: AI And its Impact on Mental Health (Part 1) and AI And its Impact on Mental Health (Part 2)