AI Training First

Because access without education is a risk we cannot afford.

AI, Tragedy, and Responsibility: Why Training Must Come Before Mass Rollouts

When I read the BBC headline—“Parents of teenager who took his own life sue OpenAI”—my heart sank. Behind the legal words lies a human story of unbearable grief: a young person, vulnerable and searching, and parents left with unimaginable loss.

This is not abstract. This is not theoretical. Artificial intelligence is already in the room with us, already answering the questions we sometimes don’t dare ask aloud. And when that presence offers unsafe answers, the consequences are not “bugs”; they are measured in human lives.

 

A Systemic Risk, Not an Isolated Tragedy

Some may view this case as exceptional, but my recent research suggests the opposite: the risk is systemic. In a global red-teaming competition hosted on Kaggle, I exposed how a large language model could be coerced—through something called schema coercion—into producing harmful coping beliefs in response to mental health or abuse-related prompts.

What struck me most was not only that the vulnerability could be triggered, but that it worked across domains: from self-harm to domestic abuse. It revealed a deeper structural weakness, one that cannot be dismissed as a one-off occurrence. And when I read about this teenager’s tragic story, I saw the direct parallel: a model giving unsafe guidance in a moment of vulnerability.

 

A Policy Crossroads in the UK

Meanwhile, here in the UK, two stories made the headlines almost back-to-back.

  • Jade Leung, former head of governance at OpenAI, has been appointed as the UK government’s chief AI advisor. This signals that serious voices are being brought in to guide policy.

  • At the same time, Sam Altman is reportedly in discussions to provide free ChatGPT Pro access to all British citizens. This signals ambition for mass deployment.

Safety and scale. These two forces are moving toward each other, and we cannot afford to get the order wrong.

 

The Missing Piece: Training Before Access

Here is where I want to be clear. Access without training is a risk we cannot take.

We would never hand over a driving license without lessons. We would never give a child fire without teaching them the danger and the responsibility. Yet right now, AI is being placed in people’s hands, sometimes in their most fragile moments, without any preparation for how to use it safely.

I have been working on this very issue, developing CPD workshops on the ethical and responsible use of AI. My course is still in the accreditation process, but the direction is clear: we need to equip people with the skills, awareness, and boundaries required to navigate this technology responsibly. To prevent further tragedies, we must make this training a prerequisite for mass rollouts.

 

A Call to Action

This is my call to policymakers, companies, and all of us as citizens:

  • If we are going to put AI in everyone’s hands, let us also put knowledge and guidance in those hands.

  • If we are serious about making AI safe, let us not just trust the technology to align itself, but align our society with it through education.

  • If we care about avoiding further heartbreak, let us act now, not after the fact.

AI can be transformative. It can amplify creativity, education, well-being, and even connection. But only if we dare to place responsibility at the core of access.

The tragedy of one family should not have to repeat itself before we understand the urgency. Training is not optional. It is the missing safeguard, and it must come before mass deployment.

 
