When AI Actually Saves Lives
First and foremost, this entry is meant to recognise the effort and extraordinary work behind STOP: a research project that studies mental health issues on social media through Artificial Intelligence. You can find its story here: https://stop-project.github.io
I didn’t plan to write this post. I just went to an event thinking it would be another day of enterprise AI, slides and buzzwords.
Instead, I walked out thinking: “Ok, this is why I chose health tech.”
The talk was by Ana Freire. She’s a researcher at UPF who decided to put AI at the service of one of the hardest topics we avoid: suicidal ideation and mental health on social media.
It started with a post on Facebook. A girl announced she was going to end her life. Her followers tried to help, the police arrived late, and she died.
Scrolling back through her timeline, Ana found what many of us have seen but don’t always want to face: posts like “I don’t accept the image I see in the mirror” or “What a pity I woke up alive today.”
As humans, we often look away. But AI doesn’t look away. It looks for patterns.
From there, she and her team built something that still blows my mind:
- An AI system that detects signals of suicidal ideation, depression and eating disorders in social media content (text, images, behaviour).
- They had to fight through GDPR, ethics committees and “this can’t be done” until they found the right way: full anonymisation, translation, feature extraction… turning raw posts into legally usable data.
- Their models reached ~85% accuracy in detecting suicidal ideation, roughly the same as a group of psychiatrists.
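To make the pipeline above a little more concrete, here is a minimal Python sketch of the kind of preprocessing it describes: stripping direct identifiers from a post, then turning it into numeric features a classifier could consume. This is purely illustrative and not the STOP project's actual code; the function names, placeholder tokens and features are my own assumptions, and a real system would use proper NER, translation and human review.

```python
import re

def anonymise(text):
    """Replace direct identifiers with placeholders before any processing.
    (Illustrative only: a real pipeline would use NER and human review.)"""
    text = re.sub(r"https?://\S+", "<URL>", text)                # links
    text = re.sub(r"@\w+", "<USER>", text)                       # handles
    text = re.sub(r"\b[\w.+-]+@[\w-]+\.\w+\b", "<EMAIL>", text)  # emails
    return text

def extract_features(text):
    """Turn an anonymised post into simple numeric features."""
    words = re.findall(r"[a-zA-Z']+", text.lower())
    first_person = {"i", "me", "my", "myself", "mine"}
    return {
        "n_words": len(words),
        "first_person_rate": sum(w in first_person for w in words) / max(len(words), 1),
    }

post = "What a pity I woke up alive today @friend https://example.com"
clean = anonymise(post)
print(clean)                     # identifiers replaced by placeholders
print(extract_features(clean))   # features only, no raw personal data
```

The point of this shape is the legal one Ana described: once posts are reduced to placeholders and feature vectors, the data that models train on no longer identifies anyone.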
But the most powerful part wasn’t the model. It was what came after.
They used these insights to design ad campaigns on Instagram, Facebook and TikTok targeted to people at risk, not to sell anything, but to show:
Phone lines, emotional support chats, 24/7 help on the other side of the screen.
Some results Ana shared:
- A 60% increase in calls to a suicide prevention helpline after the first campaign.
- Up to 10x more conversations in emotional support chats when campaigns were active.
- Multiple emergency interventions and real lives saved because someone saw an ad at the right moment and reached out.
All from combining: AI, mental health professionals, legal experts, NGOs and… ads managers. Not to optimise a funnel, but to keep people alive.
A few things stayed with me:
- “You can’t” often means “you can’t that way.” With the right ethics, creativity and multidisciplinary teams, a “no” can become a better “yes”.
- Tech is not always the villain. The same platforms that amplify anxiety can also become a lifeline, if we listen and design with intention.
- Sometimes the most powerful intervention is simple. Ana shared that one major risk factor for young people wasn’t an algorithm, it was this: they eat alone. Having dinner with them, listening, being there… is also “mental health infrastructure”.
We talk a lot about AI models, GPUs and benchmarks. Days like this remind me what really matters: using engineering to do now what people need next.
Thank you, Ana, for the courage to go through all the “this is not possible” and turn it into impact. And thanks to the organisers for putting social impact at the centre of an AI event.
If you made it this far, maybe today is a good day to do something simple and powerful: send a message, make a call, or sit at the table with someone who shouldn’t be eating alone. Or maybe just join a nonprofit initiative that creates impact instead of scrolling without purpose.
Not as a side project. Just as a contribution to life.
Thanks for reading.
Have a wonderful week ✌🏼