Kirith Entwistle: ‘We must tackle AI-generated intimate image abuse before it gets so much worse’

What would you do if someone shared your most intimate photos online? What if someone downloaded your Facebook photos, ran them through AI software, and is now sharing fake explicit images that look just like you? How would you feel then?

At every point in my life the phrase “the digital world is changing rapidly” has been true, and the pace of change only seems to be accelerating. As exciting as that pace can be, if we are too slow to respond it will pose serious risks. When it comes to protecting women and girls, I’m worried that we are already failing to keep up.

Last month, I had a really interesting discussion with the End Violence Against Women Coalition, whose membership includes more than 150 specialist women’s organisations, researchers, and experts working on joined-up approaches to end violence against women and girls. I came away wanting to start a much bigger conversation, and I want to begin on Tuesday (12th November 2024) by holding a debate in the House of Commons.

What we talked about was the very real abuse being suffered every day by people across the country in the form of Non-consensual Intimate Image (NCII) abuse. That isn’t new; it’s something we’ve been talking about since everyone started carrying phones with cameras. What is new is our understanding of the scale of the problem and its potential to get so much worse.

Let’s use the Revenge Porn Helpline as an example. Since 2015 they have handled more than 24,000 direct cases via phone or email, 3,500 of them this year alone. They also run an online chatbot, which has held 35,000 sessions with victims of NCII abuse. They have helped remove 330,000 images from the internet, but they know of 30,000 reported non-consensual intimate images that remain online because of legal limitations and international boundaries.

We have serious questions to ask. Are our laws effective enough to punish offenders and protect victims? What role should social media companies play in controlling the spread of NCII, and how well do they actually respond to genuine complaints? Are we aware of the risks posed by AI, and what exactly do we do about them?

Only two weeks ago a man from my constituency was sentenced to 18 years in prison for creating child abuse images using AI technology and real pictures of children. People would supply him with photos and pay him to manipulate them into horrendous images of abuse. What happens when AI products become so widespread and easy to use that people no longer need to search the darkest corners of the internet to find the one person willing and able to create these images? They will be able to make their own pictures of anyone they like, without consent. It’s not hard to imagine a future where a group of teenage boys makes pictures of all the girls in their class “for a laugh”, causing real harm to someone.

I know that parliamentarians are taking an interest in this. I’m a member of the Women and Equalities Select Committee, which is currently speaking to tech platforms Google and Microsoft about NCII, but I want to create space for more people to get involved in these discussions and help Parliament catch up with the challenges that women and girls are already facing. It’s a difficult conversation to have, but we all need to be part of it.
