
How AI is leading the fight against online child abuse

An article from Emerging Europe

by Marek Grzegorczyk

The AI for Safer Children initiative – a collaboration between the UN Interregional Crime and Justice Research Institute’s Centre for AI and Robotics and the Ministry of Interior of the UAE – is helping law enforcement agencies tap into the potential of AI. 

The widespread availability of information technologies such as social media and messaging apps has created unprecedented opportunities for communication and collaboration. However, these same technologies have also facilitated the spread of dangerous content such as misinformation, hate speech, and extremist propaganda. 

They have also created a space that facilitates the spread of child sexual abuse materials (CSAM). 



One in five girls and one in thirteen boys globally have been sexually exploited or abused by the age of 18, according to UNICEF, with online interactions featuring in some form in almost all cases of child sexual exploitation and abuse. 

The Covid-19 pandemic accelerated this proliferation, as most of our day-to-day activities moved to cyberspace, exposing an ever-greater number of people to malicious and dangerous content. According to the US-based National Center for Missing & Exploited Children, there were almost 30 million reports of child sexual abuse on the internet in 2021.

Among these reports, one particularly worrying trend is the rise of self-generated child sexual abuse material. In 2021 alone, the number of self-generated images or videos increased almost fourfold (374 per cent) compared to pre-pandemic levels, adding a whole new dimension to the nature of child sexual exploitation and abuse.

Legislation can only go so far 

Governments and supranational authorities are aware of the problem. The EU recently passed its Digital Services Act, the UK is debating its Online Safety Bill, and last month the US Supreme Court began hearing cases concerning the scope of Section 230 of the Communications Decency Act, which shields online platforms from legal liability for content posted by their users.

In the case of the EU, it is the first significant package of legislation regulating online content in over 20 years.  

However, regulations and legislation can only go so far. Fortunately, while technology has played a role in the dissemination of CSAM, it is also technology – particularly artificial intelligence (AI) – that can now play a significant role in supporting regulators’ efforts to curb harmful online content.

AI-powered tools can help identify and remove CSAM more efficiently and accurately than human moderators alone.  

For example, AI algorithms can analyse image and video content and flag potentially harmful material for further review by human moderators. This can help reduce the workload for human moderators and increase the speed at which harmful content is detected and removed. 
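As a rough illustration of that flag-for-review workflow, here is a minimal Python sketch. Everything in it – the `score_image` stub, the threshold value, the function names – is an assumption for illustration, not a description of any real moderation system:

```python
# Minimal sketch of AI-assisted content triage: a classifier scores each
# item, and anything above a threshold is queued for human review.
# Nothing is removed automatically; moderators make the final call.

REVIEW_THRESHOLD = 0.7  # illustrative value; real systems tune this carefully


def score_image(image_bytes: bytes) -> float:
    """Stand-in for a trained image classifier that returns the model's
    confidence (0.0-1.0) that the content is harmful."""
    return 0.0  # placeholder: a real system would run model inference here


def triage(images: dict[str, bytes]) -> list[tuple[str, float]]:
    """Return (image_id, confidence) pairs needing human review,
    highest-confidence first."""
    flagged = [
        (image_id, score)
        for image_id, data in images.items()
        if (score := score_image(data)) >= REVIEW_THRESHOLD
    ]
    return sorted(flagged, key=lambda pair: pair[1], reverse=True)
```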

In addition to content moderation, AI can also assist with the tracking and identification of individuals involved in the distribution and consumption of CSAM. By analysing patterns of online behaviour and communication, AI algorithms can help identify and locate individuals who may be involved in the production or dissemination of harmful content.  

AI4SC 

In 2020, the United Nations Interregional Crime and Justice Research Institute’s (UNICRI) Centre for AI and Robotics and the Ministry of Interior of the United Arab Emirates launched the AI for Safer Children (AI4SC) initiative in an effort to tackle child sexual exploitation and abuse online through the exploration of new technological solutions, specifically AI. 

The initiative strives to help law enforcement agencies tap into the potential of AI and has designed the AI for Safer Children Global Hub – a unique centralised and secure platform for law enforcement agencies – to help them do so. This flagship solution to a growing problem might also become a template for tackling other kinds of harmful online content, such as disinformation linked to radicalism and extremism.

Irakli Beridze is the head of the Centre for Artificial Intelligence and Robotics at UNICRI. He tells Emerging Europe that AI-powered software has now matured to the point where the deployment of AI tools by device manufacturers and electronic service providers, including social media companies, should start becoming the norm.

“In practical terms, this means that regulators and law enforcement officials should be seeking to integrate these tools as much as possible to expedite their work, cut through case backlogs and address these cases of illegal content quicker,” he says.

“That is the only way for anyone to keep up with the increasing pace at which this problem is growing and the sophistication of the tools behind the proliferation of harmful and illegal content online.”

An effective and ethical approach 

Lt. Col. Dana Humaid Al Marzouqi, the co-chair of AI4SC, says that the initiative has already learnt a huge amount about the pitfalls associated with the use of AI and the key regulatory and policy requirements to enable an effective and ethical approach, adding that as many as 270 investigators from 72 different countries now form part of the AI4SC community. 

“We have placed the legal and ethical aspects of the use of AI at the fore of our work on the AI for Safer Children initiative,” she tells Emerging Europe. “We promote the responsible development and application of AI in law enforcement. This ethical and legal strategy was shaped in consultation with an expert ethics committee, which focused on aspects such as diversity and inclusion, data protection, and the fair promotion of AI providers.” 

If we are to tap into the true potential of AI, however, Al Marzouqi believes that public confidence in the safe and ethical use of AI is essential for effective implementation. “This can only be achieved through clear policy and regulation,” she suggests. “This in turn presents its own challenges. To harness the full potential of AI we must ensure that regulation does not stifle innovation. A risk-based approach is thought to be the best way to achieve this. It is pragmatic, creates clarity for businesses and drives new investment.”

UNICRI’s technical knowledge and understanding of both the problem and the underlying technology to be leveraged, as well as the UN umbrella it affords, have made it a perfect fit for this kind of project.

“It not only lends global coverage, as is needed for borderless online crimes such as this, but it also helps ensure that UN principles and values are instilled throughout our work on the AI for Safer Children initiative,” adds Beridze. “Its efforts to advance principles such as diversity and inclusion, human rights and the rule of law, and equality make UNICRI and its Centre for AI and Robotics a truly trustworthy and global facilitator through which we can unite the efforts of a diverse set of stakeholders.”

A world of difference for victims 

Al Marzouqi says that while AI is a highly technical topic, in essence she sees it playing a role in the work of law enforcement agencies in three main ways.

Firstly, by increasing prevention: natural language processing can identify and automatically flag predatory user behaviour on social media and child-friendly sites, such as gaming platforms.

Secondly, by increasing detection rates: image recognition can be used to pre-classify child sexual abuse imagery for human review by automatically detecting age and nudity.

Thirdly, by facilitating prosecutions: facial, object and voice recognition can help identify victims, perpetrators, and key details in order to link related pieces of evidence together to build as strong a case as possible.  
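To make the third point concrete, here is a minimal, hypothetical Python sketch of evidence linking. It assumes a recognition model has already converted each file’s faces or objects into embedding vectors; pairs of files whose embeddings are sufficiently similar are linked as potentially related evidence:

```python
import math

SIMILARITY_THRESHOLD = 0.9  # illustrative; real systems calibrate this


def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Standard cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0


def link_evidence(embeddings: dict[str, list[float]]) -> list[tuple[str, str, float]]:
    """Compare every pair of files and link those whose embeddings suggest
    a shared face or object, strongest matches first."""
    file_ids = list(embeddings)
    links = []
    for i, first in enumerate(file_ids):
        for second in file_ids[i + 1:]:
            sim = cosine_similarity(embeddings[first], embeddings[second])
            if sim >= SIMILARITY_THRESHOLD:
                links.append((first, second, sim))
    return sorted(links, key=lambda link: link[2], reverse=True)
```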

“Through engagement with agencies already using such tools, we have seen the true potential of this technology,” she says. “It helps to cut investigative times, reduce case backlogs and get to the victims faster. It all contributes to exponential improvements for law enforcement agencies in the timely identification and reporting of child sexual abuse materials online.”

Beridze agrees: “Data processing is exactly what AI excels at. It can help prioritise an overwhelming (and increasing) number of possible child sexual abuse files according to their likelihood of containing unlawful material; it can take measures based on this classification, such as automatically muting audio to safeguard investigators’ wellbeing; and it can aid in the crucial step of linking files by recognising similar elements through facial or object detection.”
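To sketch how those triage steps might fit together in code – with all names and values assumed for illustration, not taken from any AI4SC tool – files could be ranked by the model’s estimated likelihood, with video files entering the review queue muted by default:

```python
from dataclasses import dataclass


@dataclass
class QueuedFile:
    file_id: str
    likelihood: float  # model-estimated probability (0.0-1.0) of unlawful material
    is_video: bool
    mute_audio: bool   # safeguard: investigators opt in to audio, not out


def build_review_queue(scored_files: list[tuple[str, float, bool]]) -> list[QueuedFile]:
    """scored_files holds (file_id, likelihood, is_video) triples.
    Returns the review queue ordered highest-likelihood first, with audio
    muted by default on video files."""
    queue = [
        QueuedFile(file_id, likelihood, is_video, mute_audio=is_video)
        for file_id, likelihood, is_video in scored_files
    ]
    queue.sort(key=lambda f: f.likelihood, reverse=True)
    return queue
```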

While this potential all sounds great, is it really helping in practice? Beridze says that it already is.

“Law enforcement officers using the technologies have confirmed to us that AI is already contributing to the fight against online child sexual exploitation and abuse by, for instance, cutting investigation times,” he says. “According to members of our network with first-hand experience of these tools, analysis of child abuse images and videos that used to take one to two weeks can now be done in one day. AI tools have also helped with backlogs, significantly cutting forensic backlogs from over 1.5 years down to four to six months.

“This can make a world of difference for victims.”