Hungry for data: Inside Europol’s secretive AI program
Europol’s growing appetite for data and artificial intelligence is transforming European policing, largely out of public view, and without clear oversight.
In The Hague, Europe’s police agency is steadily pursuing an ambitious goal: to amass vast troves of data and train algorithms that could transform policing across the European Union.
Inside Europol, officials describe the plan as “Strategic Objective No. 1” — to turn the agency into the EU’s “criminal information hub”: a vast clearinghouse of personal data drawn from across the bloc, as well as from third countries and private partners with which it has agreements. To critics, it looks less like coordination and more like a quiet experiment in mass data acquisition and surveillance.
Internal Europol documents obtained by this investigation, and reviewed by data protection and AI experts, reveal how the agency is pursuing that ambition. At the center of the effort is artificial intelligence: Europol’s leaders see AI as the key to making sense of the oceans of information now flowing through the agency, from chat-app takedowns to biometric databases.
An investigation published by Solomon, Computer Weekly (UK) and Netzpolitik (Germany) finds that since 2021, Europol has embarked on a largely secretive campaign to develop machine-learning models that could help determine how policing is done across the EU and beyond.

The internal records, along with interviews with officials and regulators, raise fundamental questions: How much data can a police agency collect in the name of security? And what happens when automation enters policing with no effective checks in place?
Europol said in a written response to this investigation that it “maintains an impartial position towards every stakeholder, in order to fulfil its mandate – support national authorities combat serious and organised crime and terrorism,” and that the agency will “be at the forefront of law enforcement innovation and research.”
[See the key documents behind this investigation.]
Europol’s AI experiment began, in part, as a byproduct of unrelated operations.
In 2020 and 2021, three “mega-hack” operations gave police across Europe access to millions of messages sent through supposedly encrypted phones used by criminal networks. The operations targeted three services: EncroChat, SKY ECC and ANOM.
Europol’s role was meant to be limited to transferring the hacked data between national authorities.
But the agency kept copies of the entire trove – more than 60 million messages from EncroChat alone, and more than 27 million from ANOM. It began combing through them on its own servers in The Hague. The agency’s analysts were tasked with finding leads buried in vast amounts of information, including images, files, and text. Very quickly, they realized the volume of data was beyond what humans could handle alone.
Still, the experience planted a seed: if human investigators could not read through the data, perhaps algorithms could. The motive was there: criminals were escaping, and lives were being lost.
Internal documents show that by late 2020, Europol was planning to train seven machine-learning models on the EncroChat data to flag suspicious conversations automatically. This was the agency’s first real experiment with artificial intelligence.
The legality of retaining and analyzing the EncroChat dataset has since been challenged in court, with a related case now pending before the Court of Justice of the European Union.
When ten inspectors from the European Data Protection Supervisor (EDPS), the EU’s privacy watchdog, landed at Europol’s headquarters in September 2021 to review the project, they found a striking lack of safeguards. Almost no documentation on monitoring the training was drafted during the period in which the models were developed.
“All the documents the EDPS received about the development of models were drafted after the development stopped,” the inspectors wrote, referring to the conclusion of the project following a prior consultation with the watchdog. The documents, they went on, “only reflect partially the Data and AI unit’s developing practices”. The report also noted that officials had failed to consider key risks — including bias in the training and use of machine-learning models, and questions about their statistical accuracy.
By February 2021, the agency had pulled the plug on the EncroChat experiment after the EDPS signaled it would need to monitor the project more closely — scrutiny that Europol appeared eager to avoid. Still, the brief foray revealed both Europol’s ambitions and its willingness to stretch the rules to pursue them.
Inside Europol, though, there seemed to be no reason for concern, according to findings later detailed in a December 2022 EDPS inspection report on the agency’s early experiments with machine-learning models. Europol analysts believed the risk of an algorithm wrongly implicating someone was minimal, and the models were never deployed operationally. At the time, the agency’s legal framework did not even grant it an explicit mandate to develop or use artificial intelligence in criminal investigations.
That would soon change.
By mid-2022, Europol’s authority finally caught up with its ambitions.
A new regulation, quietly approved that June, gave the agency sweeping powers to develop and deploy artificial intelligence tools and, for the first time, to exchange operational data directly with private companies.
Almost immediately, Europol found a cause that could make its use of AI politically untouchable: the fight against online child sexual abuse.
The timing was convenient. Just a month earlier, the European Commission had proposed a controversial law that would require digital platforms to scan private messages for abusive material — a plan critics said would undermine encryption and enable mass surveillance. Europol’s leaders saw an opening.
In an internal 2022 meeting with a senior official from the European Commission’s Directorate-General for Home Affairs, Europol argued that such technology could be tweaked to scan for other purposes, not just child sexual abuse material, or CSAM. The agency’s message, according to the minutes, was clear: “All data is useful and should be passed to law enforcement.” “Quality data was needed to train algorithms,” the Europol official said.
They urged the Commission to ensure that police, including Europol itself, could freely “use AI tools for investigations” unencumbered by the limits set out in the AI Act, the Union’s then forthcoming law to restrict the use of algorithms deemed intrusive or high risk. Many of Europol’s systems would likely fall into this category.
Europol’s concerns about the restrictive regime set by the AI Act echoed those of major players in the private sector. The agency’s proximity to private-sector agendas in the name of innovation and AI development is no secret: its own documents repeatedly note that maintaining close contact with technology developers is considered strategically important.
One important point of contact has been Thorn, a US nonprofit that builds AI tools to help police detect images of child sexual abuse online. One of its flagship technologies is a “classifier”: an algorithm trained to automatically sort through vast quantities of digital material, classifying it and flagging content it judges likely to depict abuse.
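In outline, such a tool works like a ranking filter. The sketch below is a minimal, generic illustration of that triage pattern, not a description of Thorn’s product: every name, score, and threshold is a hypothetical stand-in. A model assigns each file a score, and items above a cutoff are queued for human review.

```python
# Illustrative only: a generic scoring-and-triage loop of the kind a
# "classifier" enables. The stub model, threshold, and file names are
# hypothetical and do not describe Thorn's or Europol's systems.
from dataclasses import dataclass
from pathlib import Path
from typing import Callable

@dataclass
class Item:
    path: Path
    score: float  # model's estimated probability the file depicts abuse

def triage(files: list[Path], score_fn: Callable[[Path], float],
           threshold: float = 0.9) -> tuple[list[Item], list[Item]]:
    """Split files into 'queue for human review' and 'deprioritized'."""
    flagged, rest = [], []
    for f in files:
        item = Item(f, score_fn(f))
        (flagged if item.score >= threshold else rest).append(item)
    # The model only ranks; a human analyst still reviews flagged items.
    return sorted(flagged, key=lambda i: -i.score), rest

if __name__ == "__main__":
    stub_model = lambda p: 0.95 if "suspect" in p.name else 0.10  # stand-in model
    hits, _ = triage([Path("suspect_001.jpg"), Path("holiday.jpg")], stub_model)
    print([i.path.name for i in hits])  # -> ['suspect_001.jpg']
```

The design point such tools rest on is that the classifier only prioritizes material; the judgment about any flagged file is meant to remain with a person.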
Since 2022, Thorn has been at the forefront of a lobbying campaign in Brussels supporting the Commission’s proposal that would require messaging platforms to use AI classifiers like its own.
Behind the scenes, Thorn and Europol were already working hand in hand.
A cache of email exchanges between the company and the agency, spanning September 2022 to May 2025 and obtained through a series of freedom of information requests, shows Europol officials asking the company to help them access confidential technical material as they experimented with developing their own classifier.
In one exchange, sent just before the new Europol regulation took effect, an agency official asked whether staff working at Analysis Project Twins, Europol’s unit covering CSAM, could gain access to certain Thorn resources. “I have to stress this document is Confidential and not for re-distribution”, a Thorn representative replied.

Five months later, Europol asked Thorn for help accessing classifiers developed in a project it had taken part in, so the agency could evaluate them.
According to machine-learning expert Nuno Moniz, the exchanges raise serious questions about the relationship between the two actors. “They are discussing best practices, anticipating exchange of info and resources, essentially treating Thorn as a law enforcement partner with privileged access,” said Moniz, who is an Associate Research Professor at the Lucy Family Institute for Data & Society at the University of Notre Dame in Indiana.
The correspondence indicates that as Thorn engaged with Europol on the technical details of its own classifier plans, the company was also granted unusual visibility into the agency’s internal AI plans — access that no other external actor is known to have enjoyed.
The close collaboration continued. The emails show Thorn’s staff meeting Europol officials to “catchup over lunch,” and being invited to present their classifier to Europol’s CSAM team, AP Twins, at the agency’s headquarters in The Hague.

In the most recent exchange obtained by this investigation, from May 2025, Thorn discussed with Europol counterparts its rebranded CSAM classifier.
Europol insists it “has not, to date, purchased any CSAM software product from Thorn.” Much of its correspondence with the company remains heavily redacted or withheld entirely, despite a call from the European Ombudsman for broader disclosure. Europol argues that some undisclosed records “contain strategic information of operational relevance regarding Europol’s working methods in relation to the use of image classifiers”, including specific systems discussed internally and with Thorn.
Asked about the findings of this investigation, Thorn’s director of policy, Emily Slifer, said in a statement that “given the nature and sensitivity” of the company’s work, it does not comment on interactions with specific law enforcement agencies. “As is true for all of our collaborations, we operate in full compliance with applicable laws and uphold the highest standards of data protection and ethical responsibility.”
In a statement to this investigation, Europol said that the agency’s “approach of cooperation is guided by the principle of transparency,” adding that “not a single AI model from Thorn has been considered for use by Europol. Hence, there is no collaboration with developers of Thorn for AI models in use, or intended to be made use of by Europol.”
The opaqueness of Europol’s partnership with Thorn is only one part of a wider problem: the agency’s growing secrecy around its use of AI.
Europol has repeatedly refused to release key documents about its AI program — including data protection impact assessments, “model cards” explaining how algorithms were developed, and minutes of management board meetings.
When records are released, they are often so heavily redacted that they reveal little. In some cases, the agency has missed statutory deadlines by weeks; in most, it has cited “public security” and “internal decision-making” exemptions to justify withholding information.
The European Ombudsman, however, has repeatedly questioned the vagueness of those claims in preliminary findings, noting that Europol has failed to explain how disclosure would concretely endanger its operations.
Final decisions on five transparency complaints filed by this investigation are now pending before the European Ombudsman.
The opacity reflects a deeper weakness in Europol’s oversight system, which was supposed to strengthen as the agency’s powers expanded but so far has not.
Inside Europol, responsibility for monitoring fundamental rights rests largely with the Fundamental Rights Officer, a position created under the agency’s new mandate to ease fears of abuse. But the officer, first appointed in 2023, has little authority: the office’s opinions are nonbinding and carry no enforcement power.

“Europol’s Fundamental Rights Officer does not function as an effective safeguard against the risks posed by the agency’s increasing use of digital technologies. The role is institutionally weak, lacking internal enforcement powers to ensure that its recommendations are followed,” says Bárbara Simão, an AI accountability expert at Article 19, a London-based human rights organization that tracks the impact of surveillance and AI technologies on freedom of expression, who reviewed several of the FRO’s nonbinding assessments of Europol’s AI tools obtained by this investigation.
“To fulfill its role as an internal oversight mechanism, it must move beyond a symbolic function, properly scrutinise the technologies being deployed and be given genuine authority to uphold fundamental rights”, she added.
Many of those reports quietly admit as much. “At this moment, no tools exist for the fundamental rights assessment of tools using artificial intelligence,” one reads. The office’s review process, it adds, was not based on any established methodology, but rather “inspired” by ethics guidelines, including those found in The Responsible Administrator, a textbook from 1998.
When questioned by the European Ombudsman about its lack of transparency in handling freedom of information requests, Europol pointed to the Joint Parliamentary Scrutiny Group (JPSG), tasked with monitoring the agency, as evidence that its “legitimacy and accountability” were safeguarded. In reality, the scrutiny group’s powers are limited to asking questions and requesting documents from the agency.
That leaves the European Data Protection Supervisor, the bloc’s privacy watchdog, as the last real check on the agency’s expanding powers. “It is crucial that these [Europol’s] activities do not lead to the overretention of data or the development of flawed tools for operational use across the EU”, the EDPS told this investigation. But with limited staff, resources, and a data-protection-focused mandate, the EDPS is not equipped to oversee every aspect of the rise of artificial intelligence in European policing.
By the summer of 2023, building an in-house AI classifier had become the top priority for Europol’s AI program.
An internal advisory document from the agency’s Fundamental Rights Office described the plan: to create “a tool that uses artificial intelligence (AI) to classify automatically alleged child sexual abuse (CSE) [child sexual exploitation] images and video”. The document acknowledged one crucial risk: that biased training data could make the system more likely to identify abuse in images featuring certain races or genders. But it offered only a four-line note suggesting that data should be “balanced” in order “to limit the risk the tool will recognise CSE only for specific races or genders”.
Training would rely on two datasets: known material depicting abuse and “non-CSE” imagery. It remains unclear where the agency would obtain the latter. The child abuse material, officials wrote, would come primarily from the National Center for Missing and Exploited Children (NCMEC), a US-based nonprofit that collects reports from tech companies and works closely with North American law enforcement.
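What “balanced” would mean in practice is left open. One common reading, sketched below purely as an illustration, is undersampling so that each combination of class label and demographic group contributes equally to the training set; the field names are hypothetical, and nothing in the documents says this is the method Europol envisaged.

```python
# Illustrative only: "balancing" read as undersampling so each
# (label, group) cell is equally represented. All field names are
# hypothetical stand-ins, not Europol's actual schema.
import random
from collections import defaultdict

def balance(examples: list[dict], seed: int = 0) -> list[dict]:
    """examples: [{'file': ..., 'label': 'cse'|'non_cse', 'group': ...}]"""
    cells = defaultdict(list)
    for ex in examples:
        cells[(ex["label"], ex["group"])].append(ex)
    n = min(len(members) for members in cells.values())  # smallest cell size
    rng = random.Random(seed)
    balanced = []
    for members in cells.values():
        balanced.extend(rng.sample(members, n))  # undersample larger cells
    rng.shuffle(balanced)
    return balanced

if __name__ == "__main__":
    data = [{"file": f"img{i}", "label": l, "group": g}
            for i, (l, g) in enumerate([("cse", "a")] * 6 + [("cse", "b")] * 2
                                       + [("non_cse", "a")] * 5 + [("non_cse", "b")] * 3)]
    print(len(balance(data)))  # -> 8: four cells, two examples each
```

Undersampling discards data, and balancing inputs alone does not guarantee equal error rates across groups once a model is deployed, a gap a four-line note does not address.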
Although Europol eventually put plans to train its own classifier on the back burner, around the same period data supplied by NCMEC began feeding directly into the agency’s first automated model to go fully operational, quietly deployed that October.
Named EU-CARES (EU Child Abuse Referral Service), the system now acts as an around-the-clock clearinghouse for child abuse reports arriving from the US. When US internet companies such as Meta or Google flag potentially abusive images or videos, they are legally required to send them to NCMEC. Referrals linked to EU activity are then transmitted to Europol. EU-CARES automatically downloads each file, cross-checks the information against Europol’s internal databases, and dispatches the results to police forces in EU member states, often within minutes.
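Stripped of scale, the flow described is a straightforward automated pipeline: fetch the referral, cross-check it against internal databases, and fan the result out to the member states involved. The sketch below captures only that shape; every function, field, and interface is a hypothetical stand-in, as EU-CARES’s actual internals are not public.

```python
# Illustrative only: the fetch -> cross-check -> dispatch flow the
# article describes. All names here are hypothetical stand-ins.
import datetime

def handle_referral(referral_id: str, fetch, cross_check, dispatch):
    report = fetch(referral_id)              # pull the NCMEC referral
    matches = cross_check(report)            # query internal databases
    for state in report["member_states"]:    # fan out to each EU country involved
        dispatch(state, {
            "referral": referral_id,
            "matches": matches,              # automated; may be wrong (see below)
            "received": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        })

if __name__ == "__main__":
    handle_referral(
        "R-001",
        fetch=lambda rid: {"member_states": ["DE", "NL"]},
        cross_check=lambda rep: ["db_hit_42"],
        dispatch=lambda state, payload: print(state, payload["matches"]),
    )
```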
Before automation, that entire process was done manually by analysts in The Hague. But as the number of reports exploded, fed by the massive data streams of American tech giants, the backlog became unmanageable.
Automation solved one problem but created another. Europol’s own assessment warned of the risk of “incorrect data reported by NCMEC” and “incorrect cross-match reports associated with referral”. In other words, the system could misidentify people or link innocent individuals to child-abuse investigations.
The EDPS warned that such errors could have “severe consequences” and ordered Europol to adopt stronger safeguards. It asked the agency to address risks such as wrongly cross-matched reports and the inclusion of inaccurate or miscategorized data — for example, identifiers belonging to people whose social media accounts had been stolen and used to share illegal material — which could wrongfully link innocent individuals to child abuse investigations.
In response, Europol committed to marking suspect data as “unconfirmed”, adding “enhanced” trigger alerts for anomalies, and improving its system for removing retracted referrals. Among other measures, the agency said these steps would address the EDPS’ concerns about accuracy and cross-match errors.
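Rendered against the hypothetical pipeline sketched above, those commitments amount to guards on the dispatch step. Again, the names below are illustrative stand-ins, not Europol’s implementation.

```python
# Illustrative only: the three committed safeguards expressed as guards
# around dispatch. All names are hypothetical stand-ins.
def apply_safeguards(payload: dict, is_anomalous, retracted_ids: set):
    if payload["referral"] in retracted_ids:
        return None                              # drop referrals NCMEC has retracted
    if is_anomalous(payload):
        payload["alert"] = "anomaly"             # "enhanced" trigger alert for review
    if payload.get("matches"):                   # automated cross-matches are not
        payload["match_status"] = "unconfirmed"  # treated as verified facts
    return payload
```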
In February 2025, Europol’s executive director, Catherine De Bolle, told lawmakers that EU-CARES had processed more than 780,000 referrals since the system went operational. How many were accurate is unknown. With manual processing removed, Europol leaves it to member states to determine the validity of what they receive. The German federal police, which receives NCMEC reports directly without using Europol’s system, told this investigation that 48.3 percent of the 205,728 reports it received in 2024 had no investigative value.

Even as the EU’s data protection watchdog pressed for safeguards on EU-CARES, Europol was expanding automation into another sensitive field: facial recognition.
Since 2016, the agency has tested and purchased several commercial tools. Its latest acquisition, NeoFace Watch, from Japanese tech firm NEC, was meant to eventually replace or complement an earlier system known as FACE, which could already access about one million facial images by mid-2020.
Heavily redacted correspondence shows that by May 2023, Europol was already discussing the use of NeoFace Watch. When it later submitted the new program for review, the EDPS warned of the “risk of lower accuracy processing for the faces of minors (as a form of bias),” and “of incoherent processing” if old and new systems (such as the existing FACE and NeoFace Watch) ran in parallel. After the consultation, Europol decided, as a precaution, to exclude data on minors under the age of 12 from processing.
Europol’s submission to the EDPS cited two studies by the US National Institute of Standards and Technology (NIST) to justify its choice of NeoFace as its new facial recognition system.
In one of the studies, NIST specified that it did not use “wild images” sourced “from the Internet nor from video surveillance,” which are the kinds of sources Europol would use. In a related report, NIST’s evaluation of NEC’s algorithm documented an identification error rate of up to 38 percent for photos taken in poor lighting conditions.
Europol signed a contract with NEC in October 2024. Similar deployments of NeoFace Watch in the UK have faced legal challenges over bias and privacy.
In a nonbinding advisory opinion that November, Europol’s FRO described the system as one that “raises risks of false positives” that can “harm the right of defence or of fair trial”. The system is considered high-risk under the new EU AI Act. Nonetheless, the FRO cleared it for use, merely urging the agency to acknowledge when the tool is used in cross-border investigations to “enhance transparency and accountability, key to keep the trust of the public”.
NEC, the technology producer, told this investigation that NeoFace Watch was ranked as “the world’s most accurate solution” at NIST’s most recent testing round. It added that its product “has undergone extensive independent testing by the National Physical Laboratory (NPL) and was found to have zero false positive identifications when used live in typical operational conditions.” The company declined to comment on details of its cooperation with Europol.
High-accuracy figures alone do not make facial recognition safe, nor do they address the legal and rights concerns documented in cases like those in the UK. Experts, including Luc Rocher, an Associate Professor at the Oxford Internet Institute, have demonstrated that facial recognition evaluation methodologies still fail to fully capture real-world performance, where factors like image quality, population scale, and demographic diversity cause accuracy to degrade significantly, particularly for racial minorities and young people.
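The population-scale point yields to back-of-the-envelope arithmetic. Assume, purely for illustration, a one-in-100,000 false-match rate per comparison, a figure chosen for convenience rather than drawn from any NEC or Europol measurement. Searched against a gallery the size of the roughly one million images the earlier FACE system could access, every probe would still surface false candidates:

```python
# Back-of-the-envelope only: the false-match rate is an assumed figure,
# not a measured property of NeoFace Watch or any Europol system.
false_match_rate = 1e-5      # assumed probability of a false match per comparison
gallery_size = 1_000_000     # roughly the FACE system's reported scale by mid-2020
print(false_match_rate * gallery_size)  # -> 10.0 expected false candidates per search
```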
Bárbara Simão of Article 19 noted that emphasizing technical performance “tends to downplay risks associated with facial recognition technologies,” including the bias against minors flagged by the EDPS and threats to fair trial rights identified by Europol’s own watchdog.
A binding internal roadmap from 2023 outlines the true scale of Europol’s ambition: 25 potential AI models, ranging from object detection and image geolocation to deepfake identification and personal feature extraction.
The vision would place the agency at the center of automated policing in the EU, as tools deployed by Europol could be used by virtually all law enforcement bodies across the bloc.
In February 2025, De Bolle told European lawmakers that the agency had submitted ten data protection impact assessments to the EDPS: seven for models already being developed and three for new ones.
Members of the Joint Parliamentary Scrutiny Group asked Europol to provide a detailed report on its AI program. When the agency delivered, it sent lawmakers a four-page paper with generic descriptions of its internal vetting processes, without any substantive information on the AI systems themselves.
Saskia Bricmont, a Belgian member of the European Parliament with the Greens and a longtime member of Europol’s Joint Parliamentary Scrutiny Group, told this investigation that the AI systems Europol is developing “can entail very strong risks and consequences for fundamental rights” and that “strong and effective supervision” is “crucial.”
But she added that, despite the information Europol has provided to the committee, “it remains very complex for MEPs to fulfill their monitoring task and fully assess the risks associated with the use of AI-based systems by the agency.”
The European Commission has announced reforms to strengthen Europol and turn it into “a truly operational police agency.” While the details of this transformation remain unclear, the Commission has already proposed doubling Europol’s budget for the next financial term to €3 billion in public funds.
[See the key documents behind this investigation.]
This investigation was supported by Investigative Journalism for Europe (IJ4EU) and Lighthouse Reports.