From Religious Figures to Journalists, and Minors in Peril

In the intricate web of the digital realm, the Balkans in September experienced a series of incidents involving religious figures, contentious content and the ensuing digital outrage.

In Bosnia and Herzegovina, an alarming incident of domestic violence and the ensuing online backlash prompted controversial comments from a Catholic priest. Meanwhile, in Romania, the cancellation of screenings of a documentary about an Orthodox priest sparked debate, while in North Macedonia a priest made unverified claims about LGBT-themed content, inciting hate speech.

In Montenegro, journalist Balša Knežević found himself entangled in legal troubles due to online content targeting the Serbian Orthodox Church. Journalists in Serbia faced a convergence of physical threats and online abuse, putting their work and safety further at risk.

Albanian online media outlet JOQ meanwhile unearthed a series of disturbing incidents involving minors in the digital space, raising concerns about the safety of the younger generation.

Religious figures trigger digital controversies

In the West Herzegovina Canton of the Federation entity in Bosnia, Denis Buntić, a former handball player, found himself at the centre of allegations of domestic violence. Klara, his wife, reported the incident, sharing a harrowing video capturing a violent altercation. The video revealed her desperate attempts to defend herself against Buntić, all while an infant, just a few months old, was present. As the media extensively covered this disturbing incident and the video went viral, Klara Buntić became the target of a barrage of chauvinistic insults on social media. She was labeled a “bad actress”, criticized for her appearance with derogatory terms like “silicone girl” and unfairly blamed for provoking the incident.

The aftermath of the incident stirred a public outcry. Amid the discussions, Jesuit priest Ike Mandurić entered the fray on Facebook, sparking more controversy with his comments. In his status update, Fr Mandurić downplayed the severity of the incident and expressed discriminatory and chauvinistic opinions. He insinuated that women were primarily at fault and criticized men who showed empathy, stating: “If you have endured such hysteria and whining, I admire you! Boys, kudos to you!” The remark drew widespread condemnation, leading Mandurić to ultimately delete it.


An elderly woman wearing a face mask prays a day before Good Friday in an Orthodox church in Skopje, Republic of North Macedonia, 16 April 2020. Photo: EPA-EFE/GEORGI LICOVSKI

Beyond Bosnia, the Municipal Cultural Centre in Arad, western Romania, found itself in the midst of a different form of controversy after it cancelled two screenings of a documentary about Fr Arsenie Boca, an Orthodox priest and theologian who has become a symbol for Orthodox pilgrims since his death in 1989. The film’s cancellation, just two days before the screenings, was attributed to alleged threats from individuals claiming they would show up to stop the screening. Meanwhile, in North Macedonia, a prominent Macedonian Orthodox priest made unverified claims about LGBT-themed content in textbooks. This ignited a flurry of reactions and hate speech aimed at the LGBT community in the country.

In Montenegro, recent developments in the digital sphere have brought to light a broader narrative involving journalist Balša Knežević and the online publication Portal Aktuelno. Knežević, the portal’s editor-in-chief, was recently questioned by police on the orders of the Higher State Prosecutor’s Office, VDT. The complaint was filed on grounds of hate speech after the portal called the Serbian Orthodox Church in Montenegro a “Sect of Saint Sava” and the “so-called Serbian Orthodox Church”, insulting numerous believers.

Journalists face physical and online threats in Serbia

In Serbia, journalists are facing a troubling convergence of physical and online threats in the digital age. While their mission is to uncover the truth and report it, they increasingly find themselves at risk, both on the streets and in the virtual world.

Within this realm, journalists who document the world’s events have become unwilling subjects of their own narratives, as physical confrontations disrupt their work. The case of Maja Djuric, a journalist from N1 television, is emblematic of the risks journalists bear in the field. Djuric’s physical assault in Mitrovica, Kosovo, while capturing video material, highlights the perils they encounter, as their pursuit of the truth often puts them on the front lines.


Head of the Serbian government’s Office for Kosovo and Metohija, Marko Djuric, leaves after a press conference in Belgrade, Serbia, 27 March 2018. Photo: EPA-EFE/ANDREJ CUKIC

Simultaneously, digital attacks are on the rise. Accusations and online abuse directed at journalists and media outlets are increasingly common. Television Hepi’s criticism of journalist Brankica Stanković and the Insajder editorial team during a guest appearance by Zoran Ćirjaković underscores the influence of digital platforms in magnifying accusations and insults. These digital threats raise concerns about the integrity and independence of journalism, as journalists find themselves under constant scrutiny.

While social media platforms play a crucial role in shaping public opinion, they can quickly turn into a battleground for journalists, as evidenced by the outpouring of threats and offensive comments in response to a Serbian newspaper’s coverage of the Serbian Orthodox Church’s role in Kosovo. The escalation reached a dangerous point with a threat to “burn down the editorial office” of Danas, underscoring the pressing need to address online safety and hold tech platforms accountable.

As the legal system steps in, Editor-in-Chief Dragoljub Petrovic’s response to threats against Danas newspaper’s editorial team exemplifies the importance of pursuing accountability. The identification and detention of a suspect involved in the threats serve as a beacon of hope for the protection of press freedom in the country.

Disturbing digital space incidents involving minors in Albania

Albanian online media outlet JOQ recently brought to the forefront a series of alarming incidents in the digital sphere. These range from child endangerment to the promotion of narcotics among minors, raising pressing concerns about the digital safety of the younger generation.

One case that JOQ recently brought to light ignited outrage within the online community: an Albanian TikTok user shared a photo of a woman and her infant, the child depicted with a cigarette in its mouth. Adding fuel to the fire, the photo’s caption brazenly proclaimed: “Big and small gang, we want this to go viral”.

Another troubling incident reported by a concerned citizen has further underscored the perils of social media. JOQ featured a video depicting a minor boy being subjected to a violent assault by his peers. While JOQ did share the video, its primary circulation took place on the Snapchat app, shedding light on the challenges of monitoring and curbing harmful content within various platforms.

In a separate incident, a 31-year-old Albanian citizen from Kurbin found himself in legal jeopardy after broadcasting a live video on TikTok. The video showed the man providing a minor with a cigarette allegedly laced with cannabis. The minor was filmed smoking a cigarette in the company of the adult. The man’s subsequent arrest for “encouraging the use of narcotics” underscores the gravity of promoting harmful behaviours, particularly among impressionable youth.

Bosnia has been covered by Elma Selimovic, Aida Trepanić and Azem Kurtic, Romania by Adina Florea, North Macedonia by Bojan Stojkovski and Goce Trpkovski, Montenegro by Djurdja Radulovic, Albania by Nensi Bogdani, Serbia by Tijana Uzelac and Kalina Simic.

Croatian Journalists Decry Govt Plan to Criminalise Crime Leaks

In January this year, when media in Croatia got hold of correspondence involving a former cabinet minister caught up in a corruption probe, her mention of a certain “A.P.” quickly set tongues wagging. The prime minister then, as now, was Andrej Plenkovic.

The correspondence was being used as evidence in an investigation launched by the European Public Prosecutor’s Office into the allegedly inflated cost of software ordered in 2019 by then EU Funds and Regional Development Minister Gabriela Zalac.

There was nothing in the reports that incriminated the prime minister, but it proved embarrassing nonetheless – a sense of guilt by association.

Now, his government is poised to criminalise the “unauthorised disclosure of the content of investigative or evidentiary action”, a change to the Criminal Code that the media has dubbed ‘Lex AP.’

The government insists the aim is to support the presumption of innocence, protect the privacy of suspects and ensure the independence of the judiciary. Critics, however, say the real objective is to silence journalists who rely on leaks from police, prosecutors and the courts to report on the misdeeds of politicians and public officials.

“This is an unprecedented attack on the freedom of journalism, on the journalistic profession, on whistleblowers,” said Hrvoje Zovko, head of the Croatian Journalists’ Association, HND, who linked it to parliamentary, presidential and European parliamentary elections due next year and the storm over the Zalac correspondence.

“This will not be passed to criminally prosecute the journalist, but to discredit and contaminate the journalist and to ensure that no potential source dares to dial the journalist’s phone number,” Zovko told BIRN. “This will have catastrophic consequences for the journalistic profession, and the public will be deprived of everything.”

Journalists fear ‘sources will dry up’

Last week, a round table on the new law was held at the Croatian Journalists’ Association in Zagreb. Photo: SNH

The proposed change to the law – which foresees punishment of up to three years in prison – was submitted to public consultation on September 22, after which the government will likely send it to parliament.

Croatia’s centre-right government, led by Plenkovic’s Croatian Democratic Union, HDZ, has defended the bill, saying it is in no way directed against journalists.

“It only applies to the participants in the [legal] proceedings,” Justice Minister Ivan Malenica said last month. “Through this criminal act, the victim’s right to privacy and the presumption of innocence are protected.”

Croatia, however, has a history of high-level corruption and ranks 57th out of 180 countries on Transparency International’s perception of corruption index. Such a law will hardly contribute to greater public trust in the integrity of public officials, said Zovko.

The law, he said, will render journalism “meaningless” and simply protect “the political elites, regardless of who is in power”.

“We will fight against it, but at the end of the day, if it is passed, it can be used abundantly by whoever is in power.”

HND member and N1 television journalist Ana Raic warned that “sources will dry up”.

“I hope that there will still be enough brave people who will still want to expose corruption or say that this investigation is taking too long,” Raic told a news conference in October.

Investigative journalists, in particular, face being plunged into “the blackest darkness”, she said, as they try to follow criminal cases and expose wrongdoing.

Criminal offence ‘unnecessary and harmful’

The round table on the new law was attended by a couple of former ministers and several prominent intellectuals. Photo: SNH

While conceding the right and the obligation of the state to prevent the disclosure of information from certain stages of criminal proceedings, lawyer Vesna Alaburic, who frequently defends journalists taken to court over their reporting, said the introduction of such a criminal offence “is unnecessary and harmful”.

“The proposed solution sanctions the disclosure of content of every evidentiary act, regardless of whether that content is at all important for a specific criminal proceeding or perhaps it is about content for which there is a predominant interest of the public,” said Alaburic, noting that Croatian law already allows certain information to be declared secret and any disclosure of such information is considered a crime.

“That’s why I don’t consider it justified that the new criminal offence indiscriminately, without valid justification, prohibits the disclosure of the content of absolutely every evidentiary action,” she told BIRN.

“The public would be denied the exercise of its legitimate ‘right to know’ because even journalists would not be able to convey to the public all the contents of the predominant public interest.”

Alaburic said she did not believe journalists risk prosecution, given the law applies only to the participants in criminal proceedings, but may come under pressure to reveal the sources of their information.

Political reaction to the government’s plans has been muted. One of the few to speak out was MP Damir Bajs of Fokus party.

“With the introduction of a new criminal offence, we will have two Croatias,” he said. “One before and one after the introduction of that criminal act.”

“We will only talk about good things, and it will not be possible to spoil the mood of the government and the prime minister or anyone in power,” Bajs said.

“There is only one question – in which Croatia do we want to live? In a country where the government can say what it wants, but nothing can be published about them?”

Turkey Increases Crackdown on Journalists, Citing Kurdish Terror Threat

Veteran Turkish journalist Merdan Yanardag was sentenced to two years and six months in prison on Wednesday for “making propaganda for a terrorist organisation” following his criticism of the jail conditions of Abdullah Ocalan, leader of the outlawed Kurdistan Workers’ Party, PKK.

Despite the sentence, Yanardag was released, given that he had already spent more than three months in prison.

“I criticized the policies followed by the [ruling] Justice and Development Party. They came to a conclusion that I praised Ocalan, for no reason. Why did you arrest me?” Yanardag asked on Wednesday in a press conference in front of Marmara Prison, formerly known as Silivri Prison, which is famous for holding political prisoners.

“This prison is the symbol of the regime’s tyranny,” Yanardag added.

Another senior journalist, Aysenur Arslan, was taken into police custody and investigated by the prosecutor’s office for her comments on last Sunday’s Ankara bombing, which was claimed by the PKK.

Arslan was accused of praising terrorists. Both Arslan and Yanardag were targeted by pro-government media and social media trolls due to their comments.

“I explained what I really said [on TV]. As a result, I was released,” Arslan said on Wednesday in front of the court house in Istanbul.

Turkey’s Radio and Television Supreme Council, RTUK, the government agency for regulating TV and radio broadcasts, has taken punitive measures against Halk TV, citing comments made by Arslan on the channel related to Sunday’s bomb attack in Ankara.

RTUK imposed five programme suspensions, deeming Arslan’s comments a violation of the rules. It also fined Halk TV 3 per cent of its revenues for “crossing the line of criticism”.

Yielding to the government of President Recep Tayyip Erdogan, Halk TV has since ended Arslan’s programme and fired her.

“Although terrorism was condemned in the same program, the unfortunate words spoken live in the … program aired yesterday go beyond the limits of Halk TV’s stance and perspective. Therefore, we announce with regret that we have decided to end the program,” Cafer Mahiroglu, chair of the Board of Directors of Halk TV, announced on Tuesday.

Following last Sunday’s bombing, Turkey has intensified its police and military operations against the PKK.

Since Sunday, police and gendarmerie have arrested at least 105 people, and two more Kurdish militants were killed in clashes with the gendarmerie in Agri province on Wednesday.

As the air force continues to bomb PKK targets in northern Iraq, the Turkish Defence Minister said that two of the bombers came to Turkey from parts of northern Syria controlled by the Kurdish People’s Defence Units, YPG, forces backed by the United States.

“We would like everyone to know that all facilities and activities of the PKK and YPG in Iraq and Syria are our legitimate targets,” Defence Minister Yasar Guler said.

Turkey considers the YPG a sister organisation of the PKK.

However, Mazloum Abdi, the leader of the YPG, denied playing any role in the bombing.

“Ankara’s attack perpetrators haven’t passed through our region as Turkish officials claim, and we aren’t party to Turkey’s internal conflict, nor do we encourage escalation. Turkey is looking for pretexts to legitimize its ongoing attacks on our region and to launch a new military aggression that is of deep concern to us,” Abdi said on Twitter on Wednesday. He added that Turkish attacks on cities are war crimes.

Kosovo Bans Serbia Sport TV Channels Over Messages ‘Glorifying’ Banjska Attack

Kosovo’s Independent Media Commission, IMC, on Tuesday urged a halt to broadcasts of Serbian sport TV channels, days after they carried messages supporting the armed Serbs killed in a shootout with Kosovo Police in Serb-majority northern Kosovo on September 24.

“We urge distribution operators to stop broadcasting Arena [Sport] channels,” the head of the IMC Board, Jeton Mehmeti, said in a meeting in which five members of the board supported the ban.

“We now have evidence that Arena Sport channels … broadcast video messages which came from Serbia and contained glorifications of the terrorist attack in the north, and represented threatening messages to Kosovo citizens,” Mehmeti said.

The decision affects ten Arena Sport channels which are carried on Kosovo’s main cable TV platforms.

Art Motion, one of the Kosovo cable TV networks which carries Arena Sport channels, did not respond to BIRN’s request for comment on Wednesday over the IMC decision. Arena Sport is owned by Telekom Srbija company.

Another cable provider, IPKO, told BIRN on Friday that it will respect the decision “until the provider of these channels fixes this matter definitively with the IMC”.

The IMC is an independent institution responsible for the regulation, management and oversight of the broadcasting frequency spectrum in Kosovo.

Two days before the decision, Mehmeti said the IMC had received complaints from viewers that, during the half-time break and after football matches, Arena Sport channels broadcast messages in support of the armed gunmen who attacked Kosovo Police on September 24 in the village of Banjska in the northern municipality of Zubin Potok. A Kosovo Police officer and three of the gunmen were killed in the shootout.

The other attackers managed to escape through mountainous terrain.

Graphics shown on the TV screen bore the messages: “We will remember” and “Glory to heroes” with a photo of Banjska Monastery, Serbia’s coat of arms and the date “24.09.2023” together with the inscription “Manastir Banjska” at the bottom left of the screen.

The aftermath of the attack has caused controversies inside Kosovo’s public broadcaster, Radio Television of Kosovo, RTK. On September 30, its board suspended Zeljko Tvrdisic, the director of RTK 2, the channel broadcasting in the Serbian language. According to the announcement, the suspension is valid for 30 days.

“After a detailed discussion … regarding the chronicle broadcast on the news of RTK2 dated September 28, 2023, following the proposal of the general director, the Board of RTK has unanimously decided to suspend from office for a period of 30 days the director of RTK 2, Zeljko Tvrdisic,” media cited the RTK Board as saying.

According to the media, RTK 2 news had described the three Serbian gunmen killed in the police action in Banjska as “victims”.

The management of RTK has said it will conduct a detailed analysis of the situation to identify violations of professional standards. RTK has warned of other disciplinary measures.

But the original RTK2 news article that BIRN has seen in fact used the word “stradali” for the killed Serbs, which in Serbian means “died”, “killed”, or “perished”, not “victims”, which is “zrtve” in Serbian.

Tvrdisic told BIRN that he is waiting for the internal commission of the public broadcaster, formed on October 3, to finish its evaluation and inform him about the decision.

He said that it was “scandalous” that he was informed about his suspension by the media on September 30 and only received the official note from RTK on Monday, October 2.

“I am confident that I have acted in accordance with professional standards, the ethics code and the law. The problem is we reported on an event in [the town of] Gracanica, where the lighting of candles for the murdered was organised. It was also stated as a problem that, in a statement, we had the President of the Serbian Journalists Association, who used the word ‘Metohija’,” Tvrdisic told BIRN.

Serbia officially uses the expression “Kosovo and Metohija” for Kosovo – a term which many Kosovars see as implying a Serbian character to Kosovo.

Turkey Ranked Among ‘Worst Countries’ for Internet Freedoms by Freedom House

“Freedom on the Net 2023: The Repressive Power of Artificial Intelligence”, a new report published by human rights watchdog Freedom House, says global internet freedoms declined for the 13th consecutive year – and that Turkey has become one of the worst countries in the world in terms of internet freedoms.

The report underlined that attacks on free expression grew more common around the world while Artificial Intelligence, AI, has allowed governments to enhance and refine online censorship.

“While an improvement in internet freedom was observed in 20 countries around the world this year, a decline was detected in 29 countries, including Turkey. Unfortunately, there is a contraction in internet freedoms around the world as a result of authoritarian pressure,” Gurkan Ozturan, Media Freedom Rapid Response Coordinator at the European Centre for Press and Media Freedom, one of the authors of the Freedom House report, told BIRN.

Freedom House listed Turkey as “not free” in its internet freedoms index, with a score of only 30 out of 100 points.

Ozturan added that Turkey has seen one of the most rapid declines in internet freedoms.

“With a 15-point decline since 2014, Turkey is in joint third place with Venezuela and Uganda among the fastest-declining countries, after Myanmar, with a 30-point decline, and Russia, with 19 points,” Ozturan said.

Ozturan said that key developments in Turkey in 2023 included restrictions and censorship, especially in the aftermath of bombings and earthquakes; disinformation campaigns during the election period, the passing of a Disinformation Law, and revelations of mass surveillance by government bodies.

As the report’s title suggests, AI has become a major concern for internet freedoms, as governments deploy it at the expense of those freedoms.

“We see that Artificial Intelligence technology, which created excitement around the world last year, is used by many governments for mass surveillance and censorship purposes. If no regulation is made in the coming period, it would be surprising if these practices do not lead to an even more oppressive internet management and social life,” Ozturan warned.

Increasing Government Control of Internet in Serbia and Hungary

Aleksandar Vucic (R) receives Hungarian Prime Minister Viktor Orban ahead of a meeting of the Hungary-Serbia Strategic Cooperation Council in Palic, Serbia, 20 June 2023. EPA-EFE/Vivien Cher Benko

In addition to Turkey, Hungary and Serbia from Central and Southeastern Europe were also covered by the Freedom House report.

Hungary is listed as “partly free” with 69 points out of 100, but the Hungarian government continues trying to tighten its control over the internet.

“Internet freedom in Hungary remains relatively open, but threats have increased in recent years. Hungary enjoys high levels of overall connectivity and relatively affordable internet access. While there are few overt restrictions on content in Hungary, the government continues to consolidate its control over the telecommunications and media landscape,” the report said.

Serbia is listed as “free” with 71 points, placing it at the lower edge of the “free” category for internet freedoms.

“Serbia registered a slight decline in internet freedom during the coverage period. The country features high levels of internet access, limited website blocking and strong constitutional protections for journalists,” the report said, but warned about disinformation campaigns and surveillance by the government.

“Pro-government news sites, some of which are connected to the ruling party, engage in disinformation campaigns. The government has reportedly employed trolls on social media to advance its narrative and denigrate critics,” it said.

The surveillance infrastructure poses concerns as well, with research showing that government agencies have used spyware surveillance tools, including Predator. “Journalists continue to face strategic lawsuits against public participation, SLAPPs, concerning ‘insults’ or ‘slander’ against public officials, though detentions and prison sentences in these cases are rare,” the report said.

The Freedom on the Net project is a collaborative effort between Freedom House and a network of more than 85 researchers, who come from civil society organisations, academia, journalism and other backgrounds, covering 70 countries.

European Rights Court Faults Turkey in Convictions for Online Posts

Turkey violated the right to freedom of expression in convicting two individuals over their social media posts, one in support of jailed Kurdish leader Abdullah Ocalan and the other describing President Recep Tayyip Erdogan as a “filthy thief”, the European Court of Human Rights ruled on Tuesday.

Baran Burukan was given a prison sentence of just over a year in 2018 after he shared content that contained the words, ‘Long live the Kurdistan resistance’ and ‘Long live Abdullah Ocalan’. Arrested in 1999 and convicted of terrorism, Ocalan, the leader of the outlawed Kurdistan Workers’ Party, PKK, is serving a life sentence.

The second applicant to the Strasbourg court, Ilknur Birol, was given a sentence of 10 months in 2019 for a 2015 tweet in which she wrote: “Tayyip Erdogan filthy thief”. In both cases, the courts suspended the judgment, a measure that requires the consent of the defendant before their guilt is decided.

After their appeals were rejected by Turkish courts, including the Constitutional Court, Birol and Burukan turned to the ECHR.

In its own ruling, the ECHR noted that in a later case, from mid-2022, the Turkish Constitutional Court found fault with the practice of suspending judgment, saying such decisions “were not based on appropriate and sufficient reasons, that the courts failed to give due consideration to the defendants’ arguments in their defence and rejected requests for the gathering and examination of evidence on irrelevant grounds, and that those concerned had neither the help of a defence lawyer or the necessary time and facilities to prepare their defence adequately”.

Subsequent appeals in such cases were ineffective given that the courts often relied on “insufficient, formulaic reasoning while only conducting a merely formal examination, on the basis of the case file, without weighing up the interests at stake”, the ECHR cited the Constitutional Court as ruling. It also said that the practice of asking a defendant to consent to a suspended judgment at the very outset – before his or her guilt had been decided – “was likely to exert pressure on him or her and to give rise to a perception of his or her guilt in the judge’s mind, without being counterbalanced by any fair trial safeguards”.

The ECHR said it saw “no reason to find otherwise” and described the problem as systemic.

“The Court held that, in view of their potentially chilling effect, the criminal convictions, together with the decisions to suspend the judgments (subject to probation periods of three and five years respectively) constituted an interference with the applicants’ right to freedom of expression,” it said, and ordered Turkey to pay each applicant 2,600 euros in respect of non-pecuniary damage.

Over the past several years, journalists, academics, politicians, and private individuals have been taken to court over their social media posts since the adoption of draconian laws and regulations under Erdogan’s increasingly autocratic rule.

In 2021, an investigation by independent media outlet Gazete Duvar found that more than 128,000 investigations were launched between 2014 and 2019 concerning alleged insults against Erdogan, resulting in more than 27,000 criminal cases launched by prosecutors.

Europol Sought Unlimited Data Access in Online Child Sexual Abuse Regulation

The European police agency, Europol, has requested unfiltered access to data that would be harvested under a controversial EU proposal to scan online content for child sexual abuse images and for the AI technology behind it to be applied to other crimes too, according to minutes of a high-level meeting in mid-2022.

The meeting, involving Europol Executive Director Catherine de Bolle and the European Commission’s Director-General for Migration and Home Affairs, Monique Pariat, took place in July last year, weeks after the Commission unveiled a proposed regulation that would require digital chat providers to scan client content for child sexual abuse material, or CSAM.

The regulation, put forward by European Commissioner for Home Affairs Ylva Johansson, would also create a new EU agency – the EU centre to prevent and counter child sexual abuse. It has stirred heated debate, with critics warning it risks opening the door to mass surveillance of EU citizens.

In the meeting, the minutes of which were obtained under a Freedom of Information request, Europol requested unlimited access to the data produced from the detection and scanning of communications, and that no boundaries be set on how this data is used.

“All data is useful and should be passed on to law enforcement, there should be no filtering by the [EU] Centre because even an innocent image might contain information that could at some point be useful to law enforcement,” the minutes state. The name of the speaker is redacted, but it is clear from the exchange that it is a Europol official.

The Centre would play a key role in helping member states and companies implement the legislation; it would also vet and approve scanning technologies, as well as receive and filter suspicious reports before passing them to Europol and national authorities.


Minutes from the Europol-Commission meeting, obtained by BIRN.

In the same meeting, Europol proposed that detection be expanded to other crime areas beyond CSAM, and suggested including them in the proposed regulation. It also requested the inclusion of other elements that would ensure another EU law in the making, the Artificial Intelligence Act, would not limit the “use of AI tools for investigations”.

The Europol input is apparent in Johansson’s proposal. According to the Commission text, all reports from the EU Centre that are not “manifestly unfounded” will have to be sent simultaneously to Europol and to national law enforcement agencies. Europol will also have access to the Centre’s databases.

Several data protection experts who examined the minutes said Europol had effectively asked for no limits or boundaries in accessing the data, including flawed data such as false positives, or in how it could be used in training algorithms.

Niovi Vavoula, a data protection expert at the Queen Mary University of London, said a reference in the document to the need for quality data “points to the direction that Europol will use the data to train algorithms, which according to the recent Europol reform is permitted”.

Europol’s in-house research and development centre, the Innovation Hub, has already started working towards an AI-powered tool to classify child sexual abuse images and videos.

According to an internal Europol document, the agency’s own Fundamental Rights Officer raised concerns in June 2023 about possible “fundamental rights issues” stemming from “biased results, false positives or false negatives”, but gave the project the green light anyway.

In response, Europol declined to comment on internal meetings, but said: “It is imperative to highlight our organisation’s mission and key role to combat the heinous crime of child sexual abuse in the EU. Regarding the future EU Centre on child sexual abuse, Europol was rightfully consulted on the interaction between the future EU Centre’s remit and Europol. Our position as the European Agency for Law Enforcement Cooperation is that we must receive relevant information to protect the EU and its citizens from serious and organised crime, including child sexual abuse.”


Illustrative photo by Alexas_Fotos/Pixabay

Staff links

On September 25, BIRN in cooperation with other European outlets reported on the complex network of AI firms and advocacy groups that has helped drum up support for Johansson’s proposal, often in close coordination with the Commission. There are links to Europol, too.

According to information available online, Cathal Delaney, a former Europol official who led the agency’s Child Sexual Abuse team at its Cybercrime Centre, and who worked on a CSAM AI pilot project, has begun working for the US-based organisation Thorn, which develops AI software to target CSAM.

Delaney moved to Thorn immediately after leaving Europol in January 2022 and is listed in the lobby register of the German federal parliament as an “employee who represents interests directly”. 

Transfers of EU officials to the private sector to work on issues related to work carried out in their last three years of EU engagement require formal permission, which can be denied if it is deemed that such work “could lead to a conflict with the legitimate interests of the institution”.

In response, Europol said: “Taking into account the information provided by the staff member and in accordance with Europol’s Staff Regulation, Europol has authorised the referred staff member to conclude a contract with a new employer after his end of service for Europol at the end of 2021”.

In June, Delaney paid a visit to his former colleagues, writing on Linkedin: “I’ve spent time this week at the #APTwins Europol Annual Expert Meeting and presented on behalf of Thorn about our innovations to support victim identification.”


Illustrative photo by EPA-EFE/RONALD WITTEK

A senior former Europol official, Fernando Ruiz Perez, is also listed as a board member of Thorn. According to Europol, Ruiz Perez stopped working as Head of Operations of the agency’s Cybercrime Centre in April 2022 and, according to information on the Linkedin profile of Julie Cordua, Thorn’s CEO, joined the board of the organisation at the beginning of 2023.

Asked for comment, Thorn replied: “To fight child sexual abuse at scale, close collaboration with law enforcement agencies like Europol are indispensable. Of course we respect any barring clauses in transitions of employees from law enforcement agencies to Thorn. Anything else would go against our code of conduct and would also hamper Thorn’s relationships to these agencies who play a vital role in fighting child sexual abuse. And fighting this crime is our sole purpose, as Thorn is not generating any profit from the organization’s activities.”

Alongside Ruiz Perez, on the board of Thorn is Ernie Allen, chair of the WeProtect Global Alliance, WPGA, and former head of the National Centre for Missing & Exploited Children, NCMEC, a US organisation whose set-up fed into the blueprint for the EU’s own Centre.

Europol has also co-operated with WeProtect, a putatively independent NGO that emerged from a fusion of past European Commission and national government initiatives and has been a key platform for strategies to support Johansson’s proposal.

“Europol can confirm that cooperation with the WPGA has taken place since January 2021, including in the context of the WPGA Summit 2022 and an expert meeting organised by Europol’s Analysis Project (AP) Twins (Europol’s unit focusing on CSAM),” the agency said.

This article is part of an investigation supported by the IJ4EU programme. Versions of the article have also been published by Netzpolitik and Solomon.

Greek Media Freedom Hit by Surveillance, Lawsuits and Threats: Report

The initial findings of a report published on Wednesday by eight international media freedom organisations said that press freedom in Greece is under “sustained threat” from the impact of the ‘Predatorgate’ spyware surveillance scandal, abusive lawsuits and physical threats against journalists, as well as economic and political pressures on media.

“While Greece has a small but highly professional group of independent and investigative media doing quality public interest reporting, these outlets remain isolated on the fringes of the media landscape and lack systemic support,” said the International Press Institute’s advocacy officer, Jamie Wiseman, at the launch of the report at the Journalists’ Union of Athens Daily Newspapers.

The report noted how journalists and politicians, among them the leader of the opposition party PASOK, were placed under surveillance by the Greek secret services using illegal spyware called Predator.

It also noted how the 2021 murder of the veteran crime journalist Giorgos Karaivaz remains unresolved.

It said that abusive lawsuits – so-called SLAPPs, or Strategic Lawsuits Against Public Participation – and physical attacks against journalists have been weaponised to silence critical voices by exhausting them financially and psychologically.

“Especially for smaller outlets and freelance journalists, SLAPPs pose an existential threat as often the compensation demanded greatly exceeds their resources, which further exacerbates their intended chilling effect beyond the targeted journalist,” said the report.

The report was produced after a visit to Greece by a delegation composed of the six members of the Media Freedom Rapid Response: ARTICLE 19 Europe, the European Centre for Press and Media Freedom, the European Federation of Journalists, Free Press Unlimited, the International Press Institute and the Osservatorio Balcani e Caucaso Transeuropa. They were joined by representatives of the Committee to Protect Journalists and Reporters Without Borders.

The eight organisations called on the Greek government and prime minister “to show political courage and urgently take specific measures aimed at improving the climate for independent journalism and salvaging press freedom”.

A more detailed report with expanded recommendations will be published in the coming weeks, they said.

The Ethics of Using ChatGPT in Education and Academia

The rapid advancement of artificial intelligence, AI, especially Large Language Models, LLMs, such as ChatGPT, has ushered in a new era of possibilities across various sectors, including education and academia.

ChatGPT, short for Chat Generative Pre-Trained Transformer, is an AI chatbot developed and trained by OpenAI, a research organisation focused on advancing AI, and launched in November 2022. It uses deep learning techniques to generate human-like text based on the input provided.

There is a tendency for younger people to adopt new technologies more readily. AI technology provides a great opportunity, especially for younger students and researchers, to learn and increase their productivity, research output, and quality.

The potential applications of ChatGPT in education are vast, ranging from helping with tests and essays to providing personalised tutoring. This technology can better meet students’ learning needs, improving their efficiency and grades. ChatGPT can also help teachers plan lessons and grade papers and stimulate student interests.

ChatGPT has become an attractive tool for various applications in the academic world. It can generate ideas and hypotheses for research papers, create outlines, summarise papers, draft entire articles, and help with editing.

These capabilities significantly reduce the time and effort required to produce academic work, potentially accelerating the pace of scientific discovery or overcoming writer’s block, a common problem many academics face.

ChatGPT, and LLMs like it, can assist researchers in various tasks, including data analysis, literature reviews, and writing research papers. One of the most significant advantages of using ChatGPT in academic research is its ability to analyse large amounts of data quickly. These tools can process texts with extraordinary success and often in a way that is indistinguishable from the human output.

The limitations of these LLMs, such as their brittleness [susceptibility to catastrophic failure], unreliability [false or made-up information], and the occasional inability to make elementary logical inferences or deal with simple mathematics, represent a decoupling of agency and intelligence.

But is ChatGPT a replacement for human authorship and critical thinking, or is it merely a helpful tool?


Photo by EPA/RITCHIE B. TONGO

Plagiarism, copyright, and integrity

While ChatGPT has the potential to revolutionise the way we approach education and research, its use in these fields brings many ethical issues and challenges that need to be considered. These concern plagiarism, copyright, and the integrity of academic work. ChatGPT can produce medium-quality essays within minutes, blurring the lines between original thought and automated generation.

First and foremost is the issue of plagiarism, as the model may generate text identical or similar to existing text. Plagiarism is copying someone else’s work, or simply rephrasing it without adding anything of one’s own.

Since ChatGPT generates text based on a vast amount of data from the Internet, there is a risk that the tool may inadvertently produce text that closely resembles existing work. Students may be tempted to use text produced by ChatGPT verbatim in their own work.

This raises questions about the originality of the work produced using ChatGPT and whether it constitutes plagiarism. It is difficult to ascertain the extent of the contribution made by the AI tool versus the human researcher, which further complicates the issue of authorship, credit, and intellectual property. A related concern is that using ChatGPT may lessen critical thinking and creativity.

Plagiarism, however, predates AI, as Serbia knows. Several cases involving public officials have come to light in recent years, before ChatGPT, including plagiarism of a PhD thesis that was copied from other people’s work.

Another ethical concern relates to copyright infringement. If ChatGPT generates text that closely resembles existing copyrighted material, using such text in an academic article could potentially violate copyright laws.

Using ChatGPT or similar LLMs becomes both a moral and legal issue. The need for legislation specifically regulating the use of Generative AI represents a significant challenge for its application in practice.

Using text-generating tools in scholarly writing presents challenges to transparency and credibility. Universities, journals, and institutes must revise their policies on acceptable tools.


Photo by EPA/RITCHIE B. TONGO

To ban or not to ban?

Given the concerns raised by academics globally, many schools and universities have banned ChatGPT, although students use it anyway. Others advocate not banning ChatGPT but teaching with it carefully, arguing that cheating with one tool or another is inevitable.

Further, significant questions about copyright have emerged, especially given the broad application of ChatGPT in academic spheres, content creation, and its use by students for completing academic tasks. The questions are: Who holds the intellectual property rights for the content produced by ChatGPT, and who would be liable for copyright violation?

Many educational institutions have already prohibited the use of ChatGPT, while prominent publishers such as Elsevier and Cambridge University Press permit the use of chatbots in academic writing. Clear guidelines for using AI in science, however, have yet to be established.

AI tools such as ChatGPT in academic research are currently a matter of debate among journal editors, researchers, and publishers. There is an ongoing discussion about whether citing ChatGPT as an author in published literature is appropriate.

It is also essential for academic institutions and publishers to establish guidelines and policies for using AI-generated text in academic research. Governments and relevant agencies should develop corresponding laws and regulations to protect students’ privacy and rights, ensuring that the application of AI technology complies with educational ethics and moral standards.

The legislative procedure in the EU is still ongoing, and there are estimates that it will take years before the regulations begin to be implemented in practice. Legislation that would regulate the application of ChatGPT in practice, especially in academia in Serbia, also does not exist.

Recently, researchers have been caught copying and pasting text directly from ChatGPT into peer-reviewed papers published in prominent scientific journals, forgetting to remove its telltale phrase ‘As an AI language model…’.

For example, in the European International Journal of Pedagogics, in a paper titled Quinazolinone: Pharmacophore with Endless Pharmacological Actions, the authors in the section Methods pasted ChatGPT’s answer “as an AI language model; I don’t have access to the full text of the article…”

This has also been the case in some PhD and MA theses.


Photo by EPA-EFE/RONALD WITTEK

Need for guidelines

Emerging in response to the challenge of plagiarism are AI-text detectors, software specifically designed to detect content generated by AI tools.

To address these concerns regarding plagiarism, some scientific publishers, such as Springer Nature and Elsevier, have established guidelines to promote the ethical and transparent use of LLMs. These guidelines advise against crediting LLMs as authors on research papers since AI tools cannot take responsibility for the work. Some guidelines call for the use of LLMs to be documented in their papers’ methods or acknowledgments sections.

To prevent plagiarism using ChatGPT or other AI language models, it is necessary to educate students on plagiarism, what it is, and why it is wrong; to use plagiarism detection tools; and to set clear guidelines for the use of ChatGPT and other resources.

To ensure accountable use of this AI model, it is essential to establish guidelines, filters, and rules that prevent misuse and the generation of unethical language.

Despite the concerns mentioned above, before discussing whether AI tools such as ChatGPT should be academically banned, it is necessary to examine the challenges currently faced by education, and the significant impact and benefits of using ChatGPT in education.

The application of ChatGPT raises various legal and ethical dilemmas. What we need are guidelines, policy and regulatory recommendations, and best practices for students, researchers, and higher education institutions.

A tech-first approach that relies solely on AI detectors has pitfalls. Mere reliance on technological solutions can inadvertently create an environment of suspicion and shift the focus from fostering a culture of integrity to one of surveillance and punishment. What matters is establishing a culture in which academic honesty is valued intrinsically, not just enforced extrinsically.

Striking the right balance between leveraging the benefits of ChatGPT and maintaining the integrity of the research process will be vital to navigating the ethical minefield associated with using AI tools in academia.

Marina Budić is a Research Assistant at the Institute of Social Sciences in Belgrade. She is a philosopher specialising in ethics, and her work covers the ethics of AI, applied and normative ethics, and bioethics.

‘Who Benefits?’ Inside the EU’s Fight over Scanning for Child Sex Content

In early May 2022, days before she launched one of the most contentious legislative proposals Brussels had seen in years, the European Union’s home affairs commissioner, Ylva Johansson, sent a letter to a US organisation co-founded in 2012 by the movie stars Ashton Kutcher and Demi Moore.

The organisation, Thorn, develops artificial intelligence tools to scan for child sexual abuse images online, and Johansson’s proposed regulation is designed to fight the spread of such content on messaging apps.

“We have shared many moments on the journey to this proposal,” the Swedish politician wrote, according to a copy of the letter addressed to Thorn executive director Julie Cordua and which BIRN has seen.

Johansson urged Cordua to continue the campaign to get it passed: “Now I am looking to you to help make sure that this launch is a successful one.”

That campaign faces a major test in October when Johansson’s proposal is put to a vote in the Civil Liberties Committee of the European Parliament. It has already been the subject of heated debate.

The regulation would obligate digital platforms – from Facebook to Telegram, Signal to Snapchat, TikTok to cloud services and online gaming websites – to detect and report any trace of child sexual abuse material, CSAM, on their systems and in their users’ private chats.

It would introduce a complex legal architecture reliant on AI tools – so-called ‘client-side scanning’ – to detect images, videos and speech containing sexual abuse against minors, as well as attempts to groom children.

Welcomed by some child welfare organisations, the regulation has nevertheless been met with alarm from privacy advocates and tech specialists who say it will unleash a massive new surveillance system and threaten the use of end-to-end encryption, currently the ultimate way to secure digital communications from prying eyes.

The EU’s top data protection watchdog, Wojciech Wiewiorowski, warned Johansson about the risks in 2020, when she informed him of her plans.

They amount to “crossing the Rubicon” in terms of the mass surveillance of EU citizens, he said in an interview for this story. It “would fundamentally change the internet and digital communication as we know it.”

Johansson, however, has not blinked. “The privacy advocates sound very loud,” the commissioner said in a speech in November 2021. “But someone must also speak for the children.”

Based on dozens of interviews, leaked documents and insight into the Commission’s internal deliberations, this investigation connects the dots between the key actors bankrolling and organising the advocacy campaign in favour of Johansson’s proposal and their direct links with the commissioner and her cabinet.

It is a relationship that granted certain stakeholders – AI firms and advocacy groups that enjoy significant financial backing – a questionable level of influence over the crafting of EU policy.

The proposed regulation is excessively “influenced by companies pretending to be NGOs but acting more like tech companies”, said Arda Gerkens, former director of Europe’s oldest hotline for reporting online CSAM.

“Groups like Thorn use everything they can to put this legislation forward, not just because they feel that this is the way forward to combat child sexual abuse, but also because they have a commercial interest in doing so.”

If the regulation undermines encryption, it risks introducing new vulnerabilities, critics argue. “Who will benefit from the legislation?” Gerkens asked. “Not the children.”

Privacy assurances ‘deeply misleading’


The Action Day promoted by Brave Movement in front of the EP. Photo: Justice Initiative

Star of That ‘70s Show and a host of Hollywood hits, 45-year-old Kutcher resigned as chairman of the Thorn board in mid-September amid uproar over a letter he wrote to a judge in support of convicted rapist and fellow That ‘70s Show actor Danny Masterson, prior to his sentencing.

Up until that moment, however, Kutcher had for years been the very recognisable face of a campaign to rid the Internet of CSAM, a role that involved considerable access to the top brass in Brussels.

Thorn’s declarations to the EU transparency register list meetings with senior members of the cabinets of top Commission officials with a say in the bloc’s security or digital policy, including Johansson, antitrust czar Margrethe Vestager, Commission Vice-President Margaritis Schinas, and internal market commissioner Thierry Breton.

In November 2020, it was the turn of Commission President Ursula von der Leyen, who was part of a video conference with Kutcher and an organisation registered in the small Dutch town of Lisse – the WeProtect Global Alliance.

Though registered in the EU lobby database as a charity, Thorn sells its AI tools on the market for a profit; since 2018, the US Department of Homeland Security, for example, has purchased software licences from Thorn for a total of $4.3 million.

These tools are used by companies such as Vimeo, Flickr and OpenAI – the creator of chatbot ChatGPT and one of many beneficiaries of Kutcher’s IT investments – and by law enforcement agencies across the globe.

In November 2022, Kutcher and Johansson lined up as key speakers at a summit organised and moderated by then European Parliament Vice President Eva Kaili, who three weeks later was arrested and deposed over an investigation into the ‘Qatargate’ cash-for-lobbying scandal.

In March this year, six months before his resignation amid uproar over his letter of support for Masterson, Kutcher addressed lawmakers in Brussels, seeking to appease concerns about the possible misuse and shortcomings of the existing technology. Technology can scan for suspicious material without violating privacy, he said, a claim that the European Digital Rights association said was “deeply misleading”.

The Commission has been reluctant to detail the relationship between Thorn and Johansson’s cabinet under the EU’s freedom of information mechanism. It refused to disclose Cordua’s emailed response to Johansson’s May 2022 letter or a ‘policy one pager’ Thorn had shared with her cabinet, citing Thorn’s position that “the disclosure of the information contained therein would undermine the organisation’s commercial interest”.

After seven months of communication concerning access to documents and the intervention of the European Ombudsman, in early September the Commission finally released a series of email exchanges between Johansson’s Directorate-General for Migration and Home Affairs and Thorn.

The emails reveal a continuous and close working relationship between the two sides in the months following the roll out of the CSAM proposal, with the Commission repeatedly facilitating Thorn’s access to crucial decision-making venues attended by ministers and representatives of EU member states.

The European Ombudsman is looking into the Commission’s refusal to grant access to a host of other internal documents pertaining to Johansson’s proposal.

FGS Global, a major lobbying firm hired by Thorn and paid at least 600,000 euros in 2022 alone, said Thorn would not comment for this story. Johansson also did not respond to an interview request.

Enter ‘WeProtect Global Alliance’


Photo: Courtesy of Solomon.

Among the few traces of Thorn’s activities in the EU’s lobby transparency register is a contribution of 219,000 euros in 2021 to the WeProtect Global Alliance, the organisation that had a video conference with Kutcher and Von der Leyen in late 2020.

WeProtect is the offspring of two governmental initiatives – one co-founded by the Commission and the United States, the other by Britain.

They merged in 2016 and, in April 2020, as momentum built for legislation to combat CSAM with client-side scanning technology, WeProtect was transformed from a British government-funded entity into a putatively independent ‘foundation’ registered at a residential address in Lisse, on the Dutch North Sea coast.

Its membership includes powerful security agencies, a host of governments, Big Tech managers, NGOs, and one of Johansson’s most senior cabinet officials, Antonio Labrador Jimenez, who heads the Commission’s team tasked with fighting CSAM.

Minutes after the proposed regulation was unveiled in May last year, Labrador Jimenez emailed his Commission colleagues: “The EU does not accept that children cannot be protected and become casualties of policies that put any other values or rights above their protection, whatever these may be.”

He said he was looking forward to “seeing many of you in Brussels during the WeProtect Global Alliance summit” the following month.

Labrador Jimenez officially joined the WeProtect Policy Board in July 2020, after the Commission decided to join and fund it as “the central organisation for coordinating and streamlining global efforts and regulatory improvements” in the fight against CSAM. WeProtect public documents, however, show Labrador Jimenez participating in WeProtect board meetings in December 2019.

Commenting on this story, the Commission said Labrador Jimenez “does not receive any kind of compensation for his participation in the WeProtect Global Alliance Management Board, and performs this function as part of his duties at the Commission”.

Labrador Jimenez’s position on the WeProtect Board, however, raises questions about how the Commission uses its participation in the organisation to promote Johannson’s proposal.

When Labrador Jimenez briefed fellow WeProtect Board members about the proposed regulation in July 2022, notes from the meeting show that “the Board discussed the media strategy of the legislation”.

Labrador Jimenez has also played a central role in drafting and promoting Johansson’s regulation, the same proposal that WeProtect is actively campaigning for with EU funding. And next to him on the board sits Thorn’s Julie Cordua, as well as government officials from the US and Britain [the latter currently pursuing its own Online Safety Bill], Interpol, and a United Arab Emirates colonel, Dana Humaid Al Marzouqi, who chairs or participates in numerous international police task forces.

Between 2020 and 2023, Johansson’s Directorate-General awarded almost 1 million euros to WeProtect to organise the June 2022 summit in Brussels, which was dedicated to the fight against CSAM and activities to enhance law enforcement collaboration.

WeProtect did not reply directly to questions concerning its funding arrangements with the Commission or to what extent its advocacy strategies are shaped by the governments and stakeholders sitting on its policy board.

In a statement, it said it is led “by a multi-stakeholder Global Policy Board; members include representatives from countries, international and civil society organisations, and the technology industry.”

The financing


Photo: Courtesy of Solomon.

Another member of the WeProtect board alongside Labrador Jimenez is Douglas Griffiths, a former official of the US State Department and currently president of the Geneva-based Oak Foundation, a group of philanthropic organisations around the world providing grants “to make the world a safer, fairer, and more sustainable place to live”.

Oak Foundation has provided WeProtect with “generous support for strategic communications”, according to WeProtect financial statements from 2021.

From Oak Foundation’s annual financial reports, it is clear it has a long-term commitment to aiding NGOs tackling child abuse. It is also funding the closely linked network of civil society organisations and lobby groups promoting Johansson’s proposed regulation, many of which have helped build an umbrella entity called the European Child Sexual Abuse Legislation Advocacy Group, ECLAG.

ECLAG, which launched its website a few weeks after Johansson’s proposal was announced in May 2022, acts as a coordination platform for some of the most active organisations lobbying in favour of the CSAM legislation. Its steering committee includes Thorn and a host of well-known children’s rights organisations such as ECPAT, Eurochild, Missing Children Europe, Internet Watch Foundation, and Terre des Hommes.

Another member is Brave Movement, which came into being in April 2022, a month before Johansson’s regulation was rolled out, thanks to a $10.3 million contribution by the Oak Foundation to Together for Girls, a US-based non-profit that fights sexual violence against children.

Oak Foundation has also given to Thorn – $5 million in 2019. In 2020, it gave $250,000 to ECPAT to engage “policy makers to include children’s interests in revisions to the Digital Services Act and on the impact of end-to-end encryption” and a further $100,000 in support of efforts to end “the online child sexual abuse and exploitation of children in the digital space”. The same year it authorised a $990,000 grant to Eurochild, another NGO coalition that campaigns for children’s rights in Brussels.

In 2021, Oak Foundation gave Thorn a further $250,000 to enhance its coordinating role in Brussels with the aim of ensuring “that any legislative solutions and instruments coming from the EU build on and enhance the existing ecosystem of global actors working to protect children online”.

In 2022, the foundation granted ECPAT a three-year funding package of $2.79 million “to ensure that children’s rights are placed at the centre of digital policy processes in the European Union”. The WeProtect Global Alliance received $2.33 million, also for three years, “to bring together governments, the private sector, civil society, and international organisations to develop policies and solutions that protect children from sexual exploitation and abuse online”.

In a response for this story, Oak Foundation said it does not “advocate for proposed legislation nor work on the details of those policy recommendations”.

It did not respond directly to questions concerning the implications of Johansson’s regulation on privacy rights. A spokesperson said the foundation supports organisations that “advocate for new policies, with a specific focus in the EU, US, and UK, where opportunities exist to establish precedent for other governments”.

‘Divide and conquer’

Brave Movement’s internal advocacy documents lay out a comprehensive strategy for utilising the voices of abuse survivors to leverage support for Johansson’s proposal in European capitals and, most importantly, within the European Parliament, while targeting prominent critics.

The organisation has enjoyed considerable access to Johansson. In late April 2022, it hosted the Commissioner in an online ‘Global Survivors Action Summit’ – a rare feat in the Brussels bubble for an organisation that was launched just weeks earlier.

An internal strategy document from November 2022 leaves no doubt about the organisation’s role in rallying support for Johansson’s proposal.

“The main objective of the Brave Movement mobilisation around this proposed legislation is to see it passed and implemented throughout the EU,” it states.

“If this legislation is adopted, it will create a positive precedent for other countries… which we will invite to follow through with similar legislation.”

In April this year, the Brave Movement held an ‘Action Day’ outside the European Parliament, where a group of survivors of online child sexual abuse were gathered “to demand EU leaders be brave and act to protect millions of children at risk from the violence and trauma they faced”.

Johansson joined the photo-op.

Survivors of such abuse are key to the Brave Movement’s strategy of winning over influential MEPs.

“Once the EU Survivors taskforce is established and we are clear on the mobilised survivors, we will establish a list pairing responsible survivors with MEPs – we will ‘divide and conquer’ the MEPs by deploying in priority survivors from MEPs’ countries of origin,” its advocacy strategy reads.

Conservative Spanish MEP Javier Zarzalejos, the lead negotiator on the issue in the parliament, has called for “strong survivors’ mobilisation in key countries like Germany”, according to the Brave Movement strategy.

Brave Movement’s links with the Directorate-General for Migration and Home Affairs go deeper still: its Europe campaign manager, Jessica Airey, worked on communications for the Directorate-General between October 2022 and February 2023, promoting Johansson’s regulation.

According to her LinkedIn profile, Airey worked “closely with the policy team who developed the [child sexual abuse imagery] legislation in D.4 [where Labrador Jimenez works] and partners like Thorn”.

She also “worked horizontally with MEPs, WeProtect Global Alliance, EPCAT”.

Asked about a possible conflict of interest in Airey’s work for Brave Movement on the same legislative file, the European Commission responded that Airey was appointed as a trainee and so no formal permission was required. It did say, however, that “trainees must maintain strict confidentiality regarding all knowledge acquired during training. Unauthorised disclosure of non-public documents or information is strictly prohibited, with this obligation extending beyond the training period.”

Brave Movement said it is “proud of the diverse alliances we have built and the expert team we have recruited, openly, to achieve our strategic goals”, pointing out that last year alone one online safety hotline received 32 million reports of child sexual abuse content.

Brave Movement has enlisted expert support: its advocacy strategy was drafted by UK consultancy firm Future Advocacy, while its ‘toolkit’, which aims to “build a beating drum of support for comprehensive legislation that protects children” in the EU, was drafted with the involvement of Purpose, a consultancy whose European branch is controlled by French Capgemini SE.

Purpose specialises in designing campaigns for UN agencies and global companies, using “public mobilisation and storytelling” to “shift policies and change public narratives”.

Beginning in 2022, the Oak Foundation gave Purpose grants worth $1.9 million to “help make the internet safer for children”.

Since April 2022, Purpose representatives have met regularly with ECLAG – the network of civil society groups and lobbyists – to refine a pan-European communications strategy.

Documents seen by this investigation also show they met with members of Johansson’s team.

A ‘BeBrave Europe Task Force’ meeting in January this year involved the ECLAG steering group, Purpose EU, Justice Initiative and Labrador Jimenez’s unit within the Directorate-General. In 2023 the foundation that launched the Justice Initiative, the Guido Fluri Foundation, received $416,667 from Oak Foundation.

The Commission, according to its own notes of the meeting, “recommended that when speaking with stakeholders of the negotiation, the organisations should not forget to convey a sense of urgency on the need to find an agreement on the legislation this year”.

This coordinated messaging resulted this year in a social media video featuring Johansson, Zarzalejos, and representatives of the organisations behind ECLAG promoting a petition in favour of her regulation.

Disproportionate infringement of rights

Some 200 kilometres north of Brussels, in the Dutch city of Amsterdam, a bright office on the edge of the city’s famous red light district marks the frontline of the fight to identify and remove CSAM in Europe.

‘Offlimits’, previously known as the Online Child Abuse Expertise Agency, or EOKM, is Europe’s oldest hotline for children and adults wanting to report abuse, whether happening behind closed doors or seen on video circulating online.

In 2022, its seven analysts processed 144,000 reports, 60 per cent of which concerned illegal content. The hotline sends requests to remove the content to web hosting providers and, if the material is considered particularly serious, to the police and Interpol.

Arda Gerkens, Offlimits director from 2015 until September this year, is deeply knowledgeable about EU policy on the matter. Yet unlike Thorn, she had little luck accessing Johansson.

“I invited her here but she never came,” said Gerkens, a former Socialist Party MP in the Dutch parliament.

“Commissioner Johansson and her staff visited Silicon Valley and big North American companies,” she said. Companies presenting themselves as NGOs but acting more like tech companies have influenced Johansson’s regulation, Gerkens said, arguing that Thorn and groups like it “have a commercial interest”.

Gerkens said that the fight against child abuse must be deeply improved and involve an all-encompassing approach that addresses welfare, education, and the need to protect the privacy of children, along with a “multi-stakeholder approach with the internet sector”.

“Encryption,” she said, “is key to protecting kids as well: predators hack accounts searching for images”.

It’s a position reflected in some of the concerns raised by the Dutch in ongoing negotiations on a compromise text at the EU Council, where they argue for a less intrusive approach that protects encrypted communication and targets only material already identified and designated as CSAM by monitoring groups and authorities.

A Dutch government official, speaking on condition of anonymity, said: “The Netherlands has serious concerns with regard to the current proposals to detect unknown CSAM and address grooming, as current technologies lead to a high number of false positives.”

“The resulting infringement of fundamental rights is not proportionate.”

Self-interest

In June 2022, shortly after the rollout of Johansson’s proposal, Thorn representatives sat down with one of the commissioner’s cabinet staff, Monika Maglione. An internal report of the meeting, obtained for this investigation, notes that Thorn was keen to understand how “bottlenecks in the process that goes from risk assessment to detection order” would be dealt with.

Detection orders are a crucial component of the procedure set out within Johansson’s proposed regulation, determining the number of people to be surveilled and how often.

European Parliament sources say that in technical meetings, Zarzalejos, the rapporteur on the proposal, has argued in favour of detection orders that do not necessarily focus on individuals or groups of suspects, but are calibrated to allow scanning for suspicious content.

This, experts say, would unlock the door to the general monitoring of EU citizens, otherwise known as mass surveillance.

Asked to clarify his position, Zarzalejos’ office responded: “The file is currently being discussed closed-doors among the shadow rapporteurs and we are not making any comments so far”.

In the same meeting with Maglione, Thorn representatives expressed a “willingness to collaborate closely with COM [European Commission] and provide expertise whenever useful, in particular with respect to the creation of the database of indicators to be hosted by the EU Centre” as well as to prepare “communication material on online child sexual abuse”.

The EU Centre to Prevent and Combat Child Sexual Abuse, which would be created under Johansson’s proposal, would play a key role in helping member states and companies implement the legislation; it would also vet and approve scanning technologies, as well as purchase and offer them to small and medium companies.

As a producer of such scanning technologies, a role for Thorn in supporting the capacity building of the EU Centre database would be of significant commercial interest to the company.

Meredith Whittaker, president of Signal Foundation, the US not-for-profit foundation behind the Signal encrypted chat application, says that AI companies that produce scanning systems, sensing the market potential, are effectively promoting themselves as clearing houses and liability buffers for big tech companies.

“The more they frame this as a huge problem in the public discourse and to regulators, the more they incentivise large tech companies to outsource their dealing of the problems to them,” Whittaker said in an interview for this story.

Effectively, such AI firms are offering tech companies a “get out of responsibility free card”, Whittaker said, by telling them: “You pay us (…) and we will host the hashes, we will maintain the AI system, we will do whatever it is to magically clean up this problem.”

“So it’s very clear that whatever their incorporation status is, that they are self-interested in promoting child exploitation as a problem that happens “online,” and then proposing quick (and profitable) technical solutions as a remedy to what is in reality a deep social and cultural problem. (…) I don’t think governments understand just how expensive and fallible these systems are, that we’re not looking at a one-time cost. We’re looking at hundreds of millions of dollars indefinitely due to the scale that this is being proposed at.”

Lack of scientific input


Johansson has dismissed the idea that the approach she advocates will unleash something new or extreme, telling MEPs last year that it was “totally false to say that with a new regulation there will be new possibilities for detection that don’t exist today”.

But experts question the science behind it.

Matthew Daniel Green, a cryptographer and security technologist at Johns Hopkins University, said there was an evident lack of scientific input into the crafting of her regulation.

“In the first impact assessment of the EU Commission there was almost no outside scientific input and that’s really amazing since Europe has a terrific scientific infrastructure, with the top researchers in cryptography and computer security all over the world,” Green said.

AI-driven scanning technology, he warned, risks exposing digital platforms to malicious attacks and would undermine encryption.

“If you touch upon built-in encryption models, then you introduce vulnerabilities,” he said. “The idea that we are going to be able to have encrypted conversations like ours is totally incompatible with these scanning automated systems, and that’s by design.”

In a blow to the advocates of AI-driven CSAM scanning, US tech giant Apple said in late August that it is impossible to implement CSAM-scanning while preserving the privacy and security of digital communications. The same month, UK officials privately admitted to tech companies that there is no existing technology able to scan end-to-end encrypted messages without undermining users’ privacy.

According to research by Imperial College academics Ana-Maria Cretu and Shubham Jain, published last May, AI-driven client-side scanning systems could be quietly tweaked to perform facial recognition on user devices without the user’s knowledge. They warned of more vulnerabilities that have yet to be identified.

“Once this technology is rolled out to billions of devices across the world, you can’t take it back”, they said.

Law enforcement agencies are already considering the possibilities it offers.

In July 2022, the head of Johansson’s Directorate-General, Monique Pariat, visited Europol to discuss the contribution the EU police agency could make to the fight against CSAM, in a meeting attended by Europol executive director Catherine de Bolle.

Europol officials floated the idea of using the proposed EU Centre to scan for more than just CSAM, telling the Commission, “There are other crime areas that would benefit from detection”. According to the minutes, a Commission official “signalled understanding for the additional wishes” but “flagged the need to be realistic in terms of what could be expected, given the many sensitivities around the proposal.”

Ross Anderson, professor of Security Engineering at Cambridge University, said the debate around AI-driven scanning for CSAM has overlooked the potential for manipulation by law enforcement agencies.

“The security and intelligence community have always used issues that scare lawmakers, like children and terrorism, to undermine online privacy,” he said.

“We all know how this works, and come the next terrorist attack, no lawmaker will oppose the extension of scanning from child abuse to serious violent and political crimes.”

This investigation was supported by a grant from the IJ4EU fund. It is also published by Die Zeit, Le Monde, De Groene Amsterdammer, Solomon, IRPI Media and El Diario.
