Greek Media Freedom Hit by Surveillance, Lawsuits and Threats: Report

A report published on Wednesday by eight international media freedom organisations said in its initial findings that press freedom in Greece is under “sustained threat” from the impact of the ‘Predatorgate’ spyware surveillance scandal, abusive lawsuits and physical threats against journalists, as well as economic and political pressures on media.

“While Greece has a small but highly professional group of independent and investigative media doing quality public interest reporting, these outlets remain isolated on the fringes of the media landscape and lack systemic support,” said the International Press Institute’s advocacy officer, Jamie Wiseman, at the launch of the report at the Journalists’ Union of Athens Daily Newspapers.

The report noted how journalists and politicians, among them the leader of the opposition party PASOK, were placed under surveillance by the Greek secret services using illegal spyware called Predator.

It also noted how the 2021 murder of the veteran crime journalist Giorgos Karaivaz remains unsolved.

It said that abusive lawsuits – so-called SLAPPs, Strategic Lawsuits Against Public Participation – and physical attacks against journalists have been weaponised to silence critical voices by exhausting them financially and psychologically.

“Especially for smaller outlets and freelance journalists, SLAPPs pose an existential threat as often the compensation demanded greatly exceeds their resources, which further exacerbates their intended chilling effect beyond the targeted journalist,” said the report.

The report was produced after a visit to Greece by a delegation composed of the six members of the Media Freedom Rapid Response: ARTICLE 19 Europe, the European Centre for Press and Media Freedom, the European Federation of Journalists, Free Press Unlimited, the International Press Institute and the Osservatorio Balcani e Caucaso Transeuropa. They were joined by representatives of the Committee to Protect Journalists and Reporters Without Borders.

The eight organisations called on the Greek government and prime minister “to show political courage and urgently take specific measures aimed at improving the climate for independent journalism and salvaging press freedom”.

A more detailed report with expanded recommendations will be published in the coming weeks, they said.

The Ethics of Using ChatGPT in Education and Academia

The rapid advancement of artificial intelligence, AI, especially Large Language Models, LLMs, such as ChatGPT, has ushered in a new era of possibilities across various sectors, including education and academia.

ChatGPT, short for Chat Generative Pre-Trained Transformer, is an AI chatbot developed and trained by OpenAI, a research organisation focused on advancing AI, and launched in November 2022. It uses deep learning techniques to generate human-like text based on the input provided.
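
Mechanically, using such a model comes down to sending a text prompt to a service and receiving generated text in return. A minimal sketch of that interaction, assuming the official openai Python package, an API key set in the environment, and an illustrative model name:

```python
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# Send a prompt; the service returns model-generated text.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {"role": "user", "content": "Explain plagiarism in two sentences."}
    ],
)
print(response.choices[0].message.content)
```

The ease of this exchange, a few lines of code or a single typed question, is part of what makes the technology so attractive to students and researchers alike.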

There is a tendency for younger people to adopt new technologies more readily. AI technology provides a great opportunity, especially for younger students and researchers, to learn and increase their productivity, research output, and quality.

The potential applications of ChatGPT in education are vast, ranging from helping with tests and essays to providing personalised tutoring. This technology can better meet students’ learning needs, improving their efficiency and grades. ChatGPT can also help teachers plan lessons and grade papers and stimulate student interests.

ChatGPT has become an attractive tool for various applications in the academic world. It can generate ideas and hypotheses for research papers, create outlines, summarise papers, draft entire articles, and help with editing.

These capabilities significantly reduce the time and effort required to produce academic work, potentially accelerating the pace of scientific discovery or overcoming writer’s block, a common problem many academics face.

ChatGPT, and LLMs like it, can assist researchers in various tasks, including data analysis, literature reviews, and writing research papers. One of the most significant advantages of using ChatGPT in academic research is its ability to analyse large amounts of data quickly. These tools can process texts with extraordinary success, often in a way that is indistinguishable from human output.

The limitations of these LLMs, such as their brittleness [susceptibility to catastrophic failure], unreliability [false or made-up information], and the occasional inability to make elementary logical inferences or deal with simple mathematics, represent a decoupling of agency and intelligence.

But is ChatGPT a replacement for human authorship and critical thinking, or is it merely a helpful tool?


Photo by EPA/RITCHIE B. TONGO

Plagiarism, copyright, and integrity

While ChatGPT has the potential to revolutionise the way we approach education and research, its use in these fields brings many ethical issues and challenges that need to be considered. These concern plagiarism, copyright, and the integrity of academic work. ChatGPT can produce medium-quality essays within minutes, blurring the lines between original thought and automated generation.

First and foremost is the issue of plagiarism, as the model may generate text identical or very similar to existing text. Plagiarism means presenting someone else’s work as one’s own, whether copying it outright or simply rephrasing it without adding anything original.

Since ChatGPT generates text based on a vast amount of data from the Internet, there is a risk that the tool may inadvertently produce text that closely resembles existing work. Students may be tempted to use text produced by ChatGPT verbatim in their work.

This raises questions about the originality of the work produced using ChatGPT and whether it constitutes plagiarism. It is difficult to ascertain the extent of the contribution made by the AI tool versus the human researcher, which further complicates the issue of authorship, credit, and intellectual property. A related concern is that using ChatGPT may lessen critical thinking and creativity.

Plagiarism, however, predates AI, as Serbia knows. Several cases involving public officials came to light in the years before ChatGPT, including a PhD thesis copied from other people’s work.

Another ethical concern relates to copyright infringement. If ChatGPT generates text that closely resembles existing copyrighted material, using such text in an academic article could potentially violate copyright laws.

Using ChatGPT or similar LLMs is thus both a moral and a legal issue. The lack of legislation specifically regulating the use of generative AI represents a significant challenge for its application in practice.

Using text-generating tools in scholarly writing presents challenges to transparency and credibility. Universities, journals, and institutes must revise their policies on acceptable tools.


Photo by EPA/RITCHIE B. TONGO

To ban or not to ban?

Given the concerns raised by academics globally, many schools and universities have banned ChatGPT, although students use it anyway. Others advocate not banning ChatGPT but using it carefully and teaching with it, on the grounds that cheating with one tool or another is inevitable.

Further, significant questions about copyright have emerged, especially given the broad application of ChatGPT in academic spheres, content creation, and its use by students for completing academic tasks. The questions are: Who holds the intellectual property rights for the content produced by ChatGPT, and who would be liable for copyright violation?

Many educational institutions have already prohibited the use of ChatGPT, while prominent publishers such as Elsevier and Cambridge University Press authorise the use of chatbots in academic writing. Comprehensive guidelines for using AI in science, however, have yet to be established.

AI tools such as ChatGPT in academic research are currently a matter of debate among journal editors, researchers, and publishers. There is an ongoing discussion about whether citing ChatGPT as an author in published literature is appropriate.

It is also essential for academic institutions and publishers to establish guidelines and policies for using AI-generated text in academic research. Governments and relevant agencies should develop corresponding laws and regulations to protect students’ privacy and rights, ensuring that the application of AI technology complies with educational ethics and moral standards.

The legislative procedure in the EU is still ongoing, and by some estimates it will take years before the regulations are implemented in practice. Serbia likewise has no legislation regulating the application of ChatGPT, in academia or elsewhere.

Recently, researchers have been caught copying and pasting text directly from ChatGPT into peer-reviewed papers published in prominent scientific journals, forgetting to remove its telltale phrase ‘As an AI language model…’.

For example, in a paper titled Quinazolinone: Pharmacophore with Endless Pharmacological Actions, published in the European International Journal of Pedagogics, the authors pasted ChatGPT’s answer into the Methods section: “as an AI language model; I don’t have access to the full text of the article…”

This has also been the case in some PhD and MA theses.


Photo by EPA-EFE/RONALD WITTEK

Need for guidelines

Emerging in response to the challenge of plagiarism are AI-text detectors, software specifically designed to detect content generated by AI tools.
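
Most such detectors rely on statistical signals rather than proof. One widely used heuristic is perplexity: text that a language model finds unusually predictable is more likely to have been machine-generated. A minimal sketch of that idea, assuming the Hugging Face transformers library, GPT-2 as the scoring model, and an illustrative threshold:

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    # Perplexity = exp(mean cross-entropy of next-token predictions).
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss
    return float(torch.exp(loss))

def looks_ai_generated(text: str, threshold: float = 30.0) -> bool:
    # Illustrative cut-off: low perplexity is weak evidence of machine
    # generation, not proof; real detectors combine many such signals.
    return perplexity(text) < threshold
```

Because such heuristics produce false positives, detector verdicts are best treated as prompts for further inquiry rather than conclusive evidence of misconduct.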

To address these concerns regarding plagiarism, some scientific publishers, such as Springer Nature and Elsevier, have established guidelines to promote the ethical and transparent use of LLMs. These guidelines advise against crediting LLMs as authors on research papers since AI tools cannot take responsibility for the work. Some guidelines call for the use of LLMs to be documented in their papers’ methods or acknowledgments sections.

To prevent plagiarism using ChatGPT or other AI language models, it is necessary to educate students on plagiarism, what it is, and why it is wrong; to use plagiarism detection tools; and to set clear guidelines for the use of ChatGPT and other resources.

To ensure accountable use of this AI model, it is essential to establish guidelines, filters, and rules that prevent misuse, including the generation of unethical content.

Despite the concerns mentioned above, before discussing whether AI tools such as ChatGPT should be banned in academia, it is necessary to examine the challenges currently facing education, and the significant impact and benefits of using ChatGPT in education.

The application of ChatGPT raises various legal and ethical dilemmas. What we need are guidelines, policy and regulatory recommendations, and best practices for students, researchers, and higher education institutions.

A tech-first approach, relying solely on AI detectors, has potential pitfalls. Mere reliance on technological solutions can inadvertently create an environment of suspicion and shift the focus from fostering a culture of integrity to one of surveillance and punishment. This underlines the importance of establishing a culture in which academic honesty is valued intrinsically and not just enforced extrinsically.

Striking the right balance between leveraging the benefits of ChatGPT and maintaining the integrity of the research process will be vital to navigating the ethical minefield associated with using AI tools in academia.

Marina Budić is a Research Assistant at the Institute of Social Sciences in Belgrade. She is a philosopher (ethicist) working on the ethics of AI, applied and normative ethics, and bioethics.

‘Who Benefits?’ Inside the EU’s Fight over Scanning for Child Sex Content

In early May 2022, days before she launched one of the most contentious legislative proposals Brussels had seen in years, the European Union’s home affairs commissioner, Ylva Johansson, sent a letter to a US organisation co-founded in 2012 by the movie stars Ashton Kutcher and Demi Moore.

The organisation, Thorn, develops artificial intelligence tools to scan for child sexual abuse images online, and Johansson’s proposed regulation is designed to fight the spread of such content on messaging apps.

“We have shared many moments on the journey to this proposal,” the Swedish politician wrote, according to a copy of the letter, addressed to Thorn executive director Julie Cordua, which BIRN has seen.

Johansson urged Cordua to continue the campaign to get it passed: “Now I am looking to you to help make sure that this launch is a successful one.”

That campaign faces a major test in October when Johansson’s proposal is put to a vote in the Civil Liberties Committee of the European Parliament. It has already been the subject of heated debate.

The regulation would obligate digital platforms – from Facebook to Telegram, Signal to Snapchat, TikTok to clouds and online gaming websites – to detect and report any trace of child sexual abuse material, CSAM, on their systems and in their users’ private chats.

It would introduce a complex legal architecture reliant on AI tools – so-called ‘client-side scanning’ – for detecting images, videos and speech containing sexual abuse against minors and attempts to groom children.
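
For known material, the underlying technique is typically hash matching on the user’s device: each image is reduced to a compact fingerprint and compared against a database of fingerprints of previously identified abuse material. A minimal sketch of the idea, assuming the Python Pillow and imagehash libraries; the database entry and distance threshold are illustrative, and deployed systems use proprietary perceptual hashes such as PhotoDNA or NeuralHash, with AI classifiers, not hash lookups, needed for unknown material and grooming:

```python
from PIL import Image
import imagehash

# Hypothetical database of perceptual hashes of known illegal images,
# which a central authority would distribute to user devices.
KNOWN_HASHES = {imagehash.hex_to_hash("d1d1b4b4e5e5f0f0")}

def matches_known_content(path: str, max_distance: int = 5) -> bool:
    h = imagehash.phash(Image.open(path))  # 64-bit perceptual hash
    # Hamming distance tolerates small edits such as re-compression;
    # too loose a threshold produces false positives.
    return any(h - known <= max_distance for known in KNOWN_HASHES)
```

It is this need to inspect content on the device, before it is encrypted and sent, that critics say is incompatible with the guarantees of end-to-end encryption.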

Welcomed by some child welfare organisations, the regulation has nevertheless been met with alarm from privacy advocates and tech specialists who say it will unleash a massive new surveillance system and threaten the use of end-to-end encryption, currently the ultimate way to secure digital communications from prying eyes.

The EU’s top data protection watchdog, Wojciech Wiewiorowski, warned Johansson about the risks in 2020, when she informed him of her plans.

They amount to “crossing the Rubicon” in terms of the mass surveillance of EU citizens, he said in an interview for this story. It “would fundamentally change the internet and digital communication as we know it.”

Johansson, however, has not blinked. “The privacy advocates sound very loud,” the commissioner said in a speech in November 2021. “But someone must also speak for the children.”

Based on dozens of interviews, leaked documents and insight into the Commission’s internal deliberations, this investigation connects the dots between the key actors bankrolling and organising the advocacy campaign in favour of Johansson’s proposal and their direct links with the commissioner and her cabinet.

It’s a synthesis that granted certain stakeholders, AI firms and advocacy groups – which enjoy significant financial backing – a questionable level of influence over the crafting of EU policy.

The proposed regulation is excessively “influenced by companies pretending to be NGOs but acting more like tech companies”, said Arda Gerkens, former director of Europe’s oldest hotline for reporting online CSAM.

“Groups like Thorn use everything they can to put this legislation forward, not just because they feel that this is the way forward to combat child sexual abuse, but also because they have a commercial interest in doing so.”

If the regulation undermines encryption, it risks introducing new vulnerabilities, critics argue. “Who will benefit from the legislation?” Gerkens asked. “Not the children.”

Privacy assurances ‘deeply misleading’


The Action Day promoted by Brave Movement in front of the EP. Photo: Justice Initiative

Star of That ‘70s Show and a host of Hollywood hits, 45-year-old Kutcher resigned as chairman of the Thorn board in mid-September amid uproar over a letter he wrote to a judge in support of convicted rapist and fellow That ‘70s Show actor Danny Masterson, prior to his sentencing.

Up until that moment, however, Kutcher had for years been the very recognisable face of a campaign to rid the Internet of CSAM, a role that involved considerable access to the top brass in Brussels.

Thorn’s declarations to the EU transparency register list meetings with senior members of the cabinets of top Commission officials with a say in the bloc’s security or digital policy, including Johansson, antitrust czar Margrethe Vestager, Commission Vice-President Margaritis Schinas, and internal market commissioner Thierry Breton.

In November 2020, it was the turn of Commission President Ursula von der Leyen, who was part of a video conference with Kutcher and an organisation registered in the small Dutch town of Lisse – the WeProtect Global Alliance.

Though registered in the EU lobby database as a charity, Thorn sells its AI tools on the market for a profit; since 2018, the US Department of Homeland Security, for example, has purchased software licences from Thorn for a total of $4.3 million.

These tools are used by companies such as Vimeo, Flickr and OpenAI – the creator of chatbot ChatGPT and one of many beneficiaries of Kutcher’s IT investments – and by law enforcement agencies across the globe.

In November 2022, Kutcher and Johansson lined up as key speakers at a summit organised and moderated by then European Parliament Vice President Eva Kaili, who three weeks later was arrested and deposed over an investigation into the ‘Qatargate’ cash-for-lobbying scandal.

In March this year, six months before his resignation amid uproar over his letter of support for Masterson, Kutcher addressed lawmakers in Brussels, seeking to appease concerns about the possible misuse and shortcomings of the existing technology. Technology can scan for suspicious material without violating privacy, he said, a claim that the European Digital Rights association said was “deeply misleading”.

The Commission has been reluctant to detail the relationship between Thorn and Johansson’s cabinet under the EU’s freedom of information mechanism. It refused to disclose Cordua’s emailed response to Johansson’s May 2022 letter or a ‘policy one pager’ Thorn had shared with her cabinet, citing Thorn’s position that “the disclosure of the information contained therein would undermine the organisation’s commercial interest”.

After seven months of communication concerning access to documents and the intervention of the European Ombudsman, in early September the Commission finally released a series of email exchanges between Johansson’s Directorate-General for Migration and Home Affairs and Thorn.

The emails reveal a continuous and close working relationship between the two sides in the months following the roll out of the CSAM proposal, with the Commission repeatedly facilitating Thorn’s access to crucial decision-making venues attended by ministers and representatives of EU member states.

The European Ombudsman is looking into the Commission’s refusal to grant access to a host of other internal documents pertaining to Johansson’s proposal.

FGS Global, a major lobbying firm hired by Thorn and paid at least 600,000 euros in 2022 alone, said Thorn would not comment for this story. Johansson also did not respond to an interview request.

Enter ‘WeProtect Global Alliance’


Photo: Courtesy of Solomon.

Among the few traces of Thorn’s activities in the EU’s lobby transparency register is a contribution of 219,000 euros in 2021 to the WeProtect Global Alliance, the organisation that had a video conference with Kutcher and Von der Leyen in late 2020.

WeProtect is the offspring of two governmental initiatives – one co-founded by the Commission and the United States, the other by Britain.

They merged in 2016 and, in April 2020, as momentum built for legislation to tackle CSAM with client-side scanning technology, WeProtect was transformed from a British government-funded entity into a putatively independent ‘foundation’ registered at a residential address in Lisse, on the Dutch North Sea coast.

Its membership includes powerful security agencies, a host of governments, Big Tech managers, NGOs, and one of Johansson’s most senior cabinet officials, Antonio Labrador Jimenez, who heads the Commission’s team tasked with fighting CSAM.

Minutes after the proposed regulation was unveiled in May last year, Labrador Jimenez emailed his Commission colleagues: “The EU does not accept that children cannot be protected and become casualties of policies that put any other values or rights above their protection, whatever these may be.”

He said he was looking forward to “seeing many of you in Brussels during the WeProtect Global Alliance summit” the following month.

Labrador Jimenez officially joined the WeProtect Policy Board in July 2020, after the Commission decided to join and fund it as “the central organisation for coordinating and streamlining global efforts and regulatory improvements” in the fight against CSAM. WeProtect public documents, however, show Labrador Jimenez participating in WeProtect board meetings in December 2019.

Commenting on this story, the Commission said Labrador Jimenez “does not receive any kind of compensation for his participation in the WeProtect Global Alliance Management Board, and performs this function as part of his duties at the Commission”.

Labrador Jimenez’s position on the WeProtect Board, however, raises questions about how the Commission uses its participation in the organisation to promote Johansson’s proposal.

When Labrador Jimenez briefed fellow WeProtect Board members about the proposed regulation in July 2022, notes from the meeting show that “the Board discussed the media strategy of the legislation”.

Labrador Jimenez has also played a central role in drafting and promoting Johansson’s regulation, the same proposal that WeProtect is actively campaigning for with EU funding. And next to him on the board sits Thorn’s Julie Cordua, as well as government officials from the US and Britain [the latter currently pursuing its own Online Safety Bill], Interpol, and a United Arab Emirates colonel, Dana Humaid Al Marzouqi, who chairs or participates in numerous international police task forces.

Between 2020 and 2023, Johansson’s Directorate-General awarded almost 1 million euros to WeProtect to organise the June 2022 summit in Brussels, which was dedicated to the fight against CSAM and activities to enhance law enforcement collaboration.

WeProtect did not reply directly to questions concerning its funding arrangements with the Commission or to what extent its advocacy strategies are shaped by the governments and stakeholders sitting on its policy board.

In a statement, it said it is led “by a multi-stakeholder Global Policy Board; members include representatives from countries, international and civil society organisations, and the technology industry.”

The financing


Photo: Courtesy of Solomon.

Another member of the WeProtect board alongside Labrador Jimenez is Douglas Griffiths, a former official of the US State Department and currently president of the Geneva-based Oak Foundation, a group of philanthropic organisations around the world providing grants “to make the world a safer, fairer, and more sustainable place to live”.

Oak Foundation has provided WeProtect with “generous support for strategic communications”, according to WeProtect financial statements from 2021.

From Oak Foundation’s annual financial reports, it is clear it has a long-term commitment to aiding NGOs tackling child abuse. It is also funding the closely linked network of civil society organisations and lobby groups promoting Johansson’s proposed regulation, many of which have helped build an umbrella entity called the European Child Sexual Abuse Legislation Advocacy Group, ECLAG.

ECLAG, which launched its website a few weeks after Johansson’s proposal was announced in May 2022, acts as a coordination platform for some of the most active organisations lobbying in favour of the CSAM legislation. Its steering committee includes Thorn and a host of well-known children’s rights organisations such as ECPAT, Eurochild, Missing Children Europe, Internet Watch Foundation, and Terre des Hommes.

Another member is Brave Movement, which came into being in April 2022, a month before Johansson’s regulation was rolled out, thanks to a $10.3 million contribution by the Oak Foundation to Together for Girls, a US-based non-profit that fights sexual violence against children.

Oak Foundation has also given to Thorn – $5 million in 2019. In 2020, it gave $250,000 to ECPAT to engage “policy makers to include children’s interests in revisions to the Digital Services Act and on the impact of end-to-end encryption” and a further $100,000 in support of efforts to end “the online child sexual abuse and exploitation of children in the digital space”. The same year it authorised a $990,000 grant to Eurochild, another NGO coalition that campaigns for children’s rights in Brussels.

In 2021, Oak Foundation gave Thorn a further $250,000 to enhance its coordinating role in Brussels with the aim of ensuring “that any legislative solutions and instruments coming from the EU build on and enhance the existing ecosystem of global actors working to protect children online”.

In 2022, the foundation granted ECPAT a three-year funding package of $2.79 million “to ensure that children’s rights are placed at the centre of digital policy processes in the European Union”. The WeProtect Global Alliance received $2.33 million, also for three years, “to bring together governments, the private sector, civil society, and international organisations to develop policies and solutions that protect children from sexual exploitation and abuse online”.

In a response for this story, Oak Foundation said it does not “advocate for proposed legislation nor work on the details of those policy recommendations”.

It did not respond directly to questions concerning the implications of Johansson’s regulation on privacy rights. A spokesperson said the foundation supports organisations that “advocate for new policies, with a specific focus in the EU, US, and UK, where opportunities exist to establish precedent for other governments”.

‘Divide and conquer’

Brave Movement’s internal advocacy documents lay out a comprehensive strategy for utilising the voices of abuse survivors to leverage support for Johansson’s proposal in European capitals and, most importantly, within the European Parliament, while targeting prominent critics.

The organisation has enjoyed considerable access to Johansson. In late April 2022, it hosted the Commissioner in an online ‘Global Survivors Action Summit’ – a rare feat in the Brussels bubble for an organisation that was launched just weeks earlier.

An internal strategy document from November 2022 leaves no doubt about the organisation’s role in rallying support for Johansson’s proposal.

“The main objective of the Brave Movement mobilisation around this proposed legislation is to see it passed and implemented throughout the EU,” it states.

“If this legislation is adopted, it will create a positive precedent for other countries… which we will invite to follow through with similar legislation.”

In April this year, the Brave Movement held an ‘Action Day’ outside the European Parliament, where a group of survivors of online child sexual abuse were gathered “to demand EU leaders be brave and act to protect millions of children at risk from the violence and trauma they faced”.

Johansson joined the photo-op.

Survivors of such abuse are key to the Brave Movement’s strategy of winning over influential MEPs.

“Once the EU Survivors taskforce is established and we are clear on the mobilised survivors, we will establish a list pairing responsible survivors with MEPs – we will ‘divide and conquer’ the MEPs by deploying in priority survivors from MEPs’ countries of origin,” its advocacy strategy reads.

According to the Brave Movement strategy, conservative Spanish MEP Javier Zarzalejos, the lead negotiator on the issue in the parliament, has called for “strong survivors’ mobilisation in key countries like Germany”.

Brave Movement’s links with the Directorate-General for Migration and Home Affairs go deeper still: its Europe campaign manager, Jessica Airey, worked on communications for the Directorate-General between October 2022 and February 2023, promoting Johansson’s regulation.

According to her LinkedIn profile, Airey worked “closely with the policy team who developed the [child sexual abuse imagery] legislation in D.4 [where Labrador Jimenez works] and partners like Thorn”.

She also “worked horizontally with MEPs, WeProtect Global Alliance, EPCAT”.

Asked about a possible conflict of interest in Airey’s work for Brave Movement on the same legislative file, the European Commission responded that Airey was appointed as a trainee and so no formal permission was required. It did say, however, that “trainees must maintain strict confidentiality regarding all knowledge acquired during training. Unauthorised disclosure of non-public documents or information is strictly prohibited, with this obligation extending beyond the training period.”

Brave Movement said it is “proud of the diverse alliances we have built and the expert team we have recruited, openly, to achieve our strategic goals”, pointing out that last year alone one online safety hotline received 32 million reports of child sexual abuse content.

Brave Movement has enlisted expert support: its advocacy strategy was drafted by UK consultancy firm Future Advocacy, while its ‘toolkit’, which aims to “build a beating drum of support for comprehensive legislation that protects children” in the EU, was drafted with the involvement of Purpose, a consultancy whose European branch is controlled by French Capgemini SE.

Purpose specialises in designing campaigns for UN agencies and global companies, using “public mobilisation and storytelling” to “shift policies and change public narratives”.

Beginning in 2022, the Oak Foundation gave Purpose grants worth $1.9 million to “help make the internet safer for children”.

Since April 2022, Purpose representatives have met regularly with ECLAG – the network of civil society groups and lobbyists – to refine a pan-European communications strategy.

Documents seen by this investigation also show they met with members of Johansson’s team.

A ‘BeBrave Europe Task Force’ meeting in January this year involved the ECLAG steering group, Purpose EU, Justice Initiative and Labrador Jimenez’s unit within the Directorate-General. In 2023 the foundation that launched the Justice Initiative, the Guido Fluri Foundation, received $416,667 from Oak Foundation.

The Commission, according to its own notes of the meeting, “recommended that when speaking with stakeholders of the negotiation, the organisations should not forget to convey a sense of urgency on the need to find an agreement on the legislation this year”.

This coordinated messaging resulted this year in a social media video featuring Johansson, Zarzalejos, and representatives of the organisations behind ECLAG promoting a petition in favour of her regulation.

Disproportionate infringement of rights

Some 200 kilometres north of Brussels, in the Dutch city of Amsterdam, a bright office on the edge of the city’s famous red light district marks the frontline of the fight to identify and remove CSAM in Europe.

‘Offlimits’, previously known as the Online Child Abuse Expertise Agency, or EOKM, is Europe’s oldest hotline for children and adults wanting to report abuse, whether happening behind closed doors or seen on video circulating online.

In 2022, its seven analysts processed 144,000 reports, 60 per cent of which concerned illegal content. The hotline sends requests to remove the content to web hosting providers and, if the material is considered particularly serious, to the police and Interpol.

Offlimits director between 2015 and September this year, Arda Gerkens is deeply knowledgeable about EU policy on the matter. Yet unlike Thorn, she had little luck accessing Johansson.

“I invited her here but she never came,” said Gerkens, a former Socialist Party MP in the Dutch parliament.

“Commissioner Johansson and her staff visited Silicon Valley and big North American companies,” she said. Companies presenting themselves as NGOs but acting more like tech companies have influenced Johansson’s regulation, Gerkens said, arguing that Thorn and groups like it “have a commercial interest”.

Gerkens said that the fight against child abuse must be deeply improved and involve an all-encompassing approach that addresses welfare, education, and the need to protect the privacy of children, along with a “multi-stakeholder approach with the internet sector”.

“Encryption,” she said, “is key to protecting kids as well: predators hack accounts searching for images”.

It’s a position reflected in some of the concerns raised by the Dutch in ongoing negotiations on a compromise text at the EU Council, arguing in favour of a less intrusive approach that protects encrypted communication and addresses only material already identified and designated as CSAM by monitoring groups and authorities.

A Dutch government official, speaking on condition of anonymity, said: “The Netherlands has serious concerns with regard to the current proposals to detect unknown CSAM and address grooming, as current technologies lead to a high number of false positives.”

“The resulting infringement of fundamental rights is not proportionate.”

Self-interest

In June 2022, shortly after the roll out of Johansson’s proposal, Thorn representatives sat down with one of the commissioner’s cabinet staff, Monika Maglione. An internal report of the meeting, obtained for this investigation, notes that Thorn was interested to understand how “bottlenecks in the process that goes from risk assessment to detection order” would be dealt with.

Detection orders are a crucial component of the procedure set out within Johansson’s proposed regulation, determining the number of people to be surveilled and how often.

European Parliament sources say that in technical meetings, Zarzalejos, the rapporteur on the proposal, has argued in favour of detection orders that do not necessarily focus on individuals or groups of suspects, but are calibrated to allow scanning for suspicious content.

This, experts say, would unlock the door to the general monitoring of EU citizens, otherwise known as mass surveillance.

Asked to clarify his position, Zarzalejos’ office responded: “The file is currently being discussed closed-doors among the shadow rapporteurs and we are not making any comments so far”.

In the same meeting with Maglione, Thorn representatives expressed a “willingness to collaborate closely with COM [European Commission] and provide expertise whenever useful, in particular with respect to the creation of the database of indicators to be hosted by the EU Centre” as well as to prepare “communication material on online child sexual abuse”.

The EU Centre to Prevent and Combat Child Sexual Abuse, which would be created under Johansson’s proposal, would play a key role in helping member states and companies implement the legislation; it would also vet and approve scanning technologies, as well as purchase and offer them to small and medium companies.

As a producer of such scanning technologies, a role for Thorn in supporting the capacity building of the EU Centre database would be of significant commercial interest to the company.

Meredith Whittaker, president of Signal Foundation, the US not-for-profit foundation behind the Signal encrypted chat application, says that AI companies that produce scanning systems are effectively promoting themselves as clearing houses and a liability buffer for big tech companies, sensing the market potential.

“The more they frame this as a huge problem in the public discourse and to regulators, the more they incentivise large tech companies to outsource their dealing of the problems to them,” Whittaker said in an interview for this story.

Effectively, such AI firms are offering tech companies a “get out of responsibility free card”, Whittaker said, by telling them: “You pay us (…) and we will host the hashes, we will maintain the AI system, we will do whatever it is to magically clean up this problem.”

“So it’s very clear that whatever their incorporation status is, that they are self-interested in promoting child exploitation as a problem that happens “online,” and then proposing quick (and profitable) technical solutions as a remedy to what is in reality a deep social and cultural problem. (…) I don’t think governments understand just how expensive and fallible these systems are, that we’re not looking at a one-time cost. We’re looking at hundreds of millions of dollars indefinitely due to the scale that this is being proposed at.”

Lack of scientific input


Photo by Alexas_Fotos/Pixabay

Johansson has dismissed the idea that the approach she advocates will unleash something new or extreme, telling MEPs last year that it was “totally false to say that with a new regulation there will be new possibilities for detection that don’t exist today”.

But experts question the science behind it.

Matthew Daniel Green, a cryptographer and security technologist at Johns Hopkins University, said there was an evident lack of scientific input into the crafting of her regulation.

“In the first impact assessment of the EU Commission there was almost no outside scientific input and that’s really amazing since Europe has a terrific scientific infrastructure, with the top researchers in cryptography and computer security all over the world,” Green said.

AI-driven scanning technology, he warned, risks exposing digital platforms to malicious attacks and would undermine encryption.

“If you touch upon built-in encryption models, then you introduce vulnerabilities,” he said. “The idea that we are going to be able to have encrypted conversations like ours is totally incompatible with these scanning automated systems, and that’s by design.”

In a blow to the advocates of AI-driven CSAM scanning, US tech giant Apple said in late August that it is impossible to implement CSAM-scanning while preserving the privacy and security of digital communications. The same month, UK officials privately admitted to tech companies that there is no existing technology able to scan end-to-end encrypted messages without undermining users’ privacy.

According to research by Imperial College academics Ana-Maria Cretu and Shubham Jain, published last May, AI-driven client-side scanning systems could be quietly tweaked to perform facial recognition on user devices without the user’s knowledge. They warned of more vulnerabilities that have yet to be identified.

“Once this technology is rolled out to billions of devices across the world, you can’t take it back”, they said.

Law enforcement agencies are already considering the possibilities it offers.

In July 2022, the head of Johansson’s Directorate-General, Monique Pariat, visited Europol to discuss the contribution the EU police agency could make to the fight against CSAM, in a meeting attended by Europol executive director Catherine de Bolle.

Europol officials floated the idea of using the proposed EU Centre to scan for more than just CSAM, telling the Commission, “There are other crime areas that would benefit from detection”. According to the minutes, a Commission official “signalled understanding for the additional wishes” but “flagged the need to be realistic in terms of what could be expected, given the many sensitivities around the proposal.”

Ross Anderson, professor of Security Engineering at Cambridge University, said the debate around AI-driven scanning for CSAM has overlooked the potential for manipulation by law enforcement agencies.

“The security and intelligence community have always used issues that scare lawmakers, like children and terrorism, to undermine online privacy,” he said.

“We all know how this works, and come the next terrorist attack, no lawmaker will oppose the extension of scanning from child abuse to serious violent and political crimes.”

This investigation was supported by a grant from the IJ4EU fund. It is also published by Die Zeit, Le Monde, De Groene Amsterdammer, Solomon, IRPI Media and El Diario.

Serbia’s ‘Trust Deficit’ in Management of AI Needs Addressing

In a world increasingly driven by artificial intelligence, AI, public trust in the institutions developing and implementing this transformative technology is paramount.

From smart cities to personalized healthcare, AI has the potential to revolutionize every aspect of our lives. However, its rapid development raises ethical and societal issues. In Serbia, as in many other parts of the world, there is a palpable tension between the promise of AI and the public’s trust in the institutions at the forefront of its development and application.

A recent study has delved into the public’s trust in various actors involved in the management and development of AI for the public’s best interest. The results reveal the need for a comprehensive approach to address the trust deficit and foster a more inclusive and transparent AI ecosystem.

On the one hand, there is discussion in the scientific community about the application of AI. On the other, there is a gap in understanding public attitudes towards its use. Beyond the main ethical issues associated with the use of AI, there are also questions of people’s awareness and knowledge of it, and of their trust not only in AI but in those who develop and apply it. Addressing all of this is necessary to implement AI as successfully as possible.

Concern about AI fed by distrust in institutions generally


Photo by EPA-EFE/SASCHA STEINBACH

In Serbia, public distrust extends beyond the realm of AI and encompasses a broader skepticism towards political actors and the situation in the country in general. A history of political instability, corruption and a lack of transparency in decision-making processes has fueled this distrust. The erosion of democratic norms, which were weak to begin with, has contributed to a pervasive distrust in public institutions and political actors.

Moreover, the Serbian government has faced criticism for its handling of crises such as the COVID-19 pandemic and its perceived lack of accountability. These factors have compounded the mistrust and skepticism toward the government and other political actors. One example of damaged trust is the misreporting of COVID-19 deaths, first revealed by BIRN in the first year of the pandemic and later confirmed by a study published in Annals of Epidemiology showing that the number of deaths was more than threefold higher than officially reported.

Consequently, the Serbian public has a general sense of disillusionment, which extends to their perception of AI development and implementation. In the context of AI, this broader distrust manifests itself in skepticism toward the government’s ability to manage and develop AI that serves the public’s best interests. The results of the recent study reflect this sentiment, revealing a significant trust deficit in the government, the Ministry of Interior and the Ministry of Justice. It indicates a preference for non-governmental and international actors over national and governmental institutions in the development and management of AI.

US public also has misgivings


Photo by Pixabay

The research “US Public Opinion on the Governance of AI” (Zhang and Dafoe, 2020) found that American citizens have low trust in the government organizations, corporate institutions and decision-makers that are supposed to develop and apply AI in the public interest.

University researchers and the US military are the groups most trusted to develop and manage AI: 50 per cent of the US public express a fair or a great deal of confidence in university researchers, and 49 per cent in the US military. Americans express slightly less confidence in tech companies, non-profit organizations (e.g. OpenAI) and US intelligence organizations. They rate Facebook as the least trustworthy of all the actors.

Generally, the US public expresses greater confidence in non-governmental organizations than in governmental ones. Some 41 per cent of Americans express a great or fair amount of confidence in tech companies.

Individual-level trust in various actors to responsibly develop and manage AI does not predict one’s general support for developing AI, and institutional distrust does not predict opposition to AI development.

New Ipsos polling finds similar results regarding American mistrust of government and companies.

Serbs trust their government least of all


Photo by EPA/RALF HIRSCHBERGER

In Serbia, a study was conducted in 2021-2022 within the research project “Ethics and AI: Ethics and Public Attitudes towards the Use of Artificial Intelligence”.

The study aimed to examine various aspects of the use of AI, among them trust in different institutions to develop and use AI in the public’s best interest. Respondents were asked about: the government, the army, the RS Ministry of Interior, the Ministry of Health, the Ministry of Justice, public companies and local governments, researchers from universities, international research organizations (such as CERN), technology companies (e.g. Google, Facebook, Apple, Microsoft), and non-profit AI research organizations (like OpenAI).

The results showed that the public has the least trust in the government of Serbia, followed by the Ministry of Interior and the Ministry of Justice, while the most trusted are researchers from universities and international research organizations (such as CERN). Fully 70 per cent of respondents have no confidence in the government, and 66.4 per cent none in the RS Ministry of Interior.

Ranked in order, respondents have the most trust in university researchers, international research organizations, non-profit AI research organizations and technology companies, then significantly less in the Ministry of Health, the army, public enterprises and local governments, the RS Ministry of Justice, the Ministry of Interior and, least of all, the government.

Addressing the ‘trust deficit’ requires more transparency


Serbian President Aleksandar Vucic (L) talks during the Bled Strategic Forum in 2020 focused on cybersecurity, digitalisation and European security. Photo by EPA-EFE/IGOR KUPLJENIK

Addressing the trust deficit in Serbia requires a multifaceted approach that goes beyond AI and addresses the root causes of distrust in political actors and public institutions. Firstly, there is a need for greater transparency in decision-making processes across all levels of government. This can be achieved by implementing open government initiatives, public consultations, and increased access to information. Secondly, efforts must be made to tackle corruption and strengthen the rule of law. This includes implementing comprehensive anti-corruption measures, enhancing judicial independence, and ensuring accountability for public officials.

Lastly, in the context of AI, involving the public in the decision-making processes related to developing and implementing this technology is crucial. This can be achieved through public consultations, citizen assemblies, and other forms of participatory decision-making. Additionally, there must be a concerted effort to educate the public about AI’s potential benefits and challenges and to address their concerns transparently and openly.

In a world increasingly driven by AI, trust forms the bedrock of a socially and ethically responsible approach to harnessing the potential of this transformative technology. By addressing the trust deficit in Serbia and fostering a more inclusive and transparent approach to the development and implementation of AI, we can build a stronger and more resilient society better equipped to navigate the challenges and opportunities of the AI era.

What needs to be achieved is educating the public about artificial intelligence and increasing transparency and dialogue between the public, experts, the private sector, and the state. How can this be accomplished? The first step is education; the second is greater transparency in decision-making processes related to the implementation of AI. Action is needed in both directions: educating the general public in Serbia, and opening up decision-making to dialogue between the public on the one hand and the state and the private sector on the other.

 Understanding the nuances of public trust in this context is essential for developing policies and practices that are ethically sound, socially acceptable, and successful in harnessing the potential of AI for the betterment of society.

The trust deficit in Serbia, particularly in institutions developing and implementing AI, is a symptom of a broader crisis of confidence in political actors and public institutions. As AI continues to transform every aspect of our lives, it is crucial to address this trust deficit to ensure that the development and implementation of this technology are ethically sound, socially acceptable, and, ultimately, successful.

Marina Budić is a Research Assistant at the Institute of Social Sciences in Belgrade. She is a philosopher (ethicist) who conducted the funded research project Ethics and AI: Ethics and Public Attitudes towards the Use of Artificial Intelligence in Serbia.


The opinions expressed are those of the author only and do not necessarily reflect the views of BIRN.

Turkish Courts Remove 201 Online Content Items on Convicted Fraudster

A convicted Turkish fraudster, Yasam Ayavefe, whose questionable acquisition of Greek citizenship was reported by BIRN, managed to get 201 online content items in Turkey removed under three court orders, the Free Web Turkey project of the Media and Law Studies Association, MLSA, revealed.

The removed content includes news articles, social media posts and even the official website content and social media posts of the Turkish Police.

An investigation by BIRN and its Greek media partner Solomon in September 2022 revealed that the Turkish businessman acquired honorary Greek citizenship in June 2021 via his political ties.

“Ayavefe’s decision to block access is not a surprise. People who face allegations of corruption, irregularity or any other illegality, immediately apply to the court to block access to news and social media posts containing those allegations,” Ali Safa Korkut, Project Coordinator at the MLSA, explained.

Korkut told BIRN that under Turkish laws it is easy to get online content removed.

“The courts do not show any care when evaluating these access ban applications. In many access-blocking decisions, the URLs for which access-blocking is requested are not checked at all by the judges. Access-blocking decisions are made even for content that has nothing to do with the person requesting access-blocking,” Korkut said.

However, he underlined that Ayavefe’s case is different to most others.

“What is surprising about the decision taken regarding Ayavefe is that the decision was made to block access even for the content shared on the official website and social media accounts of the Turkish Police, stating that Ayavefe was caught by Interpol,” Korkut added.

Some of the news content that Ayavefe managed to get removed with court orders. Photo/Illustration: Media and Law Studies Association, MLSA.

The MLSA report also noted Ayavefe’s request to BIRN to delete several pieces of content.

In July, Bener Ljutviovski, a representative of Ayavefe, asked BIRN to delete its report about him. He also urged BIRN to delete articles about cyberattacks that targeted the Balkan Insight website after the publication of the investigation.

Ljutviovski insisted that Ayavefe had “nothing to do with these accusations” and was being accused “without any proof” of being connected to the DDoS attacks on BIRN’s Balkan Insight website and Solomon’s site after the publication of the investigation.

He called on BIRN to take down the articles in line with Turkish court rulings even though one of those judgments clearly stated that domestic courts in Turkey cannot remove online content of “foreign origin”.

He also appeared to offer BIRN financial incentives in return for compliance: “My client Dr Yasam Ayavefe has an advertising company, if you help us in this case we can provide advertising service to your organisation, so you can grow to bigger organisation. We would love to cooperate with you,” he wrote.

BIRN declined Ljutviovski’s offer and rejected his repeated demands to remove the articles about Ayavefe.

Surveillance States: Monitoring of Journalists Goes Unchecked in Central, South-East Europe

“Oh my God, I was monitored, what did they have access to?”

This is what Hungarian investigative journalist Szabolcs Panyi thought to himself back in 2020, when he and his colleagues were working on a report about a leak of tens of thousands of phone numbers of people placed under surveillance using the high-tech Pegasus spyware, which infects mobile phones.

One of the numbers on the Pegasus list was his own. Panyi found himself in the unusual situation of being one of the victims in the story he was investigating.

“I didn’t have much time to think about it, because I had to work on the story and I had to talk to others who were in a similar situation. So, it actually helped me in processing the whole thing, because I could see that it wasn’t [just] directed against me, I was just part of a much bigger picture,” he told BIRN.

This bigger picture is made clear by BIRN’s survey of 15 countries in Central and South-East Europe, which identified 28 cases of surveillance of individual journalists or larger numbers of journalists over the past decade. It’s not clear how many more remain undiscovered and how much active surveillance is still ongoing.

Based on interviews with journalists who were targeted and research into other cases, BIRN was able to establish that:

  • In the vast majority of cases, states are proven or suspected to be behind the surveillance.
  • Surveillance operations do not only target high-profile investigative journalists covering organised crime.
  • Despite new spyware like Pegasus that targets mobile phones, ‘traditional’ types of surveillance, such as wiretapping or physical monitoring, are still the most popular methods being used.
  • In almost two-thirds of cases, police or prosecutor’s offices have opened official investigations into the surveillance.
  • None of the cases has resulted in a court verdict or in anyone being held accountable.

Investigators and judges are currently dealing with ongoing cases of surveillance against journalists in Greece, Moldova, Slovakia and North Macedonia. In North Macedonia, the country’s former secret police chief is awaiting retrial for the illegal wiretapping of thousands of people, among them journalists.

The spy in the phone


Hungarian journalist Szabolcs Panyi speaking to BIRN about how he was monitored. Photo: BIRN.

The UK newspaper The Guardian has described Pegasus as “perhaps the most powerful piece of spyware ever developed – certainly by a private company”.

Manufactured by Israeli cybersecurity firm NSO, it can be installed on mobile phones by exploiting their vulnerabilities and can then record or harvest messages, photographs, videos and calls.

An international investigation led by a non-profit media organisation, Forbidden Stories, established in 2021 that it has been used against more than 50,000 phone numbers in more than 20 countries around the world.

The data showed that at least 180 journalists have been targeted, in countries including France, Morocco, Mexico, India and Hungary.

Panyi, who works for Hungarian media outlets Direkt36 and VSquare, which had access to the Forbidden Stories data, said that, of these 50,000 phone numbers, there were “more than 300 numbers that we suspected the Hungarian user [of Pegasus], which we later learned was the Special Service for National Security, had selected for surveillance”.

Panyi also uncovered the fact that Hungary had spent at least 6 million euros of taxpayers’ money on procuring Pegasus spyware in 2017-18.

He noted that because Pegasus is so invasive, “this type of surveillance violates someone’s privacy rights in a way that can only be justified if someone committed a very serious crime, or there is a very strong suspicion of this type of crime”.

“And what we have seen is that there are dozens of people who appear in this list with their phone number who have not been prosecuted in any way, who are not involved in any suspicious activity,” he said.

The Pegasus spyware was able to exploit vulnerabilities in mobile phones’ operating systems, installing surveillance applications and programs that turned journalists’ phones into listening devices.

Pegasus and other spyware programs, such as Predator, can infect devices via so-called ‘zero-click’ attacks, which require no action from the user. They can also install malware through links sent to phone users. One such link was sent by text message to the Greek investigative journalist Thanasis Koukakis in July 2021.

“Thanasis, do you know about this issue?” the message said, containing a link that, according to Koukakis, suggested that it was “a piece of banking news”.

“I clicked on the link and at that moment I was basically infected with Predator, and they had access to my mobile until September 24, 2021, for two-and-a-half months,” he told BIRN.

Predator is a type of spyware similar to Pegasus but created by a company called Cytrox in North Macedonia.

Koukakis started to suspect something was not right when he noticed that his phone was overheating and its battery was “dying too fast”. He started to check with his sources and eventually got in touch with Citizen Lab, an internet watchdog organisation at the University of Toronto, which helped him prove that he was under surveillance.

The use of spyware may be more widespread than has been documented. In December 2020, Citizen Lab reported that Serbia’s Security Information Agency, BIA, uses software from the Israeli company Circles – part of the NSO Group that produced Pegasus – which enables the user to locate every mobile phone in the country in seconds.

So far, it has not been confirmed that the software, which Citizen Lab believes over 20 other countries have acquired, has been used to target journalists.

‘Traditional methods’: cameras and bugs


Photo illustration: Unsplash/Denny Müller.

Although the use of digital surveillance has increased over the years, ‘traditional’ methods are still more commonly used to monitor journalists, according to BIRN’s findings.

State authorities trying to find out about journalists’ sources or uncover compromising material mostly use wiretapping and physical surveillance: bugging phones and apartments and following people on the streets.

These methods are sometimes used on their own but more often as a part of a package of surveillance measures in which journalists are simultaneously followed and their conversations are recorded, or their devices monitored.

In February 2020, Romanian journalist Alexandru Costache, who works at the country’s public broadcaster, TVR, spent an evening at home socialising with friends who included journalists and judicial officials. The next morning, Costache got a call.

“A friend called me: ‘Turn on the TV, turn on RTV [România TV], look what’s on RTV.’ It was us,” Costache told BIRN.

A subsequent investigation established that 11 people were involved in recording Costache and his friends’ conversation, mainly in the room where they met, “but also in the hallway towards the toilet, we were even filmed and photographed inside the toilet, as well as outside, in the yard, on the street”.

“They followed me until I got home, because I live nearby,” he added. The people carrying out the surveillance were posing as journalists for an online media outlet.

The Bucharest Military Prosecutor’s Office launched an investigation but the culprits were not identified and the case was dropped earlier this year. The specific reason for the surveillance remains unknown.

The use of spyware has been documented in several European Union member states, including Greece, Hungary and Poland.

Ricardo Gutierrez, general secretary of the European Federation of Journalists, EFJ, told BIRN that he hopes that the EU will ultimately take action to curb the activities of member states that spy on journalists, by “not allowing governments to use Predator, Pegasus and that kind of tool against anyone, not just journalists”.

“It’s very important to understand that if you allow such surveillance of journalists, then it means that everybody can be under surveillance tomorrow,” Gutierrez said.

Old tactics persist in post-Communist states


Serbian journalist Stevan Dojcinovic. Photo: BIRN.

The majority of the Central and South-East European countries surveyed by BIRN were formerly ruled by Communist regimes that used surveillance widely. During the post-Communist transition period, the security services in some of these countries were not fully reformed.

In Serbia, after the fall of authoritarian President Slobodan Milosevic in 2000, the security apparatus continued to use many of the same methods, including using compromising material obtained by subterfuge.

“Sado-masochistic French spy!” was just one of the front-page headlines in the government-affiliated tabloid Informer targeting Serbian investigative journalist Stevan Dojčinović in 2016.

Dojčinović said the tabloid’s sensationalist report was a part of a campaign against him that was launched when his investigative media outlet KRIK (Crime and Corruption Reporting Network) investigated a property owned by then Prime Minister Aleksandar Vučić, who is now the country’s president.

The headline referred to explicit photographs published by Informer that showed Dojčinović participating in a private ‘suspension bondage’ event attended by seven or eight people.

Dojčinović said that the Serbian intelligence service, BIA, thought it could “destroy” him with the publication of the explicit photographs, showing that it “still has that Communist old-school brain” that relies on targeting people’s private lives to exert pressure.

He decided to bring a case against Informer for breach of privacy – and his lawsuit revealed that the BIA was behind the smear operation.

“When I sued [the tabloid], in the response to the lawsuit their lawyer wrote that everything I said wasn’t true, and then he said that all the information that was published [by Informer] was correct and that in order to confirm the information, the court should address the BIA,” Dojčinović recalled.

“That way, they essentially gave me the first bit of material that I could get in order to initiate some further processes against the BIA,” he added.

Dojčinović brought a complaint to Serbia’s Ombudsman. The case is still in progress.

‘The scale of monitoring was enormous’


Alena Zsuzsova at a preliminary hearing at the Judicial Academy building in Pezinok, Slovakia in December 2019, before her trial over the murder of journalist Jan Kuciak and his fiancee Martina Kusnirova. Photo: EPA-EFE/JAKUB GAVLAK.

In the digital era, surveillance isn’t only carried out by intelligence services, domestic and foreign, but by companies and criminal organisations, too.

One of the most notorious cases of surveillance of journalists in Europe in recent years was instigated by private individuals, and culminated in the killing of a reporter.

Marian Kočner, a powerful Slovak businessman with political ambitions, recruited a team of ‘spy commandos’ – a group of former police and intelligence agents – to spy on various people, including eight prominent journalists, in order to uncover their “dirty secrets”. The material could then be used to blackmail or embarrass them via an online outlet he had set up called Na Pranieri, according to witnesses at a subsequent trial.

One of the journalists targeted for surveillance was Ján Kuciak, who regularly wrote critical investigative stories about Kočner’s business dealings for the Slovak news website Aktuality.

Kuciak and his architect fiancée, Martina Kusnirova, were then shot dead on February 21, 2018, at their home in Veľká Mača, some 65 kilometres from the capital, Bratislava.

The previous year, Kočner had threatened the reporter in a phone call, but although Kuciak had reported the threat, the authorities declined to investigate or call the businessman for questioning.

A Slovak court in May this year acquitted Kočner, after a retrial, of ordering the murders of Kuciak and Kusnirova.

“The court is convinced that the accused Kočner was worried about the journalist Ján Kuciak, especially at a time when he wanted to enter politics… [but] we cannot convict someone solely based on the motive,” judge Ruzena Sabova said in the original trial. Kočner’s associate, Alena Zsuzsova, was found guilty of ordering the killings.

As regards the ‘spy commando’ surveillance team, Slovak police in March 2019 initiated criminal procedures against ‘unknown perpetrators’ who had monitored the journalists and other people from early 2017 to May 2018, according to the Slovak Spectator. No indictment has yet been issued in the case.

But Laura Kellöová, an investigative journalist who continued to work on Kuciak’s reports at Aktuality after he was murdered, said she was shocked at the intensity of the surveillance: “The scale of the monitoring of journalists and the illegal extraction of information about them from police databases was enormous.”

“I thought that this kind of thing had ended in Slovakia after the end of Communism,” Kellöová said.

‘It’s not part of the job’


Greek journalist Thanasis Koukakis. Photo: BIRN.

The majority of the journalists who spoke to BIRN about their experiences of being monitored said they were more worried about whether the surveillance could have affected their contacts or revealed details of stories they were investigating than about the impact on their own personal lives.

“What concerns me very much is how much my sources have been affected, the people with whom I was in contact during the period I was under surveillance,” said Thanasis Koukakis, financial editor at CNN Greece and Newsbomb.gr, who was monitored by the Greek National Intelligence Service in 2020, and with Predator spyware a year later.

Koukakis said that, because of the surveillance, some of his sources in the Greek finance ministry and banking system were moved from sensitive positions without explanation. “In retrospect, of course, it all makes sense,” he said.

EFJ general secretary Gutierrez said that journalists often display two problematic tendencies when it comes to surveillance. One is thinking that they’re not covering “important issues”, so they would never be put under surveillance; the other is thinking that being monitored by the authorities is just part of the job.

“No, it’s not part of the job to be under surveillance, it’s not part of the job to be arrested and it’s not part of the job to be intimidated,” Gutierrez asserted.

Police or prosecutor’s offices launched official investigations into the incidents of surveillance in almost two-thirds of the 28 cases analysed by BIRN.

None of these cases has resulted in a court verdict so far. In almost half the cases in which an investigation was launched, the probe is still ongoing.

One such unresolved case is the mass wiretapping uncovered in North Macedonia in 2015, which allegedly involved senior police officials. The secret police chief at the time, Saso Mijalkov, was convicted of involvement but in 2021, the appeals court overturned the verdict and Mijalkov is awaiting a retrial.

According to the charges, between 2008 and 2015, when the country’s authoritarian Prime Minister Nikola Gruevski was in power, the defendants illegally tapped more than 4,200 telephone numbers without obtaining court orders. Many of those who were bugged were journalists.

One of those whose phone was tapped was the prominent journalist Vasko Popetrevski, editor-in-chief of ‘360 Degrees’, a television show and website.

After an investigation was launched following the ousting of Gruevski’s government, Popetrevski found out that “literally day and night for seven-and-a-half years, all my phones were being listened to 24 hours a day”.

He is hoping that Mijalkov will be convicted under a final verdict before the statute of limitations expires in his case in 2025, and that the victims of surveillance will receive compensation – “as a message that this should not and must not happen again”.

‘Abusive use of national security’


Photo illustration: Unsplash/Camilo Jimenez.

Some journalists who have been tapped, bugged and followed told BIRN that they now take greater precautions to avoid being monitored, particularly considering the increasing sophistication of electronic surveillance.

Koukakis said he uses encrypted apps on his phone and tries to meet contacts face-to-face, if he can.

“And, of course, my cellphone is checked regularly to see whether it is free of spyware or not,” he added.

Bartosz Węglarczyk, editor-in-chief of ONET, Poland’s largest online news platform, who was put under surveillance in the late 2000s, said his media company employs digital security professionals to protect journalists from surveillance, working under the supposition that it could happen again at any time.

“We know it’s happening. So that’s how we act,” Węglarczyk said.

At Aktuality in Slovakia, Laura Kellöová expressed a similar view: “I always have to assume that someone might be listening in or watching me and has an interest in finding out who I’m communicating with, what I’m working on, who I’m talking to and who my sources are.”

Kellöová said that after the murder of Ján Kuciak, Aktuality’s journalists “communicate almost exclusively through encrypted applications, not through regular phone lines”.

But some European countries have considered expanding their security services’ surveillance powers since the start of Russia’s full-scale invasion of Ukraine, in reaction to perceived threats from Russian espionage.

In Poland, a draft law on electronic communications intended to give the security services access to any material sent or received by email or other online communication tools is awaiting approval from parliament.

Simultaneously, the European Union is exploring the possibility of restricting digital surveillance with the proposed European Media Freedom Act, the first ever regulation of media freedom at EU level, which is currently in the process of adoption.

However, based on the draft legislation and the European Council’s position on the law, it appears that this opportunity to curb digital surveillance is likely to be missed and the situation may actually become worse.

Gutierrez explained that, following intervention by France, the draft legislation has been changed to allow surveillance of journalists, if there is a “need for the state to ensure national security”.

The EFJ is worried about the possible “abusive use of this concept of national security to impose surveillance, to spy on journalists”, he said.

“It’s a kind of legalisation of spying on journalists for any reason because, you know, anything can be interpreted as a national security issue; it’s vague, so it’s easy for states to try to justify that kind of thing,” he warned.

Many of the journalists and experts who spoke to BIRN were convinced that, despite the cases that have been exposed and the subsequent legal challenges, the monitoring of reporters continues across Central and South-East Europe.

Saška Cvetkovska, editor-in-chief of the Investigative Reporting Lab in North Macedonia, noted that despite ongoing prosecutions of former officials for the mass surveillance scheme that was uncovered in 2015, material apparently obtained by wiretapping continues to be published.

“Daily, non-stop, from 2015 to today, unauthorised recorded conversations of presidents of political parties, businessmen, MPs and journalists have been appearing on various [online] platforms and in the media,” Cvetkovska said.

Hungarian investigative reporter Szabolcs Panyi, who discovered that he was being monitored with Pegasus spyware, said he also believes that surveillance is more widespread than so far revealed – and that the way the authorities and the general public have responded to reports that journalists have been monitored is also concerning.

“The whole reaction of the Hungarian state doesn’t show that they take the privacy, legal and human rights concerns that come up in this kind of surveillance seriously, but try to sweep it all off the table as a political scandal,” Panyi said.

“And the very, very sad thing is that I don’t see that Hungarian public opinion, Hungarian society itself, is showing any particular resistance when confronted with information that the Hungarian state can monitor anyone at any time.”

The interviews for this article were conducted by Claudia Ciobanu, Katarína Kozinková, Delia Marinescu, Sinisa Jakov Marušić, Eleni Stamatoukou, Milica Stojanović and Zita Szopkó.

See all the interviews for BIRN’s Surveillance States project, which examines the monitoring of journalists in 15 countries in Central and South-East Europe, on our special focus page.

Men Only: Kosovo’s Public Broadcaster Snubs High-Scoring Women for Top Posts

When the Kosovo Assembly, dominated by the ruling Vetëvendosje party, elected the board of Radio Television of Kosovo, RTK, in July 2021, it promised that the public broadcaster would undergo profound reform, leading to its complete reorganization.

A BIRN investigation shows that this reorganization was accompanied by controversial appointment processes that prevented three women from getting the top management positions that they had been selected for.

Longtime RTK workers Ilire Zajmi, Flora Durmishi and Mihrije Beiqi, two journalists and a lawyer, scored best in three different recruitment processes for leading positions – but the posts were instead taken by men who’d received lower scores in the evaluations of the recruiting commissions.

Men currently occupy the top positions in RTK, including board chairman, general director, director of television, director of radio and administrative director.

The broadcaster denies allegations of gender discrimination, stating that meritocracy prevails and that professional women are continuously supported.


RTK. Photo: BIRN

Job contest deemed ‘tainted’ after she came first

This was not the first time Ilire Zajmi had encountered recruitment problems.

At the end of 2022, after 15 years running the Center for Professional Development at RTK, Zajmi applied for the position of Deputy General Director, the second most important position in the RTK management hierarchy.

She says she was excited by the news that she was rated the best candidate in the recruitment process.

“It was the first time I was applying for a top managerial position, and when I heard the results, I felt that, finally, someone was acknowledging my work,” Zajmi said.

“On December 20, one day after I was placed first in the job interview, Besnik Boletini [chairman of the RTK Board], invited me to a meeting and told me: ‘This job contest was tainted and there were rules violations,’” Zajmi explained.

Arta Avdiu, the chairperson of the panel, told BIRN that she had ranked Zajmi first on the list “based on her experience at RTK and taking into account her proficiency in foreign languages”. Zajmi speaks English, Italian, Turkish, French and Serbian in addition to Albanian.

Zajmi said she insisted on knowing what these so-called violations were, but Boletini did not provide any details.

The meeting took place in the most prominent building in Pristina, the Radio Kosovo triangle block, whose tall antenna overlooks the capital.

Following that meeting, the interview results were annulled and interviews were organised for a second time, by the same commission, because the candidate who came third, Rilind Gërvalla, had filed a complaint.

Boletini, who told BIRN that he does not interfere in RTK competitions and recruitment, does not deny meeting Zajmi, but claims it was a chance meeting.

On the other hand, Zajmi provided BIRN with correspondence showing Boletini had asked for the meeting.

Boletini, however, said that “the recruitment process for these positions is not handled by us on the board”.

“The entire procedure is managed by the management, where selection committees are formed. What we are trying to create, and what I consider progress, is a meritocracy where the most meritorious are chosen,” Boletini added.

After her path to the post of Deputy General Director was blocked, Zajmi turned her attention to the position of Head of Online Media, which she had held in an acting capacity for more than a year.

When the recruitment panel opted for another woman, Zajmi took the case to the Labour Inspectorate, which ruled in her favour and fined RTK 1,500 euros, citing several violations during the process.

Zajmi said she is convinced that “women at RTK are not encouraged to be promoted in their careers.

“The opposite happens. Women with professional backgrounds, dignity, work experience and ego are discriminated against and fought against,” Zajmi told BIRN on August 21.

Doarsa Kica Xhelili, LDK MP. Photo: BIRN

Sidelined by male journalist who used homophobic language

Flora Durmishi, who has worked as a radio journalist for more than four decades, says that she was also “stepped over” for the position of Director of Radio at RTK.

She said that right at the start of her application process she received a troubling message that she did not believe at first, but which turned out to be accurate. She said that Shkumbin Ahmetxhekaj, the Director General of RTK, had told her: “Flora, the Board doesn’t want you.”

She nevertheless applied for the Director of Radio post that opened at the end of 2022.

Ahmetxhekaj confirmed that a conversation with Durmishi took place but says it was a friendly conversation, “referring to the fact that the board had rejected another of my colleagues whom I considered important for my team”.

Although Durmishi came first in the recruitment process, five of the 11 board members voted against her.

When the job contest for Director of Radio reopened a few months later, the board chose Arsim Halili, a journalist who had been reprimanded by the Kosovo Press Council, KPC, in 2016 for using homophobic language – comments for which he later apologized.

“After I came first in the contest, I was surprised and disappointed that not only the men [Boletini and Driton Hetemi] but also three women on the RTK Board [Arta Berisha, Deputy Chair of the RTK Board, Albulena Mehmeti, and Fatime Lumi] decided without any justification not to support me,” Durmishi said.

When Albin Kurti’s Vetëvendosje won a majority in parliament in February 2021, Doarsa Kica Xhelili, then an MP for the party, was appointed to the panel for selecting RTK board members.

She wanted to impose a 50/50 gender quota during the selection process. One year later, Kica Xhelili switched her political allegiance to the opposition Democratic League of Kosovo, LDK.

Speaking about the latest setbacks for women at RTK, Kica Xhelili said: “It is unfortunate that the RTK Board is not respecting the gender quota by which they were elected themselves.”

“We have not fought that battle for the Board to forget the basic principles on which they were elected, which were meritocracy, complete avoidance of political interference and gender equality,” she told BIRN.

The Pristina-based women’s rights organization Kosovo Women’s Network, KWN, also criticized RTK.

“As the only public broadcaster, RTK has also an emancipating responsibility to be an example of the respect of law and promotion of gender equality in Kosovo society,” KWN said on September 1, the day BIRN Kosovo published its findings.

Disappointment also came quickly for Mihrije Beiqi, a longtime RTK staffer who applied for the post of Head of Common Services, which oversees all administration and is the fourth most important position at RTK.

She received the most votes in the recruitment process. But when her name was sent to the Board for the final vote, its members did not support her candidacy.

Following this rejection, Beiqi filed a complaint with the Pristina Basic Court and is now awaiting the court proceedings.

“They [the Board] deliberately discriminated against me in favour of a man with lower managerial experience, but who came from the government to be employed in the media,” Beiqi said.

Beiqi was referring to Alban Fetahu, who was working in the Ministry of Finance and was later chosen to be head of administration in RTK.

BIRN filed a Freedom of Information request for Fetahu’s CV to confirm his managerial experience, but neither RTK nor Fetahu had responded by the time of publication.

RTK General Director Shkumbin Ahmetxhekaj on August 21 denied allegations of gender discrimination in recruitment processes at RTK.

“This year, in five internal job contests for leadership positions, four of them were won by women, all of whom have been part of RTK: Heads of the Legal Office, Marketing, Online Media, and International Relations,” Ahmetxhekaj said.

Arta Avdiu, who served as Acting Director of TV at RTK from September 2022 until March this year, when a man, Rilind Gërvalla, was selected for the position, told BIRN that RTK does not encourage women to advance.

“At the last management meeting, around May, when I was present, I made a comment in front of everyone, saying that what we have is ‘macho management’.

“At the start of my tenure as director, General Director Ahmetxhekaj had declared that during his stewardship at the RTK, there would be more women in management. Unfortunately, that did not happen,” Avdiu said.

Serbia’s B92 TV Wins Freedom of Speech Case at European Court

The European Court of Human Rights, ECHR, in Strasbourg ruled on Tuesday in favour of Belgrade-based TV B92 and against Serbia in a case centred on the station’s reporting of allegations of abuse of office by the assistant health minister in 2011.

The ECHR said it found that Serbia had committed a violation of Article 10 (freedom of expression) of the European Convention on Human Rights.

“The court found that, overall, the applicant company [TV B92] had acted in good faith and with the diligence expected of responsible journalism,” the court decision said.

Amid a controversy over the procurement of swine flu vaccines in 2011, the assistant health minister at the time, Zorica Pavlovic, was accused of abuse of office.

TV B92 reported that 12 names, including assistant health minister Pavlovic, had disappeared from a police list of suspects of abuse of office in relation to the controversy, allegedly because of pressure exerted by the Special Prosecutor on the Interior Ministry.

The reporting was based on an investigation by a team of B92 journalists from the ‘Insajder’ TV show, and in particular on a note obtained from two police officers that had been drawn up by a division of the Fight Against Organised Financial Crime Department.

In April 2012, assistant minister Pavlovic, who was named in the note, instituted civil proceedings against B92.

Serbian courts found that B92’s TV broadcasts and online articles had damaged Pavlovic’s reputation, and ordered it to pay 1,750 euros in non-pecuniary damages and 900 euros for costs. It was also ordered to remove the article from its website and to publish the judgment against it.

All the Serbian courts that dealt with the case, and ultimately the Constitutional Court in 2016, found that B92 had failed to check its facts with due diligence, particularly with regard to the allegation that the criminal complaint against Pavlovic had not been filed because of pressure on the Interior Ministry.

But the ECHR’s verdict said that the courts had “gone too far in their criticism of the applicant company’s fact-checking”.

“The company had based its reporting on a note obtained from police officers about the investigation into the controversy, and there had been no doubts over the note’s credibility. The language used in the reporting had been accurate and not exaggerated, and all the parties had been contacted to obtain their version of events,” the verdict said.

The ECHR said that Serbia must also pay the applicant 2,740 euros in pecuniary damages, 2,500 euros in non-pecuniary damages and 2,400 euros for costs and expenses.

RTV B92, founded in 1989 as a radio station, was a rare source of media resistance to authoritarian Serbian leader Slobodan Milosevic’s nationalist regime in the 1990s.

It was banned by the state and achieved cult status among its audience. After Milosevic was ousted in 2000, B92 continued broadcasting, including the well-respected ‘Insajder’ investigative programme.

‘Insajder’ broadcast the story on the procurement of swine flu vaccines as part of a series of shows called ‘Buying and Selling of Health’.

In September 2015, the Greek ANT1 Group became the majority shareholder of TV B92, and the media outlet’s editorial focus changed.

The creator of the ‘Insajder’ programme, Brankica Stankovic, left B92 in 2015 and started her own production company and website, and has continued to work in investigative journalism.

Balkans Grapples with Escalating Cyberviolence Against Women

Throughout August, an escalation of digital violence was observed across several countries in the region, including Bosnia and Herzegovina, Hungary, Albania and Serbia.

Incidents ranged from a shocking live-streamed murder in Bosnia and content moderation challenges on digital platforms to gender-based violations in Hungary and Albania.

Human rights activists in Serbia faced threats and harassment, underscoring the pervasive nature of online abuse in the region.

Digital ‘spectacularization’ of violence in Gradacac murder

In an incident that shook Bosnia, three people were killed and several others injured in the town of Gradacac on August 11. The horrifying event unfolded when a man live-streamed himself on Instagram, opening the broadcast with the chilling words: “You will see what a live murder looks like.” He then shot a woman dead on camera and subsequently claimed to have killed more people, saying he had also attacked a police officer. The shooter was identified as Nermin Sulejmanovic.

The incident highlighted the role of social media platforms, specifically Instagram, in broadcasting violence. The gunman used Instagram during his escape, narrating the unfolding events to thousands of viewers. It took several hours for the platform to remove the disturbing content, leaving countless users exposed to the graphic material. The incident’s impact on the mental health of citizens, especially those closely connected to the victims, has sparked concerns among experts.

A spokesperson for Meta conveyed their concern, emphasizing that they are actively collaborating with Bosnia’s authorities to support ongoing investigations. Meta’s spokesperson stated: “We are deeply saddened by the terrible attack in Bosnia, and our thoughts are with the victims and their loved ones. We are in contact with the authorities in Bosnia to help support their investigations.” They further noted: “We will remove any content that glorifies the perpetrator or the attack whenever we become aware of it.”

The role of social media in cases of online feminicide and the propagation of harmful content cannot be overstated. These platforms have become powerful tools for both documenting and sensationalizing acts of violence against women. The rapid dissemination of disturbing content across social networks not only has a profound impact on the mental health of viewers but also raises ethical questions about responsible reporting and content moderation.


People attend a peaceful protest march in Sarajevo, Bosnia and Herzegovina, 14 August 2023. Photo: EPA-EFE/FEHIM DEMIR

Content moderation challenges

The proliferation of violent and unsettling content on social media platforms has ignited a debate over the efficacy of content moderation. In a recent development, France’s media oversight body has issued a call to digital platforms, demanding heightened efforts to combat hate speech and violent materials. These platforms must bolster their investments in content moderation, enhance reporting mechanisms and embrace greater transparency to align with evolving European Union digital regulations.

This urgency in addressing online content issues has been magnified in the wake of the Gradacac tragedy, which served as a reminder that different platforms wield varying degrees of effectiveness when confronting such challenges.

TikTok and Telegram, in particular, have found themselves under intense scrutiny due to their perceived shortcomings in content moderation.

Experts argue that platforms like TikTok and Telegram face substantial challenges when it comes to content moderation, often displaying slower response times and less efficient mechanisms compared to industry leaders like Meta. Tijana Cvjetićanin, a media analyst, pointed out that these platforms lack efficient mechanisms for reacting to disturbing materials, citing a past incident where they failed to control the dissemination of content related to a school shooting.

Sections of the live video in which Sulejmanovic discusses the crime he perpetrated persist on TikTok, and the video depicting the murder itself remains largely accessible on Telegram. Shockingly, close to 80,000 people viewed the unfiltered recording of the murder in the Telegram group titled “LEVIJATAN – No censorship”. This underscores the challenge of effectively regulating content on platforms like TikTok and Telegram, which appear to take a laxer approach to content moderation than larger social media entities.

The case in Gradacac has exposed not only the weaknesses in content moderation but also the speed with which harmful content spreads across online platforms.

In response to online glorification of the Gradacac shooter, the Interior Ministry in Bosnia’s Federation entity announced an investigation into individuals who glorified the perpetrator’s actions on social media. Ervin Musinovic, from the Federation Interior Ministry, said anyone found to have sent messages in support of the murderer will face criminal investigation.

Online disinformation spreads in wake of Gradacac tragedy

In the aftermath of Nermin Sulejmanovic’s horrifying live-streamed murder on Instagram, a disturbing trend of digital disinformation has emerged.


An illustration pictures shows a user holding a mobile phone displaying the ‘X’ logo in front of Twitter’s front page, in Los Angeles, California, USA, 27 July 2023. Photo: EPA-EFE/ETIENNE LAURENT

False reports claiming that Sulejmanovic had been secretly buried in a distant location from the crime scene spread across various news websites, amplifying people’s confusion and distress. These unverified claims were denied by Faruk Latifagic, director of the Tuzla Commemorative Center, who confirmed that Sulejmanovic’s body remained in their care.

Adding to the chaos, a call for help circulated on social media, ostensibly aimed at providing support for Nizama Hećimović’s daughter, who had tragically witnessed her mother’s murder live on Instagram. However, the Center for Social Work in Gradacac swiftly denounced this as false and an act of abuse. They emphasized that the child was already under appropriate care and said that any attempt to exploit her identity or share her photo constituted a criminal offence.

Online gender-based violations in Hungary, Albania

In August, online incidents in Hungary and Albania shed light on the alarming prevalence of gender-based violations in the online sphere. These cases also underscore the need for comprehensive measures to protect the rights and dignity of individuals, particularly women and girls, in the digital age.

On August 14, Hungary faced a troubling incident involving a 19-year-old man from Pusztaszer. This individual, posing as a 14-year-old boy on social media platforms, engaged in predatory behaviour targeting underage girls. He lured them into sharing explicit images and videos, all while maintaining a false identity. His cellphone was discovered to contain numerous nude pictures and sexually explicit content involving girls as young as 11.

Two concerned mothers, from Százhalombatta and Érd, contacted the authorities after discovering that their daughters had received explicit material from this deceptive individual. In addition to coercing the girls to send compromising photos, he reciprocated with explicit images of himself. Investigations revealed that the individual, who had introduced himself as “Bence” and claimed to be 14, was, in fact, an adult named Roland. Subsequent searches of his home unearthed a trove of explicit images and videos featuring girls under 14, some of which were not recent, indicating that he had been collecting and storing such material for an extended period.

In Albania, on August 30 the online media platform JOQ reported a harrowing case of a girl who fell victim to sexual abuse by her own father. While the news appropriately covered the arrest of the perpetrator, it shockingly revealed the victim’s full name, age, home address, and personal life history. This reckless disclosure added further stigmatization to the young woman’s already traumatic experience, highlighting the importance of responsible reporting and protecting the identities of victims.

Threatening graffiti targets rights activist in Belgrade

In the evolving landscape of digital violence, women across the Balkans are increasingly becoming the focal point of online harassment and abuse. This surge in gender-based digital violence not only emphasizes the need to address this issue but also shines a spotlight on the collective challenges faced by activists, human rights defenders, and individuals striving for progressive causes in the region.


Serbian medical personnel hold a Serbian flag as they protest in front of the soldiers of the NATO-led international peacekeeping Kosovo Force (KFOR) who stand guard in front of the building of the municipality in Zvecan, Kosovo, 31 May 2023. Photo: EPA-EFE/GEORGI LICOVSKI

On August 15, in a distressing incident, the facade of a building in Belgrade’s Borča neighbourhood became a canvas for misogyny and intimidation. The name and surname of Sofija Todorović, the program director of the Youth Initiative for Human Rights, were scrawled in threatening graffiti. Adding insult to injury, a sexist and misogynistic message accompanied her name. Notably, the letter “Z” appeared alongside her name, symbolizing support for Russia’s aggression against Ukraine.

This act, aimed at instilling fear and silencing activism, highlights the plight of individuals who advocate for human rights and progressive causes in Serbia. Todorović’s vocal support for Kosovo’s inclusion in the United Nations had drawn attention and controversy. Her assertion that Serbia had made commitments to facilitate Kosovo’s entry into international organizations had also ignited a contentious debate. In the wake of her advocacy, this incident serves as a reminder of the threats faced by activists in the country.

Bosnia has been covered by Elma Selimovic, Aida Trepanić and Azem Kurtic, Albania by Nensi Bogdani, Hungary by Ákos Keller-Alánt and Serbia by Tijana Uzelac & Kalina Simic.

From Algorithms to Headlines: Ethical Issues in AI-driven Reporting

In the age of the digital revolution, where artificial intelligence, AI, intertwines with our daily lives, a profound ethical dilemma has arisen. This dilemma has shaken the foundations of truth, especially in the realm of media reporting. This specter goes by many names, but we commonly know it as “fake news”.

AI significantly facilitates all aspects of people’s daily and business lives but also brings challenges. Some ethical issues arising from the development and application of AI are alignment, responsibility, bias and discrimination, job loss, data privacy, security, deepfakes, trust, and lack of transparency.

AI has had a tremendous impact on various sectors and industries, including media and journalism. It has produced tools that automate routine tasks, saving time and enhancing the accuracy and efficiency of news reporting and content creation, while also personalizing content for individual readers and improving ad campaigns and marketing strategies.

At the same time, AI poses enormous ethical challenges, such as privacy, transparency and deepfakes. A lack of transparency can lead to biased or inaccurate reporting, undermining public trust in the media. There is the question of truth: how do we discern fact from fabrication in an age when AI can craft stories that seem so convincingly real? Further, there is the matter of agency: are we, as consumers of news, becoming mere pawns in a giant game of AI-driven agendas?

There are several studies examining public perception of these issues. Research done at the University of Delaware finds that most Americans support the development of AI but also favor regulating the technology. Experiences with media and technology are linked to positive views of AI, and messages about the technology shape opinions toward it.

Most Americans (70 per cent) are worried that the technology will be used to spread fake and harmful content online. In Serbia, public attitudes towards AI have been studied within the research project Ethics and AI: Ethics and Public Attitudes towards the Use of AI.

The results showed that, although most respondents had heard of AI, 4 per cent of them knew nothing about it. Respondents with more knowledge about AI also had more positive attitudes towards its use. The study also showed that people learn about AI more through the media than through education or their profession.

To the statement, “I am afraid that AI will increasingly be used to create fake content (video, audio, photos), and that there is digital manipulation,” 15.2 per cent of respondents gave a positive answer, 62.4 per cent a negative one, and 22.4 per cent were neutral. These results suggest a need to educate the public about potential challenges and ways to prevent them.

Grappling with AI’s Dual Role in Shaping and Skewing News


Illustration: Unsplash.com

According to the Cambridge Dictionary, fake news is defined as false stories that appear to be news spread on the internet or using other media, usually created to influence political views, or as a joke. The Oxford English Dictionary defines fake news as false news stories, often of a sensational nature, designed to be widely shared or distributed to generate revenue or promote or discredit a public figure, political movement, company, etc. Fake news often has propaganda, satire, parody, or manipulation elements.

Other forms of fake news include misleading content, false context, and impostor, manipulated or fabricated content. Fake news has proliferated on the internet, especially on social media, and dominated it after the 2016 US elections. In May this year, social media posts about the death of the American billionaire George Soros turned out to be fake news.

There is active ongoing research into numerous tactics to combat fake news. Authorities in both autocratic and democratic countries are establishing regulations and legally mandated controls for social media platforms and internet search engines. Google and Facebook have introduced new measures to tackle fake news, while the BBC and the UK’s Channel 4 have established fact-checking sites. In Serbia, there is FakeNews Tracker, a portal that searches for inaccurate and manipulative information. The portal is dedicated to the fight against disinformation in media that publish content in the Serbian language.

The mission of the FakeNews Tracker is to encourage the strengthening of media integrity and fact-based journalism. Readers who spot suspicious news can report it through a form on the portal’s page; the team then checks the story and, if it proves fake, publishes an analysis. In neighbouring Croatia, a similar fact-checking media organization is Faktograf.

On the individual level, we need to develop critical thinking and be careful when sharing information. Digital media literacy and developing skills to evaluate information critically are essential for anyone searching the internet, especially for young people. Confirmation bias can seriously distort reasoning, particularly in polarised societies.

How AI is Reshaping the Balkan Media Landscape

How does AI shape fake news? AI can be used to generate, filter, and discover fake news. AI’s power to simulate reality, generate human-like texts, and even fabricate audiovisual content has enabled fake news to flourish at an unprecedented rate. There are fake news generators and fake news trackers.

A recent example of the first use was a news story, entirely generated by AI, about Serbia ordering 20,000 Shahed drones from Iran. It was then published by some major and credible media outlets; Bosnian media ran it under the headline “Serbia is arming itself”. It turned out that the AI had made a mistake: Serbia’s Deputy Foreign Minister Aleksić did visit Tehran and meet his Iranian counterpart Ali Bagheri, but there was no information about Serbia ordering Shahed drones. Another example is the deepfake, a video in which a person’s face or body has been digitally altered to make them appear to be someone else, typically used maliciously or to spread false information.

Previous victims have included Donald Trump and Vladimir Putin and, recently, the president of Serbia’s Freedom and Justice Party, Dragan Đilas. The owner of Serbia’s Pink TV, Željko Mitrović, used AI technology to create a satire in which Đilas appears as a guest on the show Utisak Nedelje and delivers fictional statements generated by deepfake technology. The problem is that the fabricated statements were shown in Pink’s evening news bulletin (Nacionalni dnevnik) without the audience being adequately informed, while the footage was running, that the speech was a satirical fabrication. This is an example of the misuse of AI.

Announcing a series of legal measures against the owner of Pink, including a lawsuit, Đilas appealed for new regulation to prohibit the editing of such recordings, arguing that they contradict the fundamental guarantees of the European Convention on Human Rights and the Personal Data Protection Act. He also warned that the practice is very dangerous, since the statements of state representatives could be falsified in the same way, endangering the entire country.

AI, with its labyrinthine algorithms and deep learning capabilities, can shape our perceptions more than any propaganda leaflet or radio broadcast of yesteryears.

AI in the media can also detect and filter fake news. Deep learning tools are now being used to source and fact-check stories in order to identify fake news. One example is Google’s search algorithm, which is designed to stop the spread of fake news and hate speech: websites are fed into the algorithm, which scans the sources and predicts the most accurate and trustworthy versions of stories.
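
To make the mechanics of such detection tools concrete, below is a minimal, illustrative sketch of a fake-news text classifier, written in Python with scikit-learn. It is emphatically not the algorithm Google or any fact-checking outlet mentioned above actually runs, and the handful of training examples are invented for the demonstration; real systems learn from large labelled corpora and combine many additional signals, such as source reputation and claim verification.

```python
# Illustrative sketch only: a toy fake-news classifier.
# Assumes scikit-learn is installed; the training data below is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training set: texts labelled 1 (fabricated) or 0 (genuine).
texts = [
    "Billionaire dies in secret accident, media silent",
    "Parliament adopts the 2024 budget after a two-day debate",
    "Country secretly orders 20,000 combat drones, sources say",
    "Central bank holds its key interest rate steady",
]
labels = [1, 0, 1, 0]

# TF-IDF turns each text into word-frequency features; logistic
# regression then learns which word patterns correlate with fabrication.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# Score a new headline. The output is a probability, not a verdict.
prob_fake = model.predict_proba(
    ["Serbia orders 20,000 Shahed drones from Iran"]
)[0][1]
print(f"Estimated probability of fabrication: {prob_fake:.2f}")
```

The point the sketch illustrates is that such classifiers produce probabilities rather than verdicts, which is why human fact-checkers remain essential in the loop.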


Illustration: Unsplash.com

Why should the Balkans care? This region, marked by its tumultuous history, fragile relationships between these countries, and diverse ethnic tapestry, is especially vulnerable. AI-driven disinformation can easily rekindle past animosities or deepen current ones. Recent incidents in Serbia, where AI-generated stories incited unnecessary panic, are poignant reminders. Furthermore, the Balkans, like the rest of the world, face a constant battle over media trust. A single AI-generated yet convincingly real misinformation campaign can erode already waning trust in genuine news outlets.

This debate raises the question: is freedom of speech more important than the potential harm of fake news and deception? I would vote for freedom of speech, but for speech that is informed and truthful.

To tackle this, we need strategies:

  1. Enhanced Media Literacy and Education: Educational institutions across Serbia and its neighbours should integrate media literacy into their curricula. As a part of school curricula and community workshops across the Balkans, media literacy can arm the population with the critical thinking tools needed in this digital age. By teaching individuals to critically evaluate sources, question narratives, and understand the basics of AI operations, we’re equipping them with tools to discern the real from the unreal.
  2. Transparent Algorithms: The algorithms behind AI-driven platforms, especially those in the media space, should be transparent. This way, experts and the public can scrutinize and understand the mechanics behind information dissemination.
  3. Ethical AI Development: AI developers in Serbia and globally need to embed ethical considerations into their creations.
  4. Regulatory Mechanisms: While over-regulation can stifle innovation, a balanced approach where AI in media is subjected to ethical guidelines can ensure its positive use.
  5. Collaborative Monitoring: Regional collaboration can create a unified front against fake news. Media outlets across the Balkans can join forces to fact-check, verify sources, and authenticate news, thereby ensuring a cleaner information environment.
  6. Public-Private Partnerships: Tech companies and news agencies can forge alliances to detect and combat fake news. With tech giants’ vast resources and advanced AI tools, such partnerships can form the first line of defense against AI-driven misinformation.

It is evident that AI will be shaping the future of media and journalism. The challenges AI poses in media reporting, particularly in the propagation of fake news, are significant but not insurmountable. Finding the proper equilibrium between maximizing AI’s advantages and minimizing its possible dangers is essential. This necessitates continuous dialogue and cooperation among journalists, tech experts, and policymakers.

With a harmonized blend of education, transparency, ethical AI practices, and collaborative efforts, Serbia and the entire Balkan region can navigate their way through the shadows of this digital cave, ensuring that truth remains luminous and inviolable.

Marina Budić is a Research Assistant at the Institute of Social Sciences in Belgrade. She is a philosopher (ethicist) who conducted a funded research project, Ethics and AI: Ethics and Public Attitudes towards the Use of Artificial Intelligence in Serbia, and presented it at the AAAI/ACM Conference on AI, Ethics, and Society (AIES) at Oxford University in 2022.

The opinions expressed are those of the author only and do not necessarily reflect the views of BIRN.
