The Ethics of Using ChatGPT in Education and Academia

The rapid advancement of artificial intelligence, AI, and especially of large language models, LLMs, such as ChatGPT, has ushered in a new era of possibilities across many sectors, including education and academia.

ChatGPT, short for Chat Generative Pre-trained Transformer, is an AI chatbot developed and trained by OpenAI, a research organisation focused on advancing AI, and was launched in November 2022. It uses deep learning techniques to generate human-like text based on the input provided.

There is a tendency for younger people to adopt new technologies more readily. AI technology provides a great opportunity, especially for younger students and researchers, to learn and increase their productivity, research output, and quality.

The potential applications of ChatGPT in education are vast, ranging from helping with tests and essays to providing personalised tutoring. This technology can better meet students’ learning needs, improving their efficiency and grades. ChatGPT can also help teachers plan lessons and grade papers and stimulate student interests.

ChatGPT has become an attractive tool for various applications in the academic world. It can generate ideas and hypotheses for research papers, create outlines, summarise papers, draft entire articles, and help with editing.

These capabilities significantly reduce the time and effort required to produce academic work, potentially accelerating the pace of scientific discovery or overcoming writer’s block, a common problem many academics face.

ChatGPT, and LLMs like it, can assist researchers in various tasks, including data analysis, literature reviews, and writing research papers. One of the most significant advantages of using ChatGPT in academic research is its ability to analyse large amounts of data quickly. These tools can process texts with extraordinary success, often in a way that is indistinguishable from human output.

The limitations of these LLMs, such as their brittleness [susceptibility to catastrophic failure], unreliability [false or made-up information], and occasional inability to make elementary logical inferences or handle simple mathematics, represent a decoupling of agency from intelligence: the systems act fluently without genuinely understanding.

But is ChatGPT a replacement for human authorship and critical thinking, or is it merely a helpful tool?


Photo by EPA/RITCHIE B. TONGO

Plagiarism, copyright, and integrity

While ChatGPT has the potential to revolutionise the way we approach education and research, its use in these fields brings many ethical issues and challenges that need to be considered. These concern plagiarism, copyright, and the integrity of academic work. ChatGPT can produce medium-quality essays within minutes, blurring the lines between original thought and automated generation.

First and foremost is the issue of plagiarism, as the model may generate text identical or similar to existing work. Plagiarism is presenting someone else’s work as one’s own, whether by copying it outright or merely rephrasing it without adding anything original.

Since ChatGPT generates text based on a vast amount of data from the Internet, there is a risk that the tool may inadvertently produce text that closely resembles existing work. Students may be tempted to use text produced by ChatGPT verbatim in their work.

This raises questions about the originality of the work produced using ChatGPT and whether it constitutes plagiarism. It is difficult to ascertain the extent of the contribution made by the AI tool versus the human researcher, which further complicates the issue of authorship, credit, and intellectual property. A related concern is that using ChatGPT may lessen critical thinking and creativity.

Plagiarism, however, predates AI, as Serbia knows. Several cases involving public officials have come to light in recent years, before ChatGPT, including plagiarism of a PhD thesis that was copied from other people’s work.

Another ethical concern relates to copyright infringement. If ChatGPT generates text that closely resembles existing copyrighted material, using such text in an academic article could potentially violate copyright laws.

Using ChatGPT or similar LLMs becomes both a moral and legal issue. The need for legislation specifically regulating the use of Generative AI represents a significant challenge for its application in practice.

Using text-generating tools in scholarly writing presents challenges to transparency and credibility. Universities, journals, and institutes must revise their policies on acceptable tools.


Photo by EPA/RITCHIE B. TONGO

To ban or not to ban?

Given the concerns raised by academics globally, many schools and universities have banned ChatGPT, although students use it anyway. Others advocate not banning ChatGPT but teaching with it carefully, arguing that cheating with one tool or another is inevitable.

Further, significant questions about copyright have emerged, especially given the broad application of ChatGPT in academic spheres, content creation, and its use by students for completing academic tasks. The questions are: Who holds the intellectual property rights for the content produced by ChatGPT, and who would be liable for copyright violation?

Many educational institutions have already prohibited the use of ChatGPT, while prominent publishers such as Elsevier and Cambridge University Press permit chatbot assistance in academic writing. However, comprehensive guidelines for using AI in science are still lacking.

The use of AI tools such as ChatGPT in academic research is currently a matter of debate among journal editors, researchers, and publishers. There is an ongoing discussion about whether citing ChatGPT as an author in published literature is appropriate.

It is also essential for academic institutions and publishers to establish guidelines and policies for using AI-generated text in academic research. Governments and relevant agencies should develop corresponding laws and regulations to protect students’ privacy and rights, ensuring that the application of AI technology complies with educational ethics and moral standards.

The legislative procedure in the EU is still ongoing, and there are estimates that it will take years before the regulations begin to be implemented in practice. Legislation that would regulate the application of ChatGPT in practice, especially in academia in Serbia, also does not exist.

Recently, researchers have been caught pasting text directly from ChatGPT into peer-reviewed papers published in prominent scientific journals, forgetting to remove its telltale phrase ‘As an AI language model…’.

For example, in the European International Journal of Pedagogics, in a paper titled Quinazolinone: Pharmacophore with Endless Pharmacological Actions, the authors pasted ChatGPT’s answer into the Methods section: “as an AI language model; I don’t have access to the full text of the article…”

This has also been the case in some PhD and MA theses.


Photo by EPA-EFE/RONALD WITTEK

Need for guidelines

Emerging in response to the challenge of plagiarism are AI-text detectors, software specifically designed to detect content generated by AI tools.
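One signal such detectors examine is how uniform a text is. The toy sketch below, a hypothetical illustration rather than any real product’s method, measures “burstiness”, the variation in sentence length, which tends to be lower in machine-generated prose; real detectors rely on far stronger signals, such as a language model’s perplexity over the text.

```python
import re
import statistics

def burstiness_score(text: str) -> float:
    """Standard deviation of sentence lengths (in words).
    Human writing tends to vary sentence length more than much
    machine-generated text; this is only a crude proxy."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths)

def looks_machine_generated(text: str, threshold: float = 4.0) -> bool:
    # Low variation in sentence length -> flag as possibly AI-generated.
    # The threshold here is arbitrary, chosen only for illustration.
    return burstiness_score(text) < threshold
```

A heuristic this simple produces many false positives, which is precisely why over-reliance on automated detectors is risky in disciplinary contexts.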

To address these concerns regarding plagiarism, some scientific publishers, such as Springer Nature and Elsevier, have established guidelines to promote the ethical and transparent use of LLMs. These guidelines advise against crediting LLMs as authors on research papers since AI tools cannot take responsibility for the work. Some guidelines call for the use of LLMs to be documented in their papers’ methods or acknowledgments sections.

To prevent plagiarism using ChatGPT or other AI language models, it is necessary to educate students on plagiarism, what it is, and why it is wrong; to use plagiarism detection tools; and to set clear guidelines for the use of ChatGPT and other resources.
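The core idea behind such plagiarism detection tools can be sketched in a few lines: compare the overlap of word n-grams between a submission and a source text. This is an illustrative toy only; real checkers index billions of documents and use far more robust matching.

```python
def ngram_overlap(a: str, b: str, n: int = 5) -> float:
    """Jaccard similarity of word n-grams between two texts.
    High overlap suggests near-verbatim reuse."""
    def ngrams(text: str) -> set:
        words = text.lower().split()
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}
    ga, gb = ngrams(a), ngrams(b)
    if not ga or not gb:
        return 0.0
    return len(ga & gb) / len(ga | gb)
```

Identical passages score 1.0 and unrelated ones 0.0; a threshold in between flags suspicious overlap for human review.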

To ensure accountable use of this AI model, it is essential to establish guidelines, filters, and rules that prevent misuse and the generation of unethical content.

Despite the concerns mentioned above, before discussing whether AI tools such as ChatGPT should be banned in academia, it is necessary to examine the challenges education currently faces, as well as the significant impact and benefits of using ChatGPT in education.

The application of ChatGPT raises various legal and ethical dilemmas. What we need are guidelines, policy and regulatory recommendations, and best practices for students, researchers, and higher education institutions.

A tech-first approach that relies solely on AI detectors has pitfalls. Mere reliance on technological solutions can inadvertently shift the focus from fostering a culture of integrity to one of suspicion, surveillance, and punishment. This underlines the importance of establishing a culture in which academic honesty is valued intrinsically, not just enforced extrinsically.

Striking the right balance between leveraging the benefits of ChatGPT and maintaining the integrity of the research process will be vital to navigating the ethical minefield associated with using AI tools in academia.

Marina Budić is a Research Assistant at the Institute of Social Sciences in Belgrade. She is a philosopher (ethicist) who works on the ethics of AI, applied and normative ethics, and bioethics.

From Algorithms to Headlines: Ethical Issues in AI-driven Reporting

In the age of the digital revolution, where artificial intelligence, AI, intertwines with our daily lives, a profound ethical dilemma has arisen. This dilemma has shaken the foundations of truth, especially in the realm of media reporting. This specter goes by many names, but we commonly know it as “fake news”.

AI significantly facilitates all aspects of people’s daily and business lives but also brings challenges. Some ethical issues arising from the development and application of AI are alignment, responsibility, bias and discrimination, job loss, data privacy, security, deepfakes, trust, and lack of transparency.

AI has tremendously impacted various sectors and industries, including media and journalism. It has produced tools that automate routine tasks, saving time and enhancing the accuracy and efficiency of news reporting and content creation, personalising content for individual readers, and improving ad campaigns and marketing strategies.

At the same time, AI poses enormous ethical challenges, such as privacy, transparency, and deepfakes. Lack of transparency leads to biased or inaccurate reporting, undermining public trust in the media. There’s the question of truth: How do we discern fact from fabrication in an age where AI can craft stories so convincingly real? Further, there’s the matter of agency: Are we, as consumers of news, becoming mere pawns in a giant game of AI-driven agendas?

There are several studies examining public perception of these issues. Research done at the University of Delaware finds that most Americans support the development of AI but also favor regulating the technology. Experiences with media and technology are linked to positive views of AI, and messages about the technology shape opinions toward it.

Most Americans (70 per cent) are worried that the technology will be used to spread fake and harmful content online. In Serbia, a study of public attitudes towards AI was conducted within the research project Ethics and AI: Ethics and Public Attitudes towards the Use of AI.

The results showed that although most respondents have heard of AI, 4 per cent of them know nothing about it. Respondents with more knowledge about AI also held more positive attitudes towards its use. The study also showed that people are informed about AI more through the media than through education or their profession.

To the statement, “I am afraid that AI will increasingly be used to create fake content (video, audio, photos), and that there is digital manipulation,” 15.2 per cent gave a positive answer, while 62.4 per cent gave a negative response (22.4 per cent are neutral about this question). These results suggest a need to educate the public about potential challenges and ways to prevent them.

Grappling with AI’s Dual Role in Shaping and Skewing News


Illustration: Unsplash.com

According to the Cambridge Dictionary, fake news is defined as false stories that appear to be news, spread on the internet or using other media, usually created to influence political views or as a joke. The Oxford English Dictionary defines fake news as false news stories, often of a sensational nature, designed to be widely shared or distributed to generate revenue or to promote or discredit a public figure, political movement, company, etc. Fake news often has elements of propaganda, satire, parody, or manipulation.

Other forms of fake news include misleading content, false context, and impostor, manipulated, or fabricated content. Fake news has increased on the internet, especially on social media. After the 2016 US elections, fake news dominated the internet. In May this year, posts on social media about the death of the American billionaire George Soros turned out to be fake news.

There is ongoing active research on numerous tactics to combat fake news. Authorities in both autocratic and democratic countries are establishing regulations and legally mandated controls for social media platforms and internet search engines. Google and Facebook introduced new measures to tackle fake news, while the BBC and the UK’s Channel 4 have established fact-checking sites. In Serbia, there is FakeNews Tracker, a portal that searches for inaccurate and manipulative information. The portal is dedicated to the fight against disinformation in media that publish content in the Serbian language.

The mission of the FakeNews Tracker is to encourage the strengthening of media integrity and fact-based journalism. When you see suspicious news, you can report it through the form on their page, after which they check the news. If they find it fake, they publish an analysis. In neighbouring Croatia, a similar fact-checking media organization is Faktograf.

On the individual level, we need to develop critical thinking and be careful when sharing information. Digital media literacy and developing skills to evaluate information critically are essential for anyone searching the internet, especially for young people. Confirmation bias can seriously distort reasoning, particularly in polarised societies.

How AI is Reshaping the Balkan Media Landscape

How does AI shape fake news? AI can be used to generate, filter, and discover fake news. AI’s power to simulate reality, generate human-like texts, and even fabricate audiovisual content has enabled fake news to flourish at an unprecedented rate. There are fake news generators and fake news trackers.

A recent example of the first use was the news that Serbia had ordered 20,000 Shahed drones from Iran, a story entirely generated by AI. It was then published by some major and credible media outlets; Bosnian media ran it under the headline “Serbia is arming itself”. It turned out the AI had made a mistake: Serbia’s Deputy Foreign Minister Aleksić did visit Tehran and met his Iranian counterpart Ali Bagheri, but there was no information about Serbia ordering Shahed drones. Another example is the deepfake, a video of a person whose face or body has been digitally altered to appear to be someone else, typically used maliciously or to spread false information.

Previously, the victims included Donald Trump and Vladimir Putin, and recently, Serbia’s Freedom and Justice Party president, Dragan Đilas. The owner of Serbia’s Pink TV, Željko Mitrović, created a satire with the help of AI technology in which Đilas appears as a guest on the show Utisak Nedelje and delivers fictional statements generated by deepfake technology. The problem is that the fabricated statements were shown in Pink’s evening news bulletin (Nacionalni dnevnik) without the audience being adequately informed, while the clip was running, that the speech was a satirical fabrication. This is an example of the misuse of AI.

Announcing a series of legal measures against the owner of Pink, including a lawsuit, Đilas appealed for the new regulation to prohibit the editing of such recordings because they contradict the fundamental guarantees of the European Convention on Human Rights and the Personal Data Protection Act. He also pointed out that this is very dangerous and that the statements of state representatives can be falsified in the same way, endangering the entire country.

AI, with its labyrinthine algorithms and deep learning capabilities, can shape our perceptions more than any propaganda leaflet or radio broadcast of yesteryears.

AI in the media can also detect and filter fake news. Deep learning AI tools are now being used to source and fact-check a story to identify fake news. One example is Google’s Search Algorithm, designed to stop the spread of fake news and hate speech. Websites are fed into an intelligent algorithm to scan the sources and predict the most accurate and trustworthy versions of stories.
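Google’s actual systems are proprietary, but the basic idea of scoring content by learned word statistics can be sketched with a toy bag-of-words Naive Bayes classifier. Everything here, including the class name and training headlines, is invented for illustration; production systems add source reputation, claim matching, and much larger models.

```python
import math
from collections import Counter

class NaiveBayesNewsFilter:
    """Toy classifier that labels headlines 'fake' or 'real'
    from word frequencies seen in labelled training examples."""

    def __init__(self):
        self.word_counts = {"fake": Counter(), "real": Counter()}
        self.label_counts = Counter()

    def train(self, headline: str, label: str) -> None:
        self.label_counts[label] += 1
        self.word_counts[label].update(headline.lower().split())

    def predict(self, headline: str) -> str:
        vocab = len(set(self.word_counts["fake"]) | set(self.word_counts["real"]))
        scores = {}
        for label in ("fake", "real"):
            total = sum(self.word_counts[label].values())
            # Log prior plus log likelihoods with add-one smoothing.
            score = math.log(self.label_counts[label] / sum(self.label_counts.values()))
            for word in headline.lower().split():
                score += math.log((self.word_counts[label][word] + 1) / (total + vocab))
            scores[label] = score
        return max(scores, key=scores.get)
```

After training on a handful of labelled headlines, the filter assigns each new headline to whichever label makes its words more probable.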


Illustration: Unsplash.com

Why should the Balkans care? This region, marked by its tumultuous history, fragile relationships between these countries, and diverse ethnic tapestry, is especially vulnerable. AI-driven disinformation can easily rekindle past animosities or deepen current ones. Recent incidents in Serbia, where AI-generated stories incited unnecessary panic, are poignant reminders. Furthermore, the Balkans, like the rest of the world, face a constant battle over media trust. A single AI-generated yet convincingly real misinformation campaign can erode already waning trust in genuine news outlets.

This debate raises the question: Is freedom of speech more important than the potential harm of fake news and deception? I would vote for freedom of speech, but for speech that is informed and veridical.

To tackle this, we need strategies:

  1. Enhanced Media Literacy and Education: Educational institutions across Serbia and its neighbours should integrate media literacy into their curricula. As a part of school curricula and community workshops across the Balkans, media literacy can arm the population with the critical thinking tools needed in this digital age. By teaching individuals to critically evaluate sources, question narratives, and understand the basics of AI operations, we’re equipping them with tools to discern the real from the unreal.
  2. Transparent Algorithms: The algorithms behind AI-driven platforms, especially those in the media space, should be transparent. This way, experts and the public can scrutinize and understand the mechanics behind information dissemination.
  3. Ethical AI Development: AI developers in Serbia and globally need to embed ethical considerations into their creations.
  4. Regulatory Mechanisms: While over-regulation can stifle innovation, a balanced approach where AI in media is subjected to ethical guidelines can ensure its positive use.
  5. Collaborative Monitoring: Regional collaboration can create a unified front against fake news. Media outlets across the Balkans can join forces to fact-check, verify sources, and authenticate news, thereby ensuring a cleaner information environment.
  6. Public-Private Partnerships: Tech companies and news agencies can forge alliances to detect and combat fake news. With tech giants’ vast resources and advanced AI tools, such partnerships can form the first line of defense against AI-driven misinformation.

It is evident that AI will be shaping the future of media and journalism. The challenges AI poses in media reporting, particularly in the propagation of fake news, are significant but not insurmountable. Finding the proper equilibrium between maximizing AI’s advantages and minimizing its possible dangers is essential. This necessitates continuous dialogue and cooperation among journalists, tech experts, and policymakers.

With a harmonized blend of education, transparency, ethical AI practices, and collaborative efforts, Serbia and the entire Balkan region can navigate their way through the shadows of this digital cave, ensuring that truth remains luminous and inviolable.

Marina Budić is a Research Assistant at the Institute of Social Sciences in Belgrade. She is a philosopher (ethicist) who conducted a funded research project, Ethics and AI: Ethics and Public Attitudes towards the Use of Artificial Intelligence in Serbia, and presented the project at the AAAI/ACM Conference on AI, Ethics, and Society (AIES) at Oxford University in 2022.

The opinions expressed are those of the author only and do not necessarily reflect the views of BIRN.

Kosovo Media Criticise Call for State Regulation of Online Content

The Press Council of Kosovo, PCK, and the Association of Journalists of Kosovo, AJK, on Wednesday voiced concern over the proposed regulation of online media content under the Law on the Independent Media Commission, IMC, deeming it a violation of “international rules of journalism”.

The IMC is an independent state body that regulates, manages, and oversees TV broadcasting in Kosovo, but it has now said it wants video production on local websites added to its jurisdiction. Print media are already monitored by the PCK.

The PCK is a self-regulatory body formed by the print media in Kosovo, which is recognized by the Assembly of Kosovo through the Law on Defamation and Insult. Rulings that the PCK issues for parties and the media are “respected and valued by local courts in cases where they decide for defamation and insult”.

“Each of the media should be held accountable for their actions before state bodies, based on relevant laws, but initially no one can better assess their ethics than the media themselves, or professionals of the field,” the PCK and AJK said in a joint press release.

The reaction comes after the IMC head, Xhevat Latifi, said a new law on the IMC should include audio-visual content of websites within its auspices.

Latifi said this at a presentation of the IMC’s Annual Report for 2020 to parliament’s Committee on Local Government, Regional Development and Media on Tuesday. “We are witnessing a toxic state of media vocabulary in Kosovo,” Latifi said, justifying the initiative.

Later he told BIRN that the initiative was not his own and explained it as “concern of society”.

“I have stated that portals which deal with audio-visual production would best be included in the new law; not all portals, only these which deal with audio-visual parts. It is only a request. We are only measuring the public, their concerns. I have presented it as a concern of society, we cannot say this is my opinion or IMC position,” he said.

The Press Council and journalists’ associations deem the idea dangerous.

“Initiatives to control and evaluate ethics for print and online media by a state organisation are harmful and do not help the media and journalists,” their press release said.

North Macedonia Threatens to Block Telegram Over Pornographic Picture Sharers

North Macedonia’s authorities on Thursday threatened to block the messaging app Telegram over the activities of a group of more than 7,000 users who have been sharing and exchanging explicit pictures and videos of girls – some of whom are underage.

Some users even wrote the names and locations of the girls. Others have shared photoshopped images taken from their Instagram profiles.

Prime Minister Zoran Zaev said the authorities would not hesitate to block Telegram if they had to – and if the messaging app didn’t permanently close this and similar groups.

“If the Telegram application does not close Public Room, where pornographic and private content is shared by our citizens, as well as child pornography, we will consider the option of blocking or restricting the use of this application in North Macedonia,” Zaev wrote in a Facebook post.

The group, called Public Room, was first discovered in January 2020. The authorities then said that they had found the organisers and had dealt with the matter.

However, a year later, the group has re-emerged, sparking a heated debate in North Macedonia over police inaction.

Several victims whose pictures and phone numbers were hacked and used have complained about what happened to them – and about what they see as a lack of action on the part of the authorities in preventing it.

“I started receiving messages and calls on my cell phone, Viber, WhatsApp, Messenger and Instagram,” one 28-year-old victim, Ana, recalled in an Instagram post.

“I didn’t know what was happening or where it was coming from. The next day, I received a screenshot of my picture, which was not only posted in Public Room but shared elsewhere. I didn’t know what to do. I panicked, I was scared, I’d never experienced anything like that,” she added.

But the woman said that when she told the police about what happened, they told her they couldn’t do much about it, since she wasn’t a minor.

North Macedonia’s Minister of Interior, Oliver Spasovski, said on Thursday that the police had arrested four people in connection with the revived group and had launched a full-scale investigation.

“We have identified more people who will be detained in the coming period, so we can reach those who created this group, and also those that are abusing personal data within the group. We are working on this intensively with the Public Prosecutor,” Spasovski told the media.

However, following closure of the group on Thursday, there have been reports that some of its users are opening new groups where they continue the same practices.

Prime Minister Zaev said users of this and similar groups needed to heed a final warning.

“I want to send a message to all our citizens who are sharing pictures and content in that group [Public Room] … to stop what they are doing and leave the group,” said Zaev on Facebook.

“At the end of the day, we will get the data, you will be charged and you will be held accountable for what you do,” he concluded.

COVID’s Toll on Digital Rights in Central and Southeastern Europe

The report presents an overview of the main violations of digital rights in Bosnia and Herzegovina, Croatia, Hungary, Kosovo, Montenegro, North Macedonia, Romania and Serbia between January 31 and September 30, 2020, and makes a series of recommendations for authorities in order to curb such infringements during future social crises.

A first report compiled by BIRN, containing preliminary findings, showed a rise in digital rights violations in Central and Southeastern Europe during the pandemic, with over half of cases involving propaganda, disinformation or the publication of unverified information.

The global public health crisis triggered by the coronavirus exposed anew the failure of states around the world to provide a framework that would better balance the interests of safety and privacy. Instead, the report documents incidents of censorship, fake news, security breaches and concentrations of information.

More than 200 pandemic-related violations tracked

At the onset of the pandemic, numerous violations of digital rights were observed – from violations of the privacy of persons in isolation to manipulation, dissemination of false information and Internet fraud.

BIRN and Share Foundation documented 221 violations in the context of COVID-19 during the eight-month monitoring period, the largest number coming during the initial peak of the pandemic in March and April – 67 and 79 respectively – before slowly declining.

The countries with the highest number of violations to date are Serbia, with 46, and Croatia, with 44.

The most common violation – accounting for roughly half of all cases – was manipulation in the digital environment caused by news sites that published unverified and inaccurate information, and by the circulating of incomplete and false data on social media.

This can be explained in large measure by the low level of media literacy in the countries of the region, where few people actually check the news and information provided to them, while the media themselves often publish unverified information.

The most common targets of digital rights violations were citizens and journalists. However, both of these groups were frequently also among the perpetrators.

Contact tracing apps: Useful or not?

The debate about the use of contact-tracing apps as a method of combating the spread of COVID-19 was one of the most important discussions in Croatia and North Macedonia.

At the very beginning of the pandemic, the Croatian government led by the conservative Croatian Democratic Union, HDZ, proposed a change to the Electronic Communications Act under which, in extraordinary situations, the health minister would request from telecommunications companies the location data of users.

Similarly, Macedonian health authorities announced they were looking to use “all tools and means” to combat the virus, with North Macedonia among the first countries in the Western Balkans to launch a contact-tracing app on April 13.

Developed and donated to the Macedonian authorities by Skopje-based software company Nextsense, the StopKorona! app is based on Bluetooth distance-measuring technology and stores data locally on users’ devices, while exchanging encrypted, anonymised data relevant to the infection spread for a limited period of 14 days. According to data privacy experts, the decentralised design guaranteed that data would be stored only on the devices that run the app, unless users voluntarily submit that data to health authorities.
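The internals of StopKorona! are not public, but the decentralised pattern described above, familiar from DP-3T-style designs, can be sketched: a random daily key never leaves the phone, rotating ephemeral IDs derived from it are broadcast over Bluetooth, and locally stored sightings are deleted after 14 days. The function names below are hypothetical.

```python
import hashlib
import os
from datetime import date, timedelta

RETENTION_DAYS = 14  # sightings relevant to infection spread are kept 14 days

def daily_key() -> bytes:
    # Fresh random key per day; it stays on the device unless the
    # user voluntarily uploads it after a positive diagnosis.
    return os.urandom(32)

def ephemeral_id(key: bytes, interval: int) -> bytes:
    # Rotating Bluetooth identifier derived from the daily key, so
    # observers cannot link successive broadcasts to one person.
    return hashlib.sha256(key + interval.to_bytes(4, "big")).digest()[:16]

def prune(observations: dict, today: date) -> dict:
    # Delete locally stored sightings older than the retention window.
    cutoff = today - timedelta(days=RETENTION_DAYS)
    return {d: ids for d, ids in observations.items() if d >= cutoff}
```

Because matching happens on the device against keys voluntarily published by diagnosed users, no central authority ever sees a contact graph, which is the privacy property the experts quoted above point to.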

Croatia launched its own app at the end of July, but by late August media reports said the Stop COVID-19 app had been downloaded by fewer than two per cent of mobile phone users in the country. The threshold for it to be effective is 60 per cent, the reports said.

Key worrying trends mapped

Illustration: Olivia Solis

Bosnia and Herzegovina saw a number of problems with personal data protection, free access to information and disinformation. In terms of disinformation, people were exposed to a variety of false and sometimes outlandish claims, including conspiracy theories about the origin of the coronavirus, its spread by plane and various miracle cures.

Conspiracy theories, like those blaming the spread of the virus on 5G mobile networks, flourished online in Croatia too. One person in Croatia destroyed their Wi-Fi equipment, believing it was 5G.

In Hungary, fake news about COVID-19 arrived even before the virus itself, said journalist Akos Keller Alant, who monitored the digital environment in Hungary.

Several clickbait fake news sites published articles about COVID-19 victims a month before Hungary’s first confirmed case. The Anti-Cybercrime Unit of the Hungarian police arrested several people for spreading fake news, starting in early February when police raided the operators of a network of fake news sites.

In Kosovo, online media emerged as the biggest violators of digital rights by publishing unverified and false information as well as personal health information. Personal data rights were also violated by state institutions and public figures.

In Montenegro, the most worrying digital rights violations concerned privacy and personal data protection of those infected with the coronavirus or those forced to self-isolate.

The early days of the pandemic, when Montenegro was among the few countries that could claim to have kept a lid on the virus, was a rare moment of social and political consensus in the country about how to respond, said Tamara Milas of the Centre for Civic Education in Montenegro, an NGO.

The situation changed, however, when the government was accused of the gross violation of the right to privacy and the right to the protection of personal data.

Like its Western Balkan peers, North Macedonia was flooded with unverified information and claims shared online with regard to the pandemic. Some of the most concerning cases included false claims about infected persons, causing a stir on social media.

In Romania, the government used state-of-emergency powers to shut down websites – including news and opinion sites – accused of spreading what authorities deemed fake news about the pandemic, according to BIRN correspondent Marcel Gascon, who monitors digital rights violations in Romania.

In Serbia, a prominent case concerned a breach of security in the country’s central COVID-19 database. For eight days, the login credentials for the database, Information System COVID-19, were publicly available on the website of a public health body.

In another incident, the initials, age, place-of-work and personal address of a person infected with the virus were posted on the official webpage of the municipality of Sid in western Serbia as well as on the town’s social media accounts.

In the report, BIRN and Share Foundation conclude that technology, especially in a time of crisis, should not be seen as the solution to complex issues, be that protection of health or upholding public order and safety. Rather, technology should be used to the benefit of citizens and in the interest of their rights and freedoms.

When intrusive technologies and regulations are put in place, it is hard to take a step back, particularly in societies with weak democratic institutions, the report states. Under such circumstances, the measures applied in one crisis for the protection of public health may one day be repurposed and used against other “social plagues”, ultimately leading to reduced human rights standards.

To read the full report click here. For individual cases, check our regional database, developed together with the SHARE Foundation.

Facebook-Partnered Croatian Fact-Checkers Face “Huge Amount of Hatred”

A leading Croatian fact-checking site, which has partnered with Facebook to weed out misinformation on the platform, says it is facing “a huge amount of hatred” for the work it does, work that the site says has increased dramatically since the onset of the COVID-19 pandemic.

Croatian politicians, websites and users of social media have all taken aim at Faktograf in recent months, accusing it of censorship.

A member of the International Fact-Checking Network, IFCN, since 2017 and the only Croatian media specialised in verifying the accuracy of claims made in public, Faktograf says anti-vaccination groups are particularly sensitive to the debunking of fake news.

Since the onset of COVID-19, “The amount of misinforming content circulating on the internet has drastically increased as people spend more time on the internet, looking for answers to questions that bother them and trying to understand the sudden changes they see in the world around them,” said Faktograf editor-in-chief Petar Vidov.

“It’s mentally stressful to watch all day long how many people spread such misinformation, how fast such things are spreading, and then after all that, you get… a huge amount of hatred, threats, directed against Faktograf because of the work we do.”

“More or less, it is going well, but the problem is that there is that certain number of people you will never reach because they are simply grounded in their own beliefs for a long time, they reject argumented dialogue,” Vidov told BIRN in an interview.

So-called ‘anti-vaxxers’ perceive the debunking of fake news “as a threat to their agenda,” he said.

Falsely accused of ‘spying’ and deleting content



Founded in 2015 by the Croatian Journalists’ Association and democracy advocates GONG, Faktograf last year became one of more than 20 organisations in 14 European Union countries partnering with Facebook in reviewing and rating the accuracy of articles posted on the social networking giant.

Social media users, online platforms and websites in Croatia say Faktograf is effectively censoring their opinions, a claim Vidov said was the result of a “misunderstanding of Facebook’s partnership with independent fact-checkers.”

“We do our job, we are debunking those inaccurate claims that spread in the public space and therefore we have our editorial policy, we determine what we will do,” he told BIRN.

“We prioritise things that endanger human health and that reach a large number of people.”

“Under the terms of that partnership, after we check some content and mark it as inaccurate, partially inaccurate or misinforming in some other way, for example through a fake headline, Facebook should reduce the reach of such content.”

Vidov stressed, however, that Faktograf had nothing to do with Facebook’s own removal of a wave of inaccurate content since the outbreak of the novel coronavirus at the start of the year.

“Faktograf has nothing to do with these removals, we are not working to remove that content, nor do we know which content is being removed.”

“However, people have developed this assumption that it is Faktograf that spies on their profiles and deletes their content from it.” Such assumptions are fuelling “unfounded” hostility towards Faktograf, he said.

Anti-vaxxers promoting conspiracy theories


Graffiti in Croatia’s capital reading “Stop 5G”. Photo: BIRN.

That has not stopped the likes of 34-year-old Croatian MP Ivan Pernar, who opposes vaccination, from taking to Facebook and YouTube on April 26 to criticise Faktograf, saying the site “determines what is true and censors those who think differently.”

In May, there were a number of small protests in Croatia calling for the suspension of all government measures to tackle the spread of COVID-19, an end to “violations of free speech” and a halt to the installation of a 5G wireless network “until it is proven not harmful.”

5G has become the focus of a widely-shared conspiracy theory linking the technology to the spread of the coronavirus. Faktograf has written extensively about the conspiracy theory, and on Sunday, when another small protest against 5G was held in Zagreb, one of those present held a banner describing those working for the site as “mercenaries.”

“At the very beginning of the pandemic, there was a lot of information about fake drugs [for coronavirus], theories about how you can test yourself for coronavirus and so on – misinformation that spread primarily out of ignorance, out of the people’s need to get some orientation in all this,” Vidov said.

“But very quickly, conspiracy theories have taken over the story.”

“What we now mostly see is misinformation directed against vaccines,” he said, describing the anti-vaxxer movement in Croatia and the Balkan region as “quite strong”.

“They took over the narrative about the virus and managed to form it in the direction of a big conspiracy of global elites who want to chip the entire population to be controlled, and will do so through a vaccine against coronavirus.”

Fact-checkers playing catch-up



Vidov, who previously worked at online news site Index.hr, said those who spread misinformation are usually motivated by money.

“People simply make money from it because they generate traffic which they then monetise through advertising services like Google AdSense and the like,” he said. They themselves are rarely the originators of such narratives, but simply pick them up “most often from propagandists trying to achieve something.”

“The problem is that this misinformation, no matter how it is created… enters the system in which there are a large number of people who want to make money on this type of content and then they expand it and actually increase the reach of that damage, of that propaganda.”

Those who end up believing the misinformation are not “actors” but “victims” in the process, he said.

“Our education systems have not educated people well enough to be consumers and readers of media content, which is why we have a problem with the fact that unfortunately, a large number of people are not able to spot the difference between a credible and a non-credible source of information”.

The low level of public trust in domestic as well as international bodies is another major factor, Vidov argued.

Fact-checkers, he said, have a tough task in front of them.

“It is frustrating that it takes a lot more time to debunk inaccurate information than it takes to place any misinformation, no matter how stupid and unconvincing it may be.”

Hiljade.kamera.rs: Community Strikes Back Against Mass Surveillance

Serbian citizens have launched the website hiljade.kamera.rs as a response to the deployment of state-of-the-art facial recognition surveillance technology in the streets of Belgrade. Information regarding these new cameras has been shrouded in secrecy, as the public was kept in the dark on all the most important aspects of this state-led project.

War, especially in the past hundred years, has propelled the development of exceptional technology. After the Great War came the radio, decades after the Second World War brought us McLuhan’s “global village” and Moore’s law on historic trends. Warfare itself has changed too – from muddy trenches and mustard gas to drone strikes and malware. Some countries, more than others, have frequently been used as testing grounds for different kinds of battle.

Well into the 21st century, Serbia still does not have a strong privacy culture, which has been left in the shadows of past regimes and widespread surveillance. Even today, direct police and security agencies’ access to communications metadata stored by mobile and internet operators makes mass surveillance possible. 

As appearances matter most, control over the flow of information is a key component of power in the age of populism. We have recently seen various developments in this context, such as Twitter shutting down around 8,500 troll accounts pumping out support for the ruling Serbian Progressive Party and its leader, the country’s President Aleksandar Vucic. These trolls are also frequently used to attack political opponents and journalists who expose the shady dealings of high-ranking public officials. Reporters Without Borders and Freedom House have noted a deterioration in press freedom and democracy in the Balkan country.

However, a new threat to human rights and freedoms in Serbia has emerged. In early 2019, the Minister of Interior and the Police Director announced that Belgrade would receive “a thousand” smart surveillance cameras with face and license plate recognition capabilities, supplied by the Chinese tech giant Huawei. The governments of Serbia and China have been engaged in “technical and economic cooperation” since 2009, when they signed their first bilateral agreement. Several years later, a strategic partnership was forged between Serbia’s Ministry of Interior and Huawei, paving the way for the implementation of the project “Safe Society in Serbia”. Over the past several months, new cameras have been widely installed throughout Belgrade.

This highly intrusive system has raised questions among citizens and human rights organisations, who have pointed to Serbia’s interesting history with surveillance cameras. Sometimes these devices have conveniently worked and their footage has somehow leaked to the public; in other cases, they have not worked, or recordings of key situations have gone missing, just as conveniently. Even though the Ministry was obliged by law to conduct a Data Protection Impact Assessment (DPIA) of the new smart surveillance system, it failed to fulfil the legal requirements, as civil society organisations and the Commissioner for Personal Data Protection have warned.

The use of such technology to constantly surveil the movements of all citizens, who are thereby treated as potential criminals, runs counter to the fundamental principles of necessity and proportionality required by domestic and international data protection standards. In such circumstances, with no public debate or transparency whatsoever, the only remaining option is a social response, as reflected in the newly launched website.

“Hiljade kamera” (“Thousands of Cameras”) is a platform started by a community of individuals and organisations who advocate for the responsible use of surveillance technology. Its goals are citizen-led transparency and holding officials accountable for their actions, pursued by mapping cameras and speaking out publicly on the topic. The community has recently started tweeting out photos of cameras in Belgrade alongside the hashtag #hiljadekamera and encouraged others to do so as well.

The Interior Ministry has yet to publish a reworked and compliant Data Protection Impact Assessment (DPIA), but the installation of cameras continues in dubious legal circumstances.

Bojan Perkov is a researcher at SHARE Foundation. 


North Macedonia Web Portals Hustle for Election Ads Cash

The prospect of making a quick buck from budget money intended for election advertising has encouraged a staggering 235 web portals, many with obscure backgrounds and identities, to register at the State Electoral Commission, DIK, for a slice of the pie.

BIRN’s analysis of the DIK list of web portals, published in Macedonian, reveals that many have questionable professional standards and unclear backgrounds and ownership.

Of the 235 web portals that have registered, 92 do not reveal who their journalists and editors are. Of those 92, which effectively hide their staff, 44 publish political news; the rest cover other topics, or have no clear theme.

Most of the portals that did disclose their journalistic teams are run by just one or two people. In some cases, the same team of journalists works across several portals.

There is no data about the owners or founders of 19 of the portals that have applied for state cash. They are registered in the United States, Panama, or in other places, by companies that conceal their true owners.

Some 50 of the portals are not even registered under the .mk web domain. Some resemble blogs rather than news sites, with domains such as .live, .info or just .com.

The April 12 general elections are the second in North Macedonia in which the state budget will cover political party adverts in the media.

North Macedonia introduced this practice for last year’s presidential elections, when 83 portals registered for the cash.

The law allows parties to apply for up to two euros for every voter who voted for them in the last elections. The state plans to reserve about 3.6 million euros for this purpose.

While it is expected that most of this sum will be spent on ads on TV and radio and in newspapers, the rules also allow a party or alliance to spend up to 15,000 euros for promotional purposes on a single portal.

The more portals a publisher registers, the bigger its potential gain.

The head of the State Anti-Corruption Commission, Biljana Ivanovska, was among the first to warn about the problems arising from these loopholes.

In an interview with BIRN in January, conducted in Macedonian, she said only web portals that are already registered with the National Council for the Media, SEMM, should be allowed onto the DIK list as well.

At the moment, the SEMM register contains 101 web portals that have disclosed their ownership and journalistic teams, as well as their price lists. They have also pledged to respect professional and ethical codes.

But when parliament last made changes to the electoral law, last month, it ignored this advice and left the situation as is, meaning that any web portal can be registered without scrutiny.

More than half of all the web portals that have registered for part of the state advertising cash are not on the SEMM list.


In analysing the current DIK register, BIRN observed other curiosities. In a few cases, for example, a single publisher has registered several versions of the same portal.

The publisher Prva Republika [First Republic], for example, has registered its site “Republika” three times, counting the Macedonian, Albanian and English versions of the same site as three separate sites. The website of TV 21, which airs in Albanian and Macedonian, is similarly registered twice.

The DIK register shows a similar trend in several smaller towns, like Ohrid, Kriva Palanka, Delcevo, Valandovo and others, where the same local publishers have registered more than one web portal.

To maximize potential profits, some of the big national TV stations have also registered their websites separately from their TV stations. Some newspapers and many local radio and TV stations have done the same.

Apart from news portals, the list shows that sites covering sports, lifestyle and the automotive industry have also been registered.

Social Media a Help and Hindrance in Balkan Coronavirus Fight

Serbia has no confirmed cases of coronavirus yet, but on Tuesday a WhatsApp voice message began doing the rounds on social media claiming several people had already died from the virus in the capital, Belgrade.

“Doctors are strictly forbidden to talk about the virus,” a woman is heard saying in the message, which was published on several Serbian news portals.

A similar thing happened in neighbouring Croatia, where another WhatsApp message contained the claim that the first case had been recorded in the coastal city of Split, before authorities actually confirmed the first case in the capital, Zagreb, on February 25.

With its epicentre in Italy, Europe is grappling to contain the spread of Covid-19. In the Balkans, cases have been confirmed in Croatia, North Macedonia and Romania.

Governments and concerned experts and citizens in the region and elsewhere are taking to the Internet, social media and mobile phone messages to spread information.

But likewise they face what Italy’s foreign minister, Luigi Di Maio, has called an “infodemic” of false information and scaremongering in the media and online.

In Serbia, the interior ministry said on Wednesday that its Department of High-Tech Crime was trying to identify the woman who made the WhatsApp recording claiming that coronavirus had already claimed its first victims in the country.

In Albania, prosecutors on February 24 announced investigations into what they called the “diffusion of fake information or announcements in any form aimed at creating a state of insecurity and panic among the people.”

Scientist: Behaviour ‘not in line with magnitude of danger’

Serbia’s Health Ministry has launched a website dedicated to the coronavirus outbreak, regularly posting updates, news, advice, contacts and warnings for those coming to Serbia from affected areas.

On Wednesday in Moldova, the government began sending mobile phone text messages telling Moldovans what symptoms to look out for and what steps they should take if they suspect they may have contracted the respiratory virus.

“Take care of your health. Call your family doctor immediately if you have a fever or cough. If you have returned from areas with Coronavirus and feel ill, call 112,” the SMS reads.

Croatian scientist Igor Rudan of the Centre for Global Health Research at the University of Edinburgh, Scotland, said on Wednesday the state of panic in Europe did not reflect the level of threat posed by Covid-19.

Even if the virus were to spread throughout Croatia, he wrote on Facebook, “the casualties should be at least roughly comparable with the number of cases of death from the flu or with the number of road traffic fatalities during the same period.”

“This panic is triggered by the persistent media coverage… rather than by generally accepted and scientifically-based knowledge about the coronavirus,” Rudan wrote. 

“If you started behaving differently than you did during the winter months, during the flu epidemic, for example, collecting food supplies or wearing masks on the streets, this is not the kind of behaviour that reflects the actual magnitude of the danger.”

The post has been shared 2,500 times.

The Covid-19 outbreak originated in the Chinese city of Wuhan in late December. 

According to the World Health Organisation, there are now more than 82,000 confirmed cases in 45 countries.

In the Balkans, there are three confirmed cases in Croatia, one in North Macedonia and one in Romania. More than 180 people are under supervision in Montenegro. In Serbia, 20 people have tested negative for the virus, while several Serbian citizens who recently travelled to affected areas are in quarantine in Belgrade and the nearby town of Sabac, the public broadcaster reported.
