The violations recorded in the second half of October show that routine digital violations are not disappearing. Hate speech, discrimination and war-mongering flourish in Bosnia’s digital environment, and, following the introduction of a new decree by the President of the Serb-led entity, Republika Srpska, digital violations have accelerated further.
Opposition primaries in Hungary and local elections in North Macedonia, where ruling parties suffered setbacks, also caused a rise in violations, triggered by a climate of political antagonism.
Finally, in Serbia and Romania, unresolved issues at home resulted in the resurgence of the same recurring violations.
Hate Speech and War-mongering Rhetoric Poison Bosnia
With 45 violations recorded in our database out of a total of 101 cases between August 1, 2020, and August 31, 2021, hate speech and discrimination remain the most widespread form of violation in the Bosnian digital environment.
Following recent developments, including the entry into force of a presidential decree from Zeljka Cvijanovic, head of Bosnia’s Serb-dominated entity, Republika Srpska, rejecting compliance with a state law banning the denial of genocide and war crimes, there has been a further acceleration in hate speech and war-mongering rhetoric in the country.
Two hate speech and war-mongering incidents were recorded in the second half of October. After the online news outlet Istraga released a video on Twitter on October 22, several comments inciting ethnic hatred and war propaganda appeared. The footage showed Dragan Lukač, RS Minister of Interior, with members of the RS special forces doing exercises on Jahorina.
The second case involved Muhamed Velic, a Muslim cleric in Sarajevo, who called for war on his Facebook page, garnering 2,200 likes and 60 shares. The post, published on October 16 and later removed, said: “Ammunition in Konjic and Gorazde! Howitzers in Travnik! RPGs in Hadžići! Etc. Trust yourself and your hooves! They know that this is not a joke and that Bosnian might is not a small cat!” The message, which was then shared on Twitter by Bosnia’s consul in Frankfurt, Admir Atović, forced the country’s Foreign Ministry to intervene and seek urgent clarifications from him.
Hungarian Opposition Primaries Prompt Flow of Digital Violations
The 2021 Hungarian opposition primary, held in two rounds between September 18 and October 16, featured a harsh political confrontation between opposition candidates and the ruling Fidesz party. The stakes were high: to choose the challenger against Prime Minister Viktor Orbán in next year’s parliamentary elections. After the second round of the primary, voters elected Peter Marki-Zay, the conservative mayor of Hódmezővásárhely, to lead the opposition into the 2022 parliamentary election.
Before and during the primaries, a series of cyberattacks was carried out. The opposition asked Ferenc Frész, a senior cyber defence expert, to investigate the causes and origins of these DDoS attacks. The aftermath of the second round also proved a breeding ground for online violations. Three independent media outlets were attacked as they announced the primary election results. The pro-government website Origo was also repeatedly hit by DDoS attacks between October 22 and 24, making the site inaccessible; internal investigations suggested the attacks came from outside, by unknown individuals. In the final days of the primaries, strange advertisements, apparently promoting the main opposition candidate, appeared in the news feeds of several Hungarian Facebook users, claiming that Márki-Zay was building a “new Fidesz” party. The messages quoted and distorted many of his statements on subjects like the corporal punishment of children.
Another incident recorded in our database involved the temporary suspension and unavailability of Valasz.hu, a website storing the complete archive of Heti Válasz, a conservative weekly established by Fidesz in 2001 and shut down in June 2018, after Lajos Simicska, a business magnate close to Orban, bought its publisher. As reported earlier by BIRN, Hungary remains a critical country in terms of the role of genuinely independent media. Members of Orban’s closest circle now own almost 88 media outlets.
Interference in North Macedonia’s Election Alleged, COVID Certificates Hacked
In the second half of October, political confrontation worsened in North Macedonia following two rounds of local elections on October 17 and 31. As Balkan Insight reported, the elections were of crucial significance, as the opposition VMRO-DPMNE party, for the first time since 2017, re-established itself as the dominant political force, also declaring that it now had the strength in parliament to lead a government.
On October 24, Stevcho Jakimovski, leader of the Citizen Option for Macedonia Party GROM and a candidate in the local elections for the municipality of Karpos, claimed that Chinese troll farms targeted his Facebook profile. He called on political rivals to behave ethically and not engage in such campaigns during the election. GROM, in coalition with VMRO-DPMNE at national level, ran alone in the Karpos mayoral race. On October 29, as our new focus page on COVID-19 Crisis and Tech Response reported, the Ministry of Health withdrew its EU digital certificates and QR codes, following a hacker attack.
Users of a forum said the hackers penetrated an unprotected Macedonian server, from where they obtained the key used to generate the codes, broke into the system and began issuing QR codes using Macedonian citizens’ data. IT.mk, a Macedonian information technology portal, showed how easy it was to bypass the national health system and shared several posts by Twitter users with valid certificates issued for Adolf Hitler, Sponge Bob and other dead or fictitious characters.
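The failure mode the forum users describe — whoever holds the issuing key can mint certificates that verify as genuine — can be sketched as follows. The real EU digital COVID certificates use a signed CBOR/COSE payload; the simplified HMAC scheme, the key value and the field names below are illustrative assumptions, not the actual Macedonian system.

```python
import hashlib
import hmac
import json

# Hypothetical key, standing in for the signing key the hackers
# reportedly lifted from an unprotected server.
SERVER_KEY = b"key-left-on-an-unprotected-server"

def issue_certificate(citizen: dict, key: bytes) -> dict:
    """Sign a certificate payload; anyone with `key` can do this."""
    payload = json.dumps(citizen, sort_keys=True).encode()
    return {
        "payload": citizen,
        "signature": hmac.new(key, payload, hashlib.sha256).hexdigest(),
    }

def verify_certificate(cert: dict, key: bytes) -> bool:
    """A verifier can only check the signature, not who created it."""
    payload = json.dumps(cert["payload"], sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(cert["signature"], expected)

# With the leaked key, a certificate for any name verifies as genuine:
forged = issue_certificate({"name": "Sponge Bob", "vaccinated": True}, SERVER_KEY)
print(verify_certificate(forged, SERVER_KEY))  # → True
```

This is why the ministry had to withdraw the certificates outright: once the key leaks, no verifier can distinguish a forged certificate from a legitimate one, and only re-keying and reissuing restores trust.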
COVID-19 Fake News and Online Harassment Persist in Romania
Following a global trend, Romania’s digital environment is experiencing a rise in fake news, misinformation and other manipulative content about the COVID-19 pandemic. Romania’s online space also continued to record a high number of episodes of misogyny, especially targeting women working in education. For instance, on January 6, a former presidential candidate and TikTok influencer, Alexandru Cumpanasu, was arrested for sending comments of a sexual nature to teachers and professors, and for instigating hatred and discrimination against them. Some violations that occurred in October confirm this trend in Romania’s digital environment.
On October 19, Piatra Neamț County Police opened a criminal investigation into the spread of false information after a woman streamed herself on Facebook in front of a critical care ward, where COVID patients were being treated in Piatra Neamț, north-east Romania. The woman, filming from a distance, claimed that “there is no one” inside the clinic, suggesting the pandemic was fiction. The video also became known thanks to a Facebook post of Oana Gheorghiu, cofounder of the NGO Dăruiește Viață, who immediately reported the incident.
A second case concerned Florentina Golea, a schoolteacher who was harassed after posting photos on Facebook while teaching a class of 12-year-old girls on the importance of vaccination. On October 5, RO vaccinare, the official page of the National Committee for Vaccination, promoting the vaccination campaign in Romania, shared photos from the teacher’s profile on Facebook. After that, the teacher received hundreds of insulting comments via Facebook, from “profiteer” and “be ashamed” to “monster” and “criminal”. The teacher also received death threats from people who claimed to know where she lived and the address of her school in Tecuci, in Galați County. Sorin Cîmpeanu, Minister of Education, announced that he would support the teacher if she sued those who had harassed her on Facebook.
COVID-19 Manipulation and Threats to Journalists in Serbia
Manipulation, conspiracy theories and other fake news have spread fast in Serbia’s online environment, where most cases still seem to be linked to the COVID pandemic.
Recently, a case was uncovered in which some citizens were wrongly prescribed anti-parasite treatment for COVID via a Viber group. At the same time, alarmingly, Serbia stands out as one of the countries with the most attacks on independent journalists: between August 1, 2020, and August 31, 2021, 30 of the 111 cases recorded targeted journalists. BIRN editor and investigative journalist Ivana Jeremić was threatened by a Twitter user on December 2 last year.
The latest cases recorded by our monitoring team confirm this trend in the Serbian digital space.
On October 10, after Serbian virologist Ana Banko stated on Radio Television of Serbia, RTS, that vaccinated citizens can transmit the Delta strain of the coronavirus, part of her statement was spread on social media in a way that manipulated her words. The video, shared by many users under a misleading title, took the sentence out of context, leading viewers to the wrong conclusion. The virologist had been answering a series of questions on a talk show, and her intention was not to diminish the effects of the vaccine but only to emphasize the speed of transmission of the new Delta variant.
On October 21, meanwhile, online threats targeted two Serbian journalists, Jovana Gligorijević and Snežana Čongradin, the historian Dubravka Stojanović and the literary critic, Jelena Lalatović.
The threats, condemned by the Independent Association of Journalists of Serbia, were misogynistic and anti-feminist, and were posted from an anonymous Twitter account. This is not the first time threats have been sent from this account: a year ago, the journalist Vesna Mališić was threatened by the same profile, which called for her lynching and murder.
We would like to hear from parents and teachers willing to share their experience with us to help in an upcoming investigation into the safety of children and young teenagers using TikTok.
Scroll down for more information about how to take part.
The key things we want to know:
What steps did parents take to protect their children and young teenagers on the platform?
Were there any cases in which children and young teenagers were the targets of bullying, identity theft, privacy issues etc.?
Is the potential danger in the digital environment harming their children’s physical safety, and if so, how?
What do teachers know about the network and how do they educate children about it?
We will not publish any documents or names without prior consent and we do not plan to use specific examples, but rather show more general systemic problems. Your responses are secure and encrypted.
Your stories will be used to help us with an ongoing investigation.
Digital rights violations have been rising across Southeastern Europe since the beginning of the COVID-19 pandemic with a similar pattern – pro-government trolls and media threatening freedom of expression and attacking journalists who report such violations.
“Working together is the only way to raise awareness of citizens’ digital rights and hold public officials accountable,” civil society representatives attending BIRN and Share Foundation’s online event on Thursday agreed.
The event took place after the release of BIRN and SHARE Foundation’s report, Digital Rights Falter amid Political and Social Unrest, published the same day.
“We need to build an alliance of coalitions to raise awareness on digital rights and the accountability of politicians,” said Blerjana Bino, from SCiDEV, an Albanian-based NGO, closely following this issue.
When it comes to prevention and improving digital competencies in order to reduce risks to personal data and security, speakers agreed that digital and informational literacy is important – but the blame should not be put only on users.
The responsibility of tech giants and relevant state institutions to investigate such cases must be kept in mind, not only in straightforward cases but also in more complicated ones, the panel concluded.
Uros Misljenovic, from Partners Serbia, sees a major part of the problem in the lack of response from the authorities.
“We haven’t had one major case reaching an epilogue in court. Not a single criminal charge was brought by the public prosecutor either. Basically, the police and prosecutors are not interested in prosecuting these crimes,” he said. “So, if you violate these rights, you will face no consequences,” he concluded.
The report was presented and discussed at an online panel discussion with policymakers, journalists and civil society members around digital rights in Southeast Europe.
It was the first in a series of events as part of Platform B – a platform that aims to amplify the voices of strong and credible individuals and organisations in the region that promote the core values of democracy, such as civic engagement, independent institutions, transparency and rule of law.
Between August 2019 and December 2020, BIRN and the SHARE Foundation verified more than 800 violations of digital rights, including attempts to prevent valid freedom of speech (trolling of media and the public engaged in fair reporting and comment, for example) and at the other end of the scale, efforts to overwhelm users with false information and racist/discriminatory content – usually for financial or political gain.
The lack of awareness of digital rights violations within society has further undermined democracy, not only in times of crisis, the report reads, and identifies common trends, such as:
Democratic elections being undermined
Public service websites being hacked
Provocation and exploitation of social unrest
Conspiracy theories and fake news
Online hatred, leaving vulnerable people more isolated
Tech shortcuts failing to solve complex societal problems.
The report, Digital Rights Falter amid Political and Social Unrest, can be downloaded here.
Starting from Thursday, Serbia, Montenegro, Bosnia and Herzegovina, North Macedonia, Albania and Kosovo are dropping all roaming charges.
This means that citizens of these countries can make phone calls and send messages across the Western Balkan region without restrictions, paying the same prices as if they are in their home country.
However, the scrapping of roaming fees comes with a caveat: travelling citizens should not get too comfortable with their internet usage while abroad, as some restrictions may still apply depending on their provider.
“Users are advised to always check their internet plans with their telecom providers, before travelling,” North Macedonia’s Agency for Electronic Telecommunications, AEK, said.
When it comes to internet traffic, this means in practice that some restrictions might apply: with some plans, users might not be able to use all of the internet traffic from their home plan while abroad, the AEK explained.
Serbia’s Telenor provider explained that the use of the internet abroad will depend on the plan the users have.
“The amount [of internet traffic] depends on the monthly subscription for the tariff plan that users have, so there is no single unified amount, but it varies depending on the plan,” Telenor told N1 media outlet Thursday.
To prevent possible misuse of potentially lower prices in neighbouring countries, authorities across the region also said that while users can buy SIM cards in neighbouring countries, they will be able to use them for only four months before they expire.
Roaming charges in the Western Balkan region were abolished in accordance with the Regional Roaming Agreement signed in 2019 at the second Western Balkans Digital Summit in Belgrade.
Countries from the region signalled that the next step would be mulling ways to reduce roaming charges between Western Balkans and the EU. For that purpose, a draft is expected to be prepared by the end of this year.
When the global pandemic halted our “offline” lives, we moved meetings, dinners, parties, shopping and protests to the online sphere. As we sought comfort, education, business and social life in the digital world, our only public sphere also became overwhelmed with content designed to manipulate and misinform citizens.
Journalists, civil society activists, officials and the general public have faced vicious attacks – including verbal abuse, trolling, smear campaigns and undue pressure to retract content – in response to publishing information online. Much of our data was stolen, and our privacy endangered. Surveillance flourished.
In the period from August 2019 until December 2020, BIRN and the SHARE Foundation gathered information on digital rights violations in Bosnia and Herzegovina, Croatia, North Macedonia, Hungary, Romania and Serbia, and our monitoring shows that violations of digital rights continued at an alarming rate in all six countries.
As all six held elections during this period – local, general and/or presidential – our findings raise serious concerns about how the digital arena has been effectively hijacked to propagate fake news, hate-fuelled conspiracy theories and misinformation in support of offline efforts to sabotage democratic processes.
Just when people needed factually-correct information and governments needed close scrutiny to ensure high standards in public life, cyberattacks were launched against state bodies and the public were overwhelmed with false information and discriminatory content designed to manipulate voting and/or stoke hatred of particular groups.
Governments, for their part, used the pandemic to curb freedom of expression and abused health data, while many public institutions failed to meet the standards of a free and open internet.
During this period, BIRN and the SHARE Foundation verified more than 800 violations of digital rights including efforts to prevent valid freedom of speech (trolling of media and general public engaged in fair reporting and comment, for example) and at the other end of the scale, efforts to overwhelm users with false information and racist/discriminatory content – usually for financial or political gain.
Most online violations we monitored were under the category of pressures because of expression and activities (375) while the fewest violations monitored were classified as holding intermediaries liable (0).
Action was taken in just 21 per cent of cases, which usually entailed – depending on the type of violation – removing articles or deleting posts and/or comments by the general public and public sector organisations. During the COVID-19 crisis, we saw a rise in arrests of citizens accused of causing panic by publishing fake news on social media; Hungary, Serbia, and Bosnia and Herzegovina led this trend. Legal action, including arrests, penalties or other court action, was taken in less than 0.5 per cent of all monitored cases.
It is important to note that just as some violations included attempts to stifle free speech and frustrate freedom of expression through publishing falsehoods, not all legal actions launched to apparently hold intermediaries liable were legitimate attempts to protect freedom of speech. Some were cynical attempts against the public interest to block the publication of proven facts.
All these violations have contributed to an atmosphere dominated by fear and hatred with already vulnerable communities – such as LGBT+, groups identifying as female, migrants, particular ethnic groups – becoming subjected to worse and more frequent abuse, leaving them ever more isolated from support networks.
Those guilty of using the digital space to undermine democracy, intimidate others from publishing the truth or to spread malicious falsehoods operate with impunity, not least because there is no meaningful sense in the region of what constitutes digital rights – never mind the desire to or means to protect those rights.
Our report is the first effort at the regional level to map current challenges in the digital sphere, and it aims to fill gaps in the research. We took an interdisciplinary approach and looked at the problems from legal, political, technological and societal angles, in an attempt to show that responses to these violations should be equally holistic and overarching. We also want to highlight these issues, as the lack of awareness of digital rights violations within society further undermines democracy, not only in times of crisis.
We see the internet not only as open and transparent, but also as a set of mechanisms and tools with great potential to serve people’s needs; internet access has proved indispensable in times of crisis such as the COVID-19 pandemic.
We hope this report will serve not just as stock-taking, but will be understood as a map showing how to further advance our rights, and as an invitation to everyone to join forces in making our digital world healthy, too.
Marija Ristic is regional director of Balkan Investigative Reporting Network. Danilo Krivokapic is director of SHARE Foundation.
Report “Digital Rights Falter amid Political and Social Unrest” can be downloaded here.
As part of our Platform B, we are also hosting a discussion with policymakers, journalists and civil society members on digital rights in Southeast Europe. Register here.
Together with our partners, BIRN is launching a series of online and offline events aimed at amplifying the voices of strong and credible individuals and organisations in the region that promote the core values of democracy, such as civic engagement, independent institutions, transparency and rule of law.
As primarily a media organisation, we want to open space and provide a platform to discuss and reshape our alliances in light of the challenges facing democracies in South-East and Central Europe.
This effort comes at a critical time when the region is seeing several troubling trends: centralized power, reduced transparency, assaults on media, politicized judiciaries, unchecked corruption, online violations and social polarization – all amidst heightened geopolitical tensions and deep divisions in Europe.
Due to the ongoing pandemic, the Platform B event series will be organised in accordance with all relevant health measures. As the situation improves, we hope to be able to host some of the events in BIRN spaces in Sarajevo and Belgrade, and elsewhere in the region.
Platform B will be an opportunity for individuals and groups to meet monthly on selected topics.
Opening event:
Digital Rights Falter Amid Political and Social Unrest: What Now?
Date: 1 July, 2021 (Thursday)
Time: 15.00, CET
At this event, BIRN and SHARE Foundation will discuss their annual digital rights report, together with other members of the newly established SEE Network, talking about the key trends concerning the digital ecosystem.
We monitored digital rights in Bosnia and Herzegovina, Croatia, Hungary, North Macedonia, Romania and Serbia and collected more than 1,500 cases of online violations.
In Southern and Eastern Europe, where online disinformation campaigns are endangering guaranteed individual freedoms and the decline in internet safety has become a worrying trend, citizens with poor media and digital literacy have been left without viable protection mechanisms.
The event participants will have an opportunity to discuss and hear reflections from representatives of: EDRi, Zasto ne?, Citizen D, Homo Digitalis, SCiDEV, Partners Serbia, Metamorphosis, Atina NGO, Media Development Center.
Second event: Freedom of Information in the Balkans: Classified, Rejected, Delayed
Date: July 15, 2021 (Thursday)
Time: 14.00, CET
The global pandemic has been used as an excuse by many Balkan states not to fully implement freedom of information laws, leaving the public in the dark.
Transparency has been another victim of the COVID-19 pandemic.
While on paper freedom of information laws are up to date in almost all countries in the region, implementation is patchy at best and has grown worse since governments clamped down on the flow of information with the onset of the coronavirus.
Together with journalists, public information officers and colleagues from the Open Government Partnership, we will reflect on the findings of BIRN’s report tracking institutional transparency and offer recommendations on how to make our institutions open and accountable.
The registration form will be available here soon.
Events in August and in the fall will focus on investigative journalism and gender justice.
Partners Serbia, a Belgrade-based NGO that works on initiatives to combat corruption and develop democracy and the rule of law in the Balkan country, had been on Twitter for more than nine years when, in November 2020, the social media giant suspended its account.
Twitter gave no notice or explanation of the suspension, but Ana Toskic Cvetinovic, the executive director of Partners Serbia, had a hunch – that it was the result of a “coordinated attack”, probably other Twitter users submitting complaints about how the NGO was using its account.
“We tried for days to get at least some information from Twitter, like what could be the cause and how to solve the problem, but we haven’t received any answer,” Toskic Cvetinovic told BIRN. “After a month of silence, we saw that a new account was the only option.”
Twitter lifted the suspension in January, again without explanation. But Partners Serbia is far from alone among NGOs, media organisations and public figures in the Balkans who have had their social media accounts suspended without proper explanation or sometimes any explanation at all, according to BIRN monitoring of digital rights and freedom violations in the region.
Experts say the lack of transparency is a significant problem for those using social media as a vital channel of communication, not least because they are left in the dark as to what can be done to prevent such suspensions in the future.
But while organisations like Partners Serbia can face arbitrary suspension, half of the posts on Facebook and Twitter that are reported as hate speech, threatening violence or harassment in Bosnian, Serbian, Montenegrin or Macedonian remain online, according to the results of a BIRN survey, despite confirmation from the companies that the posts violated rules.
The investigation shows that the tools used by social media giants to protect their community guidelines are failing: posts and accounts that violate the rules often remain available even when breaches are acknowledged, while others that remain within those rules can be suspended without any clear reason.
Among BIRN’s findings are the following:
Almost half of reports in Bosnian, Serbian, Montenegrin or Macedonian language to Facebook and Twitter are about hate speech
One in two posts reported as hate speech, threatening violence or harassment in Bosnian, Serbian, Montenegrin or Macedonian language remains online. When it comes to reports of threatening violence, content was removed in 60 per cent of cases, and in 50 per cent of cases of targeted harassment.
Facebook and Twitter are using a hybrid model, a combination of artificial intelligence and human assessment in reviewing such reports, but declined to reveal how many of them are actually reviewed by a person proficient in Bosnian, Serbian, Montenegrin or Macedonian
Both social networks adopt a “proactive approach”, which means they remove content or suspend accounts even without a report of suspicious conduct, but the criteria employed are unclear and transparency is lacking.
The survey showed that people were more ready to report content targeting them or minority groups.
Experts say the biggest problem could be the lack of transparency in how social media companies assess complaints.
The assessment itself is done in the first instance by an algorithm and, if necessary, a human gets involved later. But BIRN’s research shows that things get messy when it comes to the languages of the Balkans, precisely because of the specificity of language and context.
Distinguishing harsh criticism from defamation, or radical political opinions from expressions of hatred, racism or incitement to violence, requires contextual and nuanced analysis.
Half of the posts containing hate speech remain online
Graphic: BIRN/Igor Vujcic
Facebook and Twitter are among the most popular social networks in the Balkans. The scope of their popularity is demonstrated in a 2020 report by DataReportal, an online platform that analyses how the world uses the Internet.
In January, there were around 3.7 million social media users in Serbia, 1.1 million in North Macedonia, 390,000 in Montenegro and 1.7 million in Bosnia and Herzegovina.
In each of the countries, Facebook is the most popular, with an estimated three million users in Serbia, 970,000 in North Macedonia, 300,000 in Montenegro and 1.4 million in Bosnia and Herzegovina.
Such numbers make Balkan countries attractive for advertising but also for the spread of political messages, opening the door to violations.
The debate over the benefits and the dangers of social media for 21st century society is well known.
In terms of violent content, besides the use of Artificial Intelligence, or AI, social media giants are trying to give users the means to react as well, chiefly by reporting violations to network administrators.
There are three kinds of filters: manual filtering by humans; automated filtering by algorithmic tools; and hybrid filtering, performed by a combination of humans and automated tools.
In cases of uncertainty, posts or accounts are submitted to human review before decisions are taken, or afterwards if a user complains about an automated removal.
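The hybrid model described above can be sketched as follows. The toy scoring function, the threshold values and the routing labels are illustrative assumptions for the sake of the sketch, not details of Facebook’s or Twitter’s actual systems.

```python
# A minimal sketch of hybrid content filtering: an automated classifier
# scores each post, clear-cut cases are handled automatically, and
# uncertain ones are routed to a human moderator.

def classify(text: str) -> float:
    """Stand-in for an ML model returning a violation likelihood in [0, 1]."""
    flagged_terms = {"threat", "hate"}  # toy heuristic, not a real model
    hits = sum(term in text.lower() for term in flagged_terms)
    return min(1.0, 0.45 * hits)

def route(text: str, auto_remove_at: float = 0.9, review_at: float = 0.4) -> str:
    """Route a post based on the classifier's confidence."""
    score = classify(text)
    if score >= auto_remove_at:
        return "removed-automatically"    # confident violation, no human needed
    if score >= review_at:
        return "queued-for-human-review"  # uncertain: a moderator decides
    return "kept-online"                  # below threshold, left up

print(route("an ordinary holiday photo"))   # → kept-online
print(route("this post contains a threat")) # → queued-for-human-review
```

The middle band is where language and context matter most: for Balkan languages, a score in the uncertain range is only useful if the human reviewer it reaches actually understands the language, which is exactly the information the companies declined to reveal.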
“Today, we primarily rely on AI for the detection of violating content on Facebook and Instagram, and in some cases to take action on the content automatically as well,” a Facebook spokesperson told BIRN. “We utilize content reviewers for reviewing and labelling specific content, particularly when technology is less effective at making sense of context, intent or motivation.”
Twitter told BIRN that it is increasing the use of machine learning and automation to enforce the rules.
“Today, by using technology, more than 50 per cent of abusive content that’s enforced on our service is surfaced proactively for human review instead of relying on reports from people using Twitter,” said a company spokesperson.
“We have strong and dedicated teams of specialists who provide 24/7 global coverage in multiple different languages, and we are building more capacity to address increasingly complex issues.”
In order to check how effective those mechanisms are when it comes to content in Balkan languages, BIRN conducted a survey focusing on Facebook and Twitter reports and divided into three categories: violent threats (direct or indirect), harassment and hateful conduct.
The survey asked for the language of the disputed content, who the target and the author were, and whether or not the report was successful.
Over 48 per cent of respondents reported hate speech, some 20 per cent reported targeted harassment and some 17 per cent reported threatening violence.
According to the survey, 43 per cent of content reported as hate speech remained online, while 57 per cent was removed. When it comes to reports of threatening violence, content was removed in 60 per cent of cases.
Roughly half of reports of targeted harassment resulted in removal.
Chloe Berthelemy, a policy advisor at European Digital Rights, EDRi, which works to promote digital rights, says the real-life consequences of neglect can be disastrous.
“For example, in cases of image-based sexual abuse [often wrongly called “revenge porn”], the majority of victims are women and they suffer from social exclusion as a result of these attacks,” Berthelemy said in a written response to BIRN. “For example, they can be discriminated against on the job market because recruiters search their online reputation.”
Content removal – censorship or corrective?
Graphic: BIRN/Igor Vujcic.
According to the responses to BIRN’s questionnaire, some 57 per cent of those who reported hate speech said they were notified that the reported post/account violated the rules.
On the other hand, some 28 per cent said they had received notification that the content they reported did not violate the rules, while 14 per cent received only confirmation that their report was filed.
In terms of reports of targeted harassment, half of people said they received confirmation that the content violated the rules; 16 per cent were told the content did not violate rules. A third of those who reported targeted harassment only received confirmation their report was received.
As for threatening violence, 40 per cent of people received confirmation that the reported post/account violated the rules while 60 per cent received only confirmation their complaint had been received.
One of the respondents told BIRN they had reported at least seven accounts for spreading hatred and violent content.
“I do not engage actively on such reports nor do I keep looking and searching them. However, when I do come across one of these hateful, genocide deniers and genocide supporters, it feels the right thing to do, to stop such content from going further,” the respondent said, speaking on condition of anonymity. “Maybe one of all the reported individuals stops and asks themselves what led to this and simply opens up discussions, with themselves or their circles.”
Although Twitter confirmed that those seven accounts violated its rules, six of them are still available online.
Another issue that emerged is the unclear criteria users face when reporting violations. Basic knowledge of English is also required.
Sanjana Hattotuwa, special advisor at the ICT4Peace Foundation, agreed that the in-app or web-based reporting process is confusing.
“Moreover, it is often in English even though the rest of the UI/UX [User Interface/User Experience] could be in the local language. Furthermore, the laborious selection of categories is, for a victim, not easy – especially under duress.”
Facebook told BIRN that the vast majority of reports are reviewed within 24 hours and that the company uses community reporting, human review and automation.
It refused, however, to give any specifics on those it employs to review content or reports in Balkan languages, saying “it isn’t accurate to only give the number of content reviewers”.
“That alone doesn’t reflect the number of people working on a content review for a particular country at any given time,” the spokesperson said.
BIRN methodology
BIRN conducted its questionnaire via the network’s tool for engaging citizens in reporting, developed in cooperation with the British Council.
The anonymous questionnaire had the aim of collecting information on what type of violations people reported, who was the target and how successful the report was. The questions were available in English, Macedonian, Albanian and Bosnian/Serbian/Montenegrin. BIRN focused on Facebook and Twitter given their popularity in the Balkans and the sensitivity of shared content, which is mostly textual and harder to assess compared to videos and photos.
Social networks often remove content themselves, in what they call a ‘proactive approach’.
According to data provided by Facebook, in the last quarter of 2017 their proactive detection rate was 23.6 per cent.
“This means that of the hate speech we removed, 23.6 per cent of it was found before a user reported it to us,” the spokesperson said. “The remaining majority of it was removed after a user reported it. Today we proactively detect about 95 per cent of hate speech content we remove.”
“Whether content is proactively detected or reported by users, we often use AI to take action on the straightforward cases and prioritise the more nuanced cases, where context needs to be considered, for our reviewers.”
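The “proactive detection rate” Facebook cites is a simple ratio: of all the content removed in a period, the share that automated systems flagged before any user reported it. A minimal sketch of that calculation, using the figures quoted above (the function name is illustrative, not Facebook’s):

```python
def proactive_detection_rate(detected_proactively: int, removed_total: int) -> float:
    """Share of removed content that was flagged before any user report."""
    if removed_total == 0:
        return 0.0
    return detected_proactively / removed_total

# Figures quoted above: of every 1,000 items of hate speech removed
# in the last quarter of 2017, roughly 236 were found proactively.
rate_2017 = proactive_detection_rate(236, 1000)
print(f"{rate_2017:.1%}")  # 23.6%
```

Note that the metric says nothing about how much violating content was never removed at all; it only describes the removed portion.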
There is no available data, however, when it comes to content in a specific language or country.
Facebook publishes a Community Standards Enforcement Report on a quarterly basis, but, according to the spokesperson, the company does not “disclose data regarding content moderation in specific countries.”
Whatever the tools, the results are sometimes highly questionable.
In May 2018, Facebook blocked for 24 hours the profile of Bosnian journalist Dragan Bursac after he posted a photo of a detention camp for Bosniaks in Serbia during the collapse of federal Yugoslavia in the 1990s.
Facebook determined that Bursac’s post had violated “community standards,” local media reported.
Bojan Kordalov, Skopje-based public relations and new media specialist, said that, “when evaluating efficiency in this area, it is important to emphasise that the traffic in the Internet space is very dense and is increasing every second, which unequivocally makes it a field where everyone needs to contribute”.
“This means that social media managements are undeniably responsible for meeting the standards and compliance with regulations within their platforms, but this does not absolve legislators, governments and institutions of responsibility in adapting to the needs of the new digital age, nor does it give anyone the right to redefine and narrow down the notion and the benefits that democracy brings.”
Lack of language sensibility
SHARE Foundation, a Belgrade-based NGO working on digital rights, said the question was crucial given the huge volume of content flowing through the likes of Facebook and Twitter in all languages.
“When it comes to relatively small language groups in absolute numbers of users, such as languages in the former Yugoslavia or even in the Balkans, there is simply no incentive or sufficient pressure from the public and political leaders to invest in human moderation,” SHARE told BIRN.
Berthelemy of EDRi said the Balkans were not a standalone example, and that the content moderation practices and policies of Facebook and Twitter are “doomed to fail.”
“Many of these corporations operate on a massive scale, some of them serving up to a quarter of the world’s population with a single service,” Berthelemy told BIRN. “It is impossible for such monolithic architecture, and speech regulation process and policy to accommodate and satisfy the specific cultural and social needs of individuals and groups.”
The European Parliament has also stressed the importance of a combined assessment.
“The expressions of hatred can be conveyed in many ways, and the same words typically used to convey such expressions can also be used for different purposes,” according to a 2020 study – ‘The impact of algorithms for online content filtering or moderation’ – commissioned by the Parliament’s Policy Department for Citizens’ Rights and Constitutional Affairs.
“For instance, such words can be used for condemning violence, injustice or discrimination against the targeted groups, or just for describing their social circumstances. Thus, to identify hateful content in textual messages, an attempt must be made at grasping the meaning of such messages, using the resources provided by natural language processing.”
Hattotuwa said that, in general, “non-English language markets with non-Romanic (i.e. not English letter based) scripts are that much harder to design AI/ML solutions around”.
“And in many cases, these markets are out of sight and out of mind, unless the violence, abuse or platform harms are so significant they hit the New York Times front-page,” Hattotuwa told BIRN.
“Humans are necessary for evaluations, but as you know, there are serious emotional / PTSD issues related to the oversight of violent content, that companies like Facebook have been sued for (and lost, having to pay damages).”
Failing in non-English
Dragan Vujanovic of the Sarajevo-based NGO Vasa prava [Your Rights] criticised what he said was a “certain level of tolerance with regards to violations which support certain social narratives.”
“This is particularly evident in the inconsistent behavior of social media moderators where accounts with fairly innocuous comments are banned or suspended while other accounts, with overt abuse and clear negative social impact, are tolerated.”
For Chloe Berthelemy, trying to apply a uniform set of rules on the very diverse range of norms, values and opinions on all available topics that exist in the world is “meant to fail.”
“For instance, where nudity is considered to be sensitive in the United States, other cultures take a more liberal approach,” she said.
The example of Myanmar, where Facebook effectively blocked an entire language by refusing all messages written in Jinghpaw, a language spoken by Myanmar’s ethnic Kachin and written with a Roman alphabet, shows the scale of the issue.
“The platform performs very poorly at detecting hate speech in non-English languages,” Berthelemy told BIRN.
The techniques used to filter content differ depending on the media analysed, according to the 2020 study for the European Parliament.
“A filter can work at different levels of complexity, spanning from simply comparing contents against a blacklist, to more sophisticated techniques employing complex AI techniques,” it said.
“In machine learning approaches, the system, rather than being provided with a logical definition of the criteria to be used to find and classify content (e.g., to determine what counts as hate speech, defamation, etc.) is provided with a vast set of data, from which it must learn on its own the criteria for making such a classification.”
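The two ends of the spectrum the study describes can be illustrated with a toy sketch: a blacklist filter that simply matches words against a list, next to a classifier that derives its own criteria from labelled examples. The word list and training data below are invented placeholders for illustration, not anything the platforms actually use:

```python
import re
from collections import Counter

# Level 1: simple blacklist comparison (the word list is a placeholder).
BLACKLIST = {"vermin", "subhuman"}

def blacklist_flag(text: str) -> bool:
    """Flag text if any word appears on the blacklist."""
    words = set(re.findall(r"\w+", text.lower()))
    return bool(words & BLACKLIST)

# Level 2: a classifier that learns its criteria from labelled examples
# (a toy word-frequency model; real systems use far larger models and data).
def train(examples):
    """examples: iterable of (text, label) pairs; label 1 = hateful, 0 = benign."""
    counts = {0: Counter(), 1: Counter()}
    for text, label in examples:
        counts[label].update(re.findall(r"\w+", text.lower()))
    return counts

def score(counts, text: str) -> float:
    """Positive score: closer to the hateful examples; negative: closer to benign."""
    s = 0.0
    for w in re.findall(r"\w+", text.lower()):
        s += (counts[1][w] + 1) / (counts[0][w] + 1) - 1
    return s
```

The blacklist needs no data but misses every rephrasing; the learned model generalises, but only as well as its training data allows, which is exactly the weakness the study identifies for small language groups.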
Users of both Twitter and Facebook can appeal in the event their accounts are suspended or blocked.
“Unfortunately, the process lacks transparency, as the number of filed appeals is not mentioned in the transparency report, nor is the number of processed or reinstated accounts or tweets,” the study noted.
Between January and October 2020, Facebook restored some 50,000 items of content without an appeal and 613,000 after appeal.
Machine learning
As cited in the 2020 study commissioned by the European Parliament, Facebook has developed a machine learning approach called Whole Post Integrity Embeddings, WPIE, to deal with content violating Facebook guidelines.
The system addresses multimedia content by providing a holistic analysis of a post’s visual and textual content and related comments, across all dimensions of inappropriateness (violence, hate, nudity, drugs, etc.). The company claims that automated tools have improved the implementation of Facebook content guidelines. For instance, about 4.4 million items of drug sale content were removed in just the third quarter of 2019, 97.6 per cent of which were detected proactively.
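WPIE itself is proprietary, but the “holistic” idea described above can be sketched in outline: per-modality signals for a post (text, image, comment thread) are fused into one post-level decision rather than judged in isolation. The weights and threshold below are invented for illustration; a real system learns the fusion jointly rather than hand-weighting independent classifiers:

```python
from dataclasses import dataclass

@dataclass
class PostSignals:
    text_score: float     # e.g. output of a text classifier, 0..1
    image_score: float    # e.g. output of an image classifier, 0..1
    comment_score: float  # aggregate score over the comment thread, 0..1

def holistic_score(s: PostSignals) -> float:
    # Invented weights, purely illustrative of fusing signals across modalities.
    return 0.5 * s.text_score + 0.3 * s.image_score + 0.2 * s.comment_score

def flag_for_review(s: PostSignals, threshold: float = 0.6) -> bool:
    """Route the post to removal/human review when the fused score is high."""
    return holistic_score(s) >= threshold
```

The point of the fused approach is that a borderline image plus hateful comments can together cross the threshold even when no single signal would on its own.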
When it comes to the ways in which social networks deal with suspicious content, Hattotuwa said that “context is key”.
While acknowledging advancements in the past two to three years, Hattotuwa said that, “No AI and ML [Machine Learning] I am aware of even in English language contexts can accurately identify the meaning behind an image.”
“With regards to content inciting hate, hurt and harm,” he said, “it is even more of a challenge.”
According to the Twitter Transparency report, in the first six months of 2020, 12.4 million accounts were reported to the company, just over six million of which were reported for hateful conduct and some 5.1 million for “abuse/harassment”.
In the same period, Twitter suspended 925,744 accounts, of which 127,954 were flagged for hateful conduct and 72,139 for abuse/harassment. The company removed such content in a little over 1.9 million cases: 955,212 in the hateful conduct category and 609,253 in the abuse/harassment category.
Toskic Cvetinovic said the rules needed to be clearer and better communicated to users by “living people.”
“Often, the content removal doesn’t have a corrective function, but amounts to censorship,” she said.
Berthelemy said that, “because the dominant social media platforms reproduce the social systems of oppression, they are also often unsafe for many groups at the margins.”
“They are unable to understand the discriminatory and violent online behaviours, including certain forms of harassment and violent threats and therefore, cannot address the needs of victims,” Berthelemy told BIRN.
“Furthermore,” she said, “those social media networks are also advertisement companies. They rely on inflammatory content to generate profiling data and thus advertisement profits. There will be no effective, systematic response without addressing the business models of accumulating and trading personal data.”
Authorities in North Macedonia face an uphill battle to confront the dangers of online harassment, experts warn, following a public outcry over the reappearance of a group on the encrypted messaging app Telegram in which thousands of users were sharing explicit pictures and videos of women and girls, some of them minors.
The group, known as ‘Public Room’, was first shut down in January 2020, only to re-emerge a year later before it was closed again on January 29. Reports say new groups have since popped up, their membership spreading to neighbouring Serbia.
Authorities in the Balkan state have mooted the possibility of banning Telegram altogether and criminalising the act of stalking, making it punishable with a prison sentence of up to three years.
The case, however, has exposed the many layers that authorities need to address when it comes to preventing online harassment and sexual violence. And experts in the field say it will not be easy.
“This type of danger is very difficult to handle, given that many countries in the world have had the same or similar problems as North Macedonia,” said Suad Seferi, a cybersecurity analyst and head of the IT sector at the International Balkan University in Skopje.
Seferi cited blocks on Telegram in countries such as Azerbaijan, Bahrain, Belarus and China, but cautioned against following such a route given the risk of it being construed as censorship by those using the app for its primary purpose of simple communication.
“The government could try and reach an agreement, or communicate with Telegram to close specific channels or seek cooperation in prosecuting the perpetrators of such acts,” he told BIRN.
Law not being applied
An image showing the Telegram messenger app. Photo: EPA-EFE/MAURITZ ANTIN
The phenomenon has triggered heated debate in North Macedonia; a number of victims have spoken out publicly about how some of the 7,000 users of Public Room shared explicit, private photos of them or took pictures from their social media profiles and shared them alongside the names and phone numbers of the victims.
One of them, 28-year-old Ana Koleva, met Justice Minister Bojan Maricic over the weekend to discuss her own harrowing experience after her pictures began circulating in the Telegram group and elsewhere and she was bombarded with unwanted messages and phone calls.
Some victims, including Koleva, said they appealed to the police for help but were bluntly dismissed. One reason given by police was that they were unable to act unless the victim was a minor.
Critics say the group’s re-emergence exposes the failure of authorities to stamp it out in the first place.
“The ‘Public Room’ case revealed the inertia and inability of the authorities to act in such cases of violence and harassment of women and girls,” said Skopje-based gender expert Natasha Dimitrovska. “Although there are laws, they are not implemented.”
North Macedonia’s law on prevention and protection from violence against women and domestic violence also defines sexual harassment and especially online sexual harassment.
“This is in line with the Istanbul Convention, which states that all forms of sexual violence and harassment should be sanctioned,” said Dimitrovska. “In addition, endangering someone’s security and sharing and collecting other people’s personal data without permission are crimes that are already regulated by the Criminal Code.”
She told BIRN that it was imperative that authorities grasp the fact that whatever goes on online has repercussions offline.
“There is no longer a division between offline and online,” she said. “What happens online also has profound consequences in real life. Girls and women who are sexually harassed online are also restricted from accessing and expressing themselves freely in public.”
“What’s even worse is that everything that is online remains there forever and is widely available, so with online harassment it’s even more frightening in the sense that it will remain there for a long time and haunt the victim.”
‘Scary viral dimensions’
Illustration. Photo: Markus Spiske/Unsplash
Cybersecurity experts caution that it is extremely difficult to control or monitor content on platforms such as Telegram, which has become notorious for similar scandals.
In the US and the UK, there are laws against ‘revenge porn’, in which people share explicit pictures of their former partners as a form of retaliation. Six years ago, only three US states had such laws in place. They have since spread to at least 46.
Privacy and data protection expert Ljubica Pendaroska said some public ‘supergroups’ can have up to 200,000 members, which massively increases the chances of privacy violations.
“Usually, in the communication in such groups, the spectrum of personal data related to the victims is supplemented with address of residence, telephone number, information about family members, etc,” Pendaroska told BIRN.
“So the invasion of privacy gets bigger and usually goes out of the group and the network, taking on really scary viral dimensions.”
Importance of raising public awareness
To combat such acts, experts advocate raising public awareness about privacy and how to protect it – particularly among parents and children – and punishing violations in a timely manner.
“From experience, young people know a lot about the so-called technical aspects, capabilities and impacts of social networks and applications, but little about their privacy and especially the potential social implications of what is shared in the online world,” said Pendaroska, who also serves as president of Women4Cyber North Macedonia, an initiative to support the participation of women in the field of cybersecurity.
“Our concept is to avoid occasional action but commit to consistent and continuous education of women about the potential risks that lurk in the online world,” she told BIRN, “because that’s the only way to achieve long-term results and to raise awareness.”
“Therefore, our plan is within each project or activity that we implement, to include exactly that component – through various activities and tools to educate women, because awareness is key.”
North Macedonia’s authorities on Thursday threatened to block the messaging app Telegram over the activities of a group of more than 7,000 users who have been sharing and exchanging explicit pictures and videos of girls – some of whom are underage.
Some users even wrote the names and locations of the girls. Others have shared photoshopped images taken from their Instagram profiles.
Prime Minister Zoran Zaev said the authorities would not hesitate to block Telegram if they had to – and if the messaging app didn’t permanently close this and similar groups.
“If the Telegram application does not close Public Room, where pornographic and private content is shared by our citizens, as well as child pornography, we will consider the option of blocking or restricting the use of this application in North Macedonia,” Zaev wrote in a Facebook post.
The group, called Public Room, was first discovered in January 2020. The authorities then said that they had found the organisers and had dealt with the matter.
However, a year later, the group has re-emerged, sparking a heated debate in North Macedonia over police inaction.
Several victims whose pictures and phone numbers were hacked and used have complained about what happened to them – and about what they see as a lack of action on the part of the authorities in preventing it.
“I started receiving messages and calls on my cell phone, Viber, WhatsApp, Messenger and Instagram,” one 28-year-old victim, Ana, recalled in an Instagram post.
“I didn’t know what was happening or where it was coming from. The next day, I received a screenshot of my picture, which was not only posted in Public Room but shared elsewhere. I didn’t know what to do. I panicked, I was scared, I’d never experienced anything like that,” she added.
But the woman said that when she told the police about what happened, they told her they couldn’t do much about it, since she wasn’t a minor.
North Macedonia’s Minister of Interior, Oliver Spasovski, said on Thursday that the police had arrested four people in connection with the revived group and had launched a full-scale investigation.
“We have identified more people who will be detained in the coming period, so we can reach those who created this group, and also those that are abusing personal data within the group. We are working on this intensively with the Public Prosecutor,” Spasovski told the media.
However, following closure of the group on Thursday, there have been reports that some of its users are opening new groups where they continue the same practices.
Prime Minister Zaev said users of this and similar groups needed to heed a final warning.
“I want to send a message to all our citizens who are sharing pictures and content in that group [Public Room] … to stop what they are doing and leave the group,” said Zaev on Facebook.
“At the end of the day, we will get the data, you will be charged and you will be held accountable for what you do,” he concluded.
Romania’s High Court of Cassation and Justice ruled on Tuesday that pretending to be someone else on Facebook is an offence punishable under the country’s criminal law.
The ruling arose from the case of a man sentenced to three years and eight months in prison for blackmail, digital fraud and breach of privacy for posting intimate images of his ex-girlfriend on a social network and opening pornography site accounts in her name.
According to the indictment, the man created the false social network account after threatening his former girlfriend in December 2018 that he would publish several videos of them having sex, as well as pictures in which she appeared naked, if she did not resume the relationship with him.
The case reached the High Court after the Court of Appeal in the Transylvanian city of Brasov in central Romania asked for its opinion about whether “opening and using an account on a social network opened to the public” to publish real “information, photographs, video images, etc.” could be considered digital fraud as defined by article 325 of the criminal code.
The High Court concluded that “opening and using an account on a social network open to the public, using as a username the name of another person and introducing real personal data that allows for that person’s identification” meets the requirements to be considered as digital fraud.
The Brasov court referred the case to the High Court because other Romanian courts had previously reached different and contradictory conclusions in similar cases.