Facebook, Twitter Struggling in Fight against Balkan Content Violations

Partners Serbia, a Belgrade-based NGO that works on initiatives to combat corruption and develop democracy and the rule of law in the Balkan country, had been on Twitter for more than nine years when, in November 2020, the social media giant suspended its account.

Twitter gave no notice or explanation of the suspension, but Ana Toskic Cvetinovic, the executive director of Partners Serbia, had a hunch – that it was the result of a “coordinated attack”, probably other Twitter users submitting complaints about how the NGO was using its account.

“We tried for days to get at least some information from Twitter, like what could be the cause and how to solve the problem, but we haven’t received any answer,” Toskic Cvetinovic told BIRN. “After a month of silence, we saw that a new account was the only option.” 

Twitter lifted the suspension in January, again without explanation. But Partners Serbia is far from alone among NGOs, media organisations and public figures in the Balkans who have had their social media accounts suspended without proper explanation or sometimes any explanation at all, according to BIRN monitoring of digital rights and freedom violations in the region.

Experts say the lack of transparency is a significant problem for those using social media as a vital channel of communication, not least because they are left in the dark as to what can be done to prevent such suspensions in the future.

But while organisations like Partners Serbia can face arbitrary suspension, half of the posts on Facebook and Twitter that are reported as hate speech, threatening violence or harassment in Bosnian, Serbian, Montenegrin or Macedonian remain online, according to the results of a BIRN survey, despite confirmation from the companies that the posts violated rules.

The investigation shows that the tools used by social media giants to protect their community guidelines are failing: posts and accounts that violate the rules often remain available even when breaches are acknowledged, while others that remain within those rules can be suspended without any clear reason.

Among BIRN’s findings are the following:

  • Almost half of reports to Facebook and Twitter in the Bosnian, Serbian, Montenegrin or Macedonian languages concern hate speech.
  • One in two posts reported as hate speech, threatening violence or harassment in those languages remains online. Content reported as threatening violence was removed in 60 per cent of cases, and content reported as targeted harassment in 50 per cent of cases.
  • Facebook and Twitter use a hybrid model – a combination of artificial intelligence and human assessment – to review such reports, but declined to reveal how many reports are actually reviewed by a person proficient in Bosnian, Serbian, Montenegrin or Macedonian.
  • Both social networks take a “proactive approach”, meaning they remove content or suspend accounts even without a report of suspicious conduct, but the criteria employed are unclear and transparency is lacking.
  • The survey showed that people were more ready to report content targeting them or minority groups.

Experts say the biggest problem could be the lack of transparency in how social media companies assess complaints. 

The assessment itself is done in the first instance by an algorithm and, if necessary, a human gets involved later. But BIRN’s research shows that things get messy when it comes to the languages of the Balkans, precisely because of the specificity of language and context.

Distinguishing harsh criticism from defamation, or radical political opinions from expressions of hatred, racism or incitement to violence, requires contextual and nuanced analysis.

Half of the posts containing hate speech remain online


Graphic: BIRN/Igor Vujcic

Facebook and Twitter are among the most popular social networks in the Balkans. The scope of their popularity is demonstrated in a 2020 report by DataReportal, an online platform that analyses how the world uses the Internet.

In January, there were around 3.7 million social media users in Serbia, 1.1 million in North Macedonia, 390,000 in Montenegro and 1.7 million in Bosnia and Herzegovina.

In each of the countries, Facebook is the most popular, with an estimated three million users in Serbia, 970,000 in North Macedonia, 300,000 in Montenegro and 1.4 million in Bosnia and Herzegovina.

Such numbers make Balkan countries attractive for advertising but also for the spread of political messages, opening the door to violations.

The debate over the benefits and the dangers of social media for 21st century society is well known.

In terms of violent content, besides the use of Artificial Intelligence, or AI, social media giants are trying to give users the means to react as well, chiefly by reporting violations to network administrators. 

There are three kinds of filters: manual filtering by humans; automated filtering by algorithmic tools; and hybrid filtering, performed by a combination of humans and automated tools.
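The hybrid model can be sketched in a few lines of Python. This is purely illustrative: the scoring function, thresholds and word list below are invented for the example and bear no relation to any platform’s actual systems.

```python
# Illustrative sketch of hybrid filtering (not any platform's real code):
# an automated check handles clear-cut cases, while uncertain posts are
# queued for human review. All names and thresholds are hypothetical.

def automated_score(post: str) -> float:
    """Toy stand-in for an ML model: scores posts on blacklisted terms."""
    blacklist = {"threat", "slur"}  # hypothetical word list
    hits = sum(word in blacklist for word in post.lower().split())
    return min(1.0, hits / 2)

def hybrid_filter(post: str, remove_above: float = 0.9,
                  review_above: float = 0.4) -> str:
    """Automate the confident decisions; escalate the borderline ones."""
    score = automated_score(post)
    if score >= remove_above:
        return "removed automatically"
    if score >= review_above:
        return "queued for human review"
    return "kept online"

print(hybrid_filter("ordinary post"))        # kept online
print(hybrid_filter("a slur and a threat"))  # removed automatically
```

The point of the middle band is exactly what the companies describe below: automation acts alone only on clear-cut cases, and anything ambiguous is surfaced for a human.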

In cases of uncertainty, posts or accounts are submitted for human review before decisions are taken, or afterwards in the event that a user complains about an automated removal.

“Today, we primarily rely on AI for the detection of violating content on Facebook and Instagram, and in some cases to take action on the content automatically as well,” a Facebook spokesperson told BIRN. “We utilize content reviewers for reviewing and labelling specific content, particularly when technology is less effective at making sense of context, intent or motivation.”

Twitter told BIRN that it is increasing the use of machine learning and automation to enforce the rules.

“Today, by using technology, more than 50 per cent of abusive content that’s enforced on our service is surfaced proactively for human review instead of relying on reports from people using Twitter,” said a company spokesperson.

“We have strong and dedicated teams of specialists who provide 24/7 global coverage in multiple different languages, and we are building more capacity to address increasingly complex issues.”

In order to check how effective those mechanisms are when it comes to content in Balkan languages, BIRN conducted a survey focusing on Facebook and Twitter reports and divided into three categories: violent threats (direct or indirect), harassment and hateful conduct. 

The survey asked for the language of the disputed content, who was the target and who was the author, and whether or not the report was successful.

Over 48 per cent of respondents reported hate speech, some 20 per cent reported targeted harassment and some 17 per cent reported threatening violence. 

The survey showed that people were more ready to report content targeting them or minority groups.

According to the survey, 43 per cent of content reported as hate speech remained online, while 57 per cent was removed. When it comes to reports of threatening violence, content was removed in 60 per cent of cases. 

Roughly half of reports of targeted harassment resulted in removal.

Chloe Berthelemy, a policy advisor at European Digital Rights, EDRi, which works to promote digital rights, says the real-life consequences of neglect can be disastrous. 

“For example, in cases of image-based sexual abuse [often wrongly called “revenge porn”], the majority of victims are women and they suffer from social exclusion as a result of these attacks,” Berthelemy said in a written response to BIRN. “For example, they can be discriminated against on the job market because recruiters search their online reputation.”

Content removal – censorship or corrective?


Graphic: BIRN/Igor Vujcic.

According to the responses to BIRN’s questionnaire, some 57 per cent of those who reported hate speech said they were notified that the reported post/account violated the rules. 

On the other hand, some 28 per cent said they had received notification that the content they reported did not violate the rules, while 14 per cent received only confirmation that their report was filed.

In terms of reports of targeted harassment, half of people said they received confirmation that the content violated the rules; 16 per cent were told the content did not violate rules. A third of those who reported targeted harassment only received confirmation their report was received.  

As for threatening violence, 40 per cent of people received confirmation that the reported post/account violated the rules while 60 per cent received only confirmation their complaint had been received.

One of the respondents told BIRN they had reported at least seven accounts for spreading hatred and violent content. 

“I do not engage actively on such reports nor do I keep looking and searching them. However, when I do come across one of these hateful, genocide deniers and genocide supporters, it feels the right thing to do, to stop such content from going further,” the respondent said, speaking on condition of anonymity. “Maybe one of all the reported individuals stops and asks themselves what led to this and simply opens up discussions, with themselves or their circles.”

Although Twitter confirmed that those seven accounts violated some of its rules, six of them are still available online.

Another issue that emerged is the unclear criteria for reporting violations. A basic knowledge of English is also required.

Sanjana Hattotuwa, special advisor at the ICT4Peace Foundation, agreed that the in-app or web-based reporting process is confusing.

“Moreover, it is often in English even though the rest of the UI/UX [User Interface/User Experience] could be in the local language. Furthermore, the laborious selection of categories is, for a victim, not easy – especially under duress.”

Facebook told BIRN that the vast majority of reports are reviewed within 24 hours and that the company uses community reporting, human review and automation.

It refused, however, to give any specifics on those it employs to review content or reports in Balkan languages, saying “it isn’t accurate to only give the number of content reviewers”.

BIRN methodology 

BIRN conducted its questionnaire via the network’s tool for engaging citizens in reporting, developed in cooperation with the British Council.

The anonymous questionnaire had the aim of collecting information on what type of violations people reported, who was the target and how successful the report was. The questions were available in English, Macedonian, Albanian and Bosnian/Serbian/Montenegrin. BIRN focused on Facebook and Twitter given their popularity in the Balkans and the sensitivity of shared content, which is mostly textual and harder to assess compared to videos and photos.

“That alone doesn’t reflect the number of people working on a content review for a particular country at any given time,” the spokesperson said. 

Social networks often remove content themselves, in what they call a ‘proactive approach’. 

According to data provided by Facebook, in the last quarter of 2017 its proactive detection rate was 23.6 per cent.

“This means that of the hate speech we removed, 23.6 per cent of it was found before a user reported it to us,” the spokesperson said. “The remaining majority of it was removed after a user reported it. Today we proactively detect about 95 per cent of hate speech content we remove.”

“Whether content is proactively detected or reported by users, we often use AI to take action on the straightforward cases and prioritise the more nuanced cases, where context needs to be considered, for our reviewers.”

There is no available data, however, when it comes to content in a specific language or country.

Facebook publishes a Community Standards Enforcement Report on a quarterly basis, but, according to the spokesperson, the company does not “disclose data regarding content moderation in specific countries.”

Whatever the tools, the results are sometimes highly questionable.

In May 2018, Facebook blocked for 24 hours the profile of Bosnian journalist Dragan Bursac after he posted a photo of a detention camp for Bosniaks in Serbia during the collapse of federal Yugoslavia in the 1990s. 

Facebook determined that Bursac’s post had violated “community standards,” local media reported.

Bojan Kordalov, a Skopje-based public relations and new media specialist, said that, “when evaluating efficiency in this area, it is important to emphasise that the traffic in the Internet space is very dense and is increasing every second, which unequivocally makes it a field where everyone needs to contribute”.

“This means that social media managements are undeniably responsible for meeting the standards and compliance with regulations within their platforms, but this does not absolve legislators, governments and institutions of responsibility in adapting to the needs of the new digital age, nor does it give anyone the right to redefine and narrow down the notion and the benefits that democracy brings.”

Lack of language sensibility

Illustration. Photo: Unsplash/The Average Tech Guy

SHARE Foundation, a Belgrade-based NGO working on digital rights, said the question was crucial given the huge volume of content flowing through the likes of Facebook and Twitter in all languages.

“When it comes to relatively small language groups in absolute numbers of users, such as languages in the former Yugoslavia or even in the Balkans, there is simply no incentive or sufficient pressure from the public and political leaders to invest in human moderation,” SHARE told BIRN.   

Berthelemy of EDRi said the Balkans were not a standalone example, and that the content moderation practices and policies of Facebook and Twitter are “doomed to fail.”

“Many of these corporations operate on a massive scale, some of them serving up to a quarter of the world’s population with a single service,” Berthelemy told BIRN. “It is impossible for such monolithic architecture, and speech regulation process and policy to accommodate and satisfy the specific cultural and social needs of individuals and groups.”

The European Parliament has also stressed the importance of a combined assessment.

“The expressions of hatred can be conveyed in many ways, and the same words typically used to convey such expressions can also be used for different purposes,” according to a 2020 study – ‘The impact of algorithms for online content filtering or moderation’ – commissioned by the Parliament’s Policy Department for Citizens’ Rights and Constitutional Affairs. 

“For instance, such words can be used for condemning violence, injustice or discrimination against the targeted groups, or just for describing their social circumstances. Thus, to identify hateful content in textual messages, an attempt must be made at grasping the meaning of such messages, using the resources provided by natural language processing.”

Hattotuwa said that, in general, “non-English language markets with non-Romanic (i.e. not English letter based) scripts are that much harder to design AI/ML solutions around”.

“And in many cases, these markets are out of sight and out of mind, unless the violence, abuse or platform harms are so significant they hit the New York Times front-page,” Hattotuwa told BIRN.

“Humans are necessary for evaluations, but as you know, there are serious emotional / PTSD issues related to the oversight of violent content, that companies like Facebook have been sued for (and lost, having to pay damages).”

Failing in non-English

Illustration. Photo: Unsplash/Ann Ann

Dragan Vujanovic of the Sarajevo-based NGO Vasa prava [Your Rights] criticised what he said was a “certain level of tolerance with regards to violations which support certain social narratives.”

“This is particularly evident in the inconsistent behavior of social media moderators where accounts with fairly innocuous comments are banned or suspended while other accounts, with overt abuse and clear negative social impact, are tolerated.”

For Chloe Berthelemy, trying to apply a uniform set of rules on the very diverse range of norms, values and opinions on all available topics that exist in the world is “meant to fail.” 

“For instance, where nudity is considered to be sensitive in the United States, other cultures take a more liberal approach,” she said.

The example of Myanmar – where Facebook effectively blocked an entire language by refusing all messages written in Jinghpaw, a language spoken by Myanmar’s ethnic Kachin and written with a Roman alphabet – shows the scale of the issue.

“The platform performs very poorly at detecting hate speech in non-English languages,” Berthelemy told BIRN.

The techniques used to filter content differ depending on the media analysed, according to the 2020 study for the European Parliament.

“A filter can work at different levels of complexity, spanning from simply comparing contents against a blacklist, to more sophisticated techniques employing complex AI techniques,” it said. 

“In machine learning approaches, the system, rather than being provided with a logical definition of the criteria to be used to find and classify content (e.g., to determine what counts as hate speech, defamation, etc.) is provided with a vast set of data, from which it must learn on its own the criteria for making such a classification.”
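The learning-from-data idea the study describes can be illustrated with a toy classifier. This is again only a sketch: the training examples, labels and scoring are invented for the illustration and are not real moderation data or any platform’s actual method.

```python
# Minimal sketch of learning classification criteria from labelled data
# rather than from hand-written rules. Examples and labels are toy
# placeholders invented for this illustration.
from collections import Counter

def train(examples):
    """Count word frequencies per class from (text, label) pairs."""
    counts = {"ok": Counter(), "hateful": Counter()}
    for text, label in examples:
        counts[label].update(text.lower().split())
    return counts

def classify(counts, text):
    """Pick the class whose training vocabulary overlaps most with the text."""
    words = text.lower().split()
    scores = {label: sum(c[w] for w in words) for label, c in counts.items()}
    return max(scores, key=scores.get)

examples = [
    ("have a nice day", "ok"),
    ("great match yesterday", "ok"),
    ("go back where you came from", "hateful"),
    ("you people are vermin", "hateful"),
]
model = train(examples)
print(classify(model, "you vermin"))  # hateful
```

Even this toy version shows the study’s point: the system never sees a definition of hate speech, only examples – so its judgments are only as good, and as context-aware, as the labelled data it learns from.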

Users of both Twitter and Facebook can appeal in the event their accounts are suspended or blocked. 

“Unfortunately, the process lacks transparency, as the number of filed appeals is not mentioned in the transparency report, nor is the number of processed or reinstated accounts or tweets,” the study noted.

Between January and October 2020, Facebook restored some 50,000 items of content without an appeal and 613,000 after appeal.

Machine learning

As cited in the 2020 study commissioned by the European Parliament, Facebook has developed a machine learning approach called Whole Post Integrity Embeddings, WPIE, to deal with content violating Facebook guidelines. 

The system addresses multimedia content by providing a holistic analysis of a post’s visual and textual content and related comments, across all dimensions of inappropriateness (violence, hate, nudity, drugs, etc.). The company claims that automated tools have improved the implementation of Facebook content guidelines. For instance, about 4.4 million items of drug sale content were removed in just the third quarter of 2019, 97.6 per cent of which were detected proactively.

When it comes to the ways in which social networks deal with suspicious content, Hattotuwa said that “context is key”. 

While acknowledging advancements in the past two to three years, Hattotuwa said that, “No AI and ML [Machine Learning] I am aware of even in English language contexts can accurately identify the meaning behind an image.”
 
“With regards to content inciting hate, hurt and harm,” he said, “it is even more of a challenge.”

According to the Twitter Transparency report, in the first six months of 2020, 12.4 million accounts were reported to the company, just over six million of which were reported for hateful conduct and some 5.1 million for “abuse/harassment”.

In the same period, Twitter suspended 925,744 accounts, of which 127,954 were flagged for hateful conduct and 72,139 for abuse/harassment. The company removed such content in a little over 1.9 million cases: 955,212 in the hateful conduct category and 609,253 in the abuse/harassment category. 

Toskic Cvetinovic said the rules needed to be clearer and better communicated to users by “living people.”

“Often, the content removal doesn’t have a corrective function, but amounts to censorship,” she said.

Berthelemy said that, “because the dominant social media platforms reproduce the social systems of oppression, they are also often unsafe for many groups at the margins.” 

“They are unable to understand the discriminatory and violent online behaviours, including certain forms of harassment and violent threats and therefore, cannot address the needs of victims,” Berthelemy told BIRN. 

“Furthermore,” she said, “those social media networks are also advertisement companies. They rely on inflammatory content to generate profiling data and thus advertisement profits. There will be no effective, systematic response without addressing the business models of accumulating and trading personal data.”

Online Petition Urging Netflix to Recognise Kosovo Gains Momentum

An online petition calling on the US online streaming platform Netflix to recognise Kosovo as separate from Serbia has received over 23,000 signatures since its launch on Tuesday and caught public attention in the country.

“When Kosovars log in on Netflix, their location appears as if they were in Serbia, even though they use Netflix from the Republic of Kosovo,” the organiser, Sovran Hoti, wrote in the petition.

Kosovo-based Netflix users “cannot even verify their phone number because Kosovo does not appear on the list of countries with their phone entries,” he added.

“Netflix is a US company, and since the US recognises Kosovo, shouldn’t Netflix add Kosovo as a country on their streaming service as well?” he asked.

Kosovo declared its independence from Serbia on February 17, 2008, but its statehood remains contested.

The country has been recognised by more than 110 countries so far but Serbia has vowed never to recognise it and is supported in this by powerful allies, including Russia and China. Five EU member states have also withheld diplomatic recognition.

“The propaganda by Serbia to undermine Kosovo in the international arena continues, especially in major online platforms. This needs to change. Serbia has no jurisdiction in Kosovo and its institutions, and our independence cannot be undermined by them,” Hoti said in his petition.

The petition has caught the attention of leading politicians and cultural figures in Kosovo itself. Acting President Vjosa Osmani went on Twitter to support the initiative.

Kosovo citizens “deserve to be recognised for this [streaming Netflix]. It’s about time you put the Republic of Kosovo on your map”, she said, referencing the company directly.

Other personalities such as former Deputy Prime Minister Haki Abazi and the co-founder of the Prishtina Film Festival, Fatos Berisha, have also joined the call.

Greek Police Accused of Violence at Education Bill Protests

Police in Greece have been criticised after videos circulated on social media of officers violently pushing and shoving photojournalists covering a protest against a new bill for universities on Wednesday.

The photojournalists’ union said riot police beat up a member of the union who had been reporting on the protests.

It added that one day before, police tripped up photojournalists covering another protest, this time in support of Dimitris Koufontinas, a jailed member of the November 17 armed group, who is now on hunger strike demanding transfer to another prison.

The new bill, among other things, allows police to maintain a presence on university campuses. A law withdrawn in 2019 had long prohibited police from entering university grounds in Greece, in memory of those killed in 1973 when the military regime violently crushed an uprising at the Athens Polytechnic.

Niki Kerameus, the Education Minister, says the problem of security on Greek campuses has become acute and current lawlessness is forcing Greek students to study abroad.

Outside the Greek parliament, during the debate on the bill, a group of some 200 people, drawn from the main protest of some 5,000 protesters, clashed with riot police, who used tear gas to disperse them. Police took 52 individuals into custody.

Konstantinos Zilos, a photojournalist covering that protest, complained to BIRN of the police’s “dangerous repression” of citizens and media professionals.

Besides the incident involving the beaten-up photojournalist, he added, “the police a number of times have prevented our work, cutting our access without reason and blocking our cameras with their hands or bodies”.

Alexandra Tanka, a reporter for in.gr, told BIRN that a 21-year-old photography student “was surrounded in the blink of an eye by the riot shields and suddenly cut off from his colleagues”.


University students clash with riot police in front of the Greek parliament, during a protest against the new draft bill on higher education in central Athens, Greece, 2021. Photo: EPA-EFE/YANNIS KOLESIDIS

The immediate intervention of photojournalists and reporters resulted in the police letting him go. “A photojournalist asked them why they were not arresting that person who seemed to also be a photographer, pointing to a policeman holding a camera recording the demonstration,” Tanka recalled.

But not everyone was as lucky as the photography student, she said. “Students were beaten up and had to spend the night behind bars. According to reports, a girl was beaten up so badly that she was injured in the head and had to be hospitalized to get stitches.”

Nikos Markatos, former dean of the National Technical University of Athens, told the private radio station Real FM that police “were jumping on pavements with their motorbikes” and that one of these motorbikes had injured a girl, sending her to hospital – “the same hospital as my son, who was pushed, fell down and twisted his shoulder”.

Markatos said a third student who was hit on the chin with a fire extinguisher by a police officer at the protest, breaking his chin bone and some teeth, was sent to the same hospital.

Pictures shared on social media showed police violently attacking the protesters, sometimes hitting them after they had already been arrested.

Mera25, the party of former government minister Yanis Varoufakis, said Sofia Sakorafa, an MP for the party and vice-president of the Greek parliament, was also attacked by riot police outside police headquarters in Attica, where she was present when protesters were brought there on Wednesday evening.

The photojournalists’ union condemned attacks on journalists by police, saying that this was tending to become “a habit” and adding that the government had “a duty to inform us if freedom of press still exists”.

On January 21, the Minister of Citizens’ Protection, Michalis Chrisochoidis, presented new national guidelines for policing demonstrations.

According to these rules, journalists covering protests now have to do their work from a certain area specified by the authorities, with the minister adding this was being imposed to protect the journalists themselves.

However, rights groups disagree. On February 2, the international Paris-based media watchdog Reporters Without Borders, RSF, in a report, warned that the new guidelines in Greece were “likely to restrict the media’s reporting and access to information”.

Commenting on the new guidelines, the former head of the photojournalists’ union, Marios Lolos, said that “in 99 per cent of such cases”, attacks on photojournalists covering protests do not come from protesters “but from the police”.

Independent Radio Silenced in Hungary

Hungary’s last independent radio broadcaster, Klubradio, lost its battle to stay on the air on Tuesday, as the Metropolitan Court of Budapest confirmed the decision of the government-controlled Media Council not to renew its licence. The station will be forced to move online from February 14.

The move is seen as the latest step to curb critical voices in the Hungarian media by the autocratic government of Viktor Orban, which since coming to power in 2010 has set about co-opting or killing off critical media outlets, shrewdly disguising most such moves as neutral business decisions. This has drawn sharp criticism from the European Union and media freedom watchdogs.

Klubradio has long been in the crosshairs of Viktor Orban’s ruling Fidesz party. The last time its licences had to be renewed, it had to battle for two years through the courts.

Due to its critical tone, the station does not receive any state advertising and so largely survives on donations from its listeners. It has a loyal audience of around 200,000, mostly in Budapest: after being systematically stripped of its frequencies in the countryside, it can only be heard in the vicinity of the capital, leaving Hungarians outside Budapest with no independent radio to listen to.

Klubradio’s licence expires at midnight on February 14 and its journalists have been doing “survival exercises” in the last few weeks to train their largely elderly audience to switch to the radio’s online platform.

Klubradio called the verdict a political, not a legal, one. Andras Arato, president of the broadcaster, told Media1 that the verdict encapsulates the sad state of the rule of law in Hungary, which is such that a radio station can be silenced based on fabricated reasons.

Arato said the station would challenge the verdict at the Supreme Court, while its CEO, Richard Stock, did not rule out taking the case to the Court of Justice of the EU.

Opposition politicians slammed the government for yet another blatant move to restrict media freedom in Hungary. The chairman of the Democratic Coalition, former prime minister Ferenc Gyurcsany, posted a quote from Orban in 2018 telling the European Parliament in Strasbourg that, “we would never dare to silence those who disagree with us”.

Gyurcsany retorted: “This government prefers silence – we have to end this paranoid system to regain free speech.”

The head of the International Press Institute (IPI), Scott Griffen, said before the court’s decision that, “these efforts by the Fidesz-controlled Media Council to block Klubradio’s license renewal are part of a far wider and calculated attempt to eradicate the station from the airwaves and muzzle one of the few independent media outlets in Hungary.”

Russian Peacekeepers Detain Moldovan Journalists Near Transnistria

Two Moldovan journalists working for the TV8 television station, Viorica Tataru and Andrei Captarenco, were stopped on Tuesday by Russian and Transnistrian separatist peacekeeping troops at the Gura Bacului checkpoint and ordered to erase all the footage they had filmed and surrender their technical equipment.

“We are here at the peacekeepers’ post. We have been detained, the car has been seized, and we cannot get out of the vehicle. They told us we have to hand them the material we filmed, otherwise we can’t get out of here,” Tataru told BIRN by telephone.

She added that four armed men with a Russian flag emblem on their uniforms were guarding their car.

She said they had contacted the police and representatives of the Joint Control Commission, a combined military command structure involving Moldova, Transnistria and Russia that has operated in the separatist-controlled territory since the war in the country in 1992.

“We’ve been waiting for the police for an hour. I also called the Joint Control Commission. They said they would come, but so far no one has come,” Tataru said.

The two journalists have been filming a weekly TV show for more than a year in villages that are controlled by Moldova but are located on the eastern bank of the Dniester river – in an area that is mostly controlled by the breakaway Transnistrian authorities.

To reach the villages, the journalists have to pass through the Gura Bacului checkpoint.

Transnistria does not allow its checkpoints to be filmed or photographed.

This is not the first time that the two Moldovan journalists have accused the Russian and Transnistrian peacekeepers of targeting them.

In July 2020, peacekeeping troops chased them into Moldovan-controlled territory and asked them to surrender footage they had shot on Moldovan soil.

The Transnistrian ‘frozen conflict’ has seen no armed violence between government forces and Russian-backed separatists since 1992. The de facto border has remained open, and populations on both sides of the river have come to depend on each other economically.

Turkey Detains 39 for ‘Terrorist Propaganda’ Social Media Posts

The Turkish Interior Ministry announced on Tuesday that security forces detained 39 social media users in the first week of February for allegedly posting propaganda for terrorist organisations online.

It said that a total of 575 offenders had been detected and that detentions were continuing.

“Debates and developments on social media platforms as well as the social media accounts of illegal groups and structures are being followed closely,” the ministry said in a written statement.

The detainees are accused of propaganda for organisations that Turkey designates as terrorist, including the outlawed Kurdistan Workers’ Party, PKK, the so-called Islamic State, extremist leftist groups and the so-called Fethullahist Terrorist Organisation – a name Turkey uses to brand followers of exiled Turkish preacher Fethullah Gulen, whom Ankara accuses of orchestrating a failed coup attempt in 2016.

The 39 detainees include several students who allegedly ran social media accounts used to organise the recent series of high-profile protests against the political appointment of a new rector at the prestigious Bogazici University in Istanbul.

Riot police staged a major operation to disperse the student protesters last week, with hundreds detained and dozens charged.

Aysen Sahin, an independent Turkish journalist, was also detained by police at her home on Monday evening for posting a message on Twitter during last week’s student protests.

Sahin was detained after some pro-government newspapers criticised her. She was released on Tuesday morning.

The Turkish government’s crackdown on social media users intensified after it introduced a new law on digital media last year.

The new law allows security forces to detain anyone responsible for suspicious posts which are linked to terrorist organisations or any kind of disinformation.

As part of the new law, social media platforms are forced to appoint legal representatives in the country to answer the government’s demands to delete social media posts and close accounts.

YouTube, Facebook, Instagram, TikTok and Russia’s VK social media platform decided to appoint representatives after the Turkish government fined them twice. Twitter, however, is still resisting the new regulations.

According to the Turkish Interior Ministry, 14,186 social media accounts were investigated and 6,743 people were tried over their social media posts in the first eight months of 2020.

No Quick Fix to North Macedonia Telegram Scandal

Authorities in North Macedonia face an uphill battle to confront the dangers of online harassment, experts warn, following a public outcry over the reappearance of a group on the encrypted messaging app Telegram in which thousands of users were sharing explicit pictures and videos of women and girls, some of them minors.

The group, known as ‘Public Room’, was first shut down in January 2020, only to re-emerge a year later before it was closed again on January 29. Reports say new groups have since popped up, their membership spreading to neighbouring Serbia.

Authorities in the Balkan state have mooted the possibility of banning Telegram altogether and criminalising the act of stalking, making it punishable with a prison sentence of up to three years.

The case, however, has exposed the many layers that authorities need to address when it comes to preventing online harassment and sexual violence. And experts in the field say it will not be easy.

“This type of danger is very difficult to handle, given that many countries in the world have had the same or similar problems as North Macedonia,” said Suad Seferi, a cybersecurity analyst and head of the IT sector at the International Balkan University in Skopje.

Seferi cited blocks on Telegram in countries such as Azerbaijan, Bahrain, Belarus and China, but cautioned against following such a route given the risk of it being construed as censorship by those using the app for its primary purpose of simple communication.

“The government could try and reach an agreement, or communicate with Telegram to close specific channels or seek cooperation in prosecuting the perpetrators of such acts,” he told BIRN.

Law not being applied


An image showing the Telegram messenger app. Photo: EPA-EFE/MAURITZ ANTIN

The phenomenon has triggered heated debate in North Macedonia; a number of victims have spoken out publicly about how some of the 7,000 users of Public Room shared explicit, private photos of them or took pictures from their social media profiles and shared them alongside the names and phone numbers of the victims.

One of them, 28-year-old Ana Koleva, met Justice Minister Bojan Maricic over the weekend to discuss her own harrowing experience after her pictures began circulating in the Telegram group and elsewhere and she was bombarded with unwanted messages and phone calls.

Some victims, including Koleva, said they appealed to the police for help but were bluntly dismissed. One reason given by police was that they were unable to act unless the victim was a minor.

Critics say the group’s re-emergence exposes the failure of authorities to stamp it out in the first place.

“The ‘Public Room’ case revealed the inertia and inability of the authorities to act in such cases of violence and harassment of women and girls,” said Skopje-based gender expert Natasha Dimitrovska. “Although there are laws, they are not implemented.”

North Macedonia’s law on prevention and protection from violence against women and domestic violence also defines sexual harassment and especially online sexual harassment.

“This is in line with the Istanbul Convention, which states that all forms of sexual violence and harassment should be sanctioned,” said Dimitrovska. “In addition, endangering someone’s security and sharing and collecting other people’s personal data without permission are crimes that are already regulated by the Criminal Code.”

She told BIRN that it was imperative that authorities grasp the fact that whatever goes on online has repercussions offline.

“There is no longer a division between offline and online,” she said. “What happens online also has profound consequences in real life. Girls and women who are sexually harassed online are also restricted from accessing and expressing themselves freely in public.”

“What’s even worse is that everything that is online remains there forever and is widely available, so with online harassment it’s even more frightening in the sense that it will remain there for a long time and haunt the victim.”

‘Scary viral dimensions’


Cybersecurity experts caution that it is extremely difficult to control or monitor content on platforms such as Telegram, which has become notorious for similar scandals.

In the US and the UK, there are laws against ‘revenge porn’, in which people share explicit pictures of their former partners as a form of retaliation. Six years ago, only three US states had such laws in place; they have since spread to at least 46.

Privacy and data protection expert Ljubica Pendaroska said some public ‘supergroups’ can have up to 200,000 members, which massively increases the chances of privacy violations.

“Usually, in the communication in such groups, the spectrum of personal data related to the victims is supplemented with the address of residence, telephone number, information about family members, etc,” Pendaroska told BIRN.

“So the invasion of privacy gets bigger and usually goes out of the group and the network, taking on really scary viral dimensions.”

Importance of raising public awareness

To combat such acts, experts advocate raising public awareness about privacy and how to protect it – particularly among parents and children – and punishing violations in a timely manner.

“From experience, young people know a lot about the so-called technical aspects, capabilities and impacts of social networks and applications, but little about their privacy and especially the potential social implications of what is shared in the online world,” said Pendaroska, who also serves as president of Women4Cyber North Macedonia, an initiative to support the participation of women in the field of cybersecurity.

“Our concept is to avoid occasional action but commit to consistent and continuous education of women about the potential risks that lurk in the online world,” she told BIRN, “because that’s the only way to achieve long-term results and to raise awareness.”

“Therefore, our plan is within each project or activity that we implement, to include exactly that component – through various activities and tools to educate women, because awareness is key.”
