EU Observers Say Kosovo Voters Misled by ‘Opaque’ Facebook Pages

The EU Election Observation Mission said on Tuesday that non-transparent Facebook pages were responsible for “manipulative interference” in Sunday’s mayoral election run-off contests, spreading misinformation about rival parties and candidates, although it said the polls themselves were well organised.

“While candidates shared useful information through online platforms, opaque Facebook pages were used to spread misleading content hampering the voters’ ability to form opinions free from manipulative interference,” the Election Observation Mission said in a preliminary assessment of the conduct of the vote.

“Candidates generally used advertisements to promote their campaign platforms but third-party ads were largely used to discredit contestants, including with personal accusations,” the statement added.

The head of the mission, Lukas Mandl, who is a member of the European Parliament, told media in Pristina that in general, the run-off elections were “well administered and competitive”.

“The campaign was vivid and peaceful, though its tone was harsher compared to the first round. However, in the absence of sanctions for campaigning outside of the official five-day period, most candidates were canvassing long before the official campaign kicked off,” Mandl said.

The preliminary statement also said that a blatant lack of transparency in the financing of contestants’ campaigns persisted in the second round.

“Perpetuating the low enforcement of campaign finance rules, the Kosovo Assembly is unable to guarantee timely audit of the disclosure reports and the CEC [Central Election Commission] did not sufficiently support the implementation of applicable regulations,” the statement said.

“Candidate rallies were attended by leaders of the major parties, including by Prime Minister Albin Kurti and his ministers while LVV [ruling Vetevendosje party] candidates often portrayed themselves as the guarantors of projects financed from the central budget. Moreover, between the two rounds, the government announced a temporary increase of social benefits which led to opposition’s accusations of indirect vote buying,” it said.

Voters in 21 out of 38 municipalities went to the polls to elect new mayors in a run-off vote which was held four weeks after 17 mayors were elected in the first round.

The election result was a disappointment for Vetevendosje, which won only four of the 12 municipalities in which it was competing, and lost in the capital Pristina.

Belgrade-backed Serb party Srpska Lista won the most municipalities (ten), followed by the Democratic Party of Kosovo (nine), the Democratic League of Kosovo (seven), the Alliance for the Future of Kosovo (five) and the Social Democratic Initiative Nisma (one).

New North Macedonia Online Sex Abuse Scandal Targets Roma Women

North Macedonian police said on Monday that they were aware of a case, flagged by human rights activists, in which explicit photos and videos of Roma girls and women were being posted to a Facebook group, and that they were “working to apprehend the persons responsible”.

The police said that they are also working on removing the explicit online content, adding that instances of online sexual abuse have increased over the past two years, since the start of the global COVID-19 pandemic.

The case was first reported in the media over the weekend.

“The posts contain private photos and videos of Roma women living in the Republic of North Macedonia but also outside our country,” the Suto Orizari Women’s Initiative, a human rights group from Skopje’s mainly Roma municipality of Shuto Orizari, said on Sunday.

“All the posts on the Facebook page are provoking an avalanche of harassing comments and hate speech from individuals, as well as calls for a public lynching, which violates the dignity of the Roma women whose photos have been published,” it added.

The organisation said the Facebook fan page has been active for some two months, since August 21, and has over 1,600 followers.

The Facebook page also reportedly contains calls to collect more photos and videos in order to post them and expose “dishonest women”, along with teasers from the page administrators, who ask members whether they would like to see uncensored videos.

The Facebook page was also reported to the authorities last Friday by CIVIL – Center for Freedom, a prominent human rights NGO.

“CIVIL condemns this gruesome act and reported this Facebook page on Friday to the Interior Ministry’s department for computer crime and digital forensics, and to the Public Prosecution, so that they immediately take measures, shut down this page, and uncover and accordingly punish the administrators,” CIVIL said on Sunday.

The recent case was reminiscent of the earlier so-called “Public Room” affair. A group known as Public Room, with some 7,000 users, shared explicit pornographic content on the social network Telegram. It was first shut down in January 2020 after a public outcry, only to re-emerge a year later, before it was closed again on January 29.

The group shared explicit content, often involving under-age girls.

In April this year, the prosecution charged two people, the creator and an administrator of Public Room, with producing and distributing child pornography.

These occurrences sparked a wide debate, as well as criticism of the authorities for being too slow to react to such cases and curb online harassment and violence against women and girls.

This prompted the authorities earlier this year to promise changes to the penal code to specify that the crime of “stalking”, which is punishable with up to three years in jail, can also involve the abuse of victims’ personal data online.

North Macedonia’s Interior Ministry has meanwhile launched a campaign to raise awareness about privacy protection and against online sexual misuse, called “Say no”.

Facebook Reveals Cost of Albanian Parties’ and Candidates’ Election Ads

Facebook has for the first time published a report on what Albanian parties have spent on online advertising on the social media giant during the current parliamentary election campaign.

According to the report, the biggest parties predictably spent more money on sponsored posts promoting political content than the others.

The biggest advertiser was the ruling Socialist Party, which has so far posted 394 ads costing 21,907 US dollars, followed by the main opposition centre-right Democratic Party, which spent 11,536 dollars on 81 ads.

According to BIRN’s calculations, a total of 123,152 dollars was spent by the parties on political and social advertising. However, its data show that only 113,252 dollars was actually spent by parties, candidates or pages that distribute political ads. The rest of the ads came from media outlets and from companies that had been wrongly categorised by Facebook’s algorithm.

Gent Progni, a web developer, told BIRN that the total amount spent by each party on Facebook seems quite low for a target audience of two million people, but that the figures change if every candidate or other Facebook page campaigning for a political party is counted.

“The amounts are small for an audience of two million Albanians, looked at from the official websites of the parties. But if you see each candidate in particular, or sites that have just opened and are campaigning for parties, we have a completely different reflection of the amount, which increases several times,” he told BIRN.

According to BIRN, many newly opened Facebook pages are spreading political advertising, but their high expenses are not always connected to parties or political candidates.

In March, during the election campaign, Facebook asked advertisers in Albania to be transparent about political advertising by including “paid for by…” disclaimers in sponsored posts.

Political expert Afrim Krasniqi, head of the Albanian Institute of Political Studies, a think tank based in Tirana, told BIRN that this is the first time an Albanian election campaign is being held more virtually than physically.

“This is why parties and candidates are using social media as the cheapest and fastest source of communication,” he told BIRN.

He added that Albania lacks a strong regulatory legal basis or control mechanism concerning the finances of election campaigns or party propaganda on social media.

A law, “On Political Parties”, only obliges parties to be transparent about their financial resources, while the Central Election Commission is responsible for monitoring and auditing the finances of political parties.

In Offering ‘Hate-Free’ Social Media, Old Worries Haunt New Apps

For years, London-based writer and artist Mary Morgan has used social media to raise awareness and engage in debate, particularly Twitter and Instagram. Until the hate speech became too much.

“Anyone who spends time on Twitter knows it can be an absolutely horrible place,” said Morgan, whose work focuses on body politics, or the practices and policies through which powers of society regulate the human body.

So she began exploring alternative apps, settling on Clubhouse, an audio-based social network where users can join rooms and listen to, participate or moderate discussions on any topic that might interest them.

“The power of Clubhouse is that you can choose who you speak to. You can’t just randomly start messaging people with hate. I think that’s a real power to the platform,” Morgan told BIRN.

“Especially when it comes to activism, like-minded individuals and people who want to participate and learn will be drawn to houses and clubs, meaning we can all speak to and learn from each other in an environment that is encouraging of that.”

Launched in April 2020, Clubhouse currently boasts more than eight million users worldwide.

And it’s not alone in winning new users turned off by the inability of social media giants to find a way to filter out offensive content on their platforms – Mastodon, MeWe and CloutHub are just a few of the emerging names benefitting from a backlash against the likes of Twitter, Facebook and Instagram.

Experts, however, warn that while these alternative apps might be motivated by high ideals, they face the same issues that have dogged the giants – how to provide transparency, avoid hate speech and protect privacy, while also making money.

“I understand the desire that people have to move to new platforms,” said Ayden Ferdeline, a Berlin-based public interest technologist.

“We desperately need more spaces for lawful speech, but we need these new platforms to be more transparent than Facebook or Twitter are, about how they operationalise their policies and procedures, and to be designed from day one to uphold and respect fundamental human rights.”

Turned off Twitter


 A mobile phone displays the suspended status of the Twitter account of Donald J. Trump, 2021. Photo: EPA-EFE/MICHAEL REYNOLDS

Skopje-born Katarina P. spent 11 years on Twitter as an active member of the Twitter community in North Macedonia. Under an alias, Katarina, who declined to give her surname, used her profile to follow and comment on the events of any day in her home country and the wider Balkan region, engaging in sometimes heated debates.

Then, in February this year, Twitter suspended Katarina’s profile, without any specific explanation.

“I believe I got suspended because I came into a conflict with another Twitter account that was promoting misogyny through quasi-Christian Orthodox theology,” Katarina recalled.  “After my impulsive reaction to these tweets, my account got suspended.”

She appealed to Twitter’s Support Team but, after a generic response saying they would look into her case, Katarina never heard back. She assumed she had been shut down on the basis of complaints from “religious fanatics”, and was frustrated at the lack of communication from Twitter.

Stung by criticism over how social media was used in the storming of the US Capitol by Donald Trump supporters on January 6, Twitter and Facebook appear to have adopted more restrictive approaches to what can and cannot be posted on their platforms.

Experts, however, say that the use of Artificial Intelligence has resulted in a litany of errors. AI lacks the contextual and nuanced analysis required to distinguish strong criticism from defamation, or radical political opinions from expressions of hatred, racism or incitement to violence, particularly in languages spoken by far fewer people than, say, English, French, German or Russian.

Katarina believes she is a victim of this, and is already looking for an alternative platform where she can engage in debate.

“I liked Twitter since it was unique for the microblogging opportunities it offered,” she said. “I hope that a new network with similar content might appear soon. And I won’t lie when I say that I am looking forward to it.”

Friendlier Facebooks


A Facebook logo. Photo: EPA-EFE/Julien de Rosa

Clubhouse might still be in its beta stage, but it has attracted huge attention.

“It is a completely new and different app, and I see it as a great replacement for all of the podcasts, with the addition that you can not only listen to them, but also actively engage in the discussion,” said Marija Andrejska, a digital marketing specialist in Skopje who began using Clubhouse this year. “I believe this is one fantastic feature that has never been seen before.”

The app has an air of exclusivity to it that not everyone likes, however. As an invite-only app, a new user has to be invited by an existing member to join, and it’s only available for those using iPhones.

“On the downside, I think it’s a pretty ‘elitist’ app, and I don’t like that,” Andrejska told BIRN. “Since it functions only by invites and it’s only for iPhone users, it can create closed, in a way segregated groups, which can be dangerous in the long run. Therefore, I think that you cannot really use Clubhouse with the same intensity as Facebook, Instagram, Twitter, or even TikTok.”

Turkish journalist Dilek Kutuk said Clubhouse was a great place to exchange ideas, “especially in Turkey”, where society is deeply polarised along political lines.

“I see many voice chat rooms in Clubhouse where people talk and share their opinions, as opposed to Twitter, where all you can see is fights between those that have different political opinions,” Kutuk told BIRN.

MeWe has also seen a recent spike in user numbers, particularly in January.

Launched in 2012, the network says it is built on “trust, control and love”, and represents a secure and private alternative to Facebook, with more than 16 million users worldwide logging in to use its newsfeed, private text and video chats, and groups.

Then there’s CloutHub, considered an alternative to Facebook and Twitter. With some 255,000 users, CloutHub describes itself as a “non-biased social network for people engaged in meaningful civic, social and political issues.” It has also seen a growth in its user base since the beginning of the year.

Mastodon, a decentralised open-source platform, is another under-the-radar alternative to Twitter. Users say it offers much better tools to protect privacy and fight online harassment than Twitter. Launched in 2016, the platform has more than two million users worldwide, and has been billed as a model for a “friendlier social network” dedicated to keeping out hateful content.

New apps under scrutiny


Illustration. Photo: Unsplash/Freestocks

But while such apps might bill themselves as ‘friendlier’, hate-free alternatives to Twitter and Facebook, experts say they face the same questions regarding privacy, transparency and how they moderate what’s being said, written or posted on their platforms.

“It is understandable why tech companies want to cleanse their platforms of mis- and disinformation, but neither their human moderators nor their technical measures are able to do so in an accurate and human rights-respecting manner,” said Ferdeline.

Marcin de Kaminski, security and innovation director of Civil Rights Defenders, a Sweden-based international human rights organisation, said there is already concern.

“From our perspective, Clubhouse does allow users to speak freely, yes. But they also compromise on their users’ privacy, and there are no safeguards when it comes to protecting users from marginalised or targeted communities when it comes to verbal attacks, threats or slander,” said De Kaminski.

“Users that choose to use Clubhouse need to understand the risks, both technical and socio-legal.”

He warned of being blinded by the novelty of new features.

“It is easy to get mesmerized by new fascinating features and the possibility to have seamless voice chats with friends and colleagues may be tempting during the isolation of the ongoing pandemic,” De Kaminski told BIRN.

“However, Clubhouse has really made it possible to ask oneself very important questions – which data is harvested when you use the service and who has access to that data?”

Nor does being the new kid on the block necessarily protect against cyber attacks.

On Saturday, a report said that the personal data of 1.3 million Clubhouse users had been posted online on a popular hacker forum. Clubhouse denied being hacked and said that the data “is all public profile information from our app, which anyone can access via the app or our API [application programming interface]”.

Privacy concerns have already prompted many users to migrate from messaging apps such as WhatsApp to the likes of Signal or Telegram, which claim to offer better privacy features.

Privacy and data protection strategist Lourdes M. Turrecha said any privacy failures could cost social media startups big.

“These privacy concerns are serious enough to create trouble for Clubhouse in a world where data protection enforcements have teeth – note the recent $650 million class action settlement following the $5 billion Federal Trade Commission’s fine against social media predecessor, Facebook,” said Turrecha.

“While these figures may seem like slaps on the wrist for a company like Facebook, a pre-revenue startup like Clubhouse doesn’t have the war chest to chalk these up as a cost of doing business, despite its $110 million in funding.”

Turrecha warned of the risks of users “trading” their privacy for greater freedom of speech.

“While neither speech nor privacy rights are absolute, I caution against pitting the two against each other through false tradeoffs,” she said. “We should demand technologies that protect both speech and privacy rights.”

In COVID-19 Fight, Free Speech Becomes Collateral Damage

At first, journalist Tugay Can had no idea why he had been taken in for police questioning on March 25 last year in the Turkish port city of Izmir. Then cybercrime officers told him he was suspected of spreading fear and panic because of a report he wrote, published two days earlier, about COVID-19 outbreaks in two community health centres in the city that were subsequently quarantined.

“After I confirmed it with my sources, I reported the situation”, Can, who at the time worked for the local Izmir newspaper Iz Gazete, told BIRN.

Pressed to name his sources, Can refused. Hours of questioning resulted in a charge of spreading fake news and causing panic. The case was dropped several months later, but Can’s chilling experience was far from a one-off. 

According to the media rights watchdog Reporters Without Borders, Can was among 10 Turkish editors and reporters interrogated just in March of last year concerning their coverage of the pandemic that had just begun. 

“Governments are using the pandemic as an advantage over freedom of speech,” Can said.

Turkey is well-known for its jailing of journalists, but it was not the only country in the region to employ draconian tools to control the pandemic narrative. Nor have journalists been the only targets.

BIRN has confirmed dozens of cases in which regular citizens have faced charges of causing panic on social media or in person. There are indications the true number of cases runs into the hundreds.

Whether dealing with accurate but perhaps unflattering news reports or with what the World Health Organisation called last year an “infodemic” of false information, governments have not hesitated to turn to social media giants to get hold of the information that could help them track down those deemed to be breaking the rules.

“Every government has a duty to promote reliable information and correct harmful and untrue allegations in order to protect the personal integrity and trust of citizens,” said Tea Gorjanc Prelevic, head of the Montenegrin NGO Human Rights Action.

“But any measure taken to combat misinformation should not violate the fundamental right to expression.”

Internet sites shut down

Illustration: Unsplash.com

Battling an invisible enemy, governments across the region have sought to restrict information while cracking down on media reporting or social media posts that deviate from the official narrative. ‘Misinformation’ has been criminalised.

Some of these restrictions were part of the states of emergency that were declared; others were introduced with new legislation that outlasts any temporary emergency decrees.

But who draws the line between the right to free speech and the need to preserve public order?

In its November 2020 COVID and Free Speech report, the Council of Europe rights body cautioned that “crisis situations should not be used as a pretext for restricting the public’s access to information or clamping down on critics.” 

But that’s precisely what has happened in some countries.

In Hungary, the Penal Code was amended to criminalise the dissemination of “false or distorted facts capable of hindering or obstructing the efficiency of the protection efforts” for the duration of a state of emergency, first between March and June and again since November.

Parliament subsequently passed a bill making it easier for governments to declare such emergencies in future. In March, the government introduced punishments of one to five years in prison for spreading “falsehoods” or “distorted truth” deemed to obstruct efforts to combat the pandemic. 

Similar restrictions were imposed in Bosnia’s mainly Serb-populated Republika Srpska entity and in Romania. 

In Bucharest, the government closed down a dozen news sites for promoting false information concerning the pandemic.

The Centre for Independent Journalism, CJI, an NGO that promotes media freedom and good journalistic practices, has raised concern that provisions enacted as part of a state of emergency between mid-March and mid-May 2020 to combat the spread of the novel coronavirus in Romania could hamper the ability of journalists to inform the public.

“The most worrying aspect of all this is, from my perspective, the limitations to the access to information of public interest,” said CJI executive director Cristina Lupu.

“The lack of transparency of the authorities is a very bad sign and the biggest problem our media faces now,” Lupu told BIRN, lamenting the fact it left the public without “access to timely information.”

In March 2020, the Organisation for Security and Cooperation in Europe, OSCE, raised concern about what it said was the “removal of reports and entire websites, without providing appeal or redress mechanisms” in Romania.

The Venice Commission, the CoE’s advisory body on constitutional affairs, stressed that even in emergency situations, exceptions to freedom of expression must be narrowly construed and subject to parliamentary control to ensure that the free flow of information is not excessively impeded. 

“It is doubtful whether restrictions on publishing ‘false’ information about a disease that is still being studied can be in line with the [Venice Commission] requirement unless it concerns blatantly false or outright dangerous assertions,” it said.

Instead of prevention, fines and prison terms

Early on in the pandemic, the Republika Srpska government issued a decree allowing it to introduce punitive measures, including fines, for spreading ‘fake news’ about the virus in the media and on social networks during the state of emergency.

According to the decree, anyone using social or traditional media to spread ‘fake news’ and cause panic or public disorder faced possible fines of between 500 and 1,500 euros for private individuals and 1,500 and 4,500 euros for companies or organisations. It is not known how many people were fined. The decree was revoked in April.

In Montenegro, Article 398 of the Criminal Code, introduced in 2013, foresees a fine or a prison sentence of up to 12 months for the spreading of false news or allegations which cause panic or serious disturbances of public order or peace. For journalists, the punishment runs to three years in prison. The law was hardly used until protests erupted at the end of 2019 over a controversial religious freedom law.

In July 2019, long before the pandemic, North Macedonia’s government unveiled an action plan to deal with ‘fake news’, and doubled down in March 2020 with a vow to punish anyone deemed to be sharing disinformation about the novel coronavirus.

Skopje-based communications and new media specialist Bojan Kordalov said authorities would be better off focusing on prevention and raising awareness.

“It is necessary to build a system of active and digital transparency, as well as to create a real strategy for fast and efficient two-way communication of institutions with citizens and the media, which means highly-trained and prepared staff for 24-hour monitoring and publication of official and credible information to the public,” Kordalov told BIRN.

In Turkey, media censorship, particularly of online outlets, has increased since the onset of the pandemic, according to a report published in November by the Journalists’ Association of Turkey.

According to the report, between July and September 2020 alone, RTUK, the state agency for monitoring, regulating and sanctioning radio and television broadcasts, issued 90 penalties against independent media, including halts to broadcasting and administrative fines.

The government also passed several new draconian laws concerning digital rights and civil society organisations, forcing social media companies to appoint legal representatives to respond to government demands, including those requiring the closure of accounts or deleting of social media posts.

It is not known how many people were investigated or arrested under the new measures, but administrative fines during the pandemic totalled roughly one billion Turkish liras, or 115 million euros.

‘Fake news’ arrests

Illustration: Unsplash.com

In North Macedonia, fake news stories shared on social media ranged from a report that a garage was being used as a COVID-19 testing facility to health authorities being accused of negligence that led to the death of two sisters from COVID-19 complications. One fake story claimed food shortages were imminent.

According to the country’s Ministry of Interior, by September 2020 authorities had acted on a total of 58 cases stemming from the alleged dissemination of fake news related to COVID-19. Thirty-one cases were forwarded to prosecutors and criminal charges have been pressed in three, a ministry spokesman told BIRN.

In Serbia, the penalty for the crime of causing disorder and panic is imprisonment for between three months and three years, as well as a fine. According to the Serbian Interior Ministry, dozens of people were charged in the first two months of the pandemic.

After she broke news about the disarray in the Clinical Centre of Vojvodina, Serbia’s northern province, Nova.rs reporter Ana Lalic was questioned by police and her home was searched.

In neighbouring Montenegro, a heated political row over a disputed law on religions saw some people arrested for spreading panic even before the country confirmed its first case of COVID-19.

BIRN was able to confirm 14 cases in which journalists, editors and members of the public were arrested for causing panic.

Similarly in Turkey, the interior ministry investigated, fined and detained hundreds of people in the first few months of the pandemic over their social media posts. Later, however, the ministry stopped publishing such data.

Critics say the government was determined to muzzle complaints about its handling of the pandemic and the economy.

“Turkey in general has a problem when it comes to freedom of speech,” said Ali Gul, a lawyer and rights activist. “The government increases its pressure because it does not want people to speak about its failures.”

In Croatia, no journalist has been charged with spreading fake news during the pandemic, but that is not to say there was no misleading information.

“Without any hesitation, I can say that, unfortunately, a large number of citizens have been involved in spreading false news,” said Tomislav Levak, a teaching assistant and PhD candidate at the Academy of Art and Culture in the eastern Croatian city of Osijek. “But in my opinion, in most cases, it is actually unintentional because they do not think critically enough.”

The Interior Ministry said that it had registered 40 violations of Article 16 of the Law on Misdemeanors against Public Order and Peace, “which are related to the COVID-19 epidemic”.

Rise in state requests to social media giants

The transparency reports of Facebook and Twitter shed light on the scale of government efforts to find and track accounts suspected of spreading panic.

According to Twitter, in 2020 emergency disclosure requests – when law enforcement bodies seek account information – accounted for roughly one out of every five global information requests submitted to Twitter, increasing by 20 per cent during the reporting period while the aggregate number of accounts specified in these requests increased by 24 per cent.

Turkey accounts for three per cent of all government requests for information from Twitter.

In the first six months of last year, Turkey registered a 160 per cent increase in emergency requests compared to the same period in 2019.

North Macedonia saw a 175 per cent increase.

Removal requests from Serbia, Turkey and Poland multiplied several times over.

As for Facebook, Turkey last year submitted 6,171 requests, a threefold increase from 2019. In 4,904 cases, Facebook disclosed data, compared to 1,513 cases in 2019. Poland made 4,572 requests, up from 3,397 in 2019, and received information back in 2,666 cases, compared to 1,902 the previous year.

When it comes to legal process requests – when states ask for account information to aid an investigation – Turkey and Poland lead the region with 6,143 and 4,200 requests respectively, roughly double the numbers in 2019.

Compared to the same period in 2019, Facebook data shows a significant rise in all sorts of requests from most countries in the region.

In terms of preservation requests – when law enforcement bodies ask Facebook to preserve account records that may serve as evidence in legal proceedings – Bosnia and Herzegovina registered an increase of just over 150 per cent. 

Turkey accounts for 3.55 per cent, and Poland 2.63 per cent, of all government requests for information from Facebook.

Lawsuits designed to silence

And if that wasn’t enough, some media faced lawsuits that watchdogs say were designed simply to stop the free flow of information – a so-called SLAPP, or Strategic Lawsuit Against Public Participation, the purpose of which is to censor or intimidate critics by burdening them with the cost of a legal defence.

In Poland, the publisher and journalists of the weekly Newsweek Polska were subjected to a SLAPP for their reporting on Polish clothing company LPP, owner of the Reserved brand, which the weekly said had been sending masks bought in Poland to its factories in China despite a severe shortage in Poland.

The company is seeking damages of €1.37 million, an apology, the removal of articles about LPP published on March 22 and a “ban on disseminating claims that suggest that the company’s position on this matter is untrue.”

The case is ongoing. 

Also in Poland, a court dismissed lawsuits brought against the media outlet Gazeta Wyborcza by Poland’s KGHM, one of the world’s biggest producers of copper and silver, over stories revealing that the company had paid huge sums of money for worthless masks from China.

In Turkey, a court granted a take-down request by pasta producer Oba Makarna over a report that 26 of its factory workers in the south-central city of Gaziantep had tested positive for COVID-19. According to the court ruling, while the report was true, it damaged the company’s commercial reputation.

In its report, the CoE warned that restrictions introduced during the pandemic could give rise to increased use of civil lawsuits, particularly defamation cases.

While their use did not increase dramatically during the height of the pandemic, there is some concern that pandemic-related reporting will be subjected to SLAPP lawsuits and defamation cases in the future, it said.

Facebook, Twitter Struggling in Fight against Balkan Content Violations

Partners Serbia, a Belgrade-based NGO that works on initiatives to combat corruption and develop democracy and the rule of the law in the Balkan country, had been on Twitter for more than nine years when, in November 2020, the social media giant suspended its account.

Twitter gave no notice or explanation of the suspension, but Ana Toskic Cvetinovic, the executive director of Partners Serbia, had a hunch – that it was the result of a “coordinated attack”, probably other Twitter users submitting complaints about how the NGO was using its account.

“We tried for days to get at least some information from Twitter, like what could be the cause and how to solve the problem, but we haven’t received any answer,” Toskic Cvetinovic told BIRN. “After a month of silence, we saw that a new account was the only option.” 

Twitter lifted the suspension in January, again without explanation. But Partners Serbia is far from alone among NGOs, media organisations and public figures in the Balkans who have had their social media accounts suspended without proper explanation or sometimes any explanation at all, according to BIRN monitoring of digital rights and freedom violations in the region.

Experts say the lack of transparency is a significant problem for those using social media as a vital channel of communication, not least because they are left in the dark as to what can be done to prevent such suspensions in the future.

But while organisations like Partners Serbia can face arbitrary suspension, half of the posts on Facebook and Twitter that are reported as hate speech, threatening violence or harassment in Bosnian, Serbian, Montenegrin or Macedonian remain online, according to the results of a BIRN survey, despite confirmation from the companies that the posts violated rules.

The investigation shows that the tools used by social media giants to protect their community guidelines are failing: posts and accounts that violate the rules often remain available even when breaches are acknowledged, while others that remain within those rules can be suspended without any clear reason.

Among BIRN’s findings are the following:

  • Almost half of reports to Facebook and Twitter in Bosnian, Serbian, Montenegrin or Macedonian concern hate speech.
  • One in two posts reported as hate speech, threatening violence or harassment in those languages remains online. Content reported as threatening violence was removed in 60 per cent of cases, and content reported as targeted harassment in about 50 per cent.
  • Facebook and Twitter use a hybrid model, a combination of artificial intelligence and human assessment, to review such reports, but declined to reveal how many reports are actually reviewed by a person proficient in Bosnian, Serbian, Montenegrin or Macedonian.
  • Both social networks take a “proactive approach”, removing content or suspending accounts even without a report of suspicious conduct, but the criteria employed are unclear and transparency is lacking.
  • The survey showed that people were more ready to report content targeting them or minority groups.

Experts say the biggest problem could be the lack of transparency in how social media companies assess complaints. 

The assessment itself is done in the first instance by an algorithm and, if necessary, a human gets involved later. But BIRN’s research shows that things get messy when it comes to the languages of the Balkans, precisely because of the specificity of language and context.

Distinguishing harsh criticism from defamation, or radical political opinions from expressions of hatred and racism or incitement to violence, requires contextual and nuanced analysis.

Half of the posts containing hate speech remain online


Graphic: BIRN/Igor Vujcic

Facebook and Twitter are among the most popular social networks in the Balkans. The scope of their popularity is demonstrated in a 2020 report by DataReportal, an online platform that analyses how the world uses the Internet.

In January, there were around 3.7 million social media users in Serbia, 1.1 million in North Macedonia, 390,000 in Montenegro and 1.7 million in Bosnia and Herzegovina.

In each of the countries, Facebook is the most popular, with an estimated three million users in Serbia, 970,000 in North Macedonia, 300,000 in Montenegro and 1.4 million in Bosnia and Herzegovina.

Such numbers make Balkan countries attractive for advertising but also for the spread of political messages, opening the door to violations.

The debate over the benefits and the dangers of social media for 21st century society is well known.

In terms of violent content, besides the use of Artificial Intelligence, or AI, social media giants are trying to give users the means to react as well, chiefly by reporting violations to network administrators. 

There are three kinds of filters: manual filtering by humans; automated filtering by algorithmic tools; and hybrid filtering, performed by a combination of humans and automated tools.

In cases of uncertainty, posts or accounts are submitted to human review before decisions are taken, or afterwards if a user complains about an automated removal.
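As a rough illustration of that hybrid model, the routing logic can be sketched as a small decision function. The score, thresholds and function names below are invented for illustration only and do not reflect any platform’s actual system.

```python
# Illustrative sketch of a hybrid moderation flow: an automated score
# decides clear-cut cases, while uncertain ones go to human review.
# All thresholds and names are hypothetical.

def route_post(auto_score: float, remove_above: float = 0.9,
               keep_below: float = 0.1) -> str:
    """Return the action for a post given an automated violation score in [0, 1]."""
    if auto_score >= remove_above:
        return "remove"          # high confidence: act automatically
    if auto_score <= keep_below:
        return "keep"            # high confidence: leave online
    return "human_review"        # uncertain: escalate to a person

print(route_post(0.95))  # remove
print(route_post(0.02))  # keep
print(route_post(0.50))  # human_review
```

In such a scheme, a user complaint about an automated removal would simply re-route the item into the `human_review` branch after the fact.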

“Today, we primarily rely on AI for the detection of violating content on Facebook and Instagram, and in some cases to take action on the content automatically as well,” a Facebook spokesperson told BIRN. “We utilize content reviewers for reviewing and labelling specific content, particularly when technology is less effective at making sense of context, intent or motivation.”

Twitter told BIRN that it is increasing the use of machine learning and automation to enforce the rules.

“Today, by using technology, more than 50 per cent of abusive content that’s enforced on our service is surfaced proactively for human review instead of relying on reports from people using Twitter,” said a company spokesperson.

“We have strong and dedicated teams of specialists who provide 24/7 global coverage in multiple different languages, and we are building more capacity to address increasingly complex issues.”

In order to check how effective those mechanisms are when it comes to content in Balkan languages, BIRN conducted a survey focusing on Facebook and Twitter reports and divided into three categories: violent threats (direct or indirect), harassment and hateful conduct. 

The survey asked for the language of the disputed content, who was the target and who was the author, and whether or not the report was successful.

Over 48 per cent of respondents reported hate speech, some 20 per cent reported targeted harassment and some 17 per cent reported threatening violence. 

The survey showed that people were more ready to report content targeting them or minority groups.

According to the survey, 43 per cent of content reported as hate speech remained online, while 57 per cent was removed. When it comes to reports of threatening violence, content was removed in 60 per cent of cases. 

Roughly half of reports of targeted harassment resulted in removal.

Chloe Berthelemy, a policy advisor at European Digital Rights, EDRi, which works to promote digital rights, says the real-life consequences of neglect can be disastrous. 

“For example, in cases of image-based sexual abuse [often wrongly called “revenge porn”], the majority of victims are women and they suffer from social exclusion as a result of these attacks,” Berthelemy said in a written response to BIRN. “For example, they can be discriminated against on the job market because recruiters search their online reputation.”

 Content removal – censorship or corrective?


Graphic: BIRN/Igor Vujcic.

According to the responses to BIRN’s questionnaire, some 57 per cent of those who reported hate speech said they were notified that the reported post/account violated the rules. 

On the other hand, some 28 per cent said they had received notification that the content they reported did not violate the rules, while 14 per cent received only confirmation that their report was filed.

In terms of reports of targeted harassment, half of people said they received confirmation that the content violated the rules; 16 per cent were told the content did not violate rules. A third of those who reported targeted harassment only received confirmation their report was received.  

As for threatening violence, 40 per cent of people received confirmation that the reported post/account violated the rules while 60 per cent received only confirmation their complaint had been received.

One of the respondents told BIRN they had reported at least seven accounts for spreading hatred and violent content. 

“I do not engage actively on such reports nor do I keep looking and searching them. However, when I do come across one of these hateful, genocide deniers and genocide supporters, it feels the right thing to do, to stop such content from going further,” the respondent said, speaking on condition of anonymity. “Maybe one of all the reported individuals stops and asks themselves what led to this and simply opens up discussions, with themselves or their circles.”

Although Twitter confirmed that those seven accounts violated its rules, six of them are still available online.

Another issue that emerged is the unclear criteria for reporting violations; a basic knowledge of English is also required.

Sanjana Hattotuwa, special advisor at the ICT4Peace Foundation, agreed that the in-app or web-based reporting process is confusing.

“Moreover, it is often in English even though the rest of the UI/UX [User Interface/User Experience] could be in the local language. Furthermore, the laborious selection of categories is, for a victim, not easy – especially under duress.”

Facebook told BIRN that the vast majority of reports are reviewed within 24 hours and that the company uses community reporting, human review and automation.

It refused, however, to give any specifics on those it employs to review content or reports in Balkan languages, saying “it isn’t accurate to only give the number of content reviewers”.

BIRN methodology 

BIRN conducted its questionnaire via the network’s tool for engaging citizens in reporting, developed in cooperation with the British Council.

The anonymous questionnaire had the aim of collecting information on what type of violations people reported, who was the target and how successful the report was. The questions were available in English, Macedonian, Albanian and Bosnian/Serbian/Montenegrin. BIRN focused on Facebook and Twitter given their popularity in the Balkans and the sensitivity of shared content, which is mostly textual and harder to assess compared to videos and photos.

“That alone doesn’t reflect the number of people working on a content review for a particular country at any given time,” the spokesperson said. 

Social networks often remove content themselves, in what they call a ‘proactive approach’. 

According to data provided by Facebook, in the last quarter of 2017 their proactive detection rate was 23.6 per cent.

“This means that of the hate speech we removed, 23.6 per cent of it was found before a user reported it to us,” the spokesperson said. “The remaining majority of it was removed after a user reported it. Today we proactively detect about 95 per cent of hate speech content we remove.”

“Whether content is proactively detected or reported by users, we often use AI to take action on the straightforward cases and prioritise the more nuanced cases, where context needs to be considered, for our reviewers.”

There is, however, no data available on content in any specific language or country.

Facebook publishes a Community Standards Enforcement Report on a quarterly basis, but, according to the spokesperson, the company does not “disclose data regarding content moderation in specific countries.”

Whatever the tools, the results are sometimes highly questionable.

In May 2018, Facebook blocked for 24 hours the profile of Bosnian journalist Dragan Bursac after he posted a photo of a detention camp for Bosniaks in Serbia during the collapse of federal Yugoslavia in the 1990s. 

Facebook determined that Bursac’s post had violated “community standards,” local media reported.

Bojan Kordalov, Skopje-based public relations and new media specialist, said that, “when evaluating efficiency in this area, it is important to emphasise that the traffic in the Internet space is very dense and is increasing every second, which unequivocally makes it a field where everyone needs to contribute”.

“This means that social media managements are undeniably responsible for meeting the standards and compliance with regulations within their platforms, but this does not absolve legislators, governments and institutions of responsibility in adapting to the needs of the new digital age, nor does it give anyone the right to redefine and narrow down the notion and the benefits that democracy brings.”

Lack of language sensibility

Illustration. Photo: Unsplash/The Average Tech Guy

SHARE Foundation, a Belgrade-based NGO working on digital rights, said the question was crucial given the huge volume of content flowing through the likes of Facebook and Twitter in all languages.

“When it comes to relatively small language groups in absolute numbers of users, such as languages in the former Yugoslavia or even in the Balkans, there is simply no incentive or sufficient pressure from the public and political leaders to invest in human moderation,” SHARE told BIRN.   

Berthelemy of EDRi said the Balkans were not a standalone example, and that the content moderation practices and policies of Facebook and Twitter are “doomed to fail.”

“Many of these corporations operate on a massive scale, some of them serving up to a quarter of the world’s population with a single service,” Berthelemy told BIRN. “It is impossible for such monolithic architecture, and speech regulation process and policy to accommodate and satisfy the specific cultural and social needs of individuals and groups.”

The European Parliament has also stressed the importance of a combined assessment.

“The expressions of hatred can be conveyed in many ways, and the same words typically used to convey such expressions can also be used for different purposes,” according to a 2020 study – ‘The impact of algorithms for online content filtering or moderation’ – commissioned by the Parliament’s Policy Department for Citizens’ Rights and Constitutional Affairs. 

“For instance, such words can be used for condemning violence, injustice or discrimination against the targeted groups, or just for describing their social circumstances. Thus, to identify hateful content in textual messages, an attempt must be made at grasping the meaning of such messages, using the resources provided by natural language processing.”

Hattotuwa said that, in general, “non-English language markets with non-Romanic (i.e. not English letter based) scripts are that much harder to design AI/ML solutions around”.

“And in many cases, these markets are out of sight and out of mind, unless the violence, abuse or platform harms are so significant they hit the New York Times front-page,” Hattotuwa told BIRN.

“Humans are necessary for evaluations, but as you know, there are serious emotional / PTSD issues related to the oversight of violent content, that companies like Facebook have been sued for (and lost, having to pay damages).”

Failing in non-English

Illustration. Photo: Unsplash/Ann Ann

Dragan Vujanovic of the Sarajevo-based NGO Vasa prava [Your Rights] criticised what he said was a “certain level of tolerance with regards to violations which support certain social narratives.”

“This is particularly evident in the inconsistent behavior of social media moderators where accounts with fairly innocuous comments are banned or suspended while other accounts, with overt abuse and clear negative social impact, are tolerated.”

For Chloe Berthelemy, trying to apply a uniform set of rules on the very diverse range of norms, values and opinions on all available topics that exist in the world is “meant to fail.” 

“For instance, where nudity is considered to be sensitive in the United States, other cultures take a more liberal approach,” she said.

The example of Myanmar, where Facebook effectively blocked an entire language by refusing all messages written in Jinghpaw, a language spoken by Myanmar’s ethnic Kachin and written with a Roman alphabet, shows the scale of the issue.

“The platform performs very poorly at detecting hate speech in non-English languages,” Berthelemy told BIRN.

The techniques used to filter content differ depending on the media analysed, according to the 2020 study for the European Parliament.

“A filter can work at different levels of complexity, spanning from simply comparing contents against a blacklist, to more sophisticated techniques employing complex AI techniques,” it said. 

“In machine learning approaches, the system, rather than being provided with a logical definition of the criteria to be used to find and classify content (e.g., to determine what counts as hate speech, defamation, etc.) is provided with a vast set of data, from which it must learn on its own the criteria for making such a classification.”
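The two levels of complexity the study contrasts can be illustrated with a short sketch: a fixed blacklist comparison versus a toy filter that learns word weights from labelled examples. All words, labels and training texts below are neutral placeholders invented for illustration, not real moderation data or any platform’s actual method.

```python
# Contrast of the two filtering levels the study describes:
# (1) a simple blacklist comparison, (2) criteria learned from labelled data.
# All tokens and examples here are invented placeholders.
from collections import Counter

BLACKLIST = {"badword1", "badword2"}

def blacklist_filter(text: str) -> bool:
    """Flag a text only if it contains a token from a fixed list."""
    return any(tok in BLACKLIST for tok in text.lower().split())

def train(examples):
    """Learn word weights from (text, label) pairs; label 1 = violating, 0 = fine."""
    weights = Counter()
    for text, label in examples:
        for tok in text.lower().split():
            weights[tok] += 1 if label else -1
    return weights

def ml_filter(weights, text: str) -> bool:
    """Flag a text if its learned word weights sum to a positive score."""
    return sum(weights[tok] for tok in text.lower().split()) > 0

weights = train([("awful terrible people", 1),
                 ("have a nice day", 0),
                 ("terrible awful stuff", 1),
                 ("good nice morning", 0)])

print(ml_filter(weights, "terrible people"))   # True: learned from examples
print(blacklist_filter("terrible people"))     # False: not on the fixed list
```

The point of the contrast is the one the study makes: the blacklist can only catch what it was explicitly given, while the learned filter generalises from its training data, for better or worse.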

Users of both Twitter and Facebook can appeal in the event their accounts are suspended or blocked. 

“Unfortunately, the process lacks transparency, as the number of filed appeals is not mentioned in the transparency report, nor is the number of processed or reinstated accounts or tweets,” the study noted.

Between January and October 2020, Facebook restored some 50,000 items of content without an appeal and 613,000 after appeal.

 Machine learning

As cited in the 2020 study commissioned by the European Parliament, Facebook has developed a machine learning approach called Whole Post Integrity Embeddings, WPIE, to deal with content violating Facebook guidelines. 

The system addresses multimedia content by providing a holistic analysis of a post’s visual and textual content and related comments, across all dimensions of inappropriateness (violence, hate, nudity, drugs, etc.). The company claims that automated tools have improved the implementation of Facebook content guidelines. For instance, about 4.4 million items of drug sale content were removed in just the third quarter of 2019, 97.6 per cent of which were detected proactively.

When it comes to the ways in which social networks deal with suspicious content, Hattotuwa said that “context is key”. 

While acknowledging advancements in the past two to three years, Hattotuwa said that, “No AI and ML [Machine Learning] I am aware of even in English language contexts can accurately identify the meaning behind an image.”
 
“With regards to content inciting hate, hurt and harm,” he said, “it is even more of a challenge.”

According to the Twitter Transparency report, in the first six months of 2020, 12.4 million accounts were reported to the company, just over six million of which were reported for hateful conduct and some 5.1 million for “abuse/harassment”.

In the same period, Twitter suspended 925,744 accounts, of which 127,954 were flagged for hateful conduct and 72,139 for abuse/harassment. The company removed such content in a little over 1.9 million cases: 955,212 in the hateful conduct category and 609,253 in the abuse/harassment category. 

Toskic Cvetinovic said the rules needed to be clearer and better communicated to users by “living people.”

“Often, the content removal doesn’t have a corrective function, but amounts to censorship,” she said.

Berthelemy said that, “because the dominant social media platforms reproduce the social systems of oppression, they are also often unsafe for many groups at the margins.” 

“They are unable to understand the discriminatory and violent online behaviours, including certain forms of harassment and violent threats and therefore, cannot address the needs of victims,” Berthelemy told BIRN. 

“Furthermore,” she said, “those social media networks are also advertisement companies. They rely on inflammatory content to generate profiling data and thus advertisement profits. There will be no effective, systematic response without addressing the business models of accumulating and trading personal data.”

Fakebooks in Hungary and Poland

Poland and Hungary have seen the launch recently of locally developed versions of Facebook, as criticism of the US social media giants grows amid allegations of censorship and the silencing of conservative voices.

The creators behind Hundub in Hungary and Albicla in Poland both cite the dominance of the US social media companies and concern over their impact on free speech as reasons for their launch – a topic which has gained prominence since Facebook, Twitter and Instagram banned Donald Trump for his role in mobilising crowds that stormed the Capitol in Washington DC on January 6. It is notable that both of the new platforms hail from countries with nationalist-populist governments, whose supporters often rail against the power of the major social media platforms and their managers’ alleged anti-conservative bias.

Albicla’s connection to the ruling Law and Justice (PiS) party is explicit. Right-wing activists affiliated with the PiS-friendly weekly Gazeta Polska are behind Albicla, whose name is as obscure to Poles as it is to the international reader, although Ryszard Kapuscinski from the Gazeta Polska team claims it is an amalgamation of the Latin phrase albus aquila, meaning “white eagle”, a Polish national symbol.

The activists say Albicla is a response to the “censorship” of conservative voices by the global internet giants. “We have disturbed the powerful interests and breached the walls of the ideological front that is pushing conservative thinking to the sidelines,” Tomasz Sakiewicz, editor-in-chief of Gazeta Polska, wrote on Thursday, the day after the new portal was launched.

“Not all the functionalities are ready because we wanted to launch the portal in the last hour of the rule of the leader of the free world,” Sakiewicz continued, referring to Trump’s last day in office on January 20. “It is now up to us to ensure this world continues to be free, particularly online.”

Busy bees

The origins of Hundub – forged from the words “Hungarian” and “dub”, which also means “beehive” in ancient Hungarian – are less clear. Until recently, Hundub was owned by Murmurati Ltd, an offshore company registered in Belize, but it pulled out last week and Hundub’s founder, Csaba Pal, announced it would be crowdfunded from now on.

The December 6 launch of Hundub received little attention until the government-loyal Magyar Nemzet began acclaiming it as a truly Hungarian and censorship-free alternative to Facebook, which, the paper argues, treats Hungarian government politicians unfairly. Prime Minister Viktor Orban was one of the first politicians to sign up to Hundub, but all political parties have rushed to register, starting with the liberal-centrist Momentum, the party most favoured by young people.

Pal – a previously unknown entrepreneur from the eastern Hungarian city of Debrecen – said his goal was to launch a social media platform that supports free speech, from both the left and right, and is free from political censorship. “The social media giants have grown too big and there must be an alternative to them,” Pal told Magyar Nemzet, accusing the US tech company of deleting the accounts of thousands of Hungarians without reason.

While it’s unclear whether there is any government involvement in Hundub, its launch is proving handy for the prime minister’s ruling Fidesz party in its fight against the US tech giants. Judit Varga, the combative justice minister, regularly lashes out at Facebook and Twitter, accusing them of limiting right-wing, conservative and Christian views. Only last week, she consulted with the president of the Competition Authority and convened an extraordinary meeting of the Digital Freedom Committee to discuss possible responses to the “recent abuses by the tech giants”.


Polish Prime Minister Mateusz Morawiecki (L) and the chief editor of Gazeta Polska Tomasz Sakiewicz (R). Photo: EPA/EFE LAJOS SOOS

Future of Farcebooks

Unfortunately for the Polish and Hungarian governments and their supporters, rarely have such technology ventures succeeded.

Eline Chivot, a former senior policy analyst at the Center for Data Innovation, said government-backed ideas such as the recent “French Airbnb” are destined to fail from a lack of credibility, because they are based “on politically biased motives and a misguided application of industrial policy, [and] seek to dominate a market that is no longer up for grabs”.

Indeed, Albicla became the butt of jokes immediately upon its launch as users pointed out numerous security and functionality flaws. Among them: some of the new website’s regulations were apparently copy-pasted from Facebook, as they included hyperlinks to Mark Zuckerberg’s site; more concerning, it was possible to download the entire database of users the day after launch.

Trolls immediately took advantage of the site’s shortcomings to ridicule it, with countless fake accounts set up for Pope John Paul II, Trump and PiS politicians. Despite it being set up as an “anti-censorship” space, many users have complained of being blocked for unclear reasons in the few days since launch.

“Albicla is an ad hoc initiative by the Polish supporters of Trumpism in direct reaction to the banning of Trump from social media platforms: it’s equivalent to right-wing radicals in the US moving to Parler and other such platforms,” Rafal Pankowski, head of the Warsaw-based “Never Again” anti-fascist organisation, told BIRN.

Pankowski points out that there have been similar, unsuccessful initiatives before, including stabs at creating a “Polish Facebook”, though there exists a local alternative to YouTube, wRealu24, which the expert describes as “virulently anti-Semitic and homophobic” and whose popularity cannot be ignored.

Likewise, Hundub has been roundly mocked. Critics point out it is just a simplified version of Facebook that looks rather embarrassing in technological and layout terms. It has the same features as Facebook – you can meet friends, share content, upload photos and videos, and, as an extra feature, there is also a blog-format where you can publish your own stories uncensored. Even the buttons are similar to those Facebook uses.

Hvg.hu recalls that Hungarians actually had their own highly successful pre-Facebook social network called iWiW (an abbreviation of “International Who Is Who”), which was launched in 2002 and became the most popular website in Hungary between 2005 and 2010, with over 4.5 million registered users. Alas, competition from Facebook forced it to close in 2014.

It is unlikely that Hundub will be able to challenge Facebook’s dominance, but media expert Agnes Urban from Mérték Research said in an interview that Hundub could be used by Orban’s Fidesz party to rally supporters before the 2022 election and create an enthusiastic community of voters.

Founder Csaba Pal also explained that his aim is to create a social media platform for all Hungarians, meaning those of “Greater Hungary”, including the ethnic Hungarian communities in parts of Serbia, Romania, Ukraine and Slovakia.

Hungarian politicians, from left and right, are very active on Facebook and, to a lesser extent, on Twitter. Prime Minister Orban, initially wary of digital technology, now leads with over 1.1 million followers on Facebook and has even chosen to announce a number of policy measures during the pandemic on his page.

Justice Minister Varga and Foreign Minister Peter Szijjarto, notwithstanding their frequent outbursts, are both avid users of Facebook. It is not known whether any of their Facebook activity has been censored or banned; the business news site Portfolio recalls that the only political party to have been banned is the far-right Mi Hazánk party, whose leader, Laszlo Torockai, also had his account deleted. No doubt they will be able to start afresh on Hundub.

Albicla also stands to benefit from its close connections to the Polish government, which since coming to power in 2015 has bolstered the pro-government media via mass advertising by state-controlled companies.

According to research conducted by Kantar this summer, the 16 state companies and institutions analysed by the consulting firm increased their advertising budgets to Gazeta Polska by 79 per cent between 2019 and 2020 – a period during which most media have lost advertising due to the pandemic. Gazeta Polska Codziennie, a daily affiliated with the same trust, has seen similar gains. And the foundation of Gazeta Polska editor-in-chief Tomasz Sakiewicz has also benefitted from state funds to the tune of millions of zloty.

By contrast, since PiS came to power, the media critical of the government, such as Gazeta Wyborcza, have seen their revenues from state advertising slashed.

In 2019, Gazeta Polska made international headlines when it distributed “LGBT-free zone” stickers with the magazine, in a period when PiS councillors across Poland were starting to push for the passing of resolutions declaring towns “zones free of LGBT ideology”.

Despite the hiccups at launch, Albicla was immediately endorsed by high-level members of the government, including Piotr Glinski, the Minister of Culture and National Heritage, and Sebastian Kaleta, a secretary of state at the Ministry of Justice.

Kaleta is also the man in charge of a new draft law on the protection of freedom of speech online, announced in December by the Justice Ministry, which would prevent social media companies from being able to remove posts or block accounts unless the content is in breach of Polish law.

The International Network Against Cyber Hate (INACH), an Amsterdam-based foundation set up to combat discrimination online, has argued that “over-zealous” policing of harmful speech is not an issue in Poland and that the new Polish law might mean, for example, that online attacks against the LGBT community – which are not covered by national hate speech legislation – might go unpunished.

And where might those online attacks against the LGBT community be disseminated? Albicla, perhaps.

Turkey Investigates Facebook, WhatsApp Over New Privacy Agreement

Turkey’s Competition Board on Monday said an investigation had been launched into Facebook and WhatsApp over a new privacy agreement that forces WhatsApp users to share their data with Facebook. Users who reject the terms of the agreement will not be able to use WhatsApp after February 8.

The Turkish competition watchdog said the requirement allowing collection of that data should be suspended until the investigation is over.

“WhatsApp Inc and WhatsApp LLC companies will be known as Facebook after the new agreement and this will allow Facebook to collect more data. The board will investigate whether this violates Turkish competition law,” the board said.

The Turkish government is calling on its citizens to delete WhatsApp and switch to the domestic messaging app BiP, developed by Turkish mobile operator Turkcell, or to other secure messaging apps such as Telegram and Signal.

Turkey’s presidency, government ministries, state institutions and many public figures have announced that they have deleted WhatsApp and downloaded other applications.

“Let’s stand against digital fascism together,” Ali Taha Koc, head of the Turkish Presidential Digital Transformation Office, said on Twitter on January 10, urging people to use the domestic BiP app.

BiP gained 1.12 million new users on Sunday alone after the new privacy agreement was introduced.

The new privacy agreement will not apply in the EU and the UK because of their strict digital privacy laws.

The EU fined Facebook 110 million euros in 2017 for providing misleading statements about the company’s $19 billion acquisition of the internet messaging service WhatsApp in 2014.

Millions of people around the globe have abandoned WhatsApp and migrated to other messaging apps, Signal and Telegram in particular, and both services experienced server issues coping with the influx of new users.

Telegram and Signal, widely regarded as the most secure messaging apps, have become the most downloaded applications of the past week among both Android and Apple phone users.

Turkey Fines Social Media Giants Second Time For Defying Law

Turkey’s Information and Communications Technologies Authority, BTK, on Friday imposed fines of 30 million Turkish lira, equal to around 3.1 million euros, on each of the digital media giants Twitter, Facebook, Instagram, YouTube, Periscope and TikTok, following an initial 10 million lira fine a month ago.

The second fine came after the social media giants again failed to appoint official representatives in the country, as required by a new digital media law adopted in July this year.

“Another 30 days were given to those companies [to appoint representatives] and this time expired this week. Another 30 million Turkish lira fine was imposed on each of the companies which did not comply with the necessities of the law,” BTK told Turkey’s Anadolu Agency.

In the past month, none of the social media giants has made any attempt to appoint official representatives, as the Turkish government demanded. The only social media company to appoint a representative is Russia’s VKontakte digital platform, VK.

“We require social media companies to appoint representatives in our country. We aim to protect our citizens, particularly children, who are more vulnerable than adults,” President Recep Tayyip Erdogan said on December 1.

“We hope they voluntarily respond to our request. Otherwise, we will continue to protect the rights of our citizens at all times,” Erdogan added, accusing the social media giants of creating an uncontrolled environment in the name of freedoms.

If the media companies comply within three months, the fines will be reduced by 75 per cent. If not, they will face an advertising ban for three months. As final sanctions, their bandwidth will be halved and then cut by 90 per cent.

The government is also asking the online media giants to transfer their servers to Turkey.

Opposition parties and human rights groups see the new law as President Erdogan’s latest attempt to control media platforms and further silence his critics.

The new regulations might also prompt companies to quit the Turkish market, experts have warned. PayPal quit Turkey in 2016 because of similar requests and Wikipedia was blocked in Turkey for more than two-and-a-half years.

According to Twitter, Turkey has submitted the highest number of requests to the company to delete content and close accounts. In 2019, Turkey asked Twitter to close nearly 9,000 accounts, but the company shut down only 264 of them.

Turkey Slaps €1m Fines on Twitter, Facebook, Instagram, YouTube

Turkey on Wednesday imposed ten million Turkish lira (one million euro) fines on digital media giants including Twitter, Facebook, Instagram, YouTube, Periscope and TikTok because they did not appoint official representatives in the country as required by a new digital media law adopted in July this year.

If appointed, the company’s representatives would have to remove any piece of content that the Turkish authorities consider illegal within 48 hours of an official request.

“As the deadline for social media companies… for informing the government about their representatives is over, ten million lira fines are imposed,” Deputy Transport Minister Omer Fatih Sayan said on Twitter.

Sayan called on the companies to appoint their representatives in Turkey immediately.

“Otherwise, other steps will be taken,” he warned.

According to the new digital media law, the online media giants now have 30 days to appoint their representatives. If they do not, 30 million lira (three million euro) fines will be imposed.

If they still do not comply within three months, they will face an advertisement ban for three months.

As final sanctions, their bandwidth will be halved and then cut by 90 per cent.

The government is also asking the online media giants to transfer their servers to Turkey.

So far, none of the major companies have complied.

Opposition parties and human rights groups see the new law as Turkish President Recep Tayyip Erdogan’s attempt to control media platforms and silence his critics.

The new regulations might result in these companies quitting the Turkish market, experts have warned.

PayPal quit the Turkish market in 2016 because of similar requests and Wikipedia was blocked in Turkey for more than two-and-a-half years.

Turkey has submitted the highest number of requests to Twitter to delete content and close accounts, the company has said.

According to Twitter, Turkey asked it to close nearly 9,000 accounts, but it only shut down 264 of them.
