Report: Turkey Remains World Beater in Twitter Censorship

Social media giant Twitter’s transparency report for the first six months of 2020 said Turkey continued to lead the world in Twitter censorship across many categories, submitting the highest numbers of third-party takedown requests and court orders and having the most accounts and tweets withheld.

Turkey had the highest number of combined requests, including court orders and other legal demands, with 45,776. It was followed by Japan and Russia, the latter making 30,436 requests.

Turkey was also at the top of the list when it came to the number of court orders it sent to Twitter, with 6,513 such requests in the first half of 2020. Russia followed far behind with 2,972.

In the other legal demands category, meaning non-court-order requests, Turkey again topped the list with 39,263 requests in the first half of 2020, followed by Japan with 38,814 and Russia in third place with 27,464.

Turkey also sent 347 information requests to Twitter, which did not comply with any of them.

Turkey also remained in first place for the total number of accounts specified for closure or other action under court orders and other legal demands, specifying 99,840 accounts. It was followed by Indonesia, which sought action on 74,660 accounts, and Japan, which came third with 47,472.

In terms of accounts withheld by Twitter, Turkey again had the highest number globally with 2,501 withheld accounts, followed by Russia with 340 and India with 238.

In terms of tweets withheld by Twitter, Turkey was also number one globally, responsible for 12,135, or some 42 per cent, of the 28,542 tweets withheld worldwide in that period.

According to the Twitter report, 58 accounts of verified journalists and news outlets from around the world were subject to 333 legal demands in the period in consideration. Most of these legal demands originated either from India, 149, or from Turkey, 142 – together making them responsible for 291 of the 333 legal demands.

While Turkey leads in terms of Twitter censorship, and made the highest number of requests in several categories, it now aims to expand its control over social media companies with a new digital law.

Social media companies have already been fined twice by Turkey for defying a new law requiring them to appoint official representatives in the country.

Experts fear that if it does appoint an official representative to Turkey, as demanded, Twitter will have to respond more often to official demands.

“The removal of content rate [based on Turkey’s requests] was [only] 0.33 per cent for the first six months of 2020. Turkey wants Twitter to come to the country [in terms of a representative] for this reason,” Yaman Akdeniz, a Turkish digital rights activist wrote on Twitter on Wednesday.

“Coming to Turkey will result with Twitter becoming complicit in rights violations and would be against the current approach and policy adopted by Twitter regarding demands from Turkey,” Akdeniz added.

So far, only YouTube and Russia’s VK social media platform have appointed legal representatives in Turkey. Facebook, which also owns Instagram and WhatsApp, has said it will not appoint a representative, while Twitter remains undecided on the matter.

Romania Violated Journalist’s Freedom of Expression, Says European Court

The European Court of Human Rights on Tuesday condemned Romania for failing to uphold freedom of expression in the case of a local journalist who was fined by a domestic court over a series of critical articles about another journalist in the north-eastern county of Bacau.

“The case concerned the domestic authorities’ decision to order the applicant, a journalist, to pay damages for having published five blog posts criticising L.B., another journalist who was the editor-in-chief of a newspaper in the Desteptarea media group and producer for a local television channel belonging to the same group,” the ECHR said in a statement.

The posts were published by Gheorghe-Florin Popescu in 2011. The same year, L.B. brought civil proceedings before a court in Bacau, which ruled that two of the articles posted by Popescu lacked any factual basis when describing L.B. as morally responsible for a murder-suicide.

The Bacau court also established that Popescu had used vulgar and defamatory expressions that affected L.B.’s honour and reputation, and ordered him to compensate L.B. with the equivalent of 1,100 euros.

Popescu subsequently appealed against the verdict, but Romania’s Court of Appeal dismissed his appeal as unfounded in 2013. The journalist then took his case to the ECHR.

More than seven years later, judges at the ECHR unanimously concluded that Romania had breached Article 10 of the European Convention on Human Rights, which protects freedom of expression.

According to Tuesday’s verdict, Romanian courts failed to make “a distinction between statements of facts and value judgments” when examining Popescu’s criticism of L.B.

The verdict also said that Romanian courts ignored “the fact that the applicant was a journalist and that the freedom of the press fulfilled a fundamental function in a democratic society”.

They also ignored the fact that L.B. was a publicly known figure before the controversy involving Popescu, the verdict added.

The Romanian courts ruled that some of the content of Popescu’s articles was offensive, but the ECHR concluded that “although the satirical nature of the articles had been the main argument in the applicant’s defence, the domestic courts had failed to investigate with sufficient care whether or not this was a form of exaggeration or distortion of reality, naturally aimed to provoke”.

“In the court’s view, the style was part of the form of expression and was protected as such under Article 10, in the same way as the content of the statements,” the ECHR said.

Turkey Investigates Facebook, WhatsApp Over New Privacy Agreement

Turkey’s Competition Board said on Monday that an investigation had been launched into Facebook and WhatsApp over a new privacy agreement that forces WhatsApp users to share their data with Facebook. Users who reject the terms of the agreement will not be able to use WhatsApp after February 8.

The Turkish competition watchdog said the requirement allowing collection of that data should be suspended until the investigation is over.

“WhatsApp Inc and WhatsApp LLC companies will be known as Facebook after the new agreement and this will allow Facebook to collect more data. The board will investigate whether this violates Turkish competition law,” the board said.

The Turkish government is calling on citizens to delete WhatsApp and switch to the domestic messaging app BiP, developed by Turkey’s mobile operator Turkcell, or to other secure messaging apps such as Telegram and Signal.

Turkey’s presidency, ministries, state institutions and many public figures have announced that they have deleted WhatsApp and downloaded other applications.

“Let’s stand against digital fascism together,” Ali Taha Koc, head of the Turkish Presidential Digital Transformation Office, said on Twitter on January 10, urging people to use the domestic BiP app.

BiP gained 1.12 million new users on Sunday alone after the new privacy agreement was introduced.

The new privacy agreement will not come into force in the EU and the UK because of their tight digital privacy laws.

The EU fined Facebook 110 million euros in 2017 for giving misleading statements about the company’s $19 billion acquisition of the internet messaging service WhatsApp in 2014.

Millions of people around the globe have abandoned WhatsApp and migrated to other messaging apps, Signal and Telegram in particular, both of which experienced server issues as they absorbed such large numbers of new users.

Telegram and Signal, widely regarded as the most secure messaging apps, have become the most downloaded applications of the past week among both Android and Apple phone users.

Online Media in Balkans ‘Need Regulation, Not Censorship’

Experts told an online debate hosted by the Balkan Investigative Reporting Network on Tuesday that the current regulation systems for online media in the Western Balkans are not good enough, but efforts to curb the publication of hate speech and defamatory comments must not tip over into censorship.

Media and legal experts from Albania, Bosnia and Herzegovina, Montenegro, North Macedonia and Serbia who spoke at the debate entitled ‘Case Law and Online Media Regulation in the Balkans’ also said that the application of existing legislation is inadequate.

Authorities often rely on legislation that was developed for traditional media which has not been adapted accordingly, or on self-regulation which is not mandatory.

Lazar Sandev, an attorney at law from North Macedonia, argued that “those who create public opinion regarding matters of public interest do not uphold any standards, they do not have any legal knowledge”.

Jelena Kleut, associate professor at the University of Novi Sad’s Faculty of Philosophy, said that in Serbia there is a lack of willingness to apply standards in online media, and noted a difference between rich and poor media outlets as well as between responsible and irresponsible ones.

“The wealthy, irresponsible media – they have legal knowledge but they don’t care. They would rather see the complaints in court, pay a certain amount of fines and continue along, they don’t care. On the other end of the spectrum, we have responsible but poor media,” Kleut said.

The media experts also debated the controversial issue of reader comment sections on websites, which some sites around the world have removed in recent years because of a proliferation of hate speech, defamation and insulting language.

According to Montenegro’s Media Law, which came into force in August this year, the founder of an online publication is obliged to remove a comment “that is obviously illegal content” without delay, and no later than 60 minutes from learning or receiving a report that a comment is illegal.

Milan Radovic, programme director of the Civil Alliance NGO and a member of the Montenegrin Public Broadcaster’s governing council, argued that it is clear that such a short deadline, if applied, will damage not only those affected but also freedom of expression.

Edina Harbinja, a senior lecturer at Britain’s Aston University, warned that there is a conflict between regulatory attempts and media freedom, and that “this is when we need to be careful in how we regulate, not to result in censorship”.

This was the second debate in a series of discussions on online media regulation with various stakeholders, organised as a part of the regional Media for All project, which aims to support independent media outlets in the Western Balkans in becoming more audience-oriented and financially sustainable.

The project is funded by the UK government and delivered by a consortium led by the British Council in partnership with BIRN, the Thomson Foundation and the International NGO Training and Research Centre, INTRAC.

Share This Now! How Conspiracy Theories Swamped North Macedonia

The day starts with coffee and unread messages: a few from friends, a few work related, a paid furniture ad, and one with lots of exclamation marks that indicates that it must be read immediately before it is deleted from the Internet. This is because it reveals a big secret, hidden from ordinary people.

That “secret” may refer to the “fake” pandemic, the “dangerous” new vaccine, the “global conspiracy against Donald Trump”, the “dark truth about child-eating elites” – an especially popular term – and so on.

The sender or sharer may well be an ordinary person that we know personally or through social networks, and who sends such content for the first time or occasionally.

Spreading misinformation through personal messages has become increasingly common in North Macedonia, as elsewhere.

But this is not the only novelty. As the fight against fake news has intensified, with changes of algorithms on social networks and the inclusion of independent fact-checkers, so have the techniques that allow false content to remain undetected on social networks for as long as possible.

“Sending personal messages is an attempt to spread misinformation faster, before it can be detected,” explains Rosana Aleksoska, from Fighting Fake News Narratives, F2N2, a project led by the well-known Skopje-based NGO MOST, which searches for misinformation on the Internet.

Among the newer methods used to avoid detection, she notes, is the mass sharing of print screens instead of whole texts, and, in countries that use Cyrillic script like North Macedonia, Cyrillic and Latin letters are deliberately mixed.


Spreaders of misinformation are always in search of new ways to avoid detection. Illustration photo: BIRN

See and share before it’s removed

One video that recently went viral on social networks in North Macedonia, fuelling panic about COVID vaccines, was released on December 8.

In it, a former journalist appears to interpret a document outlining possible contraindications and side effects of the newly developed Pfizer vaccine against COVID-19 – but presents them as established facts.

It got more than 270,000 views and 5,300 shares on Facebook.

While the video reached a large audience, those numbers only partly show just how far the misinformation spread.

The video soon found itself in the inboxes of many other people, after Facebook acquaintances sent it to them in a direct message, urging them to see it as soon as possible, before it was deleted or marked as fake.

People who believe in conspiracy theories, or regularly participate in disseminating them, send direct messages to each other, informing them that new material has been released.

At first glance, one might think this sounds like a small, obscure group hanging out online.

But the results of a recent public opinion poll conducted by the Balkans in Europe Policy Advisory Group, BiEPAG, showed that only 7 per cent of the population in the region do not believe any of the best-known conspiracy theories, and over 50 per cent believe in all of them. The combined percentage of all those who said they believed in all or just in some of the theories was over 80 per cent.

With these huge numbers, it is not surprising that more misinformation also ends up in the virtual mailboxes of those who “don’t believe”, persuading them to switch sides. Some of these people receive three or four such messages a week.

What the messages have in common is that they are accompanied by urgent words: “See this before they delete it from Facebook”, or, “Share and disseminate”, or “They could no longer remain silent, take a look”, etc.

Because people pay more attention to personal messages than to other social media posts, they are more likely to see this content. They may well also spread it, explains Bojan Kordalov, a Skopje-based expert on social networks and new media.

“The way they are set up and designed, fake news gives people a strong incentive to spread them,” he said.

The pandemic was the main topic of misinformation this year, but in North Macedonia this topic intertwines with others, ranging from Euro-Atlantic integration to politics, Aleksoska from F2N2 observes.

“The object of the attack is people’s emotions – to provoke an intense reaction,” she says.

As the year went on, the subject of messages also changed. At first they focused on the “false” nature of the virus, and then later on how there was no need to wear masks or observe social distancing and other health-protection measures.

After the breakthrough in discovering a vaccine was made, the messages began to focus on the alleged dangers and health risks of vaccination.


The way they are set up and designed, fake news gives people a strong incentive to spread them. Illustration photo: BIRN

“Don’t believe, check” – as we instruct you

The video about the supposed effects of the vaccine that gained traction in North Macedonia is a typical example of what such disinformation looks like. Similar videos are produced every day.

Among the private messages received by social network users are videos of people posing as doctors from the US, Canada, Belgium, Britain or Germany, filming themselves with webcams and warning that vaccines may well be deadly.

In one video, which focuses on reading the instructions on the AstraZeneca vaccine, it is also clear that the creators of fake news use the same messages as those who fight fake news, such as: “Don’t believe, check”.

However, they also provide the guidelines about what to “check”.

“Don’t trust us, investigate for yourself. For example, visit these sites. Or google this term, ChAdOx-1. See here, it says – micro cloning,” the narrator in this video can be heard saying as the inscriptions from the vaccine packaging are displayed.

“They convince us that it is safe, but the traces are here in front of us,” the narrator adds, in a dramatic tone.


The pandemic was the main topic of misinformation this year. Illustration photo: BIRN

Finding new ways to bypass filters

Although outsiders have no direct insight into exactly how social networking algorithms detect suspicious content, as they are business secrets, many experts on these technologies told BIRN that certain assumptions can be drawn.

As the creators of disinformation can also be technologically savvy, they have likely drawn their own conclusions and seek new ways to bypass known filters.

One common alarm is when content goes viral quickly. This signals to social networks that the content needs to be checked. But if several different messages containing the same main point are sent, instead of one identical message, the protection algorithms may have a harder time detecting the content’s risk.
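The difference between one identical message going viral and many reworded copies can be sketched with a toy example. The code below is a hypothetical illustration only, not how any platform actually works: it assumes a naive detector that counts exact duplicates by hashing each message.

```python
# Hypothetical sketch: exact-duplicate counting misses reworded copies.
import hashlib

def fingerprint(message: str) -> str:
    """Exact-duplicate fingerprint: any change to the text yields a new hash."""
    return hashlib.sha256(message.encode("utf-8")).hexdigest()

identical = ["See this before they delete it!"] * 3
reworded = [
    "See this before they delete it!",
    "Watch this before it gets deleted!",
    "Look at this before they take it down!",
]

# Three identical copies collapse to one fingerprint - an easy viral signal.
print(len({fingerprint(m) for m in identical}))  # 1

# Reworded copies all hash differently, so a naive counter sees no spike.
print(len({fingerprint(m) for m in reworded}))   # 3
```

In this toy model, splitting one message into several paraphrases spreads the same point across distinct fingerprints, which is why varied wording makes fast-spreading content harder to flag.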

Apart from masking the content, spreaders of misinformation use different formats to avoid detection.

Print screens of articles and of social media posts may be shared instead of the actual articles or posts. Some users even do this with their own posts, and republish them as photos.

“Print screens are common in conducting disinformation campaigns. This is just one of the mechanisms they use,” Aleksoska explains. “The problem is much bigger, so the answer must be comprehensive and coordinated.”

Print screens are not only more difficult for the software to detect, but also harder for people to check, especially if the name of the media outlet that published the content is omitted or cut from the photo.

The North Macedonian part of the internet recently saw a print screen from a Swiss media outlet circulating, with a title in German reading: “Currently no vaccine can be approved.” Hundreds of people shared it.

The publisher that first spread this print screen claimed that the Swiss had rejected the German vaccine “because of the risk of death”.

But the real text does not say that Switzerland rejected the German vaccine at all, only that it will first implement a risk-control strategy “to prevent side effects or fatalities”.

This way, those who spread fake news have a clear advantage over those who fight to stop it.

To reach the original article, one first has to retype the title in German into a search engine, find the text with an identical title among the results and translate it with an online tool. In the time this takes, ten more people will have received the print screen and simply clicked “Share”.

Print screens in North Macedonia have also recently been used to spread untrue information about the current dispute between North Macedonia and its neighbour, Bulgaria, which has refused to allow Skopje to start EU accession talks.

Some of these posts present Bulgaria’s demands as something that North Macedonia already accepted.

Since the main bone of contention is the Macedonian language and identity, it is one of the most sensitive issues currently preoccupying the public.

Another technique used to avoid or baffle filters is mixing Cyrillic and Latin letters that are identical in meaning or form, like the letters a, e, n, x, u, j, s, as well as some others.

When a social media user complains that a post has been removed from their profile, in some cases, another user will advise them next time to mix up the letters, making it harder to detect problematic content.
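A minimal sketch can show why mixed alphabets baffle a naive filter. The snippet below is a hypothetical illustration, assuming a made-up blocklist keyword; real moderation systems are far more sophisticated, but the underlying weakness of exact string matching is the same.

```python
# Hypothetical sketch: a Cyrillic look-alike letter defeats exact matching.
BLOCKLIST = {"vaccine"}  # made-up keyword a naive filter might match

def naive_filter(text: str) -> bool:
    """Return True if any blocklisted word appears verbatim in the text."""
    return any(word in text.lower() for word in BLOCKLIST)

plain = "vaccine"        # all Latin letters
mixed = "v\u0430ccine"   # looks identical, but the 'a' is Cyrillic U+0430

print(naive_filter(plain))  # True  - caught
print(naive_filter(mixed))  # False - slips through unchanged to the eye

# A homoglyph-aware filter can first normalise look-alike letters to Latin:
HOMOGLYPHS = str.maketrans({
    "\u0430": "a",  # Cyrillic а
    "\u0441": "c",  # Cyrillic с
    "\u0435": "e",  # Cyrillic е
    "\u043e": "o",  # Cyrillic о
    "\u0440": "p",  # Cyrillic р
    "\u0445": "x",  # Cyrillic х
    "\u0443": "y",  # Cyrillic у
})

def normalised_filter(text: str) -> bool:
    return naive_filter(text.translate(HOMOGLYPHS))

print(normalised_filter(mixed))  # True - caught again after normalisation
```

To a human reader the two spellings are indistinguishable, which is exactly what makes the trick effective against filters that compare raw character codes.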


Some people spread fake news because they believe in it and think they are contributing. Photo: Pixabay

Ideological foot-soldiers do the hard work

But why would anyone advise others on how to make it harder for social networks to detect their problematic content?

Checking some of the profiles that publish and spread misinformation reveals that, besides the usual suspects – like thematic profiles with false names that only publish information from one or more sources, or people who are part of formal or informal organisations and spread their ideology – a large number of users have no known connection to disinformation networks.

Most are ordinary people who do not hide their identities, publish photos of family trips, but also from time to time share some “undiscovered truth” about the coronavirus or a “child abuse plot” – wedged between lunch recipes and pictures of walks in parks.

Fact-checkers and communication technology experts agree that disseminating misinformation is a highly organised activity, often done with a malicious intent – but also that many people share such content without hidden motives. They clearly feel a responsibility to be “on the right side”.

“Some people spread fake news because they believe in it and think that by doing so they are contributing to some kind of fight for the truth to come to light,” Kordalov explains.

This makes the fight against misinformation even more difficult, because while organised networks create and spread false news at the top, most of the work of dissemination is done by individuals and micro-communities that have no connection to them, or even between each other.

“All conspiracy theories are just pieces of the master theory that says that certain elites rule the world. The more somebody believes in that, the more likely he or she would read and share content supporting this theory,” Aleksoska notes.

However, there are some solutions. Algorithms, according to Kordalov, can be reprogrammed to recognise new forms of false news. No final answer can be found to misinformation, he admits, but the two sides constantly compete and the side that invests most effort and resources will lead in the end.

Technological competition, however, is not enough if it is not matched by stronger institutional action, because creating mistrust in institutions is one of the main goals of disinformation campaigns.

Kordalov says it is not enough for the PR services of institutions just to issue announcements rebutting fake news related to their work each time they spot it. They must be actively involved in a two-way communication and react to false news quickly.

“This is often called ‘damage control’ but this is not the point. Their [institutions’] job is to serve the citizens, and providing real information is part of that service,” he says.

One way for institutions to protect public trust in them is to provide high quality services, he adds. If they work well, and if citizens feel satisfied with them, it will be harder for disinformation to hurt them.

Montenegrin Govt Urged to Commit to Press Freedom Reforms

A group of media organisations has called on the new Montenegrin government to commit to reforms that will build and maintain media freedom.

Media Freedom Rapid Response, MFRR, the Southeast Europe Media Organization, SEEMO, and their partners published a report on Wednesday demanding protection of media freedom in Montenegro.

“It will take sustained and concerted efforts by Prime Minister Zdravko Krivokapic to improve protections for media freedom and the rule of law. They must devote particular attention to addressing the myriad problems faced by journalists and media workers in Montenegro,” said Nik Williams, coordinator at the European Centre for Press and Media Freedom, ECPMF, in a press release.

In parliamentary elections held on August 30, three opposition blocs won a slender majority of 41 of the 81 seats in parliament, ousting the long-ruling Democratic Party of Socialists, DPS. After the election, on December 4, new Prime Minister Krivokapic, among other things, promised his government would restore and protect media freedom.

In the report, MFRR called for an end to impunity for crimes against journalists and media workers by ensuring police and prosecutors investigate all attacks and threats and bring perpetrators to justice.

It also called for establishing shared standards and principles for the regulation of the media market to encourage a fair playing field.

The report warned about the current ownership concentration of much of the media, saying management of state support funds and public advertising had been paired with a ruthless campaign against independent media.

The media organisations pointed to the prison sentence issued to the well-known investigative journalist Jovo Martinovic, calling it an attack on journalism. In a second-instance verdict, the court found him guilty of mediation in drug trafficking; he insisted he only met criminals for the purpose of his investigation.

“The new government should continue reform of the public broadcaster. It should start reforming journalistic source protection and, generally, ensure that all new or amended media laws are drafted in line with international standards and best practice on media freedom and pluralism,” the report said.

In its 2020 progress report on the candidate country, the European Commission noted a lack of media freedom in Montenegro, stressing that important old cases of attacks on journalists remained unresolved. The Commission warned also of the polarization of the media scene and of weak self-regulatory mechanisms.

“Concerns also remain about national public broadcaster RTCG’s editorial independence and professional standards,” the progress report said.

Turkey Fines Social Media Giants Second Time For Defying Law

Turkey’s Information and Communications Technologies Authority, BTK, on Friday imposed fines of 30 million Turkish lira, equal to 3.10 million euros, on digital media giants including Twitter, Facebook, Instagram, YouTube, Periscope and TikTok, following the first fine of 10 million lira a month earlier.

The second fine came after the social media giants again failed to appoint official representatives to the country as required by a new digital media law adopted in July this year.

“Another 30 days were given to those companies [to appoint representatives] and this time expired this week. Another 30 million Turkish lira fine was imposed on each of the companies which did not comply with the necessities of the law,” BTK told Turkey’s Anadolu Agency.

In the past month, none of the social media giants has made any attempt to appoint official representatives, as the Turkish government demanded. The only social media company to appoint a representative is Russia’s VKontakte digital platform, VK.

“We require social media companies to appoint representatives in our country. We aim to protect our citizens, particularly children, who are more vulnerable than adults,” President Recep Tayyip Erdogan said on December 1.

“We hope they voluntarily respond to our request. Otherwise, we will continue to protect the rights of our citizens at all times,” Erdogan added, accusing the social media giants of creating an uncontrolled environment in the name of freedoms.

If the media companies comply within three months, the fines will be reduced by 75 per cent. If not, they will face an advertising ban for three months. As final sanctions, their bandwidth will be halved and then cut by 90 per cent.

The government is also asking the online media giants to transfer their servers to Turkey.

Opposition parties and human rights groups see the new law as President Erdogan’s latest attempt to control media platforms and further silence his critics.

The new regulations might also prompt companies to quit the Turkish market, experts have warned. PayPal quit Turkey in 2016 because of similar requests and Wikipedia was blocked in Turkey for more than two-and-a-half years.

According to the company, Turkey has submitted the highest number of requests to Twitter to delete content and close accounts. In 2019, Turkey asked Twitter to close nearly 9,000 accounts, but only 264 of them were shut down.

Bucharest Wins Race to Host EU Cybersecurity Centre

The Romanian capital has won the race to host the new European Cybersecurity Industrial, Technology and Research Competence Centre, ECCC, Romania’s Foreign Minister, Bogdan Aurescu, announced on Thursday.

“Exceptional success for Romania,” Aurescu wrote. “After intense diplomatic efforts, Bucharest was elected to host the EU’s Cybersecurity Centre – the 1st EU Agency in Romania,” the minister tweeted.

Bucharest was chosen over Brussels, Munich, Warsaw, Vilnius, Luxembourg and León, Spain, to host this new centre funded by the EU and dedicated to developing technologies to counter cyberattacks.

“Romanian expertise in IT was acknowledged in the EU. Romania is ready to work hard for a European cybersecurity ecosystem,” the minister continued in his tweet.

According to the European Council, the criteria to choose the host of the ECCC included “the date on which the centre can become operational”, “connectivity, security and interoperability with IT facilities to handle EU funding” and the existence of a “cybersecurity ecosystem”.

In recent years, Romania has become respected for its cybersecurity capacities. Conversely, it is also infamous for being the base of many cybercrime networks defrauding internet users all over the world.

The ECCC aims to “contribute to the deployment of the latest cybersecurity technology, support cybersecurity start-ups and SMEs, enhance cybersecurity research and innovation [and] contribute to closing the cybersecurity skills gap”.

The centre is expected to play a central role in the EU fight against increasing cyberthreats from hackers acting either on their own initiative or at the behest of hostile states and entities.

One such threat was reported this week by the European Medicines Agency, which said it had been hit by a cyberattack in which hackers accessed documents relating to a COVID-19 vaccine.

Slovenia Govt Condemned for Cutting Funds to Public News Agency

The Slovene Journalists’ Association, DNS, said on Tuesday it was “appalled” by the government decision to stop funding the Slovenian Press Agency, STA, the independent public news agency, allegedly because it did not file the requested documentation to the Government Communications Office, UKOM.

“This is yet another attempt to destroy the national press agency, which is a pillar of high-quality and unbiased reporting. We have seen the same thing happen in neighbouring Hungary,” the DNS said in a press release.

As local media – but not the government press release – reported on Tuesday, UKOM informed the government of Janez Jansa that it was unable to implement a contract with the STA for the rest of this year or conclude a contract for next year.

The STA as a result has not received any monthly compensation for October from UKOM, which its leadership says threatens its future, and is a serious threat to media pluralism and media freedom.

BIRN asked UKOM to respond to these accusations but did not receive a reply by the time of publication.

However, UKOM director Uros Urbanija told public television on Tuesday that there had been “no decision to stop funding the STA”, and that UKOM had only informed the government about the impossibility of financing the agency because it “did not get the information we need to be able to verify the credibility and sustainability of [STA’s] funding”.

The pensioners’ party, DeSUS, a partner in the coalition government led by Prime Minister Jansa, on Tuesday demanded on Twitter “the immediate withdrawal of the decision, by which, without prior and reasoned discussion, the government is strongly interfering in the media space”.

The DNS claims that, as the founder of the agency, the Republic of Slovenia is required by law to finance the agency.

The STA said on Tuesday that the UKOM decision was preceded by a series of letters addressed to its director and Supervisory Board since mid-October, and that it was responding to them “in a manner and within the scope envisaged by the legislation”, but that some of the questions submitted by UKOM had no legal basis.

“In the letters, UKOM demanded a series of explanations: from content-related questions about the journalistic work of the STA editorial board and specific news content and responses to that content which run against the editorial autonomy provided by law; to issues related to business operations, which are, in accordance with the ZSTAgen [Slovenian Press Agency Act], supervised by the Supervisory Board of the STA,” the agency said.

Some observers suggest that Jansa’s right-wing government is dissatisfied with the STA’s reporting during the pandemic, as it gave more space to anti-government protests than to government and prime ministerial appearances.

But the DNS defended its work. With its news wire, live streamed press conferences and radio news service, “the STA has made it significantly easier for journalists to access information at a time when they have had to work remotely, due to various restrictions”, the DNS said.

A number of local and international press freedom watchdog organizations have accused Jansa of using the pandemic to restrict media freedoms and make often personal attacks on journalists.

EU Set to Take on Big Tech with New Digital Services Act

Over the past two decades, the process of digitisation has completely transformed the European services sector, though EU legislation regulating the provision of those services has not kept up with the fast-changing technological environment. With consensus among European policymakers that the 20-year-old piece of legislation, the e-Commerce Directive, was in dire need of updating, the European Commission announced in January 2020 that it would pass a new Digital Services Act by the end of 2020. That date, expected to be December 2, is rapidly approaching.

With this brand new set of regulations governing the EU’s digital market, the Commission intends to clarify and introduce new digital services liability rules and ensure a more competitive digital market where even small and medium-sized businesses (SMEs) can compete with the more established players.

Policymakers in the EU, which is already home to the world’s strictest data privacy laws, believe that Europe is in a unique position to set new standards for the regulation of digital services for the whole world. The forthcoming rules represent an unprecedented strike against the seemingly limitless power of big tech companies, which are likely to oppose the reforms.

A close-up image shows the slogan of the ‘StopHateForProfit’ campaign on the organization’s website displayed on a smartphone screen in Cologne, Germany, 29 June 2020. EPA-EFE/SASCHA STEINBACH

What new rules are coming?  

Although the final contours of the legislative package are not yet public knowledge, it is expected that the regulation will come in two legislative proposals. The first set of proposals contained in the Digital Services Act will likely focus on updating digital services providers’ responsibilities and liabilities. The Digital Markets Act will then likely be concerned with limiting the power of big platforms in general.

In a recent speech, Executive Vice-President of the Commission Margrethe Vestager said that digital media platforms need to be more transparent about the way they shape the digital world that we see.

“They’ll have to report on what they’ve done to take down illegal material. They’ll have to tell us how they decide what information and products to recommend to us, and which ones to hide – and give us the ability to influence those decisions, instead of simply having them made for us. And they’ll have to tell us who’s paying for the ads that we see, and why we’ve been targeted by a certain ad,” Vestager said earlier this year.

Although it is not yet clear which specific platforms will be targeted, it is widely expected that the new rules will mainly apply to social media platforms with more than 2 million users, which have, until now, bitterly resisted attempts to disclose their algorithms.

“Platforms need to ensure that their users can be protected from illegal goods and content online, by putting in place the right processes to react swiftly to illegal activities, and to cooperate with law enforcement authorities more effectively,” the Commission’s press officer for the digital economy, Charles Manoury, told BIRN in an email.

When asked about the concrete rules being considered in Brussels, Manoury said that the Commission will “aim to harmonise a clear set of obligations (responsibilities) for online platforms, including notice-and-action procedures, redress, transparency and accountability measures, and cooperation obligations.”

In a report produced by the European Parliamentary Research Service in October, EU experts came up with the following recommendations for the Commission:

  1. Introduce clear, standardised notice-and-action procedures to deal with illegal and harmful content;
  2. Enhance transparency in content curation and strengthen reporting obligations for platforms;
  3. Establish out-of-court dispute settlement mechanisms for content management, particularly for notice-and-action procedures.

Those policy recommendations are strikingly similar to the rules already in effect in the country currently holding the Presidency of the Council of the EU – Germany.

A Google logo is displayed at the Google offices in Berlin, Germany, 24 June 2019. EPA-EFE/HAYOUNG JEON

German lessons

“The Commission in its impact assessments takes into account already existing EU laws, such as the NetzDG,” noted the Commission’s spokesman Manoury, referring to the Network Enforcement Act, which was passed by the German parliament back in 2017.

According to the website of the German Ministry of Justice and Consumer Protection, the law aims to fight hate crime and criminally punish fake news and other unlawful content on social networks more effectively. This includes insults, malicious gossip, defamation, public incitement to crime, incitement to hatred, disseminating portrayals of violence and threatening the commission of a felony.

In practice, all social media platforms (with more than 2 million users) that are accessible in Germany are obliged to take down or block access to “manifestly unlawful content” within 24 hours of receiving a complaint. They also have to offer their users an accessible procedure for reporting criminally punishable content and take “immediate notice” of any content that might violate German criminal law.

But German lawmakers didn’t stop there. In June this year, the Bundestag decided to further tighten the laws against hate speech online by requiring social networks to report to the BKA (Federal Police) and transmit some user data, such as IP addresses or port numbers, directly to the authorities.

Moreover, new rules will oblige operators of social networks to submit biannual reports on their handling of complaints about criminally punishable content. These reports must contain information, for example, on the volume of complaints and the decision-making practices of the network, as well as about the teams responsible for processing reported content. They must be made available to everybody on the internet.

Social media platforms could be liable for fines of up to 50 million euros if they fail on their reporting duties, according to a statement from the Justice Ministry.

According to the German daily Stuttgarter Zeitung, nine social media platforms have so far published transparency reports: Facebook, Instagram, Twitter, YouTube, Reddit, TikTok, Soundcloud, Change.org and Google+. The number of complaints varies greatly. In the second half of 2019, Facebook received 4,274 complaints, while Twitter received 843,527 and YouTube 277,478. Facebook felt compelled to take action in almost a quarter of the cases; 87 per cent of those posts, a total of 488, were deleted within 24 hours. Twitter acted on 16 per cent of the complaints, 86 per cent of which were removed from the network within a day, according to the newspaper.

However, the new obligations have their critics. Some express concern that legal content will end up being deleted by overzealous platforms eager to avoid paying hefty fines, the so-called problem of “over-blocking”. In 2017, when the law was first passed by the German parliament, even journalism unions in Germany protested against it, fearing a new form of censorship.

Reacting to the criticisms, German Justice Minister Christine Lambrecht recently called for the introduction of a “counter-presentation procedure”, which would give authors of deleted content the right to ask social networks for a reassessment of their decision before any fines would be imposed.

There is also criticism that some of the proposed rules might even be in conflict with the German constitution. This particularly concerns the law intended to combat far-right extremism and hate crime, which was passed in the summer and is intended to force operators of social networks to report criminal content such as the threat of dangerous bodily harm or defamation of public figures (mayors or municipal councillors) to the Federal Criminal Police Office. It is because of those concerns that the president has not yet signed the law.

Long way to go

The German experience clearly shows that certain measures to combat the spread of hate speech and other forms of illegal content online are relatively easy to implement, while others, like direct reporting to the police, might take much longer to build a consensus around.

That being said, even when it comes to the seemingly more trivial measures, the European Commission’s mission is an infinitely more challenging one. First, it needs to get all member states to agree on what constitutes a hate crime on the internet. Then it has to create a set of rules that would be applicable across all member states.

According to a source in the European Commission familiar with the legislation, the first task is the easier one. “There is actually a very broad agreement across the EU on the question of illegal content. Basically, what is illegal offline is also illegal online – it is just a question of how you monitor it and what measures to take to make sure that the rules are followed also online,” the source, who wished to remain anonymous, told BIRN.

Whatever the rules that the Commission ends up proposing in early December, the speed of the final implementation of those measures will largely depend on the legal form of the rules.

Generally speaking, if the rules assume the form of EU regulations, the final implementation might take a very long time, as regulations need unanimous agreement by all member states. If EU legislators decide to go with directives, which leave a lot of space for individual member states to translate into their own respective national laws and don’t require unanimous agreement, things could go much faster.

According to the source from the Commission, half a year is an absolute minimum to expect the legislative process to take.

“If you have an extremely well-drafted piece of legislation that everyone agrees on, it can take half a year. I’ve never heard about anything going faster than this. It is already clear that this will not be very straightforward,” the source said.
