Internet Freedoms in Turkey Continue to Deteriorate: Report

The Media and Law Studies Association, MLSA, said in a report published on Friday that internet freedoms in Turkey continued to decline in 2021 due to increasing censorship and surveillance.

The Free Web Turkey 2021 Annual Report, funded by the Netherlands, shows that at least 11,050 URLs were blocked in Turkey in 2021.

“While 1,593 of the blocked URLs contained news articles, a total of 49 news websites were banned during the monitoring period, some even more than once,” the report said.

“The project’s findings bring to light that 53 per cent of blocked news articles pertain to information directly related to Turkish President and AKP leader Recep Tayyip Erdogan, his family, and to mayors or officials of the AKP,” the report added.

According to Mumtaz Murat Kok, project and communications coordinator at the MLSA, the situation is not getting any better in 2022.

“The results of this report, which covers a period of only one year and reveals the dimensions of digital censorship in Turkey, become much more frightening when considered together with the ‘disinformation bill’ submitted to parliament very recently,” Kok told BIRN.

The new disinformation bill, currently awaiting consideration by parliament, makes ‘disinformation’ a crime that can lead to a jail term and paves the way for an even more repressive and coercive media environment, he said.

“As the report reveals, there is the intent to strengthen censorship practices that currently aim to protect a certain group and further violate the public’s right to receive information. In a country where there is almost absolute uniformity in media ownership, social media – on which many journalists rely to report and many citizens rely for news – is also being stifled,” Kok said.

The report also recommends that awareness of censorship and surveillance should be increased and social media platforms should bear the responsibility of being ‘media’.

According to a report published by Google covering data from the first six months of 2021, Turkey requested the removal of a total of 4,776 items. The majority of the requests were made on the grounds of ‘defamation’.

Google removed 1,686 of these items for legal reasons and 219 for company policy reasons.

“Considering the censorship practices that the government aims to increase, and that social media companies have so far submitted to the demands of the government without resistance, I think the next report [by the MLSA for 2022] will not be a more pleasant report,” Kok said.

Turkey’s Planned Internet Law to Criminalise ‘Spreading Misinformation’

A new draft law comprising 40 articles was presented to Turkey’s parliament on Thursday, aiming to increase government control over the internet, media and social media.

The law was prepared by President Recep Tayyip Erdogan’s ruling Justice and Development Party, AKP, and its far-right partner, the Nationalist Movement Party, MHP.

The new law, which is expected to pass soon, for the first time defines the crime of “spreading misinformation on purpose”.

It criminalises “a person who publicly disseminates false information regarding internal and external security, public order and the general health of the country, in a way that is suitable for disturbing the public peace, simply for the purpose of creating anxiety, fear or panic among the people”, the draft law explains.

According to the proposed law, persons who spread misinformation may be jailed for one to three years. If the court decides that the person spread misinformation as part of an illegal organisation, the jail sentence will be increased by 50 per cent.

Journalists may also be charged under the new law if they use anonymous sources to hide the identity of the person who spreads the misinformation.

The draft law was condemned by experts and journalists’ unions.

In a written statement on Friday, journalists’ unions including the Journalists’ Union of Turkey, TGS, the Journalists’ Association and the International Press Institute’s Turkey Committee said: “Concerned that it may lead to one of the most severe censorship and self-censorship mechanisms in the history of the republic, we call for the immediate withdrawal of this bill, which seems to have been designed to increase the pressure on journalism, not to ‘fight against disinformation’.”

The new law also allows internet media to register as periodical media publications. This will allow internet media to enjoy some of the benefits of traditional media, such as advertising and press cards, but brings more government control.

Internet media will be required to remove “false” content and must archive their publications, and the government may block access to their websites more easily.

“At the request of the ministries, the President [of the Information and Communication Technologies Authority] may decide to remove content and/or block access to broadcasts on the internet, to be fulfilled within four hours,” the draft law says, citing national security and public order.

It also creates new regulations on official press cards, after Turkey’s Council of State, the highest administrative legal authority in the country, cancelled the previous law in April 2021, citing risks to press freedom.

The regulations created by the Communications Directorate, which is under the control of the presidency, allowed the government to cancel the press cards of journalists seen as unfriendly to the authorities, critics claimed.

Since they were introduced, a large number of independent journalists have had their press cards cancelled or their applications for renewal denied.

However, the new law brings little change beyond creating a new board to decide on press cards. The Press Card Commission will have nine members, including government officials, academics and representatives of journalists’ unions, but five of them will come from the Communications Directorate, giving it a decision-making majority.

Djokovic Saga, Far-Right Rhetoric and Ethnic Bias Disrupt Online World

Online violations recorded at the end of January show, among other things, that divisive political propaganda and domestic ethnic tensions are having a strong impact on online behavior.

In North Macedonia, internal tensions involving the country’s Bulgarian and Albanian communities have not subsided and remain one of the main challenges for the new political leadership; in Serbia, the aftermath of the apparently settled Rio Tinto dispute and tennis star Novak Djokovic’s Australian Open saga still dominates the online environment.

Political clashes in Hungary ahead of the 2022 parliamentary elections continue to intensify, while far-right nationalist propaganda is escalating in Romania. In Bosnia, a banned Republika Srpska holiday and online misogyny were behind several online violations.

Ethnic-based violations agitate North Macedonia

In North Macedonia, where a way out of the long-running political crisis still seems to be far off, ethnic and national divisions remain one of the main challenges that the new authorities face in the short run.

The Bulgarian minority there endures much online hate speech due to persistent tensions between the two countries. In this context, the March 2021 attack on North Macedonia’s Eurovision contestant, Vasil Garvanliev, over his dual citizenship was an early sign of what was to come.

Ethnic tensions also involve the country’s large Albanian minority. Recently, Elida Zylbeari, the ethnic Albanian editor-in-chief of the North Macedonia-based Portalb.mk, said she experienced regular discrimination as a member of this ethnic minority. “Being an Albanian journalist in North Macedonia is harder than being a Macedonian journalist,” she remarked.

Ethnic Albanians march in protest following a court decision in Skopje, North Macedonia, 29 January 2021.

In a case recorded on January 16, an anonymous Twitter user spread false claims about the contents of the Macedonian-language dictionary, accusing its editors and curators of allowing words and phrases deemed offensive to Macedonians while throwing out words seen as offensive towards ethnic Albanian and Roma people. The tweet went viral and sparked an intense debate online.

In a separate Facebook case, on January 24, administrators of a Facebook group called “I want to tell the latest” misused the logo of SDK, a well-known North Macedonian online media outlet, to post pro-Bulgarian and anti-Macedonian rhetoric. SDK suffered similar abuse from another Facebook group in 2018, ahead of that year’s referendum on EU and NATO membership.

Rio Tinto and Djokovic saga stir Serbia’s online environment

In response to large-scale protests across Serbia, in which thousands in Belgrade and elsewhere blocked major transport routes over two massive investment projects involving foreign mining companies, the Serbian government revoked mining giant Rio Tinto’s exploration licences.

At a press conference on January 20, Prime Minister Ana Brnabić insisted that the decision to end Rio Tinto operations in Serbia was final. “We have fulfilled all the demands of the environmental protests and put an end to Rio Tinto in Serbia. With this, as far as the Jadar project and Rio Tinto are concerned, everything is over,” Brnabić told journalists.

But the Rio Tinto issue has not completely disappeared. Many environmental activists still do not trust the government’s promise to scrap the agreement with the mining giant. Brnabic “did not say what we will do with the damage and with the wells that are leaking, she did not say whether she will ban research into lithium and boron. She did not tell us … who from her government persistently pushed the project,” the Association of Environmental Organizations of Serbia, SEOS, said on Twitter.

The Rio Tinto “question”, in fact, still causes various offline and online tensions. On January 22, Marinika Tepić and Dragan Đilas, from the opposition Freedom and Justice Party, received death threats on Twitter in reply to Tepić’s tweet about a Rio Tinto press release. In her tweet, Tepić revealed that the announcement was sent from the email address of the Serbian government, writing also about the government’s seeming lack of transparency on the matter. In response to Tepić’s tweet, a Twitter account most likely bearing a false name threatened the politician, saying she and her children “deserve a bullet”.

Novak Djokovic of Serbia reacts during his men’s singles fourth round match against Milos Raonic of Canada at the Australian Open Grand Slam tennis tournament at Melbourne Park in Melbourne, Australia, 14 February 2021.

Another viral topic still dominating Serbia’s online environment was tennis star Novak Djokovic and his Australian saga. Deported by the Australian authorities after losing a gruelling visa battle ahead of the Australian Open, he has been the subject of several incidents of online misinformation and fake news.

On January 13, a few Serbian online media outlets presented satirical stories linked to the case as news, helping fake news spread widely on the web. The satirical portal Zicer, for example, spread a story about Serbs allegedly roasting a kangaroo in Australia in support of the tennis player, under the headline: “Serbs from Melbourne roasted kangaroos on a spit in protest for Novak!” The article stated that the roasted kangaroo had been brought in front of the hotel where Djokovic was staying to express solidarity with him.

In a similar manner, online media reported as news an article from the Australian satirical news website Double Bay Today, DBT. The article cited an alleged survey claiming that more Australians supported deporting Prime Minister Scott Morrison than deporting Djokovic. The text said that 52 per cent of the 5,600 respondents favoured Morrison’s deportation, but that the Australian Prime Minister did not want to comment on the results of the poll.

Hungary continues to experience mounting tensions ahead of the April 3 general elections

Hungary is fast approaching its parliamentary elections on April 3, 2022, and political clashes between rivals are not subsiding; on the contrary, they are intensifying considerably as the vote approaches.

News of partisan attacks, political scandals and unfounded accusations is an everyday occurrence, leaving Hungarian citizens at the mercy of conflicting reports circulating on the web. The latest concerns an alleged loan of 10.6 million euros that a Hungarian bank reportedly gave to French far-right leader Marine Le Pen to finance her campaign ahead of the first round of the 2022 French presidential election.

Independent MP Timea Szabo (C), Co-Chairperson of the oppositional Dialogue party, unfolds a long sheet of paper containing projects of civic organizations supported by the Norwegian Fund as legislators vote on a draft concerning the transparency of organizations receiving funding from abroad during a session of the Parliament in Budapest, Hungary, 13 June 2017.

In a case that occurred on January 31, Gábor Jézsó, a Catholic theologian and opposition candidate in the 6th district of Szerencs-Tiszaújváros in Borsod, reported in a video posted on Facebook that he had received a death threat via e-mail. The email contained a photo of a bloody knife and the caption “I will stab you”. Jézsó filed a complaint with the local police against an unknown perpetrator over the incident.

Just days earlier, on January 20, Tímea Szabó, an MP and co-chair of the opposition Dialogue for Hungary party, Párbeszéd, launched an attack on the reputation of a political rival in a case of disinformation aimed at spreading falsehoods and unverified information. The politician alleged in a Facebook post that Antal Rogán, a Cabinet Office minister from the ruling Fidesz party, could be the unnamed man known only by the initial ‘R’ in the so-called Völner-Schadl corruption case. At a press conference, opposition members claimed that Rogán was involved in “the highest level of corruption case in the political history of Hungary” since its democratic transition in 1989-90, “which started with the exposure of the bailiff mafia and the deputy justice minister, and who knows where it will end.”

Episodes of Covid-19 misinformation and the massive circulation of anti-vaccine conspiracy theories also continue to populate the Hungarian digital environment.

Ákos Kovács, a popular Hungarian pop-rock singer and songwriter, alluding to a well-known conspiracy theory, claimed in a video interview released on January 23 that the coronavirus was “cooked in China and financed with American money”. In a related case of Covid-19 disinformation, a newspaper article spread the false claim on Facebook that the Austrian city of Linz was recruiting “manhunters” to capture people who refuse to be vaccinated despite the country’s mandatory vaccination. The article, which showed a police officer snatching a man, quickly went viral and was shared more than 1,200 times on Facebook.

Far-right rhetoric and computer fraud alarm Romania’s online landscape

In recent days, Romania has seen an alarming crescendo of populist and nationalist rhetoric in the public sphere. The Alliance for the Union of Romanians, AUR, an ultranationalist right-wing party active in both Romania and Moldova, has been at the centre of numerous controversial episodes, stirring political tensions and ethnic and racial hatred.


Romanian politician George Simion (R), the leader of the extremist party Alliance for the Unity of Romanians (AUR), delivers a speech during a protest held in front of Health Ministry headquarters in Bucharest, Romania, 13 April 2021. Photo: EPA-EFE/ROBERT GHEMENT

First, the party organized a protest in front of the Romanian parliament against the possible introduction of mandatory COVID vaccination passes, and then it criticized the teaching of the Holocaust and sex education in schools. Madalin Necsutu suggests that it is a worrying trend that “the right-wing AUR party in Romania sees anti-Semitism as a way to pick up new voters”.

In a worrying incident on January 26, the AUR party started a public campaign on Facebook against Romanian media that it deemed hostile, publishing a “blacklist” of the Romanian press on its official Facebook page. “AUR is trying to intimidate those journalists who dare to cover in an honest way the actions, intentions and positions of this party,” said Cristian Pantazi, editor-in-chief of G4 Media. Creating a blacklist is nothing new in Romania, as politicians such as Corneliu Vadim Tudor, Traian Băsescu, Liviu Dragnea and Florin Cîțu have all done something similar in the recent past.

Phishing scams and computer fraud are omnipresent in Romania’s digital environment. At the same time, as already pointed out by our latest annual report on digital rights, “Online Intimidation: Controlling the Narrative in the Balkans”, Romania also stands out as the country with the highest number of cases (20) involving breaches of citizens’ data recorded in the last year.

In a first incident, recorded on January 20, the National Company for Road Infrastructure warned that numerous drivers were being targeted by phishing emails after their email addresses were stolen from the Vignette website. It is not clear when the original attack happened or how the unknown hackers obtained the users’ email addresses. Meanwhile, it was also revealed on January 26 that the attackers behind the FluBot trojan, which spread globally last year, predominantly targeted Romanian users between January 15 and 18, according to a report released by Bitdefender cybersecurity experts.

Republika Srpska’s holiday and online misogyny cause hostilities in Bosnia

The aftermath of the banned national holiday in Bosnia’s Serb-dominated entity, Republika Srpska, continued to shape several of the violations recorded in the second half of January in the Bosnian digital environment.

On January 14, a video of the song ‘Jedina Srpska’, performed by the Belgrade-based group Beogradski Sindikat and Danica Crnogorcevic, a singer of folklore and spiritual music from Montenegro, was removed from YouTube. The video, which was released to coincide with the celebration of Republika Srpska Day, was removed after several YouTube users complained that it incited ethnic hatred, according to the singer.

Misogynistic incidents and attacks on activists also continue to occur very frequently in Bosnia. Environmental activists across Bosnia, in particular, face growing threats, pressure and attacks from both citizens and public institutions, as evidenced by the case of Mostar, where the municipal court issued a decision imposing a sentence on activists in the region.

An incident recorded on January 20 took place during a public debate on small hydroelectric power plants in Mostar municipality, which was supposed to produce solutions and ideas for a better environment in the city. One of the owners of a small hydroelectric power plant insulted an eco-activist in front of the “Aarhus Center” Association. A video of the exchange went viral, with many social media users sharing it and characterising it as a chauvinistic and vulgar attack.

Kosovo Albanians Join Video Campaign to Support Folk-Dancing Teacher

An online support campaign was launched on Monday for a Kosovo biology teacher who was targeted with derogatory comments online after posting videos of himself dancing to folk music on TikTok.

Valon Canhasi, founder of social media agency Hallakate, posted a video of himself dancing to Albanian folk music at his office on Monday and urged others to follow suit to support teacher Lulzim Paci after critics claimed that his actions were inappropriate for an educator.

“I invite all of you to make a video dancing in your office or in your home,” Canhasi wrote on Facebook as he initiated a folk-dance ‘challenge’ under the hashtag #profachallenge.

Teacher Paci, from the town of Vushtrri/Vucitrn, was subjected to sustained criticism on social media after he posted several videos of his folk dances.

Among the critics was ruling Vetevendosje party MP Fjolla Ujkani, who called on the high school director and the Vushtrri/Vucitrn Education Directorate to fire Paci for “improper and degenerate acts”, which she claimed contravened the duty of a teacher to instill values in young people.

However, Ujkani made a public apology on Monday evening in a Facebook post, explaining that she had been a student at the high school where Paci teaches. She said “my reaction was aimed at the protection and well-being of the students, and in any case the preservation and protection of the credibility of the school”, but that she did not intend to cause harm to anyone.

In an interview with Kosovo media outlet Koha, Paci tearfully explained how the online harassment he has endured since posting the videos caused him to tell his brother to deny that they are related to avoid embarrassment, and instead to say that “[Lulzim] is my cousin”.

Supporters of Paci argued that he has the right to use his private social media accounts to publish videos of himself dancing, which do not harm anyone.

Kosovo-based media lawyer Flutura Kusari said that “freedom of expression guarantees the teacher the right to publish videos from a private environment”.

After Canhasi posted his video and launched the #profachallenge, a series of Kosovo Albanians including celebrities, politicians and teachers from various regions of the country posted videos of themselves dancing to Albanian folk songs.

Famous Kosovo singer Dafina Zeqiri responded by making a video of herself dancing with the teacher, Paci, and posting it on her TikTok account.

Actress Adriana Matoshi, known for her roles in films such as ‘Zana’ and ‘Martesa’ (‘The Marriage’), who is now an MP from the ruling Vetevendosje party, also recorded a video.

“Don’t stop dancing for anyone… You have done nothing wrong to anyone,” Matoshi wrote on Instagram.

The challenge reached Albania as well, where Monika Kryemadhi, the first lady and leader of the opposition Socialist Movement for Integration, LSI, also posted a video of herself dancing.

Contested Bosnian Holiday and Hungarian Election Trigger Online Violations

The start of 2022 has seen a rise in political and ethnic tensions, especially in Bosnia and Herzegovina and Hungary.

In Bosnia, the celebration of a hotly debated holiday, Day of Republika Srpska, has exacerbated existing ethnic and political tensions between Bosnian Serbs and Bosniaks. In Hungary, meanwhile, the ruling Fidesz party’s anti-LGBT+ rhetoric and smear campaigns against political opponents have marred the pre-election period.

After the election of a new prime minister in North Macedonia, old political tensions there also remain at the fore. In Serbia, meanwhile, journalists still face online death threats. Phishing scams continue to disrupt Romania’s digital space.

Banned Bosnian Serb holiday sparks ethnic hatred online

Following the commemoration on January 9 of the banned national holiday in Bosnia’s Serb-dominated entity, Republika Srpska, marking the 30th anniversary of the RS’s existence, political tensions in the country have worsened.

In November 2018, Bosnia’s Constitutional Court declared, for the second time since a 2015 ruling, that Republika Srpska’s national holiday was unconstitutional because, among other things, it discriminates against non-Serbs in the entity, mainly because January 9 is also a Serbian Orthodox religious celebration, St Stephen’s Day.

The holiday has triggered a series of incidents in both Bosnian and Serbian cities, including protests by Bosnian citizens living abroad.

On the same day the celebration was held, videos reporting the holiday published by Bosnian online media attracted numerous comments that included hate speech, calls for violence and warmongering rhetoric.

Serbian flags fly on the Serbian Government building in Belgrade, Serbia, 15 September 2020. Photo: EPA-EFE/MARKO DJOKOVIC

On January 10, a Facebook page entitled Green Berets of Bosnia and Herzegovina, named after an ethnic Bosniak paramilitary organization, founded in Sarajevo in 1992, launched an appeal for new members.

Gorica Dodik, the daughter of Bosnian Serb leader Milorad Dodik, was also targeted on January 11 with misogynistic remarks, sexist insults and hate speech over her Twitter posts about the RS holiday. Her Twitter account was also suspended for a period.

Attacks on Orban rivals and LGBTQ+ community rise in Hungary ahead of elections

Ahead of Hungary’s parliamentary election on April 3, which Prime Minister Viktor Orbán risks losing for the first time since he took office in 2010, the country’s political balance is clearly wavering.

What happens in April is likely to be a pivotal moment in the post-communist history of Hungary.

To complicate matters, on April 3, Hungarian voters will also vote in a controversial government-initiated referendum on LGBTQ+ rights. Orbán and his ruling Fidesz party have said the referendum on “child protection”, which will contain five questions, aims to protect children from homosexual and transgender influences promoted by media inside and outside Hungary.

“If a man and a woman live together, get married and children are born, we call this a family. This is not a question of human rights, we are just calling things by their true name,” Orbán told an interview for Magyar Nemzet.


So-called Progress Flags, aimed at protesting against Hungary’s recently passed so-called Anti-Paedophilia Act, fly at the Hofvijver in The Hague, the Netherlands, 27 June 2021. Photo: EPA-EFE/JEROEN JUMELET

On January 11, pro-government media outlets backed Fidesz’s anti-LGBT+ rhetoric about reaffirming the value of the traditional family. Fidesz presented misleading data from reports from the UK, Sweden and Spain, wrongly suggesting that masses of children in these countries are being subjected to sex reassignment surgery.

Campaigns targeting Orban’s opponents remain widespread in Hungary. Following a media campaign launched on December 20 against Imre Mártha, the head of Budapest’s public utility companies, who faced numerous allegations that turned out to be false, more misinformation has targeted the mayor of Budapest.

On January 15, a reporter published photos claiming that mayor Gergely Karácsony, a member of the Hungarian green party Dialogue for Hungary, had parked his car in a no-parking zone. The photos, however, did not prove that parking was prohibited in that area.

False accusations and misogyny mark North Macedonian digital space

After almost two months of political crisis and the parliamentary appointment on January 17 of the Social Democrat leader Dimitar Kovacevski as the new prime minister, political friction in North Macedonia remains strong.

Outgoing PM Zoran Zaev, accused of failing to fully deliver on his promises of internal reform and of poorly managing the Covid crisis, remains the target of numerous attacks.


New leader of SDSM (Social Democratic Union of Macedonia) Dimitar Kovacevski puts on the protective mask after receiving credentials for new government from the North Macedonia’s President in Skopje, Republic of North Macedonia, 29 December 2021. Photo: EPA-EFE/GEORGI LICOVSKI

On January 3, in a case on Twitter, an anonymous user tweeted that Zaev had applied for unemployment benefit after his resignation. The tweet, which featured an old photo of Zaev visiting the State Employment Agency, went viral and sparked heated debate.

In another incident, on Facebook, a woman from Gevgelija, in southeast North Macedonia, was targeted with hate speech and misogynistic comments after a news portal published an article about the large number of books, 438 in total, that she had read in one year. Many users went so far as to wish her dead.

Journalists facing more online aggression and threats in Serbia

“Journalists in Serbia are one of the most frequently targeted parties online. In 30 of the 111 cyber rights violations logged in our database (38 if you include investigative journalism) journalists were subjected to online abuse and intimidation – with an alarmingly high number receiving death threats,” our latest annual digital rights report Online Intimidation: Controlling the Narrative in the Balkans, notes.

Serbia has the highest rate of online attacks on journalists in the Balkans. In many cases, the motive behind these attacks seems to be to undermine independent journalism. Investigative journalists remain targeted by politicians and pro-government media outlets with the aim of concealing wrongdoing and evidence that could embarrass those in power.

Nova.rs journalist Pero Jovović has been targeted again. On October 15, the pro-government tabloid Informer, supported by other tabloids, published an article about Jovović, who was then sent death threats on social media over a post in which he displayed the Kosovo flag emoji. Over the recent holidays, his Instagram profile again received numerous death threats sent by users via private messages. The cause of the latest online aggression is not entirely clear.

Phishing scams endemic in Romania’s digital landscape

Romania’s online environment continues to register phishing scams and other online fraud. The most populous country in the region seems particularly exposed to large-scale fraud involving thousands of citizens.

Romania’s Directorate for Investigating Organised Crime and Terrorism, DIICOT, has already arrested members of several organised criminal groups that operated locally and internationally between November 2020 and July 2021, all involved in cyber fraud, phishing and other online scams.


Mihai, a senior broker, works at his desk at the Tradeville headquarters in Bucharest, Romania, 22 December 2021 (issued 30 December 2021). Photo: EPA-EFE/ROBERT GHEMENT

On January 11, cybersecurity experts at Bitdefender, one of the country’s leading technology companies, warned that a phishing scam first detected in July 2021 was now targeting email users, mainly in Romania, Croatia and Hungary.

Hackers send emails that appear to be replies to messages previously sent by the users, in which they claim to have obtained the users’ passwords and even intimate images of them, and demand 1,200 euros in Bitcoin. According to Bitdefender, more than half of the emails addressed to Romanian users were sent from local IP addresses.

In another incident, which came to light in January, the district court of Oradea dismissed a civil suit filed in late 2020 by Calin Moldovan, the administrator of a gaming Facebook group, in which he demanded 4,000 euros in moral compensation from five other members of the “True Gamers” group, whom he accused of taking it over.

The five users later posted pornographic images on the group in order to force Facebook to permanently suspend “True Gamers”.

But the court found that the plaintiff could not prove that the five defendants were behind the Facebook accounts involved in the hostile takeover. The decision is one of the first in which a court has ruled that a Facebook profile cannot be used to identify a person without reasonable suspicion, and it could jeopardize future actions taken by the Romanian authorities against Facebook users.

Southeast Europe Civil Society Must Cooperate to Combat Digital Violations

Digital rights violations have been rising across Southeastern Europe since the beginning of the COVID-19 pandemic with a similar pattern – pro-government trolls and media threatening freedom of expression and attacking journalists who report such violations.

“Working together is the only way to raise awareness of citizens’ digital rights and hold public officials accountable,” civil society representatives attending BIRN and Share Foundation’s online event on Thursday agreed.

The event took place after the release of BIRN and SHARE Foundation’s report, Digital Rights Falter amid Political and Social Unrest, published the same day.

“We need to build an alliance of coalitions to raise awareness on digital rights and the accountability of politicians,” said Blerjana Bino, from SCiDEV, an Albania-based NGO closely following this issue.

When it comes to prevention and the possibility of improving digital competencies in order to reduce risks to personal data and security, speakers agreed that digital and informational literacy is important – but the blame should not be put only on users.

The responsibility of tech giants and relevant state institutions to investigate such cases must be kept in mind, not just routine cases but also those that are more complicated, the panel concluded.

Uros Misljenovic, from Partners Serbia, sees a major part of the problem in the lack of response from the authorities.

“We haven’t had one major case reaching an epilogue in court. Not a single criminal charge was brought by the public prosecutor either. Basically, the police and prosecutors are not interested in prosecuting these crimes,” he said. “So, if you violate these rights, you will face no consequences,” he concluded.

The report was presented and discussed at an online panel discussion with policymakers, journalists and civil society members around digital rights in Southeast Europe.

It was the first in a series of events as part of Platform B – a platform that aims to amplify the voices of strong and credible individuals and organisations in the region that promote the core values of democracy, such as civic engagement, independent institutions, transparency and rule of law.

Between August 2019 and December 2020, BIRN and the SHARE Foundation verified more than 800 violations of digital rights, including attempts to prevent valid freedom of speech (trolling of media and the public engaged in fair reporting and comment, for example) and at the other end of the scale, efforts to overwhelm users with false information and racist/discriminatory content – usually for financial or political gain.

The lack of awareness of digital rights violations within society has further undermined democracy, not only in times of crisis, the report reads, and it identifies common trends, such as:

  • Democratic elections being undermined
  • Public service websites being hacked
  • Provocation and exploitation of social unrest
  • Conspiracy theories and fake news
  • Online hatred, leaving vulnerable people more isolated
  • Tech shortcuts failing to solve complex societal problems.

The report, Digital Rights Falter amid Political and Social Unrest, can be downloaded here.

Digital Rights Falter amid Political and Social Unrest

When the global pandemic halted our “offline” lives, we moved meetings, dinners, parties, shopping and protests to the online sphere. As we sought comfort, education, business and social life in the digital world, our only remaining public sphere also became overwhelmed with content designed to manipulate and misinform citizens.

Journalists, civil society activists, officials and the general public have faced vicious attacks – including verbal abuse, trolling, smear campaigns and undue pressure to retract content – in response to publishing information online. Much of our data was stolen, and our privacy endangered. Surveillance flourished.

From August 2019 until December 2020, BIRN and the SHARE Foundation gathered information on digital rights violations in Bosnia and Herzegovina, Croatia, North Macedonia, Hungary, Romania and Serbia, and our monitoring shows that violations of digital rights continued at an alarming rate in all six countries.

As all six held elections during this period – local, general and/or presidential – our findings raise serious concerns about how the digital arena has been effectively hijacked to propagate fake news, hate-fuelled conspiracy theories and misinformation in support of offline efforts to sabotage democratic processes.

Just when people needed factually-correct information and governments needed close scrutiny to ensure high standards in public life, cyberattacks were launched against state bodies and the public were overwhelmed with false information and discriminatory content designed to manipulate voting and/or stoke hatred of particular groups.

Governments, on the other hand, used the pandemic to curb freedom of expression and abused health data, while many public institutions failed to meet the standards of a free and open internet.

During this period, BIRN and the SHARE Foundation verified more than 800 violations of digital rights including efforts to prevent valid freedom of speech (trolling of media and general public engaged in fair reporting and comment, for example) and at the other end of the scale, efforts to overwhelm users with false information and racist/discriminatory content – usually for financial or political gain.

Most of the online violations we monitored fell under the category of pressure because of expression and activities (375), while the fewest were classified as holding intermediaries liable (0).

Illustration: BIRN/Igor Vujcic

Action was taken in just 21 per cent of cases, which usually entailed – depending on the type of violation – removing articles or deleting posts and/or comments by the general public and public sector organisations. During the COVID-19 crisis, we saw a rise in arrests of citizens accused of causing panic by publishing fake news on social media; Hungary, Serbia, and Bosnia and Herzegovina led this trend. Legal action, including arrests, penalties or other court action, was taken in less than 0.5 per cent of all monitored cases.

It is important to note that just as some violations included attempts to stifle free speech and frustrate freedom of expression through publishing falsehoods, not all legal actions launched to apparently hold intermediaries liable were legitimate attempts to protect freedom of speech. Some were cynical attempts against the public interest to block the publication of proven facts.

All these violations have contributed to an atmosphere dominated by fear and hatred, with already vulnerable communities – such as LGBT+ people, groups identifying as female, migrants and particular ethnic groups – subjected to worse and more frequent abuse, leaving them ever more isolated from support networks.

Those guilty of using the digital space to undermine democracy, to intimidate others from publishing the truth or to spread malicious falsehoods operate with impunity, not least because there is no meaningful sense in the region of what constitutes digital rights – never mind the desire or the means to protect those rights.

Our report is the first effort at the regional level to map current challenges in the digital sphere and aims to fill gaps in the research. We took an interdisciplinary approach and looked at the problems from legal, political, technological and societal angles, to show that the solutions to these violations should also be holistic and overarching. We also want to highlight these issues, as the lack of awareness of digital rights violations within society further undermines democracy, not only in times of crisis.

We see the internet not only as open and transparent; we also see digital evolution as a set of mechanisms and tools with great potential to serve people’s needs. And let’s not forget that internet access has proved indispensable in times of crisis such as the COVID-19 pandemic.

We hope this report will serve not just as a stock-taking exercise but will be understood as a map showing what to do, and how, to further advance our rights, and also as an invitation to everyone to join forces in making our digital world healthy, too.

Marija Ristic is regional director of Balkan Investigative Reporting Network. Danilo Krivokapic is director of SHARE Foundation.

The report “Digital Rights Falter amid Political and Social Unrest” can be downloaded here.

As part of our Platform B, we are also hosting a discussion with policymakers, journalists and civil society members around digital rights in Southeast Europe. Register here.

Platform B: Amplifying Strong and Credible SEE Voices

Together with our partners, BIRN is launching a series of online and offline events aimed at amplifying the voices of strong and credible individuals and organisations in the region that promote the core values of democracy, such as civic engagement, independent institutions, transparency and the rule of law.

As primarily a media organisation, we want to open space and provide a platform to discuss and reshape our alliances in light of the challenges facing democracies in South-East and Central Europe.

This effort comes at a critical time when the region is seeing several troubling trends: centralized power, reduced transparency, assaults on media, politicized judiciaries, unchecked corruption, online violations and social polarization – all amidst heightened geopolitical tensions and deep divisions in Europe.

Due to the ongoing pandemic, the Platform B event series will be organised in accordance with all relevant health measures. As the situation improves, we hope to be able to host some of the events in BIRN spaces in Sarajevo and Belgrade, and elsewhere in the region.

Platform B will be an opportunity for individuals and groups to meet monthly on selected topics.

Illustration: Marta Klawe Rzeczy

Opening event: Digital Rights Falter Amid Political and Social Unrest: What Now?

Date: 1 July, 2021 (Thursday)

Time: 15.00, CET

At this event, BIRN and the SHARE Foundation will discuss their annual digital rights report, together with other members of the newly established SEE Network, talking about the key trends concerning the digital ecosystem.

We monitored digital rights in Bosnia and Herzegovina, Croatia, Hungary, North Macedonia, Romania and Serbia and collected more than 1,500 cases of online violations.

In Southern and Eastern Europe, where online disinformation campaigns are endangering guaranteed individual freedoms and the decline in internet safety has become a worrying trend, citizens with poor media and digital literacy have been left without viable protection mechanisms.

The event participants will have an opportunity to discuss and hear reflections from representatives of: EDRi, Zasto ne?, Citizen D, Homo Digitalis, SCiDEV, Partners Serbia, Metamorphosis, Atina NGO, Media Development Center.

More information and registration

Second event: Freedom of Information in the Balkans: Classified, Rejected, Delayed

Date: July 15, 2021 (Thursday)

Time: 14.00, CET

The global pandemic has been used as an excuse by many Balkan states not to fully implement freedom of information laws, leaving the public in the dark.

Transparency has been another victim of the COVID-19 pandemic.

While on paper, freedom of information laws are up-to-date in almost all countries in the region, implementation is patchy at best and has grown worse since governments clamped down on the flow of information with the onset of the coronavirus.

Together with journalists, public information officers and colleagues from the Open Government Partnership, we will reflect on the findings of BIRN’s report tracking institutional transparency and offer recommendations on how to make our institutions open and accountable.

Registration form will be available here soon.

Events in August and in the fall will focus on investigative journalism and gender justice.

Facebook, Twitter Struggling in Fight against Balkan Content Violations

Partners Serbia, a Belgrade-based NGO that works on initiatives to combat corruption and develop democracy and the rule of law in the Balkan country, had been on Twitter for more than nine years when, in November 2020, the social media giant suspended its account.

Twitter gave no notice or explanation of the suspension, but Ana Toskic Cvetinovic, the executive director of Partners Serbia, had a hunch – that it was the result of a “coordinated attack”, probably other Twitter users submitting complaints about how the NGO was using its account.

“We tried for days to get at least some information from Twitter, like what could be the cause and how to solve the problem, but we haven’t received any answer,” Toskic Cvetinovic told BIRN. “After a month of silence, we saw that a new account was the only option.” 

Twitter lifted the suspension in January, again without explanation. But Partners Serbia is far from alone among NGOs, media organisations and public figures in the Balkans who have had their social media accounts suspended without proper explanation or sometimes any explanation at all, according to BIRN monitoring of digital rights and freedom violations in the region.

Experts say the lack of transparency is a significant problem for those using social media as a vital channel of communication, not least because they are left in the dark as to what can be done to prevent such suspensions in the future.

But while organisations like Partners Serbia can face arbitrary suspension, half of the posts on Facebook and Twitter that are reported as hate speech, threatening violence or harassment in Bosnian, Serbian, Montenegrin or Macedonian remain online, according to the results of a BIRN survey, despite confirmation from the companies that the posts violated rules.

The investigation shows that the tools used by social media giants to protect their community guidelines are failing: posts and accounts that violate the rules often remain available even when breaches are acknowledged, while others that remain within those rules can be suspended without any clear reason.

Among BIRN’s findings are the following:

  • Almost half of reports in Bosnian, Serbian, Montenegrin or Macedonian language to Facebook and Twitter are about hate speech
  • One in two posts reported as hate speech, threatening violence or harassment in Bosnian, Serbian, Montenegrin or Macedonian language, remains online. When it comes to reports of threatening violence, the content was removed in 60 per cent of cases, and 50 per cent in cases of targeted harassment.
  • Facebook and Twitter are using a hybrid model, a combination of artificial intelligence and human assessment in reviewing such reports, but declined to reveal how many of them are actually reviewed by a person proficient in Bosnian, Serbian, Montenegrin or Macedonian
  • Both social networks adopt a “proactive approach”, which means they remove content or suspend accounts even without a report of suspicious conduct, but the criteria employed are unclear and transparency is lacking.
  • The survey showed that people were more ready to report content targeting them or minority groups.

Experts say the biggest problem could be the lack of transparency in how social media companies assess complaints. 

The assessment itself is done in the first instance by an algorithm and, if necessary, a human gets involved later. But BIRN’s research shows that things get messy when it comes to the languages of the Balkans, precisely because of the specificity of language and context.

Distinguishing harsh criticism from defamation, or radical political opinions from expressions of hatred and racism or incitement to violence, requires contextual and nuanced analysis.

Half of the posts containing hate speech remain online


Graphic: BIRN/Igor Vujcic

Facebook and Twitter are among the most popular social networks in the Balkans. The scope of their popularity is demonstrated in a 2020 report by DataReportal, an online platform that analyses how the world uses the Internet.

In January, there were around 3.7 million social media users in Serbia, 1.1 million in North Macedonia, 390,000 in Montenegro and 1.7 million in Bosnia and Herzegovina.

In each of the countries, Facebook is the most popular, with an estimated three million users in Serbia, 970,000 in North Macedonia, 300,000 in Montenegro and 1.4 million in Bosnia and Herzegovina.

Such numbers make Balkan countries attractive for advertising but also for the spread of political messages, opening the door to violations.

The debate over the benefits and the dangers of social media for 21st century society is well known.

In terms of violent content, besides the use of Artificial Intelligence, or AI, social media giants are trying to give users the means to react as well, chiefly by reporting violations to network administrators. 

There are three kinds of filters – manual filtering by humans; automated filtering by algorithmic tools; and hybrid filtering, performed by a combination of humans and automated tools.

In cases of uncertainty, posts or accounts are submitted to human review before decisions are taken, or afterwards in the event a user complains about an automated removal.
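The hybrid flow described above can be sketched in a few lines of code: an automated classifier acts on clear-cut cases, uncertain posts are escalated to human moderators, and a user complaint about an automated removal is routed back to human review. The Python snippet below is a minimal, hypothetical illustration of that logic; the thresholds, names and queue are assumptions made for the example, not any platform’s actual implementation.

```python
# Minimal, hypothetical sketch of hybrid content moderation (illustrative only).
from dataclasses import dataclass, field
from typing import List

# Illustrative confidence thresholds (assumptions, not real platform values).
REMOVE_THRESHOLD = 0.9  # auto-remove when the classifier is very confident
KEEP_THRESHOLD = 0.1    # auto-keep when the classifier sees almost no risk


@dataclass
class ModerationQueue:
    pending_human_review: List[str] = field(default_factory=list)

    def decide(self, post_id: str, violation_score: float) -> str:
        """Route a post based on the automated classifier's confidence score."""
        if violation_score >= REMOVE_THRESHOLD:
            return f"{post_id}: removed automatically"
        if violation_score <= KEEP_THRESHOLD:
            return f"{post_id}: kept automatically"
        # Uncertain cases are escalated to a human moderator before any action.
        self.pending_human_review.append(post_id)
        return f"{post_id}: queued for human review"

    def appeal(self, post_id: str) -> str:
        """A user complaint about an automated removal also goes to human review."""
        self.pending_human_review.append(post_id)
        return f"{post_id}: appeal queued for human review"


if __name__ == "__main__":
    queue = ModerationQueue()
    print(queue.decide("post-1", 0.95))  # clear violation, removed automatically
    print(queue.decide("post-2", 0.45))  # ambiguous, sent to a human moderator
    print(queue.appeal("post-1"))        # user complaint routes back to a human
```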

“Today, we primarily rely on AI for the detection of violating content on Facebook and Instagram, and in some cases to take action on the content automatically as well,” a Facebook spokesperson told BIRN. “We utilize content reviewers for reviewing and labelling specific content, particularly when technology is less effective at making sense of context, intent or motivation.”

Twitter told BIRN that it is increasing the use of machine learning and automation to enforce the rules.

“Today, by using technology, more than 50 per cent of abusive content that’s enforced on our service is surfaced proactively for human review instead of relying on reports from people using Twitter,” said a company spokesperson.

“We have strong and dedicated teams of specialists who provide 24/7 global coverage in multiple different languages, and we are building more capacity to address increasingly complex issues.”

In order to check how effective those mechanisms are when it comes to content in Balkan languages, BIRN conducted a survey focusing on Facebook and Twitter reports, divided into three categories: violent threats (direct or indirect), harassment and hateful conduct.

The survey asked for the language of the disputed content, who was the target and who was the author, and whether or not the report was successful.

Over 48 per cent of respondents reported hate speech, some 20 per cent reported targeted harassment and some 17 per cent reported threatening violence. 

The survey showed that people were more ready to report content targeting them or minority groups.

According to the survey, 43 per cent of content reported as hate speech remained online, while 57 per cent was removed. When it comes to reports of threatening violence, content was removed in 60 per cent of cases. 

Roughly half of reports of targeted harassment resulted in removal.

Chloe Berthelemy, a policy advisor at European Digital Rights, EDRi, which works to promote digital rights, says the real-life consequences of neglect can be disastrous. 

“For example, in cases of image-based sexual abuse [often wrongly called “revenge porn”], the majority of victims are women and they suffer from social exclusion as a result of these attacks,” Berthelemy said in a written response to BIRN. “For example, they can be discriminated against on the job market because recruiters search their online reputation.”

 Content removal – censorship or corrective?


Graphic: BIRN/Igor Vujcic.

According to the responses to BIRN’s questionnaire, some 57 per cent of those who reported hate speech said they were notified that the reported post/account violated the rules. 

On the other hand, some 28 per cent said they had received notification that the content they reported did not violate the rules, while 14 per cent received only confirmation that their report was filed.

In terms of reports of targeted harassment, half of people said they received confirmation that the content violated the rules; 16 per cent were told the content did not violate rules. A third of those who reported targeted harassment only received confirmation their report was received.  

As for threatening violence, 40 per cent of people received confirmation that the reported post/account violated the rules while 60 per cent received only confirmation their complaint had been received.

One of the respondents told BIRN they had reported at least seven accounts for spreading hatred and violent content. 

“I do not engage actively on such reports nor do I keep looking and searching them. However, when I do come across one of these hateful, genocide deniers and genocide supporters, it feels the right thing to do, to stop such content from going further,” the respondent said, speaking on condition of anonymity. “Maybe one of all the reported individuals stops and asks themselves what led to this and simply opens up discussions, with themselves or their circles.”

Although Twitter confirmed that those seven accounts violated some of its rules, six of them are still available online.

Another issue that emerged is the unclear criteria for reporting violations. Basic knowledge of English is also required.

Sanjana Hattotuwa, special advisor at the ICT4Peace Foundation, agreed that the in-app or web-based reporting process is confusing.

“Moreover, it is often in English even though the rest of the UI/UX [User Interface/User Experience] could be in the local language. Furthermore, the laborious selection of categories is, for a victim, not easy – especially under duress.”

Facebook told BIRN that the vast majority of reports are reviewed within 24 hours and that the company uses community reporting, human review and automation.

It refused, however, to give any specifics on those it employs to review content or reports in Balkan languages, saying “it isn’t accurate to only give the number of content reviewers”.

BIRN methodology 

BIRN conducted its questionnaire via the network’s tool for engaging citizens in reporting, developed in cooperation with the British Council.

The anonymous questionnaire had the aim of collecting information on what type of violations people reported, who was the target and how successful the report was. The questions were available in English, Macedonian, Albanian and Bosnian/Serbian/Montenegrin. BIRN focused on Facebook and Twitter given their popularity in the Balkans and the sensitivity of shared content, which is mostly textual and harder to assess compared to videos and photos.

“That alone doesn’t reflect the number of people working on a content review for a particular country at any given time,” the spokesperson said. 

Social networks often remove content themselves, in what they call a ‘proactive approach’. 

According to data provided by Facebook, in the last quarter of 2017 their proactive detection rate was 23.6 per cent.

“This means that of the hate speech we removed, 23.6 per cent of it was found before a user reported it to us,” the spokesperson said. “The remaining majority of it was removed after a user reported it. Today we proactively detect about 95 per cent of hate speech content we remove.”

“Whether content is proactively detected or reported by users, we often use AI to take action on the straightforward cases and prioritise the more nuanced cases, where context needs to be considered, for our reviewers.”

There is no available data, however, when it comes to content in a specific language or country.

Facebook publishes a Community Standards Enforcement Report on a quarterly basis, but, according to the spokesperson, the company does not “disclose data regarding content moderation in specific countries.”

Whatever the tools, the results are sometimes highly questionable.

In May 2018, Facebook blocked the profile of Bosnian journalist Dragan Bursac for 24 hours after he posted a photo of a detention camp for Bosniaks in Serbia during the collapse of federal Yugoslavia in the 1990s.

Facebook determined that Bursac’s post had violated “community standards,” local media reported.

Bojan Kordalov, Skopje-based public relations and new media specialist, said that, “when evaluating efficiency in this area, it is important to emphasise that the traffic in the Internet space is very dense and is increasing every second, which unequivocally makes it a field where everyone needs to contribute”.

“This means that social media managements are undeniably responsible for meeting the standards and compliance with regulations within their platforms, but this does not absolve legislators, governments and institutions of responsibility in adapting to the needs of the new digital age, nor does it give anyone the right to redefine and narrow down the notion and the benefits that democracy brings.”

Lack of language sensitivity

Illustration. Photo: Unsplash/The Average Tech Guy

SHARE Foundation, a Belgrade-based NGO working on digital rights, said the question was crucial given the huge volume of content flowing through the likes of Facebook and Twitter in all languages.

“When it comes to relatively small language groups in absolute numbers of users, such as languages in the former Yugoslavia or even in the Balkans, there is simply no incentive or sufficient pressure from the public and political leaders to invest in human moderation,” SHARE told BIRN.   

Berthelemy of EDRi said the Balkans were not a standalone example, and that the content moderation practices and policies of Facebook and Twitter are “doomed to fail.”

“Many of these corporations operate on a massive scale, some of them serving up to a quarter of the world’s population with a single service,” Berthelemy told BIRN. “It is impossible for such monolithic architecture, and speech regulation process and policy to accommodate and satisfy the specific cultural and social needs of individuals and groups.”

The European Parliament has also stressed the importance of assessing content in context.

“The expressions of hatred can be conveyed in many ways, and the same words typically used to convey such expressions can also be used for different purposes,” according to a 2020 study – ‘The impact of algorithms for online content filtering or moderation’ – commissioned by the Parliament’s Policy Department for Citizens’ Rights and Constitutional Affairs. 

“For instance, such words can be used for condemning violence, injustice or discrimination against the targeted groups, or just for describing their social circumstances. Thus, to identify hateful content in textual messages, an attempt must be made at grasping the meaning of such messages, using the resources provided by natural language processing.”
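The problem the study describes can be seen in a toy sketch: a filter that only matches keywords flags a sentence condemning abuse just as readily as an abusive one, which is why the study argues the meaning of a message has to be grasped. The flagged term and both example sentences below are invented placeholders, not taken from any real moderation system.

```python
import re

# A keyword-only filter cannot tell condemnation apart from abuse:
# both sentences contain the same placeholder term and both get flagged.
FLAGGED_TERMS = {"slur"}  # placeholder standing in for a real blocklisted word

def keyword_flag(text: str) -> bool:
    """Return True if any token in the text matches a flagged term."""
    tokens = re.findall(r"[a-z]+", text.lower())
    return any(token in FLAGGED_TERMS for token in tokens)

condemnation = "Using that slur against this community is unacceptable."
abusive_post = "They deserve to be called that slur."  # toy 'violating' example

print(keyword_flag(condemnation))  # True - flagged even though it condemns abuse
print(keyword_flag(abusive_post))  # True - flagged; only this one is abusive
```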

Hattotuwa said that, in general, “non-English language markets with non-Romanic (i.e. not English letter based) scripts are that much harder to design AI/ML solutions around”.

“And in many cases, these markets are out of sight and out of mind, unless the violence, abuse or platform harms are so significant they hit the New York Times front-page,” Hattotuwa told BIRN.

“Humans are necessary for evaluations, but as you know, there are serious emotional / PTSD issues related to the oversight of violent content, that companies like Facebook have been sued for (and lost, having to pay damages).”

Failing in non-English

Illustration. Photo: Unsplash/Ann Ann

Dragan Vujanovic of the Sarajevo-based NGO Vasa prava [Your Rights] criticised what he said was a “certain level of tolerance with regards to violations which support certain social narratives.”

“This is particularly evident in the inconsistent behavior of social media moderators where accounts with fairly innocuous comments are banned or suspended while other accounts, with overt abuse and clear negative social impact, are tolerated.”

For Chloe Berthelemy, trying to apply a uniform set of rules to the very diverse range of norms, values and opinions on all available topics that exist in the world is “meant to fail.”

“For instance, where nudity is considered to be sensitive in the United States, other cultures take a more liberal approach,” she said.

The example of Myanmar, where Facebook effectively blocked an entire language by refusing all messages written in Jinghpaw, a language spoken by Myanmar’s ethnic Kachin and written with a Roman alphabet, shows the scale of the issue.

“The platform performs very poorly at detecting hate speech in non-English languages,” Berthelemy told BIRN.

The techniques used to filter content differ depending on the media analysed, according to the 2020 study for the European Parliament.

“A filter can work at different levels of complexity, spanning from simply comparing contents against a blacklist, to more sophisticated techniques employing complex AI techniques,” it said. 

“In machine learning approaches, the system, rather than being provided with a logical definition of the criteria to be used to find and classify content (e.g., to determine what counts as hate speech, defamation, etc.) is provided with a vast set of data, from which it must learn on its own the criteria for making such a classification.”
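The machine-learning approach the study describes can be sketched in a few lines: rather than hard-coding rules, the system is given labelled examples and infers the classification criteria itself. The training sentences, labels and library choice (scikit-learn) below are hypothetical illustrations, not drawn from any platform’s actual moderation pipeline.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy labelled data: 1 = violating, 0 = acceptable. Real systems learn from
# millions of human-reviewed examples rather than four invented sentences.
train_texts = [
    "go back to where you came from",
    "this group deserves nothing but hate",
    "the weather in Sarajevo is lovely today",
    "the match was postponed until Sunday",
]
train_labels = [1, 1, 0, 0]

vectoriser = TfidfVectorizer()
classifier = LogisticRegression()
classifier.fit(vectoriser.fit_transform(train_texts), train_labels)

def ml_filter(text: str) -> bool:
    """Flag a post if the learned model predicts the 'violating' class."""
    return bool(classifier.predict(vectoriser.transform([text]))[0])

print(ml_filter("nothing but hate for this group"))   # likely flagged
print(ml_filter("lovely weather for a match today"))  # likely not flagged
```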

Users of both Twitter and Facebook can appeal in the event their accounts are suspended or blocked. 

“Unfortunately, the process lacks transparency, as the number of filed appeals is not mentioned in the transparency report, nor is the number of processed or reinstated accounts or tweets,” the study noted.

Between January and October 2020, Facebook restored some 50,000 items of content without an appeal and 613,000 after appeal.

Machine learning

As cited in the 2020 study commissioned by the European Parliament, Facebook has developed a machine learning approach called Whole Post Integrity Embeddings, WPIE, to deal with content violating Facebook guidelines. 

The system addresses multimedia content by providing a holistic analysis of a post’s visual and textual content and related comments, across all dimensions of inappropriateness (violence, hate, nudity, drugs, etc.). The company claims that automated tools have improved the implementation of Facebook content guidelines. For instance, about 4.4 million items of drug sale content were removed in just the third quarter of 2019, 97.6 per cent of which were detected proactively.
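Facebook has not published WPIE’s internals, but the general idea of scoring a post “holistically” can be sketched as fusing separately computed text and image representations into a single vector and classifying the combined post. Everything below, including the toy encoder functions, the vector dimension and the zeroed weights, is a hypothetical stand-in rather than the company’s implementation.

```python
import numpy as np

DIM = 8  # toy embedding size; real systems use far larger representations

def embed_text(text: str) -> np.ndarray:
    """Toy text encoder: a hash-seeded random vector standing in for a language model."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(DIM)

def embed_image(image_bytes: bytes) -> np.ndarray:
    """Toy image encoder standing in for a vision model."""
    rng = np.random.default_rng(len(image_bytes))
    return rng.standard_normal(DIM)

def score_post(text: str, image_bytes: bytes, weights: np.ndarray) -> float:
    """Score the whole post from the fused text+image vector (sigmoid of a dot product)."""
    fused = np.concatenate([embed_text(text), embed_image(image_bytes)])
    return float(1.0 / (1.0 + np.exp(-fused @ weights)))

weights = np.zeros(2 * DIM)  # placeholder weights; in practice these would be learned
print(score_post("caption of a post", b"raw image bytes", weights))  # 0.5 with zero weights
```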

When it comes to the ways in which social networks deal with suspicious content, Hattotuwa said that “context is key”. 

While acknowledging advancements in the past two to three years, Hattotuwa said that, “No AI and ML [Machine Learning] I am aware of even in English language contexts can accurately identify the meaning behind an image.”
 
“With regards to content inciting hate, hurt and harm,” he said, “it is even more of a challenge.”

According to the Twitter Transparency report, in the first six months of 2020, 12.4 million accounts were reported to the company, just over six million of which were reported for hateful conduct and some 5.1 million for “abuse/harassment”.

In the same period, Twitter suspended 925,744 accounts, of which 127,954 were flagged for hateful conduct and 72,139 for abuse/harassment. The company removed such content in a little over 1.9 million cases: 955,212 in the hateful conduct category and 609,253 in the abuse/harassment category. 

Toskic Cvetinovic said the rules needed to be clearer and better communicated to users by “living people.”

“Often, the content removal doesn’t have a corrective function, but amounts to censorship,” she said.

Berthelemy said that, “because the dominant social media platforms reproduce the social systems of oppression, they are also often unsafe for many groups at the margins.” 

“They are unable to understand the discriminatory and violent online behaviours, including certain forms of harassment and violent threats and therefore, cannot address the needs of victims,” Berthelemy told BIRN. 

“Furthermore,” she said, “those social media networks are also advertisement companies. They rely on inflammatory content to generate profiling data and thus advertisement profits. There will be no effective, systematic response without addressing the business models of accumulating and trading personal data.”

Online Impersonation is a Crime, Romanian Court Rules

Romania’s High Court of Cassation and Justice ruled on Tuesday that pretending to be someone else on Facebook is an offence punishable under the country’s criminal law.

The ruling arose from the case of a man sentenced to three years and eight months in prison for blackmail, digital fraud and breach of privacy for posting intimate images of his ex-girlfriend on a social network and opening pornography site accounts in her name.

According to the indictment, the man created the false social network account after threatening his former girlfriend in December 2018 that he would publish several videos of them having sex, as well as pictures in which she appeared naked, if she did not resume the relationship with him.

The case reached the High Court after the Court of Appeal in the Transylvanian city of Brasov in central Romania asked for its opinion about whether “opening and using an account on a social network opened to the public” to publish real “information, photographs, video images, etc.” could be considered digital fraud as defined by article 325 of the criminal code.

The High Court concluded that “opening and using an account on a social network open to the public, using as a username the name of another person and introducing real personal data that allows for that person’s identification” meets the requirements to be considered as digital fraud.

The Brasov court referred the case to the High Court because other Romanian courts had previously reached different and contradictory conclusions in similar cases.
