Child Pornography Offences Increase in Romania During Pandemic

The Romanian Directorate for Investigating Organised Crime and Terrorism, DIICOT, said on Friday that there has been an increase in the detected production and distribution of pornographic material featuring minors, as the limitations on freedom of movement brought about by the pandemic led to a dramatic increase in online interactions.

“The number of pornographic materials with minors detected by prosecution bodies and even by the private sector is on the rise, which demands that we concentrate our efforts in combating this kind of criminal activity,” the DIICOT said in its report for 2020.

The report differentiates between content produced with the participation of the perpetrators and that which has been “self-generated” by minors themselves.

Self-generated material became more prevalent in 2020, when a growing number of offenders convinced or blackmailed victims into filming or photographing themselves engaging in obscene acts. In most such instances, the minors were approached online.

Prosecutors also observed “an upsurge” in the use of livestreaming services among minors who produce pornography motivated by the “significant financial gains” they obtain.

In February 2021 alone, DIICOT reported five child pornography cases.

On February 2, a suspect was arrested in the eastern county of Buzau for allegedly approaching a female minor through a social network and obtaining from her several pictures and videos of a sexual nature, which he then distributed online.

On February 11, another suspect was apprehended in the north of Romania on charges of blackmail, child pornography and corrupting a child. According to prosecutors, between August 2020 and February 2021 the suspect recruited an unspecified number of minors online to send pornographic content to him.

The suspect then used the images as tools of blackmail to threaten the children to supply him with more material, prosecutors alleged. He has been remanded in custody for 30 days and will face trial.

PiS-Friendly High Courts in Poland Conspire to Restrict Access to Public Info

The chief justice of Poland’s Supreme Court, a presidential appointee, has asked the government-controlled Constitutional Tribunal to declare key elements of the law on access to public information as unconstitutional, which experts warn could bring about an end to government transparency.

Since being elected in 2015, the nationalist-populist Law and Justice (PiS) government has suffered a series of scandals – some uncovered through the use of transparency laws – concerning the appointments of relatives and friends of PiS politicians to public office, as well as the favouring of friendly foundations, media and other institutions in the distribution of public funds.

Observers speculate that another intended effect of the request could be to shield from media scrutiny certain state companies, which PiS is increasingly using to achieve political ends, such as taking control of critical media.

It emerged Wednesday, via a personal tweet from Miroslaw Wroblewski, a lawyer and director of the constitutional law team at the office of the Polish Ombudsman, that Malgorzata Manowska, the chief justice of the Supreme Court, had formally requested on February 16 that the Constitutional Tribunal assess the constitutionality of several aspects of Poland’s law governing access to public information.

Manowska, who was appointed to her position last year by the PiS-allied President Andrzej Duda, claimed in her submission that the law does not sufficiently specify the scope of concepts such as “public authorities”, “other entities performing public tasks”, “persons exercising public functions” and “the relation to the exercise of public functions”. As a consequence, she wrote, the concepts are stretched unlawfully broadly, meaning too many public bodies and officials are held accountable.

The chief justice also challenged the obligation of state bodies to provide information about public officials, “including their personal data and information belonging to their private sphere”, which she argued is both unconstitutional and contrary to the European Convention on Human Rights. Yet this could also cover, potentially, information about supplementary sources of income or conflicts of interest within an official’s family.

Wroblewski took to social media to comment on Manowska’s request: “I wouldn’t be surprised if we’ll remember this date [February 16] as the end of government transparency”.

Krzysztof Izdebski, from the open-government watchdog Fundacja ePanstwo, anticipates that public institutions which have been taken to court by journalists for not providing requested public information on time will now start filing requests to suspend their trials until the Constitutional Tribunal rules. “This request is also meant to have a chilling effect on citizens and journalists,” Izdebski commented.

Manowska was nominated to the Supreme Court by the National Council of the Judiciary, KRS, a body whose independence from the ruling party has been questioned by both the Court of Justice of the European Union and the Polish Supreme Court itself. Although an experienced judge and former dean of the National School of Judiciary and Public Prosecution, she has also served as a deputy justice minister under Justice Minister Zbigniew Ziobro, raising concerns about her independence from him and PiS.

In turn, the Constitutional Tribunal was the first body in the justice system that PiS sought to establish political control over when it took power, while its President, Julia Przylebska, is known to be a personal friend of PiS leader Jaroslaw Kaczynski.

Slovenia Criticised for Suspending National News Agency’s Funding

The Slovenian Government Communication Office, UKOM, has faced strong criticism after it announced this week that it will suspend payment for the services provided by the Slovenian Press Agency, STA, in January – the second time it has suspended payments to the state-funded STA in recent months.

“This is the most blatant example of the goals and strategies of [Prime Minister] Janez Jansa to get all the media under control,” prominent Slovenian investigative journalist Blaz Zgaga told BIRN.

“The Slovenian Press Agency is actually the backbone of the Slovenian media system because it covers many events in politics and society that other media do not cover… everything depends on the STA,” Zgaga said.

He added that if the agency falls under political control, right-wing premier Jansa will have a greater influence on all the other media that depend on the material provided and events organised by the STA.

UKOM told BIRN on Thursday that it has not stopped funding the STA but has only “refused to pay the invoice that STA Director Bojan Veselinovic sent to UKOM for reasons unknown to us”.

It said that “as of 31 December 2020 all the contracts concluded between UKOM and STA expired”.

Veselinovic has argued that the budget allocations for funding the STA had already been set out by the government for this year, regardless of whether a contract with the founder has been signed or not, and that all required documents are always available to the government and relevant supervisory bodies.

UKOM also told BIRN that it rejects “any bizarre allegations of anyone ever exerting pressure on STA editors or journalists”.

It said it had asked STA director Veselinovic to “publish the names of the officials who are believed to have pressured the editors or journalists, because that would be unacceptable. So far, we have not received any reply.”

UKOM also refused to pay monthly instalments for the public service provided by the STA for October and November.

The Slovene Association of Journalists, DNS, the European Alliance of News Agencies, EANA, and the International Press Institute, IPI, voiced support for STA.

“The latest denial of funding of STA by the Slovenian government is yet another politically-motivated attempt to destabilize the financial footing of the country’s press agency. Payment should be resumed immediately,” the IPI wrote on Twitter.

BIRN asked UKOM to respond to its critics’ accusations but did not receive a reply by the time of publication.

STA employees said in a statement on Thursday that the UKOM’s decision is another “attempt to dismantle and destroy” the agency and that they “cannot agree to any diktats about how and what to report”.

They also said that a group of individuals, about whom it was “clear at first glance which political party they belong to”, have even announced the establishment of an alternative national news agency “which would be more Slovenian and objective than the STA”.

Last month Slovenian media reported the establishment of the new National Press Agency, NTA, whose founders are close to Jansa’s right-wing Slovenian Democratic Party, SDS.

Zgaga said that “we can imagine one scenario, in which they [the government] will cut off the STA’s funding, the STA will go down, then they will give a lot of money to this new agency”.

Slovenian and international press freedom watchdog organisations have already accused Jansa of using the coronavirus pandemic to restrict media freedoms.

His policies could attract greater international attention in the second half of this year, when Slovenia holds the presidency of the Council of the European Union.

But in a letter on Friday to the president of the European Commission, Ursula von der Leyen, Jansa said the allegations that he has been restricting media freedoms are “absurd”.

Attack on Kosovo Investigative Journalist Condemned

International and local Kosovo press associations have condemned the attack against an investigative journalist who was brutally beaten near his house in Fushe Kosove/Kosovo Polje, at around midnight on Wednesday.

Visar Duriqi, a journalist of the local Kosovo online news portal Insajderi as well as the author and producer of local show INDOKS, was assaulted by three unidentified individuals at around midnight after a TV debate.

“Three people had been waiting for the journalist Duriqi, in front of the entrance of his apartment. He was attacked as soon as he got out of his car,” Insajderi reported on Thursday.

Duriqi has authored several episodes on crime and corruption on Insajderi’s show, INDOKS.

The police are investigating the case.

“It is suspected that three masked persons attacked the victim with fists at the entrance of his apartment, causing bodily injuries. The victim was sent to the UCCK (University Clinical Center of Kosovo in Pristina), for necessary medical treatment and then he was discharged,” a police statement read.

The Association of Journalists of Kosovo, AJK, condemned the attack as a threat to freedom of “speech and media” and called on the authorities “to investigate the motives … and shed light over this case”. The AJK pledged also to inform domestic and international stakeholders.

On Thursday, the European Center for Press and Media Freedom, ECPMF, on Twitter also condemned “this brutal attack on journalist Visar Duriqi” and urged Chief Prosecutor Aleksander Lumezi “to urgently and thoroughly investigate and hold the criminals responsible to account”. 

Flutura Kusari, legal advisor at the ECPMF, wrote on Facebook that “violence against journalists in Kosovo is on the rise” and added that it can only be curbed if the punishment of attackers includes “harsh sentences”, similar to when a politician is attacked.

Online Media Needs More Self-Regulation – not Interference: BIRN Panel

State regulation of the media should be limited, with self-regulation strengthened and prioritized, journalists and media experts from the region told the third and final online public debate on online media regulation held by BIRN on Wednesday.

Panelists representing different prominent media in the region as well as legal experts focused on potential solutions for self-regulation and regulation of online media, considering the growing pressures that media in the region face, such as speed, clicks and disinformation.

The panelists agreed that the government should not have mechanisms to interfere in media content, but should provide regulation that ensures a better working environment for journalists.

Authorities should also strengthen the rule of law in terms of copyright and censorship of hate speech, pornography, and ethical violations, they said.

Flutura Kusari, legal advisor at the European Centre for Press and Media Freedom, said Kosovo has a good self-regulatory non-governmental body, the Press Council, which consists of around 30 local media representatives, all holding one another accountable.

However, self-regulation is limited due to the lack of rule of law all over the region, Kusari explained.

Geri Emiri, editor at the Albanian media outlet Amfora, explained that most of the media in Albania have self-regulatory mechanisms whereby anyone can complain of abuses.

“It is not enough if the will to self-regulate, after a complaint by a citizen, does not exist,” Emiri said, explaining the difficulties media in Albania face from the government trying to create legal methods that he said lead to “censorship”.

Emiri was referring to legislation proposed by Socialist Prime Minister Edi Rama as an “anti-defamation package”, which aimed to create an administrative body with powers to order media to take down news reports that “infringe the dignity of individuals”, under the threat of heavy fines. Critics said the law could have a chilling effect on media freedom due to its broad terms.

Goran Mihajlovski, editor-in-chief at the Macedonian media outlet Sakam da kazam, agreed that government bodies would not be right for media regulation “due to changes in government and [because] the body would politically be appointed and would open doors to more political influence and pressure on the media”.

Jelena Vasic, project manager at KRIK in Serbia, said professional media can assist in regulating the media environment in the region and in curbing fake news by “debunking” and fact-checking news that has already been published.

Vasic said she was aware that debunked fake news often does not reach all of the audience that the fake news itself has already reached, but added that, “even if half of the people who read the fake news are now faced with the facts, a good change has been made”.

Alen Altoka, head of digital media at Oslobodjenje, Bosnia, said one of the main problems behind the increase in fake news in the region is profit-driven media organisations, and suggested that Google should not allow ads on fake news portals as one solution.

BIRN engages in fact-checking and debunking fake news, as well as monitoring other digital rights violations in the region, via its Investigation Resource Desk monitoring platform, BIRD.

BIRN held its first online public debate within the Media for All project, funded by the UK government, in September 2020, followed by another debate on the topic in late December.

Turkish Court Overrules Erdogan’s Power Grab Over Anadolu Agency

In a surprise setback to the authoritarian President, the Turkish Constitutional Court ruled on Wednesday that the Presidency’s move to take direct control over the Anadolu Agency is against the constitution.

The news agency is constitutionally an autonomous institution with a budget supplied by the state. But after the country introduced an executive presidential system in 2018, concentrating power in the head of state, it was put under the direct control of the Communications Directorate of Turkish Presidency by presidential decree.

“The control of Anadolu Agency by a directorate under the Turkish Presidency does not accord with Anadolu Agency’s autonomy and may harm the objectivity of its publications,” the court said in a statement.

The statement added that such direct control undermined the institutional independence of the agency’s organisation and human resources.

Opposition parties and media experts accused President Recep Tayyip Erdogan of turning Anadolu Agency into a government mouthpiece.

The Constitutional Court made its decision following the submission of a complaint by the main opposition Republican People’s Party, CHP. Only two judges voted against, while the other 13 members voted to scrap the presidential takeover.

Anadolu Agency was established by Turkey’s founding father, Mustafa Kemal Ataturk, in 1920, mainly to tell the world about the Turkish War of Independence that followed the end of World War I and the collapse of the Ottoman Empire.

In 1925, the agency was made a private company with a view to making it a modern and independent media outlet, but with funding secured by the state.

A century on, it is now a global news agency with publications in 13 different languages, including Arabic, English, French and Russian.

The news agency has also become an important instrument in Turkey’s application of “soft power” foreign policy activism in the Balkans. It operates in Bosnian/Croatian/Serbian, Albanian and Macedonian, with regional offices in Sarajevo, Bosnia and Skopje, North Macedonia.

Abuse of Journalists Rarely Punished by Serbian Courts: Report

A report analysing court cases for crimes against journalists, published on Tuesday by the Belgrade-based Slavko Curuvija Foundation and Centre for Judicial Research, says that on average, only one in ten criminal complaints about threats to or attacks on journalists results in a court verdict.

The report, entitled ‘Protection of Freedom of Speech in the Judicial System of Serbia’, analysed 20 court cases between 2017 and 2020 that involved the alleged crimes of endangering someone’s security, general endangerment, persecution, violent behaviour and inciting ethnic, racial and religious hatred and intolerance.

“Most reports of acts against journalists don’t go any further than the prosecutor’s office. Only every tenth reported case ends with a final court decision,” the report says.

The report claims that when deciding not to press charges, “it seems that the prosecution did not consider the specifics of these cases carefully and attentively enough”.

It also says that in cases where there have been convictions, courts imposed suspended sentences in eight of them and a year of home detention in one case, while the only custodial sentence imposed was six months in jail.

The report also analyses 305 misdemeanour cases from 2017 to 2019 in which journalists, editors, publishers and media outlets were sued.

It says that most cases drag on for too long, meaning that a final judgment is often made too long after the initial incident for it to provide adequate legal satisfaction for the defendants or plaintiffs in terms of protecting their rights.

“In by far the largest number of cases, the process lasts longer than a year,” the report says.

It partly blames delays in sending out copies of verdicts, which in turn delays appeals.

The report also says that some media publish articles without properly checking the facts and the source of the information.

“Compensation is often awarded for using the image of the wrong person to illustrate an article,” it says.

OSCE Chides Kosovo for Preventing Entry of Serbian Journalists

The OSCE Mission in Kosovo has said it is “concerned” about the recent denial of entry to the country by journalistic crews from Serbia at the Jarinje crossing point.

“Such actions not only contribute to the difficulties that journalists face in conducting their work, but also send a negative message about press freedom and the tolerance for a pluralistic media landscape,” OSCE Kosovo wrote on its Facebook account.

A crew from the Radio Television Serbia, RTS, show ‘Right to Tomorrow’ was banned from entering Kosovo on Thursday. The show’s editor, Svetlana Vukumirovic, told RTS that they were refused entry because they had not announced their arrival 72 hours in advance.

“No one ever asked the show’s crew or other journalists to announce themselves in such a way before,” Vukumirovic told RTS.

Earlier, an RTS journalistic team tried to enter Kosovo on February 15, but were also denied permission. Four days later, they were officially banned from entry. The Journalists’ Association of Serbia, UNS, in a press release condemned an “attack on press freedom”.

The Association of Journalists of Kosovo and Metohija, which represents Kosovo Serb media, organised a protest on the border line on Wednesday. Association president Budimir Nicic said stopping RTS journalists from entering Kosovo was “classic harassment”.

“This is a classic harassment, this is a classic threat to human rights and media freedoms, this is a violation of all civilization values and norms, and must stop,” Nicic said at the protest.

The Serbian government’s liaison officer with Pristina, Dejan Pavicevic, told the UNS that only senior state officials had an obligation to announce their arrival in advance – not journalists.

“This only applies to top government officials … We will now ask Brussels to take concrete steps because this is a flagrant violation of the [2013 Brussels] Agreement [between Belgrade and Pristina], on freedom of movement and the right of journalists to freedom of reporting,” Pavicevic told UNS.

The Independent Journalist Association of Serbia, NUNS, warned “that the journalistic profession does not serve for political undercutting and collecting points, but to report honestly and credibly on events that are of public importance”.

Kosovo and Serbia reached an agreement about officials’ visits in 2014 that included a procedure for announcing visits of officials from one country to the other. However, both countries have continued stopping officials from entering from the other country, often without explanation.

Pandemic Leads to Rise in Cyber Abuse of Children in Albania

Thousands of children in Albania are at greater risk of harm as their lives move increasingly online during the COVID-19 pandemic, UNICEF and local experts warn.

The lockdown of the country in March last year due to the spread of the novel coronavirus, including a shift to online schooling, led to an increase in the use of the internet by children, some of them under the age of 13.

According to a 2020 UNICEF Albania study titled “A Click Away”, about 14 per cent of children interviewed reported experiencing uncomfortable online situations, while one in four said they had been in contact at least once with someone they had never met face-to-face before.

The same study said that two in 10 children reported meeting in person someone they had previously only had contact with online, and one in 10 children reported having had at least one unwanted sexual experience via the internet.

A considerable number of those who had caused these experiences were persons known to the children.

UNICEF Albania told BIRN that, after the closure of schools and the introduction of social distancing measures, more than 500,000 children found themselves faced with a new online routine. Online platforms suddenly became the new norm.

“If before the pandemic 13-year-olds or older had the opportunity to gradually become acquainted with social media, communication applications or online platforms, the pandemic suddenly exposed even the youngest children to information technology,” the office told BIRN.

Growth in child pornography sites

According to another report, by the National Centre for Safe Internet and the Centre for the Rights of the Child in Albania, there has been an alarming rise in reports of child pornography sites on the Internet.

This report, titled ‘Internet Rapists: The Internet Industry in the Face of Child and Adolescent Protection in Albania’, is based on data obtained from the National Secure Internet Platform, the ALO 116-111 National Helpline for Children and the National Centre for Secure Internet in Albania.

“The number of reported sites of child pornography has reached a record 6,273 pages, or 600 times more than a year ago,” the report states.

It said that “40 per cent of the cases of pornographic sites, videos or even images with the same content are with Albanian children, while over 60 per cent of the cases of pornography are with non-Albanian children”.

The 15-17 year-old age group is most affected by cyber incidents, it said.

Cybercrime experts at the Albanian State Police also told BIRN: “There has been a general increase in criminal offenses in the area of cybercrime.”

In August last year, UNICEF Albania published another study, “The lost cases”, noting that between 5,000 and 20,000 referrals are made annually by international partners such as Interpol, Europol and the National Centre for Missing and Exploited Children to the cybercrime department of Albanian police regarding the possession, distribution, production and use of child sexual abuse materials in Albania.

But according to official data from the Ministry of the Interior, between 2016 and 2018 only 12 cases were investigated under Article 117 of the Criminal Code, ‘pornography with minors’, and only one ended in conviction.

Facebook, Twitter Struggling in Fight against Balkan Content Violations

Partners Serbia, a Belgrade-based NGO that works on initiatives to combat corruption and develop democracy and the rule of law in the Balkan country, had been on Twitter for more than nine years when, in November 2020, the social media giant suspended its account.

Twitter gave no notice or explanation of the suspension, but Ana Toskic Cvetinovic, the executive director of Partners Serbia, had a hunch – that it was the result of a “coordinated attack”, probably other Twitter users submitting complaints about how the NGO was using its account.

“We tried for days to get at least some information from Twitter, like what could be the cause and how to solve the problem, but we haven’t received any answer,” Toskic Cvetinovic told BIRN. “After a month of silence, we saw that a new account was the only option.” 

Twitter lifted the suspension in January, again without explanation. But Partners Serbia is far from alone among NGOs, media organisations and public figures in the Balkans who have had their social media accounts suspended without proper explanation or sometimes any explanation at all, according to BIRN monitoring of digital rights and freedom violations in the region.

Experts say the lack of transparency is a significant problem for those using social media as a vital channel of communication, not least because they are left in the dark as to what can be done to prevent such suspensions in the future.

But while organisations like Partners Serbia can face arbitrary suspension, half of the posts on Facebook and Twitter that are reported as hate speech, threatening violence or harassment in Bosnian, Serbian, Montenegrin or Macedonian remain online, according to the results of a BIRN survey, despite confirmation from the companies that the posts violated rules.

The investigation shows that the tools used by social media giants to protect their community guidelines are failing: posts and accounts that violate the rules often remain available even when breaches are acknowledged, while others that remain within those rules can be suspended without any clear reason.

Among BIRN’s findings are the following:

  • Almost half of reports in Bosnian, Serbian, Montenegrin or Macedonian language to Facebook and Twitter are about hate speech
  • One in two posts reported as hate speech, threatening violence or harassment in Bosnian, Serbian, Montenegrin or Macedonian language remains online. Content reported as threatening violence was removed in 60 per cent of cases, and content reported as targeted harassment in 50 per cent of cases.
  • Facebook and Twitter are using a hybrid model, a combination of artificial intelligence and human assessment in reviewing such reports, but declined to reveal how many of them are actually reviewed by a person proficient in Bosnian, Serbian, Montenegrin or Macedonian
  • Both social networks adopt a “proactive approach”, meaning they remove content or suspend accounts even without a report of suspicious conduct, but the criteria employed are unclear and transparency is lacking.
  • The survey showed that people were more ready to report content targeting them or minority groups.

Experts say the biggest problem could be the lack of transparency in how social media companies assess complaints. 

The assessment itself is done in the first instance by an algorithm and, if necessary, a human gets involved later. But BIRN’s research shows that things get messy when it comes to the languages of the Balkans, precisely because of the specificity of language and context.

Distinguishing harsh criticism from defamation, or radical political opinions from expressions of hatred and racism or incitement to violence, requires contextual and nuanced analysis.

Half of the posts containing hate speech remain online


Graphic: BIRN/Igor Vujcic

Facebook and Twitter are among the most popular social networks in the Balkans. The scope of their popularity is demonstrated in a 2020 report by DataReportal, an online platform that analyses how the world uses the Internet.

In January, there were around 3.7 million social media users in Serbia, 1.1 million in North Macedonia, 390,000 in Montenegro and 1.7 million in Bosnia and Herzegovina.

In each of the countries, Facebook is the most popular, with an estimated three million users in Serbia, 970,000 in North Macedonia, 300,000 in Montenegro and 1.4 million in Bosnia and Herzegovina.

Such numbers make Balkan countries attractive for advertising but also for the spread of political messages, opening the door to violations.

The debate over the benefits and the dangers of social media for 21st century society is well known.

In terms of violent content, besides the use of Artificial Intelligence, or AI, social media giants are trying to give users the means to react as well, chiefly by reporting violations to network administrators. 

There are three kinds of filters: manual filtering by humans; automated filtering by algorithmic tools; and hybrid filtering, performed by a combination of humans and automated tools.

In cases of uncertainty, posts or accounts are submitted to human review before decisions are taken, or afterwards in the event that a user complains about an automated removal.

“Today, we primarily rely on AI for the detection of violating content on Facebook and Instagram, and in some cases to take action on the content automatically as well,” a Facebook spokesperson told BIRN. “We utilize content reviewers for reviewing and labelling specific content, particularly when technology is less effective at making sense of context, intent or motivation.”

Twitter told BIRN that it is increasing the use of machine learning and automation to enforce the rules.

“Today, by using technology, more than 50 per cent of abusive content that’s enforced on our service is surfaced proactively for human review instead of relying on reports from people using Twitter,” said a company spokesperson.

“We have strong and dedicated teams of specialists who provide 24/7 global coverage in multiple different languages, and we are building more capacity to address increasingly complex issues.”

In order to check how effective those mechanisms are when it comes to content in Balkan languages, BIRN conducted a survey focusing on Facebook and Twitter reports and divided into three categories: violent threats (direct or indirect), harassment and hateful conduct. 

The survey asked for the language of the disputed content, who was the target and who was the author, and whether or not the report was successful.

Over 48 per cent of respondents reported hate speech, some 20 per cent reported targeted harassment and some 17 per cent reported threatening violence. 

The survey showed that people were more ready to report content targeting them or minority groups.

According to the survey, 43 per cent of content reported as hate speech remained online, while 57 per cent was removed. When it comes to reports of threatening violence, content was removed in 60 per cent of cases. 

Roughly half of reports of targeted harassment resulted in removal.

Chloe Berthelemy, a policy advisor at European Digital Rights, EDRi, which works to promote digital rights, says the real-life consequences of neglect can be disastrous. 

“For example, in cases of image-based sexual abuse [often wrongly called “revenge porn”], the majority of victims are women and they suffer from social exclusion as a result of these attacks,” Berthelemy said in a written response to BIRN. “For example, they can be discriminated against on the job market because recruiters search their online reputation.”

Content removal – censorship or corrective?


Graphic: BIRN/Igor Vujcic.

According to the responses to BIRN’s questionnaire, some 57 per cent of those who reported hate speech said they were notified that the reported post/account violated the rules. 

On the other hand, some 28 per cent said they had received notification that the content they reported did not violate the rules, while 14 per cent received only confirmation that their report was filed.

In terms of reports of targeted harassment, half of people said they received confirmation that the content violated the rules; 16 per cent were told the content did not violate rules. A third of those who reported targeted harassment only received confirmation their report was received.  

As for threatening violence, 40 per cent of people received confirmation that the reported post/account violated the rules while 60 per cent received only confirmation their complaint had been received.

One of the respondents told BIRN they had reported at least seven accounts for spreading hatred and violent content. 

“I do not engage actively on such reports nor do I keep looking and searching them. However, when I do come across one of these hateful, genocide deniers and genocide supporters, it feels the right thing to do, to stop such content from going further,” the respondent said, speaking on condition of anonymity. “Maybe one of all the reported individuals stops and asks themselves what led to this and simply opens up discussions, with themselves or their circles.”

Although Twitter confirmed that those seven accounts violated some of its rules, six of them are still available online.

Another issue that emerged is the unclear criteria for reporting violations; a basic knowledge of English is also required.

Sanjana Hattotuwa, special advisor at the ICT4Peace Foundation, agreed that the in-app or web-based reporting process is confusing.

“Moreover, it is often in English even though the rest of the UI/UX [User Interface/User Experience] could be in the local language. Furthermore, the laborious selection of categories is, for a victim, not easy – especially under duress.”

Facebook told BIRN that the vast majority of reports are reviewed within 24 hours and that the company uses community reporting, human review and automation.

It refused, however, to give any specifics on those it employs to review content or reports in Balkan languages, saying “it isn’t accurate to only give the number of content reviewers”.

BIRN methodology 

BIRN conducted its questionnaire via the network’s tool for engaging citizens in reporting, developed in cooperation with the British Council.

The anonymous questionnaire had the aim of collecting information on what type of violations people reported, who was the target and how successful the report was. The questions were available in English, Macedonian, Albanian and Bosnian/Serbian/Montenegrin. BIRN focused on Facebook and Twitter given their popularity in the Balkans and the sensitivity of shared content, which is mostly textual and harder to assess compared to videos and photos.

“That alone doesn’t reflect the number of people working on a content review for a particular country at any given time,” the spokesperson said. 

Social networks often remove content themselves, in what they call a ‘proactive approach’. 

According to data provided by Facebook, in the last quarter of 2017 their proactive detection rate was 23.6 per cent.

“This means that of the hate speech we removed, 23.6 per cent of it was found before a user reported it to us,” the spokesperson said. “The remaining majority of it was removed after a user reported it. Today we proactively detect about 95 per cent of hate speech content we remove.”

“Whether content is proactively detected or reported by users, we often use AI to take action on the straightforward cases and prioritise the more nuanced cases, where context needs to be considered, for our reviewers.”

There is no available data, however, when it comes to content in a specific language or country.

Facebook publishes a Community Standards Enforcement Report on a quarterly basis, but, according to the spokesperson, the company does not “disclose data regarding content moderation in specific countries.”

Whatever the tools, the results are sometimes highly questionable.

In May 2018, Facebook blocked for 24 hours the profile of Bosnian journalist Dragan Bursac after he posted a photo of a detention camp for Bosniaks in Serbia during the collapse of federal Yugoslavia in the 1990s. 

Facebook determined that Bursac’s post had violated “community standards,” local media reported.

Bojan Kordalov, Skopje-based public relations and new media specialist, said that, “when evaluating efficiency in this area, it is important to emphasise that the traffic in the Internet space is very dense and is increasing every second, which unequivocally makes it a field where everyone needs to contribute”.

“This means that social media managements are undeniably responsible for meeting the standards and compliance with regulations within their platforms, but this does not absolve legislators, governments and institutions of responsibility in adapting to the needs of the new digital age, nor does it give anyone the right to redefine and narrow down the notion and the benefits that democracy brings.”

Lack of language sensitivity

Illustration. Photo: Unsplash/The Average Tech Guy

SHARE Foundation, a Belgrade-based NGO working on digital rights, said the question was crucial given the huge volume of content flowing through the likes of Facebook and Twitter in all languages.

“When it comes to relatively small language groups in absolute numbers of users, such as languages in the former Yugoslavia or even in the Balkans, there is simply no incentive or sufficient pressure from the public and political leaders to invest in human moderation,” SHARE told BIRN.   

Berthelemy of EDRi said the Balkans were not a standalone example, and that the content moderation practices and policies of Facebook and Twitter are “doomed to fail.”

“Many of these corporations operate on a massive scale, some of them serving up to a quarter of the world’s population with a single service,” Berthelemy told BIRN. “It is impossible for such monolithic architecture, and speech regulation process and policy to accommodate and satisfy the specific cultural and social needs of individuals and groups.”

The European Parliament has also stressed the importance of a combined assessment.

“The expressions of hatred can be conveyed in many ways, and the same words typically used to convey such expressions can also be used for different purposes,” according to a 2020 study – ‘The impact of algorithms for online content filtering or moderation’ – commissioned by the Parliament’s Policy Department for Citizens’ Rights and Constitutional Affairs. 

“For instance, such words can be used for condemning violence, injustice or discrimination against the targeted groups, or just for describing their social circumstances. Thus, to identify hateful content in textual messages, an attempt must be made at grasping the meaning of such messages, using the resources provided by natural language processing.”

Hattotuwa said that, in general, “non-English language markets with non-Romanic (i.e. not English letter based) scripts are that much harder to design AI/ML solutions around”.

“And in many cases, these markets are out of sight and out of mind, unless the violence, abuse or platform harms are so significant they hit the New York Times front-page,” Hattotuwa told BIRN.

“Humans are necessary for evaluations, but as you know, there are serious emotional / PTSD issues related to the oversight of violent content, that companies like Facebook have been sued for (and lost, having to pay damages).”

Failing in non-English

Illustration. Photo: Unsplash/Ann Ann

Dragan Vujanovic of the Sarajevo-based NGO Vasa prava [Your Rights] criticised what he said was a “certain level of tolerance with regards to violations which support certain social narratives.”

“This is particularly evident in the inconsistent behavior of social media moderators where accounts with fairly innocuous comments are banned or suspended while other accounts, with overt abuse and clear negative social impact, are tolerated.”

For Berthelemy, trying to apply a uniform set of rules to the very diverse range of norms, values and opinions on all available topics that exist in the world is “meant to fail.” 

“For instance, where nudity is considered to be sensitive in the United States, other cultures take a more liberal approach,” she said.

The example of Myanmar, where Facebook effectively blocked an entire language by refusing all messages written in Jinghpaw, a language spoken by Myanmar’s ethnic Kachin and written with a Roman alphabet, shows the scale of the issue.

“The platform performs very poorly at detecting hate speech in non-English languages,” Berthelemy told BIRN.

The techniques used to filter content differ depending on the media analysed, according to the 2020 study for the European Parliament.

“A filter can work at different levels of complexity, spanning from simply comparing contents against a blacklist, to more sophisticated techniques employing complex AI techniques,” it said. 

“In machine learning approaches, the system, rather than being provided with a logical definition of the criteria to be used to find and classify content (e.g., to determine what counts as hate speech, defamation, etc.) is provided with a vast set of data, from which it must learn on its own the criteria for making such a classification.”
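
The contrast the study draws, between comparing content against a blacklist and learning classification criteria from labelled data, can be sketched in a few lines. This is a toy illustration only: the example posts, the word-counting “learner” and the blacklist entries are all invented for demonstration:

```python
# Two filtering approaches from the study, sketched with toy data (all
# examples hypothetical): a static blacklist comparison versus a system
# that derives its own criteria from labelled examples.
from collections import Counter

def blacklist_filter(text, blacklist):
    """Simplest filter: flag a post if any word is on a fixed blacklist."""
    return any(word in blacklist for word in text.lower().split())

def learn_criteria(labelled_posts):
    """Stand-in for machine learning: count how often each word appears in
    violating vs. non-violating posts, and keep the words that occur
    mostly in violating ones."""
    bad, good = Counter(), Counter()
    for text, violates in labelled_posts:
        target = bad if violates else good
        target.update(text.lower().split())
    return {w for w, c in bad.items() if c > good.get(w, 0)}

training = [
    ("go back where you came from", True),
    ("they should all be driven out", True),
    ("the match was driven by great defence", False),
    ("welcome back from your trip", False),
]
learned = learn_criteria(training)

post = "driven out again"
print(blacklist_filter(post, {"slur1", "slur2"}))  # False: not on the static list
print(any(w in learned for w in post.split()))     # True: learned criteria flag it
```

The example also shows the study's caveat in miniature: “driven” appears in both a violating and an innocent post, so no word list, static or learned, can settle such cases without grasping the meaning of the message.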

Users of both Twitter and Facebook can appeal in the event their accounts are suspended or blocked. 

“Unfortunately, the process lacks transparency, as the number of filed appeals is not mentioned in the transparency report, nor is the number of processed or reinstated accounts or tweets,” the study noted.

Between January and October 2020, Facebook restored some 50,000 items of content without an appeal and 613,000 after appeal.

Machine learning

As cited in the 2020 study commissioned by the European Parliament, Facebook has developed a machine learning approach called Whole Post Integrity Embeddings, WPIE, to deal with content violating Facebook guidelines. 

The system addresses multimedia content by providing a holistic analysis of a post’s visual and textual content and related comments, across all dimensions of inappropriateness (violence, hate, nudity, drugs, etc.). The company claims that automated tools have improved the implementation of Facebook content guidelines. For instance, about 4.4 million items of drug sale content were removed in just the third quarter of 2019, 97.6 per cent of which were detected proactively.
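
The holistic idea behind such a system can be loosely sketched as combining per-modality scores into one joint verdict. This illustrates the general approach only, not Facebook's actual WPIE model; the weights and threshold are arbitrary assumptions:

```python
# A loose sketch of a "whole post" classifier (not Facebook's actual
# WPIE system): hypothetical per-modality violation scores for the
# image, the text and the comments are combined into a single joint
# judgement, so the post is assessed as a whole rather than piece by piece.

def classify_post(image_score, text_score, comment_scores, threshold=0.5):
    """Combine per-modality scores (each in [0, 1]) into one verdict."""
    comment_avg = sum(comment_scores) / len(comment_scores) if comment_scores else 0.0
    # Weighted average: image and text dominate, comments add context.
    joint = 0.4 * image_score + 0.4 * text_score + 0.2 * comment_avg
    return joint >= threshold

# A post with an innocuous image but suggestive text and comments is
# still flagged, because the modalities are judged together:
print(classify_post(image_score=0.2, text_score=0.7, comment_scores=[0.8, 0.9]))  # True
```

The point of the joint score is exactly the case in the usage example: no single modality would cross the threshold on its own, but the combination does.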

When it comes to the ways in which social networks deal with suspicious content, Hattotuwa said that “context is key”. 

While acknowledging advancements in the past two to three years, Hattotuwa said that, “No AI and ML [Machine Learning] I am aware of even in English language contexts can accurately identify the meaning behind an image.”
 
“With regards to content inciting hate, hurt and harm,” he said, “it is even more of a challenge.”

According to the Twitter Transparency report, in the first six months of 2020, 12.4 million accounts were reported to the company, just over six million of which were reported for hateful conduct and some 5.1 million for “abuse/harassment”.

In the same period, Twitter suspended 925,744 accounts, of which 127,954 were flagged for hateful conduct and 72,139 for abuse/harassment. The company removed such content in a little over 1.9 million cases: 955,212 in the hateful conduct category and 609,253 in the abuse/harassment category. 

Toskic Cvetinovic said the rules needed to be clearer and better communicated to users by “living people.”

“Often, the content removal doesn’t have a corrective function, but amounts to censorship,” she said.

Berthelemy said that, “because the dominant social media platforms reproduce the social systems of oppression, they are also often unsafe for many groups at the margins.” 

“They are unable to understand the discriminatory and violent online behaviours, including certain forms of harassment and violent threats and therefore, cannot address the needs of victims,” Berthelemy told BIRN. 

“Furthermore,” she said, “those social media networks are also advertisement companies. They rely on inflammatory content to generate profiling data and thus advertisement profits. There will be no effective, systematic response without addressing the business models of accumulating and trading personal data.”
