Pegasus Phone-Hacking Spyware Victims Named in Poland

The University of Toronto’s Citizen Lab, an internet watchdog that has been investigating the use of military-grade spyware from Israeli company NSO Group by authoritarian governments, said on Tuesday that the first two confirmed victims of phone-hacking using the Pegasus software in Poland are prosecutor Ewa Wrzosek and lawyer Roman Giertych.

Pegasus essentially turns infected phones into spying devices, giving those who deploy the spyware access to all data on the target’s phone, including messages and contacts.

The Associated Press, which first reported the new Citizen Lab findings on Tuesday, said that it cannot be confirmed who ordered the targeting of the two Poles.

Both targets have indicated that they suspect the Polish government.

In response to an inquiry from the AP, Polish state security spokesman Stanislaw Zaryn neither confirmed nor denied whether the government ordered the hacks.

Wrzosek is a well-known independent prosecutor who opposes the Polish government’s controversial justice reforms.

She also ordered an investigation into whether the 2020 presidential elections, which were organised during the pandemic, should have been postponed because they were too risky. Two days after she launched the case, she was transferred to a distant provincial town.

Giertych has been acting as lawyer for high-profile opposition politicians, including former Prime Minister Donald Tusk and former Foreign Minister Radoslaw Sikorski.

He also defended an Austrian developer who revealed the involvement of ruling Law and Justice Party leader Jaroslaw Kaczynski in a huge real estate deal to build two skyscrapers in the centre of Warsaw, which caused a major scandal.

Earlier this year, an international investigation by 17 media organisations found that the Hungarian government was among those that acquired the controversial Pegasus software from Israeli surveillance company NSO and used it to target a range of journalists, businessmen and activists.

No targets in Poland or other central European countries were identified at the time, but Citizen Lab warned that it had detected spyware infections in Poland dating back to November 2017.

Turkish Army Uses Algorithm to ‘Persecute’ Gulenists: Report

A new report published by StateWatch, a UK-based international rights organisation monitoring the state and civil liberties in Europe, says an algorithm used to detect alleged government opponents in the Turkish Armed Forces, TSK, has been used to persecute thousands of people.

The report, “Algorithmic persecution in Turkey’s post-coup crackdown: The FETO-Meter system”, says more than 20,000 military personnel have been dismissed on the basis of algorithms since a failed coup attempt in 2016.

“The report shines a flashlight on the (mis)use of algorithms and other information-based systems by the Turkish government in its ruthless counterterrorism crackdown since the July 2016 events. Thousands of people have been put out of work, detained, and persecuted by reference to ‘scores’ assigned to them by a tool of persecution, the so-called FETO-Meter,” Ali Yildiz, one of the authors of the report and a legal expert, told BIRN.

Yildiz added that “this situation is far from being unique to Turkey: in an increasingly connected world where states make wider recourse to counter-terrorism surveillance tools, the possibility of falling victim to algorithmic persecution is high”.

“The report, therefore, serves as a wake-up call to bring more awareness to the devastating effects of algorithmic persecution and oppression not just in Turkey, but also in the entire world,” Yildiz added.

The so-called FETO-Meter is based on 97 main criteria and 290 sub-criteria, many of which violate individual privacy.

The name references alleged supporters of exiled cleric Fethullah Gulen whom the government calls FETO, short for Fethullahist Terrorist Organisation. US-based Gulen has always denied any links to terrorism.

The questions for profiling and scoring individuals include information on their marriages, education, bank accounts, their children’s school records, and their promotions and references in the army. The questionnaire also demands information about people’s relatives and neighbours.

It was deployed following the July 2016 coup attempt to root out alleged followers of Gulen, who is accused of masterminding the failed coup.

“Hundreds of thousands of people have been profiled and assigned a ‘score’ by the algorithm, which is operated by a special unit called ‘The Office of Judicial Proceedings and Administrative Action’, ATİİİŞ, within the Turkish navy,” Emre Turkut, another author of the report and an expert on international human rights law from Hertie School Berlin, told BIRN.

Turkut said that the report includes testimonies from several high-ranking former military officers who have since sought asylum in the EU. It highlights that application of the algorithm has been arbitrary and has underpinned punitive measures not only against primary suspects but also against anyone in their social circles, including family members, colleagues and neighbours.

However, Cihat Yayci, a former navy admiral and the architect of the FETO-Meter algorithm, has defended it.

“FETO militants are very successful in hiding their real identities. The FETO-Meter gave us very successful results for identifying Gulenists,” Cihat Yayci said in a TV interview in 2020.

Since 2016, 292,000 people have been detained and nearly 598,000 people investigated over their alleged links with Gulen.

According to the Turkish defence and interior ministries, nearly 21,000 members of the armed forces, 31,000 police officers, more than 5,500 gendarmerie officers and 509 coastguards have also lost their jobs over alleged links to Gulen.

More than 30,000 people are still in prison because of their alleged ties to the cleric and more than 125,000 public servants have been dismissed.

New North Macedonia Online Sex Abuse Scandal Targets Roma Women

North Macedonian police said on Monday that they were aware of a case, flagged by human rights activists, in which explicit photos and videos of Roma girls and women are being posted in a Facebook group, and that they are “working to apprehend the persons responsible”.

The police said that they are also working on removing the explicit online content, adding that instances of online sexual abuse have increased over the past two years, since the start of the global COVID-19 pandemic.

The case was first reported in the media over the weekend.

“The posts contain private photos and videos of Roma women living in the Republic of North Macedonia but also outside our country,” the Suto Orizari Women’s Initiative, a human rights group from Skopje’s mainly Roma municipality of Shuto Orizari, said on Sunday.

“All the posts on the Facebook page are provoking an avalanche of harassing comments and hate speech from individuals, as well as calls for a public lynching, which violates the dignity of the Roma women whose photos have been published,” it added.

The organisation said the Facebook fan page has been active for some two months, since August 21, and has over 1,600 followers.

The Facebook page also reportedly contains calls to collect more photos and videos in order to post them and expose “dishonest women”, along with taunts from the page administrators, who ask members whether they would like to see uncensored videos.

The Facebook page was also reported to the authorities last Friday by CIVIL – Center for Freedom, a prominent human rights NGO.

“CIVIL condemns this gruesome act and reported this Facebook page on Friday to the Interior Ministry’s department for computer crime and digital forensics, and to the Public Prosecution, so that they immediately take measures, shut down this page, and uncover and accordingly punish the administrators,” CIVIL said on Sunday.

The recent case was reminiscent of the earlier so-called “Public Room” affair. A group known as Public Room, with some 7,000 users, shared explicit pornographic content on the social network Telegram. It was first shut down in January 2020 after a public outcry, only to re-emerge a year later, before it was closed again on January 29.

The group shared explicit content of often under-age girls.

In April this year, the prosecution charged two persons, a creator and an administrator of Public Room, with producing and distributing child pornography.

These occurrences sparked a wide debate, as well as criticism of the authorities for being too slow to react to such cases and curb online harassment and violence against women and girls.

This prompted the authorities earlier this year to promise changes to the penal code to specify that the crime of “stalking”, which is punishable with up to three years in jail, can also involve abuse of victims’ personal data online.

North Macedonia’s Interior Ministry has meanwhile launched a campaign to raise awareness about privacy protection and against online sexual misuse, called “Say no”.

Montenegro Data Protection Agency Voices Concern Over COVID-19 Measures

A member of Montenegro’s Council of the Agency for Personal Data Protection, Muhamed Gjokaj, on Wednesday warned that new COVID-19 measures could put citizens’ personal data at risk.

He said he feared unauthorized persons could get insight into citizens’ personal data, and called on the Health Ministry to be more precise about its new health measures.

“The Health Ministry should explain on the basis of which specific legal norms it has prescribed that waiters have the right to process the personal data of citizens who enter a café or restaurant.

“If there is no adequate legal basis, citizens can sue all those entities that ask to inspect their personal data, which also relates to health information,” Gjokaj told the daily Pobjeda.

On July 30, the Health Ministry announced that patrons of nightclubs, discotheques and indoor restaurants must show their ID and national COVID-19 certificate before entering.

The national COVID-19 certificate is a document issued by the Health Ministry which proves that a person has been vaccinated, has had a recent negative PCR test, or has recovered from COVID-19. According to the Health Ministry, the certificate must be shown to the waiter or club staff.

Montenegro’s Personal Data Protection Law, however, specifies that personal data related to health can be inspected only by medical personnel, and prohibits inspection of such data by unauthorized persons.

On July 30, the head of the Digital Health Directorate, Aleksandar Sekulic, said no violation of citizens’ personal data was taking place under the measures, as only the name and date of birth of the person were on the COVID-19 certificate.

“We do not provide medical conditions through the certificates but only the data citizens want to provide. They voluntarily agreed to provide a certain amount of data,” Sekulic told a press conference.

On August 3, lawyer Andrijana Razic claimed the Health Ministry had violated the law with the new health measures, accusing it of forcing citizens to be vaccinated. She said that non-vaccinated citizens must not be discriminated against in any way.

“It’s completely clear that employees in a restaurant or nightclub have absolutely no right to identify citizens, or ask them for health information that is secret by law. The government should seriously consider the possible consequences of pursuing such a discriminatory and dangerous health policy, based on a drastic violation of basic human rights,” Razic told the daily newspaper Dan.

According to the Institute for Public Health, there are 1,667 registered COVID-19 active cases in the country. The capital Podgorica and the coastal town of Budva have the largest numbers. On Wednesday, the Health Ministry said that 34.5 per cent of the adult population had been vaccinated against COVID-19.

Romanian Intelligence: Hospitals Need ‘Urgent’ Protection from Cyber-Attacks

Days after authorities announced that the Witting public hospital in Bucharest had been targeted by hackers, the Romanian Information Service, SRI, has called on the government to take “urgent” action to protect state-owned medical institutions from these disruptive threats.

Romania’s national intelligence service has warned of widespread deficiencies when it comes to cybersecurity in hospitals, in spite of their increasing reliance on IT and online systems to run their daily operations.

“Such attacks against some hospitals in Romania represent a sign of alarm about the low level of cybersecurity that exists,” the agency’s statement issued on Friday said, stressing “the need to adopt centralized decisions” that make it mandatory for all medical institutions to impose “minimal cybersecurity measures”.

The intelligence service has briefed the ministries of Health and Transport and Infrastructure concerning the “way in which the attack [reported this month against the Witting hospital] was conducted”, warning the two ministries about the “vulnerabilities of which attackers took advantage”, the SRI statement on Friday said. 

The secret service also presented both departments with a “series of measures to be implemented on an urgent basis, in order to limit the effects generated by the attack as well as to prevent future ransomware attacks.

“Although they are of medium or reduced complexity, this kind of ransomware attack can generate major dysfunctions in the activities carried out by institutions in the medical field,” the SRI statement explained.

In the absence of clear general standards, the level of cybersecurity in public hospitals and most Romanian state institutions largely depends on the competence and awareness of the personnel in charge, specialists told BIRN.

On 22 July this year, the SRI said the servers of the Witting hospital in Bucharest were targeted by a cyberattack conducted with a ransomware application known as PHOBOS.

“After encrypting the data, the attackers demanded that a ransom be paid for them to decrypt them again,” the intelligence service said at the time.

The attack did not affect the functioning of the hospital, which ensured the continuity of operations using data from offline registries. According to the SRI, no ransom was paid to the hackers.

The intelligence service said the attack resembles others that targeted four Romanian hospitals in the summer of 2019. The systems of the four hospitals were not protected by antivirus and were also compromised using PHOBOS.

Southeast Europe Civil Society Must Cooperate to Combat Digital Violations

Digital rights violations have been rising across Southeastern Europe since the beginning of the COVID-19 pandemic with a similar pattern – pro-government trolls and media threatening freedom of expression and attacking journalists who report such violations.

“Working together is the only way to raise awareness of citizens’ digital rights and hold public officials accountable,” civil society representatives attending BIRN and Share Foundation’s online event on Thursday agreed.

The event took place after the release of BIRN and SHARE Foundation’s report, Digital Rights Falter amid Political and Social Unrest, published the same day.

“We need to build an alliance of coalitions to raise awareness on digital rights and the accountability of politicians,” said Blerjana Bino, from SCiDEV, an Albania-based NGO closely following the issue.

When it comes to prevention and the possibility of improving digital competencies in order to reduce risks to personal data and security, speakers agreed that digital and informational literacy is important – but the blame should not be put only on users.

The panel concluded that the responsibility of tech giants and relevant state institutions to investigate such cases must be kept in mind, covering not just routine cases but also more complicated ones.

Uros Misljenovic, from Partners Serbia, sees a major part of the problem in the lack of response from the authorities.

“We haven’t had one major case reach a conclusion in court. Not a single criminal charge was brought by the public prosecutor either. Basically, the police and prosecutors are not interested in prosecuting these crimes,” he said. “So, if you violate these rights, you will face no consequences,” he concluded.

The report was presented and discussed at an online panel discussion with policymakers, journalists and civil society members around digital rights in Southeast Europe.

It was the first in a series of events as part of Platform B – a platform that aims to amplify the voices of strong and credible individuals and organisations in the region that promote the core values of democracy, such as civic engagement, independent institutions, transparency and rule of law.

Between August 2019 and December 2020, BIRN and the SHARE Foundation verified more than 800 violations of digital rights, including attempts to prevent valid freedom of speech (trolling of media and the public engaged in fair reporting and comment, for example) and at the other end of the scale, efforts to overwhelm users with false information and racist/discriminatory content – usually for financial or political gain.

The lack of awareness of digital rights violations within society has further undermined democracy, not only in times of crisis, the report reads, and identifies common trends, such as:

  • Democratic elections being undermined
  • Public service websites being hacked
  • Provocation and exploitation of social unrest
  • Conspiracy theories and fake news
  • Online hatred, leaving vulnerable people more isolated
  • Tech shortcuts failing to solve complex societal problems.

The report, Digital Rights Falter amid Political and Social Unrest, can be downloaded here.

Glitched Online Registration System for COVID-19 Vaccination Confuses Croatia

As more doses of COVID-19 vaccines finally arrive in Croatia, problems continue when it comes to registration, especially through the national online platform, CijepiSe [Get vaccinated].

“I expected the CijepiSe platform to work because the pandemic has lasted such a long time,” Mia Biberovic, executive editor at the Croatian tech website Netokracija, told BIRN.

“I assumed the preparations were done early enough,” she said, concluding that, alas, this was not the case. As a consequence, she noted, only a small number of people who applied online for a jab are being invited to get vaccinated.

For days, media have reported on problems with the platform, which cost 4.4 million kuna, or about 572,000 euros. On Friday, media reported that the data of the first 4,000 people who applied for vaccination via the platform during its test phase in February had been deleted.

The health ministry then denied reports about the deletion, and said that the data needed for making vaccination appointments had not been linked up in the case of 200 citizens who booked vaccinations during the test trial.

“The problem is, first, that the test version came when it [the system] was not functional yet. Second, [in the test phase] there were no remarks about the protection of users’ data, i.e. how the user data left there would be used,” Biberovic, who was also among those who applied during the test trial, noted.

“As far as I understood, the data was not deleted but could not be seen anywhere because it was incomplete … So they are not deleted, but again, they are not usable, which is even more bizarre,” Biberovic added. “This is certainly a risk because citizens do not know how their data is being used.”

Vaccination appointments in Croatia can be ordered through the CijepiSe online platform, a call centre or via general practitioners, and all those who apply should be put on a single list. However, direct contact with a doctor has turned out to be the best way to get a vaccination appointment.

The ministry on Saturday said 198,274 citizens have been registered via the CijepiSe platform, of whom 45,416 have been vaccinated. But around 40,000 of these were not invited through the platform but by direct invitation of general practitioners.

Zvonimir Sostar, head of the Zagreb-based Andrija Stampar Teaching Institute of Public Health, stated on Saturday that the platform was not functioning in the capital, and that they would change the vaccination registration system, advising citizens to register via general practitioners.

Shortly after, the ministry promised that “everyone registered in the CijepiSe system will receive their vaccination appointment”.

“Maybe the platform is not functioning the way we wanted, but it functions well enough to cope with the challenges of vaccination. I read in the papers that the system of vaccination has collapsed. That’s not true! We are increasing the daily number of vaccinations,” Health Minister Vili Beros said on Sunday.

However, the Conflict of Interest Commission, an independent state body tasked with preventing conflicts of interest between private and public interests in the public sector, confirmed on Tuesday that it has opened a case against Beros. It comes after the media reported that the minister has ties to the company that designed the CijepiSe platform. The minister denies any wrongdoing.

Albania Prosecutors Investigate Socialists’ Big-Brother-Style Database

Albania’s Special Structure Against Corruption and Organized Crime has summoned Andi Bushati and Armand Shkullaku, owners and editors of the Lapsi.al news website, for questioning about a database purportedly created by the Socialist Party, which contains the names of 910,000 voters in the Tirana region along with personal data, including employment and family background records, in what critics call a massive tracking system.

Bushati said prosecutors asked him where the information came from and that he had refused to reveal his source. He called it “a short meeting without much substance” and suggested that the prosecutors should instead investigate how citizens’ personal data ended up in the hands of a political party.

The prosecutors have not inspected any party office or commented publicly on what they are investigating.

The news about the database, revealed last Sunday, sent shockwaves across the political spectrum and the population.

Ruling Socialist Party officials acknowledge that the database exists, but insist the data was provided voluntarily by citizens. They have also claimed that the published excerpts are not theirs.

The head of the Socialist parliamentary group, Taulant Balla, immediately dismissed the news as “Lies!”

“The Socialist Party has built its database over years in door-to-door communication with the people,” he added. Days later, he claimed that the database published was not the one belonging to the Socialist Party.

Edi Rama, the Prime Minister, has acknowledged that his party has a “system of patronage” of voters but said their database is more complex and that the one leaked is probably an old one. Other Socialists have denied that the leaked database is theirs at all.

The opposition Democratic Party claims the data included in the database was stolen by the Socialist Party via the government service website E-Albania, where people apply for different services.

Many citizens who have had access to the database claim that the data it contains is the same data they supplied to state institutions, and say the database appears to be well updated.

The E-Albania website was used by the government of Prime Minister Rama to issue permits to go outside during the national lockdown in spring 2020. In their application forms, citizens had to provide phone numbers and email addresses.

The database, which BIRN has seen, contains some 910,000 entries including names, addresses, birth dates, personal ID numbers, employment details and other data.

Each voter is assigned a party official known as a “patronazhist”, a word derived from the French “patronage”. If officials want to know where somebody works, a search in the database can provide that information.

For each voter, there is data on how they voted in the past and what their likely preference is today. In a separate column titled “comments”, party officials write notes on voters.

In one, a party official notes that “the voter requested employment for his wife”, while in another, “the voter didn’t thank [the party] for obtaining his house deeds”.

Property issues are widespread in Albania and various governments have been criticized for handing over ownership titles as electoral campaign bribes. Issuing such deeds during election campaigns is currently forbidden.

In several cases, officials noted that some voters do not participate in elections because they are “Jehovah’s Witnesses” or “extremist Muslims who are not permitted by religion to vote”.

In one case, the comment indicates that voters’ social media pages are checked by officials: “By investigating his Facebook profile, we can conclude he votes for SP,” one note reads, while another reads: “This one has previously voted for the PDIU party; should be kept under monitoring.”

A note for a voter identified as a business owner reads: “We should contact him for his employees”. In another case: “The mother of the voter is employed in the municipality”.

Even family conflicts do not escape the observing eye of the party: “Xxx is relative of xxx but they are not on speaking terms,” a note reads.

The Albanian Helsinki Committee, a rights group based in Tirana, underlined that systematic monitoring of voters by a political party may violate the secrecy of the ballot and is especially concerning if done without a voter’s consent.

On Friday, 12 rights organisations called on the authorities to investigate the matter after indicating that at least the law on the Protection of Personal Data had been violated.

“This case is the illegal collection, elaboration and distribution of personal data of some 1 million citizens without their consent,” the statement reads.

While scores of citizens are keen to know which Socialist Party official is tracking them, Big Brother, Albanian-style, apparently does not lack a note of comedy.

In the database, Socialist Party head and PM Rama is shown as a voter who works at the Council of Ministers and is under the “patronage” of Elvis Husha, a party official. Husha is in turn under the patronage of another party official.

Journalist Andi Bushati, who first exposed the database, said chances are slim that the prosecutors will do their work. “I don’t really believe that the prosecutors will find the truth of this. When a crime appears, it remains without author,” he commented.

Cyber-Attacks a Growing Threat to Unprepared Balkan States

It wasn’t voting irregularities or the counting of postal ballots that delayed the results of last year’s parliamentary election in North Macedonia, but an audacious denial-of-service, DDoS, attack on the website of the country’s election commission.

Eight months on, however, the perpetrator or perpetrators behind the most serious cyber attack in the history of North Macedonia have still to be identified, let alone brought to justice.

While it’s not unusual for hackers to evade justice, last year’s Election Day attack is far from the only case in North Macedonia still waiting to be solved.

“Although some steps have been taken in the meantime to improve the situation, it’s still not enough,” Eurothink, a Skopje-based think-tank that focuses on foreign and security policy, told BIRN in a statement.

“The low rate of solved cyber-crime cases is another indicator of the low level of readiness to solve cyber-attacks, even in cases of relatively ‘less sophisticated’ and ‘domestic’ cyber threats.”

Across the Balkans, states like North Macedonia have put plans on paper to tackle the threat from cyber terrorism, but the rate of attacks in recent years – coupled with the fact that many remain unresolved – points to serious deficiencies in practice, experts say. Alarmingly, Bosnia and Herzegovina does not even have a comprehensive, state-level cyber security strategy.

“I am convinced that all countries [in the region] are vulnerable,” said Ergest Nako, an Albanian technology and ecosystems expert. “If an attack is sophisticated, they will hardly be able to protect themselves.”

In the case of Albania, Nako told BIRN, “the majority of targets lack the proper means to discover and react to cyber-attacks.”

“With the growing number of companies and state bodies developing digital services, we will witness an increasing number of attacks in the future.”

Ransomware a ‘growing threat’ to Balkan states


Illustration. Photo: Unsplash/Dimitri Karastelev

The COVID-19 pandemic has underscored the threat from cyber-attacks and the impact on lives.

According to the 2021 Threat Report from security software supplier Blackberry, hospitals and healthcare providers were of “primary interest” to cyber criminals waging ransomware attacks while there were attacks too on organisations developing vaccines against the novel coronavirus and those involved in their transportation.

Skopje-based cyber security engineer Milan Popov said ransomware – a type of malware that encrypts the user’s files and demands a ransom to restore access – is a growing danger to Balkan states too.

“Bearing in mind the state of cyber security in the Western Balkans, I would say that this is also a growing threat for these countries as well,” Popov told BIRN. “While there haven’t been any massive ransomware attacks in the region, there have been individual cases where people have downloaded this type of malware on their computers, and ransoms were demanded by the various attackers.”

A year ago, hackers targeted the public administration of the northern Serbian city of Novi Sad, blocking a data system and demanding some 400,000 euros to stop.

“We’re not paying the ransom,” Novi Sad Mayor Milos Vucevic said at the time. “I don’t even know how to pay it, how to justify the cost in the budget. It is not realistic to pay that. Nobody can blackmail Novi Sad,” he told Serbia’s public broadcaster.

A local company later announced that it had “eliminated the consequences” of the attack.

In Serbia, cyber security is regulated by the Law on Information Security and the 2017 Strategy for the Development of Information Security, but Danilo Krivokapic of digital rights organisation Share Foundation said that implementation of the legal framework remained a problem.

“The question is – to what extent our state bodies, which are covered by this legal norm, are ready to implement such measures?” Krivokapic told BIRN. “They must adopt [their own] security act; they need to undertake measures to protect the information system.”

Political battles waged in cyber space


Illustration. Photo: Unsplash/Stephen Phillips

North Macedonia was the target of a string of cyber attacks last year, some attributed to a spillover of political disputes into cyber space.

In May 2020, a Greek hacker group called ‘Powerful Greek Army’ hacked dozens of e-mail addresses and passwords of employees in North Macedonia’s finance and economy ministry and the municipality of the eastern town of Strumica.

The two countries have been at odds for decades over issues of history and identity, and while a political agreement was reached in 2018 tensions remain. Similar issues dog relations between North Macedonia and its eastern neighbour Bulgaria, too.

“Cyber-attacks can happen when a country has a political conflict, such as the current one with Bulgaria or previous one with Greece, but they are very rare,” said Suad Seferi, a cyber security analyst and head of the Informational Technologies Sector at the International Balkan University in Skopje.

“However, whenever an international conflict happens, cyber-attacks on the country’s institutions follow.”

Bosnia without state-level strategy


Illustration. Photo: Naipo de CEE

In Bosnia, the state-level Security Ministry was tasked in 2017 with adopting a cyber security strategy but, four years on, has yet to do so.

“Although some strategies at various levels in Bosnia are partially dealing with the cyber security issue, Bosnia remains the only South Eastern European country without a comprehensive cyber security strategy at the state level,” the Sarajevo office of the Organisation for Security and Cooperation in Europe, OSCE, told BIRN.

It also lacks an operational network of Computer Emergency Response Teams, CERTs, with sufficient coverage across the country, the mission said.

The Security Ministry says it has been unable to adopt a comprehensive strategy because of the non-conformity of bylaws, but that the issue will be included in the country’s 2021-2025 Strategy for Preventing and Countering Terrorism.

So far, only the guidelines of a cyber security strategy have been adopted, with the help of the OSCE.

Predrag Puharic, Chief Information Security Officer at the Faculty for Criminalistics, Criminology and Security Studies in Sarajevo, said the delay meant Bosnia was wide open to cyber attacks, the danger of which he said would only grow.

“I think that Bosnia and Herzegovina has not set up the adequate mechanisms for prevention and reaction to even remotely serious attacks against state institutions or the citizens themselves,” Puharic told BIRN.

The country’s defence ministry has its own cyber security strategy, but told BIRN it would be easier “if there were a cyber-security strategy at the state level and certain security measures, such as CERT”.

‘Entire systems jeopardised’


A laptop screen displays a message after it was infected with ransomware during a worldwide cyberattack. Photo: EPA/ROB ENGELAAR

Strengthening cybersecurity capacities was a requirement for Montenegro in the process of joining NATO, prompting the creation of the Security Operations Centre, SOC, in 2019.

According to the country’s defence ministry, protection systems have detected and prevented over 7,600 ‘non-targeted’ malware threats – not targeted at any particular organisation – and more than 50 attempted ‘phishing’ attacks over the past two years.

“In the previous five years several highly sophisticated cyber threats were registered,” the ministry told BIRN. “Those threats came from well-organised and sponsored hacker groups.”

Previous reports have identified a scarcity of cyber experts in the country as an obstacle to an effective defence. Adis Balota, a professor at the Faculty of Information Technologies in Podgorica, commended the strategies developed by the state, but said cyber terrorism remained a real threat regardless.

“Cyber-attacks of various profiles have demonstrated that they can jeopardise the functioning of entire systems,” Balota said. “The question is whether terrorists can do the same because they are using cyberspace to recruit, spread propaganda and organise their activities.”

This publication was produced with the financial support of the European Union. Its content is the sole responsibility of BIRN and does not necessarily reflect the views of the European Union nor of Hedayah.

Facebook, Twitter Struggling in Fight against Balkan Content Violations

Partners Serbia, a Belgrade-based NGO that works on initiatives to combat corruption and develop democracy and the rule of the law in the Balkan country, had been on Twitter for more than nine years when, in November 2020, the social media giant suspended its account.

Twitter gave no notice or explanation of the suspension, but Ana Toskic Cvetinovic, the executive director of Partners Serbia, had a hunch – that it was the result of a “coordinated attack”, probably other Twitter users submitting complaints about how the NGO was using its account.

“We tried for days to get at least some information from Twitter, like what could be the cause and how to solve the problem, but we haven’t received any answer,” Toskic Cvetinovic told BIRN. “After a month of silence, we saw that a new account was the only option.” 

Twitter lifted the suspension in January, again without explanation. But Partners Serbia is far from alone among NGOs, media organisations and public figures in the Balkans who have had their social media accounts suspended without proper explanation or sometimes any explanation at all, according to BIRN monitoring of digital rights and freedom violations in the region.

Experts say the lack of transparency is a significant problem for those using social media as a vital channel of communication, not least because they are left in the dark as to what can be done to prevent such suspensions in the future.

But while organisations like Partners Serbia can face arbitrary suspension, half of the posts on Facebook and Twitter that are reported as hate speech, threatening violence or harassment in Bosnian, Serbian, Montenegrin or Macedonian remain online, according to the results of a BIRN survey, despite confirmation from the companies that the posts violated rules.

The investigation shows that the tools used by social media giants to protect their community guidelines are failing: posts and accounts that violate the rules often remain available even when breaches are acknowledged, while others that remain within those rules can be suspended without any clear reason.

Among BIRN’s findings are the following:

  • Almost half of reports in Bosnian, Serbian, Montenegrin or Macedonian language to Facebook and Twitter are about hate speech
  • One in two posts reported as hate speech, threatening violence or harassment in Bosnian, Serbian, Montenegrin or Macedonian language, remains online. When it comes to reports of threatening violence, the content was removed in 60 per cent of cases, and 50 per cent in cases of targeted harassment.
  • Facebook and Twitter are using a hybrid model, a combination of artificial intelligence and human assessment in reviewing such reports, but declined to reveal how many of them are actually reviewed by a person proficient in Bosnian, Serbian, Montenegrin or Macedonian
  • Both social networks adopt a “proactive approach”, which means they remove content or suspend accounts even without a report of suspicious conduct, but the criteria employed are unclear and transparency is lacking.
  • The survey showed that people were more ready to report content targeting them or minority groups.

Experts say the biggest problem could be the lack of transparency in how social media companies assess complaints. 

The assessment itself is done in the first instance by an algorithm and, if necessary, a human gets involved later. But BIRN’s research shows that things get messy when it comes to the languages of the Balkans, precisely because of the specificity of language and context.

Distinguishing harsh criticism from defamation, or radical political opinions from expressions of hatred, racism or incitement to violence, requires contextual and nuanced analysis.

Half of the posts containing hate speech remain online


Graphic: BIRN/Igor Vujcic

Facebook and Twitter are among the most popular social networks in the Balkans. The scope of their popularity is demonstrated in a 2020 report by DataReportal, an online platform that analyses how the world uses the Internet.

In January, there were around 3.7 million social media users in Serbia, 1.1 million in North Macedonia, 390,000 in Montenegro and 1.7 million in Bosnia and Herzegovina.

In each of the countries, Facebook is the most popular, with an estimated three million users in Serbia, 970,000 in North Macedonia, 300,000 in Montenegro and 1.4 million in Bosnia and Herzegovina.

Such numbers make Balkan countries attractive for advertising but also for the spread of political messages, opening the door to violations.

The debate over the benefits and the dangers of social media for 21st century society is well known.

In terms of violent content, besides the use of Artificial Intelligence, or AI, social media giants are trying to give users the means to react as well, chiefly by reporting violations to network administrators. 

There are three kinds of filters: manual filtering by humans; automated filtering by algorithmic tools; and hybrid filtering, performed by a combination of humans and automated tools.

In cases of uncertainty, posts or accounts are submitted to human review before decisions are taken, or afterwards in the event that a user complains about an automated removal.
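
As a rough illustration of how such a hybrid pipeline can route content, the sketch below scores each post with a hypothetical classifier, actions clear-cut cases automatically and queues uncertain ones for a human moderator. The classifier, thresholds and queue names are illustrative assumptions, not any platform’s actual configuration.

```python
# Minimal illustrative sketch of hybrid content-moderation routing.
# The classifier, thresholds and queue names are hypothetical.

from dataclasses import dataclass


@dataclass
class Post:
    post_id: str
    text: str


def classify(post: Post) -> float:
    """Hypothetical model returning the probability that a post violates the rules."""
    # Placeholder: a real system would call a trained language model here.
    return 0.5


def route(post: Post, remove_above: float = 0.95, review_above: float = 0.6) -> str:
    """Route a post on model confidence: clear-cut cases are handled
    automatically, uncertain ones go to a human reviewer."""
    score = classify(post)
    if score >= remove_above:
        return "auto_remove"         # high-confidence violation, actioned automatically
    if score >= review_above:
        return "human_review_queue"  # uncertain: queued for a human moderator
    return "keep"                    # low score: content stays up


if __name__ == "__main__":
    print(route(Post("1", "example text")))
```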

“Today, we primarily rely on AI for the detection of violating content on Facebook and Instagram, and in some cases to take action on the content automatically as well,” a Facebook spokesperson told BIRN. “We utilize content reviewers for reviewing and labelling specific content, particularly when technology is less effective at making sense of context, intent or motivation.”

Twitter told BIRN that it is increasing the use of machine learning and automation to enforce the rules.

“Today, by using technology, more than 50 per cent of abusive content that’s enforced on our service is surfaced proactively for human review instead of relying on reports from people using Twitter,” said a company spokesperson.

“We have strong and dedicated teams of specialists who provide 24/7 global coverage in multiple different languages, and we are building more capacity to address increasingly complex issues.”

In order to check how effective those mechanisms are when it comes to content in Balkan languages, BIRN conducted a survey focusing on Facebook and Twitter reports and divided into three categories: violent threats (direct or indirect), harassment and hateful conduct. 

The survey asked for the language of the disputed content, who was the target and who was the author, and whether or not the report was successful.

Over 48 per cent of respondents reported hate speech, some 20 per cent reported targeted harassment and some 17 per cent reported threatening violence. 

According to the survey, 43 per cent of content reported as hate speech remained online, while 57 per cent was removed. When it comes to reports of threatening violence, content was removed in 60 per cent of cases. 

Roughly half of reports of targeted harassment resulted in removal.

Chloe Berthelemy, a policy advisor at European Digital Rights, EDRi, which works to promote digital rights, says the real-life consequences of neglect can be disastrous. 

“For example, in cases of image-based sexual abuse [often wrongly called “revenge porn”], the majority of victims are women and they suffer from social exclusion as a result of these attacks,” Berthelemy said in a written response to BIRN. “For example, they can be discriminated against on the job market because recruiters search their online reputation.”

 Content removal – censorship or corrective?


Graphic: BIRN/Igor Vujcic.

According to the responses to BIRN’s questionnaire, some 57 per cent of those who reported hate speech said they were notified that the reported post/account violated the rules. 

On the other hand, some 28 per cent said they had received notification that the content they reported did not violate the rules, while 14 per cent received only confirmation that their report was filed.

In terms of reports of targeted harassment, half of people said they received confirmation that the content violated the rules; 16 per cent were told the content did not violate rules. A third of those who reported targeted harassment only received confirmation their report was received.  

As for threatening violence, 40 per cent of people received confirmation that the reported post/account violated the rules while 60 per cent received only confirmation their complaint had been received.

One of the respondents told BIRN they had reported at least seven accounts for spreading hatred and violent content. 

“I do not engage actively on such reports nor do I keep looking and searching them. However, when I do come across one of these hateful, genocide deniers and genocide supporters, it feels the right thing to do, to stop such content from going further,” the respondent said, speaking on condition of anonymity. “Maybe one of all the reported individuals stops and asks themselves what led to this and simply opens up discussions, with themselves or their circles.”

Although Twitter confirmed that those seven accounts violated some of its rules, six of them are still available online.

Another issue that emerged is the unclear criteria for reporting violations. A basic knowledge of English is also required.

Sanjana Hattotuwa, special advisor at the ICT4Peace Foundation, agreed that the in-app or web-based reporting process is confusing.

“Moreover, it is often in English even though the rest of the UI/UX [User Interface/User Experience] could be in the local language. Furthermore, the laborious selection of categories is, for a victim, not easy – especially under duress.”

Facebook told BIRN that the vast majority of reports are reviewed within 24 hours and that the company uses community reporting, human review and automation.

It refused, however, to give any specifics on those it employs to review content or reports in Balkan languages, saying “it isn’t accurate to only give the number of content reviewers”.

BIRN methodology 

BIRN conducted its questionnaire via the network’s tool for engaging citizens in reporting, developed in cooperation with the British Council.

The anonymous questionnaire had the aim of collecting information on what type of violations people reported, who was the target and how successful the report was. The questions were available in English, Macedonian, Albanian and Bosnian/Serbian/Montenegrin. BIRN focused on Facebook and Twitter given their popularity in the Balkans and the sensitivity of shared content, which is mostly textual and harder to assess compared to videos and photos.

“That alone doesn’t reflect the number of people working on a content review for a particular country at any given time,” the spokesperson said. 

Social networks often remove content themselves, in what they call a ‘proactive approach’. 

According to data provided by Facebook, in the last quarter of 2017 their proactive detection rate was 23.6 per cent.

“This means that of the hate speech we removed, 23.6 per cent of it was found before a user reported it to us,” the spokesperson said. “The remaining majority of it was removed after a user reported it. Today we proactively detect about 95 per cent of hate speech content we remove.”

“Whether content is proactively detected or reported by users, we often use AI to take action on the straightforward cases and prioritise the more nuanced cases, where context needs to be considered, for our reviewers.”

There is no available data, however, when it comes to content in a specific language or country.

Facebook publishes a Community Standards Enforcement Report on a quarterly basis, but, according to the spokesperson, the company does not “disclose data regarding content moderation in specific countries.”

Whatever the tools, the results are sometimes highly questionable.

In May 2018, Facebook blocked for 24 hours the profile of Bosnian journalist Dragan Bursac after he posted a photo of a detention camp for Bosniaks in Serbia during the collapse of federal Yugoslavia in the 1990s. 

Facebook determined that Bursac’s post had violated “community standards,” local media reported.

Bojan Kordalov, Skopje-based public relations and new media specialist, said that, “when evaluating efficiency in this area, it is important to emphasise that the traffic in the Internet space is very dense and is increasing every second, which unequivocally makes it a field where everyone needs to contribute”.

“This means that social media managements are undeniably responsible for meeting the standards and compliance with regulations within their platforms, but this does not absolve legislators, governments and institutions of responsibility in adapting to the needs of the new digital age, nor does it give anyone the right to redefine and narrow down the notion and the benefits that democracy brings.”

Lack of language sensibility

Illustration. Photo: Unsplash/The Average Tech Guy

SHARE Foundation, a Belgrade-based NGO working on digital rights, said the question was crucial given the huge volume of content flowing through the likes of Facebook and Twitter in all languages.

“When it comes to relatively small language groups in absolute numbers of users, such as languages in the former Yugoslavia or even in the Balkans, there is simply no incentive or sufficient pressure from the public and political leaders to invest in human moderation,” SHARE told BIRN.   

Berthelemy of EDRi said the Balkans were not a stand-alone example, and that the content moderation practices and policies of Facebook and Twitter are “doomed to fail”.

“Many of these corporations operate on a massive scale, some of them serving up to a quarter of the world’s population with a single service,” Berthelemy told BIRN. “It is impossible for such monolithic architecture, and speech regulation process and policy to accommodate and satisfy the specific cultural and social needs of individuals and groups.”

The European Parliament has also stressed the importance of a combined assessment.

“The expressions of hatred can be conveyed in many ways, and the same words typically used to convey such expressions can also be used for different purposes,” according to a 2020 study – ‘The impact of algorithms for online content filtering or moderation’ – commissioned by the Parliament’s Policy Department for Citizens’ Rights and Constitutional Affairs. 

“For instance, such words can be used for condemning violence, injustice or discrimination against the targeted groups, or just for describing their social circumstances. Thus, to identify hateful content in textual messages, an attempt must be made at grasping the meaning of such messages, using the resources provided by natural language processing.”

Hattotuwa said that, in general, “non-English language markets with non-Romanic (i.e. not English letter based) scripts are that much harder to design AI/ML solutions around”.

“And in many cases, these markets are out of sight and out of mind, unless the violence, abuse or platform harms are so significant they hit the New York Times front-page,” Hattotuwa told BIRN.

“Humans are necessary for evaluations, but as you know, there are serious emotional / PTSD issues related to the oversight of violent content, that companies like Facebook have been sued for (and lost, having to pay damages).”

Failing in non-English

Illustration. Photo: Unsplash/Ann Ann

Dragan Vujanovic of the Sarajevo-based NGO Vasa prava [Your Rights] criticised what he said was a “certain level of tolerance with regards to violations which support certain social narratives.”

“This is particularly evident in the inconsistent behavior of social media moderators where accounts with fairly innocuous comments are banned or suspended while other accounts, with overt abuse and clear negative social impact, are tolerated.”

For Chloe Berthelemy, trying to apply a uniform set of rules on the very diverse range of norms, values and opinions on all available topics that exist in the world is “meant to fail.” 

“For instance, where nudity is considered to be sensitive in the United States, other cultures take a more liberal approach,” she said.

The example of Myanmar, when Facebook effectively blocked an entire language by refusing all messages written in Jinghpaw, a language spoken by Myanmar’s ethnic Kachin and written with a Roman alphabet, shows the scale of the issue.

“The platform performs very poorly at detecting hate speech in non-English languages,” Berthelemy told BIRN.

The techniques used to filter content differ depending on the media analysed, according to the 2020 study for the European Parliament.

“A filter can work at different levels of complexity, spanning from simply comparing contents against a blacklist, to more sophisticated techniques employing complex AI techniques,” it said. 

“In machine learning approaches, the system, rather than being provided with a logical definition of the criteria to be used to find and classify content (e.g., to determine what counts as hate speech, defamation, etc.) is provided with a vast set of data, from which it must learn on its own the criteria for making such a classification.”
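
As a rough illustration of the two ends of that spectrum, the sketch below contrasts a simple blacklist lookup with a classifier that learns its criteria from labelled examples. The word list, training data and model choice are toy assumptions, not a description of any platform’s real filter.

```python
# Illustrative contrast between the two filter types described in the study:
# a blacklist lookup versus a classifier trained on labelled examples.
# Word lists, training data and model choice here are toy assumptions.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

BLACKLIST = {"slur1", "slur2"}  # placeholder terms


def blacklist_filter(text: str) -> bool:
    """Flag a post if it contains any blacklisted term (no context awareness)."""
    return any(word in BLACKLIST for word in text.lower().split())


# A machine-learning filter instead learns its criteria from labelled data.
train_texts = ["we should hurt them", "great match last night",
               "they do not belong here", "lovely weather today"]
train_labels = [1, 0, 1, 0]  # 1 = violating, 0 = acceptable (toy labels)

vectorizer = TfidfVectorizer()
model = LogisticRegression().fit(vectorizer.fit_transform(train_texts), train_labels)


def ml_filter(text: str) -> float:
    """Return the learned probability that a post violates the rules."""
    return float(model.predict_proba(vectorizer.transform([text]))[0, 1])


if __name__ == "__main__":
    print(blacklist_filter("slur1 example"), ml_filter("they do not belong here"))
```

The contrast also shows why context matters: a blacklist flags any post containing a listed term regardless of intent, while a learned model’s judgement is only as good as the labelled data it was trained on – precisely the gap in smaller languages that the report describes.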

Users of both Twitter and Facebook can appeal in the event their accounts are suspended or blocked. 

“Unfortunately, the process lacks transparency, as the number of filed appeals is not mentioned in the transparency report, nor is the number of processed or reinstated accounts or tweets,” the study noted.

Between January and October 2020, Facebook restored some 50,000 items of content without an appeal and 613,000 after appeal.

 Machine learning

As cited in the 2020 study commissioned by the European Parliament, Facebook has developed a machine learning approach called Whole Post Integrity Embeddings, WPIE, to deal with content violating Facebook guidelines. 

The system addresses multimedia content by providing a holistic analysis of a post’s visual and textual content and related comments, across all dimensions of inappropriateness (violence, hate, nudity, drugs, etc.). The company claims that automated tools have improved the implementation of Facebook content guidelines. For instance, about 4.4 million items of drug sale content were removed in just the third quarter of 2019, 97.6 per cent of which were detected proactively.
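
The following is a conceptual sketch of that “whole post” idea, fusing placeholder embeddings of a post’s text, image and comments into a single violation score. It is not Facebook’s implementation: the encoders and classifier weights are stand-ins for illustration only.

```python
# Conceptual sketch of post-level scoring by fusing embeddings of a post's
# text, image and comments, as the WPIE idea is described above.
# NOT Facebook's implementation: encoders and weights below are placeholders.

import numpy as np

DIM = 8  # toy embedding size


def embed_text(text: str) -> np.ndarray:
    """Placeholder text encoder (a real system would use a trained model)."""
    rng = np.random.default_rng(abs(hash(text)) % (2 ** 32))
    return rng.normal(size=DIM)


def embed_image(image_bytes: bytes) -> np.ndarray:
    """Placeholder image encoder."""
    rng = np.random.default_rng(len(image_bytes))
    return rng.normal(size=DIM)


def score_post(text: str, image_bytes: bytes, comments: list[str]) -> float:
    """Fuse all modalities into one vector and score it with a toy linear head."""
    comment_vec = (np.mean([embed_text(c) for c in comments], axis=0)
                   if comments else np.zeros(DIM))
    fused = np.concatenate([embed_text(text), embed_image(image_bytes), comment_vec])
    weights = np.ones_like(fused) / fused.size            # placeholder classifier weights
    return float(1 / (1 + np.exp(-weights @ fused)))      # sigmoid -> violation probability


if __name__ == "__main__":
    print(score_post("example caption", b"\x00" * 100, ["first comment", "second"]))
```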

When it comes to the ways in which social networks deal with suspicious content, Hattotuwa said that “context is key”. 

While acknowledging advancements in the past two to three years, Hattotuwa said that, “No AI and ML [Machine Learning] I am aware of even in English language contexts can accurately identify the meaning behind an image.”
 
“With regards to content inciting hate, hurt and harm,” he said, “it is even more of a challenge.”

According to the Twitter Transparency report, in the first six months of 2020, 12.4 million accounts were reported to the company, just over six million of which were reported for hateful conduct and some 5.1 million for “abuse/harassment”.

In the same period, Twitter suspended 925,744 accounts, of which 127,954 were flagged for hateful conduct and 72,139 for abuse/harassment. The company removed such content in a little over 1.9 million cases: 955,212 in the hateful conduct category and 609,253 in the abuse/harassment category. 

Toskic Cvetinovic said the rules needed to be clearer and better communicated to users by “living people.”

“Often, the content removal doesn’t have a corrective function, but amounts to censorship,” she said.

Berthelemy said that, “because the dominant social media platforms reproduce the social systems of oppression, they are also often unsafe for many groups at the margins.” 

“They are unable to understand the discriminatory and violent online behaviours, including certain forms of harassment and violent threats and therefore, cannot address the needs of victims,” Berthelemy told BIRN. 

“Furthermore,” she said, “those social media networks are also advertisement companies. They rely on inflammatory content to generate profiling data and thus advertisement profits. There will be no effective, systematic response without addressing the business models of accumulating and trading personal data.”
