A well-known group of supposedly Greek-based hackers, calling themselves “Powerful Greek Army”, has claimed it took down the pages of several banks in North Macedonia on Tuesday evening for a couple of hours.
Only one bank, however, the private TTK Bank, has confirmed that its web page was in fact the target of a hacker attack, saying that it “successfully prevented” the attack and “there are no consequences”.
“Powerful Greek Army” posted on Monday that it intended to attack a range of banks.
“ALL banks licensed by the National Bank of the Republic of North Macedonia/All Banks of North Macedonia will be downed … soon,” the group wrote on Twitter. On Tuesday, the group published follow-up posts claiming success.
BIRN asked North Macedonia’s central bank to comment but did not receive an answer by the time of publication.
This is not the first time the group has targeted North Macedonia’s institutions.
In February, the Education Ministry confirmed it came under attack by the group, which posted video footage of allegedly hacked video surveillance cameras from inside the ministry. However, the ministry said the camera footage was fake.
Earlier, in May 2020, “Powerful Greek Army” leaked dozens of email addresses and passwords from staffers in North Macedonia’s Ministry of Economy and Finance, as well as from the municipality of Strumica – and bragged about its exploits on Twitter.
The hacking group was reportedly founded in 2016, when it took down the website of the Greek Prime Minister. Since then it has taken offline a number of banks in Turkey and downed the websites of Turkish Airlines and the office of the Turkish president, among other targets. In a recent interview, an alleged member said they had no particular motivation or ideology and chose their targets at random, from Greece and its neighbours to Nigeria and Azerbaijan.
North Macedonia’s Education Ministry on Sunday said it had been a target of a hacking attack over the past few days, but said video footage published on the Twitter account of a hacker group called “Powerful Greek Army”, as proof of the hacking, was fake.
The video footage, which appears to have been taken from a surveillance camera system, “was not taken by or within the ministry because the ministry does not have such a system”, it said.
The ministry has not yet disclosed whether it suffered damage from the attack, or whether any documentation was lost or hijacked.
“Powerful Greek Army” published the short video on Twitter on Friday last week, writing that it had hacked the Education Ministry of the neighbouring country. “We have access even in their camera systems, we watch you 24/7, we have eyes everywhere, Skopje,” the group tweeted.
This post caught attention in North Macedonia over the weekend.
It was far from an isolated incident in the country. After several attacks on state institutions over the past few years, experts have warned that the country’s IT system is particularly vulnerable to cyber-crime, and is in dire need of security improvements.
The Greek hacking group behind the latest post is also not unknown to the public in North Macedonia.
In May 2020, “Powerful Greek Army” leaked dozens of email addresses and passwords from staffers in North Macedonia’s Ministry of Economy and Finance, as well as from the municipality of Strumica – and bragged about its exploits on Twitter.
North Macedonian police on Monday said they were aware of a case in which human rights activists alerted them that explicit photos and videos of Roma girls and women were being posted on a Facebook group, and said they were “working to apprehend the persons responsible”.
The police said that they are also working on removing the explicit online content, adding that instances of online sexual abuse have increased over the past two years, since the start of the global COVID-19 pandemic.
The case was first reported in the media over the weekend.
“The posts contain private photos and videos of Roma women living in the Republic of North Macedonia but also outside our country,” the Suto Orizari Women’s Initiative, a human rights group from Skopje’s mainly Roma municipality of Shuto Orizari, said on Sunday.
“All the posts on the Facebook page are provoking an avalanche of harassing comments and hate speech from individuals, as well as calls for a public lynching, which violates the dignity of the Roma women whose photos have been published,” it added.
The organisation said the Facebook fan page has been active for some two months, since August 21, and has over 1,600 followers.
The Facebook page also reportedly contains calls for collecting more photos and videos in order to post them and expose “dishonest women”, along with teases from the page administrators, who ask members whether they would like to see uncensored videos.
The Facebook page was also reported to the authorities last Friday by CIVIL – Center for Freedom, a prominent human rights NGO.
“CIVIL condemns this gruesome act and reported this Facebook page on Friday to the Interior Ministry’s department for computer crime and digital forensics, and to the Public Prosecution, so that they immediately take measures, shut down this page, and uncover and accordingly punish the administrators,” CIVIL said on Sunday.
The recent case was reminiscent of the earlier so-called “Public Room” affair. A group known as Public Room, with some 7,000 users, shared explicit pornographic content on the social network Telegram. It was first shut down in January 2020 after a public outcry, only to re-emerge a year later, before it was closed again on January 29.
The group shared explicit content of often under-age girls.
In April this year, the prosecution charged two persons, a creator and an administrator of Public Room, with producing and distributing child pornography.
These occurrences sparked a wide debate, as well as criticism of the authorities for being too slow to react to such cases and curb online harassment and violence against women and girls.
This prompted the authorities earlier this year to promise changes to the penal code, to specify that the crime of “stalking”, which is punishable with up to three years in jail, can also involve abuse of victims’ personal data online.
North Macedonia’s Interior Ministry has meanwhile launched a campaign to raise awareness about privacy protection and against online sexual misuse, called “Say no”.
Newly envisaged penalties for assaulting a journalist or a media worker adopted by North Macedonia’s government on Tuesday will be from three months to three years in jail, the same as for assaulting a police officer, the Justice Ministry said.
“After adoption by the government, we will immediately process these changes to parliament. I expect parliament to pass these changes right after the summer break”, meaning early autumn, Justice Minister Bojan Maricic said.
The minister said the changes mean in practice that authorities will treat cases where journalists are prevented from doing their job or are attacked the same way as they treat assaults on police officers. Accordingly, the prosecution will process these cases ex officio.
Another change the minister announced is the planned reduction of defamation fines for journalists, editors and media outlets through amendments to the Law on Civil Responsibility.
“The defamation fines for journalists and editors will be five times lower, and for media outlets they will be three times lower [than before],” Maricic wrote.
If these changes pass, a journalist who loses a civil court case for defamation will pay a maximum fine of 400 euros instead of the current maximum of 2,000 euros, which is in many cases equal to or more than four average monthly salaries for a journalist.
For editors, the maximum fine will decrease from 10,000 euros to 2,000, and for the media outlets, the sum should fall from the current maximum of 15,000 to 5,000 euros.
The third announced change that affects journalists is the planned introduction of the criminal offence of stalking. This will envisage fines or jail sentences for stalkers who not only physically endanger or threaten their victims but also do that online.
The maximum sentence for this offence will be three years in jail.
A new study, “Media Pluralism Monitor 2021”, published by the Centre for Media Pluralism and Media Freedom at the European University Institute earlier this month, states that some things have improved for the media in North Macedonia compared to 2016, the last year in power of the former authoritarian PM Nikola Gruevski, who was ousted in 2017.
The report notes that media freedoms in North Macedonia during 2020 were broader, and that journalists and their associations are no longer exposed to serious physical attacks and pressures.
The ministry said the changes are being made not only to increase the security of the journalists but also to prevent online stalking and abuse of private data. The recent so-called Telegram scandal revealed the recurring existence of a Telegram group sharing explicit pictures and videos of women and girls.
Partners Serbia, a Belgrade-based NGO that works on initiatives to combat corruption and develop democracy and the rule of law in the Balkan country, had been on Twitter for more than nine years when, in November 2020, the social media giant suspended its account.
Twitter gave no notice or explanation of the suspension, but Ana Toskic Cvetinovic, the executive director of Partners Serbia, had a hunch – that it was the result of a “coordinated attack”, probably other Twitter users submitting complaints about how the NGO was using its account.
“We tried for days to get at least some information from Twitter, like what could be the cause and how to solve the problem, but we haven’t received any answer,” Toskic Cvetinovic told BIRN. “After a month of silence, we saw that a new account was the only option.”
Twitter lifted the suspension in January, again without explanation. But Partners Serbia is far from alone among NGOs, media organisations and public figures in the Balkans who have had their social media accounts suspended without proper explanation or sometimes any explanation at all, according to BIRN monitoring of digital rights and freedom violations in the region.
Experts say the lack of transparency is a significant problem for those using social media as a vital channel of communication, not least because they are left in the dark as to what can be done to prevent such suspensions in the future.
But while organisations like Partners Serbia can face arbitrary suspension, half of the posts on Facebook and Twitter that are reported as hate speech, threatening violence or harassment in Bosnian, Serbian, Montenegrin or Macedonian remain online, according to the results of a BIRN survey, despite confirmation from the companies that the posts violated rules.
The investigation shows that the tools used by social media giants to protect their community guidelines are failing: posts and accounts that violate the rules often remain available even when breaches are acknowledged, while others that remain within those rules can be suspended without any clear reason.
Among BIRN’s findings are the following:
Almost half of reports to Facebook and Twitter in Bosnian, Serbian, Montenegrin or Macedonian concern hate speech.
One in two posts reported as hate speech, threatening violence or harassment in Bosnian, Serbian, Montenegrin or Macedonian remains online. When it comes to reports of threatening violence, the content was removed in 60 per cent of cases, and in 50 per cent of cases of targeted harassment.
Facebook and Twitter use a hybrid model, a combination of artificial intelligence and human assessment, to review such reports, but declined to reveal how many of them are actually reviewed by a person proficient in Bosnian, Serbian, Montenegrin or Macedonian.
Both social networks take a “proactive approach”, meaning they remove content or suspend accounts even without a report of suspicious conduct, but the criteria employed are unclear and transparency is lacking.
Experts say the biggest problem could be the lack of transparency in how social media companies assess complaints.
The assessment itself is done in the first instance by an algorithm and, if necessary, a human gets involved later. But BIRN’s research shows that things get messy when it comes to the languages of the Balkans, precisely because of the specificity of language and context.
Distinguishing harsh criticism from defamation, or radical political opinions from expressions of hatred and racism or incitement to violence, requires contextual and nuanced analysis.
Half of the posts containing hate speech remain online
Graphic: BIRN/Igor Vujcic
Facebook and Twitter are among the most popular social networks in the Balkans. The scope of their popularity is demonstrated in a 2020 report by DataReportal, an online platform that analyses how the world uses the Internet.
In January, there were around 3.7 million social media users in Serbia, 1.1 million in North Macedonia, 390,000 in Montenegro and 1.7 million in Bosnia and Herzegovina.
In each of the countries, Facebook is the most popular, with an estimated three million users in Serbia, 970,000 in North Macedonia, 300,000 in Montenegro and 1.4 million in Bosnia and Herzegovina.
Such numbers make Balkan countries attractive for advertising but also for the spread of political messages, opening the door to violations.
The debate over the benefits and the dangers of social media for 21st century society is well known.
In terms of violent content, besides the use of Artificial Intelligence, or AI, social media giants are trying to give users the means to react as well, chiefly by reporting violations to network administrators.
There are three kinds of filters: manual filtering by humans; automated filtering by algorithmic tools; and hybrid filtering, performed by a combination of humans and automated tools.
In cases of uncertainty, posts or accounts are submitted to human review before decisions are taken, or afterwards in the event a user complains about automated removal.
“Today, we primarily rely on AI for the detection of violating content on Facebook and Instagram, and in some cases to take action on the content automatically as well,” a Facebook spokesperson told BIRN. “We utilize content reviewers for reviewing and labelling specific content, particularly when technology is less effective at making sense of context, intent or motivation.”
Twitter told BIRN that it is increasing the use of machine learning and automation to enforce the rules.
“Today, by using technology, more than 50 per cent of abusive content that’s enforced on our service is surfaced proactively for human review instead of relying on reports from people using Twitter,” said a company spokesperson.
“We have strong and dedicated teams of specialists who provide 24/7 global coverage in multiple different languages, and we are building more capacity to address increasingly complex issues.”
In order to check how effective those mechanisms are when it comes to content in Balkan languages, BIRN conducted a survey focusing on Facebook and Twitter reports and divided into three categories: violent threats (direct or indirect), harassment and hateful conduct.
The survey asked for the language of the disputed content, who was the target and who was the author, and whether or not the report was successful.
Over 48 per cent of respondents reported hate speech, some 20 per cent reported targeted harassment and some 17 per cent reported threatening violence.
The survey showed that people were more ready to report content targeting them or minority groups.
According to the survey, 43 per cent of content reported as hate speech remained online, while 57 per cent was removed. When it comes to reports of threatening violence, content was removed in 60 per cent of cases.
Roughly half of reports of targeted harassment resulted in removal.
Chloe Berthelemy, a policy advisor at European Digital Rights, EDRi, which works to promote digital rights, says the real-life consequences of neglect can be disastrous.
“For example, in cases of image-based sexual abuse [often wrongly called “revenge porn”], the majority of victims are women and they suffer from social exclusion as a result of these attacks,” Berthelemy said in a written response to BIRN. “For example, they can be discriminated against on the job market because recruiters search their online reputation.”
Content removal – censorship or corrective?
Graphic: BIRN/Igor Vujcic.
According to the responses to BIRN’s questionnaire, some 57 per cent of those who reported hate speech said they were notified that the reported post/account violated the rules.
On the other hand, some 28 per cent said they had received notification that the content they reported did not violate the rules, while 14 per cent received only confirmation that their report was filed.
In terms of reports of targeted harassment, half of people said they received confirmation that the content violated the rules; 16 per cent were told the content did not violate rules. A third of those who reported targeted harassment only received confirmation their report was received.
As for threatening violence, 40 per cent of people received confirmation that the reported post/account violated the rules while 60 per cent received only confirmation their complaint had been received.
One of the respondents told BIRN they had reported at least seven accounts for spreading hatred and violent content.
“I do not engage actively on such reports nor do I keep looking and searching them. However, when I do come across one of these hateful, genocide deniers and genocide supporters, it feels the right thing to do, to stop such content from going further,” the respondent said, speaking on condition of anonymity. “Maybe one of all the reported individuals stops and asks themselves what led to this and simply opens up discussions, with themselves or their circles.”
Although Twitter confirmed that those seven accounts violated some of its rules, six of them are still available online.
Another issue that emerged is the unclear criteria for reporting violations. Basic knowledge of English is also required.
Sanjana Hattotuwa, special advisor at the ICT4Peace Foundation, agreed that the in-app or web-based reporting process is confusing.
“Moreover, it is often in English even though the rest of the UI/UX [User Interface/User Experience] could be in the local language. Furthermore, the laborious selection of categories is, for a victim, not easy – especially under duress.”
Facebook told BIRN that the vast majority of reports are reviewed within 24 hours and that the company uses community reporting, human review and automation.
It refused, however, to give any specifics on those it employs to review content or reports in Balkan languages, saying “it isn’t accurate to only give the number of content reviewers”.
BIRN methodology
BIRN conducted its questionnaire via the network’s tool for engaging citizens in reporting, developed in cooperation with the British Council.
The anonymous questionnaire had the aim of collecting information on what type of violations people reported, who was the target and how successful the report was. The questions were available in English, Macedonian, Albanian and Bosnian/Serbian/Montenegrin. BIRN focused on Facebook and Twitter given their popularity in the Balkans and the sensitivity of shared content, which is mostly textual and harder to assess compared to videos and photos.
“That alone doesn’t reflect the number of people working on a content review for a particular country at any given time,” the spokesperson said.
Social networks often remove content themselves, in what they call a ‘proactive approach’.
According to data provided by Facebook, in the last quarter of 2017 their proactive detection rate was 23.6 per cent.
“This means that of the hate speech we removed, 23.6 per cent of it was found before a user reported it to us,” the spokesperson said. “The remaining majority of it was removed after a user reported it. Today we proactively detect about 95 per cent of hate speech content we remove.”
“Whether content is proactively detected or reported by users, we often use AI to take action on the straightforward cases and prioritise the more nuanced cases, where context needs to be considered, for our reviewers.”
There is no available data, however, when it comes to content in a specific language or country.
Facebook publishes a Community Standards Enforcement Report on a quarterly basis, but, according to the spokesperson, the company does not “disclose data regarding content moderation in specific countries.”
Whatever the tools, the results are sometimes highly questionable.
In May 2018, Facebook blocked for 24 hours the profile of Bosnian journalist Dragan Bursac after he posted a photo of a detention camp for Bosniaks in Serbia during the collapse of federal Yugoslavia in the 1990s.
Facebook determined that Bursac’s post had violated “community standards,” local media reported.
Bojan Kordalov, Skopje-based public relations and new media specialist, said that, “when evaluating efficiency in this area, it is important to emphasise that the traffic in the Internet space is very dense and is increasing every second, which unequivocally makes it a field where everyone needs to contribute”.
“This means that social media managements are undeniably responsible for meeting the standards and compliance with regulations within their platforms, but this does not absolve legislators, governments and institutions of responsibility in adapting to the needs of the new digital age, nor does it give anyone the right to redefine and narrow down the notion and the benefits that democracy brings.”
Lack of language sensibility
SHARE Foundation, a Belgrade-based NGO working on digital rights, said the question was crucial given the huge volume of content flowing through the likes of Facebook and Twitter in all languages.
“When it comes to relatively small language groups in absolute numbers of users, such as languages in the former Yugoslavia or even in the Balkans, there is simply no incentive or sufficient pressure from the public and political leaders to invest in human moderation,” SHARE told BIRN.
Berthelemy of EDRi said the Balkans were not a standalone example, and that the content moderation practices and policies of Facebook and Twitter are “doomed to fail.”
“Many of these corporations operate on a massive scale, some of them serving up to a quarter of the world’s population with a single service,” Berthelemy told BIRN. “It is impossible for such monolithic architecture, and speech regulation process and policy to accommodate and satisfy the specific cultural and social needs of individuals and groups.”
The European Parliament has also stressed the importance of a combined assessment.
“The expressions of hatred can be conveyed in many ways, and the same words typically used to convey such expressions can also be used for different purposes,” according to a 2020 study – ‘The impact of algorithms for online content filtering or moderation’ – commissioned by the Parliament’s Policy Department for Citizens’ Rights and Constitutional Affairs.
“For instance, such words can be used for condemning violence, injustice or discrimination against the targeted groups, or just for describing their social circumstances. Thus, to identify hateful content in textual messages, an attempt must be made at grasping the meaning of such messages, using the resources provided by natural language processing.”
Hattotuwa said that, in general, “non-English language markets with non-Romanic (i.e. not English letter based) scripts are that much harder to design AI/ML solutions around”.
“And in many cases, these markets are out of sight and out of mind, unless the violence, abuse or platform harms are so significant they hit the New York Times front-page,” Hattotuwa told BIRN.
“Humans are necessary for evaluations, but as you know, there are serious emotional / PTSD issues related to the oversight of violent content, that companies like Facebook have been sued for (and lost, having to pay damages).”
Failing in non-English
Dragan Vujanovic of the Sarajevo-based NGO Vasa prava [Your Rights] criticised what he said was a “certain level of tolerance with regards to violations which support certain social narratives.”
“This is particularly evident in the inconsistent behavior of social media moderators where accounts with fairly innocuous comments are banned or suspended while other accounts, with overt abuse and clear negative social impact, are tolerated.”
For Chloe Berthelemy, trying to apply a uniform set of rules on the very diverse range of norms, values and opinions on all available topics that exist in the world is “meant to fail.”
“For instance, where nudity is considered to be sensitive in the United States, other cultures take a more liberal approach,” she said.
The example of Myanmar, where Facebook effectively blocked an entire language by refusing all messages written in Jinghpaw, a language spoken by Myanmar’s ethnic Kachin and written with a Roman alphabet, shows the scale of the issue.
“The platform performs very poorly at detecting hate speech in non-English languages,” Berthelemy told BIRN.
The techniques used to filter content differ depending on the media analysed, according to the 2020 study for the European Parliament.
“A filter can work at different levels of complexity, spanning from simply comparing contents against a blacklist, to more sophisticated techniques employing complex AI techniques,” it said.
“In machine learning approaches, the system, rather than being provided with a logical definition of the criteria to be used to find and classify content (e.g., to determine what counts as hate speech, defamation, etc.) is provided with a vast set of data, from which it must learn on its own the criteria for making such a classification.”
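The learning-from-data approach the study describes can be illustrated with a toy sketch: a naive Bayes text classifier, written here in plain Python, that infers its classification criteria from a handful of labelled examples rather than from hand-coded rules. The training phrases and labels below are invented for illustration only; real moderation systems learn from millions of human-labelled posts and are far more sophisticated.

```python
from collections import Counter
import math

# Invented toy training data: (text, label) pairs. A real system would be
# given a vast set of human-labelled posts instead of these six examples.
TRAIN = [
    ("we will hurt them all", "violating"),
    ("they deserve to be attacked", "violating"),
    ("go back where you came from", "violating"),
    ("i condemn all violence against them", "ok"),
    ("this report describes attacks on minorities", "ok"),
    ("great article about the region", "ok"),
]

def train(examples):
    """Count word frequencies per label: the 'criteria' are learned, not coded."""
    counts = {"violating": Counter(), "ok": Counter()}
    totals = Counter()
    for text, label in examples:
        for word in text.split():
            counts[label][word] += 1
        totals[label] += 1
    return counts, totals

def classify(text, counts, totals):
    """Naive Bayes: pick the label with the highest log-probability."""
    n = sum(totals.values())
    vocab = len(set(w for c in counts.values() for w in c))
    best_label, best_score = None, float("-inf")
    for label in counts:
        size = sum(counts[label].values())
        # log prior plus log likelihoods, with add-one smoothing so that
        # unseen words do not zero out the whole score
        score = math.log(totals[label] / n)
        for word in text.split():
            score += math.log((counts[label][word] + 1) / (size + vocab))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

counts, totals = train(TRAIN)
print(classify("we will attack them", counts, totals))  # prints "violating"
```

The sketch also illustrates the weakness the study points to: the classifier keys on surface word statistics, so a post condemning violence can share most of its vocabulary with one inciting it, and only context separates the two.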
Users of both Twitter and Facebook can appeal in the event their accounts are suspended or blocked.
“Unfortunately, the process lacks transparency, as the number of filed appeals is not mentioned in the transparency report, nor is the number of processed or reinstated accounts or tweets,” the study noted.
Between January and October 2020, Facebook restored some 50,000 items of content without an appeal and 613,000 after appeal.
Machine learning
As cited in the 2020 study commissioned by the European Parliament, Facebook has developed a machine learning approach called Whole Post Integrity Embeddings, WPIE, to deal with content violating Facebook guidelines.
The system addresses multimedia content by providing a holistic analysis of a post’s visual and textual content and related comments, across all dimensions of inappropriateness (violence, hate, nudity, drugs, etc.). The company claims that automated tools have improved the implementation of Facebook content guidelines. For instance, about 4.4 million items of drug sale content were removed in just the third quarter of 2019, 97.6 per cent of which were detected proactively.
When it comes to the ways in which social networks deal with suspicious content, Hattotuwa said that “context is key”.
While acknowledging advancements in the past two to three years, Hattotuwa said that, “No AI and ML [Machine Learning] I am aware of even in English language contexts can accurately identify the meaning behind an image.”
“With regards to content inciting hate, hurt and harm,” he said, “it is even more of a challenge.”
According to the Twitter Transparency report, in the first six months of 2020, 12.4 million accounts were reported to the company, just over six million of which were reported for hateful conduct and some 5.1 million for “abuse/harassment”.
In the same period, Twitter suspended 925,744 accounts, of which 127,954 were flagged for hateful conduct and 72,139 for abuse/harassment. The company removed such content in a little over 1.9 million cases: 955,212 in the hateful conduct category and 609,253 in the abuse/harassment category.
Toskic Cvetinovic said the rules needed to be clearer and better communicated to users by “living people.”
“Often, the content removal doesn’t have a corrective function, but amounts to censorship,” she said.
Berthelemy said that, “because the dominant social media platforms reproduce the social systems of oppression, they are also often unsafe for many groups at the margins.”
“They are unable to understand the discriminatory and violent online behaviours, including certain forms of harassment and violent threats and therefore, cannot address the needs of victims,” Berthelemy told BIRN.
“Furthermore,” she said, “those social media networks are also advertisement companies. They rely on inflammatory content to generate profiling data and thus advertisement profits. There will be no effective, systematic response without addressing the business models of accumulating and trading personal data.”
Authorities in North Macedonia face an uphill battle to confront the dangers of online harassment, experts warn, following a public outcry over the reappearance of a group on the encrypted messaging app Telegram in which thousands of users were sharing explicit pictures and videos of women and girls, some of them minors.
The group, known as ‘Public Room’, was first shut down in January 2020, only to re-emerge a year later before it was closed again on January 29. Reports say new groups have since popped up, their membership spreading to neighbouring Serbia.
Authorities in the Balkan state have mooted the possibility of banning Telegram altogether and criminalising the act of stalking, making it punishable with a prison sentence of up to three years.
The case, however, has exposed the many layers that authorities need to address when it comes to preventing online harassment and sexual violence. And experts in the field say it will not be easy.
“This type of danger is very difficult to handle, given that many countries in the world have had the same or similar problems as North Macedonia,” said Suad Seferi, a cybersecurity analyst and head of the IT sector at the International Balkan University in Skopje.
Seferi cited blocks on Telegram in countries such as Azerbaijan, Bahrain, Belarus and China, but cautioned against following such a route given the risk of it being construed as censorship by those using the app for its primary purpose of simple communication.
“The government could try and reach an agreement, or communicate with Telegram to close specific channels or seek cooperation in prosecuting the perpetrators of such acts,” he told BIRN.
Law not being applied
An image showing the Telegram messenger app. Photo: EPA-EFE/MAURITZ ANTIN
The phenomenon has triggered heated debate in North Macedonia; a number of victims have spoken out publicly about how some of the 7,000 users of Public Room shared explicit, private photos of them or took pictures from their social media profiles and shared them alongside the names and phone numbers of the victims.
One of them, 28-year-old Ana Koleva, met Justice Minister Bojan Maricic over the weekend to discuss her own harrowing experience after her pictures began circulating in the Telegram group and elsewhere and she was bombarded with unwanted messages and phone calls.
Some victims, including Koleva, said they appealed to the police for help but were bluntly dismissed. One reason given by police was that they were unable to act unless the victim was a minor.
Critics say the group’s re-emergence exposes the failure of authorities to stamp it out in the first place.
“The ‘Public Room’ case revealed the inertia and inability of the authorities to act in such cases of violence and harassment of women and girls,” said Skopje-based gender expert Natasha Dimitrovska. “Although there are laws, they are not implemented.”
North Macedonia’s law on prevention and protection from violence against women and domestic violence also defines sexual harassment and especially online sexual harassment.
“This is in line with the Istanbul Convention, which states that all forms of sexual violence and harassment should be sanctioned,” said Dimitrovska. “In addition, endangering someone’s security and sharing and collecting other people’s personal data without permission are crimes that are already regulated by the Criminal Code.”
She told BIRN that it was imperative that authorities grasp the fact that whatever goes on online has repercussions offline.
“There is no longer a division between offline and online,” she said. “What happens online also has profound consequences in real life. Girls and women who are sexually harassed online are also restricted from accessing and expressing themselves freely in public.”
“What’s even worse is that everything that is online remains there forever and is widely available, so with online harassment it’s even more frightening in the sense that it will remain there for a long time and haunt the victim.”
‘Scary viral dimensions’
Illustration. Photo: Markus Spiske/Unsplash
Cybersecurity experts caution that it is extremely difficult to control or monitor content on platforms such as Telegram, which has become notorious for similar scandals.
In the US and the UK, there are laws against ‘revenge porn’, in which people share explicit pictures of their former partners as a form of retaliation. Six years ago, only three US states had such laws in place; they have since spread to at least 46.
Privacy and data protection expert Ljubica Pendaroska said some public ‘supergroups’ can have up to 200,000 members, which massively increases the chances of privacy violations.
“Usually, in the communication in such groups, the spectrum of personal data related to the victims is supplemented with the address of residence, telephone number, information about family members, etc,” Pendaroska told BIRN.
“So the invasion of privacy gets bigger and usually goes out of the group and the network, taking on really scary viral dimensions.”
Importance of raising public awareness
To combat such acts, experts advocate raising public awareness about privacy and how to protect it – particularly among parents and children – and punishing violations in a timely manner.
“From experience, young people know a lot about the so-called technical aspects, capabilities and impacts of social networks and applications, but little about their privacy and especially the potential social implications of what is shared in the online world,” said Pendaroska, who also serves as president of Women4Cyber North Macedonia, an initiative to support the participation of women in the field of cybersecurity.
“Our concept is to avoid occasional action but commit to consistent and continuous education of women about the potential risks that lurk in the online world,” she told BIRN, “because that’s the only way to achieve long-term results and to raise awareness.”
“Therefore, our plan is within each project or activity that we implement, to include exactly that component – through various activities and tools to educate women, because awareness is key.”
North Macedonia’s authorities on Thursday threatened to block the messaging app Telegram over the activities of a group of more than 7,000 users who have been sharing and exchanging explicit pictures and videos of girls – some of whom are underage.
Some users even wrote the names and locations of the girls. Others have shared photoshopped images taken from their Instagram profiles.
Prime Minister Zoran Zaev said the authorities would not hesitate to block Telegram if they had to – and if the messaging app didn’t permanently close this and similar groups.
“If the Telegram application does not close Public Room, where pornographic and private content is shared by our citizens, as well as child pornography, we will consider the option of blocking or restricting the use of this application in North Macedonia,” Zaev wrote in a Facebook post.
The group, called Public Room, was first discovered in January 2020. The authorities then said that they had found the organisers and had dealt with the matter.
However, a year later, the group has re-emerged, sparking a heated debate in North Macedonia over police inaction.
Several victims whose pictures and phone numbers were hacked and used have complained about what happened to them – and about what they see as lack of action of the part of the authorities in preventing it.
“I started receiving messages and calls on my cell phone, Viber, WhatsApp, Messenger and Instagram,” one 28-year-old victim, Ana, recalled in an Instagram post.
“I didn’t know what was happening or where it was coming from. The next day, I received a screenshot of my picture, which was not only posted in Public Room but shared elsewhere. I didn’t know what to do. I panicked, I was scared, I’d never experienced anything like that,” she added.
But the woman said that when she told the police about what happened, they told her they couldn’t do much about it, since she wasn’t a minor.
North Macedonia’s Minister of Interior, Oliver Spasovski, said on Thursday that the police had arrested four people in connection with the revived group and had launched a full-scale investigation.
“We have identified more people who will be detained in the coming period, so we can reach those who created this group, and also those that are abusing personal data within the group. We are working on this intensively with the Public Prosecutor,” Spasovski told the media.
However, following the closure of the group on Thursday, there have been reports that some of its users are opening new groups where they continue the same practices.
Prime Minister Zaev said users of this and similar groups needed to heed a final warning.
“I want to send a message to all our citizens who are sharing pictures and content in that group [Public Room] … to stop what they are doing and leave the group,” said Zaev on Facebook.
“At the end of the day, we will get the data, you will be charged and you will be held accountable for what you do,” he concluded.
We’re looking for people who are willing to share their experience with us to help in a story we’re currently working on. Scroll down for information on how to take part.
The key things we want to know:
What type of violations have you reported?
In what language was the content?
How was the report processed?
What do we consider to be violations of social media community guidelines:
Violent threats (direct or indirect)
Harassment, which entails inciting or engaging in the targeted abuse or harassment of others
Hateful conduct, which entails promoting violence against or directly attacking or threatening other people on the basis of race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability or disease.
Things to note:
We are looking for social media users who reported content in the Bosnian, Serbian, Montenegrin, Albanian, and Macedonian languages. We want to hear as many different experiences as possible from all around Southeast Europe.
Your stories will be used to help us with an ongoing investigation.
How to take part?
To submit your experience, all you need to do is fill out this form.
Experts told an online debate hosted by the Balkan Investigative Reporting Network on Tuesday that the current regulation systems for online media in the Western Balkans are not good enough, but efforts to curb the publication of hate speech and defamatory comments must not tip over into censorship.
Media and legal experts from Albania, Bosnia and Herzegovina, Montenegro, North Macedonia and Serbia who spoke at the debate entitled ‘Case Law and Online Media Regulation in the Balkans’ also said that the application of existing legislation is inadequate.
Authorities often rely on legislation that was developed for traditional media which has not been adapted accordingly, or on self-regulation which is not mandatory.
Lazar Sandev, an attorney at law from North Macedonia, argued that “those who create public opinion regarding matters of public interest do not uphold any standards, they do not have any legal knowledge”.
Jelena Kleut, associate professor at the University of Novi Sad’s Faculty of Philosophy, said that in Serbia there is a lack of willingness to apply standards in online media, and noted a difference between rich and poor media outlets, as well as between responsible and irresponsible ones.
“The wealthy, irresponsible media – they have legal knowledge but they don’t care. They would rather see the complaints in court, pay a certain amount of fines and continue along, they don’t care. On the other end of the spectrum, we have responsible but poor media,” Kleut said.
The media experts also debated the controversial issue of reader comment sections on websites, which some sites around the world have removed in recent years because of a proliferation of hate speech, defamation and insulting language.
According to Montenegro’s Media Law, which came in force in August this year, the founder of an online publication is obliged to remove a comment “that is obviously illegal content” without delay, and no later than 60 minutes from learning or receiving a report that a comment is illegal.
Milan Radovic, programme director of the Civil Alliance NGO and a member of the Montenegrin Public Broadcaster’s governing council, argued that it is clear that such a short deadline, “if it is applied, will damage those affected”, and will also harm freedom of expression.
Edina Harbinja, a senior lecturer at Britain’s Aston University, warned that there is a conflict between regulatory attempts and media freedom, and that “this is when we need to be careful in how we regulate, not to result in censorship”.
This was the second debate in a series of discussions on online media regulation with various stakeholders, organised as a part of the regional Media for All project, which aims to support independent media outlets in the Western Balkans in becoming more audience-oriented and financially sustainable.
The project is funded by the UK government and delivered by a consortium led by the British Council in partnership with BIRN, the Thomson Foundation and the International NGO Training and Research Centre, INTRAC.
The day starts with coffee and unread messages: a few from friends, a few work related, a paid furniture ad, and one with lots of exclamation marks that indicates that it must be read immediately before it is deleted from the Internet. This is because it reveals a big secret, hidden from ordinary people.
That “secret” may refer to the “fake” pandemic, the “dangerous” new vaccine, the “global conspiracy against Donald Trump”, the “dark truth about child-eating elites” – an especially popular term – and so on.
The sender or sharer may well be an ordinary person that we know personally or through social networks, and who sends such content for the first time or occasionally.
Spreading misinformation through personal messages has become increasingly common in North Macedonia, as elsewhere.
But this is not the only novelty. As the fight against fake news has intensified, with changes of algorithms on social networks and the inclusion of independent fact-checkers, so have the techniques that allow false content to remain undetected on social networks for as long as possible.
“Sending personal messages is an attempt to spread misinformation faster, before it can be detected,” explains Rosana Aleksoska from Fighting Fake News Narratives, F2N2, a project led by the well-known Skopje-based NGO MOST, which searches for misinformation on the Internet.
Among the newer methods used to avoid detection, she notes, is the mass sharing of print screens instead of whole texts, and, in countries that use Cyrillic script like North Macedonia, Cyrillic and Latin letters are deliberately mixed.
Spreaders of misinformation are always in search of new ways to avoid detection. Illustration photo: BIRN
See and share before it’s removed
One video that recently went viral on social networks in North Macedonia, fuelling panic about COVID vaccines, was released on December 8.
In it, a former journalist appears to interpret a document outlining possible contra-indications in and side-effects from the newly developed Pfizer vaccine against COVID-19 – but presents them as established facts.
It got more than 270,000 views and 5,300 shares on Facebook.
While the video reached a large audience, those numbers only partly show just how far the misinformation spread.
The video soon found itself in the inboxes of many other people, after Facebook acquaintances sent it to them in a direct message, urging them to see it as soon as possible, before it was deleted or marked as fake.
People who believe in conspiracy theories, or regularly participate in disseminating them, send direct messages to each other, informing them that new material has been released.
At first glance, one might think this sounds like a small, obscure group hanging out online.
But the results of a recent public opinion poll conducted by the Balkans in Europe Policy Advisory Group, BiEPAG, showed that only 7 per cent of the population in the region do not believe any of the best-known conspiracy theories, and over 50 per cent believe in all of them. The combined percentage of all those who said they believed in all or just in some of the theories was over 80 per cent.
With these huge numbers, it is not surprising that more misinformation also ends up in the virtual mailboxes of those who “don’t believe”, persuading them to switch sides. Some of these people receive three or four such messages a week.
What the messages have in common is that they are accompanied by urgent words: “See this before they delete it from Facebook”, or, “Share and disseminate”, or “They could no longer remain silent, take a look”, etc.
Because people pay more attention to personal messages than to other social media posts, they are more likely to see this content. They may well also spread them, explains Bojan Kordalov, a Skopje-based expert on social networks and new media.
“The way they are set up and designed, fake news gives people a strong incentive to spread it,” he said.
The pandemic was the main topic of misinformation this year, but in North Macedonia this topic intertwines with others, ranging from Euro-Atlantic integration to politics, Aleksoska from F2N2 observes.
“The object of the attack is people’s emotions – to provoke an intense reaction,” she says.
As the year went on, the subject of messages also changed. At first they focused on the “false” nature of the virus, and then later on how there was no need to wear masks or observe social distancing and other health-protection measures.
After the breakthrough in discovering a vaccine was made, the messages began to focus on the alleged dangers and health risks of vaccination.
The way they are set up and designed, fake news gives people a strong incentive to spread it. Illustration photo: BIRN
“Don’t believe, check” – as we instruct you
The video about the supposed effects of the vaccine that gained traction in North Macedonia is a typical example of what such disinformation looks like. Similar videos are produced every day.
Among the private messages received by social network users are videos of people posing as doctors from the US, Canada, Belgium, Britain or Germany, filming themselves with webcams and warning that vaccines may well be deadly.
In one video, which focuses on reading the instructions on the AstraZeneca vaccine, it is also clear that the creators of fake news use the same messages as those who fight fake news, such as: “Don’t believe, check”.
However, they also provide the guidelines about what to “check”.
“Don’t trust us, investigate for yourself. For example, visit these sites. Or google this term, ChAdOx-1. See here, it says – micro cloning,” the narrator in this video can be heard saying as the inscriptions from the vaccine packaging are displayed.
“They convince us that it is safe, but the traces are here in front of us,” the narrator adds, in a dramatic tone.
The pandemic was the main topic of misinformation this year. Illustration photo: BIRN
Finding new ways to bypass filters
Although outsiders have no direct insight into exactly how social networking algorithms detect suspicious content, as they are business secrets, many experts on these technologies told BIRN that certain assumptions can be drawn.
As the creators of disinformation can also be technologically savvy, they have likely drawn their own conclusions and seek new ways to bypass known filters.
One common alarm is when content goes viral quickly. This signals to social networks that the content needs to be checked. But if several different messages containing the same main point are sent, instead of one identical message, the protection algorithms may have a harder time detecting the content’s risk.
Apart from masking the content, spreaders of misinformation use different formats to avoid detection.
Print screens of articles and of social media posts may be shared instead of the actual articles or posts. Some users even do this with their own posts, and republish them as photos.
“Print screens are common in conducting disinformation campaigns. This is just one of the mechanisms they use,” Aleksoska explains. “The problem is much bigger, so the answer must be comprehensive and coordinated.”
Print screens are not only more difficult for the software to detect, but make it harder for people to check, especially if the name of the media outlet that published the content is omitted or cut from the photo.
North Macedonia’s corner of the internet recently saw a print screen from a Swiss media outlet circulating, with a title in German reading: “Currently no vaccine can be approved.” Hundreds of people shared it.
The publisher that first spread this print screen claimed that the Swiss had rejected the German vaccine “because of the risk of death”.
But the real text does not say at all that Switzerland rejected the German vaccine but only that it will first implement a risk control strategy “to prevent side effects or fatalities”.
This way, those who spread fake news have a clear advantage over those who fight to stop it.
To reach the original article, one first has to retype the German title into a search engine, find the text with an identical title among the results and translate it with an online tool. In the meantime, ten more people will have received the print screen and simply clicked “Share”.
Print screens in North Macedonia have also recently been used to spread untrue information about the current dispute between North Macedonia and its neighbour, Bulgaria, which has refused to allow Skopje to start EU accession talks.
Some of these posts present Bulgaria’s demands as something that North Macedonia already accepted.
Another technique used to avoid or baffle filters is mixing Cyrillic and Latin letters that are identical in meaning or form, like the letters a, e, n, x, u, j, s, as well as some others.
When a social media user complains that a post has been removed from their profile, in some cases, another user will advise them next time to mix up the letters, making it harder to detect problematic content.
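The trick described above can be illustrated with a minimal sketch. This is a hypothetical example, not any platform’s real filter: it shows how a naive keyword check misses text in which visually identical Cyrillic letters stand in for Latin ones, and how folding the look-alikes back to Latin recovers the match. The small mapping below covers only a few of the letters mentioned in the article.

```python
# Hypothetical sketch: why mixed Cyrillic/Latin text evades a naive keyword
# filter, and how folding known look-alikes defeats the trick.
# The mapping is illustrative and deliberately incomplete.
HOMOGLYPHS = {
    "\u0430": "a",  # Cyrillic а
    "\u0435": "e",  # Cyrillic е
    "\u043e": "o",  # Cyrillic о
    "\u0440": "p",  # Cyrillic р
    "\u0441": "c",  # Cyrillic с
    "\u0445": "x",  # Cyrillic х
}

def fold_homoglyphs(text: str) -> str:
    """Replace known Cyrillic look-alikes with their Latin counterparts."""
    return "".join(HOMOGLYPHS.get(ch, ch) for ch in text)

banned = "vaccine"
post = "v\u0430ccine hoax"  # Cyrillic 'а' hidden inside otherwise Latin text

print(banned in post)                   # False: naive substring check is fooled
print(banned in fold_homoglyphs(post))  # True once look-alikes are folded
```

A real detector would need a far larger confusables table (Unicode maintains one), but the principle is the same: normalise first, then match.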
Some people spread fake news because they believe in it and think they are contributing. Photo: Pixabay
Ideological foot-soldiers do the hard work
But why would anyone advise others on how to make it harder for social networks to detect their problematic content?
Checking some of the profiles that publish and spread misinformation reveals that, besides the usual suspects – thematic profiles with false names that only publish information from one or more sources, or people who are part of formal or informal organisations and spread their ideology – a large number of users have no known connection to disinformation networks.
Most are ordinary people who do not hide their identities, publish photos of family trips, but also from time to time share some “undiscovered truth” about the coronavirus or a “child abuse plot” – wedged between lunch recipes and pictures of walks in parks.
Fact-checkers and communication technology experts agree that disseminating misinformation is a highly organised activity, often done with a malicious intent – but also that many people share such content without hidden motives. They clearly feel a responsibility to be “on the right side”.
“Some people spread fake news because they believe in it and think that by doing so they are contributing to some kind of fight for the truth to come to light,” Kordalov explains.
This makes the fight against misinformation even more difficult, because while organised networks create and spread false news at the top, most of the work of dissemination is done by individuals and micro-communities that have no connection to them, or even between each other.
“All conspiracy theories are just pieces of the master theory that says that certain elites rule the world. The more somebody believes in that, the more likely he or she would read and share content supporting this theory,” Aleksoska notes.
However, there are some solutions. Algorithms, according to Kordalov, can be reprogrammed to recognise new forms of false news. No final answer can be found to misinformation, he admits, but the two sides constantly compete and the side that invests most effort and resources will lead in the end.
Technological competition, however, is not enough if it is not matched by stronger institutional action, because creating mistrust in institutions is one of the main goals of disinformation campaigns.
Kordalov says it is not enough for the PR services of institutions just to issue announcements rebutting fake news related to their work each time they spot it. They must be actively involved in a two-way communication and react to false news quickly.
“This is often called ‘damage control’ but this is not the point. Their [institutions’] job is to serve the citizens, and providing real information is part of that service,” he says.
One way for institutions to protect public trust in them is to provide high quality services, he adds. If they work well, and if citizens feel satisfied with them, it will be harder for disinformation to hurt them.