Lack of confidence in the Serbian government’s ability to handle AI is a symptom of a wider lack of trust in official institutions that are rarely transparent.
In a world increasingly driven by artificial intelligence (AI), public trust in the institutions developing and implementing this transformative technology is paramount.
From smart cities to personalized healthcare, AI has the potential to revolutionize every aspect of our lives. However, its rapid development raises ethical and societal issues. In Serbia, as in many other parts of the world, there is a palpable tension between the promise of AI and the public’s trust in the institutions at the forefront of its development and application.
A recent study has delved into the public’s trust in various actors involved in the management and development of AI for the public’s best interest. The results reveal the need for a comprehensive approach to address the trust deficit and foster a more inclusive and transparent AI ecosystem.
While the scientific community actively debates the application of AI, far less is known about public attitudes towards its use. Beyond the main ethical issues associated with AI, there are questions of people's awareness and knowledge of the technology, as well as their trust not only in AI itself but in those who develop and apply it. Understanding all of this is necessary if AI is to be implemented successfully.
Concern about AI fed by distrust in institutions generally
Photo by EPA-EFE/SASCHA STEINBACH
In Serbia, public distrust extends beyond the realm of AI and encompasses a broader skepticism towards political actors and the situation in the country in general. A history of political instability, corruption and a lack of transparency in decision-making processes has fueled this distrust. The erosion of democratic norms, which were weak to begin with, has contributed to a pervasive distrust in public institutions and political actors.
Moreover, the Serbian government has faced criticism for its handling of crises such as the COVID-19 pandemic and for a perceived lack of accountability. These factors have compounded mistrust and skepticism toward the government and other political actors. One example of damaged trust is the misreporting of COVID-19 deaths, first revealed by BIRN in the first year of the pandemic and later confirmed by a study published in Annals of Epidemiology, which showed that the true number of deaths was more than threefold higher than officially reported.
Consequently, the Serbian public has a general sense of disillusionment, which extends to their perception of AI development and implementation. In the context of AI, this broader distrust manifests itself in skepticism toward the government’s ability to manage and develop AI that serves the public’s best interests. The results of the recent study reflect this sentiment, revealing a significant trust deficit in the government, the Ministry of Interior and the Ministry of Justice. They also indicate a preference for non-governmental and international actors over national and governmental institutions in the development and management of AI.
US public also has misgivings
Photo by Pixabay
The research “US Public Opinion on the Governance of AI” (Zhang and Dafoe, 2020) found that American citizens have low trust in the government organizations, corporate institutions and decision-makers expected to develop and apply AI in the public interest.
When it comes to trust in actors to develop and manage AI, university researchers and the US military are the most trusted groups: 50 per cent of the US public express a fair or a great deal of confidence in university researchers, and 49 per cent in the US military. Americans express slightly less confidence in tech companies, non-profit organizations (e.g. OpenAI) and US intelligence organizations, and rate Facebook as the least trustworthy of all the actors.
Generally, the US public expresses greater confidence in non-governmental organizations than in governmental ones. Some 41 per cent of Americans express a great or fair amount of confidence in tech companies.
Notably, individual-level trust in various actors to responsibly develop and manage AI does not predict general support for developing AI, and institutional distrust does not predict opposition to AI development.
New Ipsos polling finds similar results regarding American mistrust of government and companies.
Serbs trust their government least of all
Photo by EPA/RALF HIRSCHBERGER
In Serbia, a study was conducted within the research project “Ethics and AI: Ethics and Public Attitudes towards the Use of Artificial Intelligence” in 2021-2022.
The study aimed to examine various aspects of the use of AI, among them trust in different institutions to develop and use AI in the public’s best interest. Respondents were asked about the government, the army, the Ministry of Interior, the Ministry of Health, the Ministry of Justice, public companies and local governments, university researchers, international research organizations (such as CERN), technology companies (e.g. Google, Facebook, Apple, Microsoft) and non-profit AI research organizations (such as OpenAI).
The results showed that the public has the least trust in the government of Serbia, followed by the Ministry of Interior and the Ministry of Justice, while it has the most trust in university researchers and international research organizations such as CERN. Fully 70 per cent of respondents have no confidence in the government, and 66.4 per cent have none in the Ministry of Interior.
Ranked from most to least trusted, respondents place university researchers first, followed by international research organizations, non-profit AI research organizations and technology companies; significantly less trusted are the Ministry of Health, the army, public enterprises and local governments, the Ministry of Justice, the Ministry of Interior and, last of all, the government.
Addressing the ‘trust deficit’ requires more transparency
Serbian President Aleksandar Vucic (L) talks during the Bled Strategic Forum in 2020 focused on cybersecurity, digitalisation and European security. Photo by EPA-EFE/IGOR KUPLJENIK
Addressing the trust deficit in Serbia requires a multifaceted approach that goes beyond AI and addresses the root causes of distrust in political actors and public institutions. Firstly, there is a need for greater transparency in decision-making processes across all levels of government. This can be achieved through open government initiatives, public consultations, and increased access to information. Secondly, efforts must be made to tackle corruption and strengthen the rule of law. This includes implementing comprehensive anti-corruption measures, enhancing judicial independence, and ensuring accountability for public officials.
Lastly, in the context of AI, involving the public in the decision-making processes related to developing and implementing this technology is crucial. This can be achieved through public consultations, citizen assemblies, and other forms of participatory decision-making. Additionally, there must be a concerted effort to educate the public about AI’s potential benefits and challenges and to address their concerns transparently and openly.
In a world increasingly driven by AI, trust forms the bedrock of a socially and ethically responsible approach to harnessing the potential of this transformative technology. By addressing the trust deficit in Serbia and fostering a more inclusive and transparent approach to the development and implementation of AI, we can build a stronger and more resilient society better equipped to navigate the challenges and opportunities of the AI era.
Two things need to be achieved: educating the public about artificial intelligence, and increasing transparency and dialogue between the public, experts, the private sector and the state. How can this be accomplished? The first step is education of the general public in Serbia. The second is greater transparency in decision-making processes related to the implementation of AI, accompanied by genuine dialogue between the public on the one hand and the state and the private sector on the other.
Understanding the nuances of public trust in this context is essential for developing policies and practices that are ethically sound, socially acceptable, and successful in harnessing the potential of AI for the betterment of society.
The trust deficit in Serbia, particularly in institutions developing and implementing AI, is a symptom of a broader crisis of confidence in political actors and public institutions. As AI continues to transform every aspect of our lives, it is crucial to address this trust deficit to ensure that the development and implementation of this technology are ethically sound, socially acceptable, and, ultimately, successful.
Marina Budic is a Research Assistant at the Institute of Social Sciences in Belgrade. She is a philosopher (ethicist) who conducted the funded research project “Ethics and AI: Ethics and Public Attitudes towards the Use of Artificial Intelligence in Serbia”.
The opinions expressed are those of the author only and do not necessarily reflect the views of BIRN.