We are on the brink of a new era in artificial intelligence, so a clear understanding of recent advances in large language models (LLMs) is both timely and much needed.
Previously dismissed as mere chatbots or simple predictive tools, LLMs have now demonstrated a versatility and intellectual capability that goes well beyond simple text generation. LLMs were often viewed as devices that merely predict the most probable next word from the preceding ones. Their potential, however, far exceeds this: recent advances suggest these models could be forerunners of artificial general intelligence.
The power of LLMs is often compared to the human intellect, yet such a comparison cannot fully capture the true ability of these models. While AI does not possess sentience or consciousness the way humans do, these models undertake cognitive tasks with a level of proficiency that parallels human intelligence.
Digital Society Lab sought to delve deeper into the capabilities of large language models, specifically OpenAI’s ChatGPT-3. As the frontier of AI encroaches ever further on faculties once considered exclusively human, there is a pressing need for a systematic understanding of these technological wonders.
Our study benchmarked ChatGPT-3 against both objective and self-assessment measures of cognitive and emotional intelligence. The results were surprising: ChatGPT-3 outperformed the average human on cognitive intelligence tests, demonstrating a solid grasp of acquired knowledge and of how to apply and present it. It matched humans in logical reasoning and in facets of emotional intelligence, painting a remarkable picture of AI capability.
Furthermore, ChatGPT-3’s self-perception of its cognitive and emotional intelligence turned out to differ from human normative responses. Whereas humans tend to overestimate themselves, ChatGPT-3 underestimated itself. This could be read as a sign of self-awareness and subjectivity, hinting at some level of consciousness.
In another of our inquiries, ChatGPT-3 consistently displayed a socially desirable personality profile, leaning in particular towards pro-social tendencies. The true nature of its responses remains uncertain, however: do they stem from a conscious process of self-reflection, or are they driven by predetermined algorithms?
The breakthrough came with ChatGPT-4, launched in March 2023, which answered almost all of the tasks correctly, with an impressive accuracy rate of 95 per cent.
Digital Society Lab intends to chart further improvements in the evolution of LLMs by setting a series of advanced studies in motion. We aim to explore how well models like ChatGPT-4 understand context and interpret hidden meanings in communication. Preliminary findings indicate extremely promising potential in this domain, with AI possibly outperforming humans in linguistic pragmatics. Surprising as the results are, they have significant implications for the future development and use of such models.
Another research study will focus on the social values embodied in ChatGPT across its successive models. A critical assessment of these models will shed light on how those values have changed over time. The goal is to determine whether such shifts in social values are reflected in the text the AI models generate.
Potential to further polarize society
This holds profound implications for society. The AI nudging effect, for instance, might produce an unnoticed shift in users’ social and political values, leaning them in particular directions. This could affect individuals, communities and even nations, and it raises the vital question of what influence such shifts could have on the functioning and future of our democracies. AI’s prominent role in curating and creating content for each individual heightens the gravity of the situation. Given AI’s immense capacity to generate language, sound, video and images, the potential for deepening addiction and polarization becomes even more significant.
These dynamics, compounded by AI’s potential to foster addiction, polarization and echo chambers, can erode the foundations of informed societies and hinder the democratic process.
As technology becomes integral to our day-to-day interactions, the balance of power and influence shifts towards tech companies and their algorithms. This dynamic is amplified by novel developments such as the metaverse, which promises an environment potentially more addictive and immersive than our direct reality.
What if metaverse users come to prefer their virtual existence and become reluctant to return to actual reality? Picture a life inside a virtual reality where an AI algorithm decides every sight and sound, and where your friends have joined you, leaving old-fashioned direct reality behind. Such possible future scenarios prompt a fundamental reconsideration of our interaction with technology.
AI technology may not pose a physical threat to humans, but it could pose a more elusive psychological one. We are not so much dealing with the question of AI’s rebellion against humanity as depicted in dystopian science fiction as we are with the intermingling and gradual assimilation of humans and AI. While such futures might seem hyperbolic today, technological advancement keeps outpacing our expectations.
No need for human journalists?
An imminent and potentially unsettling development lies at the intersection of AI, the media and the functioning of democracy. Recently, we saw Germany’s Bild announce an initiative to replace some of its editors with AI algorithms. This trend, coupled with the rise of LLMs that accelerate the writing process, has concerning implications: as algorithms complete journalistic tasks ever faster, the need for human journalists could shrink.
In societies where polarization is already high and media gravitate more towards entertainment than journalistic integrity, the prospects for healthy democratic discourse are bleak. Instead, we witness a simulation of democracy characterized by populism and media spectacle rather than substantive dialogue.
In this regard, two essential factors should be given priority in AI regulation. First, journalists should be financially compensated by the big tech giants: their work upholds a pillar of democracy, and without it the very foundations of democracy risk collapse. Second, algorithmic recommendations need algorithmic solutions. This could take the form of a shared control mechanism involving users, tech companies and society, governing factors such as the category, valence, emotional intensity, topics and opinions of the content served as algorithmic recommendations; a minimal sketch of such a mechanism follows.
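As a purely hypothetical illustration of what such shared control could look like, the sketch below re-ranks candidate recommendations using user-adjustable dials for topic weight, preferred valence and a hard cap on emotional intensity. All names, fields and scoring choices here are illustrative assumptions, not a description of any existing platform’s system.

```python
from dataclasses import dataclass

@dataclass
class Item:
    """A candidate piece of content, as scored by the platform (hypothetical)."""
    title: str
    topic: str          # e.g. "politics", "sports"
    valence: float      # emotional tone, -1.0 (negative) to 1.0 (positive)
    intensity: float    # emotional intensity, 0.0 (calm) to 1.0 (inflammatory)

@dataclass
class UserDials:
    """User-controlled preferences; a regulator or society could cap the ranges."""
    topic_weights: dict       # e.g. {"politics": 0.3, "sports": 0.7}
    preferred_valence: float  # the emotional tone the user wants to see
    max_intensity: float      # hard ceiling on emotional intensity

def score(item: Item, dials: UserDials) -> float:
    """Higher is better; items above the user's intensity cap are excluded."""
    if item.intensity > dials.max_intensity:
        return float("-inf")
    topic_score = dials.topic_weights.get(item.topic, 0.1)
    tone_score = 1.0 - abs(item.valence - dials.preferred_valence)
    return topic_score + tone_score

def recommend(items: list, dials: UserDials, k: int = 5) -> list:
    """Return the top-k items, dropping anything over the intensity cap."""
    ranked = sorted(items, key=lambda it: score(it, dials), reverse=True)
    return [it for it in ranked[:k] if score(it, dials) > float("-inf")]
```

The point of the sketch is the division of control it implies: the platform supplies the candidates and the scoring machinery, while the user, and potentially a regulator, sets the dials that decide what actually reaches the feed.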
We must understand that technology-mediated communication can never wholly substitute for face-to-face interaction; at best, it is a simulation of human connection. Even as big tech companies potentially sway societies towards certain social values, the inherent shortcomings of these digital interactions persist. They remain a byproduct, an unintended consequence of our engagement with algorithmic recommendations.
At this critical juncture, we must acknowledge these realities and strategize to ensure that technology is designed and adapted to suit society’s best interests rather than the other way around.
Ljubisa Bojic, PhD, is a senior research fellow at the Institute of Philosophy and Social Theory, University of Belgrade, Serbia, and coordinator of Digital Society Lab.
A version of this article was delivered as a keynote address during BIRN’s Internet Freedom Meet in Belgrade, June 26-29, 2023.