The organisers of the Davos World Economic Forum (WEF) named ‘misinformation’ the ‘biggest short-term risk’ and one of the biggest long-term risks (alongside extreme weather and changes to ecosystems) facing humanity. At last year’s meeting, misinformation was not even shortlisted as a threat.
So, what has changed?
We are entering a year of mega-elections involving more than half of the world’s population in 70 nations, including Russia, India, the EU and the US.
Generative AI can produce an unlimited amount of content that cannot be controlled.
The chasm between the interests of political elites and those of ordinary people is widening, breeding bad blood: the polls of the world’s leading political heavyweights are cratering while alternative, non-establishment politicians and parties enjoy an unprecedented rise.
Translated into the Davos speak, ‘the potential impact of [AI-induced] disinformation is heightened in a context marked by social polarisation, fragile democracies, geopolitical tensions, and a challenging tech environment’.
They note that the ‘authoritarian nations’ (China, Russia, Iran and the Persian Gulf monarchies) have successfully proven the competitiveness of their socioeconomic and technology-policy models. China supplying innovation and technology, Russia outstripping the EU in economic growth and the UAE and Qatar getting richer without holding elections have all become a major concern for the West.
So, what do you need to know about ‘misinformation’ as portrayed by the global newspeak?
By ‘misinformation’, they do not mean inaccurate information. Rather, it stands for the dissemination of messaging and assessments conflicting with the elites’ top agenda. It is indeed a fragile commodity as tomorrow’s ‘misinformation’ can still be today’s ‘truth’. A legitimate or even debatable statement that can be shared in a professional setting is promptly decried as ‘misinformation’ when shared with larger audiences.
This was the case during the Covid pandemic. Uttering or posting an incontrovertible and scientifically sound statement about the vaccines having contraindications and side effects – just like any other medication does – sparked social media bans, de-platforming and even criminal charges.
‘Deterring’ the threat of misinformation requires constant control spanning all levels from strategic (controlling social media and legacy media to nip the ‘spread of misinformation’ in the bud) to situational (following the changeable agenda and labelling statements as ‘misinformation’ as deemed fit). However, given that AI is capable of generating all sorts of content, it seems like a tough ask.
How do you address the threat other than by resorting to ‘content-checking hubs’? The latter devoured plenty of money and resources during the pandemic and have already proved ineffective.
New social media laws holding platforms accountable for user-generated content and obliging them to swiftly delete offending information (a new take on censorship) are being enforced all over the world, from Brazil to the EU nations.
A raft of new ‘anti-hate speech and anti-discrimination’ legislation criminalising any emotionally charged statement in the public domain, in political discourse and in the realm of art and culture is also being rushed through on a global scale.
‘Self-censorship’ is being forced onto tech giants. Elon Musk-owned X (formerly Twitter) has been drawing a lot of flak and sparking outrage because the platform’s policies do not encourage self-censorship, something that cannot be said of AI developers. At WEF, OpenAI unveiled a roadmap and a set of guidelines that will be used to counter ‘misinformation’ during the election campaigns. Simply put, the chatbot will be barred from creating politically ‘unsafe’ content based on a list of specific buzzwords and concepts.
The event organisers are planning more projects aimed at boosting tolerance, climate action awareness and other types of awareness benefitting the global top dogs as well as spreading the ‘good news’ among the nations of the Global South.
That is all that has been released of the plan so far. Let us wait and see if it pans out. Chances are, it will not. But the fact that the free circulation of information offering a critical stance on the actions of national and global elites ‘undermines the world order’ and becomes the biggest threat to civilisation in 2024 is an integral part of what WEF participants refer to as the ‘polycrisis’.