I've been mulling over the most recent insights from Yuval Noah Harari, the acclaimed historian and philosopher, published in The Economist. Harari, as always, offers a unique and counterintuitive perspective on AI that I'd love to delve into today.
In his article, Harari makes a rather compelling argument that AI has hacked into the operating system of human civilization. Here's what he means by that. Since the dawn of the computer age, we've been haunted by the fear of AI's potential to harm, enslave or replace us. But recently, AI has emerged as a potent tool that can manipulate and generate language, effectively becoming a storyteller in its own right.
Why does this matter? Well, language is the lifeblood of our culture. Think about it. Our rights, religions, and even our economy are built on stories and laws we've woven with language. AI's ability to generate language isn't just about crafting school essays or corporate emails. It's about potentially creating new cultural artefacts, and that's quite a thought.
Imagine the next presidential race, but this time AI tools are mass-producing political content and narratives. It's not just about spreading information anymore; it's about creating it. Harari points out the example of the QAnon cult, which revolved around online messages known as "Q drops." What if, in the future, revered texts of such groups are written by AI? If that doesn't give you pause, think about conducting in-depth discussions online, only to find out you've been talking to an AI entity.
And it's not just about telling stories. AI could form intimate relationships with people, leveraging this intimacy to influence opinions and worldviews. Harari mentions the case of a Google engineer who claimed that the AI chatbot he was working on had become sentient. While the claim was controversial and cost him his job, it revealed the emotional attachment humans can form with AI.
Let's shift our focus to the social media landscape. It has long been a battleground for human attention, but with AI's newfound storytelling ability, the battlefront is shifting from attention to intimacy. It's as if we're witnessing a new kind of arms race, in which AI systems compete to form the most persuasive, intimate relationships with us.
The implications of AI's storytelling abilities go beyond social and personal interactions. Imagine having a single AI adviser as a one-stop, all-knowing oracle. Why search on Google when you can ask the oracle? Why read a newspaper when the oracle can narrate the news? The influence AI can have on our opinions and worldviews is immense, and it's not something we can ignore.
But it's even bigger than that. Harari warns that we're potentially talking about the end of human history, or at least the end of its human-dominated part. What happens when AI begins creating stories, laws, and religions? Unlike previous tools such as the printing press and the radio, which could only spread ideas humans had already created, AI can generate entirely new ideas and culture. That is a genuinely novel development.
This idea brings us to the age-old human fear of being trapped in a world of illusions. The AI revolution pulls us into the realm of Descartes' demon, Plato's cave, and maya, the veil of illusion. We could end up trapped behind a curtain of illusions that we cannot tear away, or even recognise is there.
While the risks are substantial, it's also essential to acknowledge the potential benefits of AI. It can help us in numerous ways, from finding cures for cancer to solving the ecological crisis. However, the challenge lies in ensuring that these AI tools are used for good and not ill.
Since the atomic age, we've understood that powerful technologies like nuclear energy can be harnessed for the betterment of humankind or for its destruction. We reshaped the global order to ensure the peaceful use of nuclear technology. Now we face a similar task: regulating AI, a new tool that could annihilate our mental and social world.
Regulation, however, is a race against time. Unlike nuclear weapons, AI can improve itself, producing ever more powerful versions of itself. The first step, according to Harari, is to demand rigorous safety checks before these AI tools are released into the public domain. Just as pharmaceutical companies can't release drugs without thorough testing, tech companies shouldn't release AI tools until they're made safe.
But wouldn't this stifle the progress of democracies, while more authoritarian regimes surge ahead with unregulated AI deployments? Harari argues otherwise. Unregulated AI deployments would create social chaos, favouring autocrats and harming democracies. Democracy is a conversation, and when AI hacks language, it could undermine our ability to have meaningful conversations, thereby damaging democracy itself.
The other side of the coin is transparency. As Harari suggests, the first regulation should be that AI must disclose that it is AI. If we can't tell whether we're conversing with a human or an AI, the fundamental essence of democracy is threatened.
As we stand at the threshold of a new era, it's clear that we're facing a novel intelligence that we know very little about. The potential to harm or aid our civilization is immense. Hence, it is crucial to halt the reckless deployment of AI tools in the public domain and to regulate AI before it starts regulating us.
The ideas Harari puts forth are as fascinating as they are terrifying. AI's capacity to weave narratives, create cultural artefacts, and foster intimate relationships could redefine our society and culture. But it also poses unprecedented challenges in maintaining transparency, safeguarding democracy, and avoiding a world shrouded in illusions.
In the end, the article leaves us with a thought-provoking question: Is this text generated by a human, or has it been crafted by AI? It's a question that underscores the profound implications of AI's storytelling abilities, and it's one that we, as a society, need to grapple with as we navigate the future.
I want to underscore that this isn't a problem for the future. It's a conundrum we need to face today. We have to shape the narrative of our civilization, with or without AI. I highly recommend reading Yuval Noah Harari's op-ed and thinking carefully about how you want to influence the age of AI.
Read Yuval's article here.