Will AI replace us? Yuval Noah Harari’s stark warning about a future without borders

In yet another illuminating conversation, renowned author Yuval Noah Harari, known for his acclaimed works ‘Sapiens’ and ‘Nexus’, shared his unique perspective on the rapid rise of AI and how it will impact humanity. “AI will not be one big AI. We are talking about potentially millions or billions of new AI agents with different characteristics, again, produced by different companies, different countries,” the author said in his latest conversation at the WSJ Leadership Institute.

During the conversation, one of the guests pointed out that through history, organising principles like religion and the church shaped society in a unified way, but with AI there is no single central force; many different AIs are being built with different goals and values. What happens when there isn’t one dominant AI but many competing AIs evolving quickly? What kind of world does that create?

In his response, the author said that we are dealing with potentially millions or billions of new AI agents. “You’ll have a lot of religious AIs competing with each other over which AI will be the authoritative AI rabbi for which section of Judaism. And the same in Islam, and the same in Hinduism, in Buddhism, and so forth. So you’ll have competition there. And in the financial system. And we just have no idea what the outcome will be.” He said that we have thousands of years of experience with human societies, so we at least have some sense of how such things tend to develop. But when it comes to AI, we have zero experience. “What happens in AI societies when millions of AIs compete with each other? We just don’t know. Now this is not something you can simulate in the AI labs.”

Harari went on to say that even if OpenAI wanted to check the safety or the potential outcome of its latest AI model, it cannot simulate history in the laboratory. While it may be able to check for all kinds of failures in the system, it cannot predict what happens when millions of copies of these AIs are out in the world, developing in unknown ways. He called it the biggest social experiment in human history, one of which all of us are a part, and nobody has any idea how it will develop.

Extending his argument, Harari used the analogy of the ongoing immigration crisis in the US, Europe and elsewhere. According to him, people are worried about immigrants for three reasons: they will take our jobs, they come with different cultural ideas that will change our culture, and they may pursue political power. “They may have political agendas; they might try to take over the country politically. These are the three main things that people keep coming back to.” According to the author, one can think of the AI revolution as simply a wave of immigration of millions or billions of AI immigrants that will take people’s jobs, bring very different cultural ideas, and may even try to gain some kind of political power.

“And these AI immigrants or digital immigrants, they don’t need visas; they don’t cross a sea in some rickety boat in the middle of the night. They come at the speed of light,” he said, adding that far-right parties in Europe talk mostly about human immigrants while overlooking the wave of digital immigrants heading their way. Harari feels that any country that cares about its sovereignty should care about the future of its economy and culture. “They should be far more worried about the digital immigrants than about the human immigrants.”

What does it mean to be human right now?

When the host asked the acclaimed author what it meant to be human at the moment, Harari responded by saying, “To be aware for the first time that we have real competition on the planet.” The author said that while we have been the most intelligent species by far for tens of thousands of years, now we are creating something that could compete with us in the near future.

“The most important thing to know about AI is that it is not a tool like all previous human inventions; it is an agent. An agent in the sense that it can make decisions independently of us, it can invent new ideas, and it can learn and change by itself. All previous human inventions, you know, whether they’re printing presses or the atom bomb, they are tools that empower us,” said Harari.

AI learns from us

The host said that there is a lot of responsibility on leaders because how they act shapes how the AI will be. “You cannot expect to lie and cheat and have a benevolent AI.” In his response, Harari acknowledged that there is a big discussion around the world about AI alignment. He said that a lot of effort is focused on the idea that if we can design these AIs in a certain way, if we can teach them certain principles, they will be safe. However, he sees two problems with this approach: firstly, the very definition of AI is that it can learn and change by itself; secondly, even if you think of AI as a child that can be educated, children routinely end up surprising, and sometimes horrifying, the people who raise them.

“The other thing is, everybody who has any knowledge of education knows that in the education of children, it matters far less what you tell them than what you do. If you tell your kids not to lie, and your kids watch you lying to other people, they will copy your behaviour, not your instructions.” Similarly, Harari explained that if the AIs being educated can observe how humans actually behave, and see even some of the most powerful humans, including their own makers, lying, then the AIs will copy that behaviour. “People who think that I can run this huge AI corporation, and while I’m lying, I will teach my AIs not to lie; it will not work. It will copy your behaviour,” he said.

