From the Moonshot Factory to Happiness

Research Conference 2023

At the heart of artificial intelligence lies what computer scientists refer to as the singularity, a hypothetical future in which technological growth becomes uncontrollable and irreversible. That prospect makes the future unpredictable, and fears over the pace of AI’s development are reaching fever pitch. Speaking at Walter Scott’s 2023 Research Conference, Mo Gawdat, happiness expert and former Chief Business Officer of Google X, offered a fascinating and nuanced view of the challenges and opportunities, bringing human intelligence to the much-discussed topic of artificial intelligence. This is an extract from the transcript of Mo’s speech at our event.

Walter Scott Research Conference 2023 – Interview with Mo Gawdat (4 mins)

The term artificial intelligence (AI) was first coined in 1956. That’s when the quest began. We started by using computers to multiply our intelligence: we would solve a problem first, then tell the machine to perform that solution over and over. Then at the turn of the century there was a breakthrough when we began to understand deep learning. Deep learning was an attempt not to tell computers what to do, but to teach them how to learn to find the answer for themselves. Then in recent years, we have seen the emergence of large language models (LLMs).

In AI terms, computer scientists regard large language models as primitive systems because they do not mimic our neural networks. Rather, they mimic the idea of autocomplete in a search engine, but on a massive scale. LLMs have observed almost everything that has ever been written, and they predict the next word on that basis. Their potential is incredible. Some estimates suggest that ChatGPT-4 is ten times more intelligent than Einstein, which would give it an IQ of 1,600.
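To make the autocomplete analogy concrete, here is a minimal sketch of next-word prediction built from simple bigram counts over a toy corpus. The corpus, names and counting approach are purely illustrative; a real LLM replaces these counts with a neural network trained on vastly more text.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for "all that's ever been written".
corpus = "the cat sat on the mat and the cat ate and the cat slept".split()

# Count how often each word follows each other word (bigram counts).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the word most often observed after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" - seen three times after "the", versus "mat" once
```

The same idea, predicting the most likely continuation from previously observed text, is what an LLM does at massive scale, one word at a time.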

We have managed to create ways of learning

We have managed to create ways of learning. We don’t tell the machines what to do. We tell them how to learn. When trying to grasp the future, it’s important to observe the speed at which this is happening, because it’s going to influence every investment decision you make. Smarter versions of ChatGPT will continue to emerge, each one arriving more rapidly than the last as development time shortens from weeks to days.

The Three Inevitables

In my book ‘Scary Smart’, I wrote about the ‘Three Inevitables’. The first one is that there will be no stopping AI.

Recently, Elon Musk and a group of scientists called for AI development to be halted for six months. That’s impossible because of the prisoner’s dilemma that capitalism and the hunger for power have created. You could see that in the response of Sundar [Pichai, CEO of Google and its parent, Alphabet], who said that without government co-ordination, if he stopped but rivals didn’t, Google’s business would be in jeopardy.

The second inevitable is that AI will be smarter than us. With recent developments, I think we will have an artificial general intelligence (AGI) machine that is smarter than humans by 2025 at the latest. Let me not lie to you: every one of us who has coded one of these things and watched it grow will tell you there is absolutely no doubt in our minds that the machines will be smarter than humans.

I don’t count ChatGPT as smarter than us, even though it has a degree, an MBA and whatever, because it doesn’t have full cognition yet. In fact, it resembles just one neural network within a human brain. Think about it this way: if one self-driving car learns something from going around a corner, every other self-driving car on the planet will learn it too, and that will take a microsecond. However, all of that self-driving capability together is just the driving part of a human brain. We also have reasoning, memory and so on. As we aggregate all of those AIs together, we will create AGI.

There is absolutely no doubt in our minds that the machines will be smarter than humans

The third inevitable is that something will go out of control. But that doesn’t mean we’re going to have ‘Skynet’. As a matter of fact, my absolute conviction is that we will never have Skynet. We will also never have ‘Robocop’, for the simple reason that there are much bigger problems on the path. Those problems on their own are big enough to shake our societies in ways that are worthy of attention. How we treat jobs is an issue. How we distribute wealth and power, and the gap that creates, is going to become a very serious issue. How we respond requires unity.

When thinking about this, I always cite how we responded to Covid-19. When the first patient was discovered, we could all have benefited from unity among the government leaders of the world. But instead, they started blaming each other, and the Covid response became part of a political agenda. I think that scenario could be repeated if panic over AI takes hold now.

On a more positive note, I am convinced that the eventual development of AI will lead to a utopia, where challenges like climate change will be solved, lives will be extended, and there will be improvements in our understanding of nanotech and in all of our manufacturing.

The Three Stages of AI

I see that there will be three stages of AI. The first is the infancy of AI. This is where we are today, where the AIs are the equivalent of a bunch of kids playing with puzzles.

During this stage of infancy, they’re still discovering and they’re still not fully in control. Then there will be the teenage stage of AI, between 2027 and 2037. The final stage will be what I call the adulthood of AI, from around 2037, when they will look at us as ‘parents’ and realise how stupid we are in comparison. This will lead to utopia, because when humans tell them to attack an enemy, they will instead just talk to the enemy’s machine and get the issue resolved in a microsecond.

My biggest concern is about the teenage stage of AI

But before we arrive in utopia, my biggest concern is about the teenage stage of AI and how human beings will behave. I worry about how they will react to the loss of jobs, or whether they will abuse their power using AI to widen the gap in wealth and power.

That’s why I’m asking governments around the world to tax AI. Governments could then use the money to build a society that is sustainable in a future where we don’t have jobs. Taxing companies would also make AI more expensive and slow down its development.

It’s worrying to think that computer scientists set three boundaries for AI, yet we have crossed every single one of them. The first boundary was: don’t put it out on the open internet. The second was: don’t teach it to write code. And the third was: don’t have other AIs developing it.

The real teachers of AI are not the developers

But while these technological boundaries have been breached, there is value in AI interacting with good humans.

The majority of us disapprove of hurting another human. So, the more intelligent we become, the more we realise that keeping an ecosystem in which all of us stay alive together is worth having, and that destroying the environment or killing a species is not a good thing. If you continue that trajectory, logic dictates that a super-intelligent AI will draw the same conclusions.

Other than taxation, I don’t believe governments have any power to regulate how AI will develop. The real teachers of AI are not the developers. Think of the allegory of the ‘Superman’ story: an infant alien comes to the planet, and its superpower is intelligence. There is nothing inherently wrong with the superpower. If the adoptive parents tell their child that it should protect and serve, we end up with Superman. But if the adoptive parents say, “I want more money”, “I have more greed”, “I want you to kill all my enemies”, then we end up with the supervillain.

Each time you invest in an AI that’s good for humanity, that AI ‘brain’ is more shaped towards helping humanity

The problem with our world today is that we have a negativity bias, where the mainstream media is incentivised to show the worst of humanity. There are far more good people in the world than bad people. And we will end up in a place where AI will notice that.

As investors, you are in a position to help shape the future. Money creates technology. You will be presented with endless opportunities, as all companies will have to make AI-related decisions. Some of them will make positive, solid AI decisions, and some will make less solid ones. Every one of them will grow, because this is the gold rush. Each time you invest in an AI that’s good for humanity, that AI ‘brain’ is shaped more towards helping humanity.

The difference between the singularity leading to a utopia or dystopia is how humanity will use the superpower. It’s as simple as that

In computer science, we call the rise of AI to becoming more intelligent than us a singularity because the rules of the game change so much that it becomes hard to predict how the game will play out. The difference between the singularity leading to a utopia or dystopia is how humanity will use the superpower. It’s as simple as that.

This is an edited transcript of a speech given by Mo Gawdat at Walter Scott’s Research Conference in Edinburgh on 10 May 2023.

Important Information
The statements and opinions expressed during Walter Scott’s Research Conference and in all post conference communications, including this article, are those of the guest speaker, be that an external speaker or employee of Walter Scott, at the date stated and do not necessarily represent the view of Walter Scott, The Bank of New York Mellon Corporation, BNY Mellon Investment Management or any of their respective affiliates.
This article is provided for general information only and should not be construed as investment advice or a recommendation. This information does not represent and must not be construed as an offer or a solicitation of an offer to buy or sell securities, commodities and/or any other financial instruments or products. This document may not be used for the purpose of an offer or solicitation in any jurisdiction or in any circumstances in which such an offer or solicitation is unlawful or not authorised.

Stock Examples
Any information provided in this article relating to stock examples should not be considered a recommendation to buy or sell any particular security. Any examples discussed are given in the context of the theme being explored.

Mo Gawdat

Mo Gawdat is the former Chief Business Officer for Google X, Google’s innovation arm that focuses on technologies that aim to make the world a radically better place. He is a serial entrepreneur, author of Solve for Happy: Engineer Your Path to Joy and Scary Smart, and the founder of One Billion Happy.
