This is an outtake from our new newsletter Observations. Sign up to get the full newsletter, including behind-the-scenes content, reading lists, and more, straight to your inbox.
Max Tegmark, the Swedish physicist, machine learning researcher, and MIT professor, has caused quite a commotion recently with his open letter calling for a six-month halt to AI development. The letter, co-signed by 33,002 academics, business leaders, and online petitioners, including Elon Musk, was an attempt to give AI developers an equal opportunity to assess their impact on society without feeling the stress of their competitors getting ahead. The reason for making it six months, specifically, was so that no one would worry that China would use the pause to get a head start.
The open letter was met mostly with ridicule in the media or, at best, a friendly shrug; the overall feeling seemed to be that no one really expected the halt to happen.
But it did propel Max Tegmark back into the zeitgeist, with several podcast interviews and a “summer talk” on Swedish national radio. Tegmark had previously earned recognition for his book Life 3.0, an accompanying TED Talk, and a string of podcast interviews on the subject of AI risk. I’ve personally been very intrigued by his theories, and hold the intro to his book in high regard: a thrilling fictionalization of what would happen if we were to achieve AGI, artificial general intelligence, or human-like intelligence. The chapter is up there with the best AI dramatizations, including the Johnny Depp movie Transcendence and the TV series Westworld, with a riveting description of how this AI would connect itself to the internet, play the stock market to earn money, and soon become the most powerful force in the world.
Together with Nick Bostrom, another Swede at another elite university (Oxford), Tegmark has been a leading voice in the discussion on AI alignment – the need for artificial intelligence to be in line with human goals and values. Bostrom, the author of the book Superintelligence, came to fame with the simulation argument, a hypothesis “that proposes all current existence, including the Earth and the rest of the universe, could be artificial”, published, coincidentally, only a few years after the first Matrix movie came out. Bostrom is also the creator of the “paper clip maximizer”, a simplified thought experiment in which an AGI is instructed to produce as many paper clips as possible and, without regard for human life, ends up optimising this goal to the point that humanity goes extinct.
I have been fascinated by the thinking of both Tegmark and Bostrom, but in light of the past year’s development of large language models like ChatGPT, it has become increasingly obvious that both of their arguments rest on theoretical assumptions that are not yet – and perhaps never will be – proven by science and reality.
In short: their ideas are fiction, not fact.
At least if you believe Marc Andreessen, the venture capitalist behind Andreessen Horowitz and arguably one of the founding fathers of the modern internet. Andreessen came out with an essay called “Why AI Will Save the World” and did a seemingly endless round of podcast interviews to spread his ideas further.
Marc Andreessen argues that the doomsday scenarios around AGI are completely overblown and are not based on any engineering or scientific reality. The current generative AIs are simply auto-correct on steroids: simple language models that don’t want anything more than to please whoever is prompting them.
But on the issue of whether some accidental sentience is being constructed out of these models, Andreessen is very clear. There is no consciousness hidden in these machines. There is no “there” there.
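For the technically curious, here is a minimal sketch of what “auto-correct on steroids” means in practice. It assumes the Hugging Face transformers library and uses the small gpt2 model purely as an illustration; all the model does is repeatedly predict the most likely next token.

```python
# A toy illustration of "auto-correct on steroids": a language model
# simply continues a prompt, one most-probable token at a time.
# Assumes the Hugging Face `transformers` library is installed;
# "gpt2" is just a small, publicly available example model.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "The open letter asking for a six-month pause on AI development was"
result = generator(prompt, max_new_tokens=20, do_sample=False)

# The continuation is nothing more than the model's best guess at
# what text usually follows a prompt like this.
print(result[0]["generated_text"])
```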
He then goes on to list the many reasons AI will be beneficial for humanity, from AI teachers for every child on earth and exponential developments in science and medicine, to the claim that “every person will have an AI assistant/coach/mentor/trainer/advisor/therapist that is infinitely patient, infinitely compassionate, infinitely knowledgeable, and infinitely helpful”.
Listening to Max Tegmark and Marc Andreessen back to back gives a fascinating view of the current debate around artificial intelligence, and I encourage anyone curious about the topic to dive deep into both sides of the argument.
My bet: it feels to me like Andreessen is winning. But that could very well be a kind of confirmation bias, since we are all influenced by our personal vantage points. Max Tegmark, for instance, explains how his recent fatherhood has made him more aware of his own mortality, which seems to fuel his AI doomsday narrative. I, on the other hand, always lean towards the tech optimist’s viewpoint, and am eager to experience the future where my digital self can start doing work for me (see last week’s column).
Or, in the words of Sam Harris, a well-documented AI worrier, at the end of his interview with Marc Andreessen: “I am rooting for you. I want your version of the future to become true.”