Yudkowsky’s Vision of the OpenAI Apocalypse Might Be Real

According to him, while there had been researchers working on AI safety at OpenAI in the beginning, they quit to form the rival startup, Anthropic.

It is hard, at times, to reconcile smart people with their absurd beliefs. Take noted physician-geneticist Dr Francis Collins, the former Director of the National Human Genome Research Institute, who believes that Jesus rose from the dead. Or AI theorist and researcher Eliezer Yudkowsky, who believes that the current track of AI development will inevitably end in nothing less than the death and destruction of humanity.

Today, Yudkowsky sounds spooked, but there was a time when he actually supported the idea of OpenAI.

Yudkowsky’s Influence on OpenAI

In the past few months, the high stakes surrounding the future of AI have soured relations between OpenAI chief Sam Altman and quite a few personalities who are firmly against hasty AI development. Since the early 2000s, Yudkowsky has been a proponent of AI safety, reiterating his theory that an AGI ‘unaligned’ with human values would be enough to wipe out humanity.


In fact, Yudkowsky’s views on prioritising AI safety were serious enough to convince Elon Musk, who went on to co-found OpenAI as a non-profit along with Altman in 2015. Yudkowsky’s goal of building safer AI was absorbed as one of OpenAI’s core objectives at the time.

However, as OpenAI assumed its current identity as a for-profit company, solidified a multi-year deal with Microsoft, and released faster and more powerful AI models like GPT-4, Yudkowsky has sunk into despair.





Yudkowsky’s loss of hope

Last week, he authored a piece in Time magazine after the open letter calling for a six-month pause on the training of AI systems more powerful than GPT-4 came to light. Yudkowsky asked for one thing: shut everything down.

The article was as extreme as Yudkowsky’s own thoughts. “Track all GPUs sold. If intelligence says that a country outside the agreement is building a GPU cluster, be less scared of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue datacenter by airstrike,” he said.

However, if one has been following Yudkowsky’s work, his words align perfectly with his theory. He appeared on Lex Fridman’s podcast a couple of days ago, explaining that the alignment problem isn’t something that we will get multiple attempts to solve. “The problem is that we do not get another 50 years to try and realise that the entire thing is going to be way more difficult than you realised in the start. Because the first time you fail at aligning something much smarter than you are, you die,” he stated.

In a recent blog, Yudkowsky revealed that a number of friends had told him that OpenAI researchers themselves were worried but were forced to keep mum in public. According to him, while there had been researchers working on AI safety at OpenAI in the beginning, they quit to form the rival startup, Anthropic. Anthropic’s new chatbot, ‘Claude’, is claimed to run on technology that is more ‘steerable’.

OpenAI’s non-alignment with human intent

However, the question remains: is Yudkowsky’s doomsday talk too morbid and baseless? Apparently, he is not entirely wrong. A blog posted by OpenAI in August last year about its approach to alignment echoes Yudkowsky almost word for word, albeit in a quieter tone.

“Unaligned AGI could pose substantial risks to humanity and solving the AGI alignment problem could be so difficult that it will require all of humanity to work together,” it stated. The blog further explained that OpenAI’s approach to alignment relies on, what else, AI itself: training systems using human feedback, training AI to assist human evaluation, and training AI systems to do alignment research.

The blog is as frank as Altman has always been about the flawed systems his company releases. It simply states, “There is currently no known indefinitely scalable solution to the alignment problem. As AI progress continues, we expect to encounter a number of new alignment problems that we don’t observe yet in current systems. Some of these problems we anticipate now and some of them will be entirely new”.

Altman addressed Yudkowsky’s worries on Fridman’s podcast when he appeared a few days ago. “A lot of the safety work in deep learning systems was done years back, and given how things have changed and what we know now, it is not enough,” he said. Altman also admitted that Yudkowsky wasn’t wrong. “We need to significantly ramp up the technical work around alignment and we have the tools to do that now,” he stated.

When asked how much truth there was to Yudkowsky’s fears that humanity may come to an end, Altman said that there was, in fact, a “small chance” that it could happen. But Altman is too optimistic about the future to be nervous. “There will be scary moments as we move towards AGI-level systems, and significant disruptions, but the upsides can be so amazing that it’s well worth overcoming the great challenges to get there,” he tweeted in December last year.

If we were to go by Yudkowsky’s claims, it is already too late. But does he see the beauty in an advanced AI system like GPT-4? “I do see the beauty but only inside a screaming horror,” he said.

Poulomi Chatterjee
Poulomi is a Technology Journalist with Analytics India Magazine. Her fascination with tech and eagerness to dive into new areas led her to the dynamic world of AI and data analytics.
