Bill Gates has called GPT the most important advance in technology since 1980. Last month, OpenAI released GPT-4, the most advanced large language model to date. Many believe GPT-4 is the tipping point for artificial general intelligence (AGI), a larger goal that OpenAI, the creator of the GPT models, is hell-bent on achieving.
However, experts in the AI community have expressed concerns about the rapid and significant developments the field has seen in recent months. Against this backdrop, a group of AI experts and critics, including tech heavyweights like Elon Musk, Gary Marcus, and Steve Wozniak, has signed an open letter calling on all AI labs to immediately pause the training of AI systems more powerful than GPT-4 for at least six months.
The letter, which has garnered more than 1,100 signatures to date, argues that without proper safeguards and checks and balances in place, the unparalleled advancements in AI could pose an existential threat to humanity. “This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium,” it states.
However, not everyone in the community agrees with the open letter.
On Halting AI Research
Andrew Ng, founder and CEO of Landing AI, called it a terrible idea. He believes the technology is already making an impact in education, healthcare, and food, which, in turn, will help many people.
Indeed, GPT-4 has some amazing use cases. In India, the technology has been used to build KissanGPT, a chatbot that helps farmers resolve their agricultural queries. Recently, a dog owner even used GPT-4 to save his canine’s life.
“Banning matrix multiplications for six months is not a solution to the dangers AI potentially poses while ignoring vast benefits. As with any technology, I believe in humanity’s ability to embrace positive benefits while figuring out safety guardrails. No need to stop progress,” Anima Anandkumar, senior director of AI research at NVIDIA, said in a tweet.
Similarly, Yann LeCun, chief AI scientist at Meta and a long-time critic of the technology, has also refrained from signing the open letter. In a tweet, LeCun said he disagrees with the whole premise of the movement.
The year is 1440 and the Catholic Church has called for a 6 months moratorium on the use of the printing press and the movable type.
Imagine what could happen if commoners get access to books!
They could read the Bible for themselves and society would be destroyed.
— Yann LeCun (@ylecun) March 30, 2023
Furthermore, some have argued that the open letter is contributing to the current hype surrounding AI and its transformative potential in the business world. Emily M. Bender, a professor at the University of Washington, believes the letter will only help technology developers market their products.
“This open letter — ironically, but unsurprisingly — further fuels the AI hype and makes it harder to tackle the real, already occurring AI harms,” Arvind Narayanan, professor of computer science at Princeton University, said.
Other Issues with the Open Letter
Besides pressing for an outright pause, the open letter raises the question: should we automate all jobs, including the fulfilling ones? Narayanan points out that the idea that LLMs will soon replace humans is outright ridiculous. “Of course, there will be effects on labour and we should plan for that,” he said.
While AI may eventually replace some jobs, the current state of LLMs makes that unlikely. These models do a wonderful job of guessing the next possible word in a sentence; however, they do not really understand context.

At best, LLMs could serve as valuable assisting tools in various professions. For instance, doctors may use them to assist with diagnoses. Nevertheless, they cannot truly replace the expertise and judgement of doctors.
Further, the letter also raises the question: should we risk the loss of control of our civilization? This concern may seem implausible, as it implies an apocalyptic, Judgement Day-like scenario often depicted in science-fiction films.

For that to happen, AI would have to achieve superintelligence, and experts, some of whom are signatories to the letter, have said LLMs may never lead to AGI. “The exaggeration of capabilities and existential risk is likely to lead to models being locked down even more, making it harder to address risks,” Narayanan said.
100% agree with this. https://t.co/a0TD2RG1yZ
— Gary Marcus (@GaryMarcus) March 30, 2023
What We Think
Without a doubt, the fast-paced innovation happening in AI is thrilling and pretty scary at the same time. Moreover, many big-tech firms dismantling their responsible AI teams, racing to release AI models with less testing, and pressuring researchers to innovate faster can be a deadly cocktail for the world.
Hence, we believe it is critical that AI development and deployment are done in a responsible and ethical manner, and the signatories’ intentions appear to be the same. However, the open letter was poorly drafted, with some inappropriate choices of words.

Read more: AI May Get Scarier, Govts Must Tame it in Time
Besides, the letter fails to adequately detail the risks associated with the current state of AI technology, and its call to immediately halt AI research is controversial. Nonetheless, a larger discussion between developers and policymakers around the ethical use of AI is a welcome move.