Sam Altman, in his latest podcast with Lex Fridman, floated the idea of ‘making humans super great’.
In response to Fridman’s apprehension about AI being integrated into our world through prompt chains and number of interactions, Altman said, “I am excited about a world where AI is an extension of human will, and an amplifier of our abilities. . . maybe we never build AGI but we just make humans super great. Still a huge win”.
This is an interesting proposition that we must sincerely ponder. Generative AI is still in its infancy; however, the idea of the existing technology amounting to AGI in the near future has already been dismissed by a number of experts.
But numerous applications offer a peek into what might be. Generative AI today, with its limited yet mind-blowing variety of use cases, can not only help humanity become more productive or creative, but also more capable.
This would, in turn, drive wealth creation and an overhaul of human intelligence and productivity, producing an ‘AI human’, if you will, who would then help train these systems through RLHF to improve themselves, becoming a link between today’s AI and AGI, and ASI after that.
Let’s think philosophically for a bit: any change in life goes through three stages, denial, acceptance and adoption. Artificial intelligence is no different.
Humans are, first and foremost, sceptical of a leap in technology, later developing curiosity and, finally, learning to adopt it. For instance, when a proposed policy recommended the integration of calculators into the school mathematics curriculum at all grade levels for classwork, homework and evaluation, teachers protested profusely, stating that “such an implementation would hinder students’ ability to learn basic mathematical concepts”.
We are at one such juncture in human existence, where many are wary of the innovation in generative AI. People fear being replaced by such systems and their job roles being rendered redundant. The situation is especially grim considering the Goldman Sachs report suggesting that AI could replace up to 300 million workers around the world.
However, one could question the speculative nature of such reports and the conditional theories they posit to prove their hypothesis, such as: “If generative AI delivers on its promised capabilities, the labour market could face significant disruption”.
There are many who have likewise posed questions and raised existential alarms. A group of stakeholders and notable names in AI (Gary Marcus, Elon Musk and Apple co-founder Steve Wozniak, among several others) has called for a temporary pause, of at least six months, on training systems beyond GPT-4, because they believe that “AI systems with human-competitive intelligence can pose profound risks to society and humanity”. This group also called upon AI labs and independent experts to “jointly develop and implement a set of shared safety protocols for advanced AI design and development”. They further asked for these protocols to ensure the safety of such systems “beyond a reasonable doubt”.
It is to be noted that the stakeholders clarified: “This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities”. But some experts, like Meta AI chief Yann LeCun, disagree with the contents of the letter and have declined to sign the petition to pause all training beyond GPT-4.
Many have since come to accept the reverberations of generative AI, while some are trying to innovate and push its existing limits.
It is typical at the workplace today to use AI technologies, ChatGPT and the like, to automate repetitive tasks, improve productivity and enhance decision-making. By using AI technologies to augment their cognitive and physical capabilities, individuals can work more efficiently and effectively.
Founders are attempting to incorporate large language models into a variety of other industries by developing tools to aid professionals in writing, coding, designing, producing media and, now, in law, finance and more. ‘Copilot for lawyers’, ‘Copilot for doctors’, ‘Copilot for designers’ and several others are mushrooming all around us. The newest entrant within the financial space is BloombergGPT. Users are also marvelling at the ChatGPT API and various plugins that have use cases extending beyond imagination.
Andrej Karpathy foresaw the development of ‘Software 2.0’ using neural networks in 2017, as pointed out in a report by Sequoia Capital. The report suggests we might witness the same kind of innovation in tooling, a ‘Developer Tools 2.0’, in the future. Then, the real question is: how long before an ‘AI teammate’ follows?
There are users who have truly adopted these technologies and are pushing their limits to build something beyond what already exists.
Big tech companies, such as Microsoft-backed OpenAI, Google and Meta, are contributing to the generative AI push. NVIDIA is also among the top contributors and beneficiaries in this space, with its chips and GPUs being used to train these models.
Firms such as China-based Baidu, with its latest chatbot, are pushing the boundaries. ErnieBot reportedly contains 260 billion parameters, roughly a 50% increase over the 175 billion widely attributed to the model behind ChatGPT.
The tech giants, along with the users who have accepted and adopted the technology, stand to benefit from its controlled, mindful and beneficial application; its potential is unimaginable, but so are its risks.
Regardless of where individuals or organisations fall on the AI adoption lifecycle, whether in denial, acceptance or full adoption, humans will remain a critical part of the loop. ChatGPT is a great example of this, where humans can improve the system and themselves simultaneously through RLHF, becoming AI humans who assist in developing a reliable AGI for the future.
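The human-in-the-loop idea behind RLHF can be sketched in miniature: the model proposes candidate responses, a human ranks pairs of them, and the preferences shift a toy ‘reward’ score that determines which behaviour wins out. This is a deliberately simplified, hypothetical illustration of the pattern, not OpenAI’s actual training pipeline; the candidate strings and the length-based ‘human’ ranker are invented for the example.

```python
from itertools import combinations

def human_feedback_loop(candidates, human_rank):
    """Toy RLHF-style loop: for each pair of candidate responses, a
    human labeller picks the better one, and the preference data
    shifts a reward score toward the human's choice."""
    reward = {c: 0.0 for c in candidates}
    for a, b in combinations(candidates, 2):
        preferred = human_rank(a, b)          # the human's comparison
        rejected = b if preferred == a else a
        reward[preferred] += 1.0              # reinforce the winner
        reward[rejected] -= 1.0               # penalise the loser
    # The highest-reward response becomes the model's default behaviour.
    return max(reward, key=reward.get)

# A stand-in 'human' who always prefers the shorter, clearer answer.
best = human_feedback_loop(
    ["a concise answer", "a long rambling answer", "an off-topic answer"],
    human_rank=lambda a, b: a if len(a) < len(b) else b,
)
print(best)  # "a concise answer"
```

In the real setting, the reward scores would train a separate reward model that then fine-tunes the language model via reinforcement learning; the feedback loop between labeller and system is the part this sketch preserves.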
“The thing is not that it’s a system that kind of goes off and does its own thing. But, it’s this tool that humans are using in this feedback loop. . . [It is] helpful for us for a bunch of reasons [because] we get to learn more about trajectories through multiple iterations,” Altman said, during his podcast with Fridman.