We have all witnessed the trials and tribulations of human coders struggling to get a job done without a tussle. But now, picture a world where machines, thanks to the advent of foundation models (GPTx), are self-sufficient in mastering the art of coding, eliminating bugs, and minimising downtime.
Guess what? It’s already happening. Enter Auto-GPT, an open-source, self-learning agent built on GPT-4 that can generate its own outputs, manage them, and more. So, should you be worried?
Advocating for the model, Andrej Karpathy, the former director of AI at Tesla who recently returned to OpenAI, believes that the “next frontier of prompt engineering are AutoGPTs”. Karpathy said so while tweeting about the latest version of Auto-GPT, which can write its own code using GPT-4 and execute Python scripts. (It also has a voice!)
Developed by Significant Gravitas, a games development company, the autonomous GPT-4 experiment can tamp down its own bugs, develop new code, and self-improve. The open-sourced model has managed to woo LLM enthusiasts, and it is being called a direct, disruptive competitor to OpenAI’s flagship model, ChatGPT.
The model’s developer, Toran Bruce Richards, believes that Auto-GPT has the potential to save humanity from mass job loss caused by automation built on closed-source AI: if everyone has access to their own team of autonomous agents, everyone is empowered. Though the model currently depends on GPT-3 and GPT-4, the developers are looking into implementing GPT4All. Ultimately, one won’t need to read the source code of an LLM to benefit from it.
Karpathy Strikes
Karpathy shared a fascinating insight on the model. He noted that, unlike humans, GPTs are completely unaware of their own strengths and limitations, including their finite context window and limited mental maths abilities, which can result in occasional unpredictable outcomes. However, by stringing together GPT calls in loops, agents can be created that perceive, think, and act towards goals defined in English prompts.
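The loop Karpathy describes can be sketched in a few lines of Python. This is an illustrative toy, not Auto-GPT’s actual implementation: `call_llm` is a hypothetical stand-in for any chat-completion API, with canned replies so the sketch runs offline.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; swap in a real API client here."""
    # Toy behaviour so the sketch runs without network access.
    if "THOUGHT" not in prompt:
        return "THOUGHT: break the goal into steps\nACTION: search"
    return "DONE: goal achieved"

def run_agent(goal: str, max_steps: int = 5) -> list[str]:
    history: list[str] = []          # the agent's record of past steps
    for _ in range(max_steps):
        # Think: the prompt carries the goal plus everything done so far,
        # because the model itself keeps no memory across calls.
        prompt = f"Goal: {goal}\nHistory: {history}\nWhat next?"
        reply = call_llm(prompt)
        history.append(reply)        # act, then record the outcome
        if reply.startswith("DONE"): # stop once the goal is reached
            break
    return history

steps = run_agent("summarise a repository")
```

The key idea is that the English goal lives in the prompt and the loop, not in the model: each call perceives the accumulated history, decides an action, and the loop terminates when the model reports the goal is met.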
For feedback and learning, Karpathy suggested a “reflect” phase, where outcomes are evaluated, rollouts are saved to memory, and loaded back into prompts for few-shot learning. This “meta-learning” few-shot path allows for learning on whatever can be crammed into the context window. The gradient-based learning path, however, is less straightforward: off-the-shelf APIs for LoRA finetunes, supervised fine-tuning (SFT), and reinforcement learning from human feedback (RLHF)-style training are lacking, which prevents fine-tuning on large amounts of experience.
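The reflect-and-reuse cycle above can be sketched as follows. This is a hedged illustration of the idea, not Auto-GPT’s code: the `evaluate` heuristic and the in-memory rollout store are assumptions made for the example.

```python
memory: list[dict] = []   # persistent store of past rollouts

def evaluate(outcome: str) -> bool:
    """Toy success check; a real system might ask the LLM to self-grade."""
    return "error" not in outcome.lower()

def reflect(task: str, outcome: str) -> None:
    # Reflect phase: evaluate the outcome and save the rollout to memory.
    memory.append({"task": task, "outcome": outcome,
                   "success": evaluate(outcome)})

def build_prompt(task: str, k: int = 3) -> str:
    # "Meta-learning" via the context window: load the k most recent
    # successful rollouts back into the prompt as few-shot examples.
    examples = [m for m in memory if m["success"]][-k:]
    shots = "\n".join(f"Task: {m['task']} -> {m['outcome']}"
                      for m in examples)
    return f"{shots}\nTask: {task} ->"

reflect("sort a list", "used sorted(), worked")
reflect("parse HTML", "error: malformed tag")
prompt = build_prompt("sort a tuple")
```

Everything learned lives in `memory` and is re-injected as prompt text, which is exactly why this path is bounded by the context window, unlike gradient-based fine-tuning.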
Karpathy believes that much like employees coalescing into organisations to specialise and parallelise work towards shared goals, AutoGPTs might evolve to become AutoOrgs with an AutoCEO, AutoCFO, AutoICs, and more.
Embracing AutoGPTs
Within a week of its release, the Auto-GPT repository has already gained popularity with over 8,000 stars. Alongside the release, a flurry of discussion was sparked among developer communities. While some have lauded its capabilities, others have pointed out that it still requires human intervention for debugging. One user even drew parallels between the model’s coding process and the traditional practice of rubber duck debugging.
Reddit users have offered varied perspectives on the matter. Some have expressed hope that the base models will not be made available to the general public, citing concerns of potential misuse. Conversely, others have argued that not releasing it would make the AI even more dangerous. A potential downside of keeping all development behind closed doors is that the AI could be commandeered by select individuals to monitor and regulate every action of the populace.
A possible solution suggested by a commentator is to make the model available to the public, accompanied by the necessary tools and resources to ensure responsible experimentation. This would allow for proactive measures to be taken by ethical researchers to counter any rogue AI scenarios that may arise. In essence, the commentator expressed the sentiment that “the only way to thwart a malicious AI is through a benevolent AI”.
If you are unable to set up Auto-GPT yourself but want to try it, this is the thread for you. Post your prompts below and Toran Richards will try out some of the best ones and record the output for you!