UC Berkeley has released Koala, a dialogue model intended for research purposes.
Based on a user study, the developers claim that Koala responds effectively to a wide array of user queries. The accompanying blog post states that Koala's outputs are on par with ChatGPT's in roughly half of the cases, and generally outperform those of Stanford-built Alpaca. The researchers have also released a web demo for public use.
Koala was trained by fine-tuning Meta's LLaMA on dialogue data scraped from the web, with a particular focus on responses to user queries generated by other large language models such as ChatGPT. Rather than maximising the size of the dataset, the makers chose to curate a high-quality one.
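To give a sense of what such supervised fine-tuning can look like, here is a minimal sketch using the Hugging Face transformers library. The checkpoint name, data path, and hyperparameters are illustrative assumptions, not the Koala team's actual training recipe.

```python
# Minimal sketch of supervised fine-tuning a LLaMA-style model on dialogue data.
# Checkpoint name, data path, and hyperparameters are illustrative assumptions,
# not Koala's actual configuration.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

MODEL_NAME = "huggyllama/llama-7b"  # hypothetical base checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
tokenizer.pad_token = tokenizer.eos_token  # LLaMA has no pad token by default
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

# Each record is assumed to hold one dialogue flattened into a single "text" field.
dataset = load_dataset("json", data_files="dialogues.jsonl", split="train")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=2048)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="koala-sft",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=16,
        num_train_epochs=2,
        learning_rate=2e-5,
        bf16=True,
    ),
    train_dataset=tokenized,
    # Causal LM objective: labels are the input ids, shifted inside the model.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```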
To train the model, around 60,000 dialogues publicly shared by users on ShareGPT were collected via public APIs. Redundant and non-English dialogues were then removed, shrinking the dataset to approximately 30,000 dialogues.
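A minimal sketch of the kind of cleaning described above might look like the following: drop duplicate dialogues, then drop non-English ones. The record field names and the langdetect-based language check are assumptions for illustration, not the Koala team's actual pipeline.

```python
# Sketch of dialogue cleaning: deduplicate, then keep English-only dialogues.
# Field names and the langdetect check are illustrative assumptions.
import hashlib
import json

from langdetect import detect, LangDetectException  # pip install langdetect

def clean_dialogues(raw_dialogues):
    seen_hashes = set()
    kept = []
    for dialogue in raw_dialogues:
        # Assumed schema: each dialogue is a dict with a list of "turns",
        # each turn holding a "text" field.
        text = " ".join(turn["text"] for turn in dialogue["turns"])
        # Deduplicate on a hash of the lower-cased dialogue text.
        digest = hashlib.sha256(text.lower().encode("utf-8")).hexdigest()
        if digest in seen_hashes:
            continue
        try:
            if detect(text) != "en":  # discard non-English dialogues
                continue
        except LangDetectException:  # empty or undecidable text
            continue
        seen_hashes.add(digest)
        kept.append(dialogue)
    return kept

with open("sharegpt_dialogues.json") as f:
    cleaned = clean_dialogues(json.load(f))
print(f"kept {len(cleaned)} dialogues")
```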
Human and ChatGPT responses from the HC3 English dataset, amounting to roughly 87,000 question-answer examples, were also used.
Open-source data was used as well: the dataset used to train Alpaca, components of the OIG dataset, Anthropic's HH dataset, OpenAI's WebGPT dataset, and OpenAI's summarisation dataset.
An associate professor involved with Koala tweeted, “This has some interesting implications for how powerful LLMs can be trained on a budget (in terms of weights and compute)”.
“I think this is really interesting, because this further supports possibility that in the future very capable LLMs could be ‘privately owned’ (vs hosted and only accessed via APIs),” he added.