UC Berkeley Releases Dialogue Model Koala for Research Purposes

UC Berkeley has released Koala, a dialogue model intended for research purposes.

Based on the results of a user study, the developers claim that Koala is adept at responding to a wide array of user queries. The blog post also states that its outputs are on par with ChatGPT in half of the cases, while they mostly exceed those of the Stanford-built Alpaca. The researchers have released a web demo for public use.

Koala was trained by fine-tuning Meta’s LLaMA on dialogue data scraped from the web, with a particular focus on responses to user queries generated by other large language models such as ChatGPT. The makers chose to curate a smaller, high-quality dataset rather than maximise its size.
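
The blog post describes this as conventional supervised fine-tuning of the base model on the curated dialogues. A minimal sketch of what such a fine-tuning step could look like with the Hugging Face transformers library is shown below; the base-model checkpoint, data file and hyperparameters are illustrative assumptions, not the Koala team's actual setup.

```python
# Minimal sketch of supervised fine-tuning a LLaMA-style model on dialogue data.
# The model name, data path and hyperparameters are illustrative assumptions
# only; they are not the Koala team's actual configuration.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

BASE_MODEL = "huggyllama/llama-7b"  # assumed LLaMA checkpoint (access/licence required)

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# Each record is assumed to hold one dialogue already rendered as plain text,
# e.g. "USER: ...\nASSISTANT: ...".
dialogues = load_dataset("json", data_files="koala_dialogues.json")["train"]

def tokenize(example):
    return tokenizer(example["text"], truncation=True, max_length=2048)

tokenized = dialogues.map(tokenize, remove_columns=dialogues.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="koala-sft",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=16,
        num_train_epochs=2,
        learning_rate=2e-5,
        bf16=True,  # assumes a GPU with bfloat16 support
        logging_steps=10,
    ),
    train_dataset=tokenized,
    # Causal LM collator: labels are the input ids, shifted inside the model.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```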

To train the model, 60,000 dialogues publicly shared by users on ShareGPT were collected using public APIs. Redundant and non-English dialogues were then removed, shrinking the dataset to approximately 30,000 dialogues.
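
The article does not detail the exact filtering pipeline, but a rough sketch of how duplicate removal and language filtering of ShareGPT-style dialogues could be done is shown below; the field names and the use of the langdetect library are assumptions made for illustration.

```python
# Illustrative sketch of deduplicating ShareGPT-style dialogues and dropping
# non-English ones; the record layout and the langdetect dependency are
# assumptions, not the Koala team's published pipeline.
import json

from langdetect import detect
from langdetect.lang_detect_exception import LangDetectException

with open("sharegpt_dialogues.json") as f:
    # assumed layout: a list of {"conversations": [{"value": "..."}, ...]}
    dialogues = json.load(f)

seen = set()
filtered = []
for dlg in dialogues:
    text = " ".join(turn["value"] for turn in dlg["conversations"])
    key = text.strip().lower()
    if key in seen:  # drop exact duplicates
        continue
    seen.add(key)
    try:
        if detect(text[:1000]) != "en":  # drop dialogues not detected as English
            continue
    except LangDetectException:
        continue
    filtered.append(dlg)

print(f"kept {len(filtered)} of {len(dialogues)} dialogues")
```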

ChatGPT and human responses from the HC3 English dataset were also used, amounting to around 87,000 question-answer examples.


Open-source data used to train Alpaca, along with components from the OIG dataset, Anthropic’s HH dataset, OpenAI’s WebGPT dataset, and OpenAI’s summarisation dataset, were also used to train the model.
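
For reference, most of these corpora are publicly mirrored on the Hugging Face hub. The sketch below shows how they might be pulled with the datasets library; the repository identifiers are commonly used community mirrors and are assumptions here, since the post does not specify the exact sources or preprocessing the team applied.

```python
# Illustrative sketch of pulling some of the public datasets mentioned above
# from the Hugging Face hub. The repository IDs are assumed community mirrors,
# not necessarily the exact sources or versions the Koala team used.
from datasets import load_dataset

alpaca = load_dataset("tatsu-lab/alpaca")                    # Alpaca instruction data
hh_rlhf = load_dataset("Anthropic/hh-rlhf")                  # Anthropic HH dialogues
webgpt = load_dataset("openai/webgpt_comparisons")           # OpenAI WebGPT comparisons
summarise = load_dataset(
    "openai/summarize_from_feedback", "comparisons"          # OpenAI summarisation feedback
)

# The OIG and HC3 corpora referenced in the article are also mirrored on the hub
# (e.g. "laion/OIG" and "Hello-SimpleAI/HC3"), but are larger and may need
# subset selection before use.

for name, ds in [("alpaca", alpaca), ("hh-rlhf", hh_rlhf),
                 ("webgpt", webgpt), ("summarize", summarise)]:
    print(name, {split: len(ds[split]) for split in ds})
```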

One of the associate professors involved with Koala also tweeted, “This has some interesting implications for how powerful LLMs can be trained on a budget (in terms of weights and compute)”.

“I think this is really interesting, because this further supports possibility that in the future very capable LLMs could be “privately owned” (vs hosted and only accessed via APIs),” he added.

Shyam Nandan Upadhyay
Shyam is a tech journalist with expertise in policy and politics, and exhibits a fervent interest in scrutinising the convergence of AI and analytics in society. In his leisure time, he indulges in anime binges and mountain hikes.
