Today, we are at an interesting phase in the AI cycle. All enterprises, irrespective of the industry they operate in, want a piece of the generative AI cake. A few days ago, Boris Power, a member of the technical staff at OpenAI, singled out India as the country where GPT-4 would have the most impact. The Indian edtech sector has already jumped onto the GPT-4 bandwagon with great enthusiasm.
Recently, edtech firm Khan Academy announced Khanmigo, a GPT-4-powered AI tutor and classroom assistant. The non-profit organisation believes GPT-4 can be leveraged to provide learning experiences tailored to each student's pace and level of understanding.
Khan Academy, which operates in India too, will soon make its AI tutor available to all. upGrad has already launched a GPT-powered interview assessment tool, and other edtech startups in India seem to be treading a similar path. However, the edtech sector in India itself remains unregulated. With the advent of GPT-4, the need for regulation has only grown more pressing.
The edtech sector in India flourished during the pandemic. Although it has facilitated wider access to education regardless of geographical and linguistic limitations, the edtech industry has also sparked concerns regarding unethical sales practices, insufficient grievance redressal mechanisms and excessive commercialisation of the education sector.
(Source: Edtech market size in India, 2025 projection)
For quite some time, various stakeholders have been demanding the implementation of a technology education policy in India. Now, with the increasingly fast-paced development of AI, that need has only grown more acute.
“The increasing use of AI by edtech firms in India does raise the need for regulation in the sector. AI-based technologies in education have the potential to transform the way students learn, but they also raise concerns about privacy, security, bias, and ethical considerations,” Advocate Satya Muley, founder at Satya Muley & Co., told AIM.
In this regard, the Internet and Mobile Association of India (IAMAI) has formed the India Edtech Consortium (IEC), an independent, self-governing body to promote self-regulation among edtech companies. The IEC will create a Code of Conduct for edtech firms to follow and establish a grievance redressal system to address consumer complaints.
“However, the same did not speak much about the use of AI systems by these edtech startups and ethical considerations to be followed by them. With the unprecedented growth in the AI sector and introduction of powerful tools such as GPT-4, there is a need to safeguard stakeholders from the plausible negative effects of such AI,” Aditi Seetha, programme manager at the Dialogue, a public policy think tank, told AIM.
Even though the use of AI in edtech has potential, it could also introduce biases, and it is critical to ensure the models are free of them. ChatGPT, the wildly popular chatbot by OpenAI, was itself found to exhibit bias, demonstrating pro-environmental and left-libertarian leanings.
Since next to nothing is known about GPT-4, including the datasets it has been trained on, using it for education could be a risky proposition. “AI algorithms may easily be biased against certain groups of students, such as those from disadvantaged backgrounds or with learning disabilities, which could lead to discrimination,” Muley said.
AIM reached out to Khan Academy to know how they are planning to address these challenges but did not receive a response.
In addition, there is a need for regulation to ensure that AI-based technologies in education are transparent, explainable and accountable. “This means that students, teachers, and other stakeholders should be able to understand how these systems work, how decisions are made, and who is responsible for any errors or biases,” Muley added.
Meanwhile, Girish Singhania, CEO of EduBridge, believes using GPT-4 could limit the cognitive abilities of the learner. “Efforts have to be made to regulate the use of the tool and ensure that it is only employed to impart a seamless and immersive learning experience without compromising on the cognitive growth of the learner,” he told AIM.
Even though we have already seen many use cases of GPT-4, it would be advisable for the edtech industry to move cautiously. LLMs are great at predicting the next word in a sequence, but they do not truly understand context, a criticism many AI experts have levelled against them repeatedly. “LLMs have no idea of the underlying reality that language describes,” Yann LeCun, chief AI scientist at Meta, said.
The models are also prone to hallucination. Even though OpenAI claims GPT-4 hallucinates far less than ChatGPT, which is built on the GPT-3.5 architecture, it still does hallucinate.
“The accuracy and dependability of the auto-generated responses from these AI are not foolproof, as they might showcase certain biases or even inaccurate information depending on the dataset they are fed. This could have a negative effect on students’ general well-being and even academic credentials,” Seetha said.
Need for AI ethics
The United Nations Educational, Scientific and Cultural Organization (UNESCO), in its report, suggested that there should be a regulatory framework governing the use of AI in education and that governments must treat AI ethics as an utmost priority.
“Since the edtech sector is growing at a fast pace, there is an even greater need to build trust around the use of AI within the public,” Seetha said.
The EU is one of the few regions that has come up with ethical guidelines around the use of AI and data in teaching and learning for educators. Similarly, in the UK, the Institute for Ethical AI in Education has published a report advising how schools and education institutions can use AI systems safely and in an ethical manner.
“In India, the Indian Council of Medical Research (ICMR) has recently released ethical guidelines for the application of artificial intelligence in biomedical research and healthcare, bringing in some ethical processes related to AI in healthcare. A similar approach can be adopted in the edtech sector to overcome the challenges posed by the AI usage in education,” Seetha said.
The negatives notwithstanding, Singhania also believes AI's capabilities are powerful enough to make regulatory guidelines essential. “This especially holds true in education, where learners can produce content in a matter of seconds without investing in the virtue of hard work to research, learn, and apply logic to come up with organic opinions,” he said.
Given the rapid advancement and hasty adoption of AI, not just in edtech but across industries, it is imperative that India come up with guidelines and ethical principles to mitigate the risks that will inevitably accompany these advantages.