Calculators are back in the news. Why, though? An image of maths professors protesting against calculators is circulating on the internet, used to argue that the petition to slow down the training of AI systems more powerful than GPT-4 should be dismissed. While many experts have already voiced their opinions on the issue, the question remains: is the calculator analogy fair here? Let’s find out.
Two tales of resistance
In 1986, a group of mathematics teachers took to the streets to protest a policy proposed by the National Council of Teachers of Mathematics. The contested policy recommended integrating calculators into the school mathematics curriculum at all grade levels for classwork, homework, and evaluation. “At each grade level, every student should be taught how and when to use the calculator,” argued the policy.

To the teachers fighting against it, such an implementation would hinder students’ ability to learn basic mathematical concepts.
A similar scene is playing out today. A group of AI researchers and critics, including Gary Marcus, Elon Musk, and Demis Hassabis, among others, have signed an open letter seeking a six-month pause on training AI systems more powerful than GPT-4. The letter, which has garnered more than 1,400 signatures to date, calls for redirecting AI research and deployment towards making state-of-the-art AI systems more accurate, safe, transparent, and trustworthy.
“This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium,” reads the letter.
Calculators vs GPT-4: Not a Fair Comparison
As much as the two scenarios look alike, there is a fundamental difference. A calculator uses logic gates to perform arithmetic and logic operations with far greater accuracy and speed than the human mind. The resistance against it was aimed not at the growth of electronics research as such, but at its implementation, and in one specific segment at that: schools.
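To make the contrast concrete, the kind of deterministic arithmetic a calculator performs can be sketched with ordinary logic gates. Here is a minimal illustration (our own, not from the protests or the letter): a one-bit half adder, the simplest building block of calculator addition, built from an XOR and an AND gate.

```python
def half_adder(a: int, b: int) -> tuple[int, int]:
    """Add two one-bit inputs using only logic gates.

    XOR produces the sum bit; AND produces the carry bit.
    """
    total = a ^ b   # XOR gate: 1 when exactly one input is 1
    carry = a & b   # AND gate: 1 only when both inputs are 1
    return total, carry

# Fully deterministic: the same inputs always give the same outputs.
# 1 + 1 = binary 10, i.e. sum bit 0 with a carry of 1.
print(half_adder(1, 1))  # (0, 1)
```

Every output is fixed by the inputs and the wiring, which is exactly why a calculator’s behaviour is easy to reason about, and why the comparison with a statistical text generator breaks down.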
A neural network like GPT-4, on the other hand, is a giant web of interconnected nodes that work together to process and analyse enormous amounts of data in order to “generate” information. Mind you, this is not “automation” as we know it, where straightforward logic is implemented to produce consistent results.
These networks “produce” content by finding patterns in their data, a thoroughly “garbage in, garbage out” system. Unverified and unregulated, such systems have been spreading misinformation, peddling hate speech, exhibiting bias and discrimination, and posing security threats to the data of organisations and individuals alike.

Source: The Intercept
Not just calculators: historically, all disruptive technologies have been met with resistance. An article published in 1981 in the Canadian daily Ottawa Citizen declared that if teachers failed to resist the increasing penetration of computers in the classroom, literacy could be in danger of disappearing within a decade.
Meanwhile, it is also important to note that AI has been around for a while. The search engines we use, the recommendation systems on various platforms, analytics tools, and many other areas are all built on AI. In that sense, AI long predates today’s generative models. What is new is that, rather than simply analysing or recommending content on the web, these systems draw from that ocean of data to “create” content of their own.
Beyond a Simple Calculator
In this regard, it is understandable if a parallel is drawn between the maths professors’ concerns and the general unease around bringing ChatGPT into school education today. But extending that parallel to the concerns researchers have raised about today’s AI models seems completely bogus.
People arguing for AI rights based on complex text processing algorithms need to ask whether they would assign the same rights to calculators, smart watches, and the internet.
— Gary Marcus (@GaryMarcus) March 25, 2023
“I don’t quite get it how works” + “it surprises me” ≠ it could maybe be sentient if I squint. https://t.co/GvmBWRhhqm
Marcus essentially argues the same thing. The fight is not being waged because “we know how it works and we need to assess the pros and cons,” but precisely because “we don’t know how it works, and the ones involved in it are not ready to share it”. This is a fight born not of knowledge, but of ignorance.
The actors involved in the fight are also different. In a rare occasion across all these years of technological development, the power and privilege to speak rests with those involved in developing and deploying the technology itself, whether to resist or to comply. They form a higher order speaking for humanity, relegating corporations, stakeholders, and government bodies to a secondary place.
So, unlike with calculators, the petition circulating today operates not at the policy level, but at the research level. Ensuring transparency around training data, evaluating models for optimal outcomes, and subjecting them to independent review are crucial considerations to address before such systems are released to the public.