With new-age engineering tools at the helm of scientific research, artificial intelligence (AI) is taking science under its wing. A report from Australia's science agency, CSIRO, analysing the impact of AI on scientific discovery found that AI is being applied in 98% of scientific fields, and that by September 2022, approximately 5.7% of all peer-reviewed research worldwide was on AI.
"AI models are starting to rapidly accelerate scientific progress and in 2022 were used to aid hydrogen fusion, improve the efficiency of matrix manipulation, and generate new antibodies," reads Stanford's AI Index Report 2023.
Science is getting automated
The latest AI techniques have entered almost all areas of science, prompting us to ask the question: Is there anything that scientists do that can’t be automated? One such technique, known as generative modelling, can help identify the most plausible theory among competing explanations for observational data, based solely on the data. And, importantly, this would be without any preprogrammed knowledge of what physical processes might be at work in the system under study.
While a few years ago the best generative modelling systems were generative adversarial networks (GANs), transformer architectures have since taken over. For instance, DeepMind's AlphaFold and AlphaTensor – both state-of-the-art AI models for scientific research – use the transformer architecture.
The transformer architecture has gained popularity in generative modelling due to its superior performance in tasks such as natural language processing and image recognition. This is mainly due to its ability to capture long-range dependencies, which allows it to effectively process and generate complex sequences of data.
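To make "long-range dependencies" concrete, here is a minimal, illustrative sketch of the self-attention step at the heart of the transformer – plain NumPy, with no learned weights, so a simplification rather than any production model:

```python
import numpy as np

def self_attention(x):
    """Toy single-head self-attention (illustrative only).

    Every position attends to every other position in one step, which is
    how transformers capture dependencies between distant tokens.
    x: (seq_len, d_model) array of token embeddings.
    """
    d = x.shape[-1]
    # Simplification: queries, keys, and values all equal the input
    # (a real transformer would apply learned projection matrices).
    scores = x @ x.T / np.sqrt(d)                    # pairwise affinities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over positions
    return weights @ x                               # each output mixes all positions

rng = np.random.default_rng(0)
tokens = rng.normal(size=(6, 4))   # 6 tokens, 4-dimensional embeddings
out = self_attention(tokens)
print(out.shape)  # (6, 4)
```

The key point is the `weights @ x` line: every output row is a weighted mixture of all input positions, so the first token can draw on the last in a single step, with no recurrence in between.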
Unlike GANs, which can be prone to instability during training, transformer models are more stable and consistent in their output. They are also highly adaptable and can be fine-tuned for specific tasks with ease.
As a result of advancements in AI techniques, we've cracked the protein-folding problem, and the solution is now being used to create malaria vaccines, address antibiotic resistance, reduce plastic waste, and push drug discovery to new heights. Moreover, we've broken a 50-year-old record in matrix multiplication, promising faster AI applications on existing hardware. And that's not all: we've also developed a brain-computer interface that can translate attempted speech into text, giving people with paralysis the potential to communicate effectively.
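The matrix-multiplication record in question is the kind of algorithmic shortcut pioneered by Strassen in 1969, whose family of schemes AlphaTensor's search extended. As an illustrative sketch of the original trick – 7 scalar multiplications instead of the naive 8 for a 2×2 product:

```python
import numpy as np

def strassen_2x2(A, B):
    """Strassen's 1969 scheme for 2x2 matrices: 7 multiplications
    instead of 8. Applied recursively to larger blocks, this lowers
    the asymptotic cost of matrix multiplication below cubic."""
    a, b, c, d = A[0, 0], A[0, 1], A[1, 0], A[1, 1]
    e, f, g, h = B[0, 0], B[0, 1], B[1, 0], B[1, 1]
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    # Recombine the 7 products into the 4 entries of A @ B.
    return np.array([[m1 + m4 - m5 + m7, m3 + m5],
                     [m2 + m4,           m1 - m2 + m3 + m6]])

A = np.array([[1., 2.], [3., 4.]])
B = np.array([[5., 6.], [7., 8.]])
print(np.allclose(strassen_2x2(A, B), A @ B))  # True
```

AlphaTensor searched for recombinations of exactly this form and found schemes with fewer multiplications than previously known for certain matrix sizes.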
Researchers believe we have so far only hit the tip of the iceberg, considering the vast possibilities that AI models hold.
Meanwhile, models like Auto-GPT – open-source, self-learning systems that can write their own code, eliminate bugs, and minimise downtime – are here now. We are at the threshold of creating many autonomous scientific agents that can perceive, think, and act towards goals defined in English prompts.
AI's influence is not limited to the lab; it is also making strides in communication. With tools like ACCoRD, science communicators can choose from a diverse set of descriptions of a concept, rather than settling for the single best description assigned to every scientific term.
All this means we should currently be in an age of disruptive science, right? Data suggests otherwise. A study published in Nature earlier this year found that "papers, patents, and even grant applications have become less novel relative to prior work and less likely to connect disparate areas of knowledge, both of which are precursors of innovation."
Declining research productivity is especially evident in areas like semiconductors and pharmaceuticals – and this at a time when overall sentiment around AI is positive. It remains to be seen whether the technology has really made a difference to the growth of scientific research.
AI – the best scientist?
But before we jump onto the 'AI for science' bandwagon, there are also huge challenges to be ready for. One, known as the 'reproducibility crisis', came to light when Kapoor and Narayanan analysed 20 reviews across 17 research fields and counted 329 research papers whose results could not be fully replicated because of problems in how machine learning was applied. That is, it was impossible to verify these results even by duplicating the experiment under the same conditions with the same dataset.
This is happening while scientific research is already facing the heat for failing to be reproducible, so AI may only be worsening the problem. Due to something called 'data leakage', a model can perform well during the training and testing phases yet fail when applied to real-world data it has never seen.
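A minimal sketch of one common form of leakage – computing preprocessing statistics on the full dataset before splitting, so the test split silently influences the training features (hypothetical data, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(loc=5.0, scale=2.0, size=(100, 3))
X_train, X_test = X[:80], X[80:]

# LEAKY: normalisation statistics computed on the FULL dataset,
# so information from the test split seeps into the training features
# and evaluation scores become optimistically biased.
mu_all, sd_all = X.mean(axis=0), X.std(axis=0)
train_leaky = (X_train - mu_all) / sd_all

# CORRECT: fit the statistics on the training split only, then apply
# them unchanged to the test split (and to any future real-world data).
mu_tr, sd_tr = X_train.mean(axis=0), X_train.std(axis=0)
train_clean = (X_train - mu_tr) / sd_tr
test_clean = (X_test - mu_tr) / sd_tr

# The two pipelines yield measurably different training features.
print(np.abs(train_leaky - train_clean).max())
```

The same trap appears whenever any step – feature selection, imputation, scaling – is fitted before the train/test split; papers that do this report results that cannot be reproduced on genuinely new data.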
The reproducibility issue is also aggravated by the absence of sufficient information about the code used to generate results, or by bias in the data fed into these AI systems – for example, when only a specific age group is sampled, or a particular race or gender is overrepresented. So, as much as AI provides the scope for propelling research, it also brings its own set of problems that cut against the 'objective' and 'verifiable' ethos of science.