A critical look at Artificial Intelligence’s (AI) impact on science from different perspectives and actors – from public funders to private high-tech institutions – reveals a shared concern: a lack of transparency and cooperation stands in the way of a more human-centred approach that would deliver on the promise of science as a global public good.
“The question is no longer if AI is changing science, but how,” said Mathieu Denis, opening this year’s Digital with Purpose Global Summit panel session on the impacts of AI on how science is done and organized.
Less than a year since the initial public release of ChatGPT 4, interest in AI, as well as its application across the whole science production cycle, has exploded. Yamine Ait-Ameur, Head of the Digital and Mathematical Department of the French National Research Agency (ANR), sees this interest across almost all disciplines. And while the Agency does not use AI tools to evaluate research proposals, it is well aware that it cannot impose similar restrictions on how researchers use AI in their own scientific work.
While the use of AI in science raises many questions and sometimes doubts, there is also a lot of excitement about its promise. The potential is there, if we put appropriate structures in place. Ricardo Batista Leite, CEO of I-DAIR, an AI-for-health research collaborative, recounts lessons from the past, when disruptive technologies applied to broken systems created more brokenness. AI technologies can contribute to public well-being – if we specifically design them to do so from the beginning.
The current wave of AI development, however, is driven almost exclusively by the private sector, with resources that far surpass any public investment. It is therefore impossible to talk about co-designing a more responsible, inclusive AI without bridging the public–private divide in research and development.
Christina Yan Zhang, CEO of the Metaverse Institute, is a firm believer in public-private cooperation in science and technology. She agrees that human well-being must be put at the core of technology development. In the current scientific system, researchers are forced to pursue metrics such as journal citations instead of real impact.
It is not just that, Yamine Ait-Ameur adds; there is another key challenge to the use of AI in science. AI tools can often produce results that are better than those of humans. AlphaFold, an AI deep-learning system trained to predict protein structures, for example, already outperforms human-powered methods. But we cannot reliably replicate, verify and explain its results. As long as we cannot understand the processes happening inside the AI “black boxes”, the use of AI in science will raise huge technical and ethical problems.
The audience at the panel session shared the sentiment that the big shifts we see in science now are only the beginning: “It’s the mid-19th century at the onset of the Industrial Revolution. Are we trying to adapt the feudal system, or are we analyzing the emerging period?”
Ricardo Batista Leite agrees. “We’ll look back at this moment in time and ask ourselves whether we’ve done the right thing. We had an opportunity to turn the tide,” he concluded.