What do freedom and responsibility mean today, and why do they matter to the scientific community? With expert guests, the ISC will explore critical topics such as building trust in science, using emerging technologies responsibly, combatting mis- and dis-information, and the intersections between science and politics.
In the second episode, Lidia Borrell-Damián (Secretary General of Science Europe) and Willem Halffman (Associate Professor at Radboud University) explore the concept of scientific autonomy.
Scientists need to enjoy the freedom to conduct research without interference from political or regulatory bodies. However, funding priorities and rigid assessment systems can limit this freedom. At the same time, while crucial for impartial assessment and advancing knowledge, unchecked autonomy can lead to ethical imbalances and dangers.
Follow ISC Presents on your podcast platform of choice or by visiting ISC Presents.
“The current world needs science to develop well-informed decisions. And that can only come from scientific autonomy.”
“Scientific autonomy does not mean that individual scientists can or should be able to do whatever they want.”
Marnie Chesterton (host)
Hello and welcome to this podcast series from the International Science Council, on freedom and responsibility in science.
I’m Marnie Chesterton, and in this episode, we’re looking at scientific autonomy. How can things like political interference or output metrics encroach on the freedoms of scientists? When might those freedoms compromise the responsibilities of scientists? And who gets to decide the limits of autonomy?
First of all, what is scientific autonomy? Lidia Borrell-Damián is the Secretary General of Science Europe, representing major public organisations that fund research in Europe.
Scientists have the right to conduct research in the field of their choice, and there should be clear and consistent regulatory frameworks that refrain from interfering in decisions about the subjects of research. I would also add that no discipline can be excluded for political reasons.
In today’s global research landscape, both these aspects of autonomy – when it comes to scientists themselves and the institutions where they work – can be infringed in many ways. This can, of course, happen directly, when governments pass laws that limit the freedoms of scientists and institutions. But it can also happen in more subtle or indirect ways.
Governments set their priorities; they say, well, here is what we have money for. And that also affects a researcher’s choice of topic, because maybe a researcher would have an idea, but there is no money to develop that idea. So that person goes in a different direction, because there is money to develop something else. So there is a lot of nuance in what I’m saying here.
And it’s not just funding priorities that can distort research outputs – indeed the very systems that we use to evaluate research are themselves limiting the autonomy of scientists.
Many researchers find themselves constrained by rigid research assessment systems that rely on countable indicators tied to the impact of a journal or of a certain platform. We think that the importance of a scientific paper lies not in where the paper is published, but in its contribution to the advancement of research. Therefore, we would like to reposition the use of quantitative indicators and make them much less important when assessing individual researchers.
And, second, develop ways to assess other types of output beyond articles. Let’s talk about software, let’s talk about prototypes, etc., which today may not receive the attention or recognition they deserve.
So there is now a whole movement in the academic sector about how the scientific community thinks we need to be assessed. It’s really a worldwide discussion on this issue.
Making research assessments broader and less focussed on metrics should lead to more autonomy for researchers. But of course, not all science happens within academia – and that brings its own challenges.
There is very little knowledge of what happens in the private sector regarding research. That is a big black box. I think companies should make an effort to make their research processes and policies transparent. Very little has been developed in terms of accountability to society. So my proposal would be to strengthen the dialogue on public–private research investment, to agree on a set of common policies that would reflect the values that underpin research.
This last point, about accountability, applies to science everywhere – not just the private sector. Because any discussion of scientific autonomy has to recognise that it’s a double-edged sword…
So it’s not so much a balance between autonomy and scientific responsibility. The two make each other possible; they’re actually connected to each other.
This is Willem Halffman, a sociologist of science working at Radboud University in the Netherlands. Willem points out that, on the one hand, there are lots of reasons to protect and value scientific autonomy…
So this relative independence of scientists is really important. First of all, we need impartial assessment of the safety of our products for the safety of our medicines. We also need independent scientists because we need people to warn us of dangers that might be ahead, even if we don’t like to hear it. Sometimes we also need scientists to tell us that we are wrong, that we’re doing things that are not working. And yes, if you let scientists tinker, sometimes they come up with radical new ideas and breakthroughs that in the long run can lead to products. And lastly, you could also say, well, we need this knowledge community because knowledge is a cultural good and a value in and of itself. Just like we don’t interfere too much with art, or with journalism.
But, on the other hand, autonomy that goes completely unchecked or unchallenged can be dangerous – as history has taught us…
As societies we’ve learnt, sometimes the hard way, that if you award scientists this relative independence, they don’t automatically do the right thing. Things have gone wrong in the past. Sometimes, when you let scientists decide for themselves, they will make ethical judgements that we don’t agree with; for example, they might think that it’s okay to experiment on their patients. Sometimes, if you leave them to their own devices, they might invent new mechanisms of mass destruction, they might come up with dangerous new technologies. So we want scientists to be accountable for these kinds of things; we want them to explain to society what is at stake, and how we can find ways to deal with that.
So how do we ensure scientists live up to their responsibilities while giving them the relative autonomy that we’ve heard is so important? Well, according to Willem, it’s not just about regulation…
Part of how we keep scientists responsible is, on the one hand, by holding them accountable. That is, we put them under research evaluation control systems, and we make them apply to ethics committees if they’re going to do research with humans. There are all kinds of regulatory systems applied to scientists to, in a sense, force them to be responsible.
But I think it’s also important that we make clear to future scientists that we are actually giving them quite a lot of power when we hand them the keys to the laboratory. There’s a lot of powerful things you can do with science.
Therefore, you also need from scientists the right kind of mindset. And that right kind of mindset is a matter of socialisation; it is a matter of teaching scientists how to behave, how to talk, and stressing how important it is for them to maintain this responsibility as part of the social contract for science.
Importantly, the limits of scientific autonomy are not fixed. Instead, they must be continually renegotiated in light of the issues we face in science and society today.
Most of our ideas about scientific autonomy were very much shaped by things that happened in the 20th century, by the experience of the Second World War. But in our time, there are all these new threats to scientific autonomy. By now we’ve discovered that science can have really deep biases, can be racist, can be sexist. Science can be manipulated by organised industrial interests on an enormous scale – for example, by disproportionately highlighting the uncertainties of climate change or smoking.
And so the answers that we come up with now might help us now, but might in another couple of decades lead to other unintended consequences, and may need to be readdressed and reassessed.
That’s it for this episode on freedom and responsibility in science from the International Science Council.
The ISC has released a discussion paper on these issues… You can find the paper and learn more about the ISC’s mission online, at council.science/podcast
Next time, we’ll be looking at science communication. How can we promote the spread of scientific knowledge while guarding against misinformation, and protecting scientists and researchers from online harassment? What skills do scientists need to reach new audiences? And what are their responsibilities when sharing information?
The information, opinions and recommendations presented by our guests are those of the individual contributors, and do not necessarily reflect the values and beliefs of the International Science Council.
Image by Drew Farwell on Unsplash.