The debate about post-normal science starts with an observation: that we live in a world where facts aren’t certain, stakes for decisions are high, and those decisions are urgent. What does that mean for science, and what does that mean if science wants to inform policy-making?
Sarewitz: Whatever science you’re doing on a post-normal problem, it is always going to be incomplete, always subject to revision, and highly uncertain. It can be viewed from numerous scientific perspectives, so multiple scientific studies can come up with multiple results. That leads to a profusion of truths that can be mobilized on behalf of different sets of values. Values and facts can pair up with each other in different ways.
One example I love is how everyone talks about how there’s a consensus on GMOs. Well, there is consensus around a narrow part of the GMO issue, just as there is consensus around a narrow part of climate change. But the real problems have to do with the ‘what could be done?’ questions. For GMOs, when people say there is a consensus, what they mean is ‘we know they’re not a health risk’. Fine, I’ll accept it on health risk, I don’t have a problem with it. But then you say, ‘and we know that they’ll be an essential part of the economic future of Africa’. Well, maybe that’s true — whose model are you using? What kind of data have you used to generate that? What are your assumptions? Anything dealing with projections of the future and claims about how the world is going to look, in a multivariate, open system, is going to be subject to different people coming up with different claims and conclusions. And that’s exactly what happens.
And when you bring science into the political debate, you have to pick and choose which science you want to use. You have to match that with particular priorities about what policy problems you want to solve. I think science is really important, I think we want to be factual, I think we want to have a grip on reality and I think science can help us do that. But for problems where there are so many paths forward, so many competing values, the systems themselves are so complicated, I don’t think science is a privileged part of the solution.
But on the other hand, if you can have a political agreement around what ought to be done, then science can serve very well. Because then you can know how to bound the problem, and people aren’t going to argue so much about the results. And this is why it is much easier to deal with an emergency than it is with a long, drawn-out, chronic problem. Because in an emergency there is value convergence, everyone wants to solve the emergency, and it’s very well defined. Also, you get feedbacks. If the science is not good, you’re going to find out, right? None of these things pertain in these bigger, chronic, more drawn-out problems.
If you’re an individual or an organisation working at the boundary between science and policy, how can you deal with this mismatch between science being unable to provide definitive answers and policymakers requesting exactly that?
Sarewitz: By building processes where there’s much more regular communication between the producers of knowledge and the users of knowledge. One example I like to use is an environmental research group at the US Department of Defense, which successfully solved all sorts of environmental problems that the civilian side couldn’t. And the reason was that the DOD is not politicized; they are very mission-oriented. They didn’t try to commission basic research to understand all of the aspects of the problem, they simply needed a problem solved. With things like protecting endangered species, which we have a lot of trouble doing in the civilian sector, they have been remarkably effective. It’s a case where the science and the users of the science really occupy the same institutional setting; they work together towards the same end.
But that sounds a bit like I can only use science in my decision-making that I have commissioned myself.
Sarewitz: At my university we have something called the Decision Center for a Desert City. ASU is in the middle of the desert, it gets almost no rain, and there are four million people who need a lot of water. There are a lot of economic interests behind that, plus the survival of the people who live there. I think this centre has been successful because over the years they’ve built up relationships with water managers. That has allowed them to maintain their independence as academic researchers, but also understand the context of use that the water managers are faced with.
Another example: The National Oceanic and Atmospheric Administration (NOAA) runs a programme called Regional Integrated Sciences and Assessments, RISA, and the idea is that for regions with natural resource issues, for example water issues, land use issues, natural hazard issues, the scientists who are funded by the government agencies should work with decision-makers to help craft their research agendas. And again, the scientists are still independent, they don’t work in the decision-makers’ offices and the research isn’t paid for by them, but they can internalize the constraints the decision-makers have and the nature of their problem, and craft their research in ways that provide useful information. So it’s what you could think of as a kind of reconciliation between the demand function and the supply function, through living with each other, through getting to know each other.
Through getting much closer links and more frequent communication.
Sarewitz: Yes, and continual communication. But I think your point about whether the organization needs to pay for it is a great one, because to maintain independence, maybe it’s often better that they don’t. I think the RISA case and the ASU desert/water case are examples where the researchers are politically insulated. Their money doesn’t come from decision-makers, but they do hang out with one another on a continual basis. So I think there are all sorts of good, small examples like that, but they take really focused attention and appropriate institutional structures.
So is it also about anchoring the big problems much more locally?
Sarewitz: That’s a great question. Because obviously there are some problems that are big problems. I think when things can be made contextually sensitive at the local or regional level it is often very helpful. Yet a lot of times science funding processes aren’t particularly set up for that. But I don’t think applying these ideas at bigger scales is impossible. For example, you can think nationally about issues like energy technology innovation, a really contested issue, all sorts of different views about what technologies we should be doing and how we should do them, but you can still work at the national level. Compare the US and Germany and their different approaches to energy innovation. So I don’t think it has to be local. It depends on the problem.
Despite this recognition of living in a post-normal mode, many people still seem to have a hard time letting go of what’s called the deficit model of communicating science. The idea is that if only science were communicated better, then the public would understand and change their behavior. But there is overwhelming evidence that this model just doesn’t work. Why do you think this idea is so resilient?
Sarewitz: Well, I should also say that I don’t think most people buy into the post-normal model. And it’s not that they’re not capable; they may never have been exposed to it. The post-normal science idea really does challenge the notion of science as a unitary thing that tells us what to do. PNS really says that we have to think of science in a different way in these contested contexts, and I don’t think most scientists want to go there. The deficit model puts them in charge: “we communicate the facts, you listen and take action.” So if the problem isn’t solved, it’s not science’s problem. This is a self-serving superstition that the scientific community generally holds. And superstitions are hard to destabilize.
At the same time, also from my own personal experience talking to scientists that really care about making a societal impact, they just don’t know what the alternative is. I wonder whether you have an idea.
Sarewitz: Well, the answer might not always be with scientists doing something. It might be that we need different sorts of institutions. I think there are certain things that scientists shouldn’t be doing, which is making claims about expertise where they don’t have it, being dismissive of the public. I just think those things are unhelpful and reinforce this notion of privilege, even as individuals can’t help but look at the world and see that science is not a coherent thing that speaks one truth about all these issues. So one thing we could do would be to be more reflective about our enterprise, more honest and more humble about it, for a start.
But beyond that, I think we have huge institutional problems around science, and they’re not going to be dealt with by individual scientists. The leaders of the scientific community really need to step up on these issues. Policymakers who are serious about science policy need to step up on these issues. And I would actually say we should stop expecting individual scientists to do so much, because that’s part of the problem, this model that if only every individual scientist would communicate what they’re doing clearly to the world, then everyone would understand things and we’d all be more rational and our problems would go away.
You’re touching upon some of the issues here that you wrote about in your article “Saving Science” as well, about how the way science systems are set up encourages research that is mediocre, doesn’t have any application, or is just plain wrong. So I was wondering — what, in your opinion, are the key things that are wrong with the science system today?
Sarewitz: Well, I wrote fourteen thousand words about it, so…
Could you bring those down to one hundred?
Sarewitz: Well, first the idea that science is, and can be, and should be free is pretty meaningless. I also think it’s dangerous, because it has led to the idea that accountability for science is only an internal matter for the scientific community itself, that you don’t have to be accountable to the outside world. That really means that you don’t depend on feedback from the outside world, to help check that the science you’re doing is either worthwhile or any good. One of the reasons that all this poor quality science has come to light is because industry, you know, that we demonize, began to look at some of the results in biomedical science that they were using to try to develop drugs and couldn’t replicate them. This lack of accountability derives, I believe, from this ideal of pure, insulated science.
And another part of the problem is that so much science is being done on these big, open problems, where there’s really no way to know what’s good science, what’s a meaningful result. There’s no way to test. There’s no way to get feedbacks from the real system. In some ways we’re asking questions that are not answerable by science. It doesn’t mean you shouldn’t do research on them. But take the issue of nutritional advice that’s constantly oscillating, should you have caffeine or shouldn’t you, should you have red wine or shouldn’t you. I think the real lesson is, we aren’t asking the right kinds of questions. There are no answers to those questions. It depends. It’s contextual.
So there are problems related to isolation and internal accountability. There are also increasing amounts of science focused on trans-science or post-normal science problems, where it’s very very difficult to actually say anything about quality whatsoever, and it’s really easy for scientists to come up with results that look meaningful but aren’t.
And then of course there is the horrible incentive system to just publish, publish, publish, get grants, get grants, get grants. All that leads to a systematic positive bias, and if you combine those incentives with the other problems of isolation and accountability, you basically have a system out of control.
You already said that maybe it isn’t individual scientists that we should be asking to change the system. Who can change it?
Sarewitz: Right. Very difficult. I think many things have to happen. As I said, one thing is, leadership really needs to step up and say we have a really serious problem and we need to take it seriously. Policymakers need to not politicize that, which is very difficult for them, right? Senior scientists can step back, they don’t have to keep acting like gerbils on a wheel, they can say I won’t do bad science anymore. Or I am not going to answer unanswerable questions. Or I am going to be more modest about my results, or I am going to publish fewer papers. I am going to stop producing as many PhD students who aren’t going to get jobs later.
I think the scientific community could back away from some of the stereotypes of the idealized, platonic notion of science as this thing that gives us perfect truth. They all know it’s not true, but it’s a convenient kind of myth. A little more honesty about the nature of the enterprise. So there’s lots of things that are going to have to happen.
And then I also think — this is something I’ve tried to do in my own little modest way — let’s look for places where things are working really well. And let’s both understand why they’re working well, so that we can use that as a model, but also celebrate those particular things. They tend to be small and more marginal, often counter-cultural and against the grain.
I just want to return, one more time, to what you call trans-science: the big questions, where you say that maybe these aren’t questions we should be asking of science, or only of science. Do you think the societal response to these questions needs to shift away maybe from what is the right thing to do, and more towards — what is the thing that we want to do?
Sarewitz: Well, the question, what’s the thing we want to do, is something that needs to be politically established. And there’s no point, I believe, in continuing to collect facts about what is to be done until we have some closure about what we ought to do. Now those aren’t entirely distinct. But they are not nearly as linked as we say they are. There was plenty of good data around climate change in 1990, that suggested things should be done, and people started talking about it then. We didn’t need 20 more years of climate models during which, actually the uncertainties and the policies got worse and worse not better and better, for reasons that I don’t want to talk about now.
But I think one thing we have to give up on, because I think it’s wrong, is the idea that first we can get the science right, and then we’ll know what to do and how to do it. I think first we have to be clear about what the values at stake are, and who the potential winners and losers from different types of options would be. And then use that to inform both political debate and knowledge creation on behalf of different types of options, knowing that they’re going to get fought out politically. And I think there are places where we do that. But too often, and politicians are totally complicit in this, it goes the other way. What would they rather do: have somebody do research, or have to make a difficult decision? So they get to say, ‘do the research and tell us what to do’, and scientists get to say, ‘great!’
We don’t yet know.
Sarewitz: Yeah. It’s a kind of tacit conspiracy.
In your article you touch upon big data as something that risks making the problems of science worse rather than better. Everyone looks to it as this incredibly huge pool of scientific discoveries waiting to be made.
Sarewitz: Yeah. I think it will be really useful for some things, like self-driving cars, where you’ll need infinite amounts of geospatial data and all that. For those kinds of technological applications, where you get fast feedbacks, big data is fantastic. But for trans-science problems, where you can wade into the data, look for the causal relation that you think might be worth testing, and do some statistical tests on it, I think we’re going to end up seeing the noise around these issues get worse and worse. Scientists are going to be able to find many more little bits of truth within these complex issues that still don’t add up to any particular coherent view of them. It’s going to make the problem worse, not better, because it gives scientists a bigger reservoir to play around in when searching for causal relationships. But we know that for complex problems there are no single causal relationships. So unless you can put together whole networks of them to understand how they work…
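[The “bigger reservoir” worry Sarewitz describes is, in statistical terms, the multiple-comparisons problem: screen enough variables and some will look “significant” by chance alone. A minimal sketch with purely synthetic noise data (an illustration, not anything from the interview) shows how easily this happens:]

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_predictors = 100, 1000

# Pure noise: an "outcome" and 1000 candidate predictors, all independent.
outcome = rng.standard_normal(n_samples)
predictors = rng.standard_normal((n_samples, n_predictors))

# Correlate every predictor with the outcome.
corrs = np.array([np.corrcoef(predictors[:, j], outcome)[0, 1]
                  for j in range(n_predictors)])

# Under the null, r is roughly Normal(0, 1/sqrt(n)), so |r| > 2/sqrt(n)
# is an approximate 5% "significance" cut.
threshold = 2 / np.sqrt(n_samples)
hits = int(np.sum(np.abs(corrs) > threshold))
print(f"'Significant' correlations found in pure noise: {hits} of {n_predictors}")
```

Roughly 5% of the predictors, around fifty here, clear the cut despite the data containing no real relationships at all; a larger reservoir of variables yields proportionally more of these false leads.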
But wouldn’t that be the ultimate end goal of big data?
Sarewitz: It might be, but that’s the ultimate end goal of what’s known as Laplace’s demon: a comprehensive model of everything. But remember that a comprehensive model of everything is the thing itself. Any time you go below that you have to make assumptions, and any time you make assumptions you’ll have biases included. So we can do pretty well on certain kinds of models, especially the ones where we get feedback, like weather forecasts: every day you get to find out if your forecast was any good. But for things where we don’t get those kinds of feedbacks, I think the idea that comprehensive modeling can provide predictive and certain knowledge is illusory.