Scientists and researchers increasingly value science fiction for its contributions to anticipating future scenarios. As part of its mission to explore the directions in which changes in science and science systems are leading us, the Centre for Science Futures sat down with six leading science fiction authors to gather their perspectives on how science can meet the many societal challenges we will face in the next decades. The podcast is in partnership with Nature.
In our sixth and final episode, Cory Doctorow joins us to discuss the issue of trust in science and what we can do to strengthen it. He covers issues with artificial intelligence, including concerns about algorithms and training models. Doctorow wants to see how the coordinative power of digital technology can be harnessed towards a more sustainable future.
Cory Doctorow
He is the author of many books, most recently The Lost Cause, a solarpunk science fiction novel of hope amidst the climate emergency; The Internet Con: How to Seize the Means of Computation; and Red Team Blues. In 2020, he was inducted into the Canadian Science Fiction and Fantasy Hall of Fame. Born in Toronto, he now lives in Los Angeles.
Paul Shrivastava (00:03):
Hi, I am Paul Shrivastava from the Pennsylvania State University. In this podcast series, I’m speaking to some of the world’s leading science fiction writers. I want to hear from them how science can help us tackle the many-sided challenges ahead. After all, they make a living from thinking about the future and how it could or should be.
In this episode, I’m talking to Cory Doctorow, a science fiction novelist, journalist and technology activist. For the last two decades, he has published many works on tech monopolies and digital surveillance. Our conversation touched on digital rights management, social justice and sustainability in the digital world. I hope you enjoy it.
Welcome, Cory, and thank you for being part of this podcast. Can you begin by telling us a little bit more about your relationship with science, broadly, and with science fiction writing?
Cory Doctorow (01:05):
Well, I grew up under extremely fortunate circumstances for someone interested in science fiction. I grew up specifically in Toronto in the 1980s. And there was a woman there who was quite a whirlwind in the field, a woman named Judith Merril, a great writer, editor and critic. She was the doyenne of the British new wave of science fiction. And, so, Judy would allow anyone to bring down their stories and workshop them with her; she would critique them. So this was like… I don’t know. It’s like getting your physics homework help from Einstein. And then she started these writing workshops, where she’d gang the promising writers who came to her into weekly meetings. And so I was in one of those for many years, and I just had as close to a formal apprenticeship in science fiction as possible.
In terms of science, you know, I’m a dilettante. The closest I come to being a scientist is having an honorary degree in computer science from the Open University, where I’m a visiting professor of computer science. And, in particular, I’ve had a great policy relationship with computer science, because for more than 20 years now I’ve worked in a field we could broadly call digital human rights, related to access to information, censorship, privacy and equity online.
Paul Shrivastava (02:17):
So let’s dig a little deeper into some of these issues. You’ve dealt with a range of topics relating to technological advancements and whose interests they serve, from surveillance technology in Little Brother and copyright law in Pirate Cinema to cryptocurrency in Red Team Blues. Very often, the narratives portray the negative consequences of unchecked technological growth, or technological growth in the service of capitalism, if you will. So how do you perceive the role of science in this increasingly digital landscape that we are entering?
Cory Doctorow (02:57):
I think that you can’t have science without equity, in the sense that the thing that distinguishes science from the forms of knowledge creation that preceded the Enlightenment is access, which is the precondition for adversarial peer review. And so leaving aside a moral obligation—which I think we can say that we all have moral duties to one another—I think that there’s just an instrumental case for saying that if other people aren’t allowed to inspect your data and your methods, and to try to replicate your work and criticize you freely, then you’re not doing science. Alchemists did a thing that looked a lot like science, right? They observed the world, they formulated a hypothesis, they designed an experiment, they ran the experiment, and then they all died of drinking mercury. Because it turns out that you can kid yourself that the experiment was a success right up to the point that the mercury poisoning kills you.
Cory Doctorow (04:05):
And the difference between alchemy and science is not that the scientists who came after were smarter or less prone to self-delusion. It’s that they were subject to the rigours of adversarial peer review, which, as a precondition, requires publication and access. And I think that when you have a concentration of power in the commercial sector, which is to say monopoly, it’s very hard for regulators to remain independent. Those firms become too big to fail and too big to jail. Then you actually create the conditions for people denying science, which has disastrous consequences for themselves, but also for all of us.
Paul Shrivastava (04:39):
Well, I agree with you that there is a need. I think the capture that you’re referring to, by corporations and governments, the two primary sources of funding for science, is complete. And now we are looking at artificial intelligence as a pervasive scientific endeavour that is going to change everything. What kind of policy recommendations could you propose for that whole arena?
Cory Doctorow (05:09):
Well, I’d like to start by saying that, as the first person to mention AI, you owe everyone else on this call a drink. That is the rule now with AI. Let me start with a caveat, which is that I’m not convinced that AI is, as you say, this pervasive scientific endeavour that’s going to change everything, for lots of reasons. I am skeptical that, without close supervision, AI will be able to produce things that are reliable enough to use in high-stakes environments. And if the supervision requires the same amount of diligence that we had before, then I don’t know that there’s a case for it. I think if we’re prudent in terms of our regulation and we say, “Look, if the AI can hallucinate and if the hallucination leads to lethal consequences, the AI can only be supervised at a ratio of one to one.” If the self-driving car drives safely 90% of the time and 10% of the time accelerates into oncoming traffic, then the number of driver-supervisors you need for each self-driving car is one, which is to say you don’t get to fire a single driver. So now you’ve just got a more expensive car.
Cory Doctorow (06:14):
And I think that any bubble that depends on continuing to attract investment capital that mostly gets lit on fire before any return is generated has to run on a lot of hype. And we see that hype around us to an enormous extent. Instead of worrying about the actual, manifest harms of AI, the decision-support algorithm that denies you a mortgage because of your race, or that sends your kid into protective services because of your economic status, or that denies you bail or entry into a country, we’re focusing on what I would, frankly, in my professional capacity, call bad science fiction about the autocomplete on steroids waking up and turning us all into paperclips. That leaves aside the real, material stuff that’s going on with AI.
Paul Shrivastava (07:08):
So what is the role of science communicators in bursting this bubble, the hype that has been built up around AI? I mean, the general narrative out there is that it’s going to change everything. And what I’m hearing from you is that there are some real, fundamental issues underlying it.
Cory Doctorow (07:26):
I think there are some gaps in the main line of science communication about AI that could be fruitfully filled. I have never heard a popular science programme describe the potential limits of federated learning, for example. What happens if we turn off the big servers? What if the investors just move on? What does AI look like if we never train another major model and all we do is tune the existing models that can run on commodity hardware? And then a taxonomy of applications that are not sensitive to the commonly understood problems with AI, the low-stakes or resilient applications. What are those applications? If we take out all the applications where you need one-to-one supervision, which ones are those, and what’s left?
Paul Shrivastava (08:15):
Let’s move on to talking about the Anthropocene. Processes that support life are now changing, if not collapsing outright. How can we leverage advancements in the digital world, which you’ve covered in so many different ways, to mitigate the human impact on the environment and ensure a sustainable future?
Cory Doctorow (08:38):
My latest novel, The Lost Cause, is a novel about this. The thing that’s happened in this novel is not a deus ex machina. We have not figured out how to do carbon capture at a rate that defies all of the current state of the art. But what we have done is taken it seriously. Here we are, you know, trapped on this bus, barreling towards a cliff. And the people in the front rows and in first class keep saying there’s no cliff. And if there is a cliff, we’ll just keep accelerating until we go over it. And one thing that we know for sure is we can’t swerve. If we swerve, the bus could roll and someone might break their arm, and no one wants a broken arm. And this is a book where people grab the wheel and swerve. Where millions of people are engaged in very serious long-term projects to do things like relocate every coastal city several kilometres inland.
Cory Doctorow (09:32):
And that climate adaptation, when you contemplate it, is quite dizzying. It can feel a little demoralizing to think, well, I guess all the spare labour that everyone has for the next 300 years is going to go into fixing these foolish errors that we made before. And so this is a book about that project. And it’s about pursuing that project along the lines of the insights of a dear friend of mine, Debbie Chachra, who has recently written a very good book called How Infrastructure Works. Deb’s a materials scientist, and she points out that energy is effectively infinitely abundant, but materials are very scarce. And yet for most of human history, we treated materials as abundant, used them once and threw them away. And we treated energy as scarce. There is a technical reorientation that’s latent in this book, and that Deb makes very explicit in hers, in which we do things like use more energy to produce things so that they are more easily decomposed back into the material stream.
Paul Shrivastava (10:38):
We seem to be busy consuming the planet at an unprecedented pace. Can science fiction somehow be an aid in helping humans reformulate their worldview so that it’s more compatible with our challenges on this planet?
Cory Doctorow (10:54):
Well, this is something I’ve been writing about since my 2017 novel Walkaway: this idea that abundance arises out of access to material, but also out of the social construction of what we want and, finally, the efficiency of distributing goods. So I am a homeowner, and that means that three times a year I need to make a hole in a wall. And so I own a drill, and I jokingly call it the minimum viable drill. It’s the drill that is economically rational for someone who makes three holes a year to own. And I have to give up, like, a whole drawer to storing this awful drill. And what you realize is that you are paying an enormous tax, both in the calibre of goods that you have and in the availability of space in your home, to maintain access to things that you rarely need. There’s another kind of drill, which I sometimes call the library socialism drill, where there’s just, like, a stochastic cloud of drills in your neighborhood that know where they are and that maintain telemetry on their usage to improve future manufacturing. They readily decompose back into the material stream. And you can always lay hands on a drill when you need it, and it’s the greatest drill ever made.
Cory Doctorow (12:08):
Multiply that by lawnmowers and the extra plates that you keep for Christmas or dinner parties, and all the other things that are in your house that you don’t need all the time. And that is a world of enormous abundance. That is more luxury. And when you combine those three things, the efficiency of material and energy use, the coordinative nature of technology, and the engineering of our desire, there is a future in which we live with a much smaller material and energy footprint and have a much more luxurious life. A life of enormous abundance.
Paul Shrivastava (12:42):
On that hopeful message, I’m going to ask you one last question. If there were one lesson for science to learn from science fiction, what would it be, in your mind?
Cory Doctorow (12:56):
I would say that the most important thing that science fiction does, in respect of science, is challenge the social relations of technology, of scientific discovery and of scientific knowledge. The most important question about technology is rarely “What does this do?” but rather “Who does it do it for, and who does it do it to?” Technology under democratic control is very different from technology that is imposed on people.
The idea is that a technology designed with the humility to understand that you cannot predict the circumstances under which it will be used, so that you leave space for the users themselves to adapt it, is the best of all technical worlds. And every language has a name for this. You could call it a bodge, which is sometimes a bit pejorative, but I think we all like a good bodge. In French, it’s bricolage. In Hindi, it’s jugaad.
Paul Shrivastava (14:00):
Jugaad!
Cory Doctorow (14:02):
Every language has a word for this, and we love it. And it’s only through the humility to anticipate the unanticipatable that we are worthy ancestors to our intellectual descendants who will come after us.
Paul Shrivastava (14:22):
Thank you for listening to this podcast from the International Science Council’s Centre for Science Futures, produced in partnership with the Arthur C. Clarke Center for Human Imagination at UC San Diego. Visit futures.council.science to discover more work by the Centre for Science Futures. It focuses on emerging trends in science and research systems and provides options and tools to make better-informed decisions.
Paul Shrivastava, Professor of Management and Organizations at Pennsylvania State University, hosted the podcast series. He specialises in the implementation of the Sustainable Development Goals.
The project was overseen by Mathieu Denis and carried out by Dong Liu from the Centre for Science Futures, the ISC’s think tank.
Photo by Elimende Inagella on Unsplash.
Disclaimer
The information, opinions and recommendations presented in our guest blogs are those of the individual contributors, and do not necessarily reflect the values and beliefs of the International Science Council.