
Podcast with Qiufan Chen: Science Fiction and the Future of Science: Values and Senses in Artificial Intelligence 

Qiufan Chen, award-winning Chinese speculative fiction writer, shares his view on the potential of science fiction to shape the future of science in the Centre for Science Futures' new podcast series, in partnership with Nature.

Scientists and researchers increasingly value science fiction for its contributions to anticipating future scenarios. As part of its mission to explore the directions in which changes in science and science systems are leading us, the Centre for Science Futures sat down with six leading science fiction authors to gather their perspectives on how science can meet the many societal challenges we will face in the next decades. The podcast is in partnership with Nature.

In our fifth episode, Qiufan Chen joins us to discuss agency and social responsibility in science as a human endeavour. For Chen, this is especially applicable to artificial intelligence. During the podcast, he walks us through the impacts of AI on the future of scientific research and how AI development can be better regulated, and thus made more ethical.

Subscribe and listen via your favourite platform


Qiufan Chen

Qiufan Chen is an award-winning Chinese speculative fiction writer, author of The Waste Tide and co-author of AI 2041: Ten Visions for Our Future. He is also a research scholar at Yale University and a fellow of the Berggruen Institute, USA. Our main discussion centres on artificial intelligence: how we can harness the power of this technology while avoiding the dangers it poses.


Transcript

Paul Shrivastava (00:04):

Hi, I’m Paul Shrivastava from the Pennsylvania State University. And, in this podcast series I’m speaking to some of today’s leading science fiction writers. I want to hear their views on the future of science and how it must transform to meet the challenges we face in the years ahead.

Qiufan Chen (00:24):

AI in the future, maybe it could be used to help us reflect on ourselves, like a mirror, to make us become better human beings.

Paul Shrivastava (00:33):

Today, I’m talking to Qiufan Stanley Chen, an award-winning Chinese writer. I read his novel, The Waste Tide, many years ago, and was impressed by his portrayal of the predicaments of electronic waste. His most recent co-authored book, AI 2041: Ten Visions for Our Future, vividly combines imaginative stories with scientific forecasts. We spoke a lot about artificial intelligence and how we can harness the power of this incredible technology, while avoiding some of the dangers it poses.

Thank you very much for joining us, Stan. Welcome. It’s amazing; the range of scientific topics that you have mastery over is really notable. How did you come to be interested in these scientific topics?

Qiufan Chen (01:28):

So, as a sci-fi fan, I have to admit that I started from Star Wars, Star Trek, Jurassic Park, the classic sci-fi movies, books and animations back in the day. Each time they gave me a lot of new inspiration and ideas. So, I was always totally fascinated by all this science and imagination of the future, of outer space, and even of species from millions of years ago, and how we might bring them back to life.

Paul Shrivastava (02:02):

So, science has been going on for a very long time. What is your general view on science as a human endeavour?

Qiufan Chen (02:13):

To me, it is definitely a huge achievement. And, of course, it has made our living conditions better as human beings. But, when we look back at history, I have to admit that there are a lot of challenges, because it feels to me like the agency is not absolutely in the hands of human beings. Sometimes I feel that maybe science and technology are like some kind of species, some kind of biological being: it has its own purpose, it has its own birth and life cycle, it wants to exist and evolve together with human beings. So, we are like the host and they are like the virus. We can see it that way, or the other way around. So, I always feel that there is a really deep entanglement between science and human beings. Sometimes I feel that we have been changed a lot by all this development of science and technology, but we never know what direction lies ahead of us.

Paul Shrivastava (03:24):

Well, let’s make it more concrete and focus on what is top of mind right now, which is artificial intelligence. How can we ensure that, in the development of AI, we bring social justice and ethical and moral considerations to bear?

Qiufan Chen (03:40):

The problem is that we haven’t fully invested in building up the kind of regulation and framework that would ethically prevent something negative from happening. I think we need more diversity in AI, and especially in large language models, because we’re talking specifically about alignment. Even among human beings across different countries, cultures and languages, we don’t have a shared alignment as a single standard. So, how can we teach the machine, the AI, to be aligned with a human value system or standard as one integral whole? I think this is still something very preliminary. But I think the key input should come not only from the tech companies, the engineers and all the people working in the industry, but also from the interdisciplinary world, such as anthropology, psychology and sociology, for example. We need more diverse perspectives from the humanities, because AI is supposed to be built for the people, to serve the people. But the human factor, right now, I can feel is quite missing from the loop.

Paul Shrivastava (05:11):

So in your view, how will these technological advances change the way that science will be done in the future?

Qiufan Chen (05:19):

It seems to me like this is a totally new paradigm shift, where scientists can use AI to seek new patterns, predict protein structures and find correlations within huge amounts of data. I think this is going to be something revolutionary. But there are also a lot of concerns within this process. For example, we can now predict millions of protein structures, but the problem is, what percentage of all these predicted structures are valid and effective for real diseases and real human bodies? Another thing is that this whole revolutionary area is very focused on accumulating huge datasets. What kinds of groups, what kinds of populations, are these data collected from? Were they collected with notice of everything they would be used for? And are we sharing the data among different groups of scientists and researchers? So I think this is always about how we can build up this kind of counterbalancing system to minimize the risks and challenges, while really fulfilling the demands of the market and delivering the best benefits for the people.

Paul Shrivastava (06:56):

Yeah, I think building the checks and balances system is an important part of the development of AI. But the environmental impacts of artificial intelligence itself are rarely mentioned in public science narratives.

Qiufan Chen (07:13):

This is something very paradoxical, because AI requires so much power. It needs real-time computation. It needs so much extraction from the environment. But meanwhile, we can use it to detect wildfires from satellites. We can use it to protect biodiversity. We can use it to find new solutions for energy storage in batteries, for smart grids, and maybe even for nuclear fusion technology in the future. So if we use it in the right way, it can definitely protect us and help fight climate change.

Paul Shrivastava (08:03):

At some point in the future, do you think that AI will understand more than what humans can understand?

Qiufan Chen (08:13):

So, what I’ve been thinking about is some model, a large model that goes beyond the human. For example, with data from animals, plants, fungi, even from microbes and the whole environment. So, we’re talking about a whole-Earth model. We would need to deploy this kind of sensor layer around the world. Maybe we could use smart dust, which was mentioned in Lem’s novel The Invincible. So, you’re talking about a swarm of small dust; basically, it’s a collective intelligence. And humans could learn so much from this kind of large model, because it would help us perceive things beyond our sensory system and beyond the human. Then we could be less human-centric, and we could be more compassionate towards other species. And maybe that would be the solution for fighting climate change, because we could feel how the other species feel, and all this pain, all this suffering, all this sacrifice could become something tangible and real.

Paul Shrivastava (09:36):

Wonderful. Imagining artificial intelligence in the model of humans is actually an inferior way of thinking about artificial … The superior way, what you call the whole-world model, is the way to develop.

Qiufan Chen (09:54):

Yeah. So, this reminds me of Buddhism, because in Buddhism all sentient species are as equal as possible, and there is no such thing as human beings being premier over others. So, I’m always thinking that we need to find a way to embed the philosophy and values of Buddhism and Taoism into the machine.

Paul Shrivastava (10:27):

So, I’m wondering, since you understand the technical elements of AI: can AI be trained in Buddhism, in Taoism? Because all the books and values are already codified. Is it possible for an AI to train on them and create a synthetic world religion, if you will?

Qiufan Chen (10:50):

It definitely could, and it could do a better job than any of the priests, any of the monks, any of the gurus in the world, because it’s so knowledgeable. But, as a practitioner of Taoism, I feel there’s something beyond the synthetic understanding of all this; call it religious or spiritual experience, it is something embodied. You have to do all this physical homework. So, I think this is something AI still lacks. It doesn’t have a body, it doesn’t have a complex sensory system, it doesn’t have self-awareness, for example. And I think all of those parts are what makes a human, human. AI in the future, maybe it could be used to help us reflect on ourselves, like a mirror, to make us become better human beings.

Paul Shrivastava (11:49):

In your imagination, can AI have a soul?

Qiufan Chen (11:54):

The emergence of consciousness is basically a mystery in science right now. So it feels to me there is definitely some connection between the emergent abilities of large language models and all those emergence phenomena in classical or quantum physics and complex systems. So I think, mathematically, maybe someday we can prove the existence of consciousness. But it’s not a zero-or-one state; it’s like a continuous spectrum of states. So that means maybe even a rock, even a tree, even a river or a mountain has a certain level of consciousness, but we just don’t recognize it because we’re so human-centric. But it’s all about computation. It’s all about time-space compression. It’s all about information preservation. So it’s all about the reduction of entropy. So it’s not an epistemological question; I think it’s an ontological question. It’s about existence.

Paul Shrivastava (13:14):

Thank you for listening to this podcast from the International Science Council’s Centre for Science Futures, done in partnership with the Arthur C. Clarke Center for Human Imagination at UC San Diego. Visit futures.council.science to discover more work by the Centre for Science Futures. It focuses on emerging trends in science and research systems and provides options and tools to make better informed decisions.


Paul Shrivastava, Professor of Management and Organizations at Pennsylvania State University, hosted the podcast series. He specialises in the implementation of Sustainable Development Goals. The podcast is also done in collaboration with the Arthur C. Clarke Center for Human Imagination at the University of California, San Diego.

The project was overseen by Mathieu Denis and carried out by Dong Liu, from the Centre for Science Futures, the ISC’s think tank.




Photo from KOMMERS on Unsplash.


Disclaimer
The information, opinions and recommendations presented in our guest blogs are those of the individual contributors, and do not necessarily reflect the values and beliefs of the International Science Council.
