The urgent need for data to tackle climate disasters

As climate-related disasters intensify, specialists stress the need for interconnected global databases and improved early warning systems to safeguard vulnerable communities.

Climate-related disaster risk has been a key topic at the COP29 summit in Baku, Azerbaijan, which closes this week. With climate change driving an increase in disasters, experts at the summit have stressed the importance of improving and expanding early warning systems, as well as developing an interconnected global database to track climate loss and damage.

At the basis of all of that work is one fundamental thing: data. “All the efforts on climate action hinge upon the availability of and access to data,” notes the UN Office for Disaster Risk Reduction (UNDRR).

Tackling challenges like mapping flood and wildfire risk zones or assessing loss and damage more effectively requires many kinds of data: geospatial, meteorological, historical and geographical records, among many others.

In the face of the escalating climate crisis, experts say it’s vital to make the vast amounts of data available to scientists more accessible and usable, to inform both immediate decisions and long-term planning.

Answering those questions will be key to building systems that enable people to make quick, informed decisions in the face of increasing disasters – decisions which can be a matter of life and death.

The importance of data for effective decision-making

“The background to getting to those decisions is all about data,” explained Bapon Fakhruddin, a hydro-meteorologist with extensive experience in climate risk, and Water Resources Management Senior Specialist at the Green Climate Fund. Fakhruddin co-chairs the Task Group on FAIR Data for Disaster Risk Research at the ISC Committee on Data (CODATA), which works to improve the accessibility and usability of disaster risk data, enabling more effective action and decision-making.

Fakhruddin points to an example of flooding in an agricultural area: if heavy rain appears in the weather forecast not long before crops are ready to be harvested, farmers need to make quick decisions. Should they flee to high ground immediately, or is it safe enough to stay at home? If flooding is expected, how much crop damage is likely? Would it be better to harvest early, or bet that enough of the crop will survive? 

Many different types of information influence those decisions – the initial weather forecast and all of its underlying data, as well as ongoing observations of rainfall and water depth. Then, agricultural information about the crops, including their maturity and flood tolerance – plus, data about the land itself, including the soil, topography and vegetation, alongside many other factors. 
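To make that concrete, here is a minimal sketch in Python of how such inputs might feed a simple decision rule. Every name, threshold and value below is a hypothetical illustration, not part of any actual early warning system.

```python
# Hypothetical sketch: turning forecast, crop and land data into a simple
# harvest decision. All thresholds and parameter names are illustrative.

def harvest_advice(forecast_rain_mm: float, crop_maturity_pct: float,
                   flood_tolerance_mm: float, soil_drainage_factor: float) -> str:
    """Rough recommendation for a farmer facing a heavy-rain forecast.

    forecast_rain_mm:      expected rainfall over the coming days
    crop_maturity_pct:     0-100, how close the crop is to harvest
    flood_tolerance_mm:    standing water the crop can survive
    soil_drainage_factor:  0-1, fraction of rainfall the soil drains away
    """
    expected_standing_water = forecast_rain_mm * (1 - soil_drainage_factor)

    if expected_standing_water <= flood_tolerance_mm:
        return "stay: expected flooding is within the crop's tolerance"
    if crop_maturity_pct >= 80:
        return "harvest early: crop is near maturity and flooding is likely"
    return "prioritize safety: crop is immature and flooding is likely"


# Example: 120 mm forecast, crop 85% mature, tolerates 40 mm, soil drains half
print(harvest_advice(120, 85, 40, 0.5))
```

A real system would of course weigh far more inputs – observed rainfall, river levels, topography – but even this toy rule shows how each data stream changes the advice.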

The decisions that flow from that data will determine whether or not farmers facing disaster can bring food to their communities and keep earning a living. At the same time, this data can also feed early warning systems, helping authorities tune them to be more effective and accurate, and giving people time to prepare and make decisions. 

After disasters, data is key to making calculations about loss and damage. That’s important both at the national level, as people look for financial support from local authorities, and at the international level with the operationalization of the climate Loss and Damage Fund, which aims to provide compensation for the effects of climate disasters. 

With the stakes so high, specialists need to be able to collect and access reliable data, and understand its limitations. “If you don’t have a good dataset, and if you don’t have a standardized dataset, you’re actually doing ‘garbage in, garbage out.’ You’re underestimating your risk, or you’re overestimating your risk,” Fakhruddin said.
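As a small illustration of the “garbage in, garbage out” point, the hedged sketch below runs basic sanity checks on a rainfall dataset before it reaches a risk model. The column names and plausibility limits are assumptions made up for this example.

```python
# Hypothetical sketch of dataset sanity checks before risk modelling.
# The column names ("station", "rain_mm") and limits are illustrative.
import pandas as pd

def validate_rainfall(df: pd.DataFrame) -> list:
    """Return a list of data-quality problems found in daily rainfall records."""
    problems = []
    if df["rain_mm"].isna().any():
        problems.append("missing rainfall values")
    if (df["rain_mm"] < 0).any():
        problems.append("negative rainfall: likely an encoding error")
    if (df["rain_mm"] > 500).any():
        problems.append("implausibly high daily rainfall: check the units")
    return problems

records = pd.DataFrame({
    "station": ["A", "A", "B"],
    "rain_mm": [12.0, -1.0, 6100.0],  # -1.0 and 6100.0 are deliberate errors
})
for issue in validate_rainfall(records):
    print("warning:", issue)
```

Checks like these are mundane, but skipping them is exactly how a risk estimate ends up quietly under- or overestimated.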

Working on disaster risk requires connecting measurements and models created by physical scientists with scenarios and policies developed by engineers, planners and social scientists, he explained.

Bridging the data gap

Making sure that data from all of those sectors is not only available but can also be connected has been an ongoing battle. Guidelines on how to stitch together data from different areas were among the major outputs of CODATA’s WorldFAIR project, which drew recommendations from many different fields.
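The stitching problem is easy to picture with a small hedged sketch: two sources record the same rainfall in different formats and units, and must be mapped onto a shared schema before they can be combined. All field names and values here are invented for illustration; this is not WorldFAIR’s actual schema.

```python
# Hypothetical sketch of harmonizing two differently formatted datasets.
# Field names, date formats and units are invented for illustration.
import pandas as pd

# Source 1: a meteorological agency reporting rainfall in millimetres
met = pd.DataFrame({"station_id": ["S1"], "date": ["2024-11-20"], "rain_mm": [85.0]})

# Source 2: an agricultural survey reporting rainfall in inches
agri = pd.DataFrame({"site": ["S1"], "obs_date": ["20/11/2024"], "rain_in": [3.35]})

# Map both onto a common schema: station, date (ISO 8601), rain_mm
common_met = met.rename(columns={"station_id": "station"})
common_met["date"] = pd.to_datetime(common_met["date"]).dt.date

common_agri = pd.DataFrame({
    "station": agri["site"],
    "date": pd.to_datetime(agri["obs_date"], dayfirst=True).dt.date,
    "rain_mm": agri["rain_in"] * 25.4,  # inches -> millimetres
})

print(pd.concat([common_met, common_agri], ignore_index=True))
```

Neither dataset is wrong on its own; they simply cannot be analysed together until someone agrees on the schema – which is exactly what shared guidelines aim to make routine.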

“Often, the biggest challenge is actually availability. We have the data, but it’s maybe in a different format, or it’s not publicly available,” Fakhruddin added. Data policies often keep valuable information private. In other cases, the relevant data exists but can’t be accessed for frustratingly mundane reasons – like being forgotten on a digital drive whose owner left and took the password with them, he said.

“If you’re able to locate those things, you can actually create a fascinating information product for the greater good,” Fakhruddin said. The first step is better data access policies – who can access data, and what can they do with it? How can that data be shared more efficiently between specialists and with other people who need it in times of crisis?

There’s also the question of training and capacity: expanding access to data needs to be paired with work to help specialists make the most of it across sectors. 

“I think that’s actually where there is the biggest gap in the data ecosystem, because not every sector is equally equipped and trained to manage and handle the dataset,” Fakhruddin added. “I think that is quite necessary at this moment, and you should actually look at a more holistic, integrated and common understanding of those data and value for data,” he said.


Picture by Greg Johnson on Unsplash

Disclaimer
The information, opinions and recommendations presented in our guest blogs are those of the individual contributors, and do not necessarily reflect the values and beliefs of the International Science Council.
