Making climate models relevant for local decision-makers

Climate models are a key technology in predicting the impacts of climate change. By running simulations of the Earth's climate, scientists and policymakers can estimate conditions like sea level rise, flooding, and rising temperatures, and make decisions about how to appropriately respond. But current climate models struggle to provide this information quickly or affordably enough to be useful on smaller scales, such as the size of a city.

Now, the authors of a new open-access paper published in the Journal of Advances in Modeling Earth Systems have found a method that leverages machine learning to retain the benefits of current climate models while reducing the computational costs needed to run them.

"It turns the traditional wisdom on its head," says Sai Ravela, a principal research scientist in MIT's Department of Earth, Atmospheric and Planetary Sciences (EAPS), who wrote the paper with EAPS postdoc Anamitra Saha.

Traditional wisdom

In climate modeling, downscaling is the process of using a global climate model with coarse resolution to generate finer details over smaller regions. Imagine a digital picture: A global model is a large picture of the world with a low number of pixels. To downscale, you zoom in on just the section of the photo you want to look at, such as Boston. But because the original picture was low resolution, the new version is blurry; it doesn't give enough detail to be particularly useful.
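The pixel analogy can be sketched in a few lines of Python. The grid below uses made-up temperature values standing in for real model output; the point is only that naively "zooming in" adds pixels but no new information:

```python
import numpy as np

# Hypothetical coarse "global model" field: a 4x4 grid of temperatures (values invented).
coarse = np.array([[10., 11., 12., 13.],
                   [11., 12., 13., 14.],
                   [12., 13., 14., 15.],
                   [13., 14., 15., 16.]])

# Naive downscaling: blow each coarse cell up into a 4x4 block of fine pixels.
fine = np.kron(coarse, np.ones((4, 4)))

# The "zoomed" picture has 16x as many pixels, but every pixel inside a
# block is identical -- no local detail was added, which is why the
# result looks blurry.
print(fine.shape)            # (16, 16)
print(np.unique(fine).size)  # still only 7 distinct values, same as the coarse grid
```

Real downscaling methods try to fill those blocks with plausible fine-scale structure, which is exactly the "added information" Saha describes next.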

"If you go from coarse resolution to fine resolution, you have to add information somehow," explains Saha. Downscaling attempts to add that information back in by filling in the missing pixels. "That addition of information can happen two ways: Either it can come from theory, or it can come from data."

Conventional downscaling often involves using models built on physics (such as the process of air rising, cooling, and condensing, or the landscape of the area) and supplementing them with statistical data taken from historical observations. But this method is computationally taxing: It takes a lot of time and computing power to run, which also makes it expensive.

A little bit of both

In their new paper, Saha and Ravela have figured out a way to add the data another way. They've employed a technique in machine learning called adversarial learning. It uses two machines: One generates data to go into our photo, while the other judges the sample by comparing it to actual data. If it thinks the image is fake, then the first machine has to try again until it convinces the second machine. The end goal of the process is to create super-resolution data.
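The generate-judge-retry loop described above can be illustrated with a minimal sketch. This is not the authors' actual model: a real adversarial setup uses neural networks trained by gradient descent, whereas here the "generator" is just a one-parameter random sampler, the "discriminator" simply checks whether the sample's statistics match the reference data, and the gamma-distributed "rainfall" values are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real" fine-scale data the discriminator has access to (hypothetical values).
real = rng.gamma(shape=2.0, scale=5.0, size=10_000)

def discriminator(sample):
    """Judge a candidate sample: does it look statistically like the real data?"""
    return (abs(sample.mean() - real.mean()) < 0.5
            and abs(sample.std() - real.std()) < 0.5)

def generator(scale):
    """Generate a candidate sample from a tunable parameter."""
    return rng.gamma(shape=2.0, scale=scale, size=10_000)

# Adversarial loop: the generator keeps adjusting until the discriminator
# can no longer tell its output from the real data.
scale = 1.0
while not discriminator(generator(scale)):
    scale += 0.1  # crude parameter update standing in for gradient training

print(f"generator converged near scale {scale:.1f}")
```

The loop halts once the generated samples are statistically indistinguishable (by this crude test) from the reference data, with `scale` ending up close to the true value of 5.0 used to draw the "real" data.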

Using machine learning techniques like adversarial learning is not a new idea in climate modeling; where it currently struggles is in handling large amounts of basic physics, like conservation laws. The researchers discovered that simplifying the physics going in, and supplementing it with statistics from historical data, was enough to generate the results they needed.

"If you augment machine learning with some information from the statistics and simplified physics both, then suddenly, it's magical," says Ravela. He and Saha started with estimating extreme rainfall amounts by removing the more complex physics equations and focusing on water vapor and land topography. They then generated general rainfall patterns for mountainous Denver and flat Chicago alike, applying historical accounts to correct the output. "It's giving us extremes, like the physics does, at a much lower cost. And it's giving us similar speeds to statistics, but at much higher resolution."

Another unexpected benefit of the results was how little training data was needed. "The fact that only a little bit of physics and a little bit of statistics was enough to improve the performance of the ML [machine learning] model … was actually not obvious from the beginning," says Saha. It takes only a few hours to train, and it can produce results in minutes, an improvement over the months other models take to run.

Quantifying risk quickly

Being able to run the models quickly and often is a key requirement for stakeholders such as insurance companies and local policymakers. Ravela gives the example of Bangladesh: By seeing how extreme weather events will impact the country, decision-makers can determine what crops should be grown, or where populations should migrate to, while considering a very broad range of conditions and uncertainties as soon as possible.

"We can't wait months or years to be able to quantify this risk," he says. "You need to look out way into the future and at a large number of uncertainties to be able to say what might be a good decision."

While the current model looks only at extreme precipitation, training it to examine other critical events, such as tropical storms, winds, and temperature, is the next step of the project. With a more robust model, Ravela hopes to apply it to other places like Boston and Puerto Rico as part of a Climate Grand Challenges project.

"We're very excited both by the methodology that we put together, as well as the potential applications that it could lead to," he says.


