Tuesday, December 17, 2024

What do models actually tell us about the future of climate change?

Many people in high-income nations believe that climate change is the defining issue of our time. There is no denying that it is incredibly important – because regardless of debates over how much the climate is changing and what influences it may actually exert, government policy and social movements inspired by climate-change concerns affect us all.

This situation was vividly illustrated in a recent BBC interview with Dr. Irfaan Ali, President of Guyana, where immense oil reserves have recently been discovered.

He explained that even though energy security and modern living standards are top priorities for the people of Guyana, achieving them by developing natural resources is hindered by western attitudes and platitudes about climate.

But how do we know that climate change is really so important? Every day, headlines proclaim the latest extreme weather event as evidence of climate change – despite the very definition of climate being “the long-term weather pattern in a region, typically averaged over 30 years.” 

In actuality, global studies at multi-decadal time scales demonstrate that the frequency and severity of most extreme weather events have not changed significantly over the past 50 years (What the IPCC Actually Says About Extreme Weather).

Regardless of how we view today’s weather events, climate crisis advocates hold that climate models forecast dramatic and disastrous changes in the near future.

This line of thought holds that we may reach “tipping points” that will trigger sudden events: e.g., much of Antarctica’s ice will melt quickly, or the Gulf Stream that keeps Europe a warmer and more habitable place than equivalent latitudes elsewhere will suddenly stop flowing.

So the big worry really comes down to models telling us that we ain’t seen nothing yet when it comes to increased frequency of extreme and damaging weather events. But what are these models? What do they actually say, and how reliable are they?

Mathematical models 

Computer-generated mathematical models are useful tools to help science understand complex and ever-changing physical systems – such as climate, oceans, or even underground oil and gas reservoirs.

Models help us understand what controls those systems, and what might happen when we change key components. I have written a little about models in past articles (Canadian energy modelling flawed and futile, Use of scenarios in our energy future is double-edged sword), but let’s dig in to learn more.

To start building a model, we subdivide what we want to model – such as the atmosphere over a region, a bay or inlet, or a body of rock containing oil and gas – into many small blocks (“cells” in model terminology).

In the software, each cell is “populated” – assigned data and mathematical relationships representing the characteristics of the physical space it occupies.

Ocean model cells are populated with values of temperature, salinity, and suspended sediment load, while reservoir cells are populated with rock type, porosity, and fluid saturations. We can describe the entire model volume within the software using millions of cells, each with its own characteristics and position in space.
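To make this concrete, here is a minimal sketch (in Python, purely for illustration) of what a static model’s data structure might look like for the ocean example; the grid dimensions, property names, and sample values are assumptions, not taken from any real model.

```python
import numpy as np

# A static model: a 3-D grid of cells, each carrying the properties the
# ocean example describes. Grid size and values are illustrative only.
nx, ny, nz = 50, 40, 10                          # cells east-west, north-south, depth

static_model = {
    "temperature_C":     np.full((nx, ny, nz), np.nan),
    "salinity_psu":      np.full((nx, ny, nz), np.nan),
    "sediment_mg_per_L": np.full((nx, ny, nz), np.nan),
}

# A measured water sample directly populates only the one cell that contains
# the sampling location; everything else must be inferred later.
static_model["temperature_C"][12, 7, 0] = 14.2   # a surface sample
static_model["salinity_psu"][12, 7, 0] = 31.5
```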

How do we know what values to assign to all of these cells? We measure values from water samples taken throughout the bay, or rock samples and geophysical logs taken from oil wells – but those samples are only a tiny fraction of the entire model volume, and can directly populate only a few cells corresponding to the place of measurement.

For the rest of the model, we have to infer values based on our understanding of how the physical system works – surface waters may be warmer and bottom waters more sediment-rich, while rock properties can be extrapolated based on known geological relationships, and reservoir fluids are usually layered (gas over oil, over water).
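As a rough illustration of that inference step, the sketch below fills a hypothetical temperature grid from a handful of invented surface measurements, copying the nearest sample laterally and assuming a simple cooling trend with depth; real models would use more sophisticated methods such as inverse-distance weighting, kriging, or geostatistical simulation.

```python
import numpy as np

# Hypothetical sparse surface-temperature measurements: (x, y, value) triples.
samples = np.array([[5.0,  5.0, 15.0],
                    [30.0, 10.0, 13.5],
                    [45.0, 35.0, 12.8]])

nx, ny, nz = 50, 40, 10
temperature = np.empty((nx, ny, nz))

for i in range(nx):
    for j in range(ny):
        # Lateral fill: copy the value of the nearest measurement.
        d2 = (samples[:, 0] - i) ** 2 + (samples[:, 1] - j) ** 2
        surface_value = samples[np.argmin(d2), 2]
        # Vertical fill: assume the water cools slightly with depth.
        temperature[i, j, :] = surface_value - 0.2 * np.arange(nz)
```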

Once all the cells are populated, we have created a static model, representing the system at a moment in time. But we are really interested in how the system will change over time – so we need to upgrade our static model to a dynamic model. A dynamic model can simulate currents and property changes in our bay, or production of oil, gas and water from our reservoir as time goes by.

Creating the dynamic model is an entirely new challenge, as we have to mathematically define dynamic, or time-dependent, relationships between all of the model cells.

If the cells on the surface of the bay model are warmed by the sun, how is this warming transferred to the underlying cells?

If a barrel’s worth of oil flows through a reservoir cell toward a wellbore, how quickly can it move, and how does it affect the fluid content and reservoir pressure in the adjoining cells?
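Taking the first of those questions as an example, here is a minimal sketch assuming a very simple vertical heat-exchange rule between stacked cells in the bay model; the mixing coefficient, heating rate, and number of ticks are invented for illustration.

```python
import numpy as np

nz = 10
temperature = np.full(nz, 10.0)   # degrees C; index 0 is the surface cell
kappa = 0.1                        # vertical mixing strength per tick (assumed)
solar_heating = 0.05               # warming of the surface cell per tick (assumed)

def advance_one_tick(T):
    """Warm the surface, then let heat diffuse between neighbouring cells."""
    T = T.copy()
    T[0] += solar_heating
    flux = kappa * np.diff(T)      # heat moves from warmer cells to cooler ones
    T[:-1] += flux
    T[1:]  -= flux
    return T

# Each tick of the model clock might represent a minute or an hour.
for tick in range(1000):
    temperature = advance_one_tick(temperature)
```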

Considering that our models contain millions of cells, each of them interacting in many different ways, the computational requirements become immense. Models have in fact improved dramatically in the past decade simply because of the growing capabilities of computers, but we still have to make many simplifying assumptions to simulate behaviour over a significant period of time.

Once built, we can test the ability of our dynamic model to represent reality by history matching. To do that, we populate the initial static model with all available data, then start the clock by applying dynamic relationships successively, one time period at a time.

Each tick of our model clock might be a second, a minute, or a day, depending on how quickly we see the modelled behaviour happening.

After a number of iterations representing passage of real time, we can stop the model clock and measure the results. We can measure how temperature has been re-distributed through the water body, or how many barrels of oil have been produced.

Monitoring the real-life entity that our model represents, we can compare results for the simulated time period. If the model does a good job of reproducing the real water temperatures or the actual number of barrels of oil produced, we have a good history match.
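As a simple illustration, the quality of a history match is often summarised with a misfit statistic such as root-mean-square error between simulated and observed values; the production figures below are invented.

```python
import numpy as np

# Invented monthly oil-production figures, in barrels, for illustration only.
observed_bbl  = np.array([1200, 1150, 1100, 1080, 1040, 1010])   # field history
simulated_bbl = np.array([1250, 1160, 1090, 1050, 1030, 1000])   # model output

# Root-mean-square error is one common misfit measure (not the only one).
rmse = np.sqrt(np.mean((simulated_bbl - observed_bbl) ** 2.0))
print(f"History-match RMSE: {rmse:.1f} barrels per month")

# If the misfit exceeds an agreed tolerance, the static model's properties
# are adjusted and the simulation is run again.
```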

If the match is not good, we need to go back to our model to make changes that will align model behaviour with what we observe in reality. I recently worked to adjust the geological representation of reservoirs in a gas field model to improve history-matching of gas production volumes.

Once we have made sufficient changes to achieve a good history match, we apply our model to test what might happen in the future, particularly when we change some elements of the model.

For example, if we allow a shipping dock to be built in the bay, how might that affect future water salinities, sediment load, and temperatures? If we drill a new production well, how much more oil might the reservoir produce? We can perform all sorts of thought experiments by manipulating the model in order to gain understanding of what might result from various approaches – before we actually take action.
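A toy version of such a thought experiment might run the same model twice, once as-is and once with the proposed change, and compare the outcomes; run_reservoir_model below is a hypothetical stand-in for whatever simulator is actually in use.

```python
def run_reservoir_model(n_wells: int, years: int = 10) -> float:
    """Toy stand-in for a reservoir simulator: cumulative oil in thousands of barrels."""
    # Diminishing returns per additional well; purely an illustrative assumption.
    return years * sum(120.0 / (1 + 0.15 * w) for w in range(n_wells))

baseline = run_reservoir_model(n_wells=4)   # the field as it is today
scenario = run_reservoir_model(n_wells=5)   # the field with one new well

print(f"Incremental oil from one new well: {scenario - baseline:.0f} thousand barrels")
```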

Building climate models 

We can apply these principles and practices to model climate. To create the simplest static model, we subdivide the atmosphere over the model region into cells – each perhaps a square kilometre in area and 100 metres high.

We assign initial conditions such as temperature, humidity, wind speed, cloud cover, and other measured values to each cell. We advance the dynamic model clock one tick at a time, allowing dynamic relationships to re-distribute parameters throughout the model. We then history-match against historical weather records to assess how well our model works.

Then the fun begins. What happens if we change distribution of, let’s say, heat by varying cloud cover, particulate pollution, or concentration of greenhouse gases in our model atmosphere? Will we see fewer or more heat waves? Droughts? Floods? Extreme cold? The possibilities seem endless.

As our models become more complex and more realistic, each time we run the model simulation, we will get different results, as many processes are based on probabilities, not certainties. Not every heavy rainfall will cause a flood, but some will. In fact, we usually run a model many times to see what the range of outputs might be, and then choose the model “result” as the output that occurs most commonly out of, say, 1,000 model runs.
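A minimal sketch of that ensemble approach, with invented probability distributions standing in for the model’s uncertain inputs, might look like this.

```python
import numpy as np

rng = np.random.default_rng(seed=0)
n_runs = 1000

flood_counts = []
for _ in range(n_runs):
    heavy_rain_days = rng.poisson(lam=12)        # uncertain forcing (assumed distribution)
    # Not every heavy rainfall causes a flood; assume a 30% chance that each one does.
    floods = rng.binomial(n=heavy_rain_days, p=0.3)
    flood_counts.append(int(floods))

values, counts = np.unique(flood_counts, return_counts=True)
most_common = values[np.argmax(counts)]          # the outcome reported as the "result"
print(f"Most common outcome across {n_runs} runs: {most_common} floods")
print(f"Full range of outcomes: {min(flood_counts)} to {max(flood_counts)} floods")
```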

Most climate models address very specific issues (Real climate science – uncertainty and risk).  General circulation models (GCMs) attempt to model global climate behaviour, but even GCMs generally target limited subsets of climate issues – simply because modelling climate over the entire Earth is an impossibly huge task.


Figure 1 — Schematic representation of a General Circulation Model (from https://en.wikipedia.org/wiki/General_circulation_model)

What do climate models actually tell us?

So we are back to the original question. Climate models are useful tools to help science better understand elements of extremely complex climate systems. They are mathematical constructs, built using sophisticated algorithms that are constantly being improved. Highly competent experts use them to address specific questions, and some researchers compile results of many model studies to draw general conclusions about climatic trends.

But we need to view all of this great scientific work through a lens of reality. An adage attributed to statistician George Box that most modellers respect is: “All models are wrong, but some are useful.” In other words, even the most sophisticated computer models are poor approximations of reality, but the best ones get enough things right that we can use them to test hypotheses. And we need to appreciate limitations of even the best models in order to use them properly.

What are climate model limitations? Here are a few:

  • Incorrect assumptions and approximations – every model is built on many assumptions and approximations, and some are inevitably inadequate or even wrong. Representing a complex physical system using cells with uniform properties throughout is computationally necessary but very much an approximation of reality.
  • Scarce data – there are only a certain number of measuring devices, whether weather stations or satellites, and they measure only so often. Hard data is almost always a model limitation, regardless of what we are modelling.
  • Poorly understood relationships – as one example, we do not really know how gases in the atmosphere — including water vapour, carbon dioxide, and methane — interact to trap heat, and how certain feedbacks such as cloud formation modify those relationships.
  • Uncertainty of history matches – just because our model has matched certain historical trends reasonably well does not mean it will always predict future changes accurately.  Sometimes a poor model gives one right answer for the wrong reasons.
  • Choosing the “right” model output – when many runs produce a wide range of results, selecting a single output to report is itself a judgment call.

What should we conclude? Here are two things that all sophisticated model-makers and users understand:

  1. Good models represent reality sufficiently well to test hypotheses and provide directional guidance. Our bay model may suggest that building a big dock will not drastically influence environments in the bay. Our oilfield model may suggest that drilling more wells can be justified by the increased amount of oil produced. Our climate model may suggest that adding greenhouse gases to the atmosphere may result in rising global temperatures, or more frequent hurricanes, or fewer droughts. All of these conclusions may be right, and all could be wrong.
  2. Mathematical models cannot predict the future in detail. There may be more hurricanes, but we don’t know where they will hit, or when. Temperatures may rise, but we cannot be sure where, or how fast, or what effects will result. Or … maybe there will not be more hurricanes, and maybe temperature increases will be limited by not-yet-modelled feedback processes.

When policy-makers design policies that rely on good climate models, they are relying on estimates of what future climate may bring. They are not relying on hard data, or established facts – and that is important to consider when trying to strike the right balances to benefit humanity’s many needs.

Brad Hayes
Brad Hayes has a PhD in geology from the University of Alberta and is president of Petrel Robertson Consulting Ltd., a geoscience consulting firm addressing technical and strategic issues around oil and gas development, water resource management, helium exploration, geothermal energy, and carbon sequestration. He is an adjunct professor in the University of Alberta Department of Earth and Atmospheric Sciences.
