Laurie Weston recently wrote a series of three very cogent articles in BIG Media addressing science – the scientific method (Science – there is method to the madness), the manipulation of science for various causes (Manipulating science – activism and advocacy), and moral and ethical issues around the practice of science (Science and morality – ethics and judgment).
These days, when I read blogs, social media posts, and even mainstream media, I often think “this author needs to read Laurie’s articles”. It appears that few people are truly literate when it comes to science; even some who have scientific training exhibit a fundamental lack of understanding of the deeper principles and ideas that Laurie laid out.
When somebody says, “well, I believe the science” in a discussion, it almost always means they don’t understand science, the scientific method, or how scientific results should be used. Instead, they have decided to believe simplistic statements declaring “this is the science” from their favourite authority – whether that’s David Suzuki, the Manhattan Contrarian, Al Gore, or their brother-in-law.
Many people simply do not understand the scientific concepts of hypothesis, experimentation, and interpretation. They have not been exposed to the scope of assumptions that go into formulating scientific thought and experimental design, nor the huge uncertainties that arise from the limitations of measurement and statistical inference.
In fact, many are stuck in the grade-school white lab coat visualization of science – just titrate this, measure that, run it through the computer, and voila – a scientific result you can take to the bank or use to bad-mouth opponents!
The most egregious public misunderstandings of science centre around computer modeling. Many people seem to believe that computer models deliver highly accurate predictions about complex subjects, and that these predictions are suitable to drive government policy decisions with little consideration for critical societal, economic, or environmental consequences.
Scientists and engineers who build and run models, from climate to oil reservoirs, have a completely different (and more accurate) understanding of model architecture, assumptions, simplifications, limitations, and uncertainties. They run software models for thousands of iterations to extract the results that are most statistically significant and most congruent with existing knowledge. They know that slight adjustments to any parameter, or to any of the many equations defining relationships between model grid cells, can significantly alter outcomes.
Scientists interpret useful information from model runs to test their knowledge and hypotheses, and to conduct “what if” experiments providing guidance on how certain inputs might produce particular results. What if CO2 levels were higher? What if cloud cover was greater? What if I drilled a new well in this spot? What if the rate of allergic reaction to this vaccine was higher?
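The “what if” experimentation described above can be sketched with a deliberately toy model. Everything here is illustrative: the one-line logarithmic response and the sensitivity range are assumptions chosen only for demonstration, not real climate parameters. The point is simply that running a model many times across the plausible range of an uncertain parameter produces a spread of outcomes, not the single confident number that headlines often imply.

```python
import math
import random
import statistics

def toy_warming(co2_ratio, sensitivity):
    """Toy response: warming (deg C) = sensitivity * log2(CO2 ratio).
    This is an illustrative stand-in, not a real climate model."""
    return sensitivity * math.log2(co2_ratio)

random.seed(42)  # fixed seed so the experiment is repeatable

# "What if CO2 doubled?" -- run the toy model 10,000 times, each time
# drawing the uncertain sensitivity parameter from an assumed range.
runs = [
    toy_warming(co2_ratio=2.0,
                sensitivity=random.uniform(1.5, 4.5))  # assumed range
    for _ in range(10_000)
]

runs_sorted = sorted(runs)
print(f"mean warming: {statistics.mean(runs):.2f} C")
print(f"5th-95th percentile spread: "
      f"{runs_sorted[500]:.2f} to {runs_sorted[9500]:.2f} C")
```

Even in this trivial sketch, the spread of results is wide relative to the mean, which is exactly why a single model run, or a single headline number, is a poor summary of what the modelling actually says.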
It is important to understand that most models address very specific questions. There is no single model that we can run to predict what climate changes are likely to occur 20, 50, or 100 years from now. We don’t understand the data inputs or the huge number of relationships and feedback loops, nor do we have the computer horsepower to do anywhere near the number of calculations to address such a task, even if we knew what calculations to make. The current issue of the journal Nature Climate Change includes articles titled “Observational constraint on cloud feedbacks suggests moderate climate sensitivity” and “Atmospheric dynamic constraints on Tibetan Plateau freshwater under Paris climate targets”. Fine studies I am sure, but addressing very, very specific questions.
When combining reports on broad areas such as climate from researchers around the world, each addressing their own specific topics, a whole new level of assumptions, uncertainties, and potential errors is introduced. When properly done, as in IPCC reports, conclusions are stated in appropriate scientific language. For example, from the 2019 IPCC special report on the ocean and cryosphere:
“Permafrost temperatures have increased to record high levels (very high confidence), but there is medium evidence and low agreement that this warming is currently causing northern permafrost regions to release additional methane and carbon dioxide.”
But I have read several articles claiming that permafrost melting is irreversible and is causing immense and unprecedented releases of greenhouse gases. Actual scientific reporting says no such thing, although it points out the issue and suggests that further study is justified.
Finally – the question of scientific consensus. Many people who don’t understand science seem to think that scientific results are decided by democratic votes. They are not. They are decided by cogent interpretation and reasoning of the best available data. Results are ever-changing as more data is acquired and better interpretations advanced. 97% of scientists may agree on a certain idea, but if one dissenter comes along and provides a strong argument that the idea is wrong, real scientists will examine the argument and evidence – and will change their position if the case is convincing.
The real world is complex. Science is complex as well, but generally represents our limited understanding of real-world situations and circumstances. As Laurie pointed out, science is very dynamic, and many things that we “knew” even a couple of decades ago have since been shown to be very unlikely true.
I will never say, “I believe the science.” What I will say is, “Here is the current state of scientific thought, and here are the weaknesses and strengths. Let’s decide if that’s good enough for now, or whether we should do more work before drawing conclusions.”
Brad, nice, tight, and concise article.
Thanks George, much appreciated. I’d like to think that people are slowly waking up to these realities, but it seems to be an uphill battle in light of the stories that popular media seem to love.
When I have said in the past that ‘the science is never settled’ – many people zone out at that point. It just doesn’t make much sense to a person who doesn’t grasp the scientific process. Or maybe it’s just my way of being an obsequious dick. 😉
Unfortunately, it’s easier to get behind an alarming headline than to consider model constraints, experimental error, and sampling bias. The good news is that these are not difficult concepts to understand, given a straightforward, jargon-free, respectful explanation and a few real examples.