Thursday, November 21, 2024

Science and morality – ethics and judgment

There is a common misconception that science is all about equations, formulas, and clear-cut, unambiguous measurements conducted by serious intellectuals in lab coats. In fact, there is considerable room for creativity, and non-analytical judgment and choice are very often involved. These choices and judgments are sometimes about right and wrong, the determination of which depends upon a certain interpretation of morality. Whose morality? What criteria? Are there lines beyond which we all agree something has crossed into immoral or unethical territory?

Science is an endeavour in which humans seek to understand our natural world and increase knowledge. Morality is based on a framework of cultural values. Ethics are the rules based on that morality, and integrity is the adherence to those ethical standards. Together, these form arbitrary restrictions on what can and should be allowed in the execution of scientific inquiry and activity.

These restrictions can be imposed at various stages:

  • permissible direction and limits at the outset of scientific investigation by governments, corporations, or other benefactors
  • limitations on the public use or adoption of scientific results, depending on the perception of public benefit
  • outright rejection of scientific conclusions that run counter to cultural morality or perceived virtue

In this subjective context, the edict that results be “acceptable” may discourage – or even prohibit – healthy scientific criticism and debate, which are essential to the scientific method. Throughout history, critics of scientific outcomes thought to be morally righteous have been labeled “frauds”, “criminals”, “heretics”, “deniers”, or some other scientifically irrelevant term, and have been ostracized for their views. See the first article in this series, “Science: there is method in the madness”, for further discussion of these ideas.

Sometimes a moral choice is obvious and transparent; sometimes it is a subtle, perhaps not even conscious, decision. Sometimes the choice is right in one person’s mind and wrong in another’s, and sometimes the choice benefits one group (or species) to the detriment of others. There is nearly always a significant philosophical grey area.

Pure power motives (e.g., corporate, political, military, or religious) can masquerade as morality, but one would hope that most moral decisions are carefully and objectively considered by the group faced with the judgment, ideally taking into account the best interests of all parties at all times.

However, these considerations are difficult to separate from the influence of cultural bias, and they quite frequently come down to personal sensitivity and emotion. Decisions of right and wrong are thus made on an individual or small-group level. Aggregate opinions can eventually be reflected in political policy or law if they become voting issues or if they (usually the particularly sensational ones) are amplified by the media, impartially or not.

But the sheer number of moral assessments we face on an almost daily basis means that not everything can be regulated. Moreover, many issues are so nuanced that laws cannot possibly anticipate and articulate every conceivable situation that might arise. Therefore, the ultimate decision is often delegated to the individual or group facing the dilemma at the time.

In science, it is easy to get caught up in the excitement and worthiness of the purpose and overlook collateral damage that may seem insignificant relative to the importance of the objective. Scientists must decide whether the benefits of their methods outweigh any consequential harm. But how does one weigh benefit against harm? There is no easy answer to this question. Any attempted answer depends on point of view and may change with time, information, and awareness.

For example, testing of products on animals was viewed by many scientists as an acceptable sacrifice (of the animal) in order to eliminate or minimize adverse effects on the human recipients for whom the products were ultimately intended. Eventually, people who were disturbed by this practice joined forces[1] to challenge and debate what was acceptable. Public awareness increased as a result, and more ethical policies were created, showing that solutions respecting prevailing moral standards could be no less effective.

There has been a general perception that non-human subjects are disposable in the pursuit of knowledge. Empathy has been shown to decrease with the length of time since evolutionary divergence.[2] More disturbingly, this attitude has been extended to the poor, prison inmates, racial minorities, the mentally or physically disabled, orphans, and other groups deemed unworthy or unimportant.[3][4] These are clear examples of exploitation of disadvantaged groups.

In another BIG Media article, an investigation into the history of vaccines revealed that the first known vaccine test was conducted in 1796 by drawing fluid from a milkmaid infected with cowpox (a less serious illness related to smallpox), injecting it into a healthy eight-year-old boy, and exposing him to smallpox six weeks later. He did not contract the disease, but was this ethical? Probably not, by today’s standards, yet it can be argued that the resulting benefit to millions of humans since that experiment was worth the risk. Hindsight, as they say, is 20/20.

That raises the question of secondary, or indirect, morality. Improperly gathered evidence, no matter how compelling, is inadmissible in court. Is it morally acceptable to use data that has been gathered immorally? A controversial study on twins and triplets carried out in the 1960s and ’70s would be considered unethical today, but the results may still have scientific value. Twins and triplets from an adoption agency in New York were separated at birth, without consent, to study the “nature versus nurture” question. Neither the study subjects nor their adoptive families were informed of the siblings’ existence while the children were being observed.

Despite the objectionable methods, Yale University has acknowledged that the results themselves are scientifically important. The records have been sealed in the Yale library archives, by agreement with the study’s authors, until 2065.[5] It is unknown why that particular date was chosen; perhaps to allow memories of the injustice done to the subjects to fade.

Nuclear tests conducted from the 1940s through the ’90s by Britain, France, and the United States at hundreds of sites (inhabited and uninhabited) in the South Pacific caused disruption and displacement of local populations, as well as contamination from lingering radiation and buried nuclear waste that remains to this day.[6] The governments involved either did not know the risks or downplayed them, making a judgment call that they could justify at the time in the context of perceived military threats.

Sometimes the full impact of a choice is not clear at the time of the decision. When serious consequences come to light, a different moral choice arises – to cover up or not. Are all authentic results reported, positive or negative?[7] To address this situation, clinical trial registries encourage scientists to document their methods and to report all experimental outcomes, favourable or not.

So far, the examples shown have highlighted the consequences of what could be argued were immoral choices. On the other hand, moral standards that are too strict can cause undesirable consequences by hampering important scientific investigation. Women’s sexual and reproductive health has been considered a taboo subject by many cultures. As a result, preventable diseases and questionable practices were either not investigated at all, or relegated to unsupervised or clandestine pseudo-science (e.g., risky abortion methods and contraception misconceptions).

The Goldilocks morality is “just right” – neither too lax nor too strict. In the second article in this Science series, “Manipulating Science”, the terrible birth defects that resulted from the sedative (and anti-morning-sickness) drug Thalidomide[8] were mentioned. This tragedy could have been much more widespread had it not been for Frances Oldham Kelsey, a medical reviewer at the U.S. Food and Drug Administration in 1960. She carefully considered the application to approve Thalidomide but, despite peer and corporate pressure, refused to consent, objecting to – among other scientific deficiencies in the drug’s development – the lack of testing of the drug’s effects on the fetus. Her demonstration of incorruptible integrity, caution, and insistence on proof of safety averted in the U.S.[9] the disaster that would play out in many other countries without the benefit of this level of ethical scrutiny.

As science advances, we venture into previously uncharted moral territory. Disciplines such as stem-cell research, genetic manipulation and “un-natural” selection, artificial intelligence, nanotechnology, and increasingly sophisticated surveillance of private activity all pose myriad moral questions. How should we address these questions? Are we comfortable leaving them up to individual scientists? Corporations? Governments? Is a democratic approach appropriate?

In the first article in this Science series, the dangers of assuming that consensus equates to truth were demonstrated. A single dissenting voice with a compelling, verifiable, data-backed argument can derail the prevailing understanding. Similarly, a simple majority vote either by experts or the public may not be the most effective method of judging right and wrong, especially since the majority might exclude those most affected by the judgment. What else then?

The jury is an example of a process designed to determine right from wrong. Rather than relying on majority opinion, a jury representing a cross-section of society is required to come to a unanimous decision. This is much more difficult than a simple vote because all perspectives on the issue must, in theory, be explored, considered, and debated for everyone on the jury to be convinced beyond a reasonable doubt. The “convincers” must use precedent, supporting evidence, clear connections, and logical, transparent arguments for the “convincees” to be swayed from their own convictions.
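
For a sense of just how much higher the unanimity bar sits than a simple majority, consider a minimal probability sketch. The numbers are hypothetical (12 jurors, each independently 80% likely to be persuaded – real jurors, of course, do not decide independently), but the gap is instructive:

```python
from math import comb

def majority_prob(n: int, p: float) -> float:
    # Probability that more than half of n jurors agree,
    # when each agrees independently with probability p.
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

def unanimous_prob(n: int, p: float) -> float:
    # Probability that all n jurors agree.
    return p**n

n, p = 12, 0.8  # hypothetical values, for illustration only
print(f"Simple majority: {majority_prob(n, p):.3f}")  # ~0.981
print(f"Unanimity:       {unanimous_prob(n, p):.3f}")  # ~0.069
```

Even jurors who are individually very likely to agree will rarely all agree at once; unanimity is what forces the explicit debate and persuasion described above.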

It seems logical that life-altering moral decisions about scientific endeavours should require the objective rigour of at least an attempt at proof beyond a reasonable doubt.

[1] https://www.peta.org/

[2] Miralles, A., Raymond, M. & Lecointre, G. Empathy and compassion toward other species decrease with evolutionary divergence time. Sci Rep 9, 19555 (2019). https://www.nature.com/articles/s41598-019-56006-9

[3] https://www.cdc.gov/tuskegee/timeline.htm

[4] https://encyclopedia.ushmm.org/content/en/article/nazi-medical-experiments?parent=en%2F135

[5] https://yaledailynews.com/blog/2018/10/01/records-from-controversial-twin-study-sealed-at-yale-until-2065/

[6] https://www.futurelearn.com/info/courses/captain-cook/0/steps/55835

[7] K. Dickersin, S. Chan, T.C. Chalmers, H.S. Sacks, H. Smith, “Publication bias and clinical trials,” Controlled Clinical Trials, Volume 8, Issue 4, 1987, Pages 343-353, ISSN 0197-2456, https://www.sciencedirect.com/science/article/pii/0197245687901553

[8] https://thalidomide.ca/en/what-is-thalidomide/

[9] https://www.smithsonianmag.com/science-nature/woman-who-stood-between-america-and-epidemic-birth-defects-180963165/

Laurie Weston

Laurie Weston is a co-founder and scientific strategist for BIG Media, with a Bachelor of Science degree with honours in Physics and Astronomy from the University of Victoria in Canada. Laurie has more than 35 years of experience as a geophysicist in the oil and gas industry. She is president of Sound QI Solutions Ltd., a data analysis software and services company she founded in 2007.
