Science Fiction


Science is under siege as researchers elevate the pursuit of publicity above the advancement of human knowledge. By Nick Cater.

We expected better from science in this pandemic. Billions of dollars of economic activity have been sacrificed and untold misery caused by lockdowns, a medieval approach to controlling a virus with no empirical underpinnings.

Surely there are more sophisticated forms of protection than face masks. A Danish study based on a sample of 6,000 people, half of whom wore masks and half of whom did not, suggests that face coverings make no difference. Some may want to debate the findings, but it’s hard to do so, since the Lancet and other prestigious publications flatly rejected the study for publication.

The Lancet is not having a good pandemic. In May it published a study claiming hydroxychloroquine had no clinical benefits in treating COVID-19 and was potentially dangerous. The study was quickly retracted when it was found to be based on data described by its editor as “a monumental fraud”.

It would be comforting to think this was an isolated departure from rigorous science. Sadly, it is not. COVID-19 has merely exposed a virus that has infected science for decades, eating away its integrity and weakening public trust. The scientific errors exposed by the pandemic are merely a symptom of a wider malaise to which few, if any, disciplines are immune.

Scottish psychologist Stuart Ritchie makes a forceful case as to why science needs to take a good long look at itself in his book, Science Fictions: Exposing Fraud, Bias, Negligence and Hype in Science, published in July.

The scientific method should promote scepticism, rationality and empiricism, he writes. Instead it “has become home to a dizzying array of incompetence, delusion, lies and self-deception”.

The peer review system that is supposed to protect science from human flaws amplifies them instead. The imperative for researchers to publish or perish favours quantity over quality. Of the millions of scientific papers published every year, few make a contribution to the literature.

Most go unnoticed. The worst embed errors that offer false hope or cloud our understanding of the world.

It is disappointing to learn, for instance, that red wine does not make you live longer after all, as cardiologist Dipak Das of the University of Connecticut claimed in a series of much-hyped studies.

An inquiry found Das guilty of 145 counts of fabrication and falsification of data, involving at least 23 papers and three grant applications. He manipulated the presentation of experiments called western blots, which assess the presence and amounts of specific proteins, slicing and dicing protein bands from separate experiments to suggest they had been measured in the same experiment.

Outright scientific fraud is not as uncommon as we would like to think. Frequently, it is driven by publication bias, the imperative to produce clear results, free from uncertainty and statistical noise. Scientific journals are reluctant to publish negative results or attempts to replicate previous studies.

Ritchie labels this “the file-drawer problem”, the hiding place for null results, evidence of journeys down blind alleys trying ideas that just didn’t stack up.

Ritchie exposes ways in which scientists manipulate the “p-value”, a statistical measure of the probability of obtaining a result at least as extreme as the one observed if there were in fact no real effect.

A p-value below 0.05 is the conventional threshold for statistical significance, and in practice for publication of a paper. It encourages researchers to take short cuts, running multiple data sets through the computer with no specific hypothesis in mind, then reporting whichever effects have acceptable p-values.
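
A small simulation makes the problem concrete. The sketch below is illustrative only, not drawn from Ritchie’s book or any study cited here: it generates 50 “food” variables with no real link to a health outcome, tests each one, and counts how many clear the 0.05 bar purely by chance.

```python
# Illustrative sketch of p-hacking: test many null effects, count false positives.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_people, n_foods = 500, 50

outcome = rng.normal(size=n_people)            # a made-up health measure
foods = rng.normal(size=(n_foods, n_people))   # 50 food variables, no real effect

false_positives = 0
for food in foods:
    r, p = stats.pearsonr(food, outcome)       # correlation test for each food
    if p < 0.05:
        false_positives += 1

print(f"{false_positives} of {n_foods} null effects reach p < 0.05 by chance")
# Roughly 5 per cent of the tests will look "significant" despite zero real effect.
```

Report only the handful of hits, file the rest away, and the published record looks far more certain than the underlying data warrant.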

This practice, a form of p-hacking, is rife in nutritional epidemiology, the study of links between diet and health. One study looked at thousands of papers linking food with cancer. Of 50 types of food studied, 40 had been linked to cancer, including bacon, pork, eggs, tomatoes, bread, butter and tea. Some were said to increase the risk, others to reduce it. But in most cases the statistical evidence was weak.

Rarely were confounding factors considered: that both poor health and poor diet might be determined by cultural or socioeconomic circumstances, for example.

Bias is exacerbated by excitable press releases. A 2014 study by researchers at Cardiff University found that 40 per cent of medical science press releases contained recommendations for changed behaviour unsupported by the study.

Others made cross-species leaps, reporting animal studies as if the findings could simply be assumed to hold in humans. In fact, 90 per cent of experiments with mice fail to translate to human beings.

The researchers found claims exaggerated in press releases were inflated further by journalists. “In an age of ‘churnalism’, where time-pressed journalists often simply repeat the contents of press releases,” writes Ritchie, “scientists have a great deal of power – and a great deal of responsibility.”

Perhaps we need to make science boring again, recognising that science does not advance through a series of breakthroughs but through the slog of trial and error. “If all you do is ground-breaking, you end up with a lot of holes in the ground but no buildings,” says Ritchie.

The roots of the crisis in empiricism run deep. The cult of post-modernism, which propagates the view that there is no objective truth, accounts for some of the rot.

More directly to blame, however, are the perverse incentives that drive scientists to publish, not for the advancement of human knowledge, but for the advancement of their own careers and the acquisition of research grants.

Rating scientists on their total number of publications and citations drives a system that is too easily gamed. It is a classic example of Goodhart’s law, named after British economist Charles Goodhart: when a measure becomes the target, it ceases to be a good measure.

The same unreliable metrics are used to evaluate universities and university publications, further incentivising those in the system to publish and be damned.

University rankings rely heavily on measures of research income, “productivity” and citations.

COVID-19 could have been science’s moment of salvation, an opportunity to put aside past mistakes, join forces in the interest of mankind, and inspire a generation of students to take up science and fight for a noble cause.

Instead, science has slipped further into the swamp, polarised and tainted by its own prejudice, drifting ever further from the Mertonian principles of universalism, disinterestedness and organised scepticism.