“Understanding how the term junk science is used will enhance debates surrounding the science of sustainability. For by better understanding what science is, we will be better positioned to use it optimally and accurately as we seek to plot a sustainable path forward.”
That’s the conclusion of US social scientist Michael Carolan of Colorado State University in Fort Collins, who has analysed a decade’s worth of print media in search of a definitive definition of the term “junk science”. The media and politicians are full of it. Legislative battles are waged over it. Regulatory agencies devote substantial resources to trying to understand it and to distinguish it from genuine science. But how do we, how can we, know that we are illuminated only by the light of reason when there are so many conflicting accounts of rationality?
It’s a noble goal – to allow important decisions to be made based only on scientifically valid evidence. But who’s to decide what’s valid and what’s junk? How can you tell, anyway? In this debate, we’re not talking about data obtained fraudulently, such as the stem cell debacle in South Korea or other deliberately falsified results. But we could be…until the bluff is called, usually by scrupulous scientists attempting to reproduce an experiment, it is impossible to know that any superficially rational decision is the right one based on the data as it stands, peer-reviewed or otherwise. Imagine the millions of dollars that might have been wasted building cold fusion power stations over the last twenty years, for instance.
The problem, says Carolan, is that defining a “finding” as junk science relies on our having a “clear and unproblematic understanding of what science is, and just as importantly what it is not”. We might think we do. It approximates to that observation-hypothesis-prediction-experiment-new-observation-amendment-(peer review)-theory cycle with which we are all fairly familiar. But many things we call science, such as experiments that cannot be repeated independently, the LHC experiments, large-scale clinical trials, climate modelling and so on, do not fit and cannot even be forced to fit this cycle. Moreover, of the many thousands of papers that make up the scientific literature, very few, apart from some worthy exceptions, are ever repeated by other scientists.
I recently reported on the revocation of “natural” status for a compound that was previously described as having been extracted from Antrodia camphorata. But that’s just one small example; there are tens of thousands of papers, any one of which might contain an error of scientific “fact” that may never come to light but on which a small but significant decision might be made.
Carolan writes in the current issue of IJSS that only a limited number of definitions can be gleaned from the media sample. In other words, the context in which the phrase was used in a given article yields the definitions of “junk science” listed below. A third of the articles analysed over the decade did not provide any definition or qualification at all for using the term “junk science”.
- Bad policy based upon it (8.8%)
- Experts with agendas (7.0%)
- False data (surprisingly or not, a mere 1.8%)
- No data or unsubstantiated claims (14.0%)
- Failure to cite references (5.3%)
- Using non-certified experts (3.5%)
- Poor methodology (as determined by whom?) (21.1%)
- Too much uncertainty to justify the conclusions drawn (14.0%)
- Revealing only the data that support the findings (7.0%)
- Non-peer reviewed claims (7.0%)
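For readers curious how such figures arise, a content analysis like Carolan’s ultimately reduces to tagging each article with the definition (if any) implied by its use of the phrase and then tallying the shares. The snippet below is a minimal, hypothetical sketch of that tally in Python; the category labels echo the list above, but the article data are invented purely for illustration and do not reproduce Carolan’s counts.

```python
from collections import Counter

# Hypothetical coding results: each print-media article that used the phrase
# "junk science" is tagged with the definition implied by its context,
# or "no definition given" if the term was used without qualification.
coded_articles = [
    "poor methodology",
    "no data or unsubstantiated claims",
    "experts with agendas",
    "no definition given",
    "poor methodology",
    "too much uncertainty",
    # ...one label per article in the sample
]

counts = Counter(coded_articles)
total = len(coded_articles)

# Report each category's share of the sample, as in the list above.
for category, n in counts.most_common():
    print(f"{category}: {100 * n / total:.1f}%")
```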
Carolan, M. (2011). When does science become ‘junk’? An examination of junk science claims in mainstream print media. International Journal of Sustainable Society, 3(2). DOI: 10.1504/IJSSOC.2011.039917