In an ideal world, scientific and economic studies would inform public policy to solve important social problems, including climate change. This is a tall order, though, because neither decision-makers nor the general public have the time or expertise to distinguish genuinely useful studies from bad research or the disinformation that some organizations spread to confuse the debate.

Indeed, we have recently seen a number of flawed studies that have nonetheless been featured in the media. One study focuses on land use for different energy sources, or “energy sprawl,” citing brochures as scientific literature and then ignoring the information they contain. As Matt Wasson notes, this “literature review” led the authors to assume that a 2 MW wind turbine requires almost 120 acres of land, or more than 100 football fields.
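The arithmetic behind that comparison is easy to check. A quick sketch, where the area figures for a football field are my own assumptions (roughly 1.1 acres for the playing field alone, about 1.3 acres including end zones), not numbers from the study:

```python
# Rough check of the "more than 100 football fields" comparison.
# Field dimensions are standard American-football figures; the 120-acre
# claim is the one attributed to the study.
ACRE_SQ_FT = 43_560                       # square feet per acre

playing_field = 300 * 160 / ACRE_SQ_FT    # field without end zones, ~1.10 acres
with_end_zones = 360 * 160 / ACRE_SQ_FT   # field including end zones, ~1.32 acres

claimed_acres = 120                       # acreage the study assumes per 2 MW turbine
print(round(claimed_acres / playing_field))    # ~109 fields
print(round(claimed_acres / with_end_zones))   # ~91 fields
```

So the "more than 100" framing holds if a field means the playing area alone; including end zones it is closer to 90. Either way, the order of magnitude is what matters here.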

Another study predicted that a federal renewable energy standard would carry a prohibitively high cost, assuming that (i) the cost of wind power generation does not decrease over time and (ii) the alternative is that all electricity in the United States is generated from coal. Both assumptions are plainly unrealistic, but a reader cannot see the absurdity from the study itself, because the assumptions are never flagged as essential to the results.
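To see how much the constant-cost assumption alone can skew a multi-decade projection, here is an illustrative sketch. The 3% annual cost decline is a hypothetical figure chosen for illustration, not a number from the study or from any forecast:

```python
# Illustrative only: how a constant-cost assumption inflates a 30-year projection.
# The 3% annual decline is a hypothetical learning-curve figure, not a forecast.
initial_cost = 100.0      # arbitrary cost index for wind power today
annual_decline = 0.03     # assumed cost reduction per year
years = 30

constant_assumption = initial_cost                            # what the study assumes
declining_cost = initial_cost * (1 - annual_decline) ** years # with the assumed decline

print(round(declining_cost, 1))   # ~40.1: cost falls to about 40% of today's level
```

Even a modest assumed decline leaves wind costing less than half as much after three decades as the study's frozen-cost scenario, which shows how heavily the headline result leans on that one assumption.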

Unfortunately, there is no magic bullet to solve this problem. Serious scientific and economic modeling is difficult, and anyone who has done it understands that the results are very sensitive to assumptions. But as I wrote above, policymakers and the general public are not in a position to evaluate the plausibility of these assumptions.

In an ideal world, the media would expend the effort necessary to identify credible studies. Obviously flawed and misleading studies could be labeled as such or simply ignored as disinformation. But the media does not do this, probably because journalists fear leaving the impression that they are biased. Paradoxically, the golden rules of neutrality and balanced coverage severely limit the media's ability to communicate useful information from scientists and economists to their audiences.

Nonetheless, there are a few simple guidelines that an interested reader can use to evaluate the credibility of a scientific or economic study. I am not going to focus on the identity of the author (if an oil company funds a study of global warming, one may wonder why) or the outlet (if a scientific study is published as a discussion paper, one may wonder why). Instead, I am going to discuss the substance.

1) Does the study clearly specify key assumptions? A key difference between credible scientific studies and disinformation is that in scientific reporting, key assumptions must be clearly described, whereas disinformation can be described as the art of hiding them. If the reader can see that obvious caveats are not being discussed, then the study is probably disinformation. For example, assuming that the cost of wind power generation remains unchanged for almost three decades should ring alarm bells.

2) Does the study compare its results with those from other studies? Scientific knowledge is accumulated through multiple studies, so scientific reporting entails systematic comparisons and a discussion of exactly why the results differ from other studies. This is very hard to do if the study is pure disinformation, so the authors often take a shortcut and do not conduct a literature review at all.

I view the inability of the media and other social institutions to communicate scientific and economic information as deeply troubling. Democratic deliberation must be based on accurate information; otherwise it is empty of content. How can we reform the institutions that transmit this information from experts to society?