As it happens, there’s a way of presenting data, called the funnel plot, that indicates whether or not the scientific literature is biased in this way.15 (If statistics don’t excite you, feel free to skip straight to the probably unsurprising conclusion in the last sentence of this paragraph.) You plot the data points from all your studies according to the effect sizes, running along the horizontal axis, and the sample size (roughly)16 running up the vertical axis. Why do this? The results from very large studies, being more “precise,” should tend to cluster close to the “true” size of the effect. Smaller studies, by contrast, being subject to more random error because of their small, idiosyncratic samples, will be scattered over a wider range of effect sizes. Some small studies will greatly overestimate a difference; others will greatly underestimate it (or even “flip” it in the wrong direction). The next part is simple but brilliant. If there isn’t publication bias toward reports of greater male risk taking, these over- and underestimates of the sex difference should be symmetrical around the “true” value indicated by the very large studies. This, with quite a bit of imagination, will make the plot of the data look like an upside-down funnel. (Personally, my vote would have been to call it the candlestick plot, but I wasn’t consulted.) But if there is bias, then there will be an empty area in the plot where the smaller samples that underestimated the difference, found no differences, or yielded greater female risk taking should be. In other words, the overestimates of male risk taking get published, but various kinds of “underestimates” do not. When Nelson plotted the data she’d been examining, this is exactly what she found: “Confirmation bias is strongly indicated.”17 This
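For readers who like to see the statistics in action, the logic above can be sketched in a short simulation. This is not Nelson’s analysis or data; it is a minimal illustration under assumed numbers (a true effect of 0.1, a crude “only positive, significant results get published” filter, and an illustrative range of sample sizes). It shows why, once underestimates go unpublished, the surviving small studies exaggerate the difference while the large studies stay near the true value, hollowing out one side of the funnel.

```python
# Illustrative sketch of publication bias in a funnel plot (assumed numbers,
# not Nelson's data). We simulate many studies of a sex difference, then
# "publish" only results that look clearly positive.
import random
import statistics

random.seed(42)

TRUE_EFFECT = 0.1  # assumed "true" sex difference
studies = []
for _ in range(2000):
    n = random.choice([20, 50, 100, 500, 2000])  # sample size per study
    se = 1.0 / n ** 0.5  # sampling error shrinks as samples grow
    estimate = random.gauss(TRUE_EFFECT, se)     # study's observed effect
    studies.append((n, estimate, se))

# Crude publication filter: only "male riskier" results with z > 1.96 appear.
published = [(n, est) for n, est, se in studies if est / se > 1.96]

small_pub = [est for n, est in published if n <= 50]
large_pub = [est for n, est in published if n >= 500]

# Small published studies overshoot the true effect; large ones sit near it.
print("mean effect, small published studies:", round(statistics.mean(small_pub), 2))
print("mean effect, large published studies:", round(statistics.mean(large_pub), 2))
```

In a funnel plot of `published`, the lower-left region, where small studies that found no difference or the reverse difference would sit, is empty: exactly the asymmetry the paragraph describes.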