A “tell” for researcher innumeracy?

Evaluating scientists is hard work. Assessing quality requires digging deep into a researcher's papers, scrutinising methodological details and the numbers behind the narrative. That's why people look for shortcuts, such as the number of papers a scientist has published or the impact factor of the journals they publish in.

When reading a job or grant application, I frequently wonder: does this person really take their data seriously and listen to what it's telling them, or are they just trying to churn out papers? It can be hard to tell. But I've noticed one unintentional tell: some people, when reporting numbers, habitually give far more decimal places than are warranted.

For example, Thomson/ISI reports its much-derided journal impact factors to three decimal places. This is unwarranted, an example of false precision: the counts of articles and citations involved are typically low, and the year-to-year variability is high. One decimal place is plenty (and given how poor a metric impact factor is, I'd prefer that it simply not be used).
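To make the point concrete, here is a minimal sketch in Python using made-up numbers, not real journal data: it simulates a journal with roughly 200 citable articles whose citations follow a skewed distribution, and recomputes the "impact factor" for five successive years.

```python
import random

# Hypothetical illustration (assumed parameters, not real journal data):
# simulate how much an impact factor moves from year to year when it is
# built from a modest number of articles with skewed citation counts.

random.seed(1)

def simulated_impact_factor(n_articles=200):
    # Citations per citable article: heavy-tailed, mean of roughly 3,
    # loosely mimicking real citation distributions.
    citations = [int(random.expovariate(1 / 3.0)) for _ in range(n_articles)]
    return sum(citations) / n_articles

yearly_if = [simulated_impact_factor() for _ in range(5)]
print([f"{x:.3f}" for x in yearly_if])
```

In runs like this the value typically jumps by a few tenths from one year to the next, so the second and third decimal places carry no real information.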

When I see a CV with journal impact factor reported to three decimal places, I feel pushed toward the conclusion that the CV’s owner is not very numerate. So the reporting of impact factor is useful to me; not, however, in the way the researcher intended.

I don't necessarily expect every researcher to fully understand the sizes, variability, and distribution of the numbers that go into impact factor, so I'm more concerned by how researchers report their own numbers. When to report all the decimal places calculated can be a subtle issue, however, as full reporting of some numbers is important for reproducibility.

Bottom line: researchers should understand how summaries of data behave. Reporting numbers with faux precision is a bad sign.


For references on the issue of the third decimal place of impact factor:

UPDATE 8 May: Read this blog on the topic

Bar-Ilan, J. (2012). Journal report card. Scientometrics, 92, 249–260.

Mutz, R., & Daniel, H. D. (2012). The generalized propensity score methodology for estimating unbiased journal impact factors. Scientometrics, 92, 377–390.
