Why the Impact Factor of scientific journals is deeply flawed.

The #1 thing that scientists are judged by in academia is the impact factor of the journals they publish in. For those readers not in academia: the impact factor of a scientific journal is a number based on how often the articles in that journal are cited by other articles.

Backtrack: Scientific knowledge is built little by little as the scientific community adds building blocks to the foundation laid out by previous scientists. We rely on a solid foundation and keep adding new blocks to the structure. In this way, we keep growing our knowledge as a community. We publish our work in scientific journals so that others can keep building on it. So in every article there will be a list of references to the articles that make up the foundation of the work described in the new article. We cite each other’s articles to show that we understand and add on to previous work (and to a certain extent to please the egos out there, but I’ll let that pass for now). Usually, the more times an article is cited, the bigger an impact it has had on the scientific community and our common knowledge. Enter the impact factor.

On the surface this might seem like a good idea: a way of sorting through the thousands of scientific journals and making sure you only read – or publish in – the ones that publish important stuff. However, the impact factor has some very significant statistical flaws that in many ways make it more harmful than beneficial, both to the individual scientist and to the community. Every journal has an impact factor, calculated by taking the number of times the articles that journal published in the previous two years were cited in the literature this year, and dividing it by the number of articles the journal published in those two years.

  • For example: A journal published 1000 papers in 2010 and 2011 combined. Those papers together were cited 2000 times in total in 2012.

        The 2012 impact factor for that journal would be: 2000/1000 = 2.
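To make the arithmetic concrete, here is a minimal sketch of that calculation in Python, using the numbers from the example above (the function name is just for illustration):

```python
def impact_factor(citations, articles):
    """Citations received this year to articles the journal published
    in the previous two years, divided by the number of articles it
    published in those two years."""
    return citations / articles

# The example above: 1000 papers published in 2010-2011,
# cited 2000 times in 2012.
print(impact_factor(citations=2000, articles=1000))  # 2.0
```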
In other words, the impact factor is an average.
It represents the average number of times an article in a given journal has been cited. This is where the trouble with the impact factor lies. An average is a very useful number if the values it is based on are scattered around it in a bell-shaped manner. However, the number of times articles from a given journal are cited by no means forms a bell-shaped curve. They are scattered all over the place. Most articles will be cited 0, 1, or 2 times, and a few will be cited twenty times or a hundred times. And those few papers will pull up the average for all the others, although they have nothing in common whatsoever (besides the journal they are published in). I do have some data to back up this claim: as you can see in the figure below, there is essentially no correlation between a journal's impact factor (x-axis) and the number of citations an individual article receives (y-axis).
[Figure: citations to individual articles plotted against journal impact factor; source: BMJ volume 314, 15 February 1997]
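If you want to see the effect for yourself, here is a quick simulation in Python. The citation counts are invented, drawn from a log-normal distribution as a rough stand-in for real citation data, but the shape is the point: the mean (what the impact factor reports) sits well above what a typical article actually receives.

```python
import numpy as np

rng = np.random.default_rng(42)

# Invented but typically skewed citation counts for 1000 articles:
# most get 0-2 citations, a few get dozens or hundreds.
citations = rng.lognormal(mean=0.5, sigma=1.5, size=1000).astype(int)

print("mean (what the impact factor reports):", citations.mean())
print("median (what a typical article gets): ", np.median(citations))
print("fraction of articles below the mean:  ",
      (citations < citations.mean()).mean())
```

On a typical run, roughly three quarters of the simulated articles fall below the journal-wide mean, which is exactly the mismatch the figure above shows with real data.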
So why are we all so obsessed with the impact factor? Well, as Dr. Michael Blades, a professor at UBC, beautifully put it in a recent talk: "As long as there are people out there who judge our science by its wrapping rather than by its contents, we cannot afford to take any chances".
Journal editors are not stupid.
They know their journal is judged by its impact factor, so they will jump through all kinds of hoops to up that number. Citing articles from your own journal is a common trick, and researchers know it: many will tailor their reference list to the journal they are trying to get published in. A slightly more advanced way of boosting your journal’s impact factor is to publish a review of the journal’s specific discipline at the end of the year, in which all the articles published in the journal are cited. In fact, every year in December the Journal of Raman Spectroscopy publishes their “Recent advances in linear and nonlinear Raman spectroscopy” review. In 2011 this special edition cited 266 articles published in the Journal of Raman Spectroscopy in 2010 (plus 3 more from 2008 and 2009). For a journal that size this significantly pumps up the impact factor, and they are in fact the #1 ranked journal in their research field (based on impact factor), which they proudly advertise on their website. Now, I am not trying to put down this particular journal; they are just playing the game. I am just questioning whether the game is helpful to the scientific community.
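To get a feel for how much leverage a single year-end review can give, here is a rough back-of-the-envelope sketch in Python. The journal size and external citation counts are hypothetical; only the 266 self-citations come from the example above.

```python
# Hypothetical mid-sized journal: 600 articles in the two-year
# citation window, cited 900 times by everyone else.
articles_in_window = 600
external_citations = 900

# One year-end review citing 266 of the journal's own recent articles.
review_self_citations = 266

if_without = external_citations / articles_in_window
if_with = (external_citations + review_self_citations) / articles_in_window

print(f"impact factor without the review: {if_without:.2f}")  # 1.50
print(f"impact factor with the review:    {if_with:.2f}")     # 1.94
```

A single article nudging the impact factor up by almost half a point is exactly the kind of leverage that makes the metric so easy to game.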
The impact factor has become the single defining number for a scientist’s career. When applying for a job as a professor at any academic institution, the impact factor of the journals you have published in is the most important number in your application. Not the number of publications, the patents you have generated, your engagement in the research community through science outreach, the number of conference talks given, the graduate students you have successfully supervised, the research chairs you hold, or even your teaching ratings. None of these numbers come even close to being as important as the impact factor. And that is a problem!

Besides making no sense from a statistical standpoint, the impact factor has many other limitations. One of them is language barriers. An English-language journal is more likely to have a high impact factor, because more people can read it. However, some areas of research are very country-specific, and the actual impact of a paper in a non-English language is not reflected in the overall impact factor of the journal it is published in. As an example, I can mention a drug that is currently only approved for cancer treatment in China (I can’t mention the name, however, for intellectual property rights reasons). Because it is only on label in China, most of the clinical trials are published in Chinese-language journals. Since this particular drug is now being studied fairly extensively in North America, the Chinese clinical trials have had a high impact on the general research on this drug. However, the impact factors of the journals they appear in are very low, because Chinese-language journals overall are not cited very often due to the language barrier.

Thus, the impact of the specific articles is really high, but the impact factor associated with them is really low.
Furthermore, the impact factor is based on the total citations to a journal in a year and does not distinguish between original research papers and reviews. The latter usually get cited a lot more, so reviews are another way of boosting an impact factor without any of it having relevance to the other papers published in that journal. Those papers simply benefit from the inflated impact factor, although their own impact is neither increased nor decreased.

Recently, the Eigenfactor® Score was developed at the University of Washington by Jevin West and Carl Bergstrom; it is calculated by, and freely accessible at, eigenfactor.org. The intent is to judge the total importance of a scientific journal by taking into account not only the number of citations but also where those citations come from. The Eigenfactor score comes with its own set of limitations, e.g. it is greatly influenced by the size of a journal, but it is an attempt to overcome the many problems associated with the impact factor.
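The actual Eigenfactor algorithm is more elaborate (it excludes journal self-citations and models a random reader walking the citation network, among other things), but the core idea, that a citation from a highly cited journal should count for more than one from an obscure journal, can be sketched as a simple eigenvector-centrality calculation. All journals and citation counts below are invented:

```python
import numpy as np

journals = ["A", "B", "C"]

# cites[i][j] = number of citations journal i gives to journal j.
cites = np.array([
    [0, 5, 1],   # A mostly cites B
    [4, 0, 1],   # B mostly cites A
    [2, 2, 0],   # C cites A and B equally
], dtype=float)

# Row-normalize: P[i, j] = fraction of i's outgoing citations going to j.
P = cites / cites.sum(axis=1, keepdims=True)

# A journal is important if it is cited by journals that are themselves
# important: find the stationary weights w = w @ P by power iteration.
w = np.full(len(journals), 1 / len(journals))
for _ in range(100):
    w = w @ P

for name, score in zip(journals, w):
    print(f"{name}: {score:.3f}")
```

Under a scheme like this, stuffing your reference list with citations from low-weight sources buys far less than it does under the raw impact factor.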

I could come up with a lot more examples of why the impact factor often gives a skewed view of the importance and impact of your work, but I will just end this rant with the words of Dr. Michael Blades, from the talk he recently gave at UBC:
“If you use the impact factor to validate the importance of your work or to judge other researchers’ work, you are statistically illiterate.”

Anne Steino.

