If you are familiar with the journals in your field of research, you probably have a pretty good idea of which ones are the most prestigious, the ones that will get the most attention for your article. But sometimes a researcher wants a quantitative, objective measure of journal impact.
So, Dr. Eugene Garfield, an information scientist, defined the Impact Factor as follows:
To calculate the Impact Factor of a given journal for a particular year, say 2019:

    Impact Factor (2019) = (# of citations in 2019 to articles published in 2017 and 2018)
                           ÷ (# of articles published in 2017 and 2018)
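If you already have the raw counts, the arithmetic is simple. Here is a minimal sketch in Python (the language, function name, and variable names are illustrative only; the counts themselves would come from a source such as Journal Citation Reports, described below):

```python
# Minimal sketch of Garfield's two-year Impact Factor formula.
# The names here are illustrative; the raw counts would come from a
# citation database such as Journal Citation Reports.
def impact_factor(cites_this_year_to_prior_two_years, articles_in_prior_two_years):
    """Citations this year to articles from the two prior years,
    divided by the number of articles published in those two years."""
    return cites_this_year_to_prior_two_years / articles_in_prior_two_years

# Example: a journal whose 2017-2018 articles were cited 1,200 times in 2019,
# and which published 400 articles in 2017-2018, has a 2019 Impact Factor of 3.0.
print(impact_factor(1200, 400))  # -> 3.0
```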
The impact factor has been widely used since its invention to compare journals. However, it has its flaws: citation practices vary widely from one discipline to another, for example, and the two-year window can shortchange fields where citations accumulate slowly.
Variations on the basic impact factor have been devised to try to remedy some of these limitations. For example, the Eigenfactor score normalizes the impact factor relative to other journals in the same discipline, and the Immediacy Index focuses on recent citations of recent papers. Still, the basic impact factor remains the most popular measure of journal impact.
Created by Dr. Eugene Garfield at the Institute for Scientific Information (ISI), now owned by Clarivate Analytics, Journal Citation Reports takes citation data from the Web of Science databases and enables you to analyze journal impact in a number of ways. You can view Impact Factors, Immediacy Indexes, and Eigenfactor scores, as well as raw citation and article counts, for thousands of journals covering more than 20 years. You can look up an individual journal, view tables of all the journals in a given subject category, do bar chart comparisons of pairs of journals, or view group data for a subject category as a whole.
Within the limitations of impact factors mentioned above, JCR is the most powerful tool for impact factor analysis. It is also tightly integrated with other Clarivate products, such as Web of Science, making impact factor data available where you need it.
The h-index is primarily a tool for ranking the work of individual researchers, but it can also be applied to the output of journals or institutions. First proposed by J. E. Hirsch in "An index to quantify an individual's scientific research output," PNAS, 102, 16569-16572 (Nov. 15, 2005), it is defined as the largest number h such that h of the author's papers each have at least h citations in the literature. For example, compare two hypothetical researchers:
J. Eminent Researcher                    Brilliant Q. Scientist
Most cited paper: 110 citations          Most cited paper: 1001 citations
Paper 2: 105 citations                   Paper 2: 100 citations
Paper 3: 100 citations                   Paper 3: 50 citations
Paper 4: 95 citations                    Paper 4: 20 citations
Paper 50: 50 citations                   Paper 5: 5 citations
So, Dr. Researcher would have an h-index of 50, while Dr. Scientist would have an h-index of 5. The theory is that, while the latter had one really highly cited paper, their whole body of work is not as impactful as the former, who has a lot of highly-cited papers.
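The calculation is easy to automate if you have a list of times-cited counts. Below is a minimal sketch in Python (an illustrative choice; nothing about the h-index requires any particular tool), applied to Dr. Scientist's five papers from the table above:

```python
# Minimal sketch of the h-index calculation: the largest h such that
# h papers each have at least h citations. Python and the function name
# are illustrative choices, not part of any database's own tools.
def h_index(citation_counts):
    counts = sorted(citation_counts, reverse=True)  # most-cited papers first
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:  # the paper at this rank still has at least `rank` citations
            h = rank
        else:
            break
    return h

# Dr. Scientist's five papers from the example above:
print(h_index([1001, 100, 50, 20, 5]))  # -> 5
```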
There is a flaw in the h-index, even if you accept the basic theory. No one's h-index can be higher than their total number of papers. So someone like the Nobel Prize-winning physicist Richard Feynman, who wrote only a handful of papers, would not have a high h-index, even though each one of his papers is considered a classic in the field.
Web of Science currently provides h-index values for individual authors. Google Scholar Metrics, as noted above, provides h-index values for journals. You can also calculate h-indexes yourself in any database that indexes cited references and allows you to sort results by times cited. Just go into the database and do a search on the author, institution, or journal you wish to rank. Then sort the resulting list of papers by times cited, from highest to lowest. Count down the list as long as each paper's position in the list (N) is no greater than its number of citations; the last N for which this holds is the h-index for that author, institution, or journal. Note that for a prolific, highly cited author, you may need to dig deep into the list to find that Nth paper!