
Publishing Resources for STEM Authors: Evaluating Impact

A guide with resources to accompany the workshops on publishing for graduate students and post-doctoral researchers.

Evaluating Impact of Journals

Why measure journal impact?

If you are familiar with the journals in your field of research, you probably have a pretty good idea of which ones are the most prestigious, the ones that will get the most attention for your article. But sometimes a researcher wants a quantitative, objective measure of journal impact.

  • If you're getting into a new area of research, you may not be familiar with all the possible journals you might want to publish in.
  • Librarians use impact factors to help make decisions on journal purchases and cancellations.
  • When people outside your field are evaluating your research, they may not know enough about the field to read your papers and decide on their value. In that case, they may decide on the basis of the prestige of the journals in which you have published. Not knowing the field, they need numbers to compare. Such persons may include:
    • Grant review panels at funding agencies
    • Promotion and tenure panels at universities
    • Hiring and search committees
    • "Green card" and work visa issuing officials

How to quantify impact?

  • Usage - This was hard to measure in the days before e-journals, but now it is relatively easy for publishers to measure and report statistics on the number of views or downloads of articles. However, these measures are a function of how many articles are published as well as how impactful they are. A small journal that is very selective and publishes highly influential articles might not do as well as one that publishes a huge volume of papers.
  • Total Citations - Since the 1960s, some indexers have tracked cited references. If a journal's articles are frequently cited, that is a possible measure of impact. But, again, this raw number doesn't take into account how many articles were published.

So, Dr. Eugene Garfield, an information scientist, defined the Impact Factor as follows:

To calculate the Impact Factor of a given journal for a particular year, say, 2019

Impact Factor (2019) = (number of citations in 2019 to articles published in 2017 and 2018) / (number of articles published in 2017 and 2018)
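For anyone who likes to see the arithmetic spelled out, here is a minimal sketch of that calculation in Python; the function name, the journal, and its numbers are invented purely for illustration.

```python
def impact_factor(citations_this_year, articles_prior_two_years):
    """Two-year impact factor: citations received this year by articles
    from the previous two years, divided by the number of articles
    published in those two years."""
    return citations_this_year / articles_prior_two_years

# Hypothetical journal: its 2017-2018 articles (800 of them) were cited
# 2,400 times during 2019.
print(impact_factor(2400, 800))  # 3.0
```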

The impact factor has been widely used since its invention to compare journals. However, it has its flaws:

  • Can't compare across disciplines - Journals in subject areas where research depends on the very latest information, for instance molecular biology and nanotechnology, have higher impact factors than journals in subjects where older literature remains perpetually relevant, for instance geology and mathematics.
  • Can't compare review articles to original research - On average, review articles are cited more often than original research articles, so journals that specialize in reviews tend to have higher impact factors than you might expect.
  • Established journals > new journals - Because of the natural time lag between publication of an article and its citation in other articles, new journals take time to build up the "track record" needed for a high impact factor, even if they are publishing outstanding articles from day one.
  • Citation count alone doesn't tell you why something is being cited - An article whose conclusions are refuted by later research may be highly cited, but in a negative way. You can't tell that from the citation counts that feed the impact factor.

Variations on the basic impact factor have been devised to try to remedy some of these limitations. For example, the Eigenfactor score normalizes the impact factor relative to other journals in the same discipline, and the Immediacy Index focuses on recent citations of recent papers. Still, the basic impact factor remains the most popular measure of journal impact.

Where can you find impact factor numbers?

Created by Dr. Eugene Garfield at the Institute for Scientific Information (ISI), now owned by Clarivate Analytics, Journal Citation Reports (JCR) takes citation data from the Web of Science databases and enables you to analyze journal impact in a number of ways. You can view Impact Factors, Immediacy Indexes, and Eigenfactor scores, as well as raw citation and article counts, for thousands of journals going back more than 20 years. You can look up an individual journal, view tables of all the journals in a given subject category, do bar-chart comparisons of pairs of journals, or view aggregate data for a subject category as a whole.

Within the limitations of impact factors mentioned above, JCR is the most powerful tool for impact factor analysis. It is also integrated tightly with other Clarivate products like Web of Science to make impact factor data available where you need it.

Alternatives to JCR impact factors

  • Google Scholar Metrics (https://scholar.google.com/citations?view_op=top_venues&hl=en_) ranks journals using the h-index (see below for an explanation). You can view either an overall ranking of journals or rankings by broad subject group. These numbers are calculated from the citation data in Google Scholar. Google Scholar Metrics is available free of charge.
  • SCImago Journal Rankings (SJR) (https://www.scimagojr.com/journalrank.php) - SCImago ranks journals using citation data, but it weights each citation based on the ranking of the journal in which it appears, using the same algorithm that Google uses to rank webpages. In addition to the SJR, the site provides a journal h-index, article counts, and citation counts. Results may be filtered by broad subject category, narrow subject category, country of journal publication, or type of source. SJRs are available for each year from 1999 to 2019 (at present). You can also limit results to open access journals, if desired. SCImago Journal Rankings are available free of charge.
  • Scopus Sources (https://www.scopus.com/sources) - The Scopus Sources list provides data on journals indexed by Scopus, Elsevier's multidisciplinary index to the journal literature. Its rankings use Source Normalized Impact per Paper (SNIP), which uses a similar principle to the Eigenfactor score to normalize impact measurements by the subject area of the journal. Scores are based on a four-year sampling (e.g. 2016-2019 for the most recent year). The list may be searched by subject, title, publisher, or ISSN, and filtered to open access journals or by type of publication. Though Scopus itself requires a subscription (and UCSB does not subscribe), the Scopus Sources list is freely available.

h-index

This is a tool to rank the work of individual researchers, but it can also be applied to the output of journals or institutions. First proposed by J. E. Hirsch in "An index to quantify an individual's scientific research output", PNAS 102, 16569-16572 (Nov. 15, 2005), the h-index is defined as the largest number h such that h of the author's papers each have at least h citations in the literature. For example, compare two hypothetical researchers:

J. Eminent Researcher
  • Most cited paper: 110 citations
  • Paper 2: 105 citations
  • Paper 3: 100 citations
  • Paper 4: 95 citations
  • Paper 50: 50 citations

Brilliant Q. Scientist
  • Most cited paper: 1001 citations
  • Paper 2: 100 citations
  • Paper 3: 50 citations
  • Paper 4: 20 citations
  • Paper 5: 5 citations

So, Dr. Researcher would have an h-index of 50, while Dr. Scientist would have an h-index of 5. The theory is that, while the latter had one really highly cited paper, their whole body of work is not as impactful as the former, who has a lot of highly-cited papers.

There is a flaw in the h-index, even if you accept the basic theory. No one's h-index can be higher than their total number of papers. So someone like the Nobel Prize-winning physicist Richard Feynman, who wrote only a handful of papers, would not have a high h-index, even though each one of his papers is considered a classic in the field.

Finding h-index values

Web of Science currently provides h-index values for individual authors. Google Scholar Metrics, as noted above, provides h-index values for journals. You can also calculate h-indexes yourself in any database that indexes cited references and allows you to sort results by times cited. Just go into the database and do a search on the author/institution/journal you wish to rank. Then sort the resulting list of papers by times cited, from most cited to least. Count down the list until you reach the last paper whose rank N is less than or equal to its citation count; that N is the h-index for that author/institution/journal. Note that for a prolific, highly-cited author, you may need to dig deep into the list to find that paper!
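If your database can export the citation counts, that count is easy to automate. Here is a minimal sketch in Python; the function is illustrative rather than tied to any particular database export, and the sample numbers are Brilliant Q. Scientist's citation counts from the example above.

```python
def h_index(citation_counts):
    """Largest h such that h papers each have at least h citations."""
    counts = sorted(citation_counts, reverse=True)  # most-cited paper first
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank   # the paper at this rank still has enough citations
        else:
            break      # citation counts have dropped below the rank
    return h

# Brilliant Q. Scientist's papers from the example above
print(h_index([1001, 100, 50, 20, 5]))  # 5
```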

Other factors in deciding where to publish

What journals specialize in your topic?

  • Where has your advisor, your co-authors or your research group previously published?
  • Where were the articles that you are citing in your paper published?
  • Do a search on your topic in an appropriate database and analyze by journal title. Many scientific databases allow you to analyze or refine the results of a search by journal title or publication title. They will usually rank the journals by how many articles in the answer set appeared in each one. Before analyzing, you may want to limit your answer set to the past five years in order to pick up on current trends.
  • Do a search on your topic and see where the most highly cited articles were published. If an appropriate database for your field allows you to sort results by times cited, this can easily show you where the most influential articles on your topic are being published. Remember that review articles will skew toward the top of the list, and that the very most recent articles may not have had time to accumulate as many citations as they will eventually earn.
  • EndNote Match - The Match tool in EndNote online allows you to enter the title and abstract of your paper. It will then search against the Web of Science database and present you with a list of journals in which similar articles have been published.

How important is open access to you? This can drastically affect your choice of journals.

  • Do you want to publish in a preprint server first? In some fields, notably physics, posting articles in a preprint server (arXiv) is standard practice, and virtually all physics journals will accept articles that have already appeared on arXiv. Acceptance of preprint servers is not as widespread in other fields, however. Check our list on the Preprints page to see if there is a server where you think your work would fit, then check the policies of the journals you might want to publish in to see if they will accept submission of articles that have already appeared on a preprint server.
  • Are there mandates that you have to follow? Some institutions have mandates that require some kind of open access to all articles published by their researchers. The University of California, for example, requires deposit of manuscripts in the eScholarship repository. Does the journal you wish to publish in allow deposit in repositories? Do they require an embargo, that is, a delay in availability of the repository manuscript until some time after the paper is published? Perhaps even more importantly, does one of your funding agencies have an open access requirement of some kind? Some funding agencies, especially in Europe, now require that articles be published only in fully open access journals.
  • Do you have the money to pay the author publication charges (APCs) for the journal you are interested in? APCs can vary widely from one journal to another. Check to see if your research grant includes a budget for APCs. You may also have access to local resources to subsidize APCs, like the discounts UC authors get from some publishers or the UCSB Open Access Publishing Fund. Note that the OA Publishing Fund requires not merely that the journal be fully open access, but that it appear in the Directory of Open Access Journals (DOAJ).