Bibliometrics

Bibliometrics is the measurement of scientific output based on publications and the interpretation of the results. The basis for bibliometric analyses is mass data provided by citation databases such as Web of Science and Scopus. Mathematical and statistical methods are used for their evaluation. A bibliometric analysis does not provide a qualitative assessment but a quantitative measurement of publications. Bibliometrics is thus a subfield of scientometrics, the measurement of scientific output.

Scientific findings are usually published as papers in the form of scientific articles, lectures and books, or as data. In turn, other researchers can refer to these publications in their publications through citations. Regardless of whether scientific results are received positively or negatively, the citations related to them represent a measure of resonance.

The most common basis of bibliometric indicators is therefore the relationship between the number of publications and the number of citations, for example for a researcher or a journal. As scholarly communication continues to evolve, alternative metrics, known as altmetrics, have been developed. These allow a more immediate evaluation that includes very recent publications and also draw on new media such as Twitter and blogs.

However, bibliometric indicators are often misinterpreted and used inappropriately in academic performance evaluation, which is why they are increasingly viewed skeptically in the scientific community. Altmetrics indicators are also criticized for being open to manipulation. Nevertheless, it is worthwhile for scientists to engage with bibliometric indicators, because this enables them to properly contextualize the response to their own publications.

BerlinUP offers advice if you have questions about bibliometric indicators and their implications.

The term author-related indicators covers all indicators that aim to quantify the scientific work of individuals exclusively. These can help answer the following questions, for example:

  • How many publications have I published as an author in a certain period of time?

  • How often have I been cited?

  • How can the scope and influence of my scientific findings be increased or strengthened?

Some author-related indicators are explained below.

Number of publications

Represents the absolute number of published documents in a defined time window.

Number of citations

Represents the absolute number of citations of defined publications in a defined time window.

Citation rate

The citation rate measures the number of citations in relation to the number of publications. It thus indicates how often a person's articles are cited on average. Outlier publications carry considerable weight in this average.
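
As a minimal sketch of the calculation, the citation rate is simply the arithmetic mean of the citation counts of a set of publications; the citation counts below are invented for illustration:

```python
# Citation rate: average number of citations per publication.
# The citation counts are illustrative, not real data.
citations = [12, 3, 0, 45, 7]  # citations of one author's publications

citation_rate = sum(citations) / len(citations)
print(f"Citation rate: {citation_rate:.1f}")  # 13.4 - the outlier (45) pulls the average up
```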

h-index

The h-index, also known as the Hirsch index, relates the number of publications, usually of a person, to their citations: it is the largest number h for which h publications have each been cited at least h times. The h-index of a person can therefore be at most as large as the number of that person's publications, and it only reaches this maximum if each publication has been cited at least as often as there are publications in total. For example, if a person has two publications cited twice each, the person's h-index is 2. If both publications are cited three times each, the index remains at 2. Only after a third publication, with each of the three publications cited at least three times, does the h-index increase to 3.

The higher the h-index of a person, the greater the number of frequently cited publications by this person. Once a given h-index is reached, it cannot decrease, but it also cannot increase through further publications alone. Moreover, individual outlier publications that are cited particularly often or particularly rarely carry little weight. The h-index is mostly applied to individuals, but can also be applied to groups of individuals, facilities, and institutions. The h-index is criticized as not sufficiently meaningful because it measures productivity rather than quality. Moreover, it can vary depending on the database from which it is calculated.
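
A minimal sketch of the computation; the citation lists mirror the example above and are purely illustrative:

```python
def h_index(citations: list[int]) -> int:
    """Largest h such that h publications have at least h citations each."""
    cited = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(cited, start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

print(h_index([2, 2]))      # 2: two publications cited at least twice each
print(h_index([3, 3]))      # still 2: a third publication is missing
print(h_index([3, 3, 3]))   # 3: three publications, each cited at least three times
```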

Field-Weighted Citation Impact (FWCI)

This indicator is used for both individual publications and authors and takes discipline-specific publication cultures into account. The FWCI is the ratio between the actual number of citations a publication has received to date and the number expected for a publication with similar characteristics. The expected number refers to the average number of citations over the past three years for all publications of the same age, document type, and discipline. The FWCI was introduced by Elsevier's Scopus database (see below).
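
The idea behind the ratio can be sketched as follows; the numbers are assumed for illustration and are not Scopus data:

```python
# FWCI sketch: actual citations of a publication divided by the expected
# citations for publications of the same age, document type, and discipline.
actual_citations = 18
expected_citations = 6.0   # assumed average for comparable publications

fwci = actual_citations / expected_citations
print(f"FWCI: {fwci:.1f}")  # 3.0 - cited three times as often as expected
```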

The relevance of publication journals can vary widely across research disciplines. Even within a discipline, not all journals are equally relevant to all researchers. This is where journal-related indicators come in. They can help to answer the following questions individually:

  • In which journal should I publish?

  • Which journal is particularly relevant in my discipline?

Journal Impact Factor (JIF)

The Journal Impact Factor (JIF), often just called the Impact Factor (IF), of a journal for a given year is calculated by dividing the number of citations received in that year by articles the journal published in the two preceding years by the number of articles it published in those two years. If a journal published a total of 100 articles in 2020 and 2021 that were cited a total of 200 times in 2022, the journal's JIF for 2022 is 2. A JIF can only ever be calculated for completed volumes. This means that the JIF of a journal given in 2022 is calculated from the two previous volumes.
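
The arithmetic of the example above can be written out as a short sketch (invented numbers):

```python
# JIF sketch: 100 articles published in 2020 and 2021, cited 200 times in 2022.
articles_2020_2021 = 100
citations_in_2022 = 200

jif_2022 = citations_in_2022 / articles_2020_2021
print(f"JIF 2022: {jif_2022:.1f}")  # 2.0
```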

The JIF is often mistakenly equated with the research performance of the publishing authors, although it is purely a journal indicator. Moreover, a publication in a journal with a high JIF says nothing about the scientific quality of the publication itself. This was also never the purpose of the JIF, which was originally developed by Eugene Garfield at the Institute for Scientific Information with the aim of helping libraries select journals to subscribe to given limited resources. The JIF was first calculated for a selection of journals in 1975 and published as part of the Journal Citation Reports (JCR); both the JIF and the JCR are now products of the company Clarivate Analytics.

Source Normalized Impact per Paper (SNIP)

The SNIP relates the average number of citations a journal receives to the average number of citations in its respective discipline (the citation potential). A SNIP of 1 means that a journal is cited at the average rate of the corresponding research field; a SNIP of 2 means that articles in the journal are cited twice as often as the field average. The goal of the SNIP is to make the citation impact of journals from different disciplines comparable. The SNIP is provided by Elsevier via its database Scopus (see below).
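
A minimal sketch of this normalization idea, with assumed values rather than real Scopus data, shows how journals from fields with very different citation habits can end up with the same SNIP:

```python
# SNIP sketch: citations per paper divided by the field's citation potential.
journals = {
    # journal: (citations per paper, citation potential of its field) - assumed values
    "Journal A (cell biology)": (12.0, 6.0),
    "Journal B (mathematics)":  (4.0, 2.0),
}

for name, (cpp, potential) in journals.items():
    print(f"{name}: SNIP = {cpp / potential:.1f}")
# Both journals reach a SNIP of 2.0 despite very different raw citation counts.
```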

The perception of scientific publications no longer depends solely on the response in purely scientific publication organs. Research content is increasingly cited and discussed in new media, such as Twitter or science blogs. Researchers themselves draw attention to their research on social media or are invited as guests on podcasts. Through these channels, research content and its methodology can be discussed with a wider audience beyond the peer-review process, which can also increase the resonance and reach of research content.

The scope of attention in newer media can be measured, for example, by the number of clicks or downloads, but also by mentions or bookmarks. These alternative metrics (altmetrics) are processed by different providers, such as Digital Science (Altmetric) or Elsevier.

Alternative metrics are not a substitute for traditional bibliometric metrics, but can be used to supplement them. Again, a high number of mentions does not necessarily equate to a high level of quality.

There are a variety of data sources for calculating bibliometric indicators, and they are used differently depending on the provider. Although different databases often offer the same indicators, the results differ in part because the providers work with divergent data bases and divergent methods of calculation. In any bibliometric analysis, it is therefore important to identify the data sources used.

Scopus

Scopus is a multidisciplinary database from the Elsevier company that consists largely of a curated list of journals and publications that are reviewed against quality standards before inclusion. The subject areas covered are still heavily weighted toward the natural sciences, with just under a third of the content in the social sciences.

Web of Science

Web of Science is a multidisciplinary database from the Clarivate Analytics company that is composed of several indexes. These indexes consist of a curated list of journals and publications that are screened for inclusion against quality standards. Although the focus of the disciplines covered remains on the natural sciences, Web of Science is steadily expanding its coverage to include the social sciences, arts, and humanities.

Dimensions

Dimensions is a multidisciplinary database from the Digital Science company that ingests metadata from open-access online sources and then augments the database with licensed content directly from publishers. The Dimensions platform can also be used as a bibliometric assessment tool, which distinguishes it from Web of Science and Scopus, which primarily offer bibliographic data with limited analytical tools. Dimensions offers partial free access to its system, as well as non-commercial access to its data via an API.

OpenAlex

OpenAlex adheres to open-source principles and makes its index of data - such as scientific papers, authors and institutions - openly available. Access is provided through a web application, via an API, and as a full database snapshot that can be downloaded for local, offline use.
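
As a sketch of how such a lookup might work, the following Python snippet queries the OpenAlex API for a single work by DOI; the DOI is only an example, and the endpoint and response fields (display_name, cited_by_count) are assumptions based on the currently documented interface and may change:

```python
# Sketch: retrieve a work and its citation count from the OpenAlex API
# (no authentication required at the time of writing).
import requests

doi = "10.7717/peerj.4375"  # example DOI
response = requests.get(f"https://api.openalex.org/works/doi:{doi}", timeout=30)
response.raise_for_status()
work = response.json()

print(work.get("display_name"))          # title of the work
print("Citations:", work.get("cited_by_count"))
```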

Initiative for Open Citations (I4OC)

This initiative provides open access to scholarly citation data. The data are intended to be machine-readable, reusable, and independent of the publication format.

Institute websites/author websites

For certain questions, websites of institutes and authors are used as a supplement.

Google Scholar

Google Scholar is Google's search engine for general literature research of scientific documents. It covers both freely accessible documents and paid offerings. Google Scholar analyzes and extracts the citations contained in the full texts and generates a citation analysis from them. However, the bibliometric indicators generated by Google Scholar are non-transparent in terms of content and inconsistently prepared (see an article by Mark Dingemanse on this topic).

Bibliometric indicators invite comparisons that can lead to false conclusions because publication and citation cultures differ between disciplines and research fields. For example, many natural science disciplines publish more than most humanities disciplines, particularly in the form of articles with numerous co-authors. Compared with researchers who frequently publish in the form of monographs, the bibliometric indicators will therefore differ - although they provide no indication of the scientific quality of the publications or the performance of the individual authors. Moreover, the data material used for bibliometric analyses is not always complete, so the data basis influences the results.

This is particularly problematic because bibliometric indicators are increasingly used to assess performance and can make or break a researcher's scientific career. If, in the course of such an assessment, the journal impact factors of all journal articles on a person's publication list are added together to rank the person's scientific performance, this is an improper use of this bibliometric indicator, since the journal impact factor is a journal-related indicator and does not allow statements about individual publications (see above). Since such misinterpretations are not uncommon in performance assessment, bibliometric indicators are increasingly viewed skeptically by funding agencies, researchers, and initiatives.

A basic knowledge of bibliometric methods and indicators allows scientists to better and more appropriately assess their research performance based on bibliometrics. In bibliometric analyses, it is also of interest to understand by whom and in what way the research results are received. By evaluating the citing channels (news, scientific articles, tweets, blogs, etc.), target groups can be defined more precisely. Projects such as Scite.ai are also looking at ways of evaluating whether a publication is discussed more approvingly or disapprovingly. Bibliometric analyses can also be helpful in identifying potential cooperation partners.

Even if bibliometric indicators can be influenced - for example by self-citations - the writing of a scientific publication should always be oriented towards the subject matter, the discipline, as well as good scientific practice and comprehensibility - and not towards bibliometric optimization.

For the visibility of one's own publications and to increase their citations, the clear identifiability of the publication medium, the specific publication, the authors, and their institutional affiliation is a prerequisite. This requires complete and correct indication of authorship and affiliation as well as persistent identifiers (e.g. an ORCID iD at the person level and a DOI at the article level). Equally important for findability is the use of standardized keywords and the choice of meaningful titles.

The calculation of bibliometric indicators is sometimes not transparent, the underlying data are not always known, and the indicators can sometimes be manipulated. Therefore, there are increasing efforts within science to use the indicators applied in science assessment appropriately and responsibly, and to reduce the focus on quantitative indicators in favor of qualitative assessment – in short: publications should be assessed less on the basis of numbers and more on the basis of their content and quality, and in the context of their discipline. Many funding agencies, as well as initiatives from the research community itself and research-related service institutions, have now taken clear positions on this.

The position of the German Research Foundation (DFG)

Since 2019, it has been mandatory for applicants for DFG funding to acknowledge and implement the Guidelines for Safeguarding Good Scientific Practice.

In its position paper on research assessment, the DFG emphasizes the relevance of responsible handling of bibliometrically collected data. The paper explicitly calls for an assessment of the scientific content rather than the publication format, and points out that different subject and discipline-specific publication cultures should be taken into account. In addition, the evaluation of scientific performance should take into account whether cross-process safeguards are adhered to, target audiences are reached, or whether open access publication formats are used.

In 2022, the DFG adopted measures to support a shift in the culture of research assessment; as part of this, it requires, among other things, the use of an adapted CV template for all funding programmes from March 2023 in order to strengthen qualitative evaluation criteria over quantitative metrics. For example, indicators such as the h-index are no longer asked for and no longer play a role in the potential approval of funding.

San Francisco Declaration on Research Assessment (DORA)

The 2012 Declaration on Research Assessment also emphasizes the application and communication of good scientific practices. One of its main demands is to move away from the journal impact factor as a quality indicator and to transparently disclose the indicators used in assessment. The declaration has been signed by more than 21,700 individuals and organizations in 158 countries to date (as of May 2022), including the DFG.

Leiden Manifesto

The ten principles postulated in the Leiden Manifesto are intended to guide the use of metrics and bibliometric measures in line with a predefined objective. Collected data and quantitative metrics should support, but not solely determine, judgment in a research assessment. For example, quantitative metrics are to be supplemented by qualitative criteria. In addition, the data used should be transparently disclosed and the corresponding indicators regularly updated.

The Metric Tide

This 2015 review makes recommendations for different stakeholders. These are based on principles for responsible use of metrics, such as diversity and reflexivity.
