Problems of bibliometric indicators
Bibliometric indicators invite comparisons that can lead to false conclusions, because publication and citation cultures differ across disciplines and research fields. For example, many natural science disciplines publish more than most humanities disciplines, particularly in the form of articles with numerous co-authors. Compared with researchers who publish mainly in the form of monographs, their bibliometric indicators will therefore differ, even though this difference says nothing about the scientific quality of the publications or the performance of the individual authors. Moreover, the data used for bibliometric analyses are not always complete, so the underlying data basis also influences the results.
This is particularly problematic because bibliometric indicators are increasingly used to assess performance and can make or break a researcher's scientific career. If, in the course of such an assessment, the journal impact factors (JIF) of all journal articles on a person's publication list are added together to rank that person's scientific performance, this is an improper use of the indicator: the journal impact factor is a journal-level metric and does not permit statements about individual publications (see above). Since such misinterpretations are not uncommon in performance assessment, bibliometric indicators are increasingly viewed with skepticism by funding agencies, researchers, and initiatives.
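A toy numerical sketch can make the problem with summing JIFs concrete. All authors, journals, and figures below are invented for illustration: an author publishing a few rarely cited articles in high-JIF journals outscores, on summed JIF, an author whose articles in low-JIF journals are heavily cited.

```python
# Hypothetical example: summing journal impact factors (JIFs) over a
# publication list versus counting the citations the articles actually
# received. All numbers are invented for illustration.

# Each article is a pair: (JIF of the journal it appeared in,
#                          citations this particular article received)
author_a = [(25.0, 2), (18.0, 0), (20.0, 1)]   # high-JIF journals, rarely cited
author_b = [(3.0, 40), (2.5, 55), (4.0, 30)]   # low-JIF journals, highly cited

def summed_jif(articles):
    """Improper aggregate: adds up a journal-level metric per article."""
    return sum(jif for jif, _ in articles)

def total_citations(articles):
    """Article-level measure: citations the articles actually received."""
    return sum(cites for _, cites in articles)

print(summed_jif(author_a), total_citations(author_a))   # 63.0 3
print(summed_jif(author_b), total_citations(author_b))   # 9.5 125
```

Summed JIF ranks author A far ahead of author B, while the articles' own citation counts point the other way, because the JIF describes the journal, not the individual publication.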