The Open Access journal Ethics in Science and Environmental Politics (ESEP) has devoted its entire first issue of 2008 to the question of Impact Factors, under the theme:
The use and misuse of bibliometric indices in evaluating scholarly performance
Here is the editors' foreword:
Bibliometric indices (based mainly upon citation counts), such as the h-index and the journal impact factor, are heavily relied upon in such assessments. There is a growing consensus, and a deep concern, that these indices — more and more often used as a replacement for the informed judgement of peers — are misunderstood and are, therefore, often misinterpreted and misused. The articles in this ESEP Theme Section present a range of perspectives on these issues. Alternative approaches, tools and metrics that will hopefully lead to a more balanced role for these instruments are presented.
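For readers unfamiliar with the two indices the foreword names, here is a minimal sketch of their standard definitions (Python, with made-up citation counts; this is illustrative and not part of the ESEP issue):

```python
def h_index(citations):
    """Hirsch's h-index: the largest h such that the author has
    h papers with at least h citations each."""
    cites = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(cites, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h


def journal_impact_factor(cites_to_prev_two_years, items_prev_two_years):
    """Journal Impact Factor for year Y: citations received in Y to
    items published in Y-1 and Y-2, divided by the number of citable
    items published in Y-1 and Y-2."""
    return cites_to_prev_two_years / items_prev_two_years


# Example with hypothetical data:
print(h_index([10, 8, 5, 4, 3]))        # → 4
print(journal_impact_factor(200, 100))  # → 2.0
```

The sketch makes the foreword's point concrete: both indices compress a whole citation record into a single number, which is precisely what makes them easy to misinterpret.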
Here is the TOC:
INTRODUCTION:
Factors and indices are one thing, deciding who is scholarly, why they are scholarly, and the relative value of their scholarship is something else entirely/ Browman HI, Stergiou KI
Escape from the impact factor/ Campbell P
Makes the argument that the most effective and fair analysis of a person’s contribution derives from a direct assessment of individual papers, regardless of where they were published.
Lost in publication: how measurement harms science/ Lawrence PA
Changes to the way scientists are assessed are urgently needed, and suggestions are made in this article.
Hidden dangers of a ‘citation culture’/ Todd PA, Ladle RJ
A look at one of the areas where citation numbers can be inaccurate in terms of the manner in which papers are cited, indexed and searched for.
The siege of science/ Taylor M, Perakakis P, Trachana V
The academic as author, editor and/or reviewer, under intense competitive pressure, is forced to play the publishing game where such numbers rule, leading to frequent abuses of power. Here, Taylor, Perakakis and Trachana review the current status of this siege, how it arose and how it is likely to evolve.
The economics of post-doc publishing/ Cheung WWL
This article explores how bibliometrics affect the publication strategy from a point of view of a post-doctoral fellow, with analogy and explanation from simple economic theories.
Chasing after the high impact/ Tsikliras AC
Young scientists may not always be aware of the advantages and pitfalls of the impact factor system when choosing a journal to submit to. Even so, journal ranking is among their selection criteria: it comes after the journal's general scope and rapid manuscript handling, but ahead of whether a journal allows authors to suggest potential referees, and ahead of open access.
Challenges for scientometric indicators: data demining, knowledge flows measurements and diversity issues/ Zitt M, Bassecoulard E
This article describes some of the challenges for bibliometric indicators (data ‘demining’, knowledge-flow measurements and diversity issues) underlying, among other applications, reliable evaluation procedures.
Google Scholar as a new source for citation analysis/ Harzing AWK, van der Wal R
Traditionally, the most commonly used source of bibliometric data is Thomson ISI Web of Knowledge, in particular the Web of Science and the Journal Citation Reports (JCR), which provide the yearly Journal Impact Factors (JIF). This paper presents an alternative source of data as well as 3 alternatives to the JIF to assess journal impact.
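The three JIF alternatives from the Harzing & van der Wal paper are not reproduced here, but as an illustration of how citation-based metrics can rank the same record differently, here is a sketch of Egghe's g-index, a well-known companion to the h-index, using hypothetical citation counts (Python; this capped variant never exceeds the number of papers):

```python
def g_index(citations):
    """Egghe's g-index: the largest g such that the top g papers
    together have at least g^2 citations (capped at the number
    of papers)."""
    cites = sorted(citations, reverse=True)
    total, g = 0, 0
    for rank, c in enumerate(cites, start=1):
        total += c
        if total >= rank * rank:
            g = rank
    return g


# Hypothetical record: h-index would be 4, but the g-index is 5,
# because the g-index gives extra weight to a few highly cited papers.
print(g_index([10, 8, 5, 4, 3]))  # → 5
```

The point of showing both is that no single number captures a citation record; different indices reward different shapes of output, which is exactly why the theme section argues for using them with care.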
Re-interpretation of ‘influence weight’ as a citation-based Index of New Knowledge (INK)/ Pauly D, Stergiou KI
The INK method for assessing the ‘influence weight’ of journals is re-interpreted as a potential citation-based indicator of the impact of scientific and other publications.
Benefitting from bibliometry/ Giske J
Within a department, the evaluation of performance may be one of several incentives for improving scientific quality and productivity. However, used alone, performance evaluation can lead to destructive competition and marginalization of potentially valuable staff members.
Using a balanced approach to bibliometrics: quantitative performance measures in the Australian Research Quality Framework/ Butler L (with Erratum)
Citation counts for research evaluation: standards of good practice for analyzing bibliometric data and presenting and interpreting results/ Bornmann L, Mutz R, Neuhaus C, Daniel HD
Here Bornmann et al present standards of good practice for analyzing bibliometric data and presenting and interpreting the results.
Validating research performance metrics against peer rankings / Harnad S
In and of themselves, metrics are circular: They need to be jointly tested and validated against what it is that they purport to measure and predict, with each metric weighted according to its contribution to their joint predictive power.
Wednesday, August 13, 2008