
    Benchmarking citation measures among the Australian education professoriate

    Individual researchers and the organisations for which they work are interested in comparative measures of research performance for a variety of purposes. Such comparisons are facilitated by quantifiable measures that are easily obtained and offer convenience and a sense of objectivity. One popular measure is the Journal Impact Factor, which is based on citation rates, but it is intended for journals rather than individuals. Moreover, educational research publications are not well represented in the databases most widely used for calculating citation measures, leading to doubts about the usefulness of such measures in education. Newer measures and data sources offer alternatives that provide wider representation of education research. However, research has shown that citation rates vary by discipline, and valid comparisons depend upon the availability of discipline-specific benchmarks. This study sought to provide such benchmarks for Australian educational researchers, based on an analysis of citation measures obtained for the Australian education professoriate.
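    Benchmarks of this kind are usually reported as percentile cut-points of a citation measure across a discipline cohort. A minimal sketch of that idea, assuming a hypothetical list of h-index values for the cohort (not data from the study):

```python
# Minimal sketch: quartile benchmarks for a citation measure across a cohort.
# The h-index values below are hypothetical placeholders, not data from the study.
import statistics

cohort_h_indices = [3, 5, 5, 7, 8, 10, 12, 15, 18, 22, 30]  # hypothetical cohort values

# Quartile cut-points: individuals can be compared against these benchmarks.
q1, median, q3 = statistics.quantiles(cohort_h_indices, n=4)
print(f"Q1={q1}, median={median}, Q3={q3}")

def benchmark(h: int) -> str:
    """Place an individual h-index relative to the cohort quartiles."""
    if h >= q3:
        return "top quartile"
    if h >= median:
        return "above median"
    if h >= q1:
        return "below median"
    return "bottom quartile"

print(benchmark(14))  # e.g. "above median" for this hypothetical cohort
```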

    A review of the characteristics of 108 author-level bibliometric indicators

    An increasing demand for bibliometric assessment of individuals has led to a growth of new bibliometric indicators as well as new variants or combinations of established ones. The aim of this review is to contribute objective facts about the usefulness of bibliometric indicators of the effects of publication activity at the individual level. This paper reviews 108 indicators that can potentially be used to measure performance at the individual author level, and examines the complexity of their calculations in relation to what they are supposed to reflect and their ease of end-user application. Comment: to be published in Scientometrics, 201
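    Many author-level indicators are simple functions of an author's list of citation counts; the h-index is a representative example of the kind of calculation the review compares. A minimal sketch (not code from the paper):

```python
# Minimal sketch of one well-known author-level indicator, the h-index:
# the largest h such that the author has h publications with at least h citations each.
def h_index(citations: list[int]) -> int:
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))   # -> 4
print(h_index([25, 8, 5, 3, 3]))   # -> 3
```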

    Citation Statistics

    This is a report about the use and misuse of citation data in the assessment of scientific research. The idea that research assessment must be done using "simple and objective" methods is increasingly prevalent today. The "simple and objective" methods are broadly interpreted as bibliometrics, that is, citation data and the statistics derived from them. There is a belief that citation statistics are inherently more accurate because they substitute simple numbers for complex judgments, and hence overcome the possible subjectivity of peer review. But this belief is unfounded. Comment: This paper is commented on in [arXiv:0910.3532], [arXiv:0910.3537], [arXiv:0910.3543], [arXiv:0910.3546]; rejoinder in [arXiv:0910.3548]. Published at http://dx.doi.org/10.1214/09-STS285 in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org).

    Counting publications and citations: Is more always better?

    Is more always better? We address this question in the context of bibliometric indices that aim to assess the scientific impact of individual researchers by counting their number of highly cited publications. We propose a simple model in which the number of citations of a publication depends not only on the scientific impact of the publication but also on other 'random' factors. Our model indicates that more need not always be better. It turns out that the most influential researchers may have a systematically lower performance, in terms of highly cited publications, than some of their less influential colleagues. The model also suggests an improved way of counting highly cited publications.
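    One way to see why more need not always be better is to simulate a toy version of such a model; the functional form, noise distribution, and threshold below are illustrative assumptions, not the authors' specification. Performance is scored by counting papers above a fixed "highly cited" threshold:

```python
# Toy simulation (illustrative assumptions, not the authors' exact model):
# citations = underlying impact + random noise; performance = number of papers
# whose citations exceed a fixed "highly cited" threshold.
import random

random.seed(1)
THRESHOLD = 100  # hypothetical cut-off for a "highly cited" publication

def highly_cited_count(impact: float, n_papers: int) -> int:
    count = 0
    for _ in range(n_papers):
        citations = impact + random.expovariate(1 / 60)  # impact plus a skewed random bonus
        if citations >= THRESHOLD:
            count += 1
    return count

# Researcher A: higher impact per paper, fewer papers.
# Researcher B: lower impact per paper, many more papers.
a = highly_cited_count(impact=80, n_papers=20)
b = highly_cited_count(impact=40, n_papers=100)
print(f"A (high impact, 20 papers): {a} highly cited")
print(f"B (lower impact, 100 papers): {b} highly cited")  # typically larger than A
```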

    Report : review of the literature : maintenance and rehabilitation costs for roads (Risk-based Analysis)

    Realistic estimates of short- and long-term (strategic) budgets for maintenance and rehabilitation in road asset management should consider the stochastic characteristics of asset conditions across road networks, so that the overall variability of road asset condition data is taken into account. Probability theory has been used for assessing life-cycle costs for bridge infrastructure by Kong and Frangopol (2003), Zayed et al. (2002), Liu and Frangopol (2004), Noortwijk and Frangopol (2004), and Novick (1993). Salem et al. (2003) cited the importance of collecting and analysing existing data on total costs for all life-cycle phases of existing infrastructure, including bridges and roads, and of using realistic methods for calculating the probable useful life of these infrastructures. Zayed et al. (2002) reported conflicting results in life-cycle cost analysis using deterministic and stochastic methods. Frangopol et al. (2001) suggested that additional research was required to develop better life-cycle models and tools to quantify the risks and benefits associated with infrastructure. It is evident from the review of the literature that there is very limited information on methodologies that use the stochastic characteristics of asset condition data for assessing budgets/costs for road maintenance and rehabilitation (Abaza 2002; Salem et al. 2003; Zhao et al. 2004). Given this gap in the research literature, this report describes and summarises the methodologies presented by each publication and also suggests a methodology for the current research project funded under the Cooperative Research Centre for Construction Innovation (CRC CI), project no. 2003-029-C.
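    The stochastic approach contrasted here with deterministic analysis amounts to replacing single-point condition and cost inputs with probability distributions and simulating the resulting budget range. A minimal Monte Carlo sketch, in which all distributions, thresholds, and unit costs are hypothetical assumptions rather than values from the report:

```python
# Minimal Monte Carlo sketch of a risk-based maintenance budget estimate.
# All distributions, condition thresholds, and unit costs are hypothetical.
import random
import statistics

random.seed(42)

def simulate_annual_budget(n_segments: int = 500) -> float:
    """Simulate one realisation of the annual maintenance/rehabilitation cost."""
    total = 0.0
    for _ in range(n_segments):
        condition = random.gauss(mu=70, sigma=15)        # pavement condition index, 0-100
        if condition < 40:
            total += random.uniform(200_000, 400_000)    # rehabilitation cost per segment
        elif condition < 60:
            total += random.uniform(30_000, 80_000)      # heavy maintenance
        else:
            total += random.uniform(2_000, 10_000)       # routine maintenance
    return total

runs = [simulate_annual_budget() for _ in range(1_000)]
print(f"mean budget: {statistics.mean(runs):,.0f}")
print(f"90th percentile (risk-based allowance): {statistics.quantiles(runs, n=10)[-1]:,.0f}")
```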

    An Integrated Impact Indicator (I3): A New Definition of "Impact" with Policy Relevance

    Allocation of research funding, as well as promotion and tenure decisions, is increasingly made using indicators and impact factors drawn from citations to published work. A debate among scientometricians about the proper normalization of citation counts has been resolved with the creation of an Integrated Impact Indicator (I3), which solves a number of problems found in previously used indicators. I3 applies non-parametric statistics using percentiles, allowing highly cited papers to be weighted more than less-cited ones. It further allows unbundling of venues (i.e., journals or databases) at the article level. Measures at the article level can be re-aggregated in terms of units of evaluation. At the venue level, I3 provides a properly weighted alternative to the journal impact factor. I3 has the added advantage of enabling and quantifying classifications such as the six percentile rank classes used in the National Science Board's Science & Engineering Indicators. Comment: Research Evaluation (in press)
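    In outline, I3 weights each paper by its citation percentile (or by the weight attached to its percentile rank class) within a reference set and sums those weights over the unit being evaluated. A minimal sketch using raw percentiles and made-up citation counts; the paper's exact class scheme and weights are not reproduced here:

```python
# Minimal sketch of a percentile-based impact score in the spirit of I3:
# each paper is weighted by its citation percentile within a reference set,
# and the weights are summed over the unit being evaluated.
# The reference set and citation counts below are made up for illustration.

def percentile_rank(value: int, reference: list[int]) -> float:
    """Share of the reference set with citation counts at or below `value` (0-100)."""
    below_or_equal = sum(1 for c in reference if c <= value)
    return 100.0 * below_or_equal / len(reference)

reference_set = [0, 1, 1, 2, 3, 4, 6, 8, 12, 20, 35, 60]   # hypothetical field baseline
unit_papers = [2, 8, 35]                                    # citation counts of the unit's papers

i3_like_score = sum(percentile_rank(c, reference_set) for c in unit_papers)
print(f"I3-style score: {i3_like_score:.1f}")
# Highly cited papers contribute large percentile weights; lowly cited ones contribute little.
```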