    Sometimes the impact factor outshines the H index

    Journal impact factor (which reflects a particular journal's quality) and H index (which reflects the number and quality of an author's publications) are two measures of research quality. It has been argued that the H index outperforms the impact factor for evaluation purposes. Using articles first-authored or last-authored by board members of Retrovirology, we show here that the reverse is true when the future success of an article is to be predicted. The H index proved unsuitable for this specific task because, surprisingly, an article's odds of becoming a 'hit' appear independent of the pre-eminence of its author. We discuss implications for the peer-review process.
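    As a point of reference, a minimal sketch of the H index computation mentioned above, assuming the standard definition (the largest h such that the author has h papers with at least h citations each); the citation counts are hypothetical:

```python
def h_index(citations):
    """Largest h such that h of the papers have at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical citation counts for one author's papers.
print(h_index([25, 8, 5, 4, 3, 1, 0]))  # -> 4 (four papers with at least 4 citations each)
```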

    Evaluating Research Activity: Impact Factor vs. Research Factor

    The Impact Factor (IF) “has moved ... from an obscure bibliometric indicator to become the chief quantitative measure of the quality of a journal, its research papers, the researchers who wrote those papers, and even the institution they work in” ([2], p. 1). However, the use of this index for evaluating individual scientists is dubious. The present work compares the ranking of research units generated by the Research Factor (RF) index with that associated with the popular IF. The former, originally introduced in [38], reflects article and book publications and a host of other activities categorized as coordination activities (e.g., conference organization, research group coordination), dissemination activities (e.g., conference and seminar presentations, participation in research groups), editorial activities (e.g., journal editor, associate editor, referee) and functional activities (e.g., Head of Department). The main conclusion is that replacing the IF with the RF in hiring, tenure decisions and the awarding of grants would greatly increase the number of topics investigated and the number and quality of long-run projects.
    Keywords: scientific research assessment, Impact Factor, bibliometric indices, feasible Research Factor
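    The abstract names the activity categories that enter the RF but not its exact weighting scheme, so the sketch below only illustrates the general shape of such a composite index; the weights and the example activity counts are invented and are not the formula introduced in [38].

```python
# Illustrative composite score in the spirit of the Research Factor. The categories
# follow the abstract; the weights and the example counts below are hypothetical.
WEIGHTS = {
    "publications": 3.0,   # articles and books
    "coordination": 1.5,   # conference organization, research group coordination
    "dissemination": 1.0,  # conference and seminar presentations
    "editorial": 1.0,      # editor, associate editor, referee roles
    "functional": 0.5,     # e.g., Head of Department
}

def composite_score(activity_counts):
    """Weighted sum of a researcher's activity counts across the categories above."""
    return sum(WEIGHTS[category] * count for category, count in activity_counts.items())

# Hypothetical researcher profile.
profile = {"publications": 12, "coordination": 2, "dissemination": 8,
           "editorial": 5, "functional": 1}
print(composite_score(profile))  # -> 52.5
```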

    Are methodological quality and completeness of reporting associated with citation-based measures of publication impact? A secondary analysis of a systematic review of dementia biomarker studies

    Objective: To determine whether methodological and reporting quality are associated with surrogate measures of publication impact in the field of dementia biomarker studies. Methods: We assessed dementia biomarker studies included in a previous systematic review in terms of methodological and reporting quality using the Quality Assessment of Diagnostic Accuracy Studies (QUADAS) and Standards for Reporting of Diagnostic Accuracy (STARD), respectively. We extracted additional study and journal-related data from each publication to account for factors shown to be associated with impact in previous research. We explored associations between potential determinants and measures of publication impact in univariable and stepwise multivariable linear regression analyses. Outcome measures: We aimed to collect data on four measures of publication impact: two traditional measures (the average number of citations per year and the 5-year impact factor of the publishing journal) and two alternative measures (the Altmetric Attention Score and counts of electronic downloads). Results: The systematic review included 142 studies. Due to limited data, Altmetric Attention Scores and electronic downloads were excluded from the analysis, leaving traditional metrics as the only analysed outcome measures. We found no relationship between QUADAS and traditional metrics. Citation rates were independently associated with 5-year journal impact factor (β=0.42; p<0.001), journal subject area (β=0.39; p<0.001), number of years since publication (β=-0.29; p<0.001) and STARD (β=0.13; p<0.05). Independent determinants of 5-year journal impact factor were citation rates (β=0.45; p<0.001), a statement on conflict of interest (β=0.22; p<0.01) and baseline sample size (β=0.15; p<0.05). Conclusions: Citation rates and 5-year journal impact factor appear to measure different dimensions of impact. Citation rates were weakly associated with completeness of reporting, while neither traditional metric was related to methodological rigour. Our results suggest that high usage and a prestigious journal outlet are no guarantee of quality, and readers should critically appraise all papers regardless of presumed impact.
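    The regression itself cannot be reproduced without the underlying dataset, but a minimal sketch of a multivariable linear model on standardized variables, in the spirit of the analysis described above, can be run on simulated data; the predictors, coefficients and sample values below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 142  # same number of studies as the review, but the data here are simulated

# Hypothetical predictors: 5-year journal impact factor, years since publication, STARD score.
X_raw = np.column_stack([
    rng.lognormal(1.0, 0.5, n),   # journal impact factor
    rng.integers(1, 15, n),       # years since publication
    rng.integers(10, 25, n),      # STARD completeness-of-reporting score
])
y_raw = 0.4 * X_raw[:, 0] - 0.3 * X_raw[:, 1] + 0.1 * X_raw[:, 2] + rng.normal(0, 1, n)

# Standardize so the fitted coefficients are comparable to the betas quoted in the abstract.
z = lambda a: (a - a.mean(axis=0)) / a.std(axis=0)
X, y = z(X_raw), z(y_raw)

# Ordinary least squares on the standardized design matrix (intercept included).
beta, *_ = np.linalg.lstsq(np.column_stack([np.ones(n), X]), y, rcond=None)
print(dict(zip(["intercept", "5y_IF", "years", "STARD"], beta.round(2))))
```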

    Quality and validity of large animal experiments in stroke: a systematic review

    An important factor for successful translational stroke research is study quality. Low-quality studies are at risk of biased results and effect overestimation, as has been intensely discussed for small animal stroke research. However, little is known about the methodological rigor and quality of large animal stroke models, which are becoming more frequently used in the field. Based on searches of two databases, this systematic review surveys and analyses methodological quality in large animal stroke research. Quality analysis was based on the Stroke Therapy Academic Industry Roundtable and the Animals in Research: Reporting In Vivo Experiments guidelines. Our analysis revealed that large animal models are used with shortcomings similar to those of small animal models. Moreover, the translational benefits of large animal models may be limited by the lack of important quality criteria such as randomization, allocation concealment, and blinded assessment of outcome. On the other hand, an increase in study quality over time and a positive correlation between study quality and journal impact factor were identified. Based on these findings, we derive recommendations for optimal study planning, conduct, and data analysis/reporting when using large animal stroke models, so that the translational advantages offered by these models can be fully exploited.

    What do Experts Know About Ranking Journal Quality? A Comparison with ISI Research Impact in Finance

    Experts possess knowledge and information that are not publicly available. The paper is concerned with the ranking of academic journal quality and research impact using a survey of experts from a national project on ranking academic finance journals. A comparison is made with publicly available bibliometric data, namely the Thomson Reuters ISI Web of Science citations database (hereafter ISI) for the Business - Finance category. The paper analyses the leading international journals in Finance using expert scores and quantifiable Research Assessment Measures (RAMs), and highlights the similarities and differences in the expert scores and alternative RAMs, where the RAMs are based on alternative transformations of citations taken from the ISI database. Alternative RAMs may be calculated annually or updated daily to answer the perennial questions as to When, Where and How (frequently) published papers are cited (see Chang et al. (2011a, b, c)). The RAMs include the most widely used RAM, namely the classic 2-year impact factor including journal self citations (2YIF), 2-year impact factor excluding journal self citations (2YIF*), 5-year impact factor including journal self citations (5YIF), Immediacy (or zero-year impact factor (0YIF)), Eigenfactor, Article Influence, C3PO (Citation Performance Per Paper Online), h-index, PI-BETA (Papers Ignored - By Even The Authors), 2-year Self-citation Threshold Approval Ratings (2Y-STAR), Historical Self-citation Threshold Approval Ratings (H-STAR), Impact Factor Inflation (IFI), and Cited Article Influence (CAI). As data are not available for 5YIF, Article Influence and CAI for 13 of the leading 34 journals considered, 10 RAMs are analysed for 21 highly-cited journals in Finance. Harmonic mean rankings of the 10 RAMs for the 34 highly-cited journals are also presented. It is shown that emphasizing the 2-year impact factor of a journal, which partly answers the question as to When published papers are cited, to the exclusion of other informative RAMs, which answer Where and How (frequently) published papers are cited, can lead to a distorted evaluation of journal impact and influence relative to the Harmonic Mean rankings. A simple regression model is used to predict expert scores on the basis of RAMs that capture journal impact, journal policy, the number of high quality papers, and quantitative information about a journal.
    Keywords: IFI; PI-BETA; STAR; article influence; eigenfactor; h-index; C3PO; impact factor; research assessment measures; C81; C83; C18; expert scores; journal quality
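    A minimal sketch of two of the RAMs named above (the 2-year impact factor with and without journal self-citations) and of a harmonic-mean combination of rank positions; the citation figures are invented, and the paper's own harmonic mean construction may differ in detail.

```python
# 2-year impact factor style calculations with hypothetical citation figures.
# cites_2y: citations in year t to items published in t-1 and t-2;
# self_cites_2y: the subset of those citations coming from the journal itself;
# items_2y: citable items published in t-1 and t-2.
def two_year_if(cites_2y, items_2y):
    """2YIF: citations per citable item over the preceding two years."""
    return cites_2y / items_2y

def two_year_if_star(cites_2y, self_cites_2y, items_2y):
    """2YIF*: the same ratio after excluding journal self-citations."""
    return (cites_2y - self_cites_2y) / items_2y

def harmonic_mean_rank(ranks):
    """Harmonic mean of a journal's rank positions across several RAMs."""
    return len(ranks) / sum(1.0 / r for r in ranks)

print(two_year_if(300, 120))            # 2YIF  = 2.5
print(two_year_if_star(300, 60, 120))   # 2YIF* = 2.0
print(harmonic_mean_rank([1, 4, 10]))   # rank positions under three different RAMs
```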

    Does Criticism Overcome the Praises of Journal Impact Factor?

    The journal impact factor (IF), a gauge of the influence of a particular journal compared with other journals in the same area of research, reports the mean number of citations to the articles published in that journal. Although the IF attracts more attention and is used more frequently than other measures, it has been subject to criticisms that outweigh its advantages. Critically, extensive use of the IF may distort editorial and researcher behaviour, which could compromise the quality of scientific articles. It is therefore time to develop journal ranking techniques that go beyond the journal impact factor.

    Quantifying the impact and relevance of scientific research

    Qualitative and quantitative methods are being developed to measure the impacts of research on society, but they suffer from serious drawbacks associated with linking a piece of research to its subsequent impacts. We have developed a method to derive impact scores for individual research publications according to their contribution to answering questions of quantified importance to end users of research. To demonstrate the approach, here we evaluate the impacts of research into means of conserving wild bee populations in the UK. For published papers, there is a weak positive correlation between our impact score and the impact factor of the journal. The process identifies publications that provide high quality evidence relating to issues of strong concern. It can also be used to set future research agendas.
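    The abstract does not give the scoring formula, but the general idea of weighting a publication's contribution to each end-user question by that question's quantified importance can be sketched as follows; the questions, importance weights and contribution values below are invented for illustration.

```python
# Hypothetical importance weights elicited from end users (e.g. conservation practitioners)
# and per-publication contribution scores to each question, both on arbitrary 0-1 scales.
question_importance = {"pesticide_effects": 0.9, "habitat_loss": 0.7, "forage_mix": 0.4}

def impact_score(contributions):
    """Importance-weighted sum of a publication's contributions to end-user questions."""
    return sum(question_importance[q] * c for q, c in contributions.items())

paper_a = {"pesticide_effects": 0.8, "habitat_loss": 0.1, "forage_mix": 0.0}
paper_b = {"pesticide_effects": 0.0, "habitat_loss": 0.3, "forage_mix": 0.9}
print(impact_score(paper_a), impact_score(paper_b))  # 0.79 0.57
```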

    Article length bias in journal rankings

    The quality of publications, approximated by the quality indicator of the journal in which they appear, is often the basis for hiring and promotion in academic and research positions. Over the years a handful of ranking methods have been proposed. Discussing the most prominent methods, we show that they are inherently biased against journals publishing short papers.
    Keywords: quality ranking, paper length, impact factor, invariant method, LP method