
    How reliable are systematic reviews in empirical software engineering?

    BACKGROUND – the systematic review is becoming a more commonly employed research instrument in empirical software engineering. Before undue reliance is placed on the outcomes of such reviews, it would seem useful to consider the robustness of the approach in this particular research context. OBJECTIVE – the aim of this study is to assess the reliability of systematic reviews as a research instrument. In particular, we wish to investigate the consistency of process and the stability of outcomes. METHOD – we compare the results of two independent reviews undertaken with a common research question. RESULTS – the two reviews find similar answers to the research question, although the means of arriving at those answers vary. CONCLUSIONS – in addressing a well-bounded research question, groups of researchers with similar domain experience can arrive at the same review outcomes, even though they may do so in different ways. This provides evidence that, in this context at least, the systematic review is a robust research method.

    A review of the characteristics of 108 author-level bibliometric indicators

    An increasing demand for bibliometric assessment of individuals has led to a growth of new bibliometric indicators, as well as new variants or combinations of established ones. The aim of this review is to contribute objective facts about the usefulness of bibliometric indicators of the effects of publication activity at the individual level. This paper reviews 108 indicators that can potentially be used to measure performance at the individual author level, and examines the complexity of their calculations in relation to what they are supposed to reflect and the ease of end-user application. Comment: to be published in Scientometrics, 201
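    As a concrete illustration of the kind of author-level indicator such a review covers, a minimal sketch of the h-index (one of the most established indicators; the function name and sample citation counts below are invented for illustration):

    ```python
    def h_index(citations):
        """Compute the h-index: an author has index h if h of their
        papers have at least h citations each."""
        counts = sorted(citations, reverse=True)  # most-cited first
        h = 0
        for i, c in enumerate(counts, start=1):
            if c >= i:      # the i-th ranked paper has >= i citations
                h = i
            else:
                break
        return h

    # e.g. citation counts for five papers
    print(h_index([10, 8, 5, 4, 3]))  # prints 4
    ```

    Even this simple indicator shows the trade-off the review examines: the calculation is easy for an end user, but it compresses an entire publication record into one number.
    
    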

    Learning to Rank Academic Experts in the DBLP Dataset

    Expert finding is an information retrieval task concerned with the search for the most knowledgeable people with respect to a specific topic, based on documents that describe people's activities. The task involves taking a user query as input and returning a list of people sorted by their level of expertise with respect to the user query. Despite recent interest in the area, the current state-of-the-art techniques lack principled approaches for optimally combining different sources of evidence. This article proposes two frameworks for combining multiple estimators of expertise. These estimators are derived from textual contents, from the graph structure of the citation patterns for the community of experts, and from profile information about the experts. More specifically, this article explores the use of supervised learning-to-rank methods, as well as rank aggregation approaches, for combining all of the estimators of expertise. Several supervised learning algorithms, representative of the pointwise, pairwise, and listwise approaches, were tested, and various state-of-the-art data fusion techniques were also explored for the rank aggregation framework. Experiments performed on a dataset of academic publications from the Computer Science domain attest to the adequacy of the proposed approaches. Comment: Expert Systems, 2013. arXiv admin note: text overlap with arXiv:1302.041
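    To make the rank aggregation idea concrete, a minimal sketch of one classic data-fusion technique, Borda counting, applied to ranked lists from several expertise estimators (the estimator names and expert names below are invented for illustration; this is not the article's specific method):

    ```python
    def borda_aggregate(rankings):
        """Combine ranked lists by Borda count: each list awards
        n-1, n-2, ..., 0 points from top to bottom; candidates are
        then re-ranked by total points (ties broken alphabetically)."""
        scores = {}
        for ranking in rankings:
            n = len(ranking)
            for pos, expert in enumerate(ranking):
                scores[expert] = scores.get(expert, 0) + (n - 1 - pos)
        return sorted(scores, key=lambda e: (-scores[e], e))

    # Hypothetical rankings from three evidence sources
    text_rank    = ["alice", "bob", "carol"]   # textual content estimator
    graph_rank   = ["bob", "alice", "carol"]   # citation-graph estimator
    profile_rank = ["alice", "carol", "bob"]   # profile estimator

    print(borda_aggregate([text_rank, graph_rank, profile_rank]))
    # prints ['alice', 'bob', 'carol']
    ```

    Unlike the supervised learning-to-rank framework, this kind of unsupervised fusion needs no training data, which is one reason rank aggregation is a common baseline in expert finding.
    
    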

    Publishing on publishing: streams in the literature

    Purpose – The purpose of this paper is to propose and examine streams in the literature related to academic publishing, with a focus on works in marketing. The content of the works within each theme is then explored to identify what issues have been examined and their implications. Design/methodology/approach – The paper is a literature review, drawing on 30 years of research on academic publishing in marketing. The review is designed to cover the underlying issues examined, but is not designed to be comprehensive in terms of all the works exploring each stream of research. Findings – There are five main streams in the literature, focusing on: rankings; theory and knowledge development; how to publish; criticisms of publishing; and other issues. Within each stream, a number of sub-areas are explored. The works tend to be fragmented, and there is generally limited in-depth qualitative research within streams exploring the underlying assumptions on which publishing is based. Research limitations/implications – The focus of the research is on the streams of works, rather than the findings within each stream, and future research could explore each of these streams and sub-streams in more detail. Generally, the works appear to be becoming increasingly sophisticated in terms of their analysis, which is only possible with the new technologies available. New metrics proposed in the literature that can be used to better understand publishing, and additional qualitative research exploring some of the basic assumptions, could also be explored. Practical implications – The research suggests that some streams with regard to academic publishing may have reached saturation, and future publishing in these areas will need to be innovative in its approach and analysis if these works are to be published. Originality/value – This paper is the first attempt to develop streams within the literature on academic publishing in marketing, and thus draws together a diverse cross-section of works. It provides suggestions for directions for future research in the various streams.

    Quantifying Success in Science: An Overview

    Quantifying success in science plays a key role in guiding funding allocations, recruitment decisions, and rewards. Recently, significant progress has been made towards quantifying success in science, but a detailed analysis and summary of this work is still lacking, which remains a practical issue. The literature reports the factors influencing scholarly impact, together with evaluation methods and indices aimed at overcoming this weakness. We focus on categorizing and reviewing current developments in evaluation indices of scholarly impact, including paper impact, scholar impact, and journal impact. In addition, we summarize the issues with existing evaluation methods and indices, investigate the open issues and challenges, and provide possible solutions, including the pattern of collaboration impact, unified evaluation standards, implicit success factor mining, dynamic academic network embedding, and scholarly impact inflation. This paper should help researchers obtain a broader understanding of quantifying success in science and identify some potential research directions.