17 research outputs found

    A SYSTEMATIC LITERATURE REVIEW OF DIGITAL LIBRARY DATABASES: RESEARCH TRENDS, METHODOLOGY, AND COVERAGE FIELDS

    The characterization of digital databases is needed to help academics identify scientific literature properly and efficiently. This literature review provides characterizations and descriptions of the research trends, methods, and coverage fields examined in research on scientific literature databases from 2007 to January 2019. By applying the specified inclusion and exclusion criteria, 54 relevant studies were selected for further analysis. The systematic literature review method was applied to analyze and identify previous studies on this topic. The selected primary literature shows an increasing trend of studies on scientific literature databases. In addition, four journals stand out as the most influential publication venues on this topic, namely the Journal of Informetrics, Journal of Cleaner Production, Asian Social Science, and Journal of Academic Librarianship, characterized by high productivity on the topic and SJR rankings in the Q1 range. Most of the studies were conducted on the Scopus digital database (41%), Web of Science (WoS) (38%), and Google Scholar (GS) (13%), with the remainder spread across other databases. The results also identify Scopus as the scientific database with the most varied coverage fields compared to other digital scientific literature databases. WoS is the digital database of scientific literature proven to hold papers with higher impact factors than the others, while GS holds the distinction of being the digital database with the largest collection.

    Understanding research productivity in the realm of evaluative scientometrics

    Research productivity arises from the combination of a variety of inputs (both tangible and intangible) that enable numerous outputs in varying degrees. Selecting appropriate metrics and translating them into practice through empirical design is a cumbersome task. A single indicator cannot work well in different situations, but selecting the 'most suitable' one from dozens of indicators is very confusing. Moreover, establishing benchmarks in research evaluation and implementing all-factor productivity is almost impossible. Understanding research productivity is, therefore, a quintessential need for performance evaluation in the realm of evaluative scientometrics. Many enterprises evaluate research performance with little understanding of the dynamics of research and its counterparts. Evaluative scientometrics endorses the measures that emerge during the decision-making process through relevant metrics and indicators expressing the organizational dynamics. Evaluation processes governed by counting, weighting, normalizing, and then comparing appear trustworthy.
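The counting, weighting, normalizing, and comparing steps mentioned in the abstract can be sketched on toy data. This is an illustrative sketch only, not the paper's method; all unit names, citation counts, and the field baseline below are hypothetical:

```python
# Toy "counting, weighting, normalizing, comparing" evaluation pipeline.
# Units and citation counts are hypothetical examples.

pubs = {
    "unit_A": [12, 3, 0, 45, 7],   # citation counts per paper
    "unit_B": [5, 5, 6, 4, 30],
}

FIELD_BASELINE = 10.0  # assumed field-average citations per paper

def score(citations):
    counted = len(citations)                      # counting: output volume
    weighted = sum(citations)                     # weighting: total citations
    normalized = (weighted / counted) / FIELD_BASELINE  # normalizing: field-relative impact
    return counted, round(normalized, 2)

results = {unit: score(cites) for unit, cites in pubs.items()}
# comparing: rank units by normalized (field-relative) impact
ranking = sorted(results, key=lambda u: results[u][1], reverse=True)
print(results, ranking)
```

The final normalization step is what makes units from citation-dense and citation-sparse fields comparable at all, which is why a raw count alone cannot serve as a single indicator.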

    Reframing the Debate on Quality v/s Quantity in Research Assessment

    The debate on quality versus quantity persists for methodological reasons. The two approaches contrast sharply in their epistemologies and run contrary to each other. A single composite indicator that reasonably senses both quality and quantity would be a significant step toward measuring performance. This paper evaluates the potency of the combined metric for quality assessment of publications (QP) used in India's National Institutional Ranking Framework (NIRF) exercise in 2020. It also suggests a potential improvement in quality measurement to obtain rankings more rationally, with finer tuning.
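The idea of a single composite indicator sensing both quality and quantity can be illustrated with a generic blend. This is not the NIRF QP formula (which the abstract does not give); the function name, the geometric-mean form, and the `alpha` parameter are all assumptions for illustration:

```python
# Hypothetical composite quality-quantity indicator (NOT the NIRF QP metric):
# a geometric-style blend of output volume and per-paper impact.

def composite(num_papers, total_citations, alpha=0.5):
    """alpha in [0, 1] tunes the emphasis: 0 = pure quantity, 1 = pure quality."""
    if num_papers == 0:
        return 0.0
    quantity = num_papers                      # volume of output
    quality = total_citations / num_papers     # citations per paper as a quality proxy
    return (quantity ** (1 - alpha)) * (quality ** alpha)

print(composite(100, 500))          # balanced blend of both dimensions
print(composite(100, 500, alpha=1))  # quality-only view: 5 citations/paper
```

A multiplicative blend like this penalizes an institution that is extreme on one dimension and weak on the other, which is the behavior the abstract asks of a composite indicator; an additive blend would instead let sheer volume compensate fully for low impact.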
