The tracking of paper usage data versus citation counts for Library Philosophy and Practice
With academic journals now widely published and distributed online, paper usage data has become a focus not only for publishers but also for many researchers, especially librarians. The main reason for this interest is that usage data is considered a measure of interest in published research and has been used as an early predictor of future citations to a paper. The aim of this study is to examine whether there is a relationship between paper usage data and citation counts for Library Philosophy and Practice between 2005 and 2020, taking into account the number of citations that papers cited ten or more times in the Scopus database received in the Google Scholar (GS) database over the same period. The analysis found statistically significant positive correlations between download counts and citation counts in the Scopus database and in the GS database (rS=0.261 and rP=0.310; rS=0.636 and rP=0.356; p<0.01), respectively. Similarly, there was a positive correlation between citations in the Scopus database and citations in the GS database (rS=0.581 and rP=0.812; p<0.01). Meanwhile, comparing single-author and multi-author papers, single-author papers received more citations on average in both the Scopus and GS databases, but the difference between the groups was not statistically significant (p>0.05). The findings were compared with studies in the literature, and suggestions were made for future research.
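The abstract above reports Spearman (rS) and Pearson (rP) correlation coefficients between download and citation counts. As a rough sketch of what such an analysis computes (using made-up per-paper counts, not the study's actual data), both coefficients can be implemented directly: Pearson's r measures linear association, and Spearman's rS is simply Pearson's r applied to the ranks of the values.

```python
def pearson(x, y):
    # Pearson's r: covariance of x and y divided by the product
    # of their standard deviations.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def ranks(x):
    # 1-based ranks, averaging the rank over tied values.
    order = sorted(range(len(x)), key=lambda i: x[i])
    r = [0.0] * len(x)
    i = 0
    while i < len(x):
        j = i
        while j + 1 < len(x) and x[order[j + 1]] == x[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    # Spearman's rS is Pearson's r computed on the ranks.
    return pearson(ranks(x), ranks(y))

# Hypothetical illustration only: invented download/citation counts.
downloads = [120, 450, 80, 300, 95, 610, 150, 220]
citations = [3, 12, 1, 9, 2, 15, 7, 4]
print(f"rP = {pearson(downloads, citations):.3f}")
print(f"rS = {spearman(downloads, citations):.3f}")
```

In practice such an analysis would typically be done with `scipy.stats.pearsonr` and `scipy.stats.spearmanr`, which also return the p-values reported in the abstract.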
Social media metrics for new research evaluation
This chapter approaches, both from a theoretical and practical perspective,
the most important principles and conceptual frameworks that can be considered
in the application of social media metrics for scientific evaluation. We
propose conceptually valid uses for social media metrics in research
evaluation. The chapter discusses frameworks and uses of these metrics as well
as principles and recommendations for the consideration and application of
current (and potentially new) metrics in research evaluation. Comment: Forthcoming in Glanzel, W., Moed, H.F., Schmoch, U., Thelwall, M.
(2018). Springer Handbook of Science and Technology Indicators. Springer.
Scholarly use of social media and altmetrics : a review of the literature
Social media has become integrated into the fabric of the scholarly communication system in fundamental
ways: principally through scholarly use of social media platforms and the promotion of new indicators on
the basis of interactions with these platforms. Research and scholarship in this area has accelerated since
the coining and subsequent advocacy for altmetrics, that is, research indicators based on social media
activity. This review provides an extensive account of the state-of-the art in both scholarly use of social
media and altmetrics. The review consists of two main parts: the first examines the use of social media in
academia, examining the various functions these platforms have in the scholarly communication process
and the factors that affect this use. The second part reviews empirical studies of altmetrics, discussing the
various interpretations of altmetrics, data collection and methodological limitations, and differences
according to platform. The review ends with a critical discussion of the implications of this transformation
in the scholarly communication system.
Report on the Second Workshop on Sustainable Software for Science: Practice and Experiences (WSSSPE2)
This technical report records and discusses the Second Workshop on Sustainable Software for Science: Practice and Experiences (WSSSPE2). The report includes a description of the alternative, experimental submission and review process, two workshop keynote presentations, a series of lightning talks, a discussion on sustainability, and five discussions from the topic areas of exploring sustainability; software development experiences; credit & incentives; reproducibility & reuse & sharing; and code testing & code review. For each topic, the report includes a list of tangible actions that were proposed and that would lead to potential change. The workshop recognized that reliance on scientific software is pervasive in all areas of world-leading research today. The workshop participants then explored different perspectives on the concept of sustainability. Key enablers of, and barriers to, sustainable scientific software were identified from their experiences. In addition, recommendations involving new requirements, such as software credit files and software prize frameworks, were outlined for improving practices in sustainable software engineering. There was also broad consensus that formal training in software development or engineering was rare among practitioners. Significant strides need to be made in building a sense of community via training in software and technical practices, in increasing the size and scope of such efforts, and in better integrating them directly into graduate education programs. Finally, journals can define and publish policies to improve reproducibility, and reviewers can insist that authors provide sufficient information and access to data and software to allow them to reproduce the results in the paper. Hence, a list of criteria is compiled for journals to provide to reviewers so as to make it easier to review software submitted for publication as a "Software Paper".
Mining Scholarly Publications for Research Evaluation
Scientific research can lead to breakthroughs that revolutionise society by solving long-standing problems. However, investment of public funds into research requires the ability to clearly demonstrate beneficial returns, accountability, and good management. At the same time, with the amount of scholarly literature rapidly expanding, recognising key research that presents the most important contributions to science is becoming increasingly difficult and time-consuming. This creates a need for effective and appropriate research evaluation methods. However, the question of how to evaluate the quality of research outcomes is very difficult to answer and despite decades of research, there is still no standard solution to this problem.
Given this growing need for research evaluation, it is increasingly important to understand how research should be evaluated, and whether the existing methods meet this need. However, the current solutions, which are predominantly based on counting the number of interactions in the scholarly communication network, are insufficient for a number of reasons. In particular, they struggle to capture many aspects of academic culture and often lag significantly behind current developments.
This work focuses on the evaluation of research publications and aims at creating new methods which utilise publication content. It studies the concept of research publication quality and methods for assessing the performance of new publication evaluation approaches, analyses and extends the existing methods, and, most importantly, presents a new class of metrics based on publication manuscripts. By bridging the fields of research evaluation and text- and data-mining, this work provides tools for analysing the outcomes of research and for relieving information overload in scholarly publishing.
Merit, Expertise and Measurement
Congress UPV Proceedings of the 21ST International Conference on Science and Technology Indicators
This is the book of proceedings of the 21st Science and Technology Indicators Conference that took place
in València (Spain) from 14th to 16th of September 2016.
The conference theme for this year, "Peripheries, frontiers and beyond", aimed to study the development and
use of Science, Technology and Innovation indicators in spaces that have not been the focus of current indicator
development, for example, in the Global South, or in the Social Sciences and Humanities.
The exploration of the margins and beyond proposed by the theme brought to the STI Conference an
interesting array of new contributors from a variety of fields and geographies.
This year's conference had a record 382 registered participants from 40 different countries: 23
European, 9 from the Americas, 4 from the Asia-Pacific region, and 4 from Africa and the Near East. About 26% of participants came from outside
of Europe.
There were also many participants (17%) from organisations outside academia including governments (8%),
businesses (5%), foundations (2%) and international organisations (2%). This is particularly important in a
field that is practice-oriented.
The chapters of the proceedings attest to the breadth of issues discussed: infrastructure, benchmarking
and use of innovation indicators, societal impact and mission-oriented research, mobility and careers, the social
sciences and the humanities, participation and culture, gender, and altmetrics, among others.
We hope that the diversity of this Conference has fostered productive dialogues and synergistic ideas and
made a contribution, small as it may be, to the development and use of indicators that, being more inclusive,
will foster a fairer world.