Software Citation Implementation Challenges
The main output of the FORCE11 Software Citation working group
(https://www.force11.org/group/software-citation-working-group) was a paper on
software citation principles (https://doi.org/10.7717/peerj-cs.86) published in
September 2016. This paper laid out a set of six high-level principles for
software citation (importance, credit and attribution, unique identification,
persistence, accessibility, and specificity) and discussed how they could be
used to implement software citation in the scholarly community. In a series of
talks and other activities, we have promoted software citation using these
increasingly accepted principles. At the time the initial paper was published,
we also provided guidance and examples on how to make software citable, though
we now realize there are unresolved problems with that guidance. The purpose of
this document is to provide an explanation of current issues impacting
scholarly attribution of research software, organize updated implementation
guidance, and identify where best practices and solutions are still needed.
Software citation principles
Software is a critical part of modern research, and yet there is little support across the scholarly ecosystem for its acknowledgement and citation. Inspired by the activities of the FORCE11 working group focused on data citation, this document summarizes the recommendations of the FORCE11 Software Citation Working Group and its activities between June 2015 and April 2016. Based on a review of existing community practices, the goal of the working group was to produce a consolidated set of citation principles that may encourage broad adoption of a consistent policy for software citation across disciplines and venues. Our work is presented here as a set of software citation principles, a discussion of the motivations for developing the principles, reviews of existing community practice, and a discussion of the requirements these principles would place upon different stakeholders. Working examples and possible technical solutions for how these principles can be implemented will be discussed in a separate paper.
Software Citation in Theory and Practice
In most fields, computational models and data analysis have become a significant part of how research is performed, in addition to the more traditional theory and experiment. Mathematics is no exception to this trend. While the system of publication and credit for theory and experiment (journals and books, often monographs) has developed into an expected part of the culture, shaping how research is shared and how candidates for hiring and promotion are evaluated, software (and data) do not have the same history. A group working as part of the FORCE11 community developed a set of principles for software citation that fit software into the journal citation system and allow software to be published and then cited; over 50,000 DOIs have now been issued for software. However, some challenges remain, including: promoting the idea of software citation to developers and users; collaborating with publishers to ensure that systems collect and retain required metadata; ensuring that the rest of the scholarly infrastructure, particularly indexing sites, includes software; working with communities so that software efforts "count"; and understanding how best to cite software that has not been published.
Research Software Sustainability and Citation
Software citation contributes to achieving software sustainability in two ways: it provides an impact metric that incentivizes stakeholders to make software sustainable, and it provides references to software used in research, which can be reused and adapted to become sustainable. While software citation faces a host of technical and social challenges, community initiatives have defined the principles of software citation and are working on implementing solutions.
What RSEs should know about software citation
As RSEs, we create and maintain the software that enables research across domains. While guidelines, policies, funding lines and good practices around research software are emerging, we still find it hard to accumulate academic credit for our software work. One solution to this issue is the establishment of a practice of software citation that bootstraps the existing citation system. But how does this work, in theory and in practice? In this talk, I introduce the citation problems that software faces and the basic principles of software citation. The talk also outlines what steps we can take as RSEs to get credit for our work, and to enable better research (software) practice more generally.
Making research software FAIR and citable
There is growing acknowledgment that software constitutes a valid research output. As such, it must be made available under the FAIR Principles for Research Software (FAIR4RS). Software publication with rich metadata allows researchers to provide their software in such a way, thus enabling better reproducibility of research results obtained using the software, more credit for research software creators through citation, and improved sustainability. Currently, FAIR software publication is not common practice due to a lack of incentives, clearly defined processes, and publication support through tools and infrastructures. This talk presents the context of, and practical approaches to, automated FAIR4RS software publication: improving citation metadata for research software with the Citation File Format, and automating software publication with rich metadata using the HERMES workflow for continuous integration systems.
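To make the Citation File Format mention concrete, here is a minimal sketch of what a CITATION.cff file can look like. All field values (title, author, version, DOI, dates, URLs) are hypothetical placeholders; the full schema is documented at https://citation-file-format.github.io.

```yaml
# Minimal CITATION.cff sketch; every value below is a placeholder.
cff-version: 1.2.0
message: "If you use this software, please cite it using the metadata below."
title: "Example Research Tool"            # hypothetical project name
type: software
authors:
  - family-names: "Doe"
    given-names: "Jane"
    orcid: "https://orcid.org/0000-0000-0000-0000"  # placeholder ORCID
version: "1.0.0"
doi: "10.5281/zenodo.0000000"             # placeholder DOI
date-released: "2024-01-01"
repository-code: "https://example.org/example-tool"
```

A file of this kind, kept in the repository root, is the sort of rich citation metadata that automated publication workflows such as HERMES, as described in the abstract above, can pick up and enrich.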
Accounting for the Uncertainty in the Evaluation of Percentile Ranks
In a recent paper entitled "Inconsistencies of Recently Proposed Citation Impact Indicators and how to Avoid Them," Schreiber (2012, at arXiv:1202.3861) proposed (i) a method to assess tied ranks consistently and (ii) fractional attribution to percentile ranks in the case of relatively small samples (e.g., for n < 100). Schreiber's solution to the problem of how to handle tied ranks is convincing, in my opinion (cf. Pudovkin & Garfield, 2009). The fractional attribution, however, is computationally intensive and cannot be done manually for even moderately large batches of documents. Schreiber attributed scores fractionally to the six percentile rank classes used in the Science and Engineering Indicators of the U.S. National Science Board, and thus missed, in my opinion, the point that fractional attribution at the level of hundred percentiles (or, equivalently, quantiles as the continuous random variable) is only a linear, and therefore much less complex, problem. Given the quantile values, the non-linear attribution to the six classes or any other evaluation scheme is then a question of aggregation. A new routine based on these principles (including Schreiber's solution for tied ranks) is made available as software for the assessment of documents retrieved from the Web of Science (at http://www.leydesdorff.net/software/i3).
Comment: Journal of the American Society for Information Science and Technology (in press).
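The quantile computation described above is straightforward to sketch. The following is a minimal illustration, not the i3 routine distributed at the URL above: each document's quantile is derived from its mid-rank in the batch, with tied documents receiving the average rank of their group (one reading of Schreiber's tied-rank correction), and aggregation into six classes is then a simple threshold lookup. The class boundaries below follow the commonly cited top-1/5/10/25/50% scheme and are included purely for illustration.

```python
# Minimal sketch of quantile attribution with averaged tied ranks,
# followed by aggregation into six percentile rank classes.

def quantiles(citation_counts):
    """Map each citation count to a quantile in [0, 100).

    Tied documents get the average rank of their group, so equally
    cited documents receive identical quantile values.
    """
    n = len(citation_counts)
    qs = []
    for c in citation_counts:
        below = sum(1 for x in citation_counts if x < c)
        tied = sum(1 for x in citation_counts if x == c)  # includes this document
        avg_rank = below + (tied + 1) / 2.0               # average rank of the tied group
        qs.append(100.0 * (avg_rank - 0.5) / n)           # mid-rank scaled to 0..100
    return qs

def six_class(q):
    """Aggregate a quantile into six classes (top 1/5/10/25/50%, rest).

    Boundaries follow the widely used Science and Engineering
    Indicators scheme; any other evaluation scheme is a different lookup.
    """
    for boundary, label in [(99, 6), (95, 5), (90, 4), (75, 3), (50, 2)]:
        if q >= boundary:
            return label
    return 1

counts = [0, 2, 2, 5, 30]
print([round(q, 1) for q in quantiles(counts)])   # [10.0, 40.0, 40.0, 70.0, 90.0]
print([six_class(q) for q in quantiles(counts)])  # [1, 1, 1, 2, 4]
```

The linear step (quantile values) and the non-linear step (class aggregation) are cleanly separated here, which is exactly the decomposition the abstract argues for.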
Impact Measurement, Transparency & Open Science (Impactmessung, Transparenz & Open Science)
Objective — The article discusses whether it is sufficient to reduce Open Science to the free availability of objects, for example scientific publications (Open Access), or whether the impact metrics that steer science and scientists must also be re-modeled under open science principles.
Methods — Well-known citation-based impact metrics and newer, alternative metrics are reviewed against the following criteria to assess whether they are open metrics: scientific verifiability and modeling, transparency in their construction and methodology, and consistency with the principles of open knowledge.
Results — Neither citation-based impact metrics nor alternative metrics can be labeled open metrics. They all lack scientific foundation, transparency, and verifiability.
Conclusions — Since neither citation-based impact metrics nor alternative metrics can be considered open, it seems necessary to draw up a catalog of criteria for open metrics. This catalog includes aspects such as justification and documentation of the selection of data sources, open availability of the data underlying the calculation of impact scores, options to retrieve the data automatically via software interfaces, and logical, scientific, and documented justification of the formulas and parameters used to calculate impact values.
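As one concrete illustration of the "retrievable automatically via software interfaces" criterion, citation counts can be fetched from an openly documented REST API. The sketch below queries the public Crossref API for its is-referenced-by-count field; the DOI is simply the software citation principles paper cited earlier in this listing, used here as an example.

```python
# Sketch: retrieving a citation count over an open software interface
# (the public Crossref REST API).
import json
import urllib.request

def crossref_citation_count(doi):
    """Return Crossref's 'is-referenced-by-count' for a DOI."""
    url = f"https://api.crossref.org/works/{doi}"
    with urllib.request.urlopen(url) as resp:
        record = json.load(resp)
    return record["message"]["is-referenced-by-count"]

print(crossref_citation_count("10.7717/peerj-cs.86"))
```

Whether such a source would qualify as an open metric in the article's sense still depends on the remaining criteria, such as documented selection of data sources and open availability of the underlying data.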