Research assessment in the humanities: problems and challenges
Research assessment is taking on a new role in the governance of universities and research institutions. The evaluation of results is evolving from a simple tool for resource allocation into an instrument of policy design. In this respect, "measuring" implies a different approach to quantitative aspects as well as to the estimation of qualitative criteria that are difficult to define. Bibliometrics became popular, in spite of its limits, precisely because it offers a simple solution to complex problems: the theory behind it is not especially robust, but available results confirm the method as a reasonable trade-off between costs and benefits.
Indeed, in some fields of science quantitative indicators are very difficult to apply because of the lack of databases and data; in short, the existing information is not credible. The humanities and social sciences (HSS) need a coherent methodology for assessing research outputs, but current projects are not very convincing.
Creating a shared ranking of journals by the value of their contents, at the institutional, national or European level, is not enough: it raises the same biases as in the hard sciences, and it does not address the variety of output types or their different, much longer times of creation and dissemination.
The web (and web 2.0) represents a revolution in the communication of research results, particularly in the HSS, and their evaluation has to take this change into account. Furthermore, the growth of open access initiatives (the green and gold roads) offers a large quantity of transparent, verifiable data structured according to international standards, which allows comparability beyond national limits and, above all, is independent of commercial agents.
The pilot scheme carried out at the University of Milan for the Faculty of Humanities demonstrated that it is possible to build quantitative, on average more robust, indicators that can serve as a proxy for research production and productivity even in the HSS.
How FAIR can you get? Image Retrieval as a Use Case to calculate FAIR Metrics
A large number of services for research data management strive to adhere to the FAIR guiding principles for scientific data management and stewardship. To evaluate these services and to indicate possible improvements, use-case-centric metrics are needed as an addendum to existing metric frameworks. The retrieval of spatially and temporally annotated images can exemplify such a use case. The prototypical implementation indicates that currently no research data repository achieves the full score. Suggestions on how to increase the score include automatic annotation based on the metadata inside the image file and support for content negotiation to retrieve the images. These and other insights can lead to an improvement of data integration workflows, resulting in a better and more FAIR approach to managing research data.
Comment: This is a preprint for a paper accepted for the 2018 IEEE conference.
Experiences in deploying metadata analysis tools for institutional repositories
Current institutional repository software provides few tools to help metadata librarians understand and analyze their collections. In this article, we compare and contrast metadata analysis tools that were developed simultaneously, but independently, at two New Zealand institutions during a period of national investment in research repositories: the Metadata Analysis Tool (MAT) at The University of Waikato, and the Kiwi Research Information Service (KRIS) at the National Library of New Zealand.
The tools have many similarities: they are convenient, online, on-demand services that harvest metadata using OAI-PMH; they were developed in response to feedback from repository administrators; and they both help pinpoint specific metadata errors as well as generate summary statistics. They also have significant differences: one is a dedicated tool whereas the other is part of a wider access tool; one gives a holistic view of the metadata whereas the other looks for specific problems; one seeks patterns in the data values whereas the other checks that those values conform to metadata standards. Both tools work in a complementary manner to existing Web-based administration tools. We have observed that discovery and correction of metadata errors can be achieved quickly by switching Web browser views from the analysis tool to the repository interface, and back. We summarize the findings from both tools' deployment into a checklist of requirements for metadata analysis tools.
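As an illustration of the shared workflow both tools rely on, the Python sketch below harvests Dublin Core records over OAI-PMH, pinpoints a specific error (records missing dc:date) and builds a summary statistic (field frequencies). The endpoint is invented, resumption-token paging is omitted, and neither MAT nor KRIS necessarily works this way internally.

    # Hedged sketch of the common pattern: OAI-PMH harvest, then analysis.
    from collections import Counter
    import requests
    import xml.etree.ElementTree as ET

    OAI_ENDPOINT = "https://repository.example.ac.nz/oai"  # hypothetical
    OAI_NS = "{http://www.openarchives.org/OAI/2.0/}"
    DC_NS = "{http://purl.org/dc/elements/1.1/}"

    resp = requests.get(OAI_ENDPOINT,
                        params={"verb": "ListRecords", "metadataPrefix": "oai_dc"},
                        timeout=30)
    root = ET.fromstring(resp.content)

    field_counts = Counter()   # summary statistics
    missing_date = []          # a specific, pinpointable error
    for record in root.iter(OAI_NS + "record"):
        identifier = record.findtext(f".//{OAI_NS}identifier", default="?")
        fields = [el.tag.replace(DC_NS, "") for el in record.iter()
                  if el.tag.startswith(DC_NS)]
        field_counts.update(fields)
        if "date" not in fields:
            missing_date.append(identifier)

    print("field frequencies:", dict(field_counts))
    print("records missing dc:date:", missing_date)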
Assigning Creative Commons Licenses to Research Metadata: Issues and Cases
This paper discusses the problem of the lack of clear licensing and transparency of usage terms and conditions for research metadata. Making research data connected, discoverable and reusable are the key enablers of the new data revolution in research. We discuss how the lack of transparency hinders discovery of research data and makes it disconnected from publications and other trusted research outcomes. In addition, we discuss the application of Creative Commons licenses to research metadata, and provide some examples of the applicability of this approach to internationally known data infrastructures.
Comment: 9 pages. Submitted to the 29th International Conference on Legal Knowledge and Information Systems (JURIX 2016), Nice (France), 14-16 December 2016.
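By way of illustration, attaching a Creative Commons license to a metadata record can be as simple as adding a machine-readable license URI field. The record below is invented, and CC0 appears only as one commonly discussed option for metadata, not as the paper's recommendation.

    # Invented example record: the point is the machine-readable license URI
    # attached to the metadata itself, so harvesters know the usage terms.
    import json

    record = {
        "identifier": "doi:10.1234/example",   # hypothetical dataset DOI
        "title": "Example survey dataset",
        "creator": "Example Research Group",
        # License governing reuse of this metadata record:
        "metadata_license": "https://creativecommons.org/publicdomain/zero/1.0/",
    }

    print(json.dumps(record, indent=2))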
Tracking citations and altmetrics for research data: Challenges and opportunities
Methods for determining research quality have long been debated, with little lasting agreement on standards, leading to the emergence of alternative metrics. Altmetrics are a useful supplement to traditional citation metrics, reflecting a variety of measurement points that give different perspectives on how a dataset is used and by whom. A positive development is the integration of a number of research datasets into the ISI Data Citation Index, making datasets searchable and linking them to published articles. Yet access to data resources, and tracking the resulting altmetrics, depends on specific qualities of the datasets and the systems where they are archived. Though research on altmetrics use is growing, the lack of standardization across datasets and system architectures undermines its generalizability. Without some standards, stakeholders' adoption of altmetrics will be limited.
Protocols for Scholarly Communication
CERN, the European Organization for Nuclear Research, has operated an institutional preprint repository for more than 10 years. The repository contains over 850,000 records, of which more than 450,000 are full-text OA preprints, mostly in the field of particle physics, and it is integrated with the library's holdings of books, conference proceedings, journals and other grey literature. In order to encourage effective propagation of and open access to scholarly material, CERN is implementing a range of innovative library services in its document repository: automatic keywording, reference extraction, collaborative management tools and bibliometric tools. Some of these services, such as user reviewing and automatic metadata extraction, could make up an interesting testbed for future publishing solutions and certainly provide an exciting environment for e-science possibilities. The future protocol for scientific communication should naturally guide authors towards OA publication, and CERN wants to help reach a full open access publishing environment for the particle physics community and the related sciences in the next few years.
Comment: 8 pages, to appear in Library and Information Systems in Astronomy.
Digital library economics: aspects and prospects
A review of the issues surrounding the economics of, and economic justification for, digital libraries.
Invest to Save: Report and Recommendations of the NSF-DELOS Working Group on Digital Archiving and Preservation
Digital archiving and preservation are important areas for research and development, but there is no agreed-upon set of priorities or coherent plan for research in this area. Research projects in this area tend to be small and driven by particular institutional problems or concerns. As a consequence, proposed solutions from experimental projects and prototypes tend not to scale to millions of digital objects, nor do the results from disparate projects readily build on each other. It is also unclear whether it is worthwhile to seek general solutions or whether different strategies are needed for different types of digital objects and collections. The lack of coordination in both research and development means that in some areas researchers are reinventing the wheel while other areas are neglected.
Digital archiving and preservation is an area that will benefit from an exercise in analysis, priority setting, and planning for future research. The Working Group aims to survey current research activities, identify gaps, and develop a white paper proposing future research directions in the area of digital preservation. Potential areas for research include repository architectures and interoperability among digital archives; automated tools for capture, ingest, and normalization of digital objects; and harmonization of preservation formats and metadata. There may also be opportunities for development of commercial products in the areas of mass storage systems, repositories and repository management systems, and data management software and tools.
DataCite as a novel bibliometric source: Coverage, strengths and limitations
This paper explores the characteristics of DataCite to determine its possibilities and potential as a new bibliometric data source for analyzing the scholarly production of open data. Open science and the increasing data sharing requirements from governments, funding bodies, institutions and scientific journals have led to a pressing demand for the development of data metrics. As a very first step towards reliable data metrics, we need to better comprehend the limitations and caveats of the information provided by sources of open data. In this paper, we critically examine records downloaded from DataCite's OAI API and elaborate a series of recommendations regarding the use of this source for bibliometric analyses of open data. We highlight issues related to metadata incompleteness, lack of standardization, and ambiguous definitions of several fields. Despite these limitations, we emphasize DataCite's value and potential to become one of the main sources for data metrics development.
Comment: Paper accepted for publication in Journal of Informetrics.
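A minimal sketch of the kind of completeness check such an examination involves is given below: fetch one batch of Dublin Core records from DataCite's OAI interface and count how often a few fields are present. The endpoint reflects DataCite's public OAI-PMH service as we understand it, the field list is arbitrary, and pagination via resumption tokens is omitted; this is not the paper's actual methodology.

    # Sketch: measure metadata completeness in one batch of DataCite records.
    # Endpoint assumed from DataCite's public OAI-PMH service; verify before use.
    import requests
    import xml.etree.ElementTree as ET

    OAI_ENDPOINT = "https://oai.datacite.org/oai"
    OAI_NS = "{http://www.openarchives.org/OAI/2.0/}"
    DC_NS = "{http://purl.org/dc/elements/1.1/}"
    CHECKED_FIELDS = ["creator", "date", "publisher", "subject"]

    resp = requests.get(OAI_ENDPOINT,
                        params={"verb": "ListRecords", "metadataPrefix": "oai_dc"},
                        timeout=60)
    records = list(ET.fromstring(resp.content).iter(OAI_NS + "record"))

    for field in CHECKED_FIELDS:
        have = sum(1 for r in records if r.find(f".//{DC_NS}{field}") is not None)
        share = 100 * have / len(records) if records else 0
        print(f"{field:<10} present in {have}/{len(records)} records ({share:.0f}%)")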