40 research outputs found

    Open Access Scientometrics and the UK Research Assessment Exercise

    Scientometric predictors of research performance need to be validated by showing that they correlate highly with the external criterion they are trying to predict. The UK Research Assessment Exercise (RAE) -- together with the growing movement toward making the full texts of research articles freely available on the web -- offers a unique opportunity to test and validate a wealth of old and new scientometric predictors through multiple regression analysis: publications, journal impact factors, citations, co-citations, citation chronometrics (age, growth, latency to peak, decay rate), hub/authority scores, h-index, prior funding, student counts, co-authorship scores, endogamy/exogamy, textual proximity, downloads/co-downloads and their chronometrics, and more can all be tested and validated jointly, discipline by discipline, against their RAE panel rankings in the forthcoming parallel panel-based and metric RAE in 2008. The weight of each predictor can be calibrated to maximize the joint correlation with the rankings, as sketched below. Open Access Scientometrics will provide powerful new means of navigating, evaluating, predicting and analyzing the growing Open Access database, as well as powerful incentives for making it grow faster.
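
    As a minimal sketch of the validation method the abstract describes (regressing candidate metrics on panel rankings and reading off calibrated weights), the following toy Python example uses invented numbers; the three metrics chosen and all values are hypothetical, not real RAE data:

```python
# Sketch: validating scientometric predictors against panel rankings
# via multiple regression, as the abstract proposes. All numbers below
# are invented placeholders, not real RAE results.
import numpy as np
from sklearn.linear_model import LinearRegression

# Candidate predictors per department: citations, h-index, downloads
X = np.array([
    [1200, 35,  9000],
    [ 400, 18,  2500],
    [2100, 48, 15000],
    [ 800, 25,  5600],
])
# External criterion: RAE panel score for the same departments
y = np.array([6.1, 3.2, 6.8, 4.5])

model = LinearRegression().fit(X, y)
print("calibrated weights per metric:", model.coef_)
print("joint fit against the rankings (R^2):", model.score(X, y))
```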

    Gold Open Access Publishing Must Not Be Allowed to Retard the Progress of Green Open Access Self-Archiving

    Universal Open Access (OA) is fully within the reach of the global research community: research institutions and funders need merely mandate (green) OA self-archiving of the final, refereed drafts of all journal articles immediately upon acceptance for publication. The money to pay for gold OA publishing will only become available if universal green OA eventually makes subscriptions unsustainable. Paying pre-emptively for gold OA today, without first having mandated green OA, not only squanders scarce money but also delays the attainment of universal OA.

    Research assessment in the humanities: problems and challenges

    Research assessment is coming to play a new role in the governance of universities and research institutions: the evaluation of results is evolving from a simple tool for resource allocation into an instrument of policy design. In this respect, "measuring" implies a different approach to quantitative aspects, as well as the estimation of qualitative criteria that are difficult to define. Bibliometrics became popular, in spite of its limits, precisely because it offers a simple solution to complex problems. The theory behind it is not especially robust, but the available results confirm the method as a reasonable trade-off between costs and benefits. There are, however, fields of science where quantitative indicators are very difficult to apply owing to the lack of databases and data -- in short, the credibility of the existing information. The humanities and social sciences (HSS) need a coherent methodology for assessing research outputs, but current projects are not very convincing. Creating a shared ranking of journals by the value of their contents, whether at the institutional, national or European level, is not enough: it reproduces the same biases as in the hard sciences, and it does not address the variety of output types or the different, much longer timescales of creation and dissemination. The web (and web 2.0) represents a revolution in the communication of research results, especially in the HSS, and their evaluation has to take this change into account. Furthermore, the growth of open access initiatives (the green and gold roads) offers a large quantity of transparent, verifiable data structured according to international standards, allowing comparability beyond national borders and, above all, independence from commercial agents. The pilot scheme carried out at the University of Milan for the Faculty of Humanities demonstrated that it is possible to build quantitative indicators that are on average more robust and could serve as a proxy for research production and productivity even in the HSS.

    Editorial


    Mandates and Metrics: How Open Repositories Enable Universities to Manage, Measure and Maximise their Research Assets

    PPT presentation prepared to inform universities' policy-making on open access self-archiving.

    Research Assessment Using Bibliometric and Scientometric Measures: The Good, the Bad, and the Ugly

    Presentation at the 3rd International Conference "Scientific Communication in the Digital Age" (March 10-12, 2015, NaUKMA).

    Soft peer review: social software and distributed scientific evaluation

    The debate on the prospects of peer review in the Internet age and the increasing criticism leveled against the dominant role of impact-factor indicators are calling for new measurable criteria to assess scientific quality. Usage-based metrics offer a new avenue for scientific quality assessment, but they face the same risks as first-generation search engines, which used unreliable metrics (such as raw traffic data) to estimate content quality. In this article I analyze the contribution that social bookmarking systems can make to the problem of usage-based metrics for scientific evaluation. I suggest that collaboratively aggregated metadata may help fill the gap between traditional citation-based criteria and raw usage factors. I submit that bottom-up, distributed evaluation models such as those afforded by social bookmarking will challenge more traditional quality-assessment models in terms of coverage, efficiency and scalability, and that services aggregating user-related quality indicators for online scientific content will come to occupy a key function in the scholarly communication system.
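
    To illustrate the kind of bottom-up, collaboratively aggregated indicator the abstract has in mind, here is a toy sketch that scores papers from hypothetical bookmarking records; the record format, the tag-diversity bonus and its 0.5 weight are all invented for illustration, not taken from the article:

```python
# Illustrative only: a toy usage indicator built from social-bookmarking
# records, counting distinct bookmarkers rather than raw bookmark events
# (the analogue of not trusting raw traffic data).
from collections import defaultdict

# Hypothetical records: (paper_id, user_id, tag)
bookmarks = [
    ("paper-1", "alice", "peer-review"),
    ("paper-1", "bob",   "metrics"),
    ("paper-1", "alice", "metrics"),
    ("paper-2", "carol", "metrics"),
]

users = defaultdict(set)
tags = defaultdict(set)
for paper, user, tag in bookmarks:
    users[paper].add(user)
    tags[paper].add(tag)

# Score: distinct bookmarkers plus a small bonus for tag diversity.
for paper in sorted(users):
    score = len(users[paper]) + 0.5 * len(tags[paper])
    print(paper, round(score, 1))
```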

    Publishing a Scorecard for Evaluating the Use of Open-Access Journals Using Linked Data Technologies

    Open access journals collect, preserve and publish scientific information in digital form, but it is still difficult, for users and digital libraries alike, to evaluate the usage and impact of this kind of publication. The problem can be tackled by introducing Key Performance Indicators (KPIs), which make it possible to measure a journal's performance objectively against the objectives it pursues. In addition, Linked Data technologies offer an opportunity to enrich the information provided by KPIs by connecting them to relevant datasets across the web. This paper describes a process for developing and publishing a scorecard on the semantic web, based on the ISO 2789:2013 standard and built with Linked Data technologies so that it can be linked to related datasets; methodological guidelines with related activities are also presented. The proposed process was applied to a university's open journal system and comprised defining KPIs linked to the institutional strategies; extracting, cleaning and loading data from the sources into a data mart; transforming the data into RDF (Resource Description Framework); and publishing it through a SPARQL endpoint using the OpenLink Virtuoso application. The RDF Data Cube vocabulary was used to publish the multidimensional data on the web, and the visualization was built with CubeViz, a faceted browser that presents the KPIs in interactive charts. This work has been partially supported by the Prometeo Project of SENESCYT, Ecuadorian Government.
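
    As a rough sketch of the publication step, the following shows how a single KPI observation could be expressed with the RDF Data Cube vocabulary using Python's rdflib; the dataset URI and the ex:refPeriod / ex:downloads properties are hypothetical stand-ins, not the paper's actual schema:

```python
# Sketch: one KPI expressed as a qb:Observation with rdflib.
# The example.org URIs and property names are illustrative assumptions.
from rdflib import Graph, Literal, Namespace, RDF
from rdflib.namespace import XSD

QB = Namespace("http://purl.org/linked-data/cube#")
EX = Namespace("http://example.org/kpi/")

g = Graph()
g.bind("qb", QB)
g.bind("ex", EX)

obs = EX["obs-2013-downloads"]
g.add((obs, RDF.type, QB.Observation))
g.add((obs, QB.dataSet, EX["journal-usage"]))  # the cube it belongs to
g.add((obs, EX.refPeriod, Literal("2013", datatype=XSD.gYear)))
g.add((obs, EX.downloads, Literal(15230, datatype=XSD.integer)))

print(g.serialize(format="turtle"))
```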

    An Approach to Publish Statistics from Open-Access Journals Using Linked Data Technologies

    The Semantic Web encourages digital libraries, including open access journals, to collect, link and share their data across the web so that it can be processed by machines and humans alike, yielding better queries and results. Linked Data technologies enable the connection of structured data across the web using the principles and recommendations set out by Tim Berners-Lee in 2006. Many universities develop knowledge through scholarship and research under open access policies and disseminate it in a variety of ways; open access journals collect, preserve and publish scientific information in digital form through a peer-review process. Evaluating the usage of such publications requires statistics that are linked to external resources, giving richer information about the resources and their relationships. Statistics expressed in a data mart make it easy to query the history of journal usage by several criteria, and linking this data to other datasets yields further information, such as the topics of the research, the origin of the authors, the relation to national plans, and the relations to study curricula. This paper reports a process for publishing an open access journal data mart on the web using Linked Data technologies in such a way that it can be linked to related datasets; methodological guidelines with related activities are also presented. The proposed process was applied by extracting statistical data from a university open journal system and publishing it at a SPARQL endpoint using the open source edition of OpenLink Virtuoso; the use of open standards throughout facilitates the creation, development and exploitation of knowledge. The RDF Data Cube vocabulary was used as the model for publishing the multidimensional data on the web, and the visualization was built with CubeViz, a faceted browser that filters observations for interactive presentation in charts. The proposed process helps to publish statistical datasets in a straightforward way. This work has been partially supported by the Prometeo Project of SENESCYT, Ecuadorian Government.
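
    To give a sense of how the published statistics could then be consumed, here is a hedged sketch that queries a SPARQL endpoint with Python's SPARQLWrapper; the endpoint URL is a placeholder and the vocabulary reuses the hypothetical names from the previous sketch:

```python
# Sketch: querying published usage statistics from a SPARQL endpoint.
# The endpoint URL and the ex: vocabulary below are placeholders.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("http://example.org/sparql")  # hypothetical endpoint
sparql.setQuery("""
    PREFIX qb: <http://purl.org/linked-data/cube#>
    PREFIX ex: <http://example.org/kpi/>
    SELECT ?obs ?period ?downloads WHERE {
        ?obs a qb:Observation ;
             ex:refPeriod ?period ;
             ex:downloads ?downloads .
    } ORDER BY ?period
""")
sparql.setReturnFormat(JSON)
results = sparql.query().convert()
for row in results["results"]["bindings"]:
    print(row["period"]["value"], row["downloads"]["value"])
```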

    An Approach to Publish Scientific Data of Open-Access Journals Using Linked Data Technologies

    The Semantic Web encourages digital libraries, including open access journals, to collect, link and share their data across the web so that it can be processed by machines and humans alike, yielding better queries and results. Linked Data technologies enable the connection of related data across the web using the principles and recommendations set out by Tim Berners-Lee in 2006. Many universities develop knowledge through scholarship and research, apply open access policies to the knowledge they generate, and disseminate it in a variety of ways. Open access journals collect, preserve and publish scientific information in digital form, related to a particular academic discipline and vetted through peer review, and thus have great potential for exchanging and spreading their data by linking it to external resources; Linked Data can increase those benefits by enabling better queries about the resources and their relationships. This paper reports a process for publishing scientific data on the web using Linked Data technologies; methodological guidelines with related activities are also presented. The proposed process was applied by extracting data from a university Open Journal System and publishing it at a SPARQL endpoint using the open source edition of OpenLink Virtuoso; the use of open standards throughout facilitates the creation, development and exploitation of knowledge. This research has been partially supported by the Prometeo Project of SENESCYT, Ecuadorian Government, and by CEDIA (Consorcio Ecuatoriano para el Desarrollo de Internet Avanzado) through the project "Platform for publishing library bibliographic resources using Linked Data technologies".
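
    As a sketch of the extraction-and-publication step under stated assumptions, the following maps one hypothetical article record from an Open Journal System export to Dublin Core RDF with rdflib; the record fields and base URI are invented for illustration, not the project's actual data model:

```python
# Sketch: mapping one OJS article record to Dublin Core RDF.
# The record fields and example.org base URI are illustrative assumptions.
from rdflib import Graph, Literal, Namespace, RDF
from rdflib.namespace import DCTERMS, XSD

EX = Namespace("http://example.org/articles/")

record = {  # hypothetical row extracted from an OJS database
    "id": "123",
    "title": "Open Access in the Andes",
    "author": "J. Doe",
    "issued": "2014-06-01",
}

g = Graph()
g.bind("dcterms", DCTERMS)
art = EX[record["id"]]
g.add((art, RDF.type, DCTERMS.BibliographicResource))
g.add((art, DCTERMS.title, Literal(record["title"])))
g.add((art, DCTERMS.creator, Literal(record["author"])))
g.add((art, DCTERMS.issued, Literal(record["issued"], datatype=XSD.date)))

print(g.serialize(format="turtle"))
```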