
    The Open Research Web: A Preview of the Optimal and the Inevitable

    The multiple online research impact metrics we are developing will allow the rich new database, the Research Web, to be navigated, analyzed, mined and evaluated in powerful new ways that were not even conceivable in the paper era – nor even in the online era, until the database and the tools became openly accessible for online use by all: by researchers, research institutions, research funders, teachers, students, and even by the general public that funds the research and for whose benefit it is being conducted. Which research is being used most? By whom? Which research is growing most quickly? In what direction? Under whose influence? Which research is showing immediate, short-term usefulness, which shows delayed, longer-term usefulness, and which has sustained, long-lasting impact? Which research and researchers are the most authoritative? Whose research most uses this authoritative research, and whose research is the authoritative research using? Which are the best pointers ("hubs") to the authoritative research? Is there any way to predict which research will have later citation impact (based on its earlier download impact), so that junior researchers can be given resources before their work has had a chance to make itself felt through citations? Can research trends and directions be predicted from the online database? Can text content be used to find and compare related research for influence, overlap and direction? Can a layman, unfamiliar with the specialized content of a field, be guided to the most relevant and important work? These are just a sample of the new online-age questions that the Open Research Web will begin to answer.
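    The "hubs" and "authorities" question above echoes Kleinberg's HITS algorithm, which iteratively scores nodes in a link graph. A minimal sketch on a toy citation graph (the paper names and links below are invented for illustration, not real data):

```python
# Minimal HITS (hubs and authorities) iteration on a toy citation graph.
# Edges point from a citing paper to the paper it cites; all names are invented.
citations = {
    "survey": ["methodA", "methodB"],   # a review pointing at two methods
    "methodA": ["foundations"],
    "methodB": ["foundations"],
    "foundations": [],
}

papers = list(citations)
hub = {p: 1.0 for p in papers}
auth = {p: 1.0 for p in papers}

for _ in range(50):
    # Authority score: sum of hub scores of the papers that cite you.
    auth = {p: sum(hub[q] for q in papers if p in citations[q]) for p in papers}
    # Hub score: sum of authority scores of the papers you cite.
    hub = {p: sum(auth[q] for q in citations[p]) for p in papers}
    # Normalise so the scores stay bounded across iterations.
    a_norm = sum(auth.values()) or 1.0
    h_norm = sum(hub.values()) or 1.0
    auth = {p: s / a_norm for p, s in auth.items()}
    hub = {p: s / h_norm for p, s in hub.items()}

best_authority = max(auth, key=auth.get)
```

    In this toy graph the much-cited "foundations" paper emerges as the top authority, while the citing review accumulates hub score – the same distinction the questions above draw between authoritative research and the best pointers to it.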

    BlogForever D3.2: Interoperability Prospects

    This report evaluates the interoperability prospects of the BlogForever platform. To this end, existing interoperability models are reviewed; a Delphi study is conducted to identify the crucial aspects of interoperability for web archives and digital libraries; technical interoperability standards and protocols are reviewed for their relevance to BlogForever; a simple approach for considering interoperability in specific usage scenarios is proposed; and a tangible approach is presented for developing a succession plan that would allow reliable transfer of content from the current digital archive to other digital repositories.

    DiSCmap : digitisation of special collections mapping, assessment, prioritisation. Final project report

    Traditionally, digitisation has been led by supply rather than demand. While end users are seen as a priority, they are not directly consulted about which collections they would like to have made available digitally, or why. This can be seen in a wide range of policy documents throughout the cultural heritage sector, where users are positioned as central but their preferences are assumed rather than solicited. Post-digitisation consultation with end users is equally rare. How are we to know that digitisation is serving the needs of the Higher Education community and is sustainable in the long term? The 'Digitisation in Special Collections: mapping, assessment and prioritisation' (DiSCmap) project, funded by the Joint Information Systems Committee (JISC) and the Research Information Network (RIN), aimed to:
    - Identify priority collections for potential digitisation housed within UK Higher Education's libraries, archives and museums, as well as faculties and departments.
    - Assess users' needs and demand for Special Collections to be digitised across all disciplines.
    - Produce a synthesis of available knowledge about users' needs with regard to the usability and format of digitised resources.
    - Provide recommendations for a strategic approach to digitisation within the wider context and activity of leading players in both the public and commercial sectors.
    The project was carried out jointly by the Centre for Digital Library Research (CDLR) and the Centre for Research in Library and Information Management (CERLIM) and took a collaborative approach to the creation of a user-driven digitisation prioritisation framework, encouraging participation and collective engagement between communities. Between September 2008 and March 2009 the DiSCmap project team asked over 1,000 users, including intermediaries (vocational users who take care of collections) and end users (university teachers, researchers and students), a variety of questions about which physical and digital Special Collections they make use of and what criteria they feel must be considered when selecting materials for digitisation. This was achieved through workshops, interviews and two online questionnaires. Although the data gathered from these activities has the limitation of reflecting only a partial view of priorities for digitisation - the view expressed by those institutions who volunteered to take part in the study - DiSCmap was able to develop:
    - a 'long list' of 945 collections nominated for digitisation by both intermediaries and end users from 70 HE institutions (see p. 21);
    - a framework of user-driven prioritisation criteria which could be used to inform current and future digitisation priorities (see p. 45);
    - a set of 'short lists' of collections which exemplify the application of user-driven criteria from the prioritisation framework to the long list (see Appendix X):
      o Collections nominated more than once by various groups of users.
      o Collections related to a specific policy framework, e.g. HEFCE's strategically important and vulnerable subjects for Mathematics, Chemistry and Physics.
      o Collections on specific thematic clusters.
      o Collections with the highest number of reasons for digitisation.
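    The move from a nominated long list to criteria-driven short lists can be illustrated with a toy scoring pass. The collection names, criteria and weights below are invented for illustration and are not DiSCmap's actual framework:

```python
# Toy user-driven prioritisation: rank nominated collections by nomination
# count plus weighted criteria. All names, criteria and weights are invented.
collections = [
    {"name": "Medieval manuscripts", "nominations": 5,
     "criteria": {"research_demand": 1, "fragility": 1}},
    {"name": "Victorian photographs", "nominations": 2,
     "criteria": {"research_demand": 1}},
    {"name": "Local press cuttings", "nominations": 1,
     "criteria": {"fragility": 1}},
]

# Hypothetical weights a prioritisation framework might assign per criterion.
weights = {"research_demand": 2.0, "fragility": 1.5}

def priority(collection):
    """Nomination count plus the weighted sum of satisfied criteria."""
    crit = sum(weights[k] * v for k, v in collection["criteria"].items())
    return collection["nominations"] + crit

short_list = sorted(collections, key=priority, reverse=True)
```

    Varying the weights reproduces the report's idea of different short lists (policy-driven, theme-driven, demand-driven) drawn from one long list.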

    Harvesting for disseminating, open archives and role of academic libraries

    The scholarly communication system is in a critical state, due to a number of factors. The Open Access movement is perhaps the most interesting response that the scientific community has tried to give to this problem. The paper examines the strengths and weaknesses of the Open Access strategy in general and, more specifically, of the Open Archives Initiative, discussing experiences, criticisms and barriers. All authors who have faced the problems of implementing an OAI-compliant e-print server agree that the technical and practical problems are not the most difficult to overcome, and that the real problem is the change in cultural attitude required. In this scenario the university library is possibly the standard-bearer for the advent and implementation of e-print archives and Open Archives services. To ensure the successful implementation of this service, the library has a number of distinct roles to play.
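    An OAI-compliant server of the kind discussed above exposes its metadata for harvesting via OAI-PMH, which returns XML records (typically Dublin Core). A minimal sketch of parsing such a response; the record here is hand-written for illustration, whereas a real harvester would fetch the XML over HTTP from the archive's OAI endpoint:

```python
# Parsing a minimal, hand-written OAI-PMH ListRecords response with Dublin
# Core metadata. The record content is invented; the namespaces are the
# real ones defined by the OAI-PMH and Dublin Core specifications.
import xml.etree.ElementTree as ET

SAMPLE = """<?xml version="1.0"?>
<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
  <ListRecords>
    <record>
      <header><identifier>oai:example.org:1</identifier></header>
      <metadata>
        <oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
                   xmlns:dc="http://purl.org/dc/elements/1.1/">
          <dc:title>An invented e-print</dc:title>
          <dc:creator>A. Author</dc:creator>
        </oai_dc:dc>
      </metadata>
    </record>
  </ListRecords>
</OAI-PMH>"""

NS = {
    "oai": "http://www.openarchives.org/OAI/2.0/",
    "dc": "http://purl.org/dc/elements/1.1/",
}

root = ET.fromstring(SAMPLE)
records = []
for rec in root.findall(".//oai:record", NS):
    identifier = rec.findtext(".//oai:identifier", namespaces=NS)
    title = rec.findtext(".//dc:title", namespaces=NS)
    records.append((identifier, title))
```

    This low technical barrier is part of the paper's point: the protocol side of an e-print archive is straightforward, while the cultural change is not.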

    A-posteriori provenance-enabled linking of publications and datasets via crowdsourcing

    This paper aims to share with the digital library community different opportunities to leverage crowdsourcing for a-posteriori capture of dataset citation graphs. We describe a practical approach which exploits one possible crowdsourcing technique to collect these graphs from domain experts, and propose their publication as Linked Data using the W3C PROV standard. Based on our findings from a study run during the USEWOD 2014 workshop, we propose a semi-automatic approach that adds information extraction as a further step to crowdsourcing, generating metadata for high-quality data citation graphs. Furthermore, we consider the design implications for our crowdsourcing approach when non-expert participants are involved in the process.
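    A crowdsourced paper-to-dataset link of the kind described above can be serialised as Linked Data with PROV-O terms. A minimal sketch; the paper, dataset and contributor URIs are invented, while the predicates are real PROV-O properties (a fuller model would use a qualified derivation to attribute the assertion itself rather than the paper):

```python
# Emitting a tiny data-citation graph as PROV-O N-Triples using only
# string formatting. URIs below are invented; predicates are PROV-O terms.
PROV = "http://www.w3.org/ns/prov#"

def triple(s, p, o):
    """Format one N-Triples statement with URI subject, predicate, object."""
    return f"<{s}> <{p}> <{o}> ."

paper = "http://example.org/paper/42"
dataset = "http://example.org/dataset/7"
contributor = "http://example.org/user/alice"

graph = [
    # The paper was derived from (i.e. cites and uses) the dataset.
    triple(paper, PROV + "wasDerivedFrom", dataset),
    # Simplified attribution of the link to the crowd participant.
    triple(paper, PROV + "wasAttributedTo", contributor),
]
ntriples = "\n".join(graph)
```

    Publishing the graph in this form lets standard Linked Data tooling merge citation links gathered from many contributors.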

    Searching Data: A Review of Observational Data Retrieval Practices in Selected Disciplines

    A cross-disciplinary examination of the user behaviours involved in seeking and evaluating data is surprisingly absent from the research data discussion. This review explores the data retrieval literature to identify commonalities in how users search for and evaluate observational research data. Two analytical frameworks, rooted in information retrieval and in science and technology studies, are used to identify key similarities in practices as a first step toward developing a model describing data retrieval.

    A Method to Screen, Assess, and Prepare Open Data for Use

    Open data's value-creating capabilities and innovation potential are widely recognized, resulting in a notable increase in the number of published open data sources. A crucial challenge for companies intending to leverage open data is to identify suitable open datasets that support specific business scenarios and to prepare these datasets for use. Researchers have developed several open data assessment techniques, but these are restricted in scope, do not consider the use context, and are not embedded in the complete set of activities required for open data consumption in enterprises. Therefore, our research aims to develop prescriptive knowledge in the form of a meaningful method to screen, assess, and prepare open data for use in an enterprise setting. Our findings complement existing open data assessment techniques by providing methodological guidance to prepare open data of uncertain quality for use in a value-adding and demand-oriented manner, enabled by knowledge graphs and linked data concepts. From an academic perspective, our research conceptualizes open data preparation as a purposeful and value-creating process.
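    The screening step of such a method can be illustrated with a toy filter over candidate datasets. The fields, thresholds and datasets below are invented for illustration; the paper's actual method is considerably richer:

```python
# A toy screen-and-assess pass over candidate open datasets. All dataset
# entries, licence lists and thresholds here are invented examples.
candidates = [
    {"name": "city-bike-trips", "license": "CC-BY", "format": "csv",
     "completeness": 0.95},
    {"name": "old-weather-dump", "license": None, "format": "pdf",
     "completeness": 0.4},
]

OPEN_LICENSES = {"CC0", "CC-BY", "ODbL"}
MACHINE_READABLE = {"csv", "json", "rdf"}

def screen(ds):
    """Hypothetical screening rule: openly licensed, machine-readable,
    and mostly complete relative to an arbitrary 0.8 threshold."""
    return (ds["license"] in OPEN_LICENSES
            and ds["format"] in MACHINE_READABLE
            and ds["completeness"] >= 0.8)

usable = [ds["name"] for ds in candidates if screen(ds)]
```

    Datasets that pass screening would then move on to assessment against the intended business scenario and to preparation, e.g. lifting into a knowledge graph.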