26,686 research outputs found

    Using Google Analytics Data to Expand Discovery and Use of Digital Archival Content

    This article presents opportunities for the use of Google Analytics, a popular and freely available web analytics tool, to inform decision making for digital archivists managing online digital archive content. Emphasis is placed on the analysis of Google Analytics data to increase the visibility and discoverability of content. The article describes the use of Google Analytics to support fruitful digital outreach programs, to guide metadata creation for enhancing access, and to measure user demand to aid selection for digitization. Valuable reports, features, and tools in Google Analytics are identified, and the use of these tools to gather meaningful data is explained.
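
    As a minimal illustration of the kind of analysis the article describes, the sketch below ranks digital archive pages by pageviews to surface high-demand items for metadata enhancement or digitization. It assumes a CSV exported from Google Analytics with "Page" and "Pageviews" columns; the file name and the "/digital-archives/" path prefix are hypothetical.

        # Rank archive pages by total pageviews from a Google Analytics CSV export.
        import pandas as pd

        report = pd.read_csv("ga_pages_export.csv")   # hypothetical export file
        archive = report[report["Page"].str.startswith("/digital-archives/")]
        top_demand = (archive.groupby("Page")["Pageviews"]
                             .sum()
                             .sort_values(ascending=False)
                             .head(20))
        print(top_demand)   # candidate items for outreach or digitization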

    The Open Research Web: A Preview of the Optimal and the Inevitable

    The multiple online research impact metrics we are developing will allow the rich new database, the Research Web, to be navigated, analyzed, mined and evaluated in powerful new ways that were not even conceivable in the paper era, nor even in the online era until the database and the tools became openly accessible for online use by all: by researchers, research institutions, research funders, teachers, students, and even by the general public that funds the research and for whose benefit it is being conducted. Which research is being used most? By whom? Which research is growing most quickly? In what direction? Under whose influence? Which research is showing immediate short-term usefulness, which shows delayed, longer-term usefulness, and which has sustained, long-lasting impact? Which research and researchers are the most authoritative? Whose research is most using this authoritative research, and whose research is the authoritative research using? Which are the best pointers (“hubs”) to the authoritative research? Is there any way to predict which research will have later citation impact (based on its earlier download impact), so junior researchers can be given resources before their work has had a chance to make itself felt through citations? Can research trends and directions be predicted from the online database? Can text content be used to find and compare related research for influence, overlap, and direction? Can a layman, unfamiliar with the specialized content of a field, be guided to the most relevant and important work? These are just a sample of the new online-age questions that the Open Research Web will begin to answer.
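
    One of these questions, predicting later citation impact from earlier download impact, lends itself to a simple worked sketch. The toy example below (not a method from the article; all numbers are invented) fits a linear model of citations on early downloads:

        # Toy example: predict later citations from early downloads.
        import numpy as np
        from sklearn.linear_model import LinearRegression

        # Hypothetical data: downloads in the first 6 months vs. citations after 2 years.
        early_downloads = np.array([[120], [300], [45], [800], [210], [60]])
        later_citations = np.array([4, 11, 1, 30, 8, 2])

        model = LinearRegression().fit(early_downloads, later_citations)
        print("R^2:", model.score(early_downloads, later_citations))
        print("predicted citations for 500 downloads:", model.predict([[500]])[0])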

    Development of Computer Science Disciplines - A Social Network Analysis Approach

    In contrast to many other scientific disciplines, computer science treats conference publications as first-class research outputs. Conferences have the advantage of providing fast publication of papers and of bringing researchers together to present and discuss their work with peers. Previous work on knowledge mapping focused on mapping all sciences, or a particular domain, based on the ISI-published JCR (Journal Citation Report). Although this data covers most important journals, it lacks computer science conference and workshop proceedings, which results in an imprecise and incomplete analysis of computer science knowledge. This paper presents an analysis of the computer science knowledge network constructed from all types of publications, aiming to provide a complete view of computer science research. Based on the combination of two important digital libraries (DBLP and CiteSeerX), we study the knowledge network created at the journal/conference level using citation linkage, to identify the development of sub-disciplines. We investigate the collaborative and citation behavior of journals/conferences by analyzing the properties of their co-authorship and citation subgraphs. The paper draws several important conclusions. First, conferences constitute social structures that shape computer science knowledge. Second, computer science is becoming more interdisciplinary. Third, experts are the key success factor for the sustainability of journals/conferences.
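
    As a rough sketch of what a venue-level citation analysis can look like, the snippet below builds a small directed citation graph between venues with networkx and ranks them by PageRank. The venues and edge weights are invented for illustration, whereas the paper derives its graph from DBLP and CiteSeerX records.

        # Venue-level citation graph; edge u -> v means venue u cites venue v.
        import networkx as nx

        G = nx.DiGraph()
        G.add_weighted_edges_from([
            ("SIGIR", "TOIS", 120), ("WWW", "SIGIR", 80),
            ("TOIS", "SIGIR", 95),  ("WWW", "TOIS", 40),
        ])

        # One rough proxy for a venue's authority within the network.
        pagerank = nx.pagerank(G, weight="weight")
        print(sorted(pagerank.items(), key=lambda kv: -kv[1]))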

    Usage History of Scientific Literature: Nature Metrics and Metrics of Nature Publications

    In this study, we analyze the dynamic usage history of Nature publications over time using Nature metrics data. We conduct the analysis from two perspectives: on the one hand, we examine how long it takes before an article's downloads reach 50% or 80% of its total; on the other hand, we compare the percentage of total downloads reached 7 days, 30 days, and 100 days after publication. In general, papers are downloaded most frequently within a short period right after their publication, and we find that, compared with non-Open Access papers, readers' attention to Open Access publications is more enduring. Based on the usage data of a newly published paper, regression analysis can predict its expected total usage counts.
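
    The first of these measurements is easy to make concrete. The sketch below, using an invented daily download series, computes how many days it takes a paper to accumulate 50% and 80% of its total downloads:

        # Days until a paper reaches 50%/80% of its total downloads.
        import numpy as np

        daily_downloads = np.array([400, 250, 150, 90, 60, 40, 30, 20, 15, 10])
        cumulative = np.cumsum(daily_downloads)

        for share in (0.5, 0.8):
            days = int(np.argmax(cumulative >= share * cumulative[-1])) + 1
            print(f"{share:.0%} of downloads reached after {days} day(s)")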

    Reports Of Conferences, Institutes, And Seminars

    This quarter's column offers coverage of multiple sessions from the 2016 Electronic Resources & Libraries (ER&L) Conference, held April 3–6, 2016, in Austin, Texas. Topics in serials acquisitions dominate the column, including reports on altmetrics, cost per use, demand-driven acquisitions, scholarly communications, and the use of subscription agents; ERMS, access, and knowledgebases are also featured.

    Joint Workshop on Bibliometric-enhanced Information Retrieval and Natural Language Processing for Digital Libraries (BIRNDL 2017)

    The large scale of scholarly publication poses a challenge for scholars in information seeking and sensemaking. Bibliometrics, information retrieval (IR), text mining, and NLP techniques could help in these search and look-up activities, but are not yet widely used. This workshop is intended to stimulate IR researchers and digital library professionals to elaborate on new approaches in natural language processing, information retrieval, scientometrics, text mining, and recommendation techniques that can advance the state of the art in scholarly document understanding, analysis, and retrieval at scale. The BIRNDL workshop at SIGIR 2017 will incorporate an invited talk, paper sessions, and the third edition of the Computational Linguistics (CL) Scientific Summarization Shared Task.

    Do altmetrics correlate with the quality of papers? A large-scale empirical study based on F1000Prime data

    In this study, we address the question of whether, and to what extent, altmetrics are related to the scientific quality of papers (as measured by peer assessments). Only a few studies have previously investigated the relationship between altmetrics and assessments by peers. In the first step, we analyse the underlying dimensions of measurement for traditional metrics (citation counts) and altmetrics, using principal component analysis (PCA) and factor analysis (FA). In the second step, we test the relationship between those dimensions and the quality of papers (as measured by the post-publication peer-review system of F1000Prime assessments), using regression analysis. The results of the PCA and FA show that altmetrics operate along different dimensions: Mendeley counts are related to citation counts, while tweets form a separate dimension. The results of the regression analysis indicate that citation-based metrics and readership counts are significantly more related to quality than tweets are. This result questions the use of Twitter counts for research evaluation purposes, on the one hand, and indicates a potential use of Mendeley reader counts, on the other.
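
    The two-step design (dimension reduction, then regression on the resulting dimensions) can be sketched schematically as follows. The data here are random placeholders standing in for per-paper citation, Mendeley, and Twitter counts, not the F1000Prime data used in the study:

        # PCA over metric counts, then regression of a quality score on the components.
        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(0)
        # Columns: citations, Mendeley readers, tweets (placeholder counts per paper).
        metrics = rng.poisson(lam=[20, 15, 3], size=(200, 3)).astype(float)
        quality = 0.1 * metrics[:, 0] + 0.05 * metrics[:, 1] + rng.normal(0, 1, 200)

        components = PCA(n_components=2).fit_transform(metrics)
        fit = LinearRegression().fit(components, quality)
        print("R^2 of quality on metric dimensions:", fit.score(components, quality))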

    HELIN Data Analytics Task Force Final Report

    The main task undertaken by the HELIN Data Analytics Task Force was to conduct a proof-of-concept usability test of HELIN OneSearch, the Consortium's brand name for the Encore Duet discovery service. After the initial meeting in November 2014, the Task Force met six times in 2015 to plan and execute a prototype test. Staff members from EBSCO Information Services' User Research group acted as usability test advisers and coordinators and attended all meetings, either onsite or via WebEx. Task Force members collaborated to come up with specific scenarios and personas that would best draw out patron likes, dislikes, and general understanding of OneSearch. Using a small sample of volunteer student test subjects from three different HELIN institutions, testing took place in mid-April. The results were analyzed by EBSCO and presented at the final meeting of the Task Force on April 28. Based on this limited testing, the general findings were as follows:
    • Students who don't receive prior information instruction are generally not aware of OneSearch.
    • Students who do know about OneSearch do not necessarily understand the difference between OneSearch and the HELIN Catalog.
    • Most students still do their research by searching database lists, LibGuides, the Journal A to Z list, and the HELIN Catalog (although not necessarily in that order).
    • When the features and operation of OneSearch are explained to students, they recognize its usefulness (especially facets, which many referred to as “filters”).
    • Lack of clarity on how to get directly to full-text items causes frustration.
    A larger and more comprehensive usability test would be needed to draw more specific conclusions. Secondary tasks undertaken by the Task Force included trials and reviews of five data analysis tools, as well as a review of EBSCO User Research, which provides quantitative data on the use of OneSearch directly from EBSCO. The remainder of this document is a detailed account of the proceedings of the HELIN Data Analytics Task Force.

    Practices, policies, and problems in the management of learning data: A survey of libraries’ use of digital learning objects and the data they create

    This study analyzed libraries' management of the data generated by library digital learning objects (DLOs) such as forms, surveys, quizzes, and tutorials. A substantial proportion of respondents reported having a policy relevant to learning data, typically a campus-level policy, but most did not. Other problems included a lack of access to library learning data, concerns about student privacy, inadequate granularity or standardization, and a lack of knowledge about colleagues' practices. We propose more dialogue on learning data within libraries, between libraries and administrators, and across the library profession.