2,052 research outputs found

    #MPLP: a Comparison of Domain Novice and Expert User-generated Tags in a Minimally Processed Digital Archive

    Get PDF
    The high costs of creating and maintaining digital archives have precluded many archives from providing users with digital content or from increasing the amount of digitized material. Studies have shown that users increasingly demand immediate online access to archival materials with detailed descriptions (access points). Applying minimal processing to digital archives limits access points to the folder or series level rather than the item-level descriptions users desire. User-generated content, such as tags, could supplement the minimally processed metadata, though users are reluctant to trust or use unmediated tags. This dissertation project explores the potential for controlling/mediating the supplemental metadata from user-generated tags by including only tags generated by expert domain users. The study was designed to answer three research questions with two parts each: 1(a) What are the similarities and differences between tags generated by expert and novice users in a minimally processed digital archive?, 1(b) Are there differences between expert and novice users' opinions of the tagging experience and tag creation considerations?, 2(a) In what ways do tags generated by expert and/or novice users in a minimally processed collection correspond with metadata in a traditionally processed digital archive?, 2(b) Does user knowledge affect the proportion of tags matching unselected metadata in a minimally processed digital archive?, 3(a) In what ways do tags generated by expert and/or novice users in a minimally processed collection correspond with existing users' search terms in a digital archive?, and 3(b) Does user knowledge affect the proportion of tags matching query terms in a minimally processed digital archive? The dissertation project was a mixed-methods, quasi-experimental design focused on tag generation within a sample minimally processed digital archive. The study used a sample collection of fifteen documents and fifteen photographs. 
Sixty participants, divided into two groups (novices and experts) based on assessed prior knowledge of the sample collection's domain, generated tags for the fifteen documents and fifteen photographs (a minimum of one tag per object). Participants completed a pre-questionnaire identifying prior knowledge and use of social tagging and archives. Additionally, participants provided their opinions on factors associated with tagging, including the tagging experience and considerations while creating tags, through structured and open-ended questions in a post-questionnaire. An open-coding analysis of the created tags developed a coding scheme of six major categories and six subcategories. Application of the coding scheme categorized all generated tags. Descriptive statistics summarized the number of tags created by each domain group (expert, novice) for all objects and divided by format (photograph, document). T-tests and chi-square tests explored the associations (and associative strengths) between domain knowledge and the number or types of tags created, for all objects and divided by format. The subsequent analysis compared the tags with metadata from the existing collection that was not displayed within the sample collection participants used. Descriptive statistics summarized the proportion of tags matching unselected metadata, and chi-square tests analyzed the findings for associations with domain knowledge. Finally, the author extracted existing users' query terms from one month of server-log data and compared them with the generated tags and unselected metadata. Descriptive statistics summarized the proportion of tags and unselected metadata matching query terms, and chi-square tests analyzed the findings for associations with domain knowledge. Based on the findings, the author discussed the theoretical and practical implications of including social tags within a minimally processed digital archive.
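The chi-square association tests this abstract describes can be sketched in miniature. The counts below are invented for illustration only (they are not the study's data), and the two tag categories are hypothetical stand-ins for the study's coding scheme:

```python
# Sketch of a chi-square test of association between domain knowledge
# (expert vs. novice) and the type of tag created. Stdlib only.

def chi_square_2x2(table):
    """Pearson chi-square statistic for a 2x2 contingency table."""
    (a, b), (c, d) = table
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    stat = 0.0
    for obs, row, col in ((a, row1, col1), (b, row1, col2),
                          (c, row2, col1), (d, row2, col2)):
        expected = row * col / n          # expected count under independence
        stat += (obs - expected) ** 2 / expected
    return stat

# rows: expert, novice; columns: descriptive tags, subjective tags
observed = [[120, 30],
            [90, 60]]
chi2 = chi_square_2x2(observed)
print(f"chi-square = {chi2:.2f}")  # → chi-square = 14.29
# Compare against the 3.84 critical value (df = 1, alpha = 0.05):
# here the invented counts would indicate a significant association.
```

A real analysis would use a library routine (e.g. a contingency-table test from a statistics package) and report effect sizes alongside the statistic.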

    Open Data, Grey Data, and Stewardship: Universities at the Privacy Frontier

    Full text link
    As universities recognize the inherent value in the data they collect and hold, they encounter unforeseen challenges in stewarding those data in ways that balance accountability, transparency, and protection of privacy, academic freedom, and intellectual property. Two parallel developments in academic data collection are converging: (1) open access requirements, whereby researchers must provide access to their data as a condition of obtaining grant funding or publishing results in journals; and (2) the vast accumulation of 'grey data' about individuals in their daily activities of research, teaching, learning, services, and administration. The boundaries between research and grey data are blurring, making it more difficult to assess the risks and responsibilities associated with any data collection. Many sets of data, both research and grey, fall outside privacy regulations such as HIPAA, FERPA, and PII. Universities are exploiting these data for research, learning analytics, faculty evaluation, strategic decisions, and other sensitive matters. Commercial entities are besieging universities with requests for access to data or for partnerships to mine them. The privacy frontier facing research universities spans open access practices, uses and misuses of data, public records requests, cyber risk, and curating data for privacy protection. This paper explores the competing values inherent in data stewardship and makes recommendations for practice, drawing on the pioneering work of the University of California in privacy and information security, data governance, and cyber risk. Comment: Final published version, Sept 30, 201

    Strategies for Sustaining Digital Libraries

    Get PDF
    This collection of essays is a report of early findings from pioneers who have worked to establish digital libraries, not merely as experimental projects, but as ongoing services and collections intended to be sustained over time in ways consistent with the long-held practices of print-based libraries. Particularly during this period of extreme technological transition, it is imperative that programs across the nation – and indeed the world – actively share their innovations, experiences, and techniques in order to begin cultivating new isomorphic, or commonly held, practices. The collective sentiment of the field is that we must begin to transition from a punctuated, project-based mode of advancing innovative information services to an ongoing programmatic mode of sustaining digital libraries for the long haul.

    Proceedings of the 12th International Conference on Digital Preservation

    Get PDF
    The 12th International Conference on Digital Preservation (iPRES) was held on November 2-6, 2015 in Chapel Hill, North Carolina, USA. There were 327 delegates from 22 countries. The program included 12 long papers, 15 short papers, 33 posters, 3 demos, 6 workshops, 3 tutorials and 5 panels, as well as several interactive sessions and a Digital Preservation Showcase.

    Query-Time Data Integration

    Get PDF
    Today, data is collected in ever increasing scale and variety, opening up enormous potential for new insights and data-centric products. However, in many cases the volume and heterogeneity of new data sources precludes up-front integration using traditional ETL processes and data warehouses. In some cases, it is even unclear if and in what context the collected data will be utilized. Therefore, there is a need for agile methods that defer the effort of integration until the usage context is established. This thesis introduces Query-Time Data Integration as an alternative concept to traditional up-front integration. It aims at enabling users to issue ad-hoc queries on their own data as if all potential other data sources were already integrated, without declaring specific sources and mappings to use. Automated data search and integration methods are then coupled directly with query processing on the available data. The ambiguity and uncertainty introduced through fully automated retrieval and mapping methods is compensated by answering those queries with ranked lists of alternative results. Each result is then based on different data sources or query interpretations, allowing users to pick the result most suitable to their information need. To this end, this thesis makes three main contributions. Firstly, we introduce a novel method for Top-k Entity Augmentation, which is able to construct a top-k list of consistent integration results from a large corpus of heterogeneous data sources. It improves on the state-of-the-art by producing a set of individually consistent but mutually diverse alternative solutions, while minimizing the number of data sources used. Secondly, based on this novel augmentation method, we introduce the DrillBeyond system, which is able to process Open World SQL queries, i.e., queries referencing arbitrary attributes not defined in the queried database. 
The original database is then augmented at query time with Web data sources providing those attributes. Its hybrid augmentation/relational query processing enables the use of ad-hoc data search and integration in data analysis queries, and improves both performance and quality when compared to using separate systems for the two tasks. Finally, we study the management of large-scale dataset corpora such as data lakes or Open Data platforms, which are used as data sources for our augmentation methods. We introduce Publish-time Data Integration as a new technique for data curation systems managing such corpora, which aims at improving the individual reusability of datasets without requiring up-front global integration. This is achieved by automatically generating metadata and format recommendations, allowing publishers to enhance their datasets with minimal effort. Collectively, these three contributions are the foundation of a Query-time Data Integration architecture that enables ad-hoc data search and integration queries over large heterogeneous dataset collections.
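The core idea of top-k entity augmentation can be illustrated with a toy sketch. This is not the thesis's implementation: the source names and attribute values below are invented, and the ranking here uses only one of the criteria the abstract names (minimizing the number of sources used) while achieving diversity simply by enumerating distinct source combinations:

```python
# Toy top-k entity augmentation: fill a missing attribute for a set of
# entities from candidate (hypothetical) web sources, returning a ranked
# list of alternative solutions, each backed by a different source set.
from itertools import combinations

entities = ["Germany", "France", "Italy"]

# candidate sources: name -> partial mapping of entity -> attribute value
sources = {
    "web_table_A": {"Germany": 83.2, "France": 67.8},
    "web_table_B": {"France": 67.5, "Italy": 58.9},
    "web_table_C": {"Germany": 83.0, "Italy": 59.0, "France": 68.0},
}

def augmentations(entities, sources, k=2):
    """Enumerate source combinations that cover all entities; rank by
    the number of sources used (fewer is better) and return the top k."""
    solutions = []
    names = sorted(sources)
    for r in range(1, len(names) + 1):
        for combo in combinations(names, r):
            values = {}
            for name in combo:
                for entity, value in sources[name].items():
                    # keep the first source that provides each entity
                    values.setdefault(entity, (value, name))
            if all(entity in values for entity in entities):
                solutions.append((len(combo), combo, values))
    solutions.sort(key=lambda s: (s[0], s[1]))
    return solutions[:k]

for n_sources, combo, values in augmentations(entities, sources):
    print(combo, {e: values[e] for e in entities})
# The first alternative uses web_table_C alone; the second combines
# web_table_A and web_table_B — two mutually diverse ways to answer
# the same augmentation request.
```

A production system would additionally score value consistency across sources and integrate this ranking into relational query processing, as the DrillBeyond description above indicates.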

    Tools you can trust? Co-design in community heritage work

    Get PDF
    This chapter will examine the role of co-design methods in relation to the recent Pararchive Project (http://pararchive.com) that took place between 2013 and 2015 at the University of Leeds. It will draw on the experiences of conducting the project and broader critical frames to examine the nature of collaborative working in the field of cultural heritage and storytelling. It will outline the lessons we have learned from the process and the ways in which the relationships between citizens and cultural institutions are central to working in the heritage sector. It seeks to advocate for the necessity of collaborative methods in the creation of cultural heritage tools that are trusted and adopted by communities.

    NMC Horizon Report: 2017 Library Edition

    Get PDF
    What is on the five-year horizon for academic and research libraries? Which trends and technology developments will drive transformation? What are the critical challenges, and how can we strategize solutions? These questions regarding technology adoption and educational change steered the discussions of 77 experts to produce the NMC Horizon Report: 2017 Library Edition, in partnership with the University of Applied Sciences (HTW) Chur, Technische Informationsbibliothek (TIB), ETH Library, and the Association of College & Research Libraries (ACRL). The six key trends, six significant challenges, and six developments in technology profiled in this report are poised to impact library strategies, operations, and services with regard to learning, creative inquiry, research, and information management. The three sections of this report constitute a reference and technology planning guide for librarians, library leaders, library staff, policymakers, and technologists.

    E-Science as a Catalyst for Transformational Change in University Research Libraries: A Dissertation

    Get PDF
    Changes in how research is conducted, from the growth of e-science to the emergence of big data, have led to new opportunities for librarians to become involved in the creation and management of research data, even as the duties and responsibilities of university libraries continue to evolve. This study examines those roles related to e-science while exploring the concept of transformational change and the leadership issues in bringing about such a change. Using the framework established by Levy and Merry for first- and second-order change, four case studies are developed of libraries whose institutions are members of the Association of Research Libraries (ARL). The case studies highlight why the libraries became involved in e-science, the roles librarians are assuming related to data management education and policy, and the provision of e-science programs and services. Each case study documents the structural and programmatic changes that have occurred in a library to provide e-science services and programs, the future changes library leaders are working to implement, and the change management process used by managerial leaders to bring about, and permanently embed, those changes in the library culture. Themes such as vision, team leadership, the role of library administrators, the skills of library staff, and fostering a learning organization are discussed in the context of e-science and leading transformational change. The transformational change included a change in culture, a change in organizational paradigm, and a redefinition of the role of the university research library.

    Planning for the Lifecycle Management and Long-Term Preservation of Research Data: A Federated Approach

    Get PDF
    Outcomes of the grant are archived here. The “data deluge” is a recent but increasingly well-understood phenomenon of scientific and social inquiry. Large-scale research instruments extend our observational power by many orders of magnitude but at the same time generate massive amounts of data. Researchers work feverishly to document and preserve changing or disappearing habitats, cultures, languages, and artifacts, resulting in volumes of media in various formats. New software tools mine a growing universe of historical and modern texts and connect the dots in our semantic environment. Libraries, archives, and museums undertake digitization programs, creating broad access to unique cultural heritage resources for research. Global-scale research collaborations with hundreds or thousands of participants drive the creation of massive amounts of data, most of which cannot be recreated if lost. The University of Kansas (KU) Libraries, in collaboration with two partners, the Greater Western Library Alliance (GWLA) and the Great Plains Network (GPN), received an IMLS National Leadership Grant designed to leverage collective strengths and create a proposal for a scalable and federated approach to the lifecycle management of research data based on the needs of GPN and GWLA member institutions. Institute of Museum and Library Services LG-51-12-0695-1.