    Linking Text and Image with SVG

    Annotation and linking (or referring) have been described as "scholarly primitives": basic methods used in scholarly research and publication of all kinds. The online publication of manuscript images is one basic use case where the need for linking and annotation is very clear. High-resolution images are of great use to scholars, and transcriptions of texts provide for search and browsing, so the ideal method for the digital publication of manuscript works is the presentation of page images plus a transcription of the text therein. This has become a standard method, but it leaves open the questions of how fine-grained the linkages can be and how best to handle the annotation of sections of the image. This paper presents a new method (named img2xml) for connecting text and image using an XML-based tracing of the text on the page image. The tracing method was developed as part of a series of experiments in text and image linking beginning in the summer of 2008, and the work will continue under a grant funded by the National Endowment for the Humanities. It employs Scalable Vector Graphics (SVG) to represent the text in an image of a manuscript page in a referenceable form and enables linking and annotation of the page image in a variety of ways. The paper goes on to discuss the scholarly requirements for tools that will be developed around the tracing method, and explores some of the issues raised by the img2xml method.
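    The abstract does not specify img2xml's actual markup, but the core idea, representing traced text as addressable SVG shapes layered over the page image, can be sketched. The following Python example is a minimal, hypothetical illustration; the element ids, coordinates, and linking convention are assumptions, not the project's schema.

```python
# Hypothetical sketch: emit an SVG overlay in which each traced word is an
# addressable <path> over the page image, so a transcription or annotation
# can reference a region as page042.svg#w042-001. Element ids, coordinates,
# and the linking convention are illustrative, not img2xml's actual schema.
import xml.etree.ElementTree as ET

SVG_NS = "http://www.w3.org/2000/svg"
XLINK_NS = "http://www.w3.org/1999/xlink"
ET.register_namespace("", SVG_NS)
ET.register_namespace("xlink", XLINK_NS)

def make_overlay(page_image, size, words):
    """Build an SVG overlay: the page image underneath, one traced
    <path> per word, each carrying a stable, referenceable id."""
    width, height = size
    svg = ET.Element(f"{{{SVG_NS}}}svg",
                     {"width": str(width), "height": str(height)})
    ET.SubElement(svg, f"{{{SVG_NS}}}image",
                  {f"{{{XLINK_NS}}}href": page_image,
                   "width": str(width), "height": str(height)})
    for word_id, path_data in words:
        ET.SubElement(svg, f"{{{SVG_NS}}}path",
                      {"id": word_id, "d": path_data,
                       "fill": "none", "stroke": "red"})
    return ET.ElementTree(svg)

# One rectangular trace standing in for a word outline.
tree = make_overlay("page042.png", (2000, 3000),
                    [("w042-001", "M120 310 H410 V360 H120 Z")])
tree.write("page042.svg", encoding="UTF-8", xml_declaration=True)
```

    A transcription could then point at page042.svg#w042-001 to bind a word in the text to its traced region on the image.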

    "Seed+Expand": A validated methodology for creating high quality publication oeuvres of individual researchers

    The study of science at the individual micro-level frequently requires the disambiguation of author names. The creation of authors' publication oeuvres involves matching the list of unique author names to names used in publication databases. Despite recent progress in the development of unique author identifiers, e.g., ORCID, VIVO, or DAI, author disambiguation remains a key problem when it comes to large-scale bibliometric analysis using data from multiple databases. This study introduces and validates a new methodology called seed+expand for semi-automatic bibliographic data collection for a given set of individual authors. Specifically, we identify the oeuvres of a set of Dutch full professors during the period 1980-2011. In particular, we combine author records from the National Research Information System (NARCIS) with publication records from the Web of Science. Starting with an initial list of 8,378 names, we identify "seed publications" for each author using five different approaches. Subsequently, we "expand" the set of publications using three different approaches. The different approaches are compared, and the resulting oeuvres are evaluated on precision and recall using a "gold standard" dataset of authors for which verified publications in the period 2001-2010 are available. Comment: Paper accepted for ISSI 2013; small changes in the text due to referee comments; one figure added (Fig. 3).
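    The abstract names five seeding and three expansion approaches without detailing them, so the sketch below only illustrates the general seed+expand pattern under assumed rules: a high-precision seeding match on exact name plus affiliation, a recall-oriented expansion over shared co-authors, and a precision/recall evaluation against a verified gold standard. The record fields and threshold are hypothetical.

```python
# Generic seed+expand sketch. The paper compares five seeding and three
# expansion variants; the rules below (exact name plus affiliation for
# seeding, shared co-authors for expansion) and the record fields
# ("id", "authors", "affiliations") are stand-ins for illustration.

def find_seeds(author, records):
    """High-precision step: keep records matching the author's exact
    name AND listing the author's known affiliation."""
    return {r["id"] for r in records
            if author["name"] in r["authors"]
            and author["affiliation"] in r["affiliations"]}

def expand(seed_ids, records, min_shared=2):
    """Recall step: add records whose author list overlaps the
    co-author pool accumulated from the seed publications."""
    by_id = {r["id"]: r for r in records}
    coauthors = set()
    for rid in seed_ids:
        coauthors.update(by_id[rid]["authors"])
    oeuvre = set(seed_ids)
    for r in records:
        if r["id"] not in oeuvre and len(coauthors & set(r["authors"])) >= min_shared:
            oeuvre.add(r["id"])
    return oeuvre

def precision_recall(predicted, gold):
    """Evaluate a collected oeuvre against a verified gold-standard set."""
    tp = len(predicted & gold)
    return tp / len(predicted), tp / len(gold)
```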

    Huddersfield Open Access Publishing

    This paper presents the findings of the Huddersfield Open Access Publishing Project, a JISC-funded project to develop a low-cost, sustainable Open Access (OA) journal publishing platform using EPrints Institutional Repository software.

    Access and Preservation in Archival Mass Digitization Projects

    [Excerpt] In 2014, the Dalhousie University Archives began its first archival mass digitization project with the Elisabeth Mann Borgese fonds. The successful completion of this project required the project team to address both broad and specific technical and intellectual challenges, from rights management in an online access environment to the durability of the equipment used. To frame those challenges, this paper first briefly introduces the fonds and the project's goal of balancing preservation and access, then discusses the challenges in further detail, and concludes with considerations, best practices, and lessons learned from the project.

    Status and potential of bacterial genomics for public health practice : a scoping review

    Background: Next-generation sequencing (NGS) is increasingly being translated into routine public health practice, affecting the surveillance and control of many pathogens. The purpose of this scoping review is to identify and characterize the recent literature concerning the application of bacterial pathogen genomics for public health practice and to assess the added value, challenges, and needs related to its implementation from an epidemiologist’s perspective. Methods: In this scoping review, a systematic PubMed search with forward and backward snowballing was performed to identify manuscripts in English published between January 2015 and September 2018. Included studies had to describe the application of NGS on bacterial isolates within a public health setting. The studied pathogen, year of publication, country, number of isolates, sampling fraction, setting, public health application, study aim, level of implementation, time orientation of the NGS analyses, and key findings were extracted from each study. Due to the large heterogeneity of settings, applications, pathogens, and study measurements, a descriptive narrative synthesis of the eligible studies was performed. Results: Of the 275 included articles, 164 were outbreak investigations, 70 focused on strategy-oriented surveillance, and 41 on control-oriented surveillance. Main applications included the use of whole-genome sequencing (WGS) data for (1) source tracing, (2) early outbreak detection, (3) unraveling transmission dynamics, (4) monitoring drug resistance, (5) detecting cross-border transmission events, (6) identifying the emergence of strains with enhanced virulence or zoonotic potential, and (7) assessing the impact of prevention and control programs. The superior resolution over conventional typing methods for inferring transmission routes was reported as an added value, as was the ability to simultaneously characterize the resistome and virulome of the studied pathogen. However, the full potential of pathogen genomics can only be reached through its integration with high-quality contextual data. Conclusions: For several pathogens, it is time for a shift from proof-of-concept studies to routine use of WGS during outbreak investigations and surveillance activities. However, some implementation challenges from the epidemiologist’s perspective remain, such as data integration, the quality of contextual data, sampling strategies, and meaningful interpretation. Interdisciplinary, inter-sectoral, and international collaborations are key for appropriate genomics-informed surveillance.

    Workset Creation for Scholarly Analysis: Recommendations and Prototyping Project Reports

    This document assembles and describes the outcomes of the four prototyping projects undertaken as part of the Workset Creation for Scholarly Analysis (WCSA) research project (2013–2015). Each prototyping project team provided its own final report, and these reports are assembled and included in this document. Based on the totality of the results reported, the WCSA project team also provides a set of overarching recommendations for HTRC implementation and adoption of the research conducted by the prototyping project teams. The work described here was made possible through the generous support of The Andrew W. Mellon Foundation (Grant Ref # 21300666).

    Which clinical and laboratory procedures should be used to fabricate digital complete dentures? A systematic review.

    STATEMENT OF PROBLEM Digital workflows for complete denture fabrication involve a variety of clinical and laboratory procedures, but their outcomes and associated complications are currently unknown. PURPOSE The purpose of this systematic review was to evaluate the clinical and laboratory procedures for digital complete dentures, their outcomes, and associated complications. MATERIAL AND METHODS Electronic literature searches were conducted on PubMed/Medline, Embase, and Web of Science for studies published from January 2000 to September 2022 and screened by 2 independent reviewers. Information on digital complete denture procedures, materials, their outcomes, and associated complications was extracted. RESULTS Of 266 screened studies, 39 were included. While 26 assessed definitive complete dentures, 7 studies assessed denture bases, 2 assessed trial dentures, and 4 assessed digital images only. Twenty-four studies used a border-molded impression technique, 3 studies used a facebow record, and 7 studies used Gothic arch tracing. Only 13 studies performed trial denture placement. Twenty-one studies used milling, and 17 studies used 3D printing for denture fabrication. One study reported that the retention of maxillary denture bases fabricated from a border-molded impression (14.5 to 16.1 N) was statistically higher than the retention of those fabricated from intraoral scanning (6.2 to 6.6 N). The maximum occlusal force of digital complete denture wearers was similar across different fabrication procedures. Compared with the conventional workflow, digital complete dentures required statistically shorter clinical time, saving 205 to 233 minutes. Up to 37.5% of participants reported loss of retention, and up to 31.3% required a denture remake. In general, ≥1 extra visit and 1 to 4 unscheduled follow-up visits were needed. The outcomes for patient satisfaction and oral health-related quality of life were similar among conventional, milled, and 3D-printed complete dentures. CONCLUSIONS Making a border-molded impression is still preferred for better retention, and trial denture placement is still recommended to optimize the fabrication of definitive digital complete dentures.

    A-posteriori provenance-enabled linking of publications and datasets via crowdsourcing

    This paper aims to share with the digital library community different opportunities to leverage crowdsourcing for the a-posteriori capture of dataset citation graphs. We describe a practical approach, which exploits one possible crowdsourcing technique to collect these graphs from domain experts, and propose their publication as Linked Data using the W3C PROV standard. Based on our findings from a study we ran during the USEWOD 2014 workshop, we propose a semi-automatic approach that generates metadata by leveraging information extraction as an additional step to crowdsourcing, to produce high-quality data citation graphs. Furthermore, we consider the design implications for our crowdsourcing approach when non-expert participants are involved in the process.
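    As a rough sketch of the proposed publication step, the Python example below records a single crowdsourced publication-to-dataset link as Linked Data using the W3C PROV vocabulary via rdflib. All URIs are placeholders, and the modelling choices (prov:wasDerivedFrom for the citation; a linkset entity generated by a crowdsourcing activity to carry the provenance of the assertion) are assumptions, not necessarily the paper's own model.

```python
# Illustrative only: one crowdsourced publication-to-dataset link expressed
# with the W3C PROV-O vocabulary via rdflib. URIs are placeholders, and the
# modelling is an assumed pattern, not the paper's actual design.
from rdflib import Graph, Namespace
from rdflib.namespace import RDF

PROV = Namespace("http://www.w3.org/ns/prov#")
EX = Namespace("http://example.org/")

g = Graph()
g.bind("prov", PROV)
g.bind("ex", EX)

paper, dataset = EX["publication/123"], EX["dataset/usewod2014-logs"]
linkset, task = EX["linkset/1"], EX["activity/crowd-task-7"]
contributor = EX["person/expert-42"]

# The citation itself: the paper is modelled as derived from the dataset.
g.add((paper, RDF.type, PROV.Entity))
g.add((dataset, RDF.type, PROV.Entity))
g.add((paper, PROV.wasDerivedFrom, dataset))

# Provenance of the assertion: the linkset containing this edge was
# generated by a crowdsourcing activity associated with a named agent,
# so consumers can weigh expert against non-expert contributions.
g.add((linkset, RDF.type, PROV.Entity))
g.add((task, RDF.type, PROV.Activity))
g.add((contributor, RDF.type, PROV.Agent))
g.add((linkset, PROV.wasGeneratedBy, task))
g.add((task, PROV.wasAssociatedWith, contributor))

print(g.serialize(format="turtle"))
```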

    Reflections on Infrastructures for Mining Nineteenth-Century Newspaper Data

    In this study we compare and contrast our experiences (as historians and as digital humanities and information studies researchers) of seeking to mine large-scale historical datasets via university-based, high-performance computing infrastructures with our experiences of using external, cloud-hosted platforms and tools to mine the same data. In particular, we reflect on our recent experiences in two large transnational digital humanities projects: Asymmetrical Encounters: E-Humanity Approaches to Reference Cultures in Europe, 1815–1992, which was funded by a Humanities in the European Research Area grant (2013–2016), and Oceanic Exchanges: Tracing Global Information Networks in Historical Newspaper Repositories 1840–1914, which was funded through the Transatlantic Partnership for Social Sciences and Humanities 2016 Digging into Data Challenge (2017–2019). As part of the research for both projects we sought to mine the OCR text of nineteenth-century historical newspapers that had been mounted on UCL’s High-Performance Computing infrastructures from Gale’s TDM drives. We compare and contrast our experiences of this with our subsequent experiences of performing comparable tasks via Gale Digital Scholar Lab. We contextualise our experiences and observations within wider discourses and recommendations about infrastructural support for humanities-led analyses of large datasets and discuss the advantages and drawbacks of both approaches. We situate our discussions of the aforementioned infrastructural scenarios alongside reflections on the human experiences of undertaking this research, which represents a step change for many of those who work in the (digital) humanities. Finally, we conclude by discussing the public and private sector research investments that are needed to support further developments and to facilitate access to and critical interrogation of large-scale digital archives.
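    To make the kind of batch analysis at issue concrete, here is a deliberately simple, hypothetical example: a parallel term-frequency count over a directory of OCR'd newspaper text files. On an HPC cluster such a job would typically be sharded across scheduler tasks (for example, one array task per year of the corpus), while a hosted platform runs an equivalent analysis behind its interface; the corpus path and tokeniser below are assumptions.

```python
# Toy version of the kind of mining job discussed above: a parallel
# term-frequency count over a directory of OCR'd newspaper text files.
# The corpus path and the crude tokeniser are illustrative assumptions;
# a real pipeline would add OCR-aware cleaning.
import re
from collections import Counter
from concurrent.futures import ProcessPoolExecutor
from pathlib import Path

TOKEN = re.compile(r"[a-z]+")

def count_file(path):
    # OCR output is noisy, so decoding errors are ignored here.
    text = Path(path).read_text(encoding="utf-8", errors="ignore").lower()
    return Counter(TOKEN.findall(text))

def count_corpus(corpus_dir):
    files = sorted(Path(corpus_dir).glob("*.txt"))
    total = Counter()
    with ProcessPoolExecutor() as pool:
        for counts in pool.map(count_file, files):
            total.update(counts)
    return total

if __name__ == "__main__":
    print(count_corpus("ocr_text/1848/").most_common(20))
```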