
    Linking with Meaning: Ontological Hypertext for Scholars

    The links in ontological hypermedia are defined according to the relationships between real-world objects. An ontology that models the significant objects in a scholar’s world can be used to produce a consistently interlinked research literature. Currently the papers that are available online are divided mainly between subject- and publisher-specific archives, with little or no interoperability. This paper addresses the issue of ontological interlinking, presenting two experimental systems whose hypertext links embody ontologies based on the activities of researchers and scholars.
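
    A toy sketch in Python of the idea described above (not either of the paper's experimental systems): when the significant objects in a scholar's world and their relationships are held in a small ontology-like store, hypertext links can be generated from those relationships rather than authored by hand. All identifiers and relations below are illustrative assumptions.

# Toy ontology: typed relationships between scholarly "real-world" objects.
triples = [
    ("paper:p1", "authoredBy", "person:a1"),
    ("paper:p1", "cites", "paper:p2"),
    ("person:a1", "memberOf", "org:g1"),
]

# Each object is described by some document; links between documents are
# derived from the ontology's relationships instead of being hard-coded.
pages = {
    "paper:p1": "/papers/p1.html",
    "paper:p2": "/papers/p2.html",
    "person:a1": "/people/a1.html",
    "org:g1": "/groups/g1.html",
}

def links_for(obj):
    """Return (relationship, target URL) pairs for every relation of `obj`."""
    return [(rel, pages[target]) for subj, rel, target in triples
            if subj == obj and target in pages]

for rel, url in links_for("paper:p1"):
    print(f"link ({rel}) -> {url}")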

    Community next steps for making globally unique identifiers work for biocollections data

    Biodiversity data is being digitized and made available online at a rapidly increasing rate, but current practices typically do not preserve linkages between these data, which impedes interoperation, provenance tracking, and assembly of larger datasets. For data associated with biocollections, the biodiversity community has long recognized that an essential part of establishing and preserving linkages is to apply globally unique identifiers at the point when data are generated in the field and to persist these identifiers downstream, but this is seldom implemented in practice. The community has neither coalesced around a single identifier solution (as in some other domains) nor agreed on a set of recommended best practices and standards to support multiple identifier schemes sharing consistent responses. To make further progress towards a broader community consensus, a group of biocollections and informatics experts assembled in Stockholm in October 2014 to discuss community next steps to overcome current roadblocks. The workshop participants divided into four groups focusing on: identifier practice in current field biocollections; identifier application for legacy biocollections; identifiers as applied to biodiversity data records as they are published and made available in semantically marked-up publications; and cross-cutting identifier solutions that bridge across these domains. The main outcome was consensus on key issues, including recognition of differences between legacy and new biocollections processes, the need for identifier metadata profiles that can report information on identifier persistence missions, and the unambiguous indication of the type of object associated with the identifier. Current identifier characteristics are also summarized, and an overview of available schemes and practices is provided.
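
    As a rough illustration of the "identifier metadata profile" idea mentioned above, the sketch below shows the sort of record a resolver might return alongside a biocollections GUID. The field names and values are illustrative assumptions, not an agreed community standard.

import json

# Illustrative only: one possible shape for an identifier metadata profile that
# reports the identifier scheme, the type of object it denotes, who issued it,
# and a statement about its intended persistence.
identifier_profile = {
    "identifier": "urn:uuid:8c2f5f3a-9b1e-4d7c-a6e2-0f4b5d6c7e8f",  # hypothetical GUID
    "scheme": "uuid",
    "objectType": "PreservedSpecimen",             # unambiguous type of associated object
    "issuedBy": "Example Natural History Museum",  # hypothetical issuing institution
    "issuedAt": "field collection event",
    "persistenceStatement": "Identifier will remain resolvable; redirects "
                            "are maintained if the hosting system changes.",
}

print(json.dumps(identifier_profile, indent=2))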

    CiTO, the Citation Typing Ontology

    CiTO, the Citation Typing Ontology, is an ontology for describing the nature of reference citations in scientific research articles and other scholarly works, both to other such publications and also to Web information resources, and for publishing these descriptions on the Semantic Web. Citations are described in terms of the factual and rhetorical relationships between citing publication and cited publication, the in-text and global citation frequencies of each cited work, and the nature of the cited work itself, including its publication and peer-review status. This paper describes CiTO and illustrates its usefulness both for the annotation of bibliographic reference lists and for the visualization of citation networks. The latest version of CiTO, which this paper describes, is CiTO Version 1.6, published on 19 March 2010. CiTO is written in the Web Ontology Language OWL, uses the namespace http://purl.org/net/cito/, and is available from http://purl.org/net/cito/. This site uses content negotiation to deliver to the user an OWLDoc Web version of the ontology if accessed via a Web browser, or the OWL ontology itself if accessed from an ontology management tool such as Protégé 4 (http://protege.stanford.edu/). Collaborative work is currently under way to harmonize CiTO with other ontologies describing bibliographies and the rhetorical structure of scientific discourse.
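
    As a small illustration of using CiTO in practice, the sketch below (Python with rdflib; the article URIs are hypothetical) asserts one typed citation against the http://purl.org/net/cito/ namespace; cito:citesAsEvidence is one of the citation types the ontology defines. The commented lines indicate how the ontology itself can be fetched via content negotiation, as described above.

from rdflib import Graph, Namespace, URIRef

CITO = Namespace("http://purl.org/net/cito/")

g = Graph()
g.bind("cito", CITO)

# Hypothetical URIs for a citing article and the work it cites.
citing = URIRef("http://example.org/articles/2010-001")
cited = URIRef("http://example.org/articles/2009-042")

# Assert a typed citation: the citing article uses the cited work as evidence.
g.add((citing, CITO.citesAsEvidence, cited))

print(g.serialize(format="turtle"))

# Requesting RDF from the CiTO URL returns the OWL ontology, while a Web
# browser's Accept header yields the OWLDoc documentation instead, e.g.:
# import requests
# owl = requests.get("http://purl.org/net/cito/", headers={"Accept": "application/rdf+xml"})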

    The NASA Astrophysics Data System: Architecture

    The powerful discovery capabilities available in the ADS bibliographic services are possible thanks to the design of a flexible search and retrieval system based on a relational database model. Bibliographic records are stored as a corpus of structured documents containing fielded data and metadata, while discipline-specific knowledge is segregated in a set of files independent of the bibliographic data itself. The creation and management of links to both internal and external resources associated with each bibliography in the database is made possible by representing them as a set of document properties and their attributes. To improve global access to the ADS data holdings, a number of mirror sites have been created by cloning the database contents and software on a variety of hardware and software platforms. The procedures used to create and manage the database and its mirrors have been written as a set of scripts that can be run in either an interactive or unsupervised fashion. The ADS can be accessed at http://adswww.harvard.edu. Comment: 25 pages, 8 figures, 3 tables.
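
    A rough sketch (not the actual ADS schema) of the record model described above: fielded bibliographic data stored per document, with links to internal and external resources represented separately as document properties and their attributes. The bibcode and URL are illustrative placeholders.

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class DocumentProperty:
    """A resource associated with a record, e.g. a full-text or data link."""
    name: str                   # property type, e.g. "full_text_link"
    attributes: Dict[str, str]  # e.g. {"url": "...", "mirror": "primary"}

@dataclass
class BibRecord:
    """A bibliographic record as a structured document of fielded data."""
    bibcode: str
    fields: Dict[str, str]
    properties: List[DocumentProperty] = field(default_factory=list)

record = BibRecord(
    bibcode="2000A&AS..143...00X",   # placeholder in bibcode-like form
    fields={"title": "An example record", "journal": "A&AS"},
)
record.properties.append(
    DocumentProperty("full_text_link", {"url": "http://example.org/fulltext.pdf"})
)
print([p.name for p in record.properties])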

    To share or not to share: Publication and quality assurance of research data outputs. A report commissioned by the Research Information Network

    A study of current practices with respect to data creation, use, sharing and publication in eight research disciplines (systems biology, genomics, astronomy, chemical crystallography, rural economy and land use, classics, climate science, and social and public health science). The study looked at data creation and care, motivations for sharing data, discovery, access and usability of datasets, and quality assurance of data in each discipline.

    Unlocking the Digital Potential of Scholarly Monographs in 21st Century Research

    Bargheer M, Dogan ZM, Horstmann W, Mertens M, Rapp A. Unlocking the Digital Potential of Scholarly Monographs in 21st Century Research. LIBER QUARTERLY. 2017;27(1):194-211

    The Enigma of Digitized Property: A Tribute to John Perry Barlow

    Compressive sensing has attracted a lot of attention over the last decade within the areas of applied mathematics, computer science and electrical engineering because it suggests that we can sample a signal below the limit that traditional sampling theory provides. By then using different recovery algorithms we are able, theoretically, to recover the complete original signal even though we have taken very few samples to begin with. It has been proven that these recovery algorithms work best on signals that are highly compressible, meaning that the signals have a sparse representation in which the majority of the signal elements are close to zero. In this thesis we implement some of these recovery algorithms and investigate how they perform in practice on a real video signal consisting of 300 sequential image frames. The video signal is undersampled, using compressive sensing, and then recovered using two types of strategies: one in which no time correlation between successive frames is assumed, using the classical greedy algorithm Orthogonal Matching Pursuit (OMP) and a more robust, modified OMP called Predictive Orthogonal Matching Pursuit (PrOMP); and one newly developed algorithm, Dynamic Iterative Pursuit (DIP), which assumes and utilizes time correlation between successive frames. We then evaluate and compare the performance of these two strategies using the Peak Signal to Noise Ratio (PSNR) as a metric, and also provide visual results. Based on investigation of the data in the video signal, using a simple model for the time correlation and the transition probabilities between different signal coefficients in time, the DIP algorithm showed good recovery performance. The main results showed that DIP performed better and better over time and outperformed PrOMP by up to 6 dB at half of the original sampling rate, but performed slightly below PrOMP in a smaller part of the video sequence where the correlation in time between successive frames in the original video sequence suddenly became weaker.
    Compressive sensing has received more and more attention over the last decade within research areas such as applied mathematics, computer science and electrical engineering. A major reason is that its theory makes it possible to sample a signal below the limit that traditional sampling theory implies. By then using different recovery algorithms, it is nevertheless theoretically possible to recover the original signal. It has been shown that these recovery algorithms work best on signals that are highly compressible, meaning that the signals can be represented sparsely in some domain where the majority of the signal coefficients are close to zero in value. In this thesis some of these recovery algorithms are implemented, and we then investigate how they perform in practice on a real video signal consisting of 300 sequential frames. The video signal is undersampled with compressive sensing and then recovered using two types of strategies: one in which no time correlation between successive frames of the video signal is assumed, using classical algorithms such as Orthogonal Matching Pursuit (OMP) and a more robust, modified OMP, Predictive Orthogonal Matching Pursuit (PrOMP); and one newly developed algorithm, Dynamic Iterative Pursuit (DIP), which assumes and exploits time correlation between successive frames of the video signal. We evaluate and compare the performance of these two types of strategies using the Peak Signal to Noise Ratio (PSNR) as the comparison metric, and also give visual results from the video sequence. Based on an investigation of the data in the video signal, using simple models both for the time correlation and for the probability functions governing which coefficients are active at each point in time, the DIP algorithm showed better performance than the two other time-independent algorithms during certain time sequences, above all those where the video signal contained stronger correlation in time. At most, DIP performed up to 6 dB better than OMP and PrOMP.
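
    A minimal NumPy sketch of the kind of greedy recovery step the thesis builds on: a plain Orthogonal Matching Pursuit for a single sparse signal, plus a PSNR helper for evaluation. The measurement matrix, sparsity level and toy signal are illustrative assumptions; PrOMP and DIP are not reproduced here.

import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: recover a k-sparse x from y = A @ x.
    Assumes the columns of A are approximately unit-norm."""
    residual = y.astype(float).copy()
    support = []
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        idx = int(np.argmax(np.abs(A.T @ residual)))
        if idx not in support:
            support.append(idx)
        # Least-squares fit on the selected support, then update the residual.
        coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coeffs
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coeffs
    return x_hat

def psnr(original, recovered, peak):
    """Peak Signal to Noise Ratio in dB between two signals/frames."""
    mse = np.mean((np.asarray(original, float) - np.asarray(recovered, float)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

# Toy usage: undersample a synthetic sparse signal and recover it.
rng = np.random.default_rng(0)
n, m, k = 256, 96, 8                          # signal length, measurements, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)  # random measurement matrix
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x                                     # compressive measurements
print("PSNR (dB):", psnr(x, omp(A, y, k), peak=np.abs(x).max()))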