95 research outputs found

    Public Evidence from Secret Ballots

    Elections seem simple---aren't they just counting? But they have a unique, challenging combination of security and privacy requirements. The stakes are high; the context is adversarial; the electorate needs to be convinced that the results are correct; and the secrecy of the ballot must be ensured. They also have practical constraints: time is of the essence, and voting systems need to be affordable, maintainable, and usable by voters, election officials, and pollworkers. It is thus not surprising that voting is a rich research area spanning theory, applied cryptography, practical systems analysis, usable security, and statistics. Election integrity involves two key concepts: convincing evidence that outcomes are correct, and privacy, which amounts to convincing assurance that there is no evidence about how any given person voted. These are obviously in tension. We examine how current systems walk this tightrope. Comment: To appear in E-Vote-Id '1

    GOPHER, an HPC framework for large scale graph exploration and inference

    Biological ontologies, such as the Human Phenotype Ontology (HPO) and the Gene Ontology (GO), are extensively used in biomedical research to investigate the complex relationship between the phenome and the genome. Interpreting the encoded information requires methods that efficiently interoperate between multiple ontologies providing molecular details of disease-related features. To this aim, we present GenOtype PHenotype ExplOrer (GOPHER), a framework to infer associations between HPO and GO terms by harnessing machine learning together with the large-scale parallelism and scalability of High-Performance Computing. The method makes it possible to map genotypic features to phenotypic features, thus providing a valid tool for bridging functional and pathological annotations. GOPHER can improve the interpretation of molecular processes involved in pathological conditions, with a vast range of applications in biomedicine. This work has been developed with the support of the Severo Ochoa Program (SEV-2015-0493), the Spanish Ministry of Science and Innovation (TIN2015-65316-P), and the Joint Study Agreement no. W156463 under the IBM/BSC Deep Learning Center agreement.
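    As a purely illustrative sketch (not GOPHER's actual pipeline, which relies on machine learning and HPC-scale parallelism), associations between HPO and GO terms can be approximated from the genes annotated to each term; all term IDs and gene sets below are hypothetical toy data.

```python
# Illustrative sketch: score HPO-GO term pairs by the overlap of the gene sets
# annotated to each term, using a Jaccard index as a stand-in for a learned model.
from itertools import product

# Hypothetical toy annotations: term -> set of annotated genes
hpo_genes = {
    "HP:0001250": {"SCN1A", "KCNQ2", "STXBP1"},   # Seizure
    "HP:0001263": {"MECP2", "STXBP1"},            # Global developmental delay
}
go_genes = {
    "GO:0006811": {"SCN1A", "KCNQ2"},             # Ion transport
    "GO:0016192": {"STXBP1", "SNAP25"},           # Vesicle-mediated transport
}

def jaccard(a: set, b: set) -> float:
    """Overlap between two gene sets; 0.0 when both are empty."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

# Rank candidate HPO-GO associations by shared gene annotations.
scores = {
    (hpo, go): jaccard(hpo_genes[hpo], go_genes[go])
    for hpo, go in product(hpo_genes, go_genes)
}

for (hpo, go), score in sorted(scores.items(), key=lambda kv: -kv[1]):
    if score > 0:
        print(f"{hpo} <-> {go}: {score:.2f}")
```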

    Effect of floor type on the performance, physiological and behavioural responses of finishing beef steers

    Background: The study objective was to investigate the effect of bare concrete slats (Control), two types of mats [Easyfix mats (mat 1) and Irish Custom Extruder mats (mat 2)] fitted on top of concrete slats, and wood-chip to simulate deep bedding (wood-chip placed on top of a plastic membrane overlying the concrete slats) on the performance, physiological and behavioural responses of finishing beef steers. One hundred and forty-four finishing steers (503 kg; standard deviation 51.8 kg) were randomly assigned according to their breed (124 Continental cross and 20 Holstein–Friesian) and body weight to one of four treatments for 148 days. All steers were subjected to the same weighing, blood sampling (jugular venipuncture), and dirt and hoof scoring pre-study (day 0) and on days 23, 45, 65, 86, 107, 128 and 148 of the study. Cameras fitted over each pen recorded 72 h of footage over five periods, and the recordings were analysed using 10-min sampling scans. Results: Live weight gain and carcass characteristics were similar among treatments. The number of lesions on the hooves was greater (P < 0.05) for animals on mats 1 and 2 and the wood-chip treatment compared with animals on the slats. Dirt scores were similar for the mat and slat treatments, while the wood-chip treatment had greater dirt scores. Animals housed on either slats or wood-chip had similar lying times. The percentage of animals lying was greater for animals housed on mat 1 and mat 2 compared with those housed on concrete slats or wood-chip. Physiological variables showed no significant differences among treatments. Conclusions: In this exploratory study, neither the performance nor the welfare of steers was adversely affected by slats, differing mat types or wood-chip as underfoot material.

    Francisella tularensis novicida proteomic and transcriptomic data integration and annotation based on semantic web technologies

    This paper summarises the lessons and experiences gained from a case study of the application of semantic web technologies to the integration of data from the bacterial species Francisella tularensis novicida (Fn). Fn data sources are disparate and heterogeneous, as multiple laboratories across the world, using multiple technologies, perform experiments to understand the mechanism of virulence. It is hard to integrate these data sources in a flexible manner that allows new experimental data to be added and compared when required.
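    A minimal sketch of this style of integration using rdflib: two kinds of experimental observations for one Fn gene are loaded into a single RDF graph and queried together with SPARQL. The namespace, predicates, identifiers and values are hypothetical placeholders, not the schema used in the paper.

```python
# Minimal semantic-web-style integration sketch with rdflib; all names below
# are invented for illustration.
from rdflib import Graph, Literal, Namespace, URIRef

EX = Namespace("http://example.org/fn/")

g = Graph()

# Two "sources": a proteomics and a transcriptomics observation for one gene.
gene = URIRef(EX["FTN_1254"])
g.add((gene, EX.proteinAbundance, Literal(2.4)))
g.add((gene, EX.transcriptFoldChange, Literal(3.1)))

# New experimental data can later be merged into the same graph, e.g.:
# g.parse("new_experiment.ttl", format="turtle")

# One SPARQL query then spans both data types.
query = """
PREFIX ex: <http://example.org/fn/>
SELECT ?gene ?prot ?rna WHERE {
    ?gene ex:proteinAbundance ?prot ;
          ex:transcriptFoldChange ?rna .
}
"""
for row in g.query(query):
    print(row.gene, row.prot, row.rna)
```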

    Facilitating the development of controlled vocabularies for metabolomics technologies with text mining

    BACKGROUND: Many bioinformatics applications rely on controlled vocabularies or ontologies to consistently interpret and seamlessly integrate information scattered across public resources. Experimental data sets from metabolomics studies need to be integrated with one another, but also with data produced by other types of omics studies in the spirit of systems biology, hence the pressing need for vocabularies and ontologies in metabolomics. However, it is time-consuming and non-trivial to construct these resources manually. RESULTS: We describe a methodology for rapid development of controlled vocabularies, a study originally motivated by the need for vocabularies describing metabolomics technologies. We present case studies involving two controlled vocabularies (for nuclear magnetic resonance spectroscopy and gas chromatography) whose development is currently underway as part of the Metabolomics Standards Initiative. The initial vocabularies were compiled manually, providing a total of 243 and 152 terms, respectively. A total of 5,699 and 2,612 new terms were acquired automatically from the literature. The analysis of the results showed that full-text articles (especially the Materials and Methods sections), rather than paper abstracts, are the major source of technology-specific terms. CONCLUSIONS: We suggest a text mining method for efficient corpus-based term acquisition as a way of rapidly expanding a set of controlled vocabularies with the terms used in the scientific literature. We adopted an integrative approach, combining relatively generic software and data resources for time- and cost-effective development of a text mining tool for expanding controlled vocabularies across various domains, as a practical alternative to both manual term collection and tailor-made named entity recognition methods.
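    As a toy sketch of corpus-based term acquisition in this spirit (not the paper's actual pipeline, which combines generic text mining software and data resources), frequent multi-word candidates can be harvested from Materials and Methods text and filtered against an existing seed vocabulary; the text and vocabulary below are invented examples.

```python
# Toy corpus-based term acquisition: collect frequent bigrams from a Methods
# passage and keep those not already present in the seed controlled vocabulary.
import re
from collections import Counter

seed_vocabulary = {"nuclear magnetic resonance", "chemical shift"}

methods_text = """
Spectra were acquired on a 600 MHz spectrometer. Chemical shift values were
referenced to TSP. Pulse sequence parameters and relaxation delay were set
as described. The relaxation delay was 2 s and the pulse sequence was noesypr1d.
"""

tokens = re.findall(r"[a-z][a-z0-9-]+", methods_text.lower())
bigrams = Counter(zip(tokens, tokens[1:]))

candidates = [
    " ".join(pair)
    for pair, count in bigrams.most_common()
    if count >= 2 and " ".join(pair) not in seed_vocabulary
]
print(candidates)   # e.g. ['pulse sequence', 'relaxation delay']
```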

    Design and utilization of epitope-based databases and predictive tools

    In the last decade, significant progress has been made in expanding the scope and depth of publicly available immunological databases and online analysis resources, which have become an integral part of the repertoire of tools available to the scientific community for basic and applied research. Herein, we present a general overview of the different resources and databases currently available. Because of our association with the Immune Epitope Database and Analysis Resource, this resource is reviewed in more detail. Our review includes aspects such as the development of formal ontologies and the type and breadth of analytical tools available to predict epitopes and analyze immune epitope data. A common feature of immunological databases is the requirement to host large amounts of data extracted from disparate sources. Accordingly, we discuss and review processes to curate the immunological literature, as well as examples of how the curated data can be used to generate a meta-analysis of the epitope knowledge currently available for diseases of worldwide concern, such as influenza and malaria. Finally, we review the impact of immunological databases by analyzing their usage and citations and by categorizing the types of citations. Taken together, the results highlight the growing impact and utility of immunological databases for the scientific community.
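    To give a flavour of the simplest class of epitope prediction tools such resources host, the sketch below scores a protein with a sliding-window propensity average; the per-residue scale and the protein sequence are invented for illustration and are not a published scale or a real antigen.

```python
# Toy sliding-window scorer in the spirit of simple linear B-cell epitope
# propensity methods; the per-residue values below are hypothetical.
HYDROPHILICITY = {
    "A": 0.1, "R": 1.8, "N": 1.2, "D": 1.5, "C": -0.9, "E": 1.4, "Q": 1.1,
    "G": 0.0, "H": 0.5, "I": -1.3, "L": -1.2, "K": 1.6, "M": -0.8, "F": -1.4,
    "P": 0.3, "S": 0.4, "T": 0.2, "W": -1.0, "Y": -0.7, "V": -1.1,
}

def window_scores(sequence: str, window: int = 7):
    """Average propensity over each window; higher scores suggest candidate epitopes."""
    for i in range(len(sequence) - window + 1):
        segment = sequence[i:i + window]
        score = sum(HYDROPHILICITY[aa] for aa in segment) / window
        yield i, segment, score

protein = "MKRDENQSTLIVKKEDAGRN"   # invented example sequence
best = max(window_scores(protein), key=lambda t: t[2])
print(best)   # (start_index, peptide, score) of the top-scoring window
```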

    Functional impairment of systemic scleroderma patients with digital ulcerations: Results from the DUO registry


    Demographic, clinical and antibody characteristics of patients with digital ulcers in systemic sclerosis: data from the DUO Registry

    OBJECTIVES: The Digital Ulcers Outcome (DUO) Registry was designed to describe the clinical and antibody characteristics, disease course and outcomes of patients with digital ulcers associated with systemic sclerosis (SSc). METHODS: The DUO Registry is a European, prospective, multicentre, observational registry of SSc patients with ongoing digital ulcer disease, irrespective of treatment regimen. Data collected included demographics, SSc duration, SSc subset, internal organ manifestations, autoantibodies, and previous and ongoing interventions and complications related to digital ulcers. RESULTS: Up to 19 November 2010, a total of 2439 patients had been enrolled in the registry. Most were classified as either limited cutaneous SSc (lcSSc; 52.2%) or diffuse cutaneous SSc (dcSSc; 36.9%). Digital ulcers developed earlier in patients with dcSSc than in those with lcSSc. Almost all patients (95.7%) tested positive for antinuclear antibodies, 45.2% for anti-scleroderma-70 and 43.6% for anticentromere antibodies (ACA). The first digital ulcer occurred approximately 5 years earlier in the anti-scleroderma-70-positive cohort than in the ACA-positive group. CONCLUSIONS: This study provides data from a large cohort of SSc patients with a history of digital ulcers. The early occurrence and high frequency of digital ulcer complications are seen especially in patients with dcSSc and/or anti-scleroderma-70 antibodies.

    Drug-induced amino acid deprivation as strategy for cancer therapy
