
    Grid Databases for Shared Image Analysis in the MammoGrid Project

    The MammoGrid project aims to prove that Grid infrastructures can be used for collaborative clinical analysis of database-resident but geographically distributed medical images. This requires: a) the provision of a clinician-facing front-end workstation and b) the ability to service real-world clinician queries across a distributed and federated database. The MammoGrid project will prove the viability of the Grid by harnessing its power to enable radiologists from geographically dispersed hospitals to share standardized mammograms, to compare diagnoses (with and without computer-aided detection of tumours) and to perform sophisticated epidemiological studies across national boundaries. This paper outlines the approach taken in MammoGrid to seamlessly connect radiologist workstations across a Grid using an "information infrastructure" and a DICOM-compliant object model residing in multiple distributed data stores in Italy and the UK. Comment: 10 pages, 5 figures.
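
    The abstract above describes servicing clinician queries across a federated set of geographically distributed image stores. The sketch below illustrates only the general fan-out-and-merge pattern, using in-memory stand-ins for two site databases; the site names, record fields and query function are invented for illustration and are not MammoGrid's actual Grid services or DICOM object model.

```python
# Hedged sketch (not the MammoGrid implementation): fan a clinician query out
# to several distributed, DICOM-like stores and merge the results.
from concurrent.futures import ThreadPoolExecutor

# Stand-ins for the distributed data stores (one per participating hospital).
SITE_STORES = {
    "site-uk": [
        {"patient_id": "UK-001", "modality": "MG", "laterality": "L", "cad_flag": True},
        {"patient_id": "UK-002", "modality": "MG", "laterality": "R", "cad_flag": False},
    ],
    "site-it": [
        {"patient_id": "IT-001", "modality": "MG", "laterality": "L", "cad_flag": True},
    ],
}

def query_site(site, predicate):
    """Run a clinician query against one site's store; in a real deployment this
    would be a remote call against that hospital's DICOM-compliant database."""
    return [dict(rec, site=site) for rec in SITE_STORES[site] if predicate(rec)]

def federated_query(predicate):
    """Fan the query out to all sites in parallel and merge the answers."""
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(query_site, s, predicate) for s in SITE_STORES]
        return [rec for f in futures for rec in f.result()]

if __name__ == "__main__":
    # Example: all mammograms with a positive computer-aided detection flag.
    for rec in federated_query(lambda r: r["modality"] == "MG" and r["cad_flag"]):
        print(rec["site"], rec["patient_id"], rec["laterality"])
```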

    GLUE: a flexible software system for virus sequence data

    Background: Virus genome sequences, generated in ever-higher volumes, can provide new scientific insights and inform our responses to epidemics and outbreaks. To facilitate interpretation, such data must be organised and processed within scalable computing resources that encapsulate virology expertise. GLUE (Genes Linked by Underlying Evolution) is a data-centric bioinformatics environment for building such resources. The GLUE core data schema organises sequence data along evolutionary lines, capturing not only nucleotide data but associated items such as alignments, genotype definitions, genome annotations and motifs. Its flexible design emphasises applicability to different viruses and to diverse needs within research, clinical or public health contexts. Results: HCV-GLUE is a case study GLUE resource for hepatitis C virus (HCV). It includes an interactive public web application providing sequence analysis in the form of a maximum-likelihood-based genotyping method, antiviral resistance detection and graphical sequence visualisation. HCV sequence data from GenBank is categorised and stored in a large-scale sequence alignment which is accessible via web-based queries. While this web resource provides a range of basic functionality, the underlying GLUE project can also be downloaded and extended by bioinformaticians addressing more advanced questions. Conclusion: GLUE can be used to rapidly develop virus sequence data resources with public health, research and clinical applications. This streamlined approach, with its focus on reuse, will help realise the full value of virus sequence data.
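
    As a rough illustration of the kind of data-centric schema described above, the sketch below models sequences, genome features and an alignment that can be queried along evolutionary lines (here, by genotype). The class and field names are invented for this example and do not reproduce GLUE's actual core schema.

```python
# Hedged sketch of a GLUE-like core schema: sequences linked to an alignment,
# genotype assignments and genome annotations. Field names are illustrative.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Sequence:
    accession: str            # e.g. a GenBank accession
    nucleotides: str
    genotype: Optional[str] = None

@dataclass
class GenomeFeature:
    name: str                 # e.g. a coding region
    start: int                # 1-based coordinates on the reference sequence
    end: int

@dataclass
class Alignment:
    reference: Sequence
    members: List[Sequence] = field(default_factory=list)
    features: List[GenomeFeature] = field(default_factory=list)

    def members_of_genotype(self, genotype: str) -> List[Sequence]:
        """Query the alignment by genotype assignment."""
        return [m for m in self.members if m.genotype == genotype]

# Usage: a tiny HCV-flavoured example.
ref = Sequence("H77", "ACGT" * 10, genotype="1a")
aln = Alignment(reference=ref,
                members=[Sequence("SEQ1", "ACGT" * 10, "1a"),
                         Sequence("SEQ2", "ACGA" * 10, "3a")],
                features=[GenomeFeature("NS5B", 25, 40)])
print([s.accession for s in aln.members_of_genotype("1a")])   # ['SEQ1']
```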

    Query Constraining Aspects of Knowledge

    Proceedings of the 18th Nordic Conference of Computational Linguistics (NODALIDA 2011). Editors: Bolette Sandford Pedersen, Gunta Nešpore and Inguna Skadiņa. NEALT Proceedings Series, Vol. 11 (2011), 279-282. © 2011 The editors and contributors. Published by the Northern European Association for Language Technology (NEALT), http://omilia.uio.no/nealt , and electronically published at Tartu University Library (Estonia): http://hdl.handle.net/10062/16955

    Big Data Privacy Context: Literature Effects On Secure Informational Assets

    This article's objective is to identify research opportunities in the current big data privacy domain by evaluating literature effects on secure informational assets. Until now, no study has analyzed this relation, and its results can foster science, technologies and businesses. To achieve these objectives, a big data privacy Systematic Literature Review (SLR) is performed on the main scientific peer-reviewed journals in the Scopus database. Bibliometrics and text mining analysis complement the SLR. This study provides support to big data privacy researchers on: most and least researched themes, research novelty, most cited works and authors, the evolution of themes through time, and many others. In addition, TOPSIS and VIKOR ranks were developed to evaluate literature effects versus informational-asset indicators, with Secure Internet Servers (SIS) chosen as the decision criterion. Results show that big data privacy literature is strongly focused on computational aspects. However, individuals, societies, organizations and governments face a technological change that has just started to be investigated, with growing concerns on law and regulation aspects. The TOPSIS and VIKOR ranks differed in several positions, and the only country consistent between the literature and SIS adoption is the United States. Countries in the lowest ranking positions represent future research opportunities. Comment: 21 pages, 9 figures.
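
    TOPSIS, one of the two ranking methods mentioned above, orders alternatives by their relative closeness to an ideal solution. The sketch below is a minimal, generic TOPSIS implementation with made-up scores, weights and criteria; it is not the article's decision matrix or its SIS data.

```python
# Hedged sketch of a TOPSIS ranking of the kind used to compare literature
# effects against informational-asset indicators. All numbers are placeholders.
import numpy as np

def topsis(matrix, weights, benefit):
    """matrix: alternatives x criteria; benefit[j] is True if higher is better."""
    m = np.asarray(matrix, dtype=float)
    w = np.asarray(weights, dtype=float)
    norm = m / np.linalg.norm(m, axis=0)          # vector-normalise each criterion
    v = norm * w                                  # weighted normalised matrix
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_best = np.linalg.norm(v - ideal, axis=1)    # distance to the ideal solution
    d_worst = np.linalg.norm(v - anti, axis=1)    # distance to the anti-ideal
    return d_worst / (d_best + d_worst)           # relative closeness (higher = better)

# Example: three countries scored on publication volume (benefit criterion)
# and a secure-internet-server gap (cost criterion).
scores = topsis([[120, 0.4], [80, 0.1], [60, 0.7]],
                weights=[0.6, 0.4],
                benefit=[True, False])
print(np.argsort(-scores))   # alternative indices in rank order
```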

    Symbolic modeling of structural relationships in the Foundational Model of Anatomy

    The need for a sharable resource that can provide deep anatomical knowledge and support inference for biomedical applications has recently been the driving force in the creation of biomedical ontologies. Previous attempts at the symbolic representation of anatomical relationships necessary for such ontologies have been largely limited to general partonomy and class subsumption. We propose an ontology of anatomical relationships beyond class assignments and generic part-whole relations and illustrate the inheritance of structural attributes in the Digital Anatomist Foundational Model of Anatomy. Our purpose is to generate a symbolic model that accommodates all structural relationships and physical properties required to comprehensively and explicitly describe the physical organization of the human body.
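
    To make the idea of structural relationships beyond partonomy and class subsumption concrete, the sketch below shows one possible symbolic encoding in which entities carry named relations (part_of, branch_of, adjacent_to, ...) and inherit structural attributes along is-a links. The classes, relations and attributes are invented for illustration and are not drawn from the Foundational Model of Anatomy itself.

```python
# Hedged sketch of symbolic structural relationships with attribute inheritance.
class AnatomicalEntity:
    def __init__(self, name, is_a=None, **attributes):
        self.name = name
        self.is_a = is_a                      # class subsumption link
        self.attributes = attributes          # structural attributes, e.g. has_wall
        self.relations = []                   # (relation, target) pairs

    def relate(self, relation, target):
        """Record a structural relation such as part_of, branch_of or adjacent_to."""
        self.relations.append((relation, target))

    def attribute(self, key):
        """Look the attribute up locally, else inherit it along the is-a chain."""
        if key in self.attributes:
            return self.attributes[key]
        return self.is_a.attribute(key) if self.is_a else None

# Usage: a named artery inherits 'has_wall' from the Artery class and carries
# an explicit branch_of relation in addition to its class assignment.
artery = AnatomicalEntity("Artery", has_wall=True)
lad = AnatomicalEntity("Left anterior descending artery", is_a=artery)
lad.relate("branch_of", AnatomicalEntity("Left coronary artery", is_a=artery))
print(lad.attribute("has_wall"))   # True, inherited from Artery
```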

    LinkEHR-Ed: A multi-reference model archetype editor based on formal semantics

    Purpose: To develop a powerful archetype editing framework capable of handling multiple reference models and oriented towards the semantic description and standardization of legacy data. Methods: The main prerequisite for implementing tools providing enhanced support for archetypes is the clear specification of archetype semantics. We propose a formalization of the definition section of archetypes based on types over tree-structured data. It covers the specialization of archetypes, the relationship between reference models and archetypes, and the conformance of data instances to archetypes. Results: LinkEHR-Ed, a visual archetype editor based on this formalization with advanced processing capabilities, is developed; it supports multiple reference models, the editing and semantic validation of archetypes, the specification of mappings to data sources, and the automatic generation of data transformation scripts. Conclusions: LinkEHR-Ed is a useful tool for building, processing and validating archetypes based on any reference model. This work was supported in part by the Spanish Ministry of Education and Science under grant TSI2007-66575-C02, by the Generalitat Valenciana under grant APOSTD/2007/055, and by the PAID-06-07 programme of the Universidad Politecnica de Valencia. Maldonado Segura, J.A.; Moner Cano, D.; Boscá Tomás, D.; Fernandez Breis, J.T.; Angulo Fernández, C.; Robles Viejo, M. (2009). LinkEHR-Ed: A multi-reference model archetype editor based on formal semantics. International Journal of Medical Informatics, 78(8), 559-570. https://doi.org/10.1016/j.ijmedinf.2009.03.006
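
    The formalization described above treats archetypes as types over tree-structured data, with data instances checked for conformance against them. The sketch below captures that idea in miniature: an archetype node constrains its children and leaf values, and a nested-dictionary instance either conforms or not. All names and constraints are invented for illustration and do not reflect LinkEHR-Ed's internal model.

```python
# Hedged sketch of archetype conformance over tree-structured data.
from dataclasses import dataclass, field
from typing import Any, Callable, Dict, List

@dataclass
class ArchetypeNode:
    name: str
    required: bool = True
    leaf_check: Callable[[Any], bool] = None          # constraint on leaf values
    children: List["ArchetypeNode"] = field(default_factory=list)

def conforms(instance: Dict[str, Any], node: ArchetypeNode) -> bool:
    """True if the tree-shaped data instance satisfies the archetype's constraints."""
    if node.leaf_check is not None:
        return node.leaf_check(instance)
    if not isinstance(instance, dict):
        return False
    for child in node.children:
        if child.name not in instance:
            if child.required:
                return False
            continue
        if not conforms(instance[child.name], child):
            return False
    return True

# Usage: a blood-pressure archetype constraining a simple record structure.
bp = ArchetypeNode("blood_pressure", children=[
    ArchetypeNode("systolic", leaf_check=lambda v: isinstance(v, int) and 0 < v < 300),
    ArchetypeNode("diastolic", leaf_check=lambda v: isinstance(v, int) and 0 < v < 200),
    ArchetypeNode("position", required=False,
                  leaf_check=lambda v: v in {"sitting", "standing", "lying"}),
])
print(conforms({"systolic": 120, "diastolic": 80}, bp))   # True
```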

    Knowledge formalization in experience feedback processes : an ontology-based approach

    Because of the current trend towards integration and interoperability of industrial systems, their size and complexity continue to grow, making it more difficult to analyze, understand and solve the problems that arise in their organizations. Continuous improvement methodologies are powerful tools for understanding and solving problems, controlling the effects of changes and, finally, capitalizing knowledge about changes and improvements. These tools require the knowledge relating to the system concerned to be suitably represented. Consequently, knowledge management (KM) is an increasingly important source of competitive advantage for organizations. In particular, the capitalization and sharing of knowledge resulting from experience feedback play an essential role in the continuous improvement of industrial activities. The contribution of this paper deals with semantic interoperability and relates to the structuring and formalization of an experience feedback (EF) process aimed at transforming information or understanding gained by experience into explicit knowledge. The reuse of such knowledge has proved to have a significant impact on achieving the missions of companies. However, the means of describing the knowledge objects of an experience generally remain informal. Based on an experience feedback process model and conceptual graphs, this paper takes a domain ontology as a framework for the clarification of explicit knowledge and know-how, the aim of which is to obtain lessons-learned descriptions that are significant, correct and applicable.
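
    As a toy illustration of turning experience feedback into explicit, queryable knowledge, the sketch below stores problems, analyses and lessons learned as subject-predicate-object statements over a small, invented vocabulary. It stands in for, but does not implement, the ontology- and conceptual-graph-based approach described above.

```python
# Hedged sketch of an experience-feedback knowledge base built from
# subject-predicate-object statements. Vocabulary and example are invented.
from collections import defaultdict

class ExperienceBase:
    def __init__(self):
        self.statements = set()
        self.by_predicate = defaultdict(set)

    def add(self, subject, predicate, obj):
        """Record one explicit statement about an experience."""
        self.statements.add((subject, predicate, obj))
        self.by_predicate[predicate].add((subject, obj))

    def lessons_for(self, problem):
        """Follow problem -> analysis -> lesson links to retrieve applicable lessons."""
        analyses = {o for s, o in self.by_predicate["analysed_by"] if s == problem}
        return {o for s, o in self.by_predicate["yields_lesson"] if s in analyses}

# Usage: capitalize one experience and query the lesson learned from it.
kb = ExperienceBase()
kb.add("pump_failure_2023_04", "occurred_in", "packaging_line_2")
kb.add("pump_failure_2023_04", "analysed_by", "root_cause_analysis_17")
kb.add("root_cause_analysis_17", "yields_lesson", "inspect_seals_every_500h")
print(kb.lessons_for("pump_failure_2023_04"))   # {'inspect_seals_every_500h'}
```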

    E-infrastructures fostering multi-centre collaborative research into the intensive care management of patients with brain injury

    Clinical research is becoming ever more collaborative, with multi-centre trials now common practice. With this in mind, never has it been more important to have secure access to data and, in so doing, to tackle the challenges of inter-organisational data access and usage. This is especially the case for research conducted within the brain injury domain, due to the complicated multi-trauma nature of the disease and its associated complex collation of time-series data of varying resolution and quality. It is now widely accepted that advances in treatment within this group of patients will only be delivered if the technical infrastructures underpinning the collection and validation of multi-centre research data for clinical trials are improved. In recognition of this need, IT-based multi-centre e-Infrastructures such as the Brain Monitoring with Information Technology group (BrainIT - www.brainit.org) and the Cooperative Study on Brain Injury Depolarisations (COSBID - www.cosbid.de) have been formed. A serious impediment to the effective implementation of these networks is access to the know-how and experience needed to install, deploy and manage the security-oriented middleware systems that provide secure access to distributed hospital-based datasets, and especially the linkage of these datasets across sites. The recently funded EU Framework VII ICT project Advanced Arterial Hypotension Adverse Event prediction through a Novel Bayesian Neural Network (AVERT-IT) is focused upon tackling these challenges. This chapter describes the problems inherent in data collection within the brain injury medical domain, the current IT-based solutions designed to address these problems, and how they perform in practice. We outline how the authors have collaborated towards developing Grid solutions to address the major technical issues, and we describe a prototype solution which ultimately formed the basis for the AVERT-IT project. Finally, we describe the design of the underlying Grid infrastructure for AVERT-IT and how it will be used to produce novel approaches to data collection, data validation and clinical trial design.
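
    One of the practical problems named above is pooling time-series data of varying resolution and quality across centres. The sketch below shows a generic harmonisation step under invented assumptions (site readings, sampling rates and plausibility limits are all made up): raw samples are range-checked and then re-binned onto a common interval before pooling. It is not the BrainIT or AVERT-IT validation pipeline.

```python
# Hedged sketch of multi-centre time-series validation and harmonisation.
from statistics import mean

ARTERIAL_BP_LIMITS = (20, 250)   # illustrative plausibility range in mmHg

def validate(samples, limits):
    """Split raw readings into plausible points and flagged artefacts."""
    lo, hi = limits
    ok = [(t, v) for t, v in samples if lo <= v <= hi]
    flagged = [(t, v) for t, v in samples if not (lo <= v <= hi)]
    return ok, flagged

def rebin(samples, bin_seconds):
    """samples: list of (seconds_from_admission, value); average values per bin."""
    bins = {}
    for t, v in samples:
        bins.setdefault(int(t // bin_seconds), []).append(v)
    return {b * bin_seconds: mean(vs) for b, vs in sorted(bins.items())}

# Site A records every 10 s, site B every 60 s; pool both at 60 s resolution.
site_a = [(0, 82), (10, 84), (20, 700), (60, 90)]     # 700 mmHg is an artefact
site_b = [(0, 78), (60, 81)]
for site, series in (("A", site_a), ("B", site_b)):
    ok, flagged = validate(series, ARTERIAL_BP_LIMITS)
    print(site, rebin(ok, 60), "flagged:", flagged)
```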