
    Privacy-Preserving Data Integration for Health

    The digital transformation of health processes has resulted in the collection of vast amounts of health-related data that presents significant potential to support medical research and improve the healthcare system. Many of these possibilities arise from integrating data from different sources to create an accurate, unified representation of the underlying data and to enable detailed analyses that are not possible through any individual source. Achieving this vision requires the collection and processing of sensitive health-related data about individuals, so privacy and confidentiality implications must be considered. In this paper, I describe my doctoral research topic: the design and development of a novel Privacy-Preserving Data Integration (PPDI) framework that aims to effectively address the challenges and opportunities of integrating Big Health Data (BHD) while ensuring compliance with the General Data Protection Regulation (GDPR). The paper describes the planned methodology for implementing the PPDI process through the use of data pseudonymization techniques and Privacy-Preserving Record Linkage (PPRL) methods, and provides an overview of the new framework, which is based on the re-implementation of MOMIS towards a microservices architecture with added PPDI functionalities.
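
    The abstract names pseudonymization and PPRL as building blocks but does not fix a concrete method. Below is a minimal sketch of one widely used PPRL primitive, Bloom-filter encoding of quasi-identifiers compared via Dice similarity; the parameters and example values are illustrative assumptions, not taken from the framework.

```python
import hashlib

def bigrams(value: str) -> set[str]:
    """Normalize a string and split it into character 2-grams."""
    v = f"_{value.strip().lower()}_"
    return {v[i:i + 2] for i in range(len(v) - 1)}

def bloom_encode(value: str, m: int = 256, k: int = 4) -> int:
    """Encode a quasi-identifier as an m-bit Bloom filter, k hash slots per bigram."""
    bits = 0
    for gram in bigrams(value):
        for seed in range(k):
            h = hashlib.sha256(f"{seed}:{gram}".encode()).digest()
            bits |= 1 << (int.from_bytes(h[:4], "big") % m)
    return bits

def dice_similarity(a: int, b: int) -> float:
    """Dice coefficient of two encodings; values near 1.0 suggest the same person."""
    common = (a & b).bit_count()
    total = a.bit_count() + b.bit_count()
    return 2 * common / total if total else 0.0

# Each source encodes names locally and shares only the bit vectors.
print(dice_similarity(bloom_encode("Jane Smith"), bloom_encode("Jane Smyth")))
```

    Matching on encodings rather than clear-text identifiers is what lets two data custodians link records without exchanging personal data; threshold choice and hardening against frequency attacks are where real PPRL methods differ.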

    Managing long-term access to digital data objects: a metadata approach

    As society becomes increasingly reliant on information technology for data exchange and long-term data storage, the need for a system of data management to document and provide access to the 'societal memory' is becoming imperative. An examination of both the literature and current 'best practice' underlines the absence to date of a proven universal conceptual basis for digital data preservation. Examining the differences in nature and sources of origin between traditional 'print-based' and digital objects leads to a re-appraisal of current practices of data selection and preservation. The need to embrace past, present and future metadata developments in a rapidly changing environment is considered. Various hypotheses were formulated and supported regarding the similarities and differences required in selection criteria for different types of Digital Data Objects (DDOs), the ability to define universal threshold standards for a framework of metadata for digital data preservation, and the role of selection criteria in such a framework. The research uses Soft Systems Methodology (SSM) to investigate the potential of the metadata concept as the key to universal data management. Semi-structured interviews were conducted to explore the attitudes of information professionals in the United Kingdom towards the challenges facing information-dependent organisations attempting to preserve digital data over the long term: in particular, the nature of the DDOs being encountered by stakeholders; the reasons, policies, and procedures for preserving them; and a range of specific issues such as the role of metadata and access to, and rights management of, DDOs. The societal need for selection to ensure efficient long-term access is considered. Drawing on SSM modelling, this research develops a flexible, long-term management framework for digital data at a level higher than metadata, with selection as an essential component. The framework's conceptual feasibility has been examined from both financial and societal-benefit perspectives, together with the recognition of constraints. The super-metadata framework provides a possible systematic approach to managing a wide range of digital data in a variety of formats, created or owned by a spectrum of information-dependent organisations.
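
    The thesis describes a 'super-metadata' framework sitting above item-level metadata, with selection as a first-class component, but no schema is published in the abstract. A hypothetical sketch of what such a record might look like, with every field name invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class DigitalDataObject:
    """Item-level descriptive metadata (hypothetical fields)."""
    identifier: str
    format: str     # e.g. "application/pdf"
    creator: str
    rights: str     # access / rights-management statement

@dataclass
class SuperMetadataRecord:
    """Management-level record above item metadata, with selection built in."""
    ddo: DigitalDataObject
    selection_criteria: dict = field(default_factory=dict)    # e.g. {"evidential_value": "high"}
    retention_decision: str = "review"                        # keep / discard / review
    preservation_actions: list = field(default_factory=list)  # e.g. ["migrate", "checksum"]

rec = SuperMetadataRecord(
    ddo=DigitalDataObject("doc-001", "application/pdf", "Records Office", "internal"),
    selection_criteria={"legal_requirement": True, "reuse_likelihood": "medium"},
    retention_decision="keep",
)
```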

    Integration of TLS-derived Bridge Information Modeling (BrIM) with a Decision Support System (DSS) for digital twinning and asset management of bridge infrastructures

    In the current era of information and technology, the concept of Building Information Modeling (BIM) has made revolutionary changes to the design, construction, monitoring, and management of infrastructure assets, especially bridges. In the field of bridge engineering, Bridge Information Modeling (BrIM), a specific form of BIM, involves digital twinning of bridge assets combining geometrical information with non-geometrical inspection data. BrIM has demonstrated tremendous potential to substitute traditional paper-based documentation and handwritten reports with digital bridge documentation, allowing professionals and managers to execute bridge management more efficiently and effectively. However, concerns remain about the quality of the data acquired during BrIM development, and there is a lack of research on utilizing this information for remedial actions and decisions in a reliable Bridge Management System (BMS); such decisions remain mainly reliant on the knowledge and experience of the inspectors or asset managers involved and are therefore susceptible to a degree of subjectivity. To address these concerns, this paper presents a comprehensive methodology for an advanced asset management system that employs BrIM data to improve and facilitate the BMS. This BMS comprises a precise Terrestrial Laser Scan (TLS)-derived BrIM serving as a qualitative digital replica of the existing bridge, incorporating geometrical and non-geometrical information on the bridge elements, and is equipped with a requirement-driven framework in a redeveloped condition assessment model for priority ranking of bridge elements based on their health condition. As a further step, the proposed BMS integrates a Decision Support System (DSS) to score feasible remedial strategies and provide more objective decisions for optimal budget allocation and remedial planning. The methodology was implemented as a BrIM-oriented BMS plugin and validated through a real case study on the Werrington Bridge, a cable-stayed bridge in New South Wales, Australia. The findings of this research confirm the reliability of the BrIM-oriented BMS and of the integrated DSS for priority ranking of bridge elements that require attention, based on their structural importance and material vulnerability, and for optimizing remedial actions in a practical way while preserving the bridge in a safe and healthy condition.
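
    The abstract describes ranking bridge elements by health condition, structural importance, and material vulnerability. A minimal sketch of such a ranking as a weighted sum; the weights, element names, and scores are illustrative assumptions, not values from the paper:

```python
def priority_score(condition: float, importance: float, vulnerability: float,
                   weights: tuple[float, float, float] = (0.5, 0.3, 0.2)) -> float:
    """Rank a bridge element: higher score means it needs attention sooner.
    Inputs are normalized to [0, 1]; the weights are illustrative placeholders."""
    w_c, w_i, w_v = weights
    return w_c * condition + w_i * importance + w_v * vulnerability

elements = {
    "stay_cable_03": (0.8, 0.9, 0.6),  # poor condition, critical, vulnerable
    "deck_panel_12": (0.4, 0.6, 0.3),
    "pylon_base":    (0.2, 1.0, 0.2),
}
ranked = sorted(elements, key=lambda e: priority_score(*elements[e]), reverse=True)
print(ranked)  # elements in descending remedial priority
```

    In the paper's pipeline, a DSS would then score candidate remedial strategies against this ranking and the available budget; the weighted sum above only stands in for the condition assessment step.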

    Economics and Engineering for Preserving Digital Content

    Progress towards practical long-term preservation seems to be stalled. Preservationists cannot afford specially developed technology, but must exploit what is created for the marketplace. Economic and technical facts suggest that most preservation work should be shifted from repository institutions to information producers and consumers. Prior publications describe solutions for all known conceptual challenges of preserving a single digital object, but do not deal with software development or scaling to large collections. Much of the document handling software needed is available; it has, however, not yet been selected, adapted, integrated, or deployed for digital preservation. The daily tools of both information producers and information consumers can be extended to embed preservation packaging without unduly burdening these users. We describe a practical strategy for detailed design and implementation. Document handling is intrinsically complicated because of human sensitivity to communication nuances. Our engineering section therefore starts by discussing how project managers can master the many pertinent details.
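
    As one way to picture the 'preservation packaging' step the authors propose embedding in everyday tools, here is a hypothetical sketch that bundles a document, its metadata, and a fixity checksum into a single package at save time; the layout and field names are assumptions, not the authors' design:

```python
import hashlib
import json
import zipfile
from pathlib import Path

def package_for_preservation(doc: Path, out: Path, metadata: dict) -> None:
    """Bundle a document with its metadata and a fixity checksum into one package,
    as a save-time step a producer's editor could perform automatically."""
    payload = doc.read_bytes()
    manifest = dict(metadata,
                    source=doc.name,
                    sha256=hashlib.sha256(payload).hexdigest())
    with zipfile.ZipFile(out, "w") as z:
        z.writestr(doc.name, payload)
        z.writestr("manifest.json", json.dumps(manifest, indent=2))

# e.g. invoked by an editor plugin whenever the user saves:
# package_for_preservation(Path("report.pdf"), Path("report.preservation.zip"),
#                          {"creator": "j.doe", "created": "2009-05-01"})
```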

    Critique of Architectures for Long-Term Digital Preservation

    Evolving technology and fading human memory threaten the long-term intelligibility of many kinds of documents. Furthermore, some records are susceptible to improper alterations that make them untrustworthy. Trusted Digital Repositories (TDRs) and Trustworthy Digital Objects (TDOs) seem to be the only broadly applicable digital preservation methodologies proposed. We argue that the TDR approach has shortfalls as a method for long-term digital preservation of sensitive information. Comparison of the TDR and TDO methodologies suggests differentiating near-term preservation measures from what is needed for the long term. TDO methodology addresses these needs, providing for making digital documents durably intelligible. It uses EDP standards for a few file formats and XML structures for text documents. For other information formats, intelligibility is assured by using a virtual computer. To protect sensitive information, that is, content whose inappropriate alteration might mislead its readers, the integrity and authenticity of each TDO is made testable by embedded public-key cryptographic message digests and signatures. Key authenticity is protected recursively in a social hierarchy. The proper focus for long-term preservation technology is signed packages that each combine a record collection with its metadata and that also bind context: Trustworthy Digital Objects.
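
    The TDO integrity mechanism described here, embedded message digests and public-key signatures binding a record to its metadata, can be sketched as follows. This is a minimal illustration using RSA-PSS via the Python `cryptography` package, not the paper's exact packaging format:

```python
import hashlib
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

PSS = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

def seal_package(record: bytes, metadata: bytes, key: rsa.RSAPrivateKey) -> dict:
    """Bind a record to its metadata with a digest and a public-key signature,
    in the spirit of a Trustworthy Digital Object (layout illustrative)."""
    return {
        "record": record,
        "metadata": metadata,
        "sha256": hashlib.sha256(record + metadata).hexdigest(),
        "signature": key.sign(record + metadata, PSS, hashes.SHA256()),
    }

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
tdo = seal_package(b"scanned contract", b"context: 1998 land registry", key)

# Any later reader can test integrity and authenticity against the signer's
# public key; verify() raises InvalidSignature if the package was altered.
key.public_key().verify(tdo["signature"], tdo["record"] + tdo["metadata"],
                        PSS, hashes.SHA256())
```

    The recursive key-authenticity protection the abstract mentions would chain such signatures through a social hierarchy of signers, which this sketch does not attempt to show.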

    Interpretable Machine Learning for Privacy-Preserving Pervasive Systems

    Our everyday interactions with pervasive systems generate traces that capture various aspects of human behavior and enable machine learning algorithms to extract latent information about users. In this paper, we propose a machine learning interpretability framework that enables users to understand how these generated traces violate their privacy.
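
    As a toy illustration of the idea, interpretable attributions showing which trace features let a model infer something private, consider a linear model whose coefficients serve as explanations. The trace features and data below are synthetic and purely illustrative; the paper's actual framework is not specified in this abstract:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = ["night_activity", "gym_checkins", "avg_typing_speed"]  # invented trace features
X = rng.normal(size=(500, 3))
# Synthetic "private attribute" correlated with the first two trace features.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Per-feature attribution: which behavioral signals expose the attribute?
for name, weight in sorted(zip(features, model.coef_[0]), key=lambda t: -abs(t[1])):
    print(f"{name:>18}: {weight:+.2f}")
```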

    A Computation Core for Communication Refinement of Digital Signal Processing Algorithms

    The most popular formulation of Moore's law, which states that the number of transistors on integrated circuits doubles every 18 months, is said to hold for at least another two decades. According to this prediction, if we want to take advantage of technological evolution, designer productivity has to increase in the same proportion. To take up this challenge, system-level design solutions have been set up, but much work remains to be done on system modelling and synthesis. In this paper we propose a computation core synthesis methodology that can be integrated into the communication refinement steps of electronic system-level design tools. In the proposed approach, computation cores used in digital signal processing application specifications that rely on coarse-grain communications and synchronizations (e.g. matrices) can be refined into computation cores that handle fine-grain communications and synchronizations (e.g. scalars). Its originality lies in its ability to synthesize computation cores whose fine-grain data consumptions and productions respect the intrinsic partial orders of the algorithms while preserving their original functionality. Such cores can be used to model fine-grain input/output overlapping or iteration pipelining. Our flow is based on the analysis of a fine-grain signal flow graph, which is used to extract fine-grain synchronizations and algorithmic expressions.
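
    To illustrate the coarse-to-fine refinement the paper targets, the hypothetical sketch below turns a matrix-level transfer into a stream of scalar tokens emitted in the producer's intrinsic partial order (row-major here), letting a consumer overlap input with computation; the naming and granularity choices are assumptions, not the paper's flow:

```python
from collections.abc import Iterator

def produce_scalars(matrix: list[list[float]]) -> Iterator[tuple[int, int, float]]:
    """Refine a coarse-grain matrix transfer into scalar tokens, emitted in
    the producer's intrinsic (row-major) partial order."""
    for i, row in enumerate(matrix):
        for j, value in enumerate(row):
            yield i, j, value

def row_sums(tokens: Iterator[tuple[int, int, float]]) -> list[float]:
    """Consumer that works on each scalar as it arrives, overlapping input
    with computation instead of waiting for the whole matrix."""
    sums: list[float] = []
    for i, _j, value in tokens:
        if i == len(sums):
            sums.append(0.0)
        sums[i] += value
    return sums

print(row_sums(produce_scalars([[1.0, 2.0], [3.0, 4.0]])))  # [3.0, 7.0]
```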

    Modelling long term digital preservation costs: a scientific data case study

    In recent years there has been increasing UK Government pressure on publicly funded researchers to plan the preservation of, and ensure long-term access to, their data. A critical challenge in implementing a digital preservation strategy is estimating such a programme's cost. This paper presents a case study based on the cost estimation of preserving scientific data produced at the ISIS facility, based at the Science and Technology Facilities Council (STFC) Rutherford Appleton Laboratory, UK. The model for cost estimation for long-term digital preservation is presented, along with an outline of the development and validation activities undertaken as part of this project. The framework and methodology from this research provide insight into the task of costing long-term digital preservation processes, and can potentially be adapted to deliver benefits to other organisations.
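
    As a generic illustration of the kind of model involved, not the STFC model itself, the sketch below estimates the net present cost of storage plus periodic media migration; every figure is an invented placeholder:

```python
def preservation_cost(tb: float, years: int,
                      storage_per_tb_year: float = 150.0,
                      migration_cost_per_tb: float = 40.0,
                      migration_interval: int = 5,
                      discount_rate: float = 0.035) -> float:
    """Net present cost of storing `tb` terabytes for `years` years, with a
    media migration every `migration_interval` years. Figures are illustrative."""
    total = 0.0
    for year in range(years):
        cost = tb * storage_per_tb_year
        if year and year % migration_interval == 0:
            cost += tb * migration_cost_per_tb  # periodic format/media migration
        total += cost / (1 + discount_rate) ** year  # discount to present value
    return total

print(f"20-year net present cost for 100 TB: £{preservation_cost(100, 20):,.0f}")
```

    Real models of this kind break costs down further by preservation activity (ingest, storage, access, migration) and by staff versus hardware; the discounted loop above captures only the basic structure.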