
    A Practical Framework for Storing and Searching Encrypted Data on Cloud Storage

    Security has become a significant concern with the increased popularity of cloud storage services: user data outsourced from local storage to a cloud server is vulnerable to access by third parties, and confidentiality concerns persist even after data is deleted from cloud storage. A further problem arises when encrypted data needs to be shared with more people than the data owner initially designated. Searching over encrypted data is therefore a fundamental challenge in cloud storage. Searchable encryption (SE) allows a cloud server to conduct a search over encrypted data on behalf of data users without learning the underlying plaintexts. While many academic SE schemes offer provable security, they usually expose some query information, making them less practical, weak in usability, and challenging to deploy. Moreover, sharing encrypted data with other authorized users requires providing each document's secret key, an approach with many limitations owing to the difficulty of key management and distribution. We have designed a system using existing cryptographic approaches that supports search on encrypted data in the cloud. The primary focus of our proposed model is to ensure user privacy and security through a less computationally intensive, user-friendly system with a trusted third-party entity. To demonstrate the model, we have implemented a web application called CryptoSearch as an overlay system on top of a well-known cloud storage service. It exhibits secure search on encrypted data with no compromise to user-friendliness or the scheme's functional performance in real-world applications.
    Comment: 146 pages, Master's thesis, 6 chapters, 96 figures, 11 tables
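    The keyword-trapdoor idea at the core of symmetric searchable encryption can be sketched in a few lines. This is a minimal illustration with hypothetical names, not the thesis's actual CryptoSearch construction, and it deliberately exhibits the kind of leakage the abstract mentions: the server learns the access pattern (which documents match a query) even though it never sees a keyword in the clear.

```python
import hashlib
import hmac

def trapdoor(key: bytes, keyword: str) -> bytes:
    """Deterministic token the client derives for a keyword; the server
    only ever sees this PRF output, never the keyword itself."""
    return hmac.new(key, keyword.lower().encode(), hashlib.sha256).digest()

def build_index(key: bytes, docs: dict) -> dict:
    """Client-side: map each keyword trapdoor to the ids of the documents
    containing it. Only trapdoors and opaque doc ids reach the server."""
    index = {}
    for doc_id, keywords in docs.items():
        for kw in keywords:
            index.setdefault(trapdoor(key, kw), set()).add(doc_id)
    return index

def server_search(index: dict, token: bytes) -> set:
    """Server-side: a blind dictionary lookup. The server learns which
    (encrypted) documents match, but not what the queried keyword was."""
    return index.get(token, set())

key = b"client-secret-key"  # hypothetical client key, never sent to the server
index = build_index(key, {"doc1": ["cloud", "security"], "doc2": ["cloud"]})
assert server_search(index, trapdoor(key, "cloud")) == {"doc1", "doc2"}
```

A real deployment would additionally encrypt the documents themselves and pad or randomize the index to limit the access-pattern leakage shown here.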

    Secure monitoring system for industrial internet of things using searchable encryption, access control and machine learning

    This thesis is an alternative-format submission comprising a set of publications, a comprehensive literature review, an introduction, and a conclusion. Continuous compliance with data protection legislation at many levels of the Industrial Internet of Things (IIoT) is a significant challenge. Automated continuous compliance should also support adaptable security compliance management for multiple users: the IIoT should automate compliance with corporate rules, regulations, and regulatory frameworks for industrial applications. This thesis therefore aims to improve continuous compliance by introducing an edge-server architecture that incorporates searchable encryption with multi-authority access, providing access to useful data for the various stakeholders in the compliance domain. We propose an edge lightweight searchable attribute-based encryption system (ELSA). ELSA leverages a cloud-edge architecture to improve search time beyond a previous state-of-the-art encryption solution. The main contributions of the first paper are as follows. First, we present an untrusted-cloud/trusted-edge architecture that processes data efficiently and optimises decision-making in the IIoT context. Second, we improved search performance over the current state of the art (LSABE-MA) by an order of magnitude. We achieved this by storing keywords only on the trusted edge server and introducing a query optimiser to achieve better-than-linear search performance. The query optimiser uses k-means clustering to improve the efficiency of range queries, removing the need for a linear search. As a result, we achieved higher performance without sacrificing result accuracy. In the second paper, we extended ELSA to illustrate the correlation between the number of keywords and ELSA's performance.
This extension supports annotating records with multiple keywords in trapdoor and record storage and enables a record to be returned by single-keyword queries. The experiments demonstrated the scalability and efficiency of ELSA with an increasing number of keywords and increasing complexity. Based on the experimental results and feedback received from the publication and presentation of this work, we published our third technical paper, in which we improved ELSA by minimising the lookup-table size and summarising the data records using machine-learning (ML) methods suitable for execution at the edge. This integration removes records of unnecessary data by evaluating their added value to further processing, minimising the lookup-table size, cloud storage, and network traffic and taking full advantage of the edge architecture. We demonstrated the expanded method, mini-ELSA, on two well-known IIoT datasets. Our results reveal a reduction in storage requirements of > 21% while improving execution time by > 1.39× and search time by > 50%, and maintaining an optimal balance between prediction accuracy and space reduction. In addition, we present a computational complexity analysis that reinforces these experimental results.
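    The idea of using k-means clusters to avoid a linear scan on range queries can be sketched as follows. This is a toy one-dimensional illustration under assumed names, not ELSA's actual optimiser: each cluster is stored with its value span, and a range query scans only the clusters whose span intersects the queried range. Correctness does not depend on the clustering quality, only the speed-up does.

```python
import random

def kmeans_1d(values, k, iters=20):
    """Cluster scalar keyword values with plain k-means. Each returned
    cluster's [min, max] span lets range queries skip non-overlapping ones."""
    centroids = random.sample(values, k)
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            clusters[min(range(k), key=lambda i: abs(v - centroids[i]))].append(v)
        # Recompute centroids; keep the old one if a cluster went empty.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return clusters

def range_query(clusters, lo, hi):
    """Scan only clusters whose span intersects [lo, hi] rather than every
    record -- better than linear when the clusters are reasonably balanced."""
    hits = []
    for c in clusters:
        if c and max(c) >= lo and min(c) <= hi:
            hits.extend(v for v in c if lo <= v <= hi)
    return sorted(hits)

random.seed(1)  # deterministic demo
clusters = kmeans_1d(list(range(100)), k=5)
assert range_query(clusters, 40, 49) == list(range(40, 50))
```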

    The Potential for Machine Learning Analysis over Encrypted Data in Cloud-based Clinical Decision Support - Background and Review

    This paper appeared at the 8th Australasian Workshop on Health Informatics and Knowledge Management (HIKM 2015), Sydney, Australia, January 2015. Conferences in Research and Practice in Information Technology (CRPIT), Vol. 164, Anthony Maeder and Jim Warren, Eds. Reproduction for academic, not-for-profit purposes permitted provided this text is included.
    In an effort to reduce the risk of sensitive-data exposure in untrusted networks such as the public cloud, increasing attention has recently been given to encryption schemes that allow specific computations to be performed on encrypted data, without the need for decryption. These schemes rely on the fact that some encryption algorithms display the property of homomorphism, which allows data to be manipulated in a meaningful way while still in encrypted form. Such a framework would find particular relevance in Clinical Decision Support (CDS) applications deployed in the public cloud. CDS applications play an important computational and analytical role over confidential healthcare information, with the aim of supporting decision-making in clinical practice. This review examines the history and current status of homomorphic encryption and its potential for preserving the privacy of patient data underpinning cloud-based CDS applications.
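    The homomorphic property described above can be illustrated with a toy additively homomorphic masking scheme (one-time additive masks modulo a large number, as used in secure-aggregation protocols). All names and values here are illustrative; a real CDS deployment would use a scheme such as Paillier or a fully homomorphic scheme, which this sketch does not implement.

```python
import secrets

MOD = 2**61 - 1  # modulus; plaintext sums must stay below it

def encrypt(m: int, key: int) -> int:
    """Mask a value with a fresh uniformly random one-time key."""
    return (m + key) % MOD

def decrypt(c: int, key: int) -> int:
    """Remove the mask (or, for a sum, the sum of the masks)."""
    return (c - key) % MOD

# A client encrypts each (hypothetical) blood-pressure reading separately.
readings = [120, 135, 128]
keys = [secrets.randbelow(MOD) for _ in readings]
cts = [encrypt(m, k) for m, k in zip(readings, keys)]

# The untrusted server adds ciphertexts without seeing any reading:
# additive homomorphism means the sum of ciphertexts encrypts the sum.
ct_sum = sum(cts) % MOD

# The client removes the combined mask to recover the true sum, 383.
assert decrypt(ct_sum, sum(keys) % MOD) == sum(readings)
```

The design point is that the server computes a clinically useful aggregate while each individual ciphertext remains information-theoretically hidden, provided each mask is used once.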

    Privacy in the Genomic Era

    Genome sequencing technology has advanced at a rapid pace, and it is now possible to generate highly detailed genotypes inexpensively. The collection and analysis of such data has the potential to support various applications, including personalized medical services. While the benefits of the genomics revolution are trumpeted by the biomedical community, the increased availability of such data has major implications for personal privacy, notably because the genome has certain essential features, which include (but are not limited to) (i) an association with traits and certain diseases, (ii) identification capability (e.g., forensics), and (iii) revelation of family relationships. Moreover, direct-to-consumer DNA testing increases the likelihood that genome data will be made available in less regulated environments, such as the Internet and for-profit companies. The problem of genome data privacy thus resides at the crossroads of computer science, medicine, and public policy. While computer scientists have addressed data privacy for various data types, less attention has been dedicated to genomic data. The goal of this paper is therefore to provide a systematization of knowledge for the computer science community. In doing so, we address some of the (sometimes erroneous) beliefs in this field and report on a survey we conducted about genome data privacy with biomedical specialists. Then, after characterizing the genome privacy problem, we review the state of the art regarding privacy attacks on genomic data and strategies for mitigating such attacks, contextualizing these attacks from the perspective of medicine and public policy. The paper concludes with an enumeration of the challenges for genome data privacy and presents a framework to systematize the analysis of threats and the design of countermeasures as the field moves forward.

    Standard operating procedures (SOPS) for health and demographic research data quality assurance: the case of VADU HDSS site

    A research report submitted to the Faculty of Health Sciences, University of the Witwatersrand, in partial fulfilment of the requirements for the degree of Master of Science in Epidemiology (Research Data Management), Johannesburg, September 2016.
    The idea of data quality assurance and security control is to monitor the quality of research data generated from any research activity. This consists of a thorough collection of documentation regarding all aspects of the research. Data management procedures in health and demographic research constantly change or emerge through the iterative processes of data collection and analysis, requiring the investigator to make frequent decisions that can alter the course of the study. As a result, audit trails that provide justification for these decisions are vital for future analysis. The audit trail provides a mechanism for retroactive assessment of the conduct of the inquiry and a means to address issues related to the authenticity of the research datasets. This research seeks to develop an Information Assurance Policy and Standard Operating Procedures (SOPs) for the Vadu Health and Demographic Surveillance System site, using the ISACA/COBIT 5 family of products and the ISO/IEC ISMS standards as benchmarks. The work proposes data assurance and security controls and measures for any given research project. To develop such SOPs, there was a need to identify existing gaps and inconsistencies within the data management life cycle at the VRHP site, allowing us to establish the areas of focus for the SOPs. We used an interview-based approach to identify the existing gaps associated with the data management life cycle at the VRHP site. The study population included key members of the data management team. The study was conducted using a self-administered questionnaire with structured and open-ended questions.
A purposive sampling method was used to enrol 21 data management team members, consisting of 13 Field Research Assistants, 4 Field Research Supervisors, 1 Field Coordinator, 1 Software Application Developer, 1 Head of Data Management, and 1 Data Manager. Unstructured interviews were conducted to gather information on the members' respective roles and responsibilities and to ensure maximally open interactions. Data gathering and analysis were done concurrently. Two themes arose from the data: current lapses in data collection at Vadu HDSS and current lapses in data management at Vadu HDSS. The response rate was 95.5%. We adopted the ISACA/COBIT 5 guidelines and the ISO/IEC ISMS standards as benchmarks to develop SOPs that guide data management life cycle activities in enforcing data quality assurance. We also included guidelines that can be used to replicate the SOPs at other research institutions.

    COSPO/CENDI Industry Day Conference

    The conference's objective was to provide a forum where government information managers and industry information technology experts could have an open exchange, discuss their respective needs, and compare them to available, or soon-to-be-available, solutions. Technical summaries and points of contact are provided for the following sessions: secure products, protocols, and encryption; information providers; electronic document management and publishing; information indexing, discovery, and retrieval (IIDR); automated language translators; IIDR - natural language capabilities; IIDR - advanced technologies; IIDR - distributed, heterogeneous, and large database support; and communications - speed, bandwidth, and wireless.

    Network Access Control: Disruptive Technology?

    Network Access Control (NAC) implements policy-based access control to the trusted network. It regulates entry to the network through health verifiers and policy control points to mitigate the introduction of malicious software. However, current versions of NAC may not be the universal remedy to endpoint security that many vendors tout. Many organizations that are evaluating the technology, but have not yet deployed a solution, believe that NAC presents an opportunity for severe disruption of their networks. A cursory examination of the technologies used and how they are deployed in the network appears to support this argument: the addition of NAC components can make the network architecture even more complex and subject to failure. However, one recent survey of organizations that have deployed a NAC solution indicates that the 'common wisdom' about NAC may not be correct.

    Data-CASE: Grounding Data Regulations for Compliant Data Processing Systems

    Data regulations, such as the GDPR, are increasingly being adopted globally to protect against unsafe data management practices. Such regulations are often ambiguous (with multiple valid interpretations) when it comes to defining the expected dynamic behavior of data processing systems. This paper argues that it is possible to represent regulations such as the GDPR formally as invariants over a (small set of) data processing concepts that capture system behavior. When such concepts are grounded, i.e., given a single unambiguous interpretation, systems can achieve compliance by demonstrating that the system actions they implement maintain the invariants (representing the regulations). To illustrate our vision, we propose Data-CASE, a simple yet powerful model that (a) captures key data processing concepts and (b) provides a set of invariants that describe regulations in terms of these concepts. We further illustrate the concept of grounding using "deletion" as an example and highlight several ways in which end-users, companies, and software designers/engineers can use Data-CASE.
    Comment: To appear in EDBT '2
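    What "an invariant over a grounded concept" might look like can be sketched in a few lines. This is not Data-CASE's actual model; all names here are hypothetical. Under one unambiguous interpretation of "deletion" (the record is gone from every store the system controls), compliance becomes a checkable property of system state.

```python
from dataclasses import dataclass, field

@dataclass
class Store:
    """Hypothetical system state: a primary database, a derived copy
    (e.g., a cache or backup), and a log of honored erasure requests."""
    primary: dict = field(default_factory=dict)
    derived: dict = field(default_factory=dict)
    erased: list = field(default_factory=list)

    def delete(self, user_id: str) -> None:
        # A grounded "deletion": remove the record everywhere, then log it.
        self.primary.pop(user_id, None)
        self.derived.pop(user_id, None)
        self.erased.append(user_id)

def deletion_invariant(s: Store) -> bool:
    """One unambiguous reading of an erasure obligation: no store the
    system controls retains a record for any logged erasure request."""
    return all(uid not in s.primary and uid not in s.derived
               for uid in s.erased)

s = Store(primary={"alice": "profile"}, derived={"alice": "cached"})
s.delete("alice")
assert deletion_invariant(s)
```

Demonstrating that every system action preserves such an invariant is, in this framing, what it means for the system to be compliant with the regulation the invariant encodes.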

    Utilization of a Concurrent Query Form to Improve Clinical Documentation in a VA Facility for Patients With Stroke or TIA

    Caring for patients diagnosed with acute ischemic stroke (AIS) or transient ischemic attack (TIA) at Veterans Health Administration (VHA) acute care hospitals is a very complex process that centers on accurate documentation. Inaccurate or missing documentation leads to patient safety issues, lower-quality care, and inaccurate Veteran Equitable Resource Allocation (VERA) classification for reimbursement. This pilot project's three problems of interest are improving provider response to clinical queries about documentation, capturing national metrics collected by the VHA, and accurately representing veterans in VERA classification. Based on a review of the literature on patient treatment file (PTF) accuracy and clinical documentation improvement, the researcher used a three-pronged intervention with a data collection and management plan. Data were abstracted from 97 (N = 97) AIS and TIA patient treatment files from calendar years 2015 to 2019, compared with prospective data collected over a period of 3 months, and analyzed for statistical and clinical significance. The results of this pilot project included increases in provider response to queries, captured metrics, and VERA classification of veterans that satisfy clinical documentation integrity according to VHA directives.
    Keywords: RN-led CDI program, clinical documentation improvement specialist, clinical and financial CDI outcomes, clinical documentation improvement model