
    A novel Hash-Based File Clustering scheme for efficient distributing, storing and retrieving of large scale Health Records

    Cloud computing has been adopted as an efficient computing infrastructure model for provisioning resources and providing services to users. Several distributed resource models such as Hadoop and parallel databases have been deployed in healthcare-related services to manage electronic health records (EHRs). However, these models are inefficient at managing large numbers of small files and hence are not widely deployed in Healthcare Information Systems. This paper proposes a novel Hash-Based File Clustering (HBFC) scheme to distribute, store, and retrieve EHRs efficiently in cloud environments. The HBFC has two distinctive features: it uses hashing to distribute files into clusters in a controlled way, and it uses P2P structures for data management. The HBFC scheme is shown to be effective in handling big health data comprising a large number of small files in various formats, and it allows users to retrieve and access data records efficiently. Initial implementation results demonstrate that the proposed scheme outperforms the original P2P system in terms of data lookup latency.
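    The core idea lends itself to a short illustration. The sketch below is not the paper's implementation; under assumed names (`cluster_of`, `HashClusteredStore`) and an assumed cluster count, it only shows how hashing a file identifier can deterministically place a small EHR file into one of a fixed set of clusters, so that a later lookup touches a single cluster.

```python
# Illustrative sketch (not the HBFC implementation): each file identifier is
# hashed to one of a fixed number of clusters; lookups only touch that cluster.
import hashlib
from collections import defaultdict

NUM_CLUSTERS = 16  # assumed, tunable parameter

def cluster_of(file_id: str) -> int:
    """Map a file identifier to a cluster index via a stable hash."""
    digest = hashlib.sha256(file_id.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") % NUM_CLUSTERS

class HashClusteredStore:
    """Toy in-memory store that groups small files by hash-derived cluster."""
    def __init__(self):
        self.clusters = defaultdict(dict)  # cluster index -> {file_id: bytes}

    def put(self, file_id: str, data: bytes) -> int:
        c = cluster_of(file_id)
        self.clusters[c][file_id] = data
        return c

    def get(self, file_id: str) -> bytes:
        # Retrieval hashes the identifier again and reads only that cluster.
        return self.clusters[cluster_of(file_id)][file_id]

store = HashClusteredStore()
store.put("patient-001/lab-2021-03.json", b'{"hb": 13.5}')
print(store.get("patient-001/lab-2021-03.json"))
```

    In the paper's setting, each cluster would map to a group of P2P nodes rather than an in-memory dictionary.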

    Protection and efficient management of big health data in cloud environment

    University of Technology Sydney, Faculty of Engineering and Information Technology. Healthcare data has become a great concern in both academia and industry. Deploying electronic health records (EHRs) and healthcare-related services on cloud platforms reduces the cost and complexity of handling and integrating medical records while improving efficiency and accuracy. To make effective use of advanced Cloud features such as high availability, reliability, and scalability, EHRs have to be stored in the cloud. Exposing EHRs in an outsourced environment, however, raises a number of serious issues related to data security, privacy, distribution, and processing, such as loss of controllability, heterogeneous data formats and sizes, leakage of sensitive information during processing, and delay-sensitive requirements. Many attempts have been made to address these concerns, but most tackle only some aspects of the problem. Encryption mechanisms can satisfy the data security and privacy requirements but introduce intensive computing overheads as well as complexity in key distribution. Data is not guaranteed to remain protected when it is moved from one cloud to another, because clouds may not use equivalent protection schemes. Sensitive data is often processed only in private clouds, which lack sufficient resources. Consequently, Cloud computing has not been widely adopted by healthcare providers and users, and protecting and managing health data efficiently remains an open research question. In this dissertation, we investigate data security and efficient management of big health data in cloud environments. Regarding data security, we establish an active data protection framework, investigate a new approach for data mobility, and propose trusted evaluation of cloud resources for processing sensitive data. For efficient management, we investigate novel schemes and models in both Cloud computing and Fog computing for data distribution and data processing to handle the rapid growth of data, higher security on demand, and delay requirements. The novelty of this work lies in the data mobility management model for data protection, the efficient distribution scheme for large-scale EHRs, and the trust-based scheme for secure processing. The contributions of this thesis can be summarized in terms of data security and efficient data management.
    On data security, we propose a data mobility management model to protect data when it is stored and moved in clouds, and a trust-based scheduling scheme for big data processing with MapReduce that addresses both privacy and performance in a cloud environment.
    • The data mobility management model introduces a new location data structure into an active data framework, a Location Registration Database (LRD), protocols for establishing a clone supervisor, and a Mobility Service (MS) to handle security and privacy requirements effectively. The model proposes a novel security approach for data mobility and leads to the introduction of a new Data Mobility as a Service (DMaaS) in the Cloud.
    • The trust-based scheduling scheme introduces a novel composite trust metric and real-time trust evaluation of cloud resources to provide the highest-trust execution for sensitive data. The proposed scheme offers a new approach for big data processing that meets high security requirements.
    On efficient data management, we propose a novel Hash-Based File Clustering (HBFC) scheme and a data replication management model to distribute, store, and retrieve EHRs efficiently. We also propose a data protection model and a Region-based task scheduling scheme for Fog and Cloud to address security and local performance issues.
    • The HBFC scheme innovatively uses hash functions to cluster files into defined clusters so that data can be stored and retrieved quickly while maintaining workload balance efficiently. The scheme introduces a new clustering mechanism for managing large-scale EHRs to deliver healthcare services effectively in the cloud environment.
    • The trust-based scheduling model uses the proposed trust metric for task scheduling with MapReduce. It not only provides maximum-trust execution but also increases resource utilization significantly, suggesting a new trust-oriented scheduling mechanism between tasks and resources with MapReduce.
    • We introduce the novel concept of a "Region" in Fog computing to handle data security and local performance issues effectively. The proposed model provides a Fog-based Region approach to handle security and local performance requirements.
    We implement and evaluate the proposed models and schemes extensively on both real infrastructures and simulators. The outcomes demonstrate the feasibility and efficiency of the research in this thesis. By proposing innovative concepts, metrics, algorithms, models, and services, this thesis enables both healthcare providers and users to adopt cloud services widely and allows significant improvements in the provision of healthcare services.
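    As a rough illustration of the composite-trust idea described above (the exact metric and weights in the thesis are not reproduced here), the following sketch scores each cloud resource by a weighted mix of hypothetical attributes, a historical task-success rate and a reputation score, and assigns a sensitive MapReduce task to the highest-scoring node.

```python
# Illustrative sketch only: a composite trust score as a weighted mix of a
# resource's historical success rate and reputation, used to pick the most
# trusted node for a task. Attributes and weights are assumptions, not the
# thesis's actual metric.
from dataclasses import dataclass

@dataclass
class Resource:
    name: str
    success_rate: float   # fraction of past tasks completed correctly (0..1)
    reputation: float     # score reported by other schedulers (0..1)

def composite_trust(r: Resource, w_direct: float = 0.6, w_rep: float = 0.4) -> float:
    return w_direct * r.success_rate + w_rep * r.reputation

def schedule(task: str, resources: list[Resource]) -> Resource:
    """Assign the task to the resource with the highest composite trust."""
    best = max(resources, key=composite_trust)
    print(f"{task} -> {best.name} (trust={composite_trust(best):.2f})")
    return best

schedule("map-shard-7", [
    Resource("node-a", 0.98, 0.90),
    Resource("node-b", 0.80, 0.95),
])
```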

    Maintaining privacy for a recommender system diagnosis using blockchain and deep learning.

    The healthcare sector has been revolutionized by blockchain and AI technologies. Artificial intelligence contributes algorithms, recommender systems, decision-making capabilities, and big data analysis, while blockchain can present a patient's health records securely. Healthcare professionals can use blockchain to access a patient's medical records within a secured medical diagnostic process. Traditionally, data owners have been hesitant to share medical and personal information due to concerns about privacy and trustworthiness. Using blockchain technology, this paper presents an innovative model for integrating healthcare data sharing into a recommender diagnostic computer system. With the model, medical records can be secured, controlled, authenticated, and kept confidential. The paper proposes a framework that uses the Ethereum blockchain as an access-control mechanism with hierarchical identities, and applies pre-processing and deep learning to X-rays to diagnose COVID-19. Along with solving the challenges associated with centralized access-control systems, the mechanism ensures data transparency and traceability, allowing efficient diagnosis and secure data sharing.
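    The paper's access control is realized with an Ethereum smart contract; the conceptual sketch below does not use any Ethereum API. It mocks the ledger with a plain dictionary to show the general pattern of keeping the record off-chain, anchoring only its hash, and gating retrieval with a simple access list. All names (`anchor_record`, `grant`, `fetch`) are hypothetical.

```python
# Conceptual sketch only: the "ledger" is a dict standing in for on-chain
# state. The record stays off-chain, its hash is anchored, and an ACL gates
# who may fetch and verify it.
import hashlib, json

ledger = {}           # record_id -> anchored hash (stand-in for on-chain state)
acl = {}              # record_id -> set of authorized user ids
off_chain_store = {}  # record_id -> raw record bytes

def anchor_record(record_id: str, record: dict, owner: str) -> None:
    data = json.dumps(record, sort_keys=True).encode()
    off_chain_store[record_id] = data
    ledger[record_id] = hashlib.sha256(data).hexdigest()
    acl[record_id] = {owner}

def grant(record_id: str, owner: str, user: str) -> None:
    if owner in acl[record_id]:
        acl[record_id].add(user)

def fetch(record_id: str, user: str) -> dict:
    if user not in acl[record_id]:
        raise PermissionError("not authorized")
    data = off_chain_store[record_id]
    # Integrity check against the anchored hash before returning the record.
    assert hashlib.sha256(data).hexdigest() == ledger[record_id], "tampered"
    return json.loads(data)

anchor_record("xray-42", {"patient": "p1", "finding": "covid-19 suspected"}, "dr_a")
grant("xray-42", "dr_a", "dr_b")
print(fetch("xray-42", "dr_b"))
```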

    Practical Isolated Searchable Encryption in a Trusted Computing Environment

    Cloud computing has become a standard computational paradigm due to its numerous advantages, including high availability, elasticity, and ubiquity. Both individual users and companies are adopting more of its services, but not without loss of privacy and control. Outsourcing data and computations to a remote server implies trusting its owners, a problem many end-users are aware of. Recent news has shown that data stored on Cloud servers is susceptible to leaks from the provider, third-party attackers, or even government surveillance programs, exposing users' private data. Different approaches to tackle these problems have surfaced throughout the years. Naïve solutions involve storing data encrypted on the server and decrypting it only on the client side; this imposes a high overhead on the client, rendering such schemes impractical. Searchable Symmetric Encryption (SSE) has emerged as a novel research topic in recent years, allowing efficient querying and updating over encrypted datastores in Cloud servers while retaining privacy guarantees. Still, despite relevant recent advances, existing SSE schemes make a critical trade-off between efficiency, security, and query expressiveness, limiting their adoption as a viable technology, particularly in large-scale scenarios. New technologies providing Isolated Execution Environments (IEEs) may help advance the SSE literature. These technologies allow applications to be run remotely with privacy guarantees, in isolation from other, possibly privileged, processes inside the CPU, such as the operating system kernel. Prominent examples are Intel SGX and ARM TrustZone, which are available in today's commodity CPUs. In this thesis we study these trusted hardware technologies in depth while exploring their application to the problem of searching over encrypted data, with a primary focus on SGX. In more detail, we study the application of IEEs in SSE schemes, improving their efficiency, security, and query expressiveness. We design, implement, and evaluate three new SSE schemes for different query types, namely Boolean queries over text, similarity queries over image datastores, and multimodal queries over text and images. These schemes can support queries combining different media formats simultaneously, envisaging applications such as privacy-enhanced medical diagnosis and management of electronic healthcare records, or confidential photograph catalogues, running without the danger of privacy breaches in Cloud-provisioned services.
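    As background for the SSE schemes discussed above (and not a reproduction of the thesis's SGX-based designs), the sketch below shows a minimal single-keyword searchable index: the client derives deterministic search tokens with HMAC, and the server stores posting lists encrypted under a token-derived keystream, so it learns nothing about a keyword until its token is queried.

```python
# Minimal single-keyword SSE sketch (textbook idea, not the thesis's schemes):
# the server holds an index keyed by opaque HMAC tokens; each posting list is
# hidden under a keystream derived from its token.
import hmac, hashlib, os, secrets

K_TOKEN = secrets.token_bytes(32)   # client-side secret key

def token(keyword: str) -> bytes:
    """Deterministic search token (trapdoor) for a keyword."""
    return hmac.new(K_TOKEN, keyword.encode(), hashlib.sha256).digest()

def _keystream(tok: bytes, nonce: bytes, n: int) -> bytes:
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(tok + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def build_index(docs):
    """docs: doc_id -> set of keywords. Returns the encrypted index the server stores."""
    postings, index = {}, {}
    for doc_id, words in docs.items():
        for w in words:
            postings.setdefault(w, []).append(doc_id)
    for w, ids in postings.items():
        t = token(w)
        plain = ",".join(ids).encode()
        nonce = os.urandom(16)
        cipher = bytes(a ^ b for a, b in zip(plain, _keystream(t, nonce, len(plain))))
        index[t] = (nonce, cipher)
    return index

def server_search(index, tok: bytes):
    """Run by the server: a revealed token unlocks exactly one posting list."""
    if tok not in index:
        return []
    nonce, cipher = index[tok]
    plain = bytes(a ^ b for a, b in zip(cipher, _keystream(tok, nonce, len(cipher))))
    return plain.decode().split(",")

idx = build_index({"doc1": {"mri", "knee"}, "doc2": {"mri", "brain"}})
print(server_search(idx, token("mri")))   # -> ['doc1', 'doc2']
```

    An IEE-based design such as those in the thesis would instead run the search logic inside an enclave; this sketch only conveys the token-and-encrypted-index structure that such schemes build on.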

    Blockchain for secured IoT and D2D applications over 5G cellular networks : a thesis by publications presented in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Computer and Electronics Engineering, Massey University, Albany, New Zealand

    Author's Declaration: "In accordance with Sensors, SpringerOpen, and IEEE's copyright policy, this thesis contains the accepted and published version of each manuscript as the final version. Consequently, the content is identical to the published versions."
    The Internet of Things (IoT) is in continuous development and ever-growing in popularity. It brings significant benefits by enabling humans and the physical world to interact using technologies ranging from small sensors to cloud computing. IoT devices and networks are appealing targets for various cyber attacks and can be hampered by malicious attackers if the IoT is not appropriately protected. However, IoT security and privacy remain a major challenge due to characteristics of the IoT such as heterogeneity, scalability, the nature of the data, and operation in open environments. Moreover, many existing cloud-based solutions for IoT security rely on central remote servers over vulnerable Internet connections. The decentralized and distributed nature of blockchain technology has attracted significant attention as a suitable way to tackle the security and privacy concerns of the IoT and device-to-device (D2D) communication. This thesis explores the adoption of blockchain technology to address the security and privacy challenges of the IoT under the 5G cellular system, and makes four novel contributions.
    First, a Multi-layer Blockchain Security (MBS) model is proposed to protect IoT networks while simplifying the implementation of blockchain technology. The concept of clustering is utilized to facilitate multi-layer architecture deployment and increase scalability. K-unknown clusters are formed within the IoT network by applying a hybrid evolutionary computation algorithm combining Simulated Annealing (SA) and Genetic Algorithms (GA) to structure the overlay nodes. The open-source Hyperledger Fabric (HLF) blockchain platform is deployed for the development of the proposed model, and base stations adopt a global blockchain approach to communicate with each other securely. The quantitative results demonstrate that the proposed clustering algorithm performs well compared with earlier reported methods, and that the proposed lightweight blockchain model balances network latency and throughput better than a traditional global blockchain.
    Next, a model is proposed to integrate IoT systems and blockchain by implementing the permissioned Hyperledger Fabric blockchain. The security of the edge computing devices is provided by a local authentication process, and a lightweight mutual authentication and authorization solution is proposed to ensure the security of tiny IoT devices within the ecosystem. In addition, the proposed model provides traceability for the data generated by the IoT devices. The performance of the model is validated with a practical implementation by measuring metrics such as transaction throughput and latency, resource consumption, and network usage. The results indicate that the proposed platform with the HLF implementation is promising for securing resource-constrained IoT devices and is scalable for deployment in various IoT scenarios.
    Despite the increasing development of blockchain platforms, there is still no comprehensive method for adopting blockchain technology in IoT systems, owing to the blockchain's limited capability to process the substantial transaction requests generated by a massive number of IoT devices. Fabric comprises various components such as smart contracts, peers, endorsers, validators, committers, and orderers. A comprehensive empirical model is therefore proposed that measures HLF's performance and identifies potential performance bottlenecks to better meet the requirements of blockchain-based IoT applications. The implementation of HLF on distributed large-scale IoT systems is proposed, and the performance of HLF is evaluated in terms of throughput, latency, network size, scalability, and the number of peers serviceable by the platform. The experimental results demonstrate that the proposed framework can provide a detailed, real-time performance evaluation of blockchain systems for large-scale IoT applications.
    The diversity and sheer increase in the number of connected IoT devices have brought significant concerns about storing and protecting the large volume of IoT data. Dependence on centralized server solutions imposes significant trust issues and makes the system vulnerable to security risks. A layer-based distributed data storage design and implementation of a blockchain-enabled large-scale IoT system is proposed to mitigate these challenges, using the HLF platform as the distributed ledger solution. The need for a centralized server and a third-party auditor is eliminated by leveraging HLF peers, which perform transaction verification and record audits in a big data system with the help of blockchain technology. The HLF blockchain stores lightweight verification tags on the blockchain ledger, while the actual metadata is stored in the off-chain big data system to reduce communication overheads and enhance data integrity. Finally, experiments are conducted to evaluate the performance of the proposed scheme in terms of throughput, latency, and communication and computation costs. The results indicate the feasibility of the proposed solution for retrieving and storing the provenance of large-scale IoT data within the big data ecosystem using the HLF blockchain.
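    The empirical performance model above rests on measuring transaction throughput and latency. The sketch below is a generic, hypothetical benchmarking harness only; `submit_transaction` is a stand-in for a real Hyperledger Fabric SDK call and merely simulates ordering and commit delay.

```python
# Hypothetical benchmarking harness: measures per-transaction latency and
# overall throughput. submit_transaction is a placeholder, NOT a Fabric API.
import time, statistics, random

def submit_transaction(payload: dict) -> None:
    # Placeholder: simulate endorsement + ordering + commit delay.
    time.sleep(random.uniform(0.01, 0.05))

def benchmark(num_tx: int = 100) -> None:
    latencies = []
    start = time.perf_counter()
    for i in range(num_tx):
        t0 = time.perf_counter()
        submit_transaction({"device_id": f"sensor-{i}", "reading": random.random()})
        latencies.append(time.perf_counter() - t0)
    elapsed = time.perf_counter() - start
    print(f"throughput: {num_tx / elapsed:.1f} tx/s")
    print(f"avg latency: {statistics.mean(latencies) * 1000:.1f} ms, "
          f"p95: {sorted(latencies)[int(0.95 * num_tx)] * 1000:.1f} ms")

benchmark()
```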

    Secure data sharing in cloud computing: a comprehensive review

    Cloud computing is an emerging technology that relies on sharing computing resources. Sharing data within a group is not inherently secure, because the cloud provider cannot be fully trusted. The fundamental difficulties in distributed cloud computing are data security, data sharing, resource scheduling, and energy consumption. A key-aggregate cryptosystem can be used to secure private and public data in the cloud: it produces a constant-size aggregate key for flexible choices of ciphertexts in cloud storage. Virtual machine (VM) provisioning enables cloud providers to use their available resources effectively and obtain higher profits. Sharing data resources among group members in cloud storage should therefore be secure, flexible, and efficient. Data corrupted in cloud data centers can be recovered using regenerating codes. Security is provided by many techniques, such as forward security, backward security, key-aggregate cryptosystems, encryption, and re-encryption. Energy consumption can be reduced by energy-efficient virtual machine scheduling in multi-tenant data centers.

    Privacy preserving algorithms for newly emergent computing environments

    Privacy-preserving data usage ensures appropriate use of data without compromising sensitive information. Data privacy is a primary requirement, since customers' data is an asset to any organization and contains customers' private information. Simply secluding data is not a solution to keeping it private, because data must be shared as well as kept private for different purposes, e.g., company welfare, research, and business. Industries where data privacy is mandatory include healthcare, the aviation industry, the education system, and federal law enforcement. In this dissertation we focus on data privacy schemes in emerging fields of computer science, namely health informatics, data mining, distributed cloud, biometrics, and mobile payments. Linking and mining medical records across different medical service providers are important to the enhancement of healthcare quality, and under the HIPAA regulation keeping medical records private is essential. In real-world healthcare databases, records may well contain errors, and linking error-prone data while preserving data privacy is very difficult. We introduce a privacy-preserving Error-Tolerant Linking Algorithm to enable record linkage for error-prone medical records. Mining frequent sequential patterns, such as patient paths and treatment patterns, across multiple medical sites helps to improve healthcare quality and research, so we propose a privacy-preserving sequential pattern mining scheme across multiple medical sites. In a distributed cloud environment, resources are provided by users who are geographically distributed over a large area; since resources are provided by regular users, data privacy and security are the main concerns. We propose a privacy-preserving data storage mechanism among different users in a distributed cloud. Because managing secret keys for encryption is difficult in a distributed cloud, we propose a multilevel threshold secret sharing mechanism to protect secret keys. Biometric authentication verifies user identity by means of the user's biometric traits. An individual's biometrics must be protected, since biometrics are unique and can be stolen or misused by an adversary. We present a secure and privacy-preserving biometric authentication scheme using a watermarking technique. Mobile payments have become popular with the extensive use of mobile devices, and mobile payment applications need to be both very secure and efficient. We design and develop a mobile application for secure mobile payments, focusing on the user's biometric authentication as well as secure bank transactions, and we propose a novel privacy-preserving biometric authentication algorithm for secure mobile payments.
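    The multilevel threshold secret sharing mechanism mentioned above builds on the classic threshold idea. The sketch below shows only a plain single-level Shamir (t, n) scheme over a prime field, not the dissertation's multilevel construction.

```python
# Plain (t, n) Shamir secret sharing sketch, illustrating the basic threshold
# idea only. Arithmetic is over a prime field.
import secrets

PRIME = 2**127 - 1  # Mersenne prime, large enough for small integer secrets

def make_shares(secret: int, threshold: int, num_shares: int):
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(threshold - 1)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):          # Horner's rule
            acc = (acc * x + c) % PRIME
        return acc
    return [(x, f(x)) for x in range(1, num_shares + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 recovers the constant term (the secret).
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

shares = make_shares(secret=123456789, threshold=3, num_shares=5)
print(reconstruct(shares[:3]))   # any 3 of the 5 shares recover the secret
```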

    Security architecture for Fog-To-Cloud continuum system

    Nowadays, as the number of devices connected to the Internet grows rapidly, cloud computing alone cannot handle real-time processing. Fog computing therefore emerged to provide data processing, filtering, aggregation, storage, networking, and computation closer to the users. Fog computing provides real-time processing with lower latency than the cloud. However, fog computing did not come to compete with the cloud; it complements it. Therefore, a hierarchical Fog-to-Cloud (F2C) continuum system was introduced. The F2C system brings collaboration between distributed fogs and the centralized cloud. In F2C systems, one of the main challenges is security. The traditional cloud as the security provider is not suitable for the F2C system because it is a single point of failure, and the increasing number of devices at the edge of the network brings scalability issues. Furthermore, traditional cloud security cannot be applied to fog devices, because their computational power is lower than the cloud's. On the other hand, using fog nodes as security providers for the edge of the network raises Quality of Service (QoS) issues, because security algorithms consume a large share of a fog device's computational power. There are some security solutions for fog computing, but they do not consider the hierarchical fog-to-cloud characteristics, which can lead to insecure collaboration between fog and cloud. In this thesis, the security considerations, attacks, challenges, requirements, and existing solutions are analyzed and reviewed in depth. Finally, a decoupled security architecture is proposed to provide the demanded security in a hierarchical and distributed fashion with less impact on the QoS.

    Location based services in wireless ad hoc networks

    In this dissertation, we investigate location-based services in wireless ad hoc networks from four different aspects: i) location privacy in wireless sensor networks (privacy), ii) end-to-end secure communication in randomly deployed wireless sensor networks (security), iii) the quality versus latency trade-off in content retrieval under ad hoc node mobility (performance), and iv) location-clustering-based Sybil attack detection in vehicular ad hoc networks (trust). The first contribution of this dissertation addresses location privacy in wireless sensor networks. We propose a non-cooperative sensor localization algorithm showing how an external entity can stealthily invade the location privacy of sensors in a network, and we then design a location-privacy-preserving tracking algorithm for defending against such adversarial localization attacks. Next, we investigate secure end-to-end communication in randomly deployed wireless sensor networks. Here, due to the lack of control over sensors' locations after deployment, pre-fixing pairwise keys between sensors is not feasible, especially under large-scale random deployments. Toward this end, we propose differentiated key pre-distribution for secure end-to-end communication and show how it improves existing routing algorithms. Our next contribution addresses the quality versus latency trade-off in content retrieval under ad hoc node mobility, for which we propose a two-tiered architecture for efficient content retrieval in such environments. Finally, we investigate Sybil attack detection in vehicular ad hoc networks. A Sybil attacker can create and use multiple counterfeit identities, undermining trust in a vehicular ad hoc network, and then easily leave the location of the attack to avoid detection. We propose a location-based clustering of nodes that leverages vehicle platoon dispersion for detecting Sybil attacks in vehicular ad hoc networks.
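    To make the Sybil-detection idea concrete (this is a simplification, not the dissertation's platoon-dispersion algorithm), the toy sketch below flags pairs of identities whose reported position traces remain nearly identical across time samples; genuine vehicles drift apart, while identities fabricated by a single attacker tend to shadow each other.

```python
# Toy location-based Sybil detection: identities whose position traces stay
# nearly identical over time are flagged as likely Sybil twins.
import math
from itertools import combinations

def mean_distance(trace_a, trace_b):
    return sum(math.dist(p, q) for p, q in zip(trace_a, trace_b)) / len(trace_a)

def detect_sybils(traces, threshold: float = 2.0):
    """traces: node id -> list of (x, y) positions sampled at the same times."""
    suspicious = []
    for a, b in combinations(traces, 2):
        if mean_distance(traces[a], traces[b]) < threshold:
            suspicious.append((a, b))
    return suspicious

traces = {
    "veh-1": [(0, 0), (10, 1), (20, 2)],
    "veh-2": [(0.5, 0), (10.2, 1.1), (20.3, 2.1)],   # shadows veh-1 closely
    "veh-3": [(0, 5), (12, 9), (25, 15)],
}
print(detect_sybils(traces))   # -> [('veh-1', 'veh-2')]
```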

    Storage systems for mobile-cloud applications

    Mobile devices have become the major computing platform in today's world. However, some apps on mobile devices still suffer from insufficient computing and energy resources. A key solution is to offload resource-demanding computing tasks from mobile devices to the cloud. This leads to a scenario in which computing tasks in the same application run concurrently on both the mobile device and the cloud. This dissertation aims to ensure that the tasks of a mobile app that employs offloading can access and share files concurrently on the mobile device and in the cloud in a manner that is efficient, consistent, and location-transparent. Existing distributed file systems and network file systems do not satisfy these requirements, and current offloading platforms either do not support efficient file access for offloaded tasks or do not offload tasks with file accesses at all. The first part of the dissertation addresses this issue by designing and implementing an application-level file system named the Overlay File System (OFS). OFS assumes a cloud surrogate is paired with each mobile device for task and storage offloading. To achieve high efficiency, OFS maintains and buffers local copies of data sets on both the surrogate and the mobile device. OFS ensures consistency and guarantees that all reads return the latest data. To reduce network traffic and execution delay effectively, OFS uses a delayed-update mechanism, which combines write-invalidate and write-update policies. To guarantee location transparency, OFS creates a unified view of file data. The research tests OFS on Android with a real mobile application and real mobile user traces. Extensive experiments show that OFS can effectively support consistent file accesses from computation tasks no matter where they run, and can effectively reduce both the file access latency and the network traffic incurred by file accesses. While OFS allows offloaded tasks to access the required files in a consistent and transparent manner, file accesses by offloaded tasks can be further improved. Instead of retrieving the required files from its associated mobile device, a surrogate can discover and retrieve identical or similar files from the surrogates belonging to other users. This is based on two observations: 1) multiple users often have the same or similar files, e.g., shared files or images/videos of the same object; and 2) the need for certain file content in mobile apps can usually be described by context features of the content, e.g., location or the objects in an image, so any file with the required context features can satisfy the need. Since files may be retrieved from other surrogates, this solution improves latency and saves wireless bandwidth and power on mobile devices. The second part of the dissertation proposes and develops a Context-Aware File Discovery Service (CAFDS) that implements this idea. CAFDS uses a self-organizing map and k-means clustering to classify files into groups based on file contexts, and then uses an enhanced decision tree to locate and retrieve files based on the file contexts defined by apps. To support diverse file discovery demands from various mobile apps, CAFDS allows apps to add new file contexts and update existing ones dynamically without affecting the discovery process. To evaluate the effectiveness of CAFDS, the research implemented a prototype on Android and Linux.
The performance of CAFDS was tested against Chord, a DHT-based lookup scheme, and SPOON, a P2P file sharing system. The experiments show that CAFDS provides lower end-to-end latency for file search than Chord and SPOON, while providing scalability similar to Chord's.
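    The delayed-update mechanism described for OFS combines write-invalidate and write-update. The toy sketch below, with hypothetical class and method names, illustrates that combination only: a local write immediately invalidates the peer replica's copy, and the new contents are pushed lazily, on an explicit flush or when the peer next reads.

```python
# Toy sketch of a delayed-update policy in the spirit of the OFS description:
# write-invalidate happens immediately, write-update is deferred until a
# flush or a remote read. Names are hypothetical, not from the dissertation.
class Replica:
    def __init__(self, name):
        self.name = name
        self.cache = {}        # path -> bytes
        self.valid = {}        # path -> bool
        self.peer = None       # the other replica (mobile <-> surrogate)
        self.dirty = {}        # path -> pending bytes not yet pushed

    def write(self, path, data):
        self.cache[path] = data
        self.valid[path] = True
        self.dirty[path] = data
        self.peer.valid[path] = False      # invalidate the peer's copy now

    def flush(self):
        for path, data in self.dirty.items():   # delayed update push
            self.peer.cache[path] = data
            self.peer.valid[path] = True
        self.dirty.clear()

    def read(self, path):
        if not self.valid.get(path, False):
            self.peer.flush()                    # pull the latest on demand
        return self.cache[path]

mobile, surrogate = Replica("mobile"), Replica("surrogate")
mobile.peer, surrogate.peer = surrogate, mobile
mobile.write("/notes.txt", b"v1")
print(surrogate.read("/notes.txt"))   # b'v1' after the lazy push
```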