    Light-Weight Accountable Privacy Preserving Protocol in Cloud Computing Based on a Third-Party Auditor

    Cloud computing is emerging as the next disruptive utility paradigm [1]. It provides extensive storage capabilities and an environment for application developers through virtual machines. It is also the home of software and databases that are accessible on demand. Cloud computing has drastically transformed the way organizations and individual consumers access and interact with information technology. Despite significant advancements in this technology, security concerns are holding businesses back from fully adopting this promising trend. Third-party auditors (TPAs) are becoming more common in cloud computing implementations, but involving auditors brings its own issues, such as trust and processing overhead. To achieve productive auditing, we need to (1) accomplish efficient auditing without requesting the data location or introducing processing overhead to the cloud client, and (2) avoid introducing new security vulnerabilities during the auditing process. There are various security models for safeguarding the Cloud Client's (CC) data in the cloud. The TPA systematically examines the evidence of compliance with established security criteria in the connection between the CC and the Cloud Service Provider (CSP). The CSP provides clients with cloud storage and access to a database coupled with services. Many security models have been elaborated to make the TPA more reliable, so that clients can trust the third-party auditor with their data. Our study shows that involving a TPA may come with shortcomings such as trust concerns, extra overhead, security and data-manipulation breaches, and additional processing, which leads to the conclusion that a lightweight and secure protocol is paramount. As defined in [2], privacy preserving means ensuring that the three cloud stakeholders are not involved in any malicious activities originating from insiders at the CSP level, remediating TPA vulnerabilities, and ensuring that the CC is not deceitfully affecting other clients. In our survey phase, we put the privacy-preserving solutions into perspective according to how well they fit the lightweight requirements in terms of processing and communication costs, ultimately choosing the most prominent ones against which to compare our simulation results. In this dissertation, we introduce a novel method that can detect a dishonest TPA: the Lightweight Accountable Privacy-Preserving (LAPP) protocol. The lightweight characteristic has been proven through simulations, which show the minor impact of our protocol in terms of processing and communication costs. This protocol determines the malicious behavior of the TPA. To validate our proposed protocol's effectiveness, we conducted simulation experiments using the GreenCloud simulator. Based on our simulation results, we confirm that our proposed model provides better outcomes compared to the other known contending methods.
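
    The abstract does not spell out LAPP's message flow, so the following is only a generic sketch of the kind of lightweight challenge-response integrity audit a verifier (client or TPA) can run against a CSP: one-time challenges are precomputed over the data before upload and later checked without retrieving the file. All names (Verifier, csp_prove, BLOCK) are hypothetical, not taken from the dissertation.

```python
import hmac, hashlib, secrets

BLOCK = 4096  # hypothetical block size, chosen only for this sketch

def nth_block(data: bytes, i: int) -> bytes:
    return data[i * BLOCK:(i + 1) * BLOCK]

class Verifier:
    """Precomputes a pool of one-time challenges before upload, then checks
    the CSP's answers later without keeping the data or fetching it back."""
    def __init__(self, data: bytes, pool_size: int = 100):
        n_blocks = max(1, -(-len(data) // BLOCK))  # ceiling division
        self.pool = []
        for _ in range(pool_size):
            idx = secrets.randbelow(n_blocks)
            nonce = secrets.token_bytes(16)
            expected = hmac.new(nonce, nth_block(data, idx), hashlib.sha256).digest()
            self.pool.append((idx, nonce, expected))

    def challenge(self):
        idx, nonce, _ = self.pool[-1]
        return idx, nonce                     # sent to the CSP

    def verify(self, proof: bytes) -> bool:
        _, _, expected = self.pool.pop()      # each challenge is single-use
        return hmac.compare_digest(expected, proof)

def csp_prove(stored: bytes, idx: int, nonce: bytes) -> bytes:
    """CSP side: recompute the MAC over the challenged block of stored data."""
    return hmac.new(nonce, nth_block(stored, idx), hashlib.sha256).digest()

data = secrets.token_bytes(20_000)
verifier = Verifier(data)
idx, nonce = verifier.challenge()
assert verifier.verify(csp_prove(data, idx, nonce))  # honest CSP passes
```

    The per-challenge cost is one HMAC on each side, which conveys the "lightweight" goal; the trade-off of this toy scheme is that the precomputed pool is finite.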

    TSKY: a dependable middleware solution for data privacy using public storage clouds

    Dissertation for obtaining the Master's degree in Computer Engineering. This dissertation aims to take advantage of the virtues offered by cloud-based data storage systems on the Internet, proposing a solution that avoids security issues by combining different providers' solutions in a vision of cloud-of-clouds storage and computing. The solution, the TSKY System (or Trusted Sky), is implemented as a middleware system featuring a set of components designed to establish and enhance conditions for security, privacy, reliability, and availability of data, with these conditions being secured and verifiable by the end user, independently of each provider. These components implement cryptographic tools, including threshold and homomorphic cryptographic schemes, combined with encryption, replication, and dynamic indexing mechanisms. The solution allows data management and distribution functions over data kept in different storage clouds, not necessarily trusted, improving and ensuring resilience and security guarantees against Byzantine faults and attacks. The generic approach of the TSKY system model and its implemented services are evaluated in the context of a Trusted Email Repository System (TSKY-TMS System). The TSKY-TMS system is a prototype that uses the base TSKY middleware services to store mailboxes and email messages in a cloud-of-clouds.
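
    TSKY's actual components (threshold and homomorphic schemes, dynamic indexing) are not detailed in the abstract. As a minimal illustration of the cloud-of-clouds idea alone, the toy sketch below replicates a client-side-encrypted blob across independent providers and reads it back by majority vote over content hashes, so a minority of tampering or unavailable stores is masked. The class and key names are invented for this sketch; real Byzantine fault tolerant storage protocols need considerably more machinery.

```python
import hashlib
from collections import Counter

class CloudOfClouds:
    """Toy dispersal layer: replicate a (client-side encrypted) blob to every
    provider and read it back by majority vote over content hashes."""
    def __init__(self, stores):
        self.stores = stores  # one dict-like backend per cloud provider

    def put(self, key: str, ciphertext: bytes) -> None:
        for store in self.stores:
            store[key] = ciphertext

    def get(self, key: str) -> bytes:
        blobs = [s[key] for s in self.stores if key in s]
        counts = Counter(hashlib.sha256(b).digest() for b in blobs)
        digest, votes = counts.most_common(1)[0]
        if votes <= len(self.stores) // 2:
            raise RuntimeError("no majority: too many faulty/unavailable stores")
        return next(b for b in blobs if hashlib.sha256(b).digest() == digest)

clouds = CloudOfClouds([{}, {}, {}])              # three independent providers
clouds.put("mailbox/inbox", b"...encrypted on the client...")
clouds.stores[0]["mailbox/inbox"] = b"tampered"   # one Byzantine provider
assert clouds.get("mailbox/inbox").startswith(b"...encrypted")
```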

    Literature Study On Cloud Based Healthcare File Protection Algorithms

    With the rapid development of computers and cloud computing technology, the trend in recent years has been to outsource information storage to cloud-based services, which provide large storage space. Cloud-based service providers such as Dropbox and Google Drive offer users virtually unlimited, low-cost storage. In this project we present a protection method that encrypts and decrypts files to provide an enhanced level of protection. To encrypt the files uploaded to the cloud, we use a double encryption technique: the file is encrypted twice, one algorithm after the other. The file is first encrypted using the AES algorithm, and the resulting ciphertext is then encrypted again using the RSA algorithm. The corresponding keys are generated during the execution of the algorithms. This is done to increase the security level. The parameters we consider are security level, speed, data confidentiality, data integrity, and ciphertext size. Our project is more efficient because it satisfies all of these parameters, whereas the conventional methods fail to do so. We use Dropbox as the cloud to store the file content, encrypted using the AES and RSA algorithms, with the corresponding key generated so that the file can later be decrypted. The double encryption technique is applied while the file is being uploaded.
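
    The abstract describes RSA applied directly to the AES ciphertext; since textbook RSA cannot encrypt a payload larger than its modulus, the sketch below (using the third-party cryptography package) shows the common hybrid reading of that design, in which AES-256 encrypts the file and RSA-OAEP wraps the AES key. Treat it as an illustrative approximation, not the paper's exact pipeline; GCM mode and all names are choices of this sketch.

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

plaintext = b"hypothetical healthcare record"

# Layer 1: AES-256 (GCM mode here) over the file contents.
aes_key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
ciphertext = AESGCM(aes_key).encrypt(nonce, plaintext, None)

# Layer 2: RSA-OAEP. RSA can only encrypt payloads smaller than its modulus,
# so this sketch follows the usual hybrid variant and wraps the AES key
# rather than the whole ciphertext.
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
rsa_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
wrapped_key = rsa_key.public_key().encrypt(aes_key, oaep)

# Decryption reverses the layers: unwrap the key, then decrypt the file.
recovered_key = rsa_key.decrypt(wrapped_key, oaep)
assert AESGCM(recovered_key).decrypt(nonce, ciphertext, None) == plaintext
```

    The wrapped key and the AES ciphertext would then be uploaded together (to Dropbox, in the paper's setup); only the holder of the RSA private key can recover the file key.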

    RESCUE: Evaluation of a Fragmented Secret Share System in Distributed-Cloud Architecture

    Scaling big data infrastructure using a multi-cloud environment has led to demand for highly secure, resilient, and reliable data sharing methods. Several variants of the secret sharing scheme have been proposed, but there remains a gap in knowledge on the evaluation of these methods in relation to scalability, resilience, and key management as the volume of files generated increases and cloud outages persist. In line with this, this thesis presents an evaluation of a method that combines data fragmentation with Shamir's secret sharing scheme, known as the Fragmented Secret Share System (FSSS). It applies data fragmentation using a calculated optimum fragment size and encrypts each fragment with a 256-bit AES key before dispersal to cloudlets; the encryption key is managed using secret sharing methods as used in cryptography. Four experiments were performed to measure scalability, resilience, and reliability in key management. The first and second experiments evaluated scalability using defined fragment blocks and an optimum fragment size. These fragment types were used to break files of varied sizes into fragments, which were then encrypted, dispersed to the cloud, and recovered when required. Both were used in combination with different secret-sharing policies for key management. The third experiment tested file recovery during cloud failures, while the fourth experiment focused on efficient key management. The contributions of this thesis are twofold: the development of evaluation frameworks to measure the scalability and resilience of data sharing methods, and the provision of information on the relationships between file sizes and share-policy combinations. The first aims to provide a platform for measuring scalability, from the point of continuous production as file size and volume increase, and resilience, as the potential to continue operation despite cloud outages; the second provides experimental frameworks on the effects of file sizes and share policies on overall system performance. The evaluation of FSSS against similar methods showed that the fragmentation method has lower overhead costs irrespective of file size and share-policy combination, and that the inherent challenges in secret sharing schemes can only be solved through alternative means, such as combining secret sharing with other data-fragmentation methods. In all, the system lacks any erasure-coding technique, making it difficult to detect corrupt or lost fragments during file recovery.
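
    FSSS's fragment-sizing and dispersal logic are not reproduced here; the sketch below illustrates only its key-management core, a (k, n) Shamir secret sharing of the 256-bit AES key over a prime field. The prime, function names, and (3, 5) policy are choices of this sketch, not parameters from the thesis.

```python
import secrets

P = 2**521 - 1  # a Mersenne prime, comfortably larger than a 256-bit key

def make_shares(secret: int, k: int, n: int):
    """Split `secret` into n shares, any k of which suffice to recover it."""
    # Random polynomial of degree k-1 with the secret as its constant term.
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, e, P) for e, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def recover(shares) -> int:
    """Lagrange interpolation at x = 0 over the prime field GF(P)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * -xj % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

aes_key = secrets.randbits(256)            # the per-file AES-256 key
shares = make_shares(aes_key, k=3, n=5)    # dispersed to five key stores
assert recover(shares[:3]) == aes_key      # any three shares rebuild the key
assert recover(shares[2:]) == aes_key
```

    Fewer than k shares reveal nothing about the key, which is what lets the key survive cloud outages without being entrusted to any single provider.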

    5G Multi-access Edge Computing: Security, Dependability, and Performance

    The main innovation of the Fifth Generation (5G) of mobile networks is the ability to provide novel services with new and stricter requirements. One of the technologies that enable the new 5G services is Multi-access Edge Computing (MEC). MEC is a system composed of multiple devices with computing and storage capabilities that are deployed at the edge of the network, i.e., close to the end users. MEC reduces latency and enables contextual information and real-time awareness of the local environment. MEC also allows cloud offloading and the reduction of traffic congestion. Performance is not the only requirement of the new 5G services: new mission-critical applications also require high security and dependability. These three aspects (security, dependability, and performance) are rarely addressed together. This survey fills this gap and presents 5G MEC by addressing all three aspects. First, we overview the background knowledge on MEC by referring to the current standardization efforts. Second, we present each aspect individually by introducing the related taxonomy (important for the non-expert on the aspect), the state of the art, and the challenges in 5G MEC. Finally, we discuss the challenges of jointly addressing the three aspects.
    Comment: 33 pages, 11 figures, 15 tables. This paper is under review at IEEE Communications Surveys & Tutorials. Copyright IEEE 202

    Spectrum Sharing, Latency, and Security in 5G Networks with Application to IoT and Smart Grid

    The surge of mobile devices, such as smartphones and tablets, demands additional capacity. At the same time, the Internet of Things (IoT) and the smart grid, which connect numerous sensors, devices, and machines, require ubiquitous connectivity and data security. Additionally, some use cases, such as automated manufacturing, automated transportation, and the smart grid, require latency as low as 1 ms and reliability as high as 99.99%. To enhance throughput and support massive connectivity, sharing of the unlicensed spectrum (3.5 GHz, 5 GHz, and mmWave) is a potential solution, while addressing latency requires drastic changes in the network architecture. The fifth generation (5G) cellular networks will embrace spectrum sharing and network-architecture modifications to address throughput enhancement, massive connectivity, and low latency. To utilize the unlicensed spectrum, we propose a fixed-duty-cycle-based coexistence of LTE and WiFi, in which the duty cycle of LTE transmission can be adjusted based on the amount of data. In a second approach, a multi-armed bandit learning based coexistence of LTE and WiFi has been developed, in which the duty cycle of transmission and the downlink power are adapted through exploration and exploitation (illustrated in the sketch below). This approach improves the aggregated capacity by 33%, along with cell-edge and energy-efficiency enhancements. We also investigate the performance of LTE and ZigBee coexistence using the smart grid as a scenario. Regarding low latency, we summarize the existing works into three domains in the context of 5G networks: core, radio, and caching networks. Along with this, fundamental constraints for achieving low latency are identified, followed by a general overview of exemplary 5G networks. Besides that, a loop-free, low-latency, local-decision-based routing protocol is derived in the context of the smart grid. This approach ensures low-latency and reliable data communication for stationary devices. To address data security in wireless communication, we introduce geo-location-based data encryption, along with node authentication by the k-nearest-neighbor algorithm. In a second approach, node authentication by a support vector machine, along with public-private key management, is proposed. Both approaches ensure data security without increasing the packet overhead compared to existing approaches.
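
    The dissertation's exact bandit formulation (its arms, reward signal, and the joint power adaptation) is not given in the abstract, so the following is only a minimal epsilon-greedy sketch of the exploration/exploitation loop for duty-cycle selection alone; the candidate duty cycles and the synthetic reward curve are invented for illustration.

```python
import random

# Hypothetical candidate duty cycles: the fraction of each coexistence frame
# in which LTE transmits; WiFi gets the remainder.
ARMS = [0.2, 0.4, 0.5, 0.6, 0.8]
EPSILON = 0.1                      # exploration rate
counts = [0] * len(ARMS)
values = [0.0] * len(ARMS)         # running mean reward per arm

def choose_arm() -> int:
    if random.random() < EPSILON:                          # explore
        return random.randrange(len(ARMS))
    return max(range(len(ARMS)), key=values.__getitem__)   # exploit

def update(arm: int, reward: float) -> None:
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]    # incremental mean

def observe_reward(duty_cycle: float) -> float:
    """Stand-in for a measured aggregate LTE+WiFi throughput: a made-up
    concave curve peaking near a 0.5 duty cycle, plus measurement noise."""
    return 1.0 - (duty_cycle - 0.5) ** 2 + random.gauss(0, 0.05)

for _ in range(1000):
    arm = choose_arm()
    update(arm, observe_reward(ARMS[arm]))

best = max(range(len(ARMS)), key=values.__getitem__)
print("learned duty cycle:", ARMS[best])
```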