33 research outputs found

    Handling Confidential Data on the Untrusted Cloud: An Agent-based Approach

    Cloud computing allows shared computing and storage facilities to be used by a multitude of clients. While cloud management is centralized, the information resides in the cloud and information sharing can be implemented via off-the-shelf techniques for multiuser databases. Users, however, are wary of not having full control over their sensitive data. Untrusted database-as-a-server techniques are neither readily extendable to the cloud environment nor easily understandable by non-technical users. To address this problem, we present an approach in which agents share reserved data in a secure manner through simple grant-and-revoke permissions on shared data.
    Comment: 7 pages, 9 figures, Cloud Computing 201
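
    The abstract only names the mechanism; the following is a rough, hypothetical sketch of what grant-and-revoke permissions on shared records can look like. Names such as PermissionRegistry are assumptions for illustration, not taken from the paper, and the code does not reproduce the agent-based protocol itself.

    # Illustrative sketch only: a minimal grant-and-revoke permission table for
    # shared records (hypothetical names). Not the paper's agent-based protocol,
    # just the access-control idea it builds on.
    class PermissionRegistry:
        def __init__(self):
            # record_id -> set of users allowed to read it
            self._grants = {}

        def grant(self, record_id, user):
            self._grants.setdefault(record_id, set()).add(user)

        def revoke(self, record_id, user):
            self._grants.get(record_id, set()).discard(user)

        def can_read(self, record_id, user):
            return user in self._grants.get(record_id, set())


    if __name__ == "__main__":
        registry = PermissionRegistry()
        registry.grant("patient-42", "alice")
        assert registry.can_read("patient-42", "alice")
        registry.revoke("patient-42", "alice")
        assert not registry.can_read("patient-42", "alice")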

    Data security issues in cloud scenarios

    The amount of data created, stored, and processed has increased enormously in recent years. Today, millions of devices are connected to the Internet and generate a huge amount of (personal) data that need to be stored and processed using scalable, efficient, and reliable computing infrastructures. Cloud computing technology can be used to respond to these needs. Although cloud computing brings many benefits to users and companies, security concerns about the cloud still represent a major impediment to its wide adoption. We briefly survey the main challenges related to the storage and processing of data in the cloud. In particular, we focus on the problems of protecting data in storage, supporting fine-grained access, selectively sharing data, protecting query privacy, and verifying the integrity of computations.
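
    One of the techniques surveys in this area commonly cover for protecting outsourced data and supporting selective sharing is owner-side selective encryption. The sketch below is only an illustration of that general idea under assumed names and structure (it is not code from the paper) and relies on the third-party cryptography package.

    # Minimal sketch of selective owner-side encryption: each attribute is
    # encrypted with its own key, and a user receives only the keys for the
    # portions of the data they are authorized to see.
    from cryptography.fernet import Fernet

    # One key per attribute the owner wants to share independently (assumed schema).
    keys = {"diagnosis": Fernet.generate_key(), "zipcode": Fernet.generate_key()}

    record = {"diagnosis": b"flu", "zipcode": b"20100"}

    # The owner encrypts before uploading to the (honest-but-curious) cloud provider.
    encrypted = {attr: Fernet(keys[attr]).encrypt(val) for attr, val in record.items()}

    # A user holding only the 'zipcode' key can decrypt that attribute and nothing else.
    user_keys = {"zipcode": keys["zipcode"]}
    visible = {attr: Fernet(user_keys[attr]).decrypt(tok)
               for attr, tok in encrypted.items() if attr in user_keys}
    print(visible)  # {'zipcode': b'20100'}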

    Business intelligence meets big data : an overview on security and privacy

    Today big data are the target of many research activities focusing on big data management and analysis, the definition of zero-latency approaches to data analytics, and the protection of big data security and privacy. In particular, security and privacy are two important, yet contrasting, requirements. Big data security usually refers to the use of big data to implement solutions increasing the security, reliability, and safety of a distributed system. Big data privacy, instead, focuses on the protection of big data from unauthorized use and unwanted inference. In this paper, we start from the manifesto on Business Intelligence Meets Big Data [8] and the notions of full data and zero-latency analysis to discuss new challenges in the context of big data security and privacy.

    Minimizing disclosure of private information in credential-based interactions : a graph-based approach

    We address the problem of enabling clients to regulate the disclosure of their credentials and properties when interacting with servers in open scenarios. We provide a means for clients to specify the sensitivity of the information in their portfolio at a fine-grained level and to determine the credentials and properties to disclose to satisfy a server request while minimizing the sensitivity of the information disclosed. Exploiting a graph modeling of the problem, we develop a heuristic approach for computing a disclosure that minimizes the released information and offers execution times compatible with the requirements of interactive access to Web resources.
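
    To make the disclosure-minimization problem concrete, here is a small greedy baseline: pick credentials whose certified properties cover the server request while keeping total sensitivity low. It is only an illustrative sketch with assumed names and weights, not the paper's graph-based heuristic.

    # Illustrative greedy baseline (not the paper's graph-based algorithm).
    def choose_disclosure(request, credentials, sensitivity):
        """request: set of property names the server asks for.
        credentials: dict credential -> set of properties it certifies.
        sensitivity: dict credential -> numeric sensitivity weight."""
        remaining = set(request)
        chosen = []
        while remaining:
            # Greedy rule: best ratio of newly covered properties to sensitivity.
            best = max(
                (c for c in credentials if credentials[c] & remaining and c not in chosen),
                key=lambda c: len(credentials[c] & remaining) / sensitivity[c],
                default=None,
            )
            if best is None:
                raise ValueError("request cannot be satisfied with this portfolio")
            chosen.append(best)
            remaining -= credentials[best]
        return chosen


    portfolio = {"id_card": {"name", "birthdate"}, "passport": {"name", "nationality"},
                 "loyalty_card": {"name"}}
    weights = {"id_card": 5.0, "passport": 8.0, "loyalty_card": 1.0}
    print(choose_disclosure({"name"}, portfolio, weights))  # ['loyalty_card']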

    Data protection in Cloud scenarios

    We present a brief overview of the main challenges related to data protection that need to be addressed when data are stored, processed, or managed in the cloud. We also discuss emerging approaches and directions to address such challenges.

    iPrivacy: a Distributed Approach to Privacy on the Cloud

    The increasing adoption of Cloud storage poses a number of privacy issues. Users wish to preserve full control over their sensitive data and cannot accept it being accessible to the remote storage provider. Previous research has addressed techniques to protect data stored on untrusted servers; however, we argue that the cloud architecture presents a number of open issues. To handle them, we present an approach where confidential data is stored in a highly distributed database, partly located on the cloud and partly on the clients. Data is shared in a secure manner using simple grant-and-revoke permissions on shared data, and we have developed a test implementation of the system using an in-memory RDBMS with row-level data encryption for fine-grained data access control.
    Comment: 13 pages, International Journal on Advances in Security 2011 vol. 4 no. 3 & 4. arXiv admin note: substantial text overlap with arXiv:1012.0759, arXiv:1109.355
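
    The general idea of combining an in-memory RDBMS with row-level encryption can be sketched as follows. This is only an illustration under assumed names (one key per data owner), built on Python's sqlite3 module and the third-party cryptography package; it is not the iPrivacy implementation described in the paper.

    # Sketch of row-level data encryption over an in-memory RDBMS.
    import sqlite3
    from cryptography.fernet import Fernet

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE notes (id INTEGER PRIMARY KEY, owner TEXT, payload BLOB)")

    # One key per owner; in this toy model, holding a key is what a grant means,
    # and revocation would require re-encrypting the owner's rows under a new key.
    owner_keys = {"alice": Fernet.generate_key()}

    def insert_note(owner, plaintext):
        token = Fernet(owner_keys[owner]).encrypt(plaintext.encode())
        db.execute("INSERT INTO notes (owner, payload) VALUES (?, ?)", (owner, token))

    def read_notes(owner, key):
        rows = db.execute("SELECT payload FROM notes WHERE owner = ?", (owner,))
        return [Fernet(key).decrypt(payload).decode() for (payload,) in rows]

    insert_note("alice", "blood type: 0+")
    print(read_notes("alice", owner_keys["alice"]))  # ['blood type: 0+']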

    Privacy Preservation by Disassociation

    In this work, we focus on protection against identity disclosure in the publication of sparse multidimensional data. Existing multidimensional anonymization techniques (a) protect the privacy of users either by altering the set of quasi-identifiers of the original data (e.g., by generalization or suppression) or by adding noise (e.g., using differential privacy), and/or (b) assume a clear distinction between sensitive and non-sensitive information and sever the possible linkage. In many real-world applications the above techniques are not applicable. For instance, consider web search query logs. Anonymization methods based on suppression or generalization would remove the most valuable information in the dataset: the original query terms. Additionally, web search query logs contain millions of query terms which cannot be categorized as sensitive or non-sensitive, since a term may be sensitive for one user and non-sensitive for another. Motivated by this observation, we propose an anonymization technique termed disassociation that preserves the original terms but hides the fact that two or more different terms appear in the same record. We protect the users' privacy by disassociating record terms that participate in identifying combinations. This way, the adversary cannot associate a record with a rare combination of terms with high probability. To the best of our knowledge, our proposal is the first to employ such a technique to provide protection against identity disclosure. We propose an anonymization algorithm based on our approach and evaluate its performance on real and synthetic datasets, comparing it against other state-of-the-art methods based on generalization and differential privacy.
    Comment: VLDB201
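
    A heavily simplified illustration of the disassociation idea follows: a record's terms are split into chunks so that no single chunk contains an entire identifying combination, hiding the co-occurrence of those terms. The function, names, and chunk-size limit are assumptions made for this sketch and do not reproduce the anonymization algorithm evaluated in the paper.

    # Simplified illustration of disassociation (not the paper's algorithm).
    def disassociate(record, identifying_combinations, chunk_size=2):
        """record: list of terms; identifying_combinations: list of term sets."""
        chunks = [set()]
        for term in record:
            placed = False
            for chunk in chunks:
                candidate = chunk | {term}
                # A term joins a chunk only if the chunk stays within the size
                # limit and never completes an identifying combination.
                if len(candidate) <= chunk_size and not any(
                    combo <= candidate for combo in identifying_combinations
                ):
                    chunk.add(term)
                    placed = True
                    break
            if not placed:
                chunks.append({term})
        return chunks


    queries = ["flu", "pregnancy test", "midtown pharmacy", "football scores"]
    rare_combo = [{"pregnancy test", "midtown pharmacy"}]
    # The rare combination ends up split across chunks, so it cannot be linked
    # to a single record with certainty.
    print(disassociate(queries, rare_combo))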