
    Authentication of Moving Top-k Spatial Keyword Queries


    DSAC: An Approach to Ensure Integrity of Outsourced Databases using Signature Aggregation and Chaining

Database outsourcing is an important emerging trend in which data owners delegate their data management needs to an external service provider. In this model, a service provider hosts clients' databases and offers mechanisms to create, store, update and access (query) outsourced databases. Since a service provider is almost never fully trusted, security and privacy of outsourced data are important concerns. A core security requirement is the integrity and authenticity of outsourced databases. Whenever someone queries a hosted database, the results must be demonstrably authentic (with respect to the actual data owner) to ensure that the data has not been tampered with. Furthermore, the results must carry a proof of completeness that allows the querier to verify that the server has not omitted any valid tuples matching the query predicate. Notable prior research (DpGmMcSs00, McNgDpGmKwSs02, PanTan04) focused on so-called Authenticated Data Structures; another prior approach involved the use of special digital signature schemes. In this paper, we extend the state of the art to provide both authenticity and completeness guarantees for query replies. Our work also analyzes the new approach for various base query types and compares it with Authenticated Data Structures. We also point out some possible security flaws in the approach suggested in the recent work (PanTan04).
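A minimal sketch of the chaining idea behind completeness proofs, under loose assumptions (all names are hypothetical, and a keyed HMAC stands in for the aggregatable signatures the paper actually uses, so here the verifier would need the owner's key): each tuple's code binds it to its immediate successor in sorted order, so a server that omits a tuple from a range result breaks the chain.

```python
import hmac, hashlib

OWNER_KEY = b"owner-signing-key"  # stand-in for the owner's signing key

def chain_codes(sorted_tuples):
    """Bind each tuple to its successor so gaps in a range result are detectable."""
    codes = {}
    for cur, nxt in zip(sorted_tuples, sorted_tuples[1:] + [b"<END>"]):
        codes[cur] = hmac.new(OWNER_KEY, cur + b"|" + nxt, hashlib.sha256).digest()
    return codes

def verify_range(result, codes):
    """Check that consecutive tuples in the returned range are true neighbors."""
    for cur, nxt in zip(result, result[1:]):
        expected = hmac.new(OWNER_KEY, cur + b"|" + nxt, hashlib.sha256).digest()
        if not hmac.compare_digest(codes[cur], expected):
            return False  # a tuple between cur and nxt was omitted or altered
    return True

rows = sorted([b"alice", b"bob", b"carol", b"dave"])
codes = chain_codes(rows)
assert verify_range(rows[1:3], codes)               # contiguous slice verifies
assert not verify_range([rows[0], rows[2]], codes)  # dropping rows[1] is caught
```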

    Verifiable Outsourced Database Model: A Game-Theoretic Approach

In the verifiable database (VDB) model, a computationally weak client (the database owner) delegates database management to a database service provider in the cloud, which is considered an untrusted third party, while users can query the data and verify the integrity of the query results. Since verification can be computationally costly and offers limited support for sophisticated query types such as aggregate queries, we propose in this research a framework that helps bridge the gap between security and practicality. The proposed framework remodels the verifiable database problem as a Stackelberg security game. In the new model, the database owner creates the database and its authentication structure (AS) and uploads both to the database service provider. The game is then played between the defender (verifier), a party trusted by the database owner who runs scheduled randomized verifications using a Stackelberg mixed strategy, and the database service provider. The idea is to randomize the verification schedule in an optimized way that grants the optimal payoff for the verifier while making it extremely hard for the database service provider, or any attacker, to figure out which part of the database will be verified next. We have implemented the proposed model and compared its performance with a uniform randomization model. Simulation results show that the proposed model outperforms the uniform randomization model. Furthermore, we have evaluated the efficiency of the proposed model against different cost metrics.
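As a rough illustration of the randomized-schedule idea (all names and weights are hypothetical; the paper's actual mixed strategy is computed from the game's payoff matrices, not this toy proportional rule), the verifier can be seen as sampling which database partition to audit from a probability distribution that favors high-value partitions while keeping each individual draw unpredictable:

```python
import random

# Hypothetical per-partition loss if tampering there goes undetected.
partition_value = {"orders": 5.0, "payments": 9.0, "logs": 1.0, "users": 3.0}

def mixed_strategy(values):
    """Toy stand-in for a Stackelberg mixed strategy: audit probability
    proportional to the value at stake, so no partition is predictably safe."""
    total = sum(values.values())
    return {p: v / total for p, v in values.items()}

def schedule(strategy, rounds, rng=random.Random(42)):
    """Sample the partition to verify each round; even if the distribution
    is public, the attacker cannot predict the realized draws."""
    parts, probs = zip(*strategy.items())
    return [rng.choices(parts, weights=probs)[0] for _ in range(rounds)]

strat = mixed_strategy(partition_value)
print(schedule(strat, 5))  # e.g. ['payments', 'orders', 'payments', ...]
```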

Query Authentication and Processing on Outsourced Databases


    Scalable Verification for Outsourced Dynamic Databases

Query answers from servers operated by third parties need to be verified, as the third parties may not be trusted or their servers may be compromised. Most of the existing authentication methods construct validity proofs based on the Merkle hash tree (MHT). The MHT, however, imposes severe concurrency constraints that slow down data updates. We introduce a protocol, built upon signature aggregation, for checking the authenticity, completeness and freshness of query answers. The protocol offers the important property of allowing new data to be disseminated immediately, while ensuring that outdated values beyond a pre-set age can be detected. We also propose an efficient verification technique for ad-hoc equijoins, for which no practical solution existed. In addition, for servers that need to process heavy query workloads, we introduce a mechanism that significantly reduces the proof construction time by caching just a small number of strategically chosen aggregate signatures. The efficiency and efficacy of our proposed mechanisms are confirmed through extensive experiments.
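For context, here is a compact sketch of the Merkle-hash-tree verification that the abstract contrasts against (simplified assumptions: a power-of-two leaf count, and the owner's signature on the root is omitted; in a real deployment the client checks that signature too):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Build the MHT bottom-up; the owner signs only this root."""
    level = [h(x) for x in leaves]
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def proof_path(leaves, idx):
    """Sibling hashes from leaf idx up to the root (the server's proof)."""
    level, path = [h(x) for x in leaves], []
    while len(level) > 1:
        sib = idx ^ 1                        # sibling index at this level
        path.append((level[sib], sib < idx)) # (hash, sibling-is-left?)
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        idx //= 2
    return path

def verify(leaf, path, root):
    node = h(leaf)
    for sibling, sibling_is_left in path:
        node = h(sibling + node) if sibling_is_left else h(node + sibling)
    return node == root

data = [b"r0", b"r1", b"r2", b"r3"]
root = merkle_root(data)
assert verify(b"r2", proof_path(data, 2), root)
```

Note how any update to a leaf changes every hash on its root path, which is exactly why concurrent updates contend on the upper tree nodes.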

    Integrity Coded Databases: Ensuring Correctness and Freshness of Outsourced Databases

In recent years, cloud storage has become an inexpensive and convenient option for individuals and businesses to store and retrieve information. The cloud frees the data owner from the financial burden of hiring professionals to create, update and maintain local databases, and advances in networking together with the growing need for computing resources have made cloud computing increasingly popular. These advantages make the cloud an attractive option for data storage, but the service comes at a cost: the data owner must relinquish control of their information to the cloud service provider. There thus remains the possibility of malicious insider attacks that add, omit, or manipulate data. This paper presents a novel Integrity Coded Database (ICDB) approach for ensuring data correctness and freshness in the cloud. It provides options for verifying the integrity of queried data at different granularities, from coarse-grained integrity protection of the entire returned dataset down to fine-grained protection of each tuple or even each attribute. ICDB allows data owners to insert integrity codes into a database, outsource the database to the cloud, run queries against the cloud database server, and verify that the queried information is both correct and fresh. An ICDB prototype has been developed to benchmark several ICDB schemes and evaluate their performance.
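A minimal sketch of the tuple-level granularity, under stated assumptions (all names are hypothetical and the paper's actual integrity-code construction may differ; an HMAC with a version counter stands in for the integrity codes): each outsourced row carries a code over its key, attributes, and version, so a tampered attribute or a replayed stale row fails verification.

```python
import hmac, hashlib

KEY = b"data-owner-secret"  # kept by the owner, never given to the cloud

def encode_tuple(pk, attrs, version):
    """Integrity code covering the key, attributes, and a freshness counter."""
    msg = pk + b"|" + b"|".join(attrs) + b"|" + str(version).encode()
    return hmac.new(KEY, msg, hashlib.sha256).hexdigest()

def verify_tuple(pk, attrs, version, code):
    return hmac.compare_digest(encode_tuple(pk, attrs, version), code)

# Owner outsources row 42 at version 7; the cloud must return the code with it.
code = encode_tuple(b"42", [b"alice", b"premium"], version=7)
assert verify_tuple(b"42", [b"alice", b"premium"], 7, code)
# A replayed stale row (old version) or a tampered attribute is rejected:
assert not verify_tuple(b"42", [b"alice", b"premium"], 6, code)
assert not verify_tuple(b"42", [b"mallory", b"premium"], 7, code)
```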