11 research outputs found

    A Model to Overcome Integrity Challenges of an Untrusted DSMS Server

    Although outsourcing to data stream servers has been widely embraced, obtaining assurance about the results received from these untrusted servers in an insecure environment remains a fundamental challenge. In this paper, we present a probabilistic model for auditing the results received from an outsourced data stream server over insecure communication channels. In our architecture, the server is treated as a black box, and the auditing process is carried out through cooperation between the data stream owner and the users. Our method imposes a negligible overhead on the user and requires no change to the structure of the server. The probabilistic modeling of the system proves the convergence of the algorithms, and the experimental evaluations show very acceptable results.
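
    The abstract leaves the auditing mechanism at a high level, so the following is only a minimal sketch of one cooperative spot-checking pattern under assumed details: the `audit` helper, the probe set, and the sampling scheme are hypothetical illustrations, not the authors' protocol.

```python
import random

def audit(server_answers, probes, expected, sample_rate=0.2, rng=random):
    """Probabilistic spot-check of an untrusted server's answers.

    Hypothetical sketch: the data stream owner shares `probes` (keys whose
    correct answers `expected[p]` it can derive) and each user re-checks a
    random sample against the server's output. A server that corrupts a
    fraction f of the probed answers survives k checks with probability
    (1 - f)**k, so checks pooled across owner and users catch cheating fast.
    """
    sampled = [p for p in probes if rng.random() < sample_rate]
    return all(server_answers.get(p) == expected[p] for p in sampled)

# Toy usage: an honest server passes; a tampered one is caught w.h.p.
truth = {i: i * i for i in range(1000)}
tampered = dict(truth, **{i: -1 for i in range(0, 1000, 50)})  # 20 bad answers
print(audit(truth, list(truth), truth))     # True
print(audit(tampered, list(truth), truth))  # False with prob. ~1 - 0.8**20
```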

    Verifying Computations with Streaming Interactive Proofs

    When computation is outsourced, the data owner would like to be assured that the desired computation has been performed correctly by the service provider. In theory, proof systems can give the necessary assurance, but prior work is not sufficiently scalable or practical. In this paper, we develop new proof protocols for verifying computations which are streaming in nature: the verifier (data owner) needs only logarithmic space and a single pass over the input, and after observing the input follows a simple protocol with a prover (service provider) that takes logarithmic communication spread over a logarithmic number of rounds. These protocols ensure that the computation is performed correctly: that the service provider has not made any errors or omitted any data. The guarantee is very strong: even if the service provider deliberately tries to cheat, there is only a vanishingly small probability of doing so undetected, while a correct computation is always accepted. We first observe that some theoretical results can be modified to work with streaming verifiers, showing that there are efficient protocols for problems in the complexity classes NP and NC. Our main results then seek to bridge the gap between theory and practice by developing usable protocols for a variety of problems of central importance in streaming and database processing. All these problems require linear space in the traditional streaming model, and therefore our protocols demonstrate that adding a prover can exponentially reduce the effort needed by the verifier. Our experimental results show that our protocols are practical and scalable.
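
    Protocols of this kind are typically built around sum-check-style interactions. As a purely illustrative sketch (a generic textbook sum-check over a toy prime field, not any specific protocol from the paper), the verifier below keeps only the running claim and its random challenges, plus one evaluation of the input's multilinear extension, while the prover sends two field elements per round for log2(n) rounds:

```python
import random

P = 2**61 - 1  # prime field modulus (illustrative choice)

def mle_eval(values, point):
    """Evaluate the multilinear extension of `values` (length 2**k) at `point`.
    Recomputed from the full table here for simplicity; a streaming verifier
    fixes its random point in advance and maintains this value in one pass."""
    vals = list(values)
    for r in point:  # fold out one variable per coordinate
        vals = [(a * (1 - r) + b * r) % P for a, b in zip(vals[0::2], vals[1::2])]
    return vals[0]

def sumcheck(values):
    """Toy sum-check: the prover convinces the verifier that sum(values) is
    correct using k rounds of 2 field elements each (k = log2(len(values)))."""
    k = (len(values) - 1).bit_length()
    assert len(values) == 2**k
    claim = sum(values) % P       # the prover's claimed sum
    table = list(values)          # prover's state (the verifier never stores this)
    challenges = []
    for _ in range(k):
        # Prover: the round polynomial g(X) is linear, so g(0) and g(1) suffice.
        g0, g1 = sum(table[0::2]) % P, sum(table[1::2]) % P
        assert (g0 + g1) % P == claim, "verifier rejects"
        r = random.randrange(P)   # verifier's random challenge
        claim = (g0 * (1 - r) + g1 * r) % P  # g(r) for a linear g
        table = [(a * (1 - r) + b * r) % P for a, b in zip(table[0::2], table[1::2])]
        challenges.append(r)
    # Final round: check against the verifier's own single input evaluation.
    assert claim == mle_eval(values, challenges), "verifier rejects"
    return True

print(sumcheck([random.randrange(P) for _ in range(16)]))  # True
```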

    Authentication of moving kNN queries

    Small Synopses for Group-By Query Verification on Outsourced Data Streams

    Due to the overwhelming flow of information in many data stream applications, data outsourcing is a natural and effective paradigm for individual businesses to address the issue of scale. In the standard data outsourcing model, the data owner outsources streaming data to one or more third-party servers, which answer queries posed by a potentially large number of clients on the data owner's behalf. Data outsourcing intrinsically raises issues of trust, making outsourced query assurance on data streams a problem with important practical implications. Existing solutions proposed in this model all build upon cryptographic primitives such as signatures and collision-resistant hash functions, which only work for certain types of queries, for example, simple selection/aggregation queries. In this article, we consider another common type of query, namely "GROUP BY, SUM" queries, which previous techniques fail to support. Our new solutions are not based on cryptographic primitives; instead, they use algebraic and probabilistic techniques to compute a small synopsis on the true query result, which is then communicated to the client so as to verify the correctness of the query result returned by the server. The synopsis uses a constant amount of space irrespective of the result size, has an extremely small probability of failure, and can be maintained using no extra space as elements stream by and the query result changes. We then generalize our synopsis to tolerate a bounded number of erroneous groups, in order to support semantic load shedding on the server. When the number of erroneous groups is indeed tolerable, the synopsis can be strengthened so that we can locate and even correct these errors. Finally, we implement our techniques and perform an empirical evaluation using live network traffic.
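
    To make the idea concrete, here is a minimal sketch in the spirit of an algebraic, probabilistic synopsis. The exact construction below, a product fingerprint prod_g (alpha + g)^(sum_g) evaluated at a secret random alpha, is a simplification assumed for illustration, not necessarily the authors' synopsis: both the owner and the client fold each streamed (group, value) pair into a single field element, and the client compares a recomputation from the server's claimed result against it.

```python
import random

P = 2**61 - 1  # prime modulus; collision probability is roughly (total sum) / P

class GroupSumSynopsis:
    """Constant-space fingerprint of a "GROUP BY, SUM" result (illustrative
    sketch). Maintains prod_g (alpha + g)^(v_g) mod P at a secret random
    alpha, so two distinct result vectors collide only if alpha is a root
    of a low-degree polynomial (Schwartz-Zippel)."""

    def __init__(self, seed):
        self.alpha = random.Random(seed).randrange(1, P)  # owner/client secret
        self.fp = 1

    def update(self, group, amount):
        # One streamed tuple adds `amount` to the group's SUM; assumes amount >= 0.
        self.fp = self.fp * pow(self.alpha + group, amount, P) % P

    def verify(self, claimed_sums):
        # Recompute the fingerprint from the server's claimed {group: SUM}.
        fp = 1
        for g, s in claimed_sums.items():
            fp = fp * pow(self.alpha + g, s, P) % P
        return fp == self.fp

# Owner folds the stream; the client verifies the server's answer.
syn = GroupSumSynopsis(seed=42)
for g, v in [(1, 10), (2, 5), (1, 3), (3, 7), (2, 1)]:
    syn.update(g, v)
print(syn.verify({1: 13, 2: 6, 3: 7}))   # True
print(syn.verify({1: 13, 2: 6, 3: 8}))   # False (wrong sum detected)
```

    Note that the fingerprint is updated one element at a time without storing any per-group state, mirroring the constant-space, no-extra-space maintenance property claimed in the abstract.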