    Indexing Information for Data Forensics

    We introduce novel techniques for organizing the indexing structures by which data is stored so that alterations from an original version can be detected and the changed values specifically identified. We give forensic constructions for several fundamental data structures, including arrays, linked lists, binary search trees, skip lists, and hash tables. Some of our constructions are based on a new reduced-randomness construction for nonadaptive combinatorial group testing.
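
    A minimal illustrative sketch of the group-testing idea, assuming a single altered element in a plain array (the hashing layout and the single-change assumption are for illustration only, not the paper's forensic constructions): each index joins one group per bit of its binary representation, a digest is kept per group, and the pattern of mismatching digests spells out the index of the change.

        import hashlib

        def group_digests(data):
            # One group per (bit position, bit value): group (b, v) covers the
            # indices whose b-th binary digit equals v, so 2*ceil(log2 n) digests
            # are stored alongside n elements.
            bits = max(1, (len(data) - 1).bit_length())
            digests = {}
            for b in range(bits):
                for v in (0, 1):
                    h = hashlib.sha256()
                    for i, x in enumerate(data):
                        if ((i >> b) & 1) == v:
                            h.update(repr((i, x)).encode())
                    digests[(b, v)] = h.hexdigest()
            return digests

        def locate_single_change(data, saved):
            # Recompute the digests; with at most one altered element, exactly one
            # group per bit position mismatches, and those bits spell the index.
            current = group_digests(data)
            bits = max(1, (len(data) - 1).bit_length())
            index, tampered = 0, False
            for b in range(bits):
                fail0 = current[(b, 0)] != saved[(b, 0)]
                fail1 = current[(b, 1)] != saved[(b, 1)]
                if fail1:
                    index |= 1 << b
                tampered = tampered or fail0 or fail1
            return index if tampered else None

        arr = [10, 20, 30, 40, 50, 60, 70, 80]
        saved = group_digests(arr)
        arr[5] = 61                              # tamper with one value
        print(locate_single_change(arr, saved))  # -> 5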

    Query Racing: Fast Completeness Certification of Query Results

    We present a general and effective method to certify completeness of query results on relational tables stored in an untrusted DBMS. Our main contribution is the concept of "Query Race": we split up a general query into several single-attribute queries, and exploit concurrency and speed to bind the complexity to the fastest of them. Our method supports selection queries with general composition of conjunctive and disjunctive order-based conditions on different attributes at the same time. To achieve our results, we require neither previous knowledge of queries nor specific support by the DBMS. We validate our approach with experimental results obtained on a prototype implementation.
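
    A hedged sketch of the racing idea only, with the certification machinery omitted and an invented SQLite table (items, with columns id, price, weight): each single-attribute condition becomes its own query, the queries run concurrently, the first result set to arrive serves as the candidate set, and the remaining conditions are checked on it locally.

        import sqlite3
        from concurrent.futures import ThreadPoolExecutor, as_completed

        DB = "inventory.db"                      # hypothetical database file
        COLS = {"id": 0, "price": 1, "weight": 2}

        def run_subquery(column, lo, hi):
            # Single-attribute range query; each subquery gets its own connection.
            con = sqlite3.connect(DB)
            try:
                return column, con.execute(
                    f"SELECT id, price, weight FROM items "
                    f"WHERE {column} BETWEEN ? AND ?", (lo, hi)).fetchall()
            finally:
                con.close()

        def race_query(conditions):
            # conditions: {column: (lo, hi)}. Launch every single-attribute query
            # at once, keep the first one to finish, and apply the remaining
            # conditions to its rows on the client side.
            pool = ThreadPoolExecutor(max_workers=len(conditions))
            futures = [pool.submit(run_subquery, c, lo, hi)
                       for c, (lo, hi) in conditions.items()]
            winner, candidate = next(as_completed(futures)).result()
            pool.shutdown(wait=False)            # slower subqueries are ignored
            return [row for row in candidate
                    if all(lo <= row[COLS[c]] <= hi
                           for c, (lo, hi) in conditions.items() if c != winner)]

        # Example: rows with price in [10, 50] AND weight in [1, 5].
        # print(race_query({"price": (10, 50), "weight": (1, 5)}))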

    GeomNet: geometric computing over the Internet

    Signatures of correct computation

    We introduce Signatures of Correct Computation (SCC), a new model for verifying dynamic computations in cloud settings. In the SCC model, a trusted source outsources a function f to an untrusted server, along with a public key for that function (to be used during verification). The server can then produce a succinct signature σ vouching for the correctness of the computation of f, i.e., that some result v is indeed the correct outcome of the function f evaluated on some point a. There are two crucial performance properties that we want to guarantee in an SCC construction: (1) verifying the signature should take asymptotically less time than evaluating the function f; and (2) the public key should be efficiently updated whenever the function changes. We construct SCC schemes (satisfying the above two properties) supporting expressive manipulations over multivariate polynomials, such as polynomial evaluation and differentiation. Our constructions are adaptively secure in the random oracle model and achieve optimal updates, i.e., the function’s public key can be updated in time proportional to the number of updated coefficients, without performing a linear-time computation (in the size of the polynomial). We also show that signatures of correct computation imply Publicly Verifiable Computation (PVC), a model recently introduced in several concurrent and independent works. Roughly speaking, in the SCC model, any client can verify the signature σ and be convinced of some computation result, whereas in the PVC model only the client that issued a query (or anyone who trusts this client) can verify that the server returned a valid signature (proof) for the answer to the query. Our techniques can be readily adapted to construct PVC schemes with adaptive security, efficient updates and without the random oracle model.
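
    A toy sketch of the algebraic identity that evaluation-verification schemes of this kind commonly rest on: for a univariate polynomial f, f(X) - f(a) = (X - a) * q(X), so the server can return f(a) together with the quotient q as a witness. In the toy below the verifier tests the identity naively at a random field point (linear work), whereas the actual SCC construction checks the same kind of identity in constant time against a succinct public key using bilinear maps; the prime, field, and polynomial are illustrative.

        import random

        P = 2**61 - 1    # toy prime field; the real scheme works over pairing groups

        def poly_eval(coeffs, x):
            # Horner evaluation of f (coefficients low-to-high), mod P.
            acc = 0
            for c in reversed(coeffs):
                acc = (acc * x + c) % P
            return acc

        def witness_quotient(coeffs, a):
            # Server side: synthetic division gives q with f(X) - f(a) = (X - a)*q(X).
            q, carry = [0] * (len(coeffs) - 1), 0
            for i in range(len(coeffs) - 1, 0, -1):
                carry = (coeffs[i] + carry * a) % P
                q[i - 1] = carry
            return q

        def check(a, claimed, q, eval_f):
            # Verifier side (toy): test f(r) - claimed == (r - a)*q(r) at a random r.
            # Calling eval_f(r) here is linear work; the real construction checks the
            # identity in constant time against a succinct public key for f.
            r = random.randrange(P)
            return (eval_f(r) - claimed) % P == ((r - a) * poly_eval(q, r)) % P

        f = [3, 2, 5]                        # f(X) = 3 + 2X + 5X^2
        a = 7
        v = poly_eval(f, a)                  # server's answer: f(7) = 262
        proof = witness_quotient(f, a)       # witness polynomial q(X) = 37 + 5X
        print(check(a, v, proof, lambda r: poly_eval(f, r)))   # -> True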

    Optimal verification of operations on dynamic sets

    We study the verification of set operations in the model of authenticated data structures, namely the problem of cryptographically checking the correctness of outsourced set operations performed by an untrusted server over a dynamic collection of sets that are owned (and updated) by a trusted source. We present a new authenticated data structure scheme that allows any entity to publicly verify the correctness of primitive set operations such as intersection, union, subset, and set difference. Based on a novel extension of the security properties of bilinear-map accumulators as well as on a primitive called an accumulation tree, our authenticated data structure is the first to achieve optimal verification and proof complexity (i.e., only proportional to the size of the query parameters and the answer) as well as optimal update complexity (i.e., constant), without bearing any extra asymptotic space overhead. Queries (i.e., constructing the proof) are also efficient, adding a logarithmic overhead to the complexity needed to compute the actual answer. In contrast, existing schemes entail high communication and verification costs or high storage costs, as they recompute the query over authentic data or precompute answers to all possible queries. Applications of interest include efficient verification of keyword search and database queries. We base the security of our constructions on the bilinear q-strong Diffie-Hellman assumption.
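
    An integer toy analogue of the characteristic-polynomial argument behind such intersection proofs, with each element encoded as a small prime and a set represented by the product of its primes (the encoding and the plain-integer arithmetic are illustrative; the real scheme runs the analogous algebra inside bilinear-map accumulators to obtain constant-size, publicly verifiable proofs): subset-ness is divisibility, and completeness of the claimed intersection is certified by Bézout coefficients showing the two quotients are coprime.

        # Toy element encoding: every element is a distinct small prime, and a set
        # is represented ("accumulated") as the product of its primes.
        PRIME = {"a": 2, "b": 3, "c": 5, "d": 7, "e": 11}

        def accumulate(s):
            prod = 1
            for e in s:
                prod *= PRIME[e]
            return prod

        def extended_gcd(x, y):
            # Return (g, u, v) with u*x + v*y == g == gcd(x, y).
            if y == 0:
                return x, 1, 0
            g, u, v = extended_gcd(y, x % y)
            return g, v, u - (x // y) * v

        def prove_intersection(A, B):
            I = A & B
            qa, qb = accumulate(A - I), accumulate(B - I)   # quotient witnesses
            g, u, v = extended_gcd(qa, qb)                  # Bezout certificate
            assert g == 1                                   # quotients are coprime
            return I, (qa, qb, u, v)

        def verify_intersection(acc_A, acc_B, I, proof):
            qa, qb, u, v = proof
            acc_I = accumulate(I)
            subset_ok = acc_I * qa == acc_A and acc_I * qb == acc_B   # I is in both
            complete_ok = u * qa + v * qb == 1    # no common element was left out of I
            return subset_ok and complete_ok

        A, B = {"a", "b", "c"}, {"b", "c", "d"}
        I, proof = prove_intersection(A, B)
        print(sorted(I), verify_intersection(accumulate(A), accumulate(B), I, proof))
        # -> ['b', 'c'] True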

    Smooth Orthogonal Layouts

    We study the problem of creating smooth orthogonal layouts for planar graphs. While in traditional orthogonal layouts every edge is made of a sequence of axis-aligned line segments, in smooth orthogonal layouts every edge is made of axis-aligned segments and circular arcs with common tangents. Our goal is to create such layouts with low edge complexity, measured by the number of line and circular-arc segments. We show that every biconnected 4-planar graph has a smooth orthogonal layout with edge complexity 3. If the input graph has a complexity-2 traditional orthogonal layout, we can transform it into a smooth complexity-2 layout. Using the Kandinsky model for removing the degree restriction, we show that any planar graph has a smooth complexity-2 layout.

    Reliable Resource Searching in P2P Networks

    We study the problem of securely searching for resources in P2P networks where a constant fraction of the peers may act maliciously. We present two novel hashing-based schemes that can be employed to reliably support resource location and content retrieval queries, limiting the ability of adversarial nodes to carry out attacks. Our schemes achieve scalability and load balancing and have small authentication overhead. In particular, for a network with n peers, resources are securely located with O(log² n) messages, and content from a collection of m data items is securely retrieved with O(log n log m) messages.
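
    A back-of-the-envelope illustration, not the paper's protocol, of how an O(log² n) message bound for secure lookup typically arises: a lookup that traverses O(log n) overlay hops, where each hop consults a quorum of O(log n) peers so that, with random quorum membership and a malicious fraction below one half, honest peers are very likely to hold a majority at every step.

        import math

        def lookup_messages(n):
            # Toy quorum-routing count: about log2(n) overlay hops, and at each hop
            # the query goes to a quorum of about log2(n) peers, so the total is
            # Theta((log n)^2) messages per lookup.
            quorum = math.ceil(math.log2(n))
            hops = math.ceil(math.log2(n))
            return hops * quorum

        for n in (2**10, 2**16, 2**20):
            print(n, lookup_messages(n))   # 1024 100, 65536 256, 1048576 400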

    Trade-Offs in Planar Polyline Drawings

    Angular resolution, area, and the number of bends are some important aesthetic criteria for a polyline drawing. Although trade-offs among these criteria have been examined over the past decades, many of these trade-offs are still not known to be optimal. In this paper we give a new technique to compute polyline drawings for planar triangulations. Our algorithm is simple and intuitive, yet it yields a significant improvement over the known results. We present the first smooth trade-off between the area and angular resolution for 2-bend polyline drawings of any given planar graph. Specifically, for any given n-vertex triangulation, our algorithm computes a drawing with angular resolution r/d(v) at each vertex v and area f(n,r), for any r ∈ (0,1], where d(v) denotes the degree of v. For r ≤ 0.5, f(n,r) is less than the drawing area required by previous algorithms; f(n,r) ranges from 7.12n² when r ≤ 0.3 to 32.12n² when r = 1.