30 research outputs found

    Integer Polynomial Recovery from Outputs and its Application to Cryptanalysis of a Protocol for Secure Sorting

    We investigate the problem of recovering integer inputs (up to an affine scaling) when given only the integer monotonic polynomial outputs. Given n integer outputs of a degree-d integer monotonic polynomial whose coefficients and inputs are integers within known bounds and n ≫ d, we give an algorithm to recover the polynomial and the integer inputs (up to an affine scaling). A heuristic expected-time complexity analysis of our method shows that it is exponential in the size of the degree of the polynomial but polynomial in the size of the polynomial coefficients. We conduct experiments with real-world data as well as randomly chosen parameters and demonstrate the effectiveness of our algorithm over a wide range of parameters. The apparent hardness of recovering the input data from only the polynomial evaluations at specific integer points served as the basis of security of a recent protocol proposed by Kesarwani et al. for secure k-nearest neighbour computation on encrypted data that involves secure sorting. The protocol uses the outputs of a randomly chosen monotonic integer polynomial to hide its inputs, revealing only the ordering of the input data. Using our integer polynomial recovery algorithm, we show that we can recover the polynomial and the inputs within a few seconds, thereby demonstrating an attack on the protocol of Kesarwani et al.
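    As an illustration only (this is not the algorithm from the paper, whose details the abstract does not give), the following Python sketch shows the recovery setting for toy parameters: a monotonic integer polynomial with small positive bounded coefficients is recovered, together with its bounded integer inputs, by exhaustive search. All names and bounds are hypothetical.

```python
# Illustrative sketch (not the paper's algorithm): brute-force recovery of a
# degree-2 monotonic integer polynomial and its integer inputs from the
# outputs alone, for toy parameter sizes.
from itertools import product

def eval_poly(coeffs, x):
    # coeffs = (c0, c1, c2): p(x) = c0 + c1*x + c2*x^2
    return sum(c * x**i for i, c in enumerate(coeffs))

def recover(outputs, coeff_bound, input_bound):
    """Search for positive integer coefficients and inputs consistent with the outputs."""
    outputs = sorted(outputs)
    for coeffs in product(range(1, coeff_bound + 1), repeat=3):
        inputs = []
        for y in outputs:
            # invert the (monotonic) polynomial on the bounded input range
            xs = [x for x in range(input_bound + 1) if eval_poly(coeffs, x) == y]
            if not xs:
                break
            inputs.append(xs[0])
        else:
            return coeffs, inputs
    return None

# Toy example: p(x) = 3 + 2x + 5x^2 evaluated on hidden inputs [1, 4, 7]
secret_coeffs, secret_inputs = (3, 2, 5), [1, 4, 7]
observed = [eval_poly(secret_coeffs, x) for x in secret_inputs]
print(recover(observed, coeff_bound=8, input_bound=10))  # ((3, 2, 5), [1, 4, 7])
```

    For realistic coefficient and input bounds this exhaustive search is infeasible; the abstract indicates the paper's method instead scales polynomially in the size of the coefficients.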

    SparCML: High-Performance Sparse Communication for Machine Learning

    Applying machine learning techniques to the quickly growing data in science and industry requires highly scalable algorithms. Large datasets are most commonly processed "data parallel", distributed across many nodes. Each node's contribution to the overall gradient is summed using a global allreduce. This allreduce is the single communication bottleneck, and thus the scalability bottleneck, for most machine learning workloads. We observe that frequently, many gradient values are (close to) zero, leading to sparse or sparsifiable communications. To exploit this insight, we analyze, design, and implement a set of communication-efficient protocols for sparse input data, in conjunction with efficient machine learning algorithms which can leverage these primitives. Our communication protocols generalize standard collective operations by allowing processes to contribute arbitrary sparse input data vectors. Our generic communication library, SparCML, extends MPI to support additional features, such as non-blocking (asynchronous) operations and low-precision data representations. As such, SparCML and its techniques will form the basis of future highly scalable machine learning frameworks.
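    To make the idea of a sparse collective concrete, here is a minimal Python sketch of a sparse allreduce simulated over in-process "nodes"; the function name and data layout are illustrative only and do not reflect SparCML's actual MPI-level API.

```python
# Illustrative sketch (not SparCML's API): a sparse allreduce where each node
# contributes an (index -> value) map instead of a dense gradient vector.
from collections import defaultdict
from typing import Dict, List

def sparse_allreduce(contributions: List[Dict[int, float]]) -> Dict[int, float]:
    """Sum sparse gradient contributions; only non-zero entries are communicated."""
    total = defaultdict(float)
    for sparse_grad in contributions:
        for index, value in sparse_grad.items():
            total[index] += value
    return dict(total)

# Three "nodes", each holding a mostly-zero gradient represented sparsely.
node_grads = [
    {2: 0.5, 17: -1.0},
    {2: 0.25},
    {17: 0.5, 40: 2.0},
]
print(sparse_allreduce(node_grads))  # {2: 0.75, 17: -0.5, 40: 2.0}
```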

    A relational framework for inconsistency-aware query answering

    We introduce a novel framework for encoding inconsistency into relational tuples and tackling query answering for unions of conjunctive queries (UCQs) with respect to a set of denial constraints (DCs). We define a notion of inconsistent tuple with respect to a set of DCs and define four measures of the inconsistency degree of an answer tuple of a query. Two of these measures revolve around the minimal number of inconsistent tuples necessary to compute the answer tuples of a UCQ, whereas the other two rely on the maximum number of inconsistent tuples under set- and bag-semantics, respectively. In order to compute these measures of inconsistency degree, we leverage two provenance semiring models, namely why-provenance and provenance polynomials, which can be computed in polynomial time in the size of the relational instances for UCQs. Hence, these measures of inconsistency degree are also computable in polynomial time in data complexity. We also investigate top-k and bounded query answering by ranking the answer tuples by their inconsistency degrees. We explore both a fully materialized approach and a semi-materialized approach for the computation of top-k and bounded query results.
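    As a rough illustration of how why-provenance can drive such measures, the following Python sketch computes a minimal and a maximal inconsistency degree over the witnesses of an answer tuple; the definitions used here are assumptions for illustration, not the paper's formal definitions.

```python
# Illustrative sketch (definitions assumed, not taken verbatim from the paper):
# given the why-provenance of an answer tuple as a set of witnesses (each a
# frozenset of input-tuple ids) and the set of tuple ids violating some DC,
# compute a "minimal" and a "maximal" inconsistency degree over the witnesses.
from typing import FrozenSet, Set, Tuple

def inconsistency_degrees(why_prov: Set[FrozenSet[str]],
                          inconsistent_ids: Set[str]) -> Tuple[int, int]:
    # One count per witness: how many of its tuples are inconsistent.
    counts = [len(witness & inconsistent_ids) for witness in why_prov]
    return min(counts), max(counts)

# Answer derivable from two witnesses; tuples t2 and t5 violate a denial constraint.
why_prov = {frozenset({"t1", "t2"}), frozenset({"t3", "t5", "t2"})}
print(inconsistency_degrees(why_prov, {"t2", "t5"}))  # (1, 2)
```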

    Game theory for cooperation in multi-access edge computing

    Cooperative strategies amongst network players can improve network performance and spectrum utilization in future networking environments. Game theory is well suited to these emerging scenarios, since it models highly complex interactions among distributed decision makers. It also finds the most suitable management policies for the diverse players (e.g., content providers, cloud providers, edge providers, brokers, network providers, or users). These management policies optimize the performance of the overall network infrastructure with fair utilization of its resources. This chapter discusses relevant theoretical models that enable cooperation amongst the players in distinct ways, namely through pricing or reputation. In addition, the authors highlight open problems, such as the lack of proper models for dynamic and incomplete-information scenarios. These upcoming scenarios are associated with computing and storage at the network edge, as well as the deployment of large-scale IoT systems. The chapter concludes by discussing a business model for future networks.