
    Pseudorandom Self-Reductions for NP-Complete Problems

    A language L is random-self-reducible if deciding membership in L can be reduced (in polynomial time) to deciding membership in L for uniformly random instances. It is known that several "number theoretic" problems (such as computing the permanent of a matrix) admit random self-reductions. Feigenbaum and Fortnow showed that NP-complete languages are not non-adaptively random-self-reducible unless the polynomial-time hierarchy collapses, giving suggestive evidence that NP may not admit random self-reductions. Hirahara and Santhanam introduced a weakening of random self-reductions that they called pseudorandom self-reductions, in which a language L is reduced to a distribution that is computationally indistinguishable from the uniform distribution. They then showed that the Minimum Circuit Size Problem (MCSP) admits a non-adaptive pseudorandom self-reduction, and suggested that this gave further evidence distinguishing MCSP from standard NP-complete problems. We show that, in fact, the Clique problem admits a non-adaptive pseudorandom self-reduction, assuming the planted clique conjecture. More generally, we show the following. Call a property of graphs Π hereditary if G ∈ Π implies H ∈ Π for every induced subgraph H of G. We show that for any infinite hereditary property Π, the problem of finding a maximum induced subgraph H ∈ Π of a given graph G admits a non-adaptive pseudorandom self-reduction.
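    For readers less familiar with the terminology, the following is an informal restatement of the notions used in this abstract. It is a sketch only; the paper's formal definitions additionally fix the number of queries and the exact error and indistinguishability bounds.

    ```latex
    % Informal restatement of the notions above; not the paper's exact formalization.
    \documentclass{article}
    \usepackage{amsmath,amssymb}
    \begin{document}
    A language $L$ has a \emph{non-adaptive random self-reduction} if there is a
    polynomial-time algorithm that, on input $x$ of length $n$, produces queries
    $y_1,\dots,y_k$ depending only on $x$ and its internal randomness, and then
    computes $L(x)$ from $L(y_1),\dots,L(y_k)$ with probability at least $2/3$,
    where each query $y_i$ is individually uniformly distributed over instances
    of length $\mathrm{poly}(n)$. A \emph{pseudorandom self-reduction} relaxes
    only the last condition: each $y_i$ need only be drawn from a distribution
    that is computationally indistinguishable from uniform. A graph property
    $\Pi$ is \emph{hereditary} if it is closed under taking induced subgraphs.
    \end{document}
    ```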

    Privacy-Preserving OLAP-based monitoring of data streams: The PP-OMDS approach

    In this paper, we propose PP-OMDS (Privacy-Preserving OLAP-based Monitoring of Data Streams), a framework for privacy-preserving OLAP-based monitoring of data streams, which is relevant to a wide range of application scenarios (e.g., security and emergency management). The paper describes the motivations, principles, and achievements of the PP-OMDS framework, along with its technological advancements and innovations. We also include a detailed comparative analysis against competing frameworks, together with a trade-off analysis.
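    The abstract does not spell out PP-OMDS's technical mechanism, so the sketch below is only a generic illustration of the underlying idea: OLAP-style aggregation of one stream window into cube cells, with each released cell perturbed by Laplace noise. The noise mechanism, the (region, category) schema, and all names here are assumptions for illustration, not the framework's actual design.

    ```python
    # Generic illustration only: the abstract does not describe PP-OMDS's actual
    # mechanism, so this sketches one common privacy-preserving device (Laplace
    # noise on windowed OLAP aggregates), not the framework itself.
    from collections import defaultdict
    from typing import Dict, Iterable, Tuple

    import numpy as np

    def noisy_olap_window(events: Iterable[Tuple[str, str, float]],
                          epsilon: float = 1.0,
                          sensitivity: float = 1.0,
                          seed: int = 0) -> Dict[Tuple[str, str], float]:
        """Aggregate one stream window of (region, category, value) events into an
        OLAP-style cube of sums, then perturb each cell before releasing it to the
        monitoring layer."""
        rng = np.random.default_rng(seed)
        cube: Dict[Tuple[str, str], float] = defaultdict(float)
        for region, category, value in events:
            cube[(region, category)] += value
        scale = sensitivity / epsilon  # larger epsilon -> less noise, weaker privacy
        return {cell: total + rng.laplace(0.0, scale) for cell, total in cube.items()}

    # Example: one window of stream events grouped by (region, event type).
    window = [("north", "alert", 1.0), ("north", "alert", 1.0), ("south", "info", 1.0)]
    print(noisy_olap_window(window, epsilon=0.5))
    ```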

    Probabilistic Invariant Learning with Randomized Linear Classifiers

    Designing models that are both expressive and preserve the known invariances of a task is an increasingly hard problem. Existing solutions trade off invariance for computational or memory resources. In this work, we show how to leverage randomness and design models that are both expressive and invariant while using fewer resources. Inspired by randomized algorithms, our key insight is that accepting probabilistic notions of universal approximation and invariance can reduce our resource requirements. More specifically, we propose a class of binary classification models called Randomized Linear Classifiers (RLCs). We give parameter and sample-size conditions under which RLCs can, with high probability, approximate any (smooth) function while preserving invariance to compact group transformations. Leveraging this result, we design three RLCs that are provably probabilistically invariant for classification tasks over sets, graphs, and spherical data. We show how these models can achieve probabilistic invariance and universality using fewer resources than (deterministic) neural networks and their invariant counterparts. Finally, we empirically demonstrate the benefits of this new class of models on invariant tasks where deterministic invariant neural networks are known to struggle.
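    As a rough illustration of the general idea of a randomized linear classifier, taken loosely as a linear threshold model whose weights are freshly sampled at every prediction and whose answers are correct with high probability, the sketch below is an assumption-laden toy; it is not the RLC construction or the invariant variants from the paper, and the class name, weight distribution, and voting scheme are all illustrative choices.

    ```python
    # Minimal sketch of a "randomized linear classifier" in the loose sense of a
    # linear threshold model whose weights are re-sampled on every prediction.
    # Illustration of the general idea only, not the paper's RLC construction.
    import numpy as np

    class RandomizedLinearClassifier:
        def __init__(self, mean: np.ndarray, std: float = 0.1, seed: int = 0):
            # mean: center of the weight distribution (e.g. from an ordinary
            # trained linear classifier); std: spread of the random weights.
            self.mean = np.asarray(mean, dtype=float)
            self.std = std
            self.rng = np.random.default_rng(seed)

        def predict_once(self, x: np.ndarray) -> int:
            # Draw fresh weights for this single prediction.
            w = self.rng.normal(self.mean, self.std)
            return int(w @ x >= 0.0)

        def predict(self, x: np.ndarray, votes: int = 31) -> int:
            # Majority vote over independent randomized predictions: a label that
            # a single draw outputs with probability > 1/2 is returned with high
            # probability as the number of votes grows.
            outcomes = [self.predict_once(x) for _ in range(votes)]
            return int(2 * sum(outcomes) >= votes)

    # Usage: separate points by the sign of their first coordinate.
    clf = RandomizedLinearClassifier(mean=np.array([1.0, 0.0]), std=0.2, seed=0)
    print(clf.predict(np.array([0.8, -0.3])), clf.predict(np.array([-0.5, 0.4])))
    ```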