
    An Algorithm for Constructing a Smallest Register with Non-Linear Update Generating a Given Binary Sequence

    Registers with Non-Linear Update (RNLUs) are a generalization of Non-Linear Feedback Shift Registers (NLFSRs) in which both feedback and feedforward connections are allowed and no chain connection between the stages is required. In this paper, a new algorithm for constructing RNLUs generating a given binary sequence is presented. The expected size of RNLUs constructed by the presented algorithm is proved to be O(n/log(n/p)), where n is the sequence length and p is the degree of parallelization. This is asymptotically smaller than both the expected size of RNLUs constructed by previous algorithms and the expected size of LFSRs and NLFSRs generating the same sequence. The presented algorithm can potentially be useful for many applications, including testing, wireless communications, and cryptography.
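
    To make the register model concrete, here is a minimal, hypothetical sketch of a small register with non-linear update: each stage's next state is an arbitrary Boolean function of the current stages, so both feedback and feedforward connections appear and there is no fixed shift chain. The update functions below are illustrative only and are not produced by the paper's construction algorithm.

```python
# Minimal sketch (not the paper's construction): a 3-stage register with a
# non-linear update, where each stage's next state is an arbitrary Boolean
# function of the current state (feedback and feedforward are both allowed,
# and there is no fixed shift/chain structure).
def step(state):
    s0, s1, s2 = state
    return (
        s1 ^ (s0 & s2),   # example non-linear update for stage 0
        s2,               # example update for stage 1 (a plain shift)
        s0 ^ s1,          # example linear update for stage 2
    )

def generate(state, length):
    """Output the value of stage 0 at each step."""
    out = []
    for _ in range(length):
        out.append(state[0])
        state = step(state)
    return out

print(generate((1, 0, 1), 16))
```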

    SPRIGHT: A Fast and Robust Framework for Sparse Walsh-Hadamard Transform

    We consider the problem of computing the Walsh-Hadamard Transform (WHT) of some N-length input vector in the presence of noise, where the N-point Walsh spectrum is K-sparse with K = O(N^δ) scaling sub-linearly in the input dimension N for some 0 < δ < 1. Over the past decade, there has been a resurgence in research related to the computation of the Discrete Fourier Transform (DFT) for some length-N input signal that has a K-sparse Fourier spectrum. In particular, through a sparse-graph code design, our earlier work on the Fast Fourier Aliasing-based Sparse Transform (FFAST) algorithm computes the K-sparse DFT in time O(K log K) by taking O(K) noiseless samples. Inspired by the coding-theoretic design framework, Scheibler et al. proposed the Sparse Fast Hadamard Transform (SparseFHT) algorithm that elegantly computes the K-sparse WHT in the absence of noise using O(K log N) samples in time O(K log^2 N). However, the SparseFHT algorithm explicitly exploits the noiseless nature of the problem, and is not equipped to deal with scenarios where the observations are corrupted by noise. Therefore, a question of critical interest is whether this coding-theoretic framework can be made robust to noise. Further, if the answer is yes, what is the extra price that needs to be paid for being robust to noise? In this paper, we show, quite interestingly, that there is no extra price that needs to be paid for being robust to noise other than a constant factor. In other words, we can maintain the same sample complexity O(K log N) and the computational complexity O(K log^2 N) as those of the noiseless case, using our SParse Robust Iterative Graph-based Hadamard Transform (SPRIGHT) algorithm. Comment: Part of our results was reported in ISIT 2014, titled "The SPRIGHT algorithm for robust sparse Hadamard Transforms."
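
    As background for readers unfamiliar with the transform itself, the sketch below is the standard in-place fast Walsh-Hadamard transform, which costs O(N log N) on a dense length-N input; it is not the SPRIGHT algorithm, whose point is to beat this bound by subsampling and sparse-graph decoding when the spectrum is K-sparse.

```python
# Background sketch, not SPRIGHT itself: the dense fast Walsh-Hadamard
# transform, computed in place in O(N log N) operations.
def fwht(x):
    """Walsh-Hadamard transform of a sequence whose length is a power of two."""
    x = list(x)
    h = 1
    while h < len(x):
        for i in range(0, len(x), 2 * h):
            for j in range(i, i + h):
                a, b = x[j], x[j + h]
                x[j], x[j + h] = a + b, a - b
        h *= 2
    return x

print(fwht([1, 0, 1, 0, 0, 1, 1, 0]))
```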

    Approximate Gradient Coding via Sparse Random Graphs

    Distributed algorithms are often beset by the straggler effect, where the slowest compute nodes in the system dictate the overall running time. Coding-theoretic techniques have recently been proposed to mitigate stragglers via algorithmic redundancy. Prior work in coded computation and gradient coding has mainly focused on exact recovery of the desired output. However, slightly inexact solutions can be acceptable in applications that are robust to noise, such as model training via gradient-based algorithms. In this work, we present computationally simple gradient codes based on sparse graphs that guarantee fast and approximately accurate distributed computation. We demonstrate that sacrificing a small amount of accuracy can significantly increase algorithmic robustness to stragglers.
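
    The sketch below is a schematic of the general idea only, not the specific gradient codes constructed in the paper: each worker is handed a few randomly chosen data partitions and returns the sum of their partial gradients, and the master rescales whatever the non-straggling workers return to approximate the full gradient. All names and parameters are illustrative.

```python
# Schematic sketch of approximate gradient coding with a sparse random
# assignment (not the paper's constructions). Each worker receives d of the
# data partitions chosen at random and returns the sum of those partial
# gradients; the master sums the non-stragglers' messages and rescales by the
# expected coverage of each partition.
import random

def assign(num_workers, num_parts, d, seed=0):
    rng = random.Random(seed)
    return [rng.sample(range(num_parts), d) for _ in range(num_workers)]

def approximate_gradient(partial_grads, assignment, responded, d):
    """partial_grads[i] is the gradient of partition i (a float here, for simplicity)."""
    num_parts = len(partial_grads)
    total = sum(partial_grads[i] for w in responded for i in assignment[w])
    expected_coverage = d * len(responded) / num_parts
    return total / expected_coverage

parts = [float(i) for i in range(20)]          # stand-in partial gradients
A = assign(num_workers=20, num_parts=20, d=3)
alive = [w for w in range(20) if w % 5 != 0]   # pretend every fifth worker straggles
print(approximate_gradient(parts, A, alive, d=3), sum(parts))
```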

    Mathematical foundations of modern cryptography: computational complexity perspective

    Theoretical computer science has found fertile ground in many areas of mathematics. The approach has been to consider classical problems through the prism of computational complexity, where the number of basic computational steps taken to solve a problem is the crucial qualitative parameter. This new approach has led to a sequence of advances, in setting and solving new mathematical challenges as well as in harnessing discrete mathematics to the task of solving real-world problems. In this talk, I will survey the development of modern cryptography -- the mathematics behind secret communications and protocols -- in this light. I will describe the complexity-theoretic foundations underlying the cryptographic tasks of encryption, pseudorandom number generators and functions, zero-knowledge interactive proofs, and multi-party secure protocols. I will attempt to highlight the paradigms and proof techniques which unify these foundations, and which have made their way into the mainstream of complexity theory.

    On the Efficiency of Solving Boolean Polynomial Systems with the Characteristic Set Method

    An improved characteristic set algorithm for solving Boolean polynomial systems is proposed. This algorithm is based on the idea of converting all the polynomials into monic ones by zero decomposition, and using additions to obtain pseudo-remainders. Three important techniques are applied in the algorithm. The first is eliminating variables using newly generated linear polynomials. The second is optimizing the strategy for choosing the polynomial used in zero decomposition. The third is computing add-remainders to eliminate the leading variable of newly generated monic polynomials. By analyzing the depth of the zero decomposition tree, we present some complexity bounds for this algorithm, which are lower than the complexity bounds of previous characteristic set algorithms. Extensive experimental results show that this new algorithm is more efficient than previous characteristic set algorithms for solving Boolean polynomial systems.
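
    As a toy illustration of the add-remainder idea only (not the full characteristic set algorithm), the sketch below stores a Boolean polynomial over GF(2) as a set of monomials, with addition as symmetric difference; adding two monic polynomials that share the same leading variable eliminates that variable without any polynomial multiplication.

```python
# Toy illustration of the add-remainder idea, not the full characteristic set
# algorithm. A Boolean polynomial over GF(2) is stored as a set of monomials,
# each monomial a frozenset of variable names; addition is XOR, i.e. the
# symmetric difference of the monomial sets. Adding two monic polynomials
# x + U1 and x + U2 with the same leading variable x eliminates x.
def poly(*monomials):
    """Build a polynomial from monomials given as iterables of variable names."""
    return {frozenset(m) for m in monomials}

def add(p, q):
    """Add two Boolean polynomials (coefficients lie in GF(2), so equal terms cancel)."""
    return p ^ q

# p1 = x + y*z   and   p2 = x + y + 1   (the empty monomial is the constant 1)
p1 = poly(("x",), ("y", "z"))
p2 = poly(("x",), ("y",), ())
print(add(p1, p2))   # {y*z, y, 1}: the leading variable x has been eliminated
```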

    Pseudorandomness for Multilinear Read-Once Algebraic Branching Programs, in any Order

    We give deterministic black-box polynomial identity testing algorithms for multilinear read-once oblivious algebraic branching programs (ROABPs), in n^(lg^2 n) time. Further, our algorithm is oblivious to the order of the variables. This is the first sub-exponential time algorithm for this model. Furthermore, our result has no known analogue in the model of read-once oblivious boolean branching programs with unknown order, as despite recent work there is no known pseudorandom generator for this model with sub-polynomial seed length (for unbounded-width branching programs). This result extends and generalizes the result of Forbes and Shpilka that obtained an n^(lg n)-time algorithm when given the order. We also extend and strengthen the work of Agrawal, Saha and Saxena that gave a black-box algorithm running in time exp((lg n)^d) for set-multilinear formulas of depth d. We note that the model of multilinear ROABPs contains the model of set-multilinear algebraic branching programs, which itself contains the model of set-multilinear formulas of arbitrary depth. We obtain our results by recasting, and improving upon, the ideas of Agrawal, Saha and Saxena. We phrase the ideas in terms of rank condensers and Wronskians, and show that our results improve upon the classical multivariate Wronskian, which may be of independent interest. In addition, we give the first n^(lglg n) black-box polynomial identity testing algorithm for the so-called model of diagonal circuits. This model, introduced by Saxena, has recently found applications in the work of Mulmuley, as well as in the work of Gupta, Kamath, Kayal, and Saptharishi. Previous work had given n^(lg n)-time algorithms for this class. More generally, our result holds for any model computing polynomials whose partial derivatives (of all orders) span a low dimensional linear space. Comment: 38 pages.
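
    To make the closing remark concrete, here is a small hypothetical check (the example polynomial and the use of sympy are my own, not from the paper): for a power of a linear form, the building block of diagonal circuits, the partial derivatives of all orders span a space of dimension only deg + 1, even though there are many distinct derivatives.

```python
# Hypothetical illustration (example polynomial and sympy usage are not from
# the paper): all partial derivatives of a power of a linear form span a
# linear space of dimension only deg + 1.
import itertools
import sympy as sp

x, y, z = sp.symbols("x y z")
f = sp.expand((x + 2*y + 3*z)**4)

# Collect all partial derivatives of all orders (order 0 is f itself).
derivs = [f]
for order in range(1, 5):
    for combo in itertools.combinations_with_replacement((x, y, z), order):
        derivs.append(sp.expand(sp.diff(f, *combo)))

# Write each derivative as a coefficient vector over a common monomial basis
# and measure the dimension of their span.
monomials = sorted({m for d in derivs for m in sp.Poly(d, x, y, z).monoms()})
rows = [[sp.Poly(d, x, y, z).nth(*m) for m in monomials] for d in derivs]
print(len(derivs), sp.Matrix(rows).rank())   # 35 derivatives, but the rank is only 5
```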

    Short seed extractors against quantum storage

    Some, but not all, extractors resist adversaries with limited quantum storage. In this paper we show that Trevisan's extractor has this property, thereby exhibiting an extractor secure against quantum storage with a logarithmic seed length.

    Monotone Boolean Functions, Feasibility/Infeasibility, LP-type problems and MaxCon

    This paper outlines connections between Monotone Boolean Functions, LP-Type problems and the Maximum Consensus Problem. The latter refers to a particular type of robust fitting characterisation, popular in Computer Vision (MaxCon). Indeed, this is our main motivation, but we believe the results of the study of these connections are more widely applicable to LP-type problems (at least 'thresholded versions', as we describe), and perhaps even more widely. We illustrate, with examples from Computer Vision, how the resulting perspectives suggest new algorithms. Indeed, in the experimental part we focus on how the Influence (a property of Boolean Functions that takes on a special form if the function is Monotone) can guide a search for the MaxCon solution. Comment: Parts under conference review, work in progress. Keywords: Monotone Boolean Functions, Consensus Maximisation, LP-Type Problem, Computer Vision, Robust Fitting, Matroid, Simplicial Complex, Independence System
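
    The sketch below is a generic sampling estimate of variable influence, not the paper's MaxCon search procedure: the influence of a variable is the probability, over a uniformly random input, that flipping that single bit changes the function's output. The example function and parameters are illustrative.

```python
# Generic sketch (not the paper's MaxCon search): estimate the influence of
# each variable of a Boolean function f by sampling, i.e. the probability that
# flipping that single bit flips the output. For monotone functions this
# quantity has the special structure the paper exploits to guide the search.
import random

def influences(f, n, samples=20000, seed=0):
    rng = random.Random(seed)
    counts = [0] * n
    for _ in range(samples):
        x = [rng.randint(0, 1) for _ in range(n)]
        fx = f(x)
        for i in range(n):
            y = list(x)
            y[i] ^= 1
            if f(y) != fx:
                counts[i] += 1
    return [c / samples for c in counts]

def maj3(x):
    """2-out-of-3 majority: a simple monotone Boolean function."""
    return int(x[0] + x[1] + x[2] >= 2)

print(influences(maj3, 3))   # all three variables have the same non-zero influence
```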

    Hyperparameter Optimization in Neural Networks via Structured Sparse Recovery

    In this paper, we study two important problems in the automated design of neural networks -- Hyper-parameter Optimization (HPO) and Neural Architecture Search (NAS) -- through the lens of sparse recovery methods. In the first part of this paper, we establish a novel connection between HPO and structured sparse recovery. In particular, we show that a special encoding of the hyperparameter space enables a natural group-sparse recovery formulation, which, when coupled with HyperBand (a multi-armed bandit strategy), leads to improvement over existing hyperparameter optimization methods. Experimental results on image datasets such as CIFAR-10 confirm the benefits of our approach. In the second part of this paper, we establish a connection between NAS and structured sparse recovery. Building upon "one-shot" approaches in NAS, we propose a novel algorithm that we call CoNAS by merging ideas from one-shot approaches with techniques for learning low-degree sparse Boolean polynomials. We provide a theoretical analysis of the number of validation error measurements. Finally, we validate our approach on several datasets and discover novel architectures hitherto unreported, achieving competitive (or better) results in both performance and search time compared to existing NAS approaches. Comment: arXiv admin note: text overlap with arXiv:1906.0286
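
    The sketch below is a toy version of the ingredient CoNAS builds on, not the CoNAS algorithm itself: recovering a sparse, low-degree Boolean (Fourier) polynomial from a limited number of noisy evaluations by sparse recovery over parity features, here with a small orthogonal matching pursuit loop. The dimensions, sparsity, and coefficient values are made up for the example.

```python
# Toy sketch of sparse recovery of a low-degree Boolean polynomial (not CoNAS
# itself): build +/-1 parity features for all variable subsets up to the given
# degree, take a few noisy measurements, and recover the sparse coefficient
# support with a tiny orthogonal matching pursuit loop.
import itertools
import numpy as np

rng = np.random.default_rng(0)
n, degree, num_meas = 8, 2, 25

# One parity feature per subset S of variables with |S| <= degree.
subsets = [s for d in range(degree + 1) for s in itertools.combinations(range(n), d)]

def parities(X):
    cols = [np.prod((-1) ** X[:, list(s)], axis=1) if s else np.ones(len(X))
            for s in subsets]
    return np.stack(cols, axis=1)

# Ground truth: a polynomial with only 3 non-zero Fourier coefficients.
true_coeffs = np.zeros(len(subsets))
true_coeffs[[1, 5, 20]] = [1.5, -2.0, 0.7]

X = rng.integers(0, 2, size=(num_meas, n))
y = parities(X) @ true_coeffs + 0.01 * rng.standard_normal(num_meas)

# Orthogonal matching pursuit: greedily pick the parity feature most correlated
# with the current residual, then refit on the chosen support.
A, support, residual = parities(X), [], y.copy()
for _ in range(3):
    support.append(int(np.argmax(np.abs(A.T @ residual))))
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    residual = y - A[:, support] @ coef
print(sorted(support), "vs true support", [1, 5, 20])
```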

    sAVSS: Scalable Asynchronous Verifiable Secret Sharing in BFT Protocols

    This paper introduces a new way to incorporate verifiable secret sharing (VSS) schemes into Byzantine Fault Tolerance (BFT) protocols. This technique extends the threshold guarantee of classical Byzantine Fault Tolerant algorithms to include privacy as well. This provides applications with a powerful primitive: a threshold trusted third party, which simplifies many difficult problems such as fair exchange. In order to incorporate VSS into BFT, we introduce sAVSS, a framework that transforms any VSS scheme into an asynchronous VSS scheme with constant overhead. By incorporating Kate et al.'s scheme into our framework, we obtain an asynchronous VSS that has constant overhead on each replica -- the first of its kind. We show that a key-value store built using BFT replication and sAVSS supports writing secret-shared values with about a 30%-50% throughput overhead and request latencies of less than 35 milliseconds.
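
    For background only, the sketch below shows plain (non-verifiable) Shamir secret sharing over a prime field, the primitive that VSS extends. sAVSS itself wraps a verifiable scheme such as Kate et al.'s, which additionally commits to the sharing polynomial so replicas can check their shares; that machinery is not shown here, and the field and parameters below are arbitrary choices for illustration.

```python
# Background sketch only: plain (non-verifiable) Shamir secret sharing over a
# prime field. Any `threshold` shares reconstruct the secret; fewer reveal
# nothing. Verifiability (as in Kate et al.'s scheme) is not shown.
import random

P = 2**127 - 1   # a convenient Mersenne prime for the toy field

def share(secret, threshold, num_shares, seed=None):
    """Split `secret` into `num_shares` points on a random degree-(threshold-1) polynomial."""
    rng = random.Random(seed)
    coeffs = [secret % P] + [rng.randrange(P) for _ in range(threshold - 1)]

    def evaluate(at):
        return sum(c * pow(at, i, P) for i, c in enumerate(coeffs)) % P

    return [(x, evaluate(x)) for x in range(1, num_shares + 1)]

def reconstruct(shares):
    """Lagrange interpolation at 0 recovers the secret from `threshold` shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if j != i:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = share(secret=123456789, threshold=3, num_shares=7, seed=1)
print(reconstruct(shares[:3]), reconstruct(shares[2:5]))
```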