On Boosting Sparse Parities
Abstract: While boosting has been extensively studied, considerably less attention has been devoted to designing good weak learning algorithms. In this paper we consider the problem of designing weak learners that are especially well suited to the boosting procedure, and specifically to the AdaBoost algorithm. We first describe conditions desirable in a weak learning algorithm, and then propose sparse parity functions as weak learners, which have many of the desired properties. Our experimental tests show the proposed weak learners to be competitive with the most widely used ones: decision stumps and pruned decision trees.
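The proposal above can be sketched end to end. The following is an illustrative minimal implementation, not the authors' code: the sparsity bound d, round count T, and exhaustive search over parities are all assumptions chosen for readability rather than speed.

```python
# Sketch: AdaBoost with sparse parity weak learners (illustrative only).
from itertools import combinations
import numpy as np

def best_sparse_parity(X, y, w, d=2):
    """Exhaustively pick the parity on at most d bits (and a sign) with
    the lowest weighted error. Returns (subset, sign, error)."""
    n = X.shape[1]
    best = (None, 1, 1.0)
    for k in range(1, d + 1):
        for S in combinations(range(n), k):
            h = 1 - 2 * (X[:, list(S)].sum(axis=1) % 2)  # parity -> {-1,+1}
            for sign in (1, -1):
                err = w[(sign * h) != y].sum()
                if err < best[2]:
                    best = (S, sign, err)
    return best

def adaboost_parities(X, y, T=10, d=2):
    """Standard AdaBoost loop over the sparse-parity weak learner."""
    m = len(y)
    w = np.full(m, 1.0 / m)
    ensemble = []
    for _ in range(T):
        S, sign, err = best_sparse_parity(X, y, w, d)
        err = max(err, 1e-12)                 # clamp to avoid log(0)
        alpha = 0.5 * np.log((1 - err) / err)
        h = sign * (1 - 2 * (X[:, list(S)].sum(axis=1) % 2))
        w *= np.exp(-alpha * y * h)           # reweight toward mistakes
        w /= w.sum()
        ensemble.append((alpha, S, sign))
    return ensemble

def predict(ensemble, X):
    score = sum(a * s * (1 - 2 * (X[:, list(S)].sum(axis=1) % 2))
                for a, S, s in ensemble)
    return np.sign(score)
```

Because each weak hypothesis is itself a parity, a target that is a sparse parity is learned exactly in one round; the interesting case in the paper is boosting these on real datasets.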
BKW Meets Fourier: New Algorithms for LPN with Sparse Parities
We consider the Learning Parity with Noise (LPN) problem with a sparse secret, where the secret vector of dimension n has Hamming weight at most k. We are interested in algorithms with asymptotic improvement in the exponent beyond the state of the art. Prior work in this setting presented algorithms obtaining a constant factor improvement in the exponent over brute force search over all secrets of weight at most k. We obtain the following results:
- We first consider the constant error rate setting, and in this case present a new algorithm that leverages a subroutine from the acclaimed BKW algorithm [Blum, Kalai, Wasserman, J. ACM '03] as well as techniques from Fourier analysis for ε-biased distributions. Our algorithm achieves an asymptotic improvement in the exponent compared to prior work for a suitable range of the sparsity k. The runtime and sample complexity of this algorithm are approximately the same.
- We next consider the low-noise setting, where the error is subconstant. We present a new algorithm in this setting that requires a reduced number of samples and achieves an asymptotic improvement in the exponent compared to prior work for a suitable range of the sparsity k and noise rate. To obtain the improvement in sample complexity, we create subsets of samples using the combinatorial designs of Nisan and Wigderson [J. Comput. Syst. Sci. '94], so that any two subsets have a small intersection while the number of subsets is large. Each of these subsets is used to generate a single ε-biased sample for the Fourier analysis step. We then show that this allows us to bound the covariance of pairs of samples, which is sufficient for the Fourier analysis.
- Finally, we show that our first algorithm extends to the setting where the noise rate is very high, and in this case can be used as a subroutine to obtain new algorithms for learning DNFs and Juntas. Our algorithms achieve an asymptotic improvement in the exponent in certain parameter regimes: for DNFs, a regime depending on the DNF size and the approximation factor; for Juntas, a regime depending on the number of relevant variables and the noise rate.
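For concreteness, the sparse-secret LPN problem and the brute-force baseline the abstract measures against can be sketched as follows. Parameter names (n, k, tau, m) and the majority-agreement scoring are illustrative assumptions, not taken from the paper.

```python
# Sketch: sparse-secret LPN samples and the brute-force baseline that
# enumerates every candidate secret of Hamming weight <= k.
from itertools import combinations
import numpy as np

rng = np.random.default_rng(0)

def lpn_samples(secret, m, tau):
    """m samples (a, <a, s> + e mod 2) with Bernoulli(tau) noise."""
    n = len(secret)
    A = rng.integers(0, 2, size=(m, n))
    e = rng.random(m) < tau
    b = (A @ secret + e) % 2
    return A, b

def brute_force_sparse(A, b, k):
    """Score every weight-<=k secret by agreement with the noisy samples
    and return the best one -- the baseline the paper improves on."""
    m, n = A.shape
    best, best_agree = None, -1
    for w in range(k + 1):
        for S in combinations(range(n), w):
            s = np.zeros(n, dtype=int)
            s[list(S)] = 1
            agree = int(((A @ s) % 2 == b).sum())
            if agree > best_agree:
                best, best_agree = s, agree
    return best
```

With constant noise, the true secret agrees with roughly a (1 - tau) fraction of samples, while wrong candidates agree with about half, which is why majority scoring recovers it given enough samples.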
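The BKW subroutine the first result leverages can be illustrated with the textbook reduction step: bucket samples that agree on a block of coordinates and XOR colliding pairs, which zeroes that block while the noise compounds (bias ε becomes ε² per level, by the piling-up lemma). This is a hedged sketch of that standard step, not the paper's algorithm.

```python
# Sketch: one BKW-style reduction level (textbook description).
from collections import defaultdict
import numpy as np

def bkw_reduce_block(samples, block):
    """samples: list of (a, b) with a a 0/1 numpy vector and b an int bit.
    XOR pairs that agree on the coordinates in `block`, producing samples
    whose a-part is zero on that block."""
    buckets = defaultdict(list)
    for a, b in samples:
        buckets[tuple(a[block])].append((a, b))
    reduced = []
    for group in buckets.values():
        pivot_a, pivot_b = group[0]        # XOR everything against a pivot
        for a, b in group[1:]:
            reduced.append((a ^ pivot_a, b ^ pivot_b))
    return reduced
```

Since XOR-ing two samples adds their a-vectors and labels mod 2, each reduced pair is again a valid LPN sample for the same secret, just noisier and supported on fewer coordinates.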
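The combinatorial designs of Nisan and Wigderson used in the second result can be illustrated with the standard polynomial construction over GF(p): each set is the graph of a low-degree polynomial, and two distinct polynomials of degree below d agree on fewer than d points. The parameters below are chosen for demonstration and are not the paper's.

```python
# Sketch: a Nisan-Wigderson design via polynomial graphs over GF(p).
from itertools import product, combinations

def nw_design(p, d):
    """Return p**d subsets of the universe GF(p) x GF(p), each of size p,
    with pairwise intersections of size at most d - 1."""
    universe = {(x, y): i
                for i, (x, y) in enumerate(product(range(p), repeat=2))}
    sets = []
    for coeffs in product(range(p), repeat=d):     # one polynomial per set
        f = lambda x, c=coeffs: sum(ci * x**i for i, ci in enumerate(c)) % p
        sets.append(frozenset(universe[(x, f(x))] for x in range(p)))
    return sets
```

This gives many sets (p**d of them) over a small universe (p² points) with small pairwise overlap, which is exactly the property the abstract uses to make the generated ε-biased samples nearly uncorrelated.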
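As a point of reference for the Junta result, the exhaustive baseline such algorithms improve on can be sketched as well: try every set of k candidate relevant variables and fit a truth table on each restriction. The majority-vote fitting and all names here are illustrative assumptions.

```python
# Sketch: brute-force junta learning over all (n choose k) variable sets.
from itertools import combinations
import numpy as np

def learn_junta(X, y, k):
    """Return (vars, table, err) minimizing empirical error over all
    choices of k relevant variables, fitting each truth-table entry by
    majority vote."""
    n = X.shape[1]
    best = (None, None, len(y) + 1)
    for S in combinations(range(n), k):
        keys = X[:, list(S)] @ (1 << np.arange(k))   # index each restriction
        table = np.zeros(1 << k, dtype=int)
        for v in range(1 << k):
            labels = y[keys == v]
            table[v] = int(labels.sum() * 2 >= len(labels)) if len(labels) else 0
        err = int((table[keys] != y).sum())
        if err < best[2]:
            best = (S, table, err)
    return best
```

The loop costs about n**k table fits, which is the exponent the new algorithms chip away at in the stated regimes.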
On Solving LPN Using BKW and Variants: Implementation and Analysis
The Learning Parity with Noise (LPN) problem is appealing in cryptography as it is considered to remain hard in the post-quantum world. It is also a good candidate for lightweight devices due to its simplicity. In this paper we provide a comprehensive analysis of the existing LPN solving algorithms, both for the general case and for the sparse-secret scenario. In practice, LPN-based cryptographic constructions use as a reference the security parameters proposed by Levieil and Fouque, but for these parameters there remains a gap between the theoretical analysis and the practical complexities of the algorithms we consider. The new theoretical analysis in this paper provides tighter bounds on the complexity of LPN solving algorithms and narrows this gap between theory and practice. We show that for a sparse secret there is another algorithm that outperforms BKW and its variants. Based on these results, we further propose practical parameters for different security levels.
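One concrete ingredient of the Levieil-Fouque style of LPN solver referenced above is the fast Walsh-Hadamard transform used to score all candidate sub-secrets at once after BKW-style reduction. The sketch below assumes that standard setup and is not code from the paper: after reduction leaves a-vectors supported on a small block of w bits, one transform of size 2**w finds the candidate maximizing agreement.

```python
# Sketch: fast Walsh-Hadamard recovery of a w-bit secret block (the
# LF1-style final step, illustrative parameters).
import numpy as np

def fwht(v):
    """In-place fast Walsh-Hadamard transform of a length-2**w array."""
    h = 1
    while h < len(v):
        for i in range(0, len(v), 2 * h):
            for j in range(i, i + h):
                v[j], v[j + h] = v[j] + v[j + h], v[j] - v[j + h]
        h *= 2
    return v

def recover_block(A, b, w):
    """A: m x w 0/1 matrix, b: noisy inner products with the secret block.
    Accumulate (-1)^b votes per a-vector; the WHT entry at the true
    secret sums (-1)^noise and so dominates in absolute value."""
    counts = np.zeros(1 << w)
    idx = A @ (1 << np.arange(w))          # bit-pack each a-vector
    np.add.at(counts, idx, 1 - 2 * b)      # (-1)^b votes
    spectrum = fwht(counts)
    return int(np.argmax(np.abs(spectrum)))
```

The transform costs about w * 2**w operations, versus 2**(2w) for checking each candidate against each distinct a-vector separately, which is why the LF variants prefer it.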