    On Boosting Sparse Parities

    While boosting has been extensively studied, considerably less attention has been devoted to designing good weak learning algorithms. In this paper we consider the problem of designing weak learners that are especially well suited to the boosting procedure, and specifically to the AdaBoost algorithm. We first describe the properties desirable in a weak learning algorithm. We then propose using sparse parity functions, which have many of these properties, as weak learners in boosting. Our experiments show the proposed weak learners to be competitive with the most widely used ones: decision stumps and pruned decision trees.
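
    The pairing described in this abstract can be made concrete. The sketch below is our own illustration, not the paper's implementation: it runs plain AdaBoost whose weak hypotheses are signed parities over at most K features, found by exhaustive search (the parameter names K and T are ours).

        import itertools
        import numpy as np

        def best_sparse_parity(X, y, w, K):
            """Return (subset, sign, error) for the weighted-error-minimizing
            parity over at most K of the 0/1 features in X; labels y are +/-1."""
            n = X.shape[1]
            best = (None, 1, 1.0)
            for k in range(1, K + 1):
                for S in itertools.combinations(range(n), k):
                    pred = 1 - 2 * (X[:, list(S)].sum(axis=1) % 2)  # parity as +/-1
                    for sign in (1, -1):
                        err = w[sign * pred != y].sum()
                        if err < best[2]:
                            best = (S, sign, err)
            return best

        def adaboost_sparse_parities(X, y, K=2, T=20):
            """Plain AdaBoost whose weak hypotheses are sign * parity(x_S)."""
            m = len(y)
            w = np.full(m, 1.0 / m)
            ensemble = []
            for _ in range(T):
                S, sign, err = best_sparse_parity(X, y, w, K)
                err = min(max(err, 1e-12), 1 - 1e-12)
                alpha = 0.5 * np.log((1 - err) / err)  # AdaBoost hypothesis weight
                pred = sign * (1 - 2 * (X[:, list(S)].sum(axis=1) % 2))
                w = w * np.exp(-alpha * y * pred)      # upweight misclassified examples
                w /= w.sum()
                ensemble.append((alpha, S, sign))
            return ensemble

    The exhaustive search makes the weak learner's cost roughly n^K per round, which is why small sparsity K matters; a decision stump is the special case K = 1 with a threshold instead of a parity.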

    BKW Meets Fourier: New Algorithms for LPN with Sparse Parities

    We consider the Learning Parity with Noise (LPN) problem with a sparse secret, where the secret vector $\mathbf{s}$ of dimension $n$ has Hamming weight at most $k$. We are interested in algorithms with asymptotic improvement in the exponent beyond the state of the art. Prior work in this setting presented algorithms with runtime $n^{c \cdot k}$ for constant $c < 1$, obtaining a constant-factor improvement over brute-force search, which runs in time ${n \choose k}$. We obtain the following results:
    - We first consider the constant error rate setting, and in this case present a new algorithm that leverages a subroutine from the acclaimed BKW algorithm [Blum, Kalai, Wasserman, J. ACM '03] as well as techniques from Fourier analysis for $p$-biased distributions. Our algorithm achieves asymptotic improvement in the exponent compared to prior work when the sparsity $k = k(n) = \frac{n}{\log^{1+1/c}(n)}$, where $c \in o(\log\log(n))$ and $c \in \omega(1)$. The runtime and sample complexity of this algorithm are approximately the same.
    - We next consider the low noise setting, where the error is subconstant. We present a new algorithm in this setting that requires only a polynomial number of samples and achieves asymptotic improvement in the exponent compared to prior work when the sparsity $k = \frac{1}{\eta} \cdot \frac{\log(n)}{\log(f(n))}$, the noise rate $\eta \neq 1/2$ satisfies $\eta^2 = \frac{\log(n)}{n} \cdot f(n)$, and $f(n) \in \omega(1) \cap n^{o(1)}$. To obtain the improvement in sample complexity, we create subsets of samples using the design of Nisan and Wigderson [J. Comput. Syst. Sci. '94], so that any two subsets have a small intersection while the number of subsets is large. Each of these subsets is used to generate a single $p$-biased sample for the Fourier analysis step. We then show that this allows us to bound the covariance of pairs of samples, which is sufficient for the Fourier analysis.
    - Finally, we show that our first algorithm extends to the setting where the noise rate is very high, $1/2 - o(1)$, and in this case can be used as a subroutine to obtain new algorithms for learning DNFs and Juntas. Our algorithms achieve asymptotic improvement in the exponent for certain regimes. For DNFs of size $s$ with approximation factor $\epsilon$, this regime is when $\log\frac{s}{\epsilon} \in \omega\left(\frac{c}{\log n \log\log c}\right)$ and $\log\frac{s}{\epsilon} \in n^{1-o(1)}$, for $c \in n^{1-o(1)}$. For $k$-Juntas, the regime is when $k \in \omega\left(\frac{c}{\log n \log\log c}\right)$ and $k \in n^{1-o(1)}$, for $c \in n^{1-o(1)}$.
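
    To ground the problem statement and the ${n \choose k}$ brute-force baseline the abstract improves on, here is a minimal sketch (our own illustration with made-up parameters, not the paper's algorithm): it generates sparse-secret LPN samples and recovers the secret by testing every support of size at most $k$.

        import itertools
        import numpy as np

        rng = np.random.default_rng(1)

        def lpn_samples(n, m, k, eta):
            """m noisy LPN samples (a_i, <a_i, s> + e_i mod 2) with |s| = k."""
            s = np.zeros(n, dtype=int)
            s[rng.choice(n, size=k, replace=False)] = 1
            A = rng.integers(0, 2, size=(m, n))
            e = (rng.random(m) < eta).astype(int)   # Bernoulli(eta) noise bits
            return A, (A @ s + e) % 2, s

        def brute_force_sparse(A, b, k):
            """Try every support of size <= k (about (n choose k) candidates)
            and keep the one agreeing with the most samples."""
            m, n = A.shape
            best, best_agree = None, -1
            for w in range(1, k + 1):
                for S in itertools.combinations(range(n), w):
                    cand = np.zeros(n, dtype=int)
                    cand[list(S)] = 1
                    agree = int(((A @ cand) % 2 == b).sum())
                    if agree > best_agree:
                        best, best_agree = cand, agree
            return best

        A, b, s = lpn_samples(n=30, m=400, k=3, eta=0.1)
        print((brute_force_sparse(A, b, 3) == s).all())   # recovers s w.h.p.

    The true secret agrees with roughly a $(1 - \eta)$ fraction of the samples while a wrong candidate agrees with about half, so with enough samples the maximizer is the secret; the paper's contribution is beating the $n^{c \cdot k}$ cost of this enumeration in the stated regimes.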

    On solving LPN using BKW and variants: Implementation and Analysis

    The Learning Parity with Noise (LPN) problem is appealing in cryptography as it is considered to remain hard in the post-quantum world. It is also a good candidate for lightweight devices due to its simplicity. In this paper we provide a comprehensive analysis of the existing LPN solving algorithms, both for the general case and for the sparse-secret scenario. In practice, LPN-based cryptographic constructions use as a reference the security parameters proposed by Levieil and Fouque, but for these parameters there remains a gap between the theoretical analysis and the practical complexities of the algorithms we consider. The new theoretical analysis in this paper provides tighter bounds on the complexity of LPN solving algorithms and narrows this gap between theory and practice. We show that for a sparse secret there is another algorithm that outperforms BKW and its variants. Based on our results, we further propose practical parameters for different security levels.
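
    For readers unfamiliar with the algorithm under analysis, the sketch below shows the core BKW reduction step in Python. It is a hedged illustration of the textbook idea, not the implementation analyzed in the paper, and the block boundaries lo/hi are parameters we introduce for exposition.

        import numpy as np

        def bkw_reduce_block(A, b, lo, hi):
            """One BKW reduction pass: bucket samples by the bits in columns
            [lo, hi) and XOR colliding pairs, so the combined samples are zero
            on that block while their noise bits are XORed (noise roughly
            doubles and the sample pool shrinks)."""
            buckets, out_A, out_b = {}, [], []
            for row, bit in zip(A, b):
                key = row[lo:hi].tobytes()
                if key in buckets:
                    row2, bit2 = buckets.pop(key)
                    out_A.append(row ^ row2)   # columns [lo, hi) cancel
                    out_b.append(bit ^ bit2)
                else:
                    buckets[key] = (row, bit)
            return np.array(out_A, dtype=A.dtype), np.array(out_b)

    Applying this pass over successive blocks leaves samples supported on one small final block, where the remaining secret bits can be recovered by majority vote; the per-pass sample loss and noise doubling are exactly the quantities whose tighter accounting drives the complexity bounds discussed above.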