    Learning with Errors in the Exponent

    We initiate the study of a novel class of group-theoretic intractability problems. Inspired by the theory of learning in the presence of errors [Regev, STOC'05], we ask whether noise in the exponent amplifies intractability. We put forth the notion of Learning with Errors in the Exponent (LWEE) and, rather surprisingly, show that various attractive properties known to hold exclusively for lattices carry over, most notably worst-case hardness and post-quantum resistance. In fact, LWEE's duality is due to its reducibility to two seemingly unrelated assumptions: learning with errors and the representation problem [Brands, Crypto'93] in finite groups. For suitable parameter choices, LWEE superposes properties of each individual intractability problem. The argument holds in both the classical and the quantum model of computation. We give the first construction of a semantically secure public-key encryption system from LWEE in the standard model. The heart of our construction is an "error recovery" technique inspired by [Joye-Libert, Eurocrypt'13] to handle critical propagations of noise terms in the exponent.
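
    For intuition, an LWEE sample hides an ordinary LWE sample "in the exponent" of a group generator. The toy Python sketch below assumes a subgroup of prime order q = 11 inside Z_23^* with generator 4; the parameter sizes, the bounded-noise stand-in for a discrete Gaussian, and all names are illustrative, not the paper's construction.

        # Toy LWEE sample: an LWE relation b = <a, s> + e (mod q) published
        # only through group elements g^a_i and g^b. Parameters are tiny for
        # readability and offer no security whatsoever.
        import random

        p = 23                  # Z_p^* has a subgroup of prime order q = 11
        q = 11
        g = 4                   # 4 generates that order-11 subgroup
        n = 3                   # LWE dimension

        def noise(bound=1):
            # stand-in for a discrete Gaussian: small bounded error
            return random.randint(-bound, bound)

        secret = [random.randrange(q) for _ in range(n)]

        def lwee_sample():
            a = [random.randrange(q) for _ in range(n)]
            b = (sum(ai * si for ai, si in zip(a, secret)) + noise()) % q
            # everything an observer sees lives in the exponent
            return [pow(g, ai, p) for ai in a], pow(g, b, p)

        A, B = lwee_sample()
        print(A, B)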

    Types of Students' Errors in Solving Mathematics Problems on Exponent Rules and Their Scaffolding: A Case Study at SMKN 11 Malang

    This research aims to analyze the errors students make in applying exponent rules and to describe the forms of scaffolding used to overcome those errors. The method was descriptive with a qualitative approach; a test and an interview were used to collect data from students of class X Nursing 1 at SMKN 11 Malang. Data were analyzed by recording the answers, identifying the factors underlying students' thinking, classifying student difficulties, and drawing conclusions. The results showed that the types of student errors in applying exponent rules were basic errors, missing information, and partial insight, while the forms of scaffolding given to students were environmental provisions, reviewing and restructuring, and developing conceptual thinking. The research concluded that the learning process was very important to students' learning success.

    Statistical Mechanics of Soft Margin Classifiers

    We study the typical learning properties of the recently introduced Soft Margin Classifiers (SMCs), learning realizable and unrealizable tasks, with the tools of Statistical Mechanics. We derive analytically the behaviour of the learning curves in the regime of very large training sets. We obtain exponential and power laws for the decay of the generalization error towards its asymptotic value, depending on the task and on general characteristics of the distribution of stabilities of the patterns to be learned. The optimal learning curves of the SMCs, which give the minimal generalization error, are obtained by tuning the coefficient controlling the trade-off between the error and the regularization terms in the cost function. If the task is realizable by the SMC, the optimal performance is better than that of a hard margin Support Vector Machine and is very close to that of a Bayesian classifier. Comment: 26 pages, 12 figures, submitted to Physical Review.
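
    For orientation, the trade-off referred to above is the one in a generic soft-margin cost function. In notation assumed here purely for illustration, with couplings w, patterns ξ^μ carrying labels σ^μ, stabilities Δ^μ, margin κ, trade-off constant C and slack exponent p, a generic form reads:

        E(\mathbf{w}) \;=\; \frac{1}{2}\,\lVert \mathbf{w} \rVert^{2}
          \;+\; C \sum_{\mu=1}^{P} \theta\bigl(\kappa - \Delta^{\mu}\bigr)\,\bigl(\kappa - \Delta^{\mu}\bigr)^{p},
        \qquad
        \Delta^{\mu} \;=\; \frac{\sigma^{\mu}\,\mathbf{w}\cdot\boldsymbol{\xi}^{\mu}}{\sqrt{N}}

    with θ the step function: p = 1 gives the usual hinge loss, and the hard margin SVM is recovered in the C → ∞ limit.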

    An Improved BKW Algorithm for LWE with Applications to Cryptography and Lattices

    In this paper, we study the Learning With Errors (LWE) problem and its binary variant, where secrets and errors are binary or taken in a small interval. We introduce a new variant of the Blum, Kalai and Wasserman (BKW) algorithm, relying on a quantization step that generalizes and fine-tunes modulus switching. In general this new technique yields a significant gain in the constant in front of the exponent in the overall complexity. We illustrate this by solving, within half a day, an LWE instance with dimension $n = 128$, modulus $q = n^2$, Gaussian noise $\alpha = 1/(\sqrt{n/\pi}\log^2 n)$ and binary secret, using $2^{28}$ samples, while the previous best result based on BKW claims a time complexity of $2^{74}$ with $2^{60}$ samples for the same parameters. We then introduce variants of BDD, GapSVP and UniqueSVP, where the target point is required to lie in the fundamental parallelepiped, and show how the previous algorithm is able to solve these variants in subexponential time. Moreover, we also show how the previous algorithm can be used to solve the BinaryLWE problem with $n$ samples in subexponential time $2^{(\ln 2/2 + o(1))n/\log\log n}$. This analysis does not require any heuristic assumption, contrary to other algebraic approaches; instead, it uses a variant of an idea by Lyubashevsky to generate many samples from a small number of samples. This makes it possible to asymptotically and heuristically break the NTRU cryptosystem in subexponential time (without contradicting its security assumption). We are also able to solve subset-sum problems in subexponential time for density $o(1)$, which is of independent interest: for such density, the previous best algorithm requires exponential time. As a direct application, we can solve in subexponential time the parameters of a cryptosystem based on this problem proposed at TCC 2010. Comment: CRYPTO 2015.
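
    For intuition, the classic BKW reduction step pairs samples that collide on a block of coordinates and subtracts them, zeroing that block while roughly doubling the noise; the paper's quantization refines exactly this step. Below is a minimal Python sketch of the classic step (block size, modulus, and noise model are toy assumptions, not the paper's parameters):

        # One BKW reduction step on LWE samples (a, b), b = <a, s> + e (mod q):
        # samples agreeing on the first `block` coordinates are subtracted,
        # zeroing those coordinates at the cost of (roughly) doubled noise.
        import random

        q, n, block = 97, 8, 2
        s = [random.randrange(q) for _ in range(n)]

        def sample():
            a = [random.randrange(q) for _ in range(n)]
            e = random.randint(-1, 1)                    # toy noise
            return a, (sum(x * y for x, y in zip(a, s)) + e) % q

        def reduce_block(samples, block, q):
            buckets, reduced = {}, []
            for a, b in samples:
                key = tuple(a[:block])
                if key in buckets:                       # collision: subtract
                    a2, b2 = buckets[key]
                    reduced.append(([(x - y) % q for x, y in zip(a, a2)],
                                    (b - b2) % q))
                else:
                    buckets[key] = (a, b)
            return reduced

        out = reduce_block([sample() for _ in range(50000)], block, q)
        assert all(a[:block] == [0] * block for a, _ in out)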

    Learning curves for Soft Margin Classifiers

    Typical learning curves for Soft Margin Classifiers (SMCs) learning both realizable and unrealizable tasks are determined using the tools of Statistical Mechanics. We derive the analytical behaviour of the learning curves in the regimes of small and large training sets. The generalization errors present different decay laws towards their asymptotic values as a function of the training set size, depending on general geometrical characteristics of the rule to be learned. Optimal generalization curves are deduced through fine-tuning of the hyperparameter controlling the trade-off between the error and the regularization terms in the cost function. Even if the task is realizable, the optimal performance of the SMC is better than that of a hard margin Support Vector Machine (SVM) learning the same rule, and is very close to that of the Bayesian classifier. Comment: 26 pages, 10 figures.
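
    Such learning curves are also easy to trace empirically. The sketch below, assuming scikit-learn and a realizable linear teacher rule (all choices illustrative), prints the test error of a soft-margin classifier against training set size for several values of the trade-off hyperparameter C:

        # Empirical learning curves of a soft-margin classifier on a
        # realizable rule: generalization error vs. training set size,
        # for several values of the error/regularization trade-off C.
        import numpy as np
        from sklearn.svm import LinearSVC

        rng = np.random.default_rng(0)
        N = 20                                   # input dimension
        teacher = rng.standard_normal(N)         # realizable linear rule

        def data(m):
            X = rng.standard_normal((m, N))
            return X, np.sign(X @ teacher)

        X_test, y_test = data(5000)
        for C in (0.01, 1.0, 100.0):
            errs = [1 - LinearSVC(C=C, max_iter=20000).fit(*data(m))
                        .score(X_test, y_test)
                    for m in (50, 100, 200, 400, 800)]
            print(f"C={C}: test errors {np.round(errs, 3)}")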

    Learning High-Dimensional Markov Forest Distributions: Analysis of Error Rates

    The problem of learning forest-structured discrete graphical models from i.i.d. samples is considered. An algorithm based on pruning of the Chow-Liu tree through adaptive thresholding is proposed. It is shown that this algorithm is both structurally consistent and risk consistent, and that the error probability of structure learning decays faster than any polynomial in the number of samples under fixed model size. For the high-dimensional scenario, where the model size d and the number of edges k scale with the number of samples n, sufficient conditions on (n, d, k) are given for the algorithm to satisfy structural and risk consistencies. In addition, the extremal structures for learning are identified; we prove that the independent (resp. tree) model is the hardest (resp. easiest) to learn using the proposed algorithm, in terms of error rates for structure learning. Comment: Accepted to the Journal of Machine Learning Research (Feb 2011).
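
    The skeleton of such an algorithm, a Chow-Liu maximum-weight spanning tree over empirical pairwise mutual informations followed by pruning of weak edges to leave a forest, can be sketched as follows; the fixed cutoff eps stands in for the paper's adaptive, sample-size-dependent threshold:

        # Chow-Liu forest: maximum-weight spanning forest over empirical
        # pairwise mutual informations, with weak edges pruned by a cutoff.
        import numpy as np
        from itertools import combinations

        def empirical_mi(x, y):
            # mutual information (in nats) of two discrete columns
            joint = np.zeros((x.max() + 1, y.max() + 1))
            np.add.at(joint, (x, y), 1)
            joint /= joint.sum()
            px = joint.sum(axis=1, keepdims=True)
            py = joint.sum(axis=0, keepdims=True)
            nz = joint > 0
            return float((joint[nz] * np.log(joint[nz] / (px @ py)[nz])).sum())

        def chow_liu_forest(samples, eps=0.01):
            d = samples.shape[1]
            edges = sorted(((empirical_mi(samples[:, i], samples[:, j]), i, j)
                            for i, j in combinations(range(d), 2)),
                           reverse=True)
            parent = list(range(d))
            def find(u):
                while parent[u] != u:
                    parent[u] = parent[parent[u]]
                    u = parent[u]
                return u
            forest = []
            for w, i, j in edges:
                if w <= eps:              # pruning: drop all weak edges
                    break
                ri, rj = find(i), find(j)
                if ri != rj:              # Kruskal: no cycles, so a forest
                    parent[ri] = rj
                    forest.append((i, j))
            return forest

        rng = np.random.default_rng(1)
        x0 = rng.integers(0, 2, size=1000)
        x1 = np.where(rng.random(1000) < 0.1, 1 - x0, x0)   # noisy copy of x0
        x2 = rng.integers(0, 2, size=1000)                  # independent
        print(chow_liu_forest(np.column_stack([x0, x1, x2])))
        # expect a single edge between columns 0 and 1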

    Multifractal analysis of perceptron learning with errors

    Random input patterns induce a partition of the coupling space of a perceptron into cells labeled by their output sequences. Learning some data with a maximal error rate leads to clusters of neighboring cells. By analyzing the internal structure of these clusters with the formalism of multifractals, we can handle different storage and generalization tasks for lazy students and absent-minded teachers within one unified approach. The results also allow some conclusions on the spatial distribution of cells. Comment: 11 pages, RevTex, 3 eps figures; version to be published in Phys. Rev. E.
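
    The cell partition being analyzed can be made concrete by Monte Carlo: label each random coupling vector by the output sequence it produces on the stored patterns, and couplings sharing a label lie in the same cell. The toy Python sketch below only counts populated cells; it is a stand-in for, not an instance of, the multifractal formalism:

        # Cells of the coupling space: random points on the N-sphere are
        # labeled by their output sequence on P random patterns; each
        # distinct label is one cell of the induced partition.
        import numpy as np
        from collections import Counter

        rng = np.random.default_rng(0)
        N, P, trials = 10, 6, 20000

        patterns = rng.standard_normal((P, N))
        w = rng.standard_normal((trials, N))
        w /= np.linalg.norm(w, axis=1, keepdims=True)   # couplings on sphere

        labels = map(tuple, np.sign(w @ patterns.T).astype(int))
        cells = Counter(labels)
        print(f"{len(cells)} of {2 ** P} possible cells populated")
        print("largest cell holds", max(cells.values()) / trials, "of the mass")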