298 research outputs found

    LIPIcs, Volume 251, ITCS 2023, Complete Volume


    Public-Key Encryption, Local Pseudorandom Generators, and the Low-Degree Method

    The low-degree method postulates that no efficient algorithm outperforms low-degree polynomials in certain hypothesis-testing tasks. It has been used to understand computational indistinguishability in high-dimensional statistics. We explore the use of the low-degree method in the context of cryptography. To this end, we apply it in the design and analysis of a new public-key encryption scheme whose security is based on Goldreich's pseudorandom generator. The scheme is a combination of two proposals of Applebaum, Barak, and Wigderson, and inherits desirable features from both.
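
    For readers unfamiliar with the primitive, Goldreich's generator computes each output bit by applying a fixed small-arity predicate to a public, randomly chosen subset of seed bits, which is what makes it "local". A minimal Python sketch of that structure, assuming the well-known XOR-AND predicate; all parameters here are illustrative choices, not details from the paper:

        import random

        def goldreich_prg(seed_bits, num_outputs, locality=5, rng=None):
            """Local PRG sketch: every output bit applies one fixed
            small-arity predicate to a public random subset of the seed."""
            rng = rng or random.Random(42)  # public coins fixing the index sets
            n = len(seed_bits)
            # XOR-AND predicate, a common choice in the local-PRG literature:
            # P(x1,...,x5) = x1 XOR x2 XOR x3 XOR (x4 AND x5)
            predicate = lambda x: x[0] ^ x[1] ^ x[2] ^ (x[3] & x[4])
            output = []
            for _ in range(num_outputs):
                subset = rng.sample(range(n), locality)  # public index set
                output.append(predicate([seed_bits[i] for i in subset]))
            return output

        seed = [random.randint(0, 1) for _ in range(256)]
        stream = goldreich_prg(seed, num_outputs=1024)  # stretch 256 -> 1024 bits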

    Provable Advantage of Curriculum Learning on Parity Targets with Mixed Inputs

    Experimental results have shown that curriculum learning, i.e., presenting simpler examples before more complex ones, can improve the efficiency of learning. Some recent theoretical results also showed that changing the sampling distribution can help neural networks learn parities, with formal results only for large learning rates and one-step arguments. Here we show a separation result in the number of training steps with standard (bounded) learning rates on a common sample distribution: if the data distribution is a mixture of sparse and dense inputs, there exists a regime in which a 2-layer ReLU neural network trained by a curriculum noisy-GD (or SGD) algorithm that uses sparse examples first can learn parities of sufficiently large degree, while any fully connected neural network of possibly larger width or depth trained by noisy-GD on the unordered samples cannot learn without additional steps. We also provide experimental results supporting the qualitative separation beyond the specific regime of the theoretical results. Comment: 34 pages, 8 figures.
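
    To make the setup concrete, here is a hedged PyTorch sketch of curriculum training on a parity target over mixed sparse/dense inputs; the dimensions, mixture parameters, architecture, and noise scale are assumptions for illustration, not the regime the theorem covers:

        import torch
        import torch.nn as nn

        torch.manual_seed(0)
        d, k = 50, 5                        # input dim, parity degree (illustrative)
        S = torch.arange(k)                 # parity support: first k coordinates

        def sample(n, rho):
            """±1 inputs; each coordinate is -1 with probability rho
            (small rho = "sparse" inputs, rho = 0.5 = dense/uniform)."""
            x = torch.where(torch.rand(n, d) < rho,
                            torch.tensor(-1.0), torch.tensor(1.0))
            y = x[:, S].prod(dim=1)         # parity label over the support S
            return x, y

        xs, ys = sample(2000, rho=0.1)      # sparse component of the mixture
        xd, yd = sample(2000, rho=0.5)      # dense component of the mixture

        net = nn.Sequential(nn.Linear(d, 256), nn.ReLU(), nn.Linear(256, 1))
        opt = torch.optim.SGD(net.parameters(), lr=0.01)
        loss_fn = nn.MSELoss()

        def train(batches):
            for x, y in batches:
                opt.zero_grad()
                loss_fn(net(x).squeeze(-1), y).backward()
                for p in net.parameters():  # noisy-GD: perturb each gradient step
                    p.grad += 1e-3 * torch.randn_like(p.grad)
                opt.step()

        # Curriculum: all sparse batches first, then dense; the baseline
        # would train on a random shuffle of the same 4000 examples.
        batches = [(xs[i:i+100], ys[i:i+100]) for i in range(0, 2000, 100)] \
                + [(xd[i:i+100], yd[i:i+100]) for i in range(0, 2000, 100)]
        train(batches)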

    Electron Thermal Runaway in Atmospheric Electrified Gases: a microscopic approach

    Thesis carried out from 2018 to 2023 at the Instituto de Astrofísica de Andalucía under the supervision of Alejandro Luque (Granada, Spain) and Nikolai Lehtinen (Bergen, Norway). This thesis presents a new database of atmospheric electron-molecule collision cross sections, which was published separately under its own DOI. With this new database and a new super-electron management algorithm that significantly enhances high-energy electron statistics at previously unresolved ratios, the thesis explores general facets of the electron thermal runaway process relevant to atmospheric discharges under various conditions of temperature and gas composition, as encountered in the wake and formation of discharge channels.
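
    The abstract does not spell out the super-electron algorithm; a generic Monte Carlo variance-reduction pattern that such schemes build on is particle splitting with Russian roulette, sketched below in Python (the energy threshold, split factor, and population cap are invented for illustration):

        import random

        def manage_super_electrons(particles, e_split=1e5, max_pop=10_000):
            """Split/roulette step for weighted Monte Carlo particles.

            Each particle is (energy_eV, weight). High-energy particles are
            split into lower-weight copies so the rare runaway tail is well
            sampled; if the population then exceeds the cap, particles play
            Russian roulette with reweighting, keeping the estimator unbiased.
            """
            out = []
            for energy, weight in particles:
                if energy > e_split and weight > 1.0:
                    n_split = 4               # illustrative split factor
                    out.extend((energy, weight / n_split) for _ in range(n_split))
                else:
                    out.append((energy, weight))
            if len(out) > max_pop:
                p_survive = max_pop / len(out)
                out = [(e, w / p_survive) for e, w in out
                       if random.random() < p_survive]
            return out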

    Decoding algorithms for surface codes

    Quantum technologies have the potential to solve computationally hard problems that are intractable via classical means. Unfortunately, the unstable nature of quantum information makes it prone to errors. For this reason, quantum error correction is an invaluable tool to make quantum information reliable and enable the ultimate goal of fault-tolerant quantum computing. Surface codes currently stand as the most promising candidates to build error-corrected qubits, given their two-dimensional architecture, a requirement of only local operations, and high tolerance to quantum noise. Decoding algorithms are an integral component of any error correction scheme, as they are tasked with producing accurate estimates of the errors that affect quantum information, so that it can subsequently be corrected. A critical aspect of decoding algorithms is their speed, since the quantum state will suffer additional errors with the passage of time. This poses a conundrum-like tradeoff, where decoding performance is improved at the expense of complexity and vice versa. In this review, a thorough discussion of state-of-the-art surface code decoding algorithms is provided. The core operation of these methods is described along with existing variants that show promise for improved results. In addition, both the decoding performance, in terms of error correction capability, and the decoding complexity are compared. A review of the existing software tools regarding surface code decoding is also provided. Comment: 54 pages, 31 figures.
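
    Full surface-code decoders such as minimum-weight perfect matching are too involved to sketch here, but their one-dimensional analogue on a repetition code fits in a few lines and conveys the core idea: map the syndrome to defects and return the minimum-weight error consistent with them. A Python toy, not any specific decoder from the review:

        def decode_repetition(syndrome):
            """Exact minimum-weight decoder for the (classical) repetition code.

            syndrome[i] is the parity of data bits i and i+1 (1 marks a
            "defect"). Any consistent correction is fixed by its first bit,
            so of the two candidates we return the one flipping fewer
            qubits - the 1D analogue of minimum-weight matching.
            """
            bits = [0]
            for s in syndrome:
                bits.append(bits[-1] ^ s)      # defects toggle the error
            flipped = [1 - b for b in bits]    # the complementary candidate
            return bits if sum(bits) <= sum(flipped) else flipped

        # Error on qubit 2 of 5 produces defects at checks (1,2) and (2,3):
        print(decode_repetition([0, 1, 1, 0]))  # -> [0, 0, 1, 0, 0]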

    Unifying (Quantum) Statistical and Parametrized (Quantum) Algorithms

    Kearns' statistical query (SQ) oracle (STOC'93) lends a unifying perspective for most classical machine learning algorithms. This ceases to be true in quantum learning, where many settings admit neither an SQ analog nor a quantum statistical query (QSQ) analog. In this work, we take inspiration from Kearns' SQ oracle and Valiant's weak evaluation oracle (TOCT'14) and establish a unified perspective bridging the statistical and parametrized learning paradigms in a novel way. We explore the problem of learning from an evaluation oracle, which provides an estimate of function values, and introduce an extensive yet intuitive framework that yields unconditional lower bounds for learning from evaluation queries and characterizes the query complexity of learning linear function classes. The framework is directly applicable to the QSQ setting and to virtually all algorithms based on loss function optimization. Our first application is to extend prior results on the learnability of output distributions of quantum circuits and Clifford unitaries from the SQ to the (multi-copy) QSQ setting, implying exponential separations between learning stabilizer states from (multi-copy) QSQs versus from quantum samples. Our second application is to analyze some popular quantum machine learning (QML) settings. We gain an intuitive picture of the hardness of many QML tasks, which goes beyond existing methods such as barren plateaus and the statistical dimension, and carries crucial setting-dependent implications. Our framework not only expresses the perspectives of cost concentration and the statistical dimension in a unified language but also exposes their connectedness and similarity. Comment: 97 pages.
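
    For background, the two oracle notions being bridged can themselves be sketched in a few lines of Python; the uniform perturbation below is a simplification (a real SQ adversary may answer with any value within the tolerance):

        import random

        def sq_oracle(phi, sample, tau, rng=random):
            """Kearns-style statistical query: the expectation of phi over
            the data, answered only up to an adversarial tolerance tau."""
            mean = sum(phi(x, y) for x, y in sample) / len(sample)
            return mean + rng.uniform(-tau, tau)

        def eval_oracle(f, x, tau, rng=random):
            """Valiant-style weak evaluation oracle: an estimate of the
            function value f(x), again accurate only up to tau."""
            return f(x) + rng.uniform(-tau, tau)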

    LIPIcs, Volume 261, ICALP 2023, Complete Volume


    Efficient Active Learning Halfspaces with Tsybakov Noise: A Non-convex Optimization Approach

    We study the problem of computationally and label-efficient PAC active learning of $d$-dimensional halfspaces with Tsybakov noise (Tsybakov, 2004) under structured unlabeled data distributions. Inspired by Diakonikolas et al. (2020), we prove that any approximate first-order stationary point of a smooth nonconvex loss function yields a halfspace with a low excess error guarantee. In light of the above structural result, we design a nonconvex optimization-based algorithm with a label complexity of $\tilde{O}\big(d\,(\frac{1}{\epsilon})^{\frac{8-6\alpha}{3\alpha-1}}\big)$, where $\tilde{O}(\cdot)$ and $\tilde{\Theta}(\cdot)$ hide factors of the form $\mathrm{polylog}(d, \frac{1}{\epsilon}, \frac{1}{\delta})$, under the assumption that the Tsybakov noise parameter $\alpha \in (\frac{1}{3}, 1]$. This narrows the gap between the label complexities of the previously known efficient passive or active algorithms (Diakonikolas et al., 2020; Zhang et al., 2021) and the information-theoretic lower bound in this setting. Comment: 29 pages.
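
    A hedged numpy sketch of the recipe the abstract describes: run gradient descent on a smooth nonconvex surrogate loss, spend label queries only on points near the current boundary, and output the (approximately) stationary halfspace. The surrogate loss, noise model, and every constant below are illustrative, not the paper's:

        import numpy as np

        rng = np.random.default_rng(0)
        d = 20
        w_star = np.eye(d)[0]            # hidden target halfspace (illustrative)

        def label(x):
            """Noisy halfspace label whose flip probability shrinks with the
            margin, loosely mimicking Tsybakov-type noise (illustrative)."""
            y = np.sign(x @ w_star)
            flip = 0.4 * np.exp(-5 * abs(x @ w_star))
            return -y if rng.random() < flip else y

        def surrogate_grad(w, x, y, sigma=0.2):
            """Gradient of the smooth nonconvex loss S(-y<w,x>/sigma),
            with S the logistic sigmoid."""
            s = 1.0 / (1.0 + np.exp(y * (x @ w) / sigma))
            return (-y / sigma) * s * (1.0 - s) * x

        w = rng.standard_normal(d)
        w /= np.linalg.norm(w)
        for _ in range(5000):
            x = rng.standard_normal(d)
            x /= np.linalg.norm(x)
            if abs(x @ w) > 0.3:         # active learning: only query labels
                continue                 # for points near the current boundary
            w -= 0.05 * surrogate_grad(w, x, label(x))
            w /= np.linalg.norm(w)       # project back onto the unit sphere

        print("angle to target:", np.arccos(np.clip(w @ w_star, -1.0, 1.0)))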

    On Hardness of Testing Equivalence to Sparse Polynomials Under Shifts
