    Trace Reconstruction: Generalized and Parameterized

    In the beautifully simple-to-state problem of trace reconstruction, the goal is to reconstruct an unknown binary string x given random "traces" of x, where each trace is generated by deleting each coordinate of x independently with probability p < 1. The problem is well studied both when the unknown string is arbitrary and when it is chosen uniformly at random. In both settings there is still an exponential gap between the upper and lower sample complexity bounds, and our understanding of the problem remains surprisingly limited. In this paper, we consider natural parameterizations and generalizations of this problem in an effort to attain a deeper and more comprehensive understanding. Perhaps our most surprising results are: 1) We prove that exp(O(n^(1/4) sqrt(log n))) traces suffice for reconstructing arbitrary matrices. In the matrix version of the problem, each row and column of an unknown sqrt(n) x sqrt(n) matrix is deleted independently with probability p. This contrasts with sequence reconstruction, where the best known upper bound is exp(O(n^(1/3))). 2) An optimal result for random matrix reconstruction: we show that Theta(log n) traces are necessary and sufficient. This contrasts with the problem for random sequences, where there is a super-logarithmic lower bound and the best known upper bound is exp(O(log^(1/3) n)). 3) We show that exp(O(k^(1/3) log^(2/3) n)) traces suffice to reconstruct k-sparse strings, an improvement over the best known sequence reconstruction results when k = o(n / log^2 n). 4) We show that poly(n) traces suffice if x is k-sparse and we additionally have a "separation" promise, namely that the indices of the 1s in x all differ by Omega(k log n).
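
    To make the observation model concrete, here is a minimal sketch (our own illustration, not the paper's code) of how traces are generated in both the sequence and matrix versions of the problem; all function names are ours.

```python
import random

def trace(x, p):
    """One trace of binary string x: delete each bit independently with probability p."""
    return [b for b in x if random.random() > p]

def matrix_trace(M, p):
    """Matrix variant from the abstract: delete each row and each column
    of M independently with probability p."""
    rows = [i for i in range(len(M)) if random.random() > p]
    cols = [j for j in range(len(M[0])) if random.random() > p]
    return [[M[i][j] for j in cols] for i in rows]

# A reconstruction algorithm observes many such traces of the same unknown x
# and tries to recover x exactly with high probability.
x = [1, 0, 1, 1, 0, 0, 1, 0]
traces = [trace(x, p=0.3) for _ in range(5)]
print(traces)
```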

    PAC learning with generalized samples and an application to stochastic geometry

    Includes bibliographical references (p. 16-17). Caption title. Research supported by the National Science Foundation (ECS-8552419), the U.S. Army Research Office (DAAL01-86-K-0171), and the Dept. of the Navy under an Air Force contract (F19628-90-C-0002). S.R. Kulkarni ... [et al.]

    A Formal Framework for Speedup Learning from Problems and Solutions

    Speedup learning seeks to improve the computational efficiency of problem solving with experience. In this paper, we develop a formal framework for learning efficient problem solving from random problems and their solutions. We apply this framework to two different representations of learned knowledge, namely control rules and macro-operators, and prove theorems that identify sufficient conditions for learning in each representation. Our proofs are constructive, in that they are accompanied by learning algorithms. Our framework captures both empirical and explanation-based speedup learning in a unified fashion. We illustrate our framework with implementations in two domains: symbolic integration and the Eight Puzzle. This work integrates many strands of experimental and theoretical work in machine learning, including empirical learning of control rules, macro-operator learning, Explanation-Based Learning (EBL), and Probably Approximately Correct (PAC) learning.
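
    The paper's framework is formal rather than code. Purely as a toy illustration of the macro-operator idea, the sketch below (our own; the frequency heuristic and all names are assumptions, not the authors' algorithm) extracts recurring operator subsequences from observed solution paths as candidate macros.

```python
from collections import Counter

def learn_macros(solutions, max_len=3, min_count=2):
    """Count operator subsequences across solution paths; any subsequence
    of length >= 2 that recurs at least min_count times becomes a macro."""
    counts = Counter()
    for ops in solutions:
        for i in range(len(ops)):
            for j in range(i + 2, min(i + max_len, len(ops)) + 1):
                counts[tuple(ops[i:j])] += 1
    return [macro for macro, c in counts.items() if c >= min_count]

# Toy Eight Puzzle-style move sequences (hypothetical training data):
solutions = [["up", "left", "down", "right"],
             ["up", "left", "down", "down"],
             ["right", "up", "left", "down"]]
print(learn_macros(solutions))  # e.g. ('up', 'left'), ('up', 'left', 'down'), ...
```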

    Robustifying Learnability

    In recent years, the learnability of rational expectations equilibria (REE) and determinacy of economic structures have rightfully joined the usual performance criteria among the sought-after goals of policy design. And while some contributions to the literature (for example Bullard and Mitra (2001) and Evans and Honkapohja (2002)) have made significant headway in establishing certain features of monetary policy rules that facilitate learning, a comprehensive treatment of policy design for learnability has yet to surface, especially for cases in which agents have potentially misspecified their learning models. This paper provides such a treatment. We argue that since even among professional economists a generally acceptable workhorse model of the economy has not been agreed upon, it is unreasonable to expect private agents to have collective rational expectations. We assume instead that agents have an approximate understanding of the workings of the economy and that their task of learning the true reduced forms of the economy is subject to potentially destabilizing errors. We then ask: can a central bank set policy that accounts for learning errors but also succeeds in bounding them in a way that allows eventual learnability of the model, given policy? For different parameterizations of a given policy rule applied to a New Keynesian model, we use structured singular value analysis (from robust control) to find the largest ranges of misspecification that can be tolerated in a learning model without compromising convergence to an REE. A parallel set of experiments seeks to determine the optimal stance (strong inflation stabilization as opposed to strong output stabilization) that allows the greatest scope for errors in learning without leading to expectational instability when the central bank designs both optimal and robust policy rules with commitment. We compare the features of all the rules contemplated in the paper with those that maximize economic performance in the true model, and we measure the performance cost of maximizing learnability under the various conditions mentioned here. Keywords: monetary policy, learning, E-stability, model uncertainty, robustness
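
    The E-stability criterion this literature builds on has a simple computational core. A minimal sketch follows (our own illustration, not the authors' code), assuming one has already derived the Jacobian of the T-map at the REE: the equilibrium is E-stable when all eigenvalues of that Jacobian have real parts less than one (Evans and Honkapohja).

```python
import numpy as np

def is_e_stable(T_jacobian):
    """E-stability condition: all eigenvalues of the Jacobian of the T-map
    at the REE must have real part < 1, so that the differential equation
    d(theta)/d(tau) = T(theta) - theta is locally stable at the fixed point."""
    eigs = np.linalg.eigvals(T_jacobian)
    return bool(np.all(eigs.real < 1.0))

# Hypothetical 2x2 T-map Jacobian at the fixed point (illustrative numbers):
DT = np.array([[0.6, 0.2],
               [0.1, 0.8]])
print(is_e_stable(DT))  # True: both eigenvalues (~0.87, ~0.53) have real part < 1
```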

    Robustifying learnability

    In recent years, the learnability of rational expectations equilibria (REE) and determinacy of economic structures have rightfully joined the usual performance criteria among the sought-after goals of policy design. Some contributions to the literature, including Bullard and Mitra (2001) and Evans and Honkapohja (2002), have made significant headway in establishing certain features of monetary policy rules that facilitate learning. However, a treatment of policy design for learnability in worlds where agents have potentially misspecified their learning models has yet to surface. This paper provides such a treatment. We begin with the notion that because the profession has yet to settle on a consensus model of the economy, it is unreasonable to expect private agents to have collective rational expectations. We assume that agents have only an approximate understanding of the workings of the economy and that their learning of the economy's reduced forms is subject to potentially destabilizing perturbations. The issue is then whether a central bank can design policy to account for perturbations and still assure the learnability of the model. Our test case is the standard New Keynesian business cycle model. For different parameterizations of a given policy rule, we use structured singular value analysis (from robust control theory) to find the largest ranges of misspecification that can be tolerated in a learning model without compromising convergence to an REE. In addition, we study the cost, in terms of steady-state performance, of a central bank that acts to robustify learnability on the transition path to the REE. (Note: this paper contains full-color graphics.) JEL Classification: C6, E5. Keywords: E-stability, learnability, learning, monetary policy, robust control
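
    As a rough illustration of the robustness question only: the sketch below (our own; the paper uses structured singular value analysis, which yields guaranteed bounds rather than sampled estimates) probes how large a perturbation of a hypothetical T-map Jacobian can be before E-stability fails.

```python
import numpy as np

def sampled_robust_radius(DT, radii, trials=2000, seed=0):
    """Crude sampling stand-in for structured singular value (mu) analysis:
    return the largest tested radius r such that every sampled perturbation
    with entries in [-r, r] keeps DT + Delta E-stable (all eigenvalue real
    parts < 1). Sampling can only over-estimate robustness; mu analysis
    certifies it."""
    rng = np.random.default_rng(seed)
    largest = 0.0
    for r in sorted(radii):
        for _ in range(trials):
            delta = rng.uniform(-r, r, size=DT.shape)
            if np.any(np.linalg.eigvals(DT + delta).real >= 1.0):
                return largest  # first failure: report the previous safe radius
        largest = r
    return largest

DT = np.array([[0.6, 0.2],
               [0.1, 0.8]])  # hypothetical T-map Jacobian at the REE
print(sampled_robust_radius(DT, radii=[0.02, 0.05, 0.1, 0.2]))
```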

    Robustifying learnability

    In recent years, the learnability of rational expectations equilibria (REE) and determinacy of economic structures have rightfully joined the usual performance criteria among the sought-after goals of policy design. Some contributions to the literature, including Bullard and Mitra (2001) and Evans and Honkapohja (2002), have made significant headway in establishing certain features of monetary policy rules that facilitate learning. However, a treatment of policy design for learnability in worlds where agents have potentially misspecified their learning models has yet to surface. This paper provides such a treatment. We begin with the notion that because the profession has yet to settle on a consensus model of the economy, it is unreasonable to expect private agents to have collective rational expectations. We assume that agents have only an approximate understanding of the workings of the economy and that their learning of the economy's reduced forms is subject to potentially destabilizing perturbations. The issue is then whether a central bank can design policy to account for perturbations and still assure the learnability of the model. Our test case is the standard New Keynesian business cycle model. For different parameterizations of a given policy rule, we use structured singular value analysis (from robust control theory) to find the largest ranges of misspecification that can be tolerated in a learning model without compromising convergence to an REE. Keywords: robust control; monetary policy

    BL-WoLF: A Framework For Loss-Bounded Learnability In Zero-Sum Games

    We present BL-WoLF, a framework for learnability in repeated zero-sum games where the cost of learning is measured by the losses the learning agent accrues (rather than the number of rounds). The game is adversarially chosen from some family that the learner knows. The opponent knows the game and the learner's learning strategy. The learner tries to either not accrue losses, or to quickly learn about the game so as to avoid future losses (this is consistent with the Win or Learn Fast (WoLF) principle; BL stands for "bounded loss"). Our framework allows for both probabilistic and approximate learning. The resultant notion of BL-WoLF-learnability can be applied to any class of games, and allows us to measure the inherent disadvantage to a player that does not know which game in the class it is in. We present guaranteed BL-WoLF-learnability results for families of games with deterministic payoffs and families of games with stochastic payoffs. We demonstrate that these families are guaranteed approximately BL-WoLF-learnable with lower cost. We then demonstrate families of games (both stochastic and deterministic) that are not guaranteed BL-WoLF-learnable. We show that those families, nevertheless, are BL-WoLF-learnable. To prove these results, we use a key lemma which we derive.
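
    As a concrete (and much simplified) picture of the setting, here is a toy sketch, entirely our own: the learner knows a family of zero-sum games, spends loss rather than rounds identifying which game it faces, then plays a safe action. The framework itself allows mixed strategies and probabilistic learning, which this sketch omits.

```python
import numpy as np

def maximin_action(A):
    """Row player's pure maximin action in the zero-sum payoff matrix A."""
    return int(np.argmax(A.min(axis=1)))

def play(family, true_index, rounds=10):
    """Toy BL-WoLF-flavored learner (our simplification: pure strategies,
    and a single probe round identifies the game). The cost of learning is
    the payoff accrued while ignorant, not the number of rounds spent."""
    A = family[true_index]
    total = float(A[0].min())  # probe: play row 0 vs. an adversarial column
    # The probe payoff reveals which game we are in (distinguishable here).
    learned = next(i for i, G in enumerate(family) if G[0].min() == A[0].min())
    a = maximin_action(family[learned])
    total += (rounds - 1) * float(family[learned][a].min())
    return total

# Hypothetical family of two zero-sum games, distinguishable by the probe:
family = [np.array([[1.0, -1.0], [-1.0, 1.0]]),
          np.array([[0.0, 2.0], [2.0, 0.0]])]
print(play(family, true_index=1))
```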