355,869 research outputs found

    Learning Rational Functions

    Get PDF
    Rational functions are transformations from words to words that can be defined by string transducers; they are also captured by deterministic string transducers with lookahead. We show for the first time that the class of rational functions can be learned in the limit with polynomial time and data, when represented by string transducers with lookahead in the diagonal-minimal normal form that we introduce.
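As an illustration of the objects being learned, a rational function can be realized by a finite-state transducer that emits an output string for each input symbol. The sketch below is a hypothetical example, not the paper's construction (in particular, it does not model lookahead or the diagonal-minimal normal form):

```python
# Illustrative sketch (not from the paper): a deterministic string
# transducer realizing the rational function that doubles every 'a'
# and copies every 'b'. States and transitions are hypothetical.

def run_transducer(word, transitions, start):
    """Run a deterministic string transducer: the current state and
    input symbol determine the next state and an emitted output string."""
    state, out = start, []
    for symbol in word:
        state, emitted = transitions[(state, symbol)]
        out.append(emitted)
    return "".join(out)

# Single-state transducer: on 'a' emit "aa", on 'b' emit "b".
T = {("q0", "a"): ("q0", "aa"),
     ("q0", "b"): ("q0", "b")}

print(run_transducer("abba", T, "q0"))  # -> aabbaa
```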

    A Paradox of Environmental Awareness Campaigns

    Get PDF
    We build a workable game of common-property resource extraction under rational Bayesian learning about the renewal prospects of a resource. We uncover the impact of exogenously shifting the prior beliefs of each player on the response functions of the others. What we find about the role of environmental conservation campaigns is paradoxical: to the extent that such campaigns instill overly high pessimism about the potential of natural resources to reproduce, they create anti-conservation incentives. Anyone holding exploitation rights becomes inclined to consume more of the resource earlier, before others overexploit it and before the resource's stock is reduced to lower levels.
    Keywords: renewable resources; resource exploitation; non-cooperative dynamic games; Bayesian learning; stochastic games; commons; rational learning; uncertainty; beliefs

    The intersection of two halfspaces has high threshold degree

    Full text link
    The threshold degree of a Boolean function f:{0,1}^n -> {-1,+1} is the least degree of a real polynomial p such that f(x) = sgn p(x). We construct two halfspaces on {0,1}^n whose intersection has threshold degree Theta(sqrt n), an exponential improvement on previous lower bounds. This solves an open problem due to Klivans (2002) and rules out the use of perceptron-based techniques for PAC learning the intersection of two halfspaces, a central unresolved challenge in computational learning. We also prove that the intersection of two majority functions has threshold degree Omega(log n), which is tight and settles a conjecture of O'Donnell and Servedio (2003). Our proof consists of two parts. First, we show that for any nonconstant Boolean functions f and g, the conjunction f(x) ∧ g(y) has threshold degree O(d) if and only if ||f - F||_infty + ||g - G||_infty < 1 for some rational functions F, G of degree O(d). Second, we settle the least degree required for approximating a halfspace and a majority function to any given accuracy by rational functions. Our technique further allows us to make progress on Aaronson's challenge (2008) and contribute strong direct product theorems for polynomial representations of composed Boolean functions of the form F(f_1, ..., f_n). In particular, we give an improved lower bound on the approximate degree of the AND-OR tree.
    Comment: Full version of the FOCS'09 paper.
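The threshold-degree definition can be made concrete on a toy case. The sketch below (an illustration, not taken from the paper) checks that a hypothetical degree-1 polynomial sign-represents AND on {0,1}^2, witnessing that a single halfspace like AND has threshold degree 1:

```python
# Illustrative sketch (not from the paper): verify that a degree-1
# polynomial sign-represents AND on {0,1}^2. The polynomial p is a
# hypothetical example of a sign-representation.
from itertools import product

def sign(v):
    return 1 if v > 0 else -1

# f: {0,1}^2 -> {-1,+1}, equal to +1 exactly when both bits are 1.
f = lambda x1, x2: 1 if (x1 and x2) else -1

# Candidate degree-1 sign-representation of f.
p = lambda x1, x2: 2 * x1 + 2 * x2 - 3

# sgn p(x) agrees with f(x) on all four inputs.
assert all(f(*x) == sign(p(*x)) for x in product([0, 1], repeat=2))
print("a degree-1 polynomial sign-represents AND")
```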

    Initial Expectations in New Keynesian Models with Learning

    Get PDF
    This paper examines how the estimation results for a standard New Keynesian model with constant-gain least-squares learning are sensitive to the stance taken on agents' beliefs at the beginning of the sample. The New Keynesian model is estimated under rational expectations and under learning with three different frameworks for how expectations are set at the beginning of the sample. The results show that initial beliefs can have an impact on the predictions of an estimated model; in fact, previous literature has exploited this sensitivity to explain the changing volatilities of output and inflation in the post-war United States. The results indicate statistical evidence for adaptive learning; however, the rational expectations framework performs at least as well as the learning frameworks, if not better, on in-sample and out-of-sample forecast-error criteria. Moreover, learning is not found to explain time-varying macroeconomic volatility any better than rational expectations. Finally, impulse response functions from the estimated models show that the dynamics following a structural shock can depend crucially on how expectations are initialized and what information agents are assumed to have.
    Keywords: learning, expectations, New Keynesian model, maximum likelihood
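The sensitivity to initial beliefs under constant-gain learning can be seen in a stripped-down scalar sketch (hypothetical data and gain, not the paper's estimated model): with gain g, the belief is an exponentially weighted average in which the initialization retains weight (1-g)^T after T observations.

```python
# Illustrative sketch (not from the paper): scalar constant-gain
# learning of an expectation. The data series, gain, and initial
# beliefs below are hypothetical.

def constant_gain_path(observations, b0, gain):
    """Update the belief as b <- b + gain * (observation - b) each period."""
    beliefs = [b0]
    for y in observations:
        beliefs.append(beliefs[-1] + gain * (y - beliefs[-1]))
    return beliefs

data = [2.0] * 20  # twenty periods of constant realized inflation of 2%
optimistic = constant_gain_path(data, b0=0.0, gain=0.05)
pessimistic = constant_gain_path(data, b0=5.0, gain=0.05)

# Early-sample beliefs still reflect the initialization...
print(round(optimistic[5], 3), round(pessimistic[5], 3))
# ...while both paths drift toward the data as T grows.
print(round(optimistic[-1], 3), round(pessimistic[-1], 3))
```

Because the initialization decays only geometrically, short samples never fully wash it out, which is one way initial beliefs can shape an estimated model's predictions.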

    Rational neural networks

    Full text link
    We consider neural networks with rational activation functions. The choice of the nonlinear activation function in deep learning architectures is crucial and heavily impacts the performance of a neural network. We establish optimal bounds in terms of network complexity and prove that rational neural networks approximate smooth functions more efficiently than ReLU networks, with exponentially smaller depth. The flexibility and smoothness of rational activation functions make them an attractive alternative to ReLU, as we demonstrate with numerical experiments.
    Comment: 21 pages, 7 figures.
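For illustration, a rational activation is simply a ratio of two low-degree polynomials; in a trained network the coefficients would be learnable parameters. The coefficients below are hypothetical placeholders, not values from the paper:

```python
# Illustrative sketch (not from the paper): a rational activation
# r(x) = p(x) / q(x) with hypothetical fixed coefficients, given in
# ascending-degree order. In practice these would be trained.

def rational_activation(x, p=(0.0, 1.0, 0.0, 0.1), q=(1.0, 0.0, 0.1)):
    """Evaluate the ratio of polynomials p(x) / q(x)."""
    num = sum(c * x**i for i, c in enumerate(p))
    den = sum(c * x**i for i, c in enumerate(q))
    return num / den

# Smooth everywhere, unlike ReLU's kink at 0:
print([round(rational_activation(x), 3) for x in (-2.0, 0.0, 2.0)])
# -> [-2.0, 0.0, 2.0]
```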

    Robust learning stability with operational monetary policy rules

    Get PDF
    We consider the robust stability of a rational expectations equilibrium, which we define as stability under discounted (constant-gain) least-squares learning for a range of gain parameters. We find that for operational forms of policy rules, i.e. rules that do not depend on contemporaneous values of endogenous aggregate variables, many interest-rate rules do not exhibit robust stability. We consider a variety of interest-rate rules, including instrument rules, optimal reaction functions under discretion or commitment, and rules that approximate optimal policy under commitment. For some reaction functions we allow for an interest-rate stabilization motive in the policy objective. The expectations-based rules proposed in Evans and Honkapohja (2003, 2006) deliver robust learning stability. In contrast, many proposed alternatives become unstable under learning even at small values of the gain parameter.
    Keywords: commitment; interest-rate setting; adaptive learning; stability; determinacy

    Efficiently Learning from Revealed Preference

    Full text link
    In this paper, we consider the revealed preferences problem from a learning perspective. Every day, a price vector and a budget are drawn from an unknown distribution, and a rational agent buys his most preferred bundle according to some unknown utility function, subject to the given prices and budget constraint. We wish not only to find a utility function which rationalizes a finite set of observations, but also to produce a hypothesis valuation function which accurately predicts the behavior of the agent in the future. We give efficient algorithms with polynomial sample complexity for agents with linear valuation functions, as well as for agents with linearly separable, concave valuation functions with bounded second derivative.
    Comment: Extended abstract appears in WINE 201
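To make the observation model concrete: for a linear valuation, the agent's utility-maximizing purchase given prices and a budget reduces to a greedy fill in order of value per dollar (assuming divisible goods capped at one unit each). The sketch below is illustrative only; the valuation, prices, and budget are hypothetical:

```python
# Illustrative sketch (not from the paper): the bundle chosen by an
# agent with a linear valuation v.x, prices p, budget B, and at most
# one (divisible) unit of each good. For linear valuations the optimum
# is a greedy fill by value per dollar.

def best_bundle(values, prices, budget):
    """Spend the budget on goods in decreasing value-per-dollar order."""
    order = sorted(range(len(values)),
                   key=lambda i: values[i] / prices[i], reverse=True)
    bundle = [0.0] * len(values)
    for i in order:
        spend = min(prices[i], budget)  # buy at most one unit of good i
        bundle[i] = spend / prices[i]
        budget -= spend
    return bundle

# One day's observation: (prices, budget) in, chosen bundle out.
print(best_bundle(values=[3.0, 1.0, 2.0], prices=[2.0, 1.0, 4.0], budget=3.0))
# -> [1.0, 1.0, 0.0]
```

A learner in this setting sees only such (prices, budget, bundle) triples and must recover a valuation that predicts future choices.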