    Learning Robust Deep Equilibrium Models

    Full text link
    Deep equilibrium (DEQ) models have emerged as a promising class of implicit-layer models in deep learning, which abandon traditional depth by solving for the fixed points of a single nonlinear layer. Despite their success, the stability of the fixed points of these models remains poorly understood. Recently, Lyapunov theory has been applied to Neural ODEs, another type of implicit-layer model, to confer adversarial robustness. By viewing DEQ models as nonlinear dynamical systems, we propose a robust DEQ model, named LyaDEQ, with provable stability guarantees via Lyapunov theory. The crux of our method is ensuring that the fixed points of the DEQ model are Lyapunov stable, which enables LyaDEQ models to resist minor initial perturbations. To avoid the poor adversarial defense that results when Lyapunov-stable fixed points lie close to one another, we add an orthogonal fully connected layer after the Lyapunov stability module to separate the fixed points. We evaluate LyaDEQ models on several widely used datasets under well-known adversarial attacks, and the experimental results demonstrate a significant improvement in robustness. Furthermore, we show that the LyaDEQ model can be combined with other defense methods, such as adversarial training, to achieve even better adversarial robustness.
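    To make the fixed-point mechanics concrete, here is a minimal sketch of the generic DEQ forward pass in NumPy. It is not the authors' LyaDEQ implementation; the layer f, the weights W and U, and the naive iteration (real DEQs typically use Broyden or Anderson solvers) are all illustrative assumptions.

```python
import numpy as np

def deq_forward(f, x, z0, tol=1e-5, max_iter=100):
    """Generic DEQ forward pass: iterate z <- f(z, x) to a fixed point z*.

    Illustrative only; real DEQ implementations use quasi-Newton solvers
    (Broyden, Anderson) and implicit differentiation for the backward pass.
    """
    z = z0
    for _ in range(max_iter):
        z_next = f(z, x)
        if np.linalg.norm(z_next - z) < tol:
            return z_next
        z = z_next
    return z  # may not have converged

# Hypothetical single layer: z* solves z = tanh(z W^T + x U^T).
rng = np.random.default_rng(0)
W = 0.1 * rng.standard_normal((16, 16))  # small norm -> contraction, so a
U = rng.standard_normal((16, 8))         # unique, stable fixed point exists
f = lambda z, x: np.tanh(z @ W.T + x @ U.T)

z_star = deq_forward(f, x=rng.standard_normal((4, 8)), z0=np.zeros((4, 16)))
```

    In this picture, Lyapunov stability of z* means that small perturbations of the initial iterate decay back to the same fixed point, which is the property the paper's Lyapunov module enforces.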

    Robust learning stability with operational monetary policy rules

    Get PDF
    We consider the robust stability of a rational expectations equilibrium, which we define as stability under discounted (constant-gain) least-squares learning for a range of gain parameters. We find that for operational forms of policy rules, i.e. rules that do not depend on contemporaneous values of endogenous aggregate variables, many interest-rate rules do not exhibit robust stability. We consider a variety of interest-rate rules, including instrument rules, optimal reaction functions under discretion or commitment, and rules that approximate optimal policy under commitment. For some reaction functions we allow for an interest-rate stabilization motive in the policy objective. The expectations-based rules proposed in Evans and Honkapohja (2003, 2006) deliver robust learning stability. In contrast, many proposed alternatives become unstable under learning even at small values of the gain parameter.
    Keywords: commitment; interest-rate setting; adaptive learning; stability; determinacy
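    For reference, the discounted (constant-gain) least-squares learning used here is the standard recursion from the adaptive-learning literature (Evans and Honkapohja); in one common timing convention, agents update their coefficient estimates $\phi_t$ and second-moment matrix $R_t$ as

$$
\phi_t = \phi_{t-1} + \gamma\, R_t^{-1} x_t \bigl( y_t - x_t^{\top} \phi_{t-1} \bigr),
\qquad
R_t = R_{t-1} + \gamma \bigl( x_t x_t^{\top} - R_{t-1} \bigr),
$$

    where $\gamma$ is the constant gain. Robust stability then requires the rational expectations equilibrium to be a stable resting point of this recursion for a range of $\gamma$, not only in the small-gain limit.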

    Robust Learning Stability with Operational Monetary Policy Rules

    Get PDF
    We consider robust stability under learning of alternative interest-rate rules. By “robust stability” we mean stability of the rational expectations equilibrium, under discounted (constant gain) least-squares learning, for a range of gain parameters. We find that many interest-rate rules are not robust, in this sense, when operational forms of policy rules are employed. Rules are considered operational if they do not depend on contemporaneous values of endogenous aggregate variables. We consider a variety of interest-rate rules, including instrument rules, optimal reaction functions under discretion or commitment, and rules that approximate optimal policy under commitment. For some of the rules that aim to achieve optimal policy, we allow for an interest-rate stabilization motive in the policy objective. The expectations-based rules proposed in Evans and Honkapohja (2003, 2006) deliver robust learning stability. In contrast, many proposed alternatives become unstable under learning even at small values of the gain parameter.
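    The gain-dependence of stability is easy to see in a toy self-referential model (illustrative only, not this paper's model): output depends on agents' forecast, and agents update the forecast with a constant gain. The recursion is stable only when the gain satisfies 0 < γ(1 − α) < 2, so stability can fail at large gains even when the small-gain (E-stability) condition α < 1 holds.

```python
import numpy as np

def simulate(alpha, mu, gamma, T=500, seed=0):
    """Constant-gain learning in a toy model y_t = mu + alpha*a_{t-1} + e_t,
    with forecast update a_t = a_{t-1} + gamma*(y_t - a_{t-1}).
    Illustrative only; the REE forecast is a* = mu / (1 - alpha), and the
    learning recursion is stable iff |1 + gamma*(alpha - 1)| < 1.
    """
    rng = np.random.default_rng(seed)
    a, path = 0.0, []
    for _ in range(T):
        y = mu + alpha * a + 0.1 * rng.standard_normal()
        a += gamma * (y - a)
        path.append(a)
    return np.array(path)

# REE forecast is a* = 1 / (1 - 0.5) = 2; stability needs gamma < 4 here.
for gamma in (0.05, 0.5, 4.5):
    tail = simulate(alpha=0.5, mu=1.0, gamma=gamma)[-50:]
    print(f"gain={gamma:4.2f}  mean of last 50 forecasts: {tail.mean():10.3g}")
```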

    Probably Approximately Correct Nash Equilibrium Learning

    Full text link
    We consider a multi-agent noncooperative game with agents' objective functions affected by uncertainty. Following a data-driven paradigm, we represent uncertainty by means of scenarios and seek a robust Nash equilibrium solution. We treat the Nash equilibrium computation problem within the realm of probably approximately correct (PAC) learning. Building upon recent developments in scenario-based optimization, we accompany the computed Nash equilibrium with a priori and a posteriori probabilistic robustness certificates, providing confidence that the computed equilibrium remains unaffected (in probabilistic terms) when a new uncertainty realization is encountered. For a wide class of games, we also show that the computation of the so-called compression set, a key concept in scenario-based optimization, can be obtained directly as a byproduct of the proposed solution methodology. Finally, we illustrate how to overcome differentiability issues arising from the introduction of scenarios and compute a Nash equilibrium solution in a decentralized manner. We demonstrate the efficacy of the proposed approach on an electric vehicle charging control problem.
    Comment: Preprint submitted to IEEE Transactions on Automatic Control
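    As a pointer to the scenario machinery involved, the following computes a generic a priori certificate in the Calafiore-Campi style: given N sampled scenarios and a compression (support) set of size d, it returns a violation level eps such that, with confidence at least 1 − β, a new uncertainty realization upsets the computed solution with probability at most eps. This textbook bound is assumed here for illustration; the paper derives its own a priori and a posteriori certificates.

```python
from math import comb

def scenario_epsilon(N: int, d: int, beta: float) -> float:
    """A priori scenario bound: solve comb(N, d) * (1 - eps)**(N - d) = beta
    for eps.  With confidence >= 1 - beta, the violation probability of a
    solution supported by d of the N scenarios is at most eps.
    Generic textbook bound, not the paper's exact certificate.
    """
    assert 0 <= d < N and 0.0 < beta < 1.0
    return 1.0 - (beta / comb(N, d)) ** (1.0 / (N - d))

# e.g. 1000 scenarios, compression set of size 10, 99% confidence:
print(f"eps = {scenario_epsilon(1000, 10, 1e-2):.4f}")  # ~0.057
```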

    Learning Dynamics in Monetary Policy: The Robustness of an Aggressive Price Stabilizing Policy

    Get PDF
    This paper investigates the effect of an aggressive inflation-stabilizing monetary policy on the ability of agents to reach a rational expectations equilibrium for inflation and output. Using an adaptive learning framework, we develop a model that combines a real-wage contracting rigidity with an interest-rate rule. We show that an AR(1) equilibrium requires more aggressive monetary policy to achieve both determinacy and learnability. These model and policy findings contrast with those of Bullard and Mitra [Determinacy, learnability and monetary policy inertia (2001); Journal of Monetary Economics 49 (2002) 1105], whose model has no inflation persistence and whose policy prescription is less aggressive. These results suggest that aggressive policy is robust across different model specifications.
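    For context, the determinacy half of such exercises is typically a Blanchard-Kahn eigenvalue count on the linearized system E_t y_{t+1} = M y_t: the REE is determinate when the number of eigenvalues of M outside the unit circle equals the number of non-predetermined (jump) variables. A generic check follows; the matrix below is illustrative, not the paper's wage-contracting model.

```python
import numpy as np

def blanchard_kahn(M, n_jump):
    """Blanchard-Kahn determinacy check for E_t y_{t+1} = M y_t.

    Counts eigenvalues outside the unit circle and compares with the
    number of non-predetermined variables.  Generic condition; M for
    the paper's model is not reproduced here.
    """
    explosive = int(np.sum(np.abs(np.linalg.eigvals(M)) > 1.0))
    if explosive == n_jump:
        return "determinate (unique stable REE)"
    if explosive < n_jump:
        return "indeterminate (multiple stable REE)"
    return "explosive (no stable REE)"

# Illustrative 2x2 system with one jump variable: eigenvalues ~1.53 and
# ~0.77, so exactly one explosive root matches the one jump variable.
M = np.array([[1.5, 0.2],
              [0.1, 0.8]])
print(blanchard_kahn(M, n_jump=1))
```

    Learnability is then the separate requirement that this determinate equilibrium also be stable under the adaptive-learning recursion sketched earlier.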
