
    Bayesian and adaptive optimal policy under model uncertainty

    We study the problem of a policymaker who seeks to set policy optimally in an economy where the true economic structure is unobserved, and policymakers optimally learn from their observations of the economy. This is a classic problem of learning and control, variants of which have been studied in the past, but rarely with the forward-looking variables that are a key component of modern policy-relevant models. As in most Bayesian learning problems, the optimal policy typically includes an experimentation component reflecting the endogeneity of information. We develop algorithms to solve numerically for the Bayesian optimal policy (BOP). However, the BOP is computationally feasible only in relatively small models, so we also consider a simpler specification we term adaptive optimal policy (AOP), which allows policymakers to update their beliefs but shortcuts the experimentation motive. In our setting, the AOP is significantly easier to compute, and in many cases it provides a good approximation to the BOP. We provide a simple example to illustrate the role of learning and experimentation in an MJLQ framework. Keywords: Optimal Monetary Policy, Learning, Recursive Saddlepoint Method. JEL Classification: E42, E52, E5
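
    To make the AOP/BOP distinction concrete, the following is a minimal sketch of the adaptive side of such a setup, under assumed scalar dynamics x' = a_j x + b_j u + e with two candidate modes; it illustrates the general idea, not the authors' algorithm or code. Beliefs over the modes are updated by Bayes' rule each period, while the control is chosen myopically under current beliefs, so the effect of today's policy on tomorrow's learning is ignored, which is exactly the experimentation motive that the AOP shortcuts.

        import numpy as np

        def update_beliefs(p, x_next, x, u, a, b, sigma):
            # Bayes' rule: posterior p_j proportional to p_j * N(x_next; a_j*x + b_j*u, sigma^2)
            means = a * x + b * u
            lik = np.exp(-0.5 * ((x_next - means) / sigma) ** 2)
            post = p * lik
            return post / post.sum()

        def aop_control(p, x, a, b, q=1.0, r=0.1):
            # Myopic rule: minimize expected one-period loss q*E[x_next^2] + r*u^2
            # under current beliefs p; no experimentation component.
            Eab = p @ (a * b)
            Eb2 = p @ (b ** 2)
            return -q * Eab * x / (q * Eb2 + r)

        rng = np.random.default_rng(0)
        a = np.array([0.95, 0.70])   # mode-dependent persistence (assumed values)
        b = np.array([0.50, 0.50])   # mode-dependent policy impact (assumed values)
        sigma, true_mode = 0.2, 1
        p, x = np.array([0.5, 0.5]), 1.0
        for t in range(25):
            u = aop_control(p, x, a, b)
            x_next = a[true_mode] * x + b[true_mode] * u + sigma * rng.standard_normal()
            p = update_beliefs(p, x_next, x, u, a, b, sigma)
            x = x_next
        print("posterior probability of the true mode:", round(float(p[true_mode]), 3))

    A Bayesian optimal policy would instead take into account how the choice of u affects the informativeness of next period's observation, which is what makes the BOP so much harder to compute.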

    Optimal monetary policy under uncertainty: a Markov jump-linear-quadratic approach

    This paper studies the design of optimal monetary policy under uncertainty using a Markov jump-linear-quadratic (MJLQ) approach. To approximate the uncertainty that policymakers face, the authors use different discrete modes in a Markov chain and take mode-dependent linear-quadratic approximations of the underlying model. This allows the authors to apply a powerful methodology with convenient solution algorithms that they have developed. They apply their methods to analyze the effects of uncertainty and potential gains from experimentation for two sources of uncertainty in the New Keynesian Phillips curve. The examples highlight that learning may have sizable effects on losses and, although it is generally beneficial, it need not always be so. The experimentation component typically has little effect and in some cases it can lead to attenuation of policy. Keywords: Monetary policy; Econometric models
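
    In generic notation (assumed here; the paper's own notation and timing conventions may differ), the MJLQ problem class can be written as

        \min_{\{u_t\}} \; \mathrm{E} \sum_{t=0}^{\infty} \delta^t \left( x_t' Q_{j_t} x_t + u_t' R_{j_t} u_t \right)
        \quad \text{s.t.} \quad x_{t+1} = A_{j_t} x_t + B_{j_t} u_t + C_{j_t} \varepsilon_{t+1},

    where the mode j_t \in \{1, \dots, n_j\} follows a Markov chain with transition matrix P = [P_{jk}], and each mode carries its own linear-quadratic approximation (A_j, B_j, C_j, Q_j, R_j) of the underlying model. When the current mode is unobserved, the vector of mode probabilities becomes an additional state variable, which is where learning and experimentation enter.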

    Optimal Monetary Policy Under Uncertainty in DSGE Models: A Markov Jump-Linear-Quadratic Approach

    We study the design of optimal monetary policy under uncertainty in a dynamic stochastic general equilibrium model. We use a Markov jump-linear-quadratic (MJLQ) approach to study policy design, approximating the uncertainty with different discrete modes in a Markov chain and taking mode-dependent linear-quadratic approximations of the underlying model. This allows us to apply a powerful methodology with convenient solution algorithms that we have developed. We apply our methods to a benchmark New Keynesian model, analyzing how policy is affected by uncertainty, and how learning and active testing affect policy and losses.
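
    For the backward-looking case with an observed mode, the "convenient solution algorithms" amount to iterating coupled Riccati equations, one per mode, linked through the Markov transition probabilities. The sketch below (assumed notation, not the authors' code; it omits the forward-looking variables that their full method handles) computes mode-dependent value matrices S_j and feedback rules u = -F_j x.

        import numpy as np

        def mjlq_riccati(A, B, Q, R, P, delta=0.98, iters=500):
            # Coupled Riccati iteration for x_{t+1} = A_j x_t + B_j u_t with
            # per-period loss x'Q_j x + u'R_j u and mode transition matrix P.
            n_modes = len(A)
            S = [np.eye(A[0].shape[0]) for _ in range(n_modes)]
            F = [None] * n_modes
            for _ in range(iters):
                S_new = []
                for j in range(n_modes):
                    # expected discounted continuation value given today's mode j
                    Sbar = delta * sum(P[j, k] * S[k] for k in range(n_modes))
                    F[j] = np.linalg.solve(R[j] + B[j].T @ Sbar @ B[j],
                                           B[j].T @ Sbar @ A[j])
                    S_new.append(Q[j] + A[j].T @ Sbar @ (A[j] - B[j] @ F[j]))
                S = S_new
            return S, F

        # Two modes of a scalar economy with persistent regime switches (assumed values)
        A = [np.array([[0.9]]), np.array([[0.6]])]
        B = [np.array([[0.5]]), np.array([[0.5]])]
        Q = [np.eye(1), np.eye(1)]
        R = [0.1 * np.eye(1), 0.1 * np.eye(1)]
        P = np.array([[0.95, 0.05], [0.05, 0.95]])
        S, F = mjlq_riccati(A, B, Q, R, P)
        print("mode-dependent feedback gains:", [round(f.item(), 3) for f in F])

    With a single mode and P = [[1.0]] this collapses to the standard discounted LQ Riccati iteration; the modes are coupled only through the expected continuation matrix.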

    Federal Regulatory Responses to the Prescription Opioid Crisis: Too Little, Too Late?

    Part I of this Article suggests that the medical establishment shares more blame for the crisis than many commentators seem to appreciate. Part II canvasses a variety of ways in which the federal government has responded to the opioid problem during the last few years before delving more deeply into the FDA’s role in the mess, assessing the different tools that it has tried to use as well as some that it failed to employ. This Article concludes that the agency should have allowed only a narrowly defined subset of physicians to prescribe opioid analgesics, even though the medical community would have pitched a fit about any such intrusion on its prerogatives, to say nothing of the drug manufacturers aghast at the prospect of far more modest sales. Greater use of such restrictions on distribution might have worked to nip this disaster in the bud, and it needs more serious consideration by the FDA before the next one comes down the pike.

    This Is Your Products Liability Restatement on Drugs

    Turn the Beat Around: Deactivating Implanted Cardiac-assist Devices

    State Affronts to Federal Primacy in the Licensure of Pharmaceutical Products

    Article published in the Michigan State Law Review.