
    On Measuring the Welfare Cost of Business Cycles

    Lucas (1987) argues that the gain from eliminating aggregate fluctuations is trivial. Following Lucas, a number of researchers have altered assumptions on preferences and found that the gain from eliminating business cycles is potentially very large. However, in these exercises little discipline is placed on preference parameters. This paper estimates the welfare cost of business cycles, allowing for potential time-non-separabilities in preferences, where discipline is placed on the choice of preference parameters by requiring that the preferences be consistent with observed fluctuations in a model of business cycles. That is, a theoretical real business cycle world is constructed and the representative agent is then placed in this world. The agent responds optimally to exogenous shocks, given the frictions in the economy. The agent's preference parameters, along with other structural parameters, are estimated using a Bayesian procedure involving Markov Chain Monte Carlo methods. Two main results emerge from the paper. First, the form of the time-non-separability estimated in this paper is very different from the forms suggested and used elsewhere in the literature. Second, the welfare cost of business cycles is close to Lucas's estimate.
    Keywords: Business Cycles, Nonseparable preferences, Welfare cost, Markov Chain Monte Carlo
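    As a rough illustration of the estimation machinery described above, the sketch below implements a random-walk Metropolis sampler, the basic Markov Chain Monte Carlo building block. The log-posterior here is a toy stand-in for a model-implied likelihood plus priors, not the paper's business-cycle model.

    ```python
    # Minimal random-walk Metropolis sketch. `log_post` is a hypothetical
    # stand-in for a model-implied log-posterior over preference parameters.
    import numpy as np

    def metropolis(log_post, theta0, n_draws=10_000, step=0.1, seed=0):
        rng = np.random.default_rng(seed)
        theta = np.asarray(theta0, dtype=float)
        lp = log_post(theta)
        draws = np.empty((n_draws, theta.size))
        for i in range(n_draws):
            prop = theta + step * rng.standard_normal(theta.size)
            lp_prop = log_post(prop)
            if np.log(rng.uniform()) < lp_prop - lp:  # Metropolis accept/reject
                theta, lp = prop, lp_prop
            draws[i] = theta
        return draws

    # Toy posterior: Gaussian centred on "true" parameter values of 1.0.
    draws = metropolis(lambda t: -0.5 * np.sum((t - 1.0) ** 2), theta0=[0.0, 0.0])
    print(draws[2_000:].mean(axis=0))  # posterior means after burn-in
    ```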

    Learning Risk Preferences in Markov Decision Processes: an Application to the Fourth Down Decision in Football

    For decades, National Football League (NFL) coaches' observed fourth down decisions have been largely inconsistent with prescriptions based on statistical models. In this paper, we develop a framework to explain this discrepancy using a novel inverse optimization approach. We model the fourth down decision and the subsequent sequence of plays in a game as a Markov decision process (MDP), the dynamics of which we estimate from NFL play-by-play data from the 2014 through 2022 seasons. We assume that coaches' observed decisions are optimal but that the risk preferences governing their decisions are unknown. This yields a novel inverse decision problem for which the optimality criterion, or risk measure, of the MDP is the estimand. Using the quantile function to parameterize risk, we estimate the quantile-optimal policy under which the coaches' observed decisions are minimally suboptimal. In general, we find that coaches' fourth-down behavior is consistent with optimizing low quantiles of the next-state value distribution, which corresponds to conservative risk preferences. We also find that coaches exhibit higher risk tolerances when making decisions in the opponent's half of the field than in their own, and that league-average fourth down risk tolerances have increased over the seasons in our data.
    Comment: 33 pages, 9 figures
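    A toy sketch of the quantile criterion described above: a decision-maker optimizing the tau-quantile of the next-state value distribution picks the action whose tau-quantile is largest, and a low tau corresponds to conservative play. The two action-value distributions below are invented for illustration, not estimates from the play-by-play data.

    ```python
    # Hypothetical samples of next-state value for two fourth-down actions.
    import numpy as np

    rng = np.random.default_rng(1)
    values = {
        "go_for_it": rng.normal(0.03, 0.12, 5_000),  # higher mean, higher variance
        "punt":      rng.normal(0.00, 0.03, 5_000),  # safe, low variance
    }

    def quantile_optimal_action(values, tau):
        # Pick the action maximizing the tau-quantile of its value distribution.
        return max(values, key=lambda a: np.quantile(values[a], tau))

    print(quantile_optimal_action(values, tau=0.10))  # conservative -> "punt"
    print(quantile_optimal_action(values, tau=0.50))  # median -> "go_for_it"
    ```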

    Entropic Risk for Turn-Based Stochastic Games

    Entropic risk (ERisk) is an established risk measure in finance, quantifying risk by an exponential re-weighting of rewards. We study ERisk for the first time in the context of turn-based stochastic games with the total reward objective. This gives rise to an objective function that demands the control of systems in a risk-averse manner. We show that the resulting games are determined and, in particular, admit optimal memoryless deterministic strategies. This contrasts with risk measures previously considered in the special case of Markov decision processes, which require randomization and/or memory. We provide several results on the decidability and the computational complexity of the threshold problem, i.e. whether the optimal value of ERisk exceeds a given threshold. In the most general case, the problem is decidable subject to Schanuel's conjecture. If all inputs are rational, the resulting threshold problem can be solved using algebraic numbers, leading to decidability via a polynomial-time reduction to the existential theory of the reals. Further restrictions on the encoding of the input allow the solution of the threshold problem in NP∩coNP. Finally, an approximation algorithm for the optimal value of ERisk is provided.
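    For reference, the entropic risk of a reward X with parameter theta is ERisk_theta(X) = (1/theta) * log E[exp(theta * X)]; for rewards, theta < 0 penalizes low outcomes and so encodes risk aversion. The snippet below is a minimal numerical sketch of that re-weighting, not the game-solving algorithms of the paper.

    ```python
    # Entropic risk via a numerically stable log-mean-exp.
    import numpy as np

    def entropic_risk(rewards, theta):
        m = theta * np.asarray(rewards, dtype=float)
        return (m.max() + np.log(np.mean(np.exp(m - m.max())))) / theta

    rewards = [0.0, 0.0, 10.0, 10.0]
    print(entropic_risk(rewards, theta=-2.0))  # ~0.35, far below the mean of 5.0
    print(entropic_risk(rewards, theta=1e-8))  # ~5.0, recovers the expectation
    ```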

    A Semiparametric Estimator for Dynamic Optimization Models

    We develop a new estimation methodology for dynamic optimization models with unobserved state variables. Our approach is semiparametric in the sense of not requiring explicit parametric assumptions concerning the distribution of these unobserved state variables. We propose a two-step pairwise-difference estimator which exploits two common features of dynamic optimization problems: (1) the weak monotonicity of the agent's decision (policy) function in the unobserved state variables, conditional on the observed state variables; and (2) the state-contingent nature of optimal decision-making, which implies that, conditional on the observed state variables, the variation in observed choices across agents must be due to randomness in the unobserved state variables across agents. We apply our estimator to a model of dynamic competitive equilibrium in the market for milk production quota in Ontario, Canada.
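    The monotonicity condition (1) is what makes the first step work: conditional on the observed state, the ranking of observed choices across agents reveals the ranking of their unobserved states, whatever distribution those unobservables follow. A toy illustration, with a made-up policy function:

    ```python
    # Conditional on a common observed state x, a policy y(x, u) that is
    # increasing in the unobserved state u preserves the ordering of u.
    import numpy as np

    rng = np.random.default_rng(2)
    x = 1.5                       # observed state shared by a group of agents
    u = rng.uniform(size=8)       # unobserved states, distribution unknown
    y = x + np.sqrt(u)            # hypothetical policy, monotone in u given x

    print(np.argsort(y))          # ranking of observed choices...
    print(np.argsort(u))          # ...equals the ranking of the unobservables
    ```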

    UNCOVERING THE HIT-LIST FOR SMALL INFLATION TARGETERS: A BAYESIAN STRUCTURAL ANALYSIS

    We estimate the underlying macroeconomic policy objectives of three of the earliest explicit inflation targeters - Australia, Canada and New Zealand - within the context of a small open economy DSGE model. We assume central banks set policy optimally, so that we can reverse-engineer policy objectives from observed time series data. We find that none of the central banks shows a concern for stabilizing the real exchange rate. All three central banks share a concern for minimizing the volatility in the change in the nominal interest rate. The Reserve Bank of Australia places the most weight on minimizing the deviation of output from trend. Joint tests of the posterior distributions of these policy preference parameters suggest that the central banks are very similar in their overall objectives.
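    The kind of objective being reverse-engineered here can be written as a quadratic loss. The sketch below scores candidate weights against simulated series; the weights, series, and functional form are illustrative placeholders, not the paper's estimates or its DSGE model.

    ```python
    # Weighted quadratic central-bank loss over inflation, the output gap,
    # and the change in the nominal interest rate.
    import numpy as np

    def policy_loss(pi, ygap, di, w_pi=1.0, w_y=0.35, w_di=0.6):
        return w_pi * np.var(pi) + w_y * np.var(ygap) + w_di * np.var(di)

    rng = np.random.default_rng(3)
    pi, ygap = rng.normal(0, 0.5, 200), rng.normal(0, 1.0, 200)
    i = np.cumsum(rng.normal(0, 0.2, 201))     # nominal interest rate path
    print(policy_loss(pi, ygap, np.diff(i)))   # loss implied by these weights
    ```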

    Principled Data-Driven Decision Support for Cyber-Forensic Investigations

    In the wake of a cybersecurity incident, it is crucial to promptly discover how the threat actors breached security in order to assess the impact of the incident and to develop and deploy countermeasures that can protect against further attacks. To this end, defenders can launch a cyber-forensic investigation, which discovers the techniques that the threat actors used in the incident. A fundamental challenge in such an investigation is prioritizing the investigation of particular techniques since the investigation of each technique requires time and effort, but forensic analysts cannot know which ones were actually used before investigating them. To ensure prompt discovery, it is imperative to provide decision support that can help forensic analysts with this prioritization. A recent study demonstrated that data-driven decision support, based on a dataset of prior incidents, can provide state-of-the-art prioritization. However, this data-driven approach, called DISCLOSE, is based on a heuristic that utilizes only a subset of the available information and does not approximate optimal decisions. To improve upon this heuristic, we introduce a principled approach for data-driven decision support for cyber-forensic investigations. We formulate the decision-support problem using a Markov decision process, whose states represent the states of a forensic investigation. To solve the decision problem, we propose a Monte Carlo tree search based method, which relies on a k-NN regression over prior incidents to estimate state-transition probabilities. We evaluate our proposed approach on multiple versions of the MITRE ATT&CK dataset, which is a knowledge base of adversarial techniques and tactics based on real-world cyber incidents, and demonstrate that our approach outperforms DISCLOSE in terms of techniques discovered per effort spent.
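    The distinctive ingredient is the transition model: the probability that an as-yet-uninvestigated technique was used is estimated from the k prior incidents most similar to the current investigation state. Below is a minimal sketch of that estimate, with invented incident vectors and a Hamming-style distance; the paper's feature encoding and metric may differ.

    ```python
    # k-NN estimate of P(technique used | investigation state).
    import numpy as np

    def knn_technique_prob(state, incidents, technique, k=3):
        # state maps technique index -> 1 (confirmed used) or 0 (ruled out)
        known = list(state)
        target = np.array([state[t] for t in known])
        dists = [np.abs(inc[known] - target).sum() for inc in incidents]
        nearest = np.argsort(dists)[:k]
        return float(np.mean([incidents[i][technique] for i in nearest]))

    incidents = np.array([[1, 1, 0, 1],   # one row per prior incident,
                          [1, 0, 0, 1],   # one column per technique
                          [0, 1, 1, 0],
                          [1, 1, 1, 1]])
    state = {0: 1, 2: 0}                  # technique 0 confirmed, 2 ruled out
    print(knn_technique_prob(state, incidents, technique=3))  # -> 1.0
    ```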

    Predicting continuous conflict perception with Bayesian Gaussian processes

    Conflict is one of the most important phenomena of social life, but it is still largely neglected by the computing community. This work proposes an approach that detects common conversational social signals (loudness, overlapping speech, etc.) and predicts the conflict level perceived by human observers in continuous, non-categorical terms. The proposed regression approach is fully Bayesian and adopts Automatic Relevance Determination to identify the social signals that most influence the outcome of the prediction. The experiments are performed over the SSPNet Conflict Corpus, a publicly available collection of 1430 clips extracted from televised political debates (roughly 12 hours of material for 138 subjects in total). The results show that it is possible to achieve a correlation close to 0.8 between actual and predicted conflict perception.
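    A minimal sketch of the Automatic Relevance Determination idea using scikit-learn (the paper's own implementation may differ): one RBF length-scale per input dimension, so signals with little influence on the prediction are assigned long learned length-scales. The features and targets below are synthetic stand-ins for the per-clip social signals and conflict scores.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(4)
    X = rng.normal(size=(200, 3))                   # 3 candidate social signals
    y = 2.0 * X[:, 0] + 0.1 * rng.normal(size=200)  # only signal 0 is relevant

    # One length-scale per dimension makes the RBF kernel an ARD kernel.
    kernel = RBF(length_scale=np.ones(3)) + WhiteKernel(noise_level=0.1)
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)
    print(gp.kernel_.k1.length_scale)  # signal 0 gets a shorter learned scale
    ```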

    A RISK-INFORMED DECISION-MAKING METHODOLOGY TO IMPROVE LIQUID ROCKET ENGINE PROGRAM TRADEOFFS

    This work provides a risk-informed decision-making methodology to improve liquid rocket engine program tradeoffs among the conflicting areas of concern of affordability, reliability, and initial operational capability (IOC), taking into account psychological and economic theories in combination with reliability engineering. Technical program risks are associated with the number of predicted failures of the test-analyze-and-fix (TAAF) cycle, which is based on the maturity of the engine components. Financial and schedule program risks are associated with the epistemic uncertainty of the models that determine the measures of effectiveness in the three areas of concern. The affordability and IOC models' inputs reflect non-technical and technical factors such as team experience, design scope, technology readiness level, and manufacturing readiness level. The reliability model introduces the Reliability-As-an-Independent-Variable (RAIV) strategy, which aggregates fictitious or actual hot-fire tests of testing profiles that differ from the actual mission profile to estimate the system reliability. The main RAIV strategy inputs are the physical or functional architecture of the system, the principal test plan strategy, a stated reliability-by-credibility requirement, and the failure mechanisms that define the reliable life of the system components. The results of the RAIV strategy, which are the number of hardware sets and the number of hot-fire tests, are used as inputs to the affordability and the IOC models. Satisficing within each tradeoff is attained by maximizing the weighted sum of the normalized areas of concern, subject to constraints that are based on the decision-maker's targets and uncertainty about the affordability, reliability, and IOC, using genetic algorithms. In the planning stage of an engine program, the decision variables of the genetic algorithm correspond to fictitious hot-fire tests that include TAAF cycle failures. In the program execution stage, the RAIV strategy is used as a reliability growth planning, tracking, and projection model. The main contributions of this work are the development of a comprehensible and consistent risk-informed tradeoff framework, the RAIV strategy that links affordability and reliability, a strategy to define an industry or government standard or guideline for liquid rocket engine hot-fire test plans, and an alternative to the U.S. Crow/AMSAA reliability growth model applying the RAIV strategy.
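    The satisficing step lends itself to a compact sketch: a genetic algorithm maximizes the weighted sum of normalized measures of effectiveness, with the decision-maker's targets enforced as penalties. The objective functions, weights, and target below are placeholders, not the affordability, reliability, or IOC models of this work.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    W = np.array([0.4, 0.4, 0.2])    # weights: affordability, reliability, IOC

    def objectives(pop):             # placeholder measures, normalized to [0, 1]
        return np.stack([1 - pop[:, 0], pop[:, 1] ** 0.5, 1 - 0.5 * pop[:, 2]],
                        axis=1)

    def fitness(pop):
        obj = objectives(pop)
        penalty = 10 * np.maximum(0.6 - obj[:, 1], 0)  # reliability target 0.6
        return obj @ W - penalty

    pop = rng.uniform(size=(40, 3))                    # decision variables in [0, 1]
    for _ in range(100):
        parents = pop[np.argsort(fitness(pop))[-20:]]  # select the fittest half
        children = (parents[rng.integers(20, size=20)]
                    + parents[rng.integers(20, size=20)]) / 2  # arithmetic crossover
        children += 0.05 * rng.standard_normal(children.shape)  # Gaussian mutation
        pop = np.clip(np.vstack([parents, children]), 0, 1)
    print(pop[np.argmax(fitness(pop))])                # best satisficing solution
    ```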