
    Japanese Monetary Policy during the Collapse of the Bubble Economy: A View of Policymaking under Uncertainty

    Focusing on policymaking under uncertainty, we analyze the monetary policy of the Bank of Japan (BOJ) in the early 1990s, when the bubble economy collapsed. Conducting stochastic simulations with a large-scale macroeconomic model of the Japanese economy, we find that the BOJ's monetary policy at that time was essentially optimal under uncertainty about the policy multiplier. On the other hand, we also find that the BOJ's policy was not optimal under uncertainty about inflation dynamics: a more aggressive policy response than the one actually implemented would have been needed. Thus, optimal monetary policy differs greatly depending upon which type of uncertainty is emphasized. Given that overcoming deflation became an important issue from the late 1990s onward, it can be argued that during the early 1990s the BOJ should have placed greater emphasis on uncertainty about inflation dynamics and implemented a more aggressive monetary policy. A counterfactual simulation indicates that the inflation rate and the real growth rate would have been somewhat higher had the BOJ implemented a more accommodative policy during the early 1990s. However, the simulation also suggests that the effects would have been limited, and that an accommodative monetary policy by itself would not have changed the overall picture of the prolonged stagnation of the Japanese economy during the 1990s.
    Keywords: collapse of the bubble economy; monetary policy; uncertainty
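    The abstract does not spell out the underlying model, but the classic one-period Brainard (1967) setup is a standard way to see why multiplier uncertainty alone favours the cautious policy the BOJ actually followed. The notation below (output gap y, instrument r, multiplier k, shock u) is a stylised illustration, not the paper's large-scale model:

```latex
% Stylised illustration, not the paper's model: output gap y = k r + u,
% with known shock u and uncertain multiplier k with mean \bar{k} and
% variance \sigma_k^2. Minimising the expected squared gap gives
\[
  \min_r \; \mathbb{E}\!\left[(k r + u)^2\right]
  \quad\Longrightarrow\quad
  r^{*} = -\,\frac{\bar{k}\,u}{\bar{k}^{2} + \sigma_k^{2}},
\]
% which is attenuated relative to the certainty-equivalent response
% r = -u/\bar{k}: the larger \sigma_k^2, the more cautious the optimal policy.
```

    Uncertainty about inflation dynamics pushes the other way, toward a more aggressive response, which is why the two types of uncertainty yield such different prescriptions in the abstract.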

    Exploration vs Exploitation vs Safety: Risk-averse Multi-Armed Bandits

    Motivated by applications in energy management, this paper presents the Multi-Armed Risk-Aware Bandit (MARAB) algorithm. With the goal of limiting the exploration of risky arms, MARAB takes as its measure of arm quality the arm's conditional value at risk (CVaR). As the user-supplied risk level goes to 0, the arm quality tends toward the essential infimum of the arm distribution, and MARAB tends toward the MIN multi-armed bandit algorithm, which targets the arm with the maximal minimal value. As a first contribution, this paper presents a theoretical analysis of the MIN algorithm under mild assumptions, establishing its robustness relative to UCB. The analysis is supported by extensive experimental validation of MIN and MARAB against UCB and state-of-the-art risk-aware MAB algorithms on artificial and real-world problems.
    Comment: 16 pages
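    A minimal sketch of the CVaR-based selection idea, assuming rewards where higher is better; the exploration bonus and estimator below are generic stand-ins, not the paper's exact MARAB specification:

```python
import numpy as np

def empirical_cvar(rewards, alpha):
    """Mean of the worst alpha-fraction of observed rewards (lower tail)."""
    ordered = np.sort(rewards)
    k = max(1, int(np.ceil(alpha * len(ordered))))
    return ordered[:k].mean()

def select_arm(history, alpha, t, c=1.0):
    """Pick the arm with the best risk-aware score at round t.

    history: one array of observed rewards per arm; unplayed arms go first.
    The UCB-style bonus is illustrative, not the paper's confidence term.
    """
    scores = []
    for i, rewards in enumerate(history):
        if len(rewards) == 0:
            return i                      # play every arm once before scoring
        bonus = c * np.sqrt(np.log(t) / len(rewards))
        scores.append(empirical_cvar(np.asarray(rewards), alpha) + bonus)
    return int(np.argmax(scores))
```

    As alpha shrinks toward 0, the score reduces to the minimum observed reward, recovering the MIN behaviour described in the abstract.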

    Approximate Models and Robust Decisions

    Decisions based partly or solely on predictions from probabilistic models may be sensitive to model misspecification. Statisticians are taught from an early stage that "all models are wrong", but little formal guidance exists on how to assess the impact of model approximation on decision making, or how to proceed when optimal actions appear sensitive to model fidelity. This article presents an overview of recent developments across different disciplines that address these questions. We review diagnostic techniques, including graphical approaches and summary statistics, that help highlight when decisions made by minimising expected loss are sensitive to model misspecification. We then consider formal methods for decision making under model misspecification that quantify the stability of optimal actions under perturbations of the model within a neighbourhood of model space. This neighbourhood is defined in one of two ways: either in a strong sense, via an information (Kullback-Leibler) divergence around the approximating model, or via a nonparametric model extension, again centred at the approximating model, in order to "average out" over possible misspecifications. This is presented in the context of recent work in the robust control, macroeconomics, and financial mathematics literature. We adopt a Bayesian approach throughout, although the methods are agnostic to this position.
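    For the KL-neighbourhood variant, the worst-case expected loss over all models within divergence eps of the approximating model has a well-known one-dimensional dual form. The sketch below is a generic Monte Carlo illustration of that idea, assuming samples of the loss drawn under the approximating model; it is not the authors' implementation:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def worst_case_loss(losses, eps):
    """sup of E_Q[loss] over all Q with KL(Q || P) <= eps, estimated from
    samples of the loss drawn under the approximating model P, via the dual
        inf_{t > 0}  t * eps + t * log E_P[exp(loss / t)].
    """
    losses = np.asarray(losses, dtype=float)

    def dual(log_t):
        t = np.exp(log_t)                 # optimise over log t to keep t > 0
        # numerically stable log-mean-exp of losses / t
        lme = np.logaddexp.reduce(losses / t) - np.log(len(losses))
        return t * eps + t * lme

    return minimize_scalar(dual).fun

# Losses simulated under the approximating model (illustrative numbers)
rng = np.random.default_rng(0)
losses = rng.normal(loc=1.0, scale=0.5, size=10_000)
print(worst_case_loss(losses, eps=0.1))   # exceeds the nominal mean of ~1.0
```

    Comparing this worst-case value across candidate actions is one concrete way to flag decisions that are stable, or fragile, under misspecification.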

    Post-selection point and interval estimation of signal sizes in Gaussian samples

    We tackle the problem of estimating a vector of means from a single vector-valued observation y. Whereas previous work reduces the size of the estimates of the largest (absolute) sample elements via shrinkage (as in James-Stein) or via biases estimated by empirical Bayes methodology, we take a novel approach. We adapt recent developments by Lee et al (2013) in post-selection inference for the Lasso to the orthogonal setting, where sample elements have different underlying signal sizes. This is exactly the setup encountered when estimating many means. It is shown that other selection procedures, such as selecting the K largest (absolute) sample elements and the Benjamini-Hochberg procedure, can be cast into their framework, allowing us to leverage their results. Point and interval estimates for signal sizes are proposed. These seem to perform quite well against competitors, both recent and long-established. Furthermore, we prove an upper bound on the worst-case risk of our estimator, when combined with the Benjamini-Hochberg procedure, and show that it is within a constant multiple of the minimax risk over a rich set of parameter spaces meant to evoke sparsity.
    Comment: 27 pages, 13 figures
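    In the orthogonal setting, the Lee et al (2013) machinery reduces to working with a truncated Gaussian: conditional on an element being selected because it cleared a threshold, its distribution is a normal truncated to the selection region, and intervals follow by inverting the truncated CDF in the mean. A minimal sketch for the two-sided rule "select if |y_i| > c"; the threshold rule, helper names, and unit variance are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def trunc_cdf(y, mu, c, sigma=1.0):
    """CDF at y of N(mu, sigma^2) truncated to {|Y| > c}, for y > c."""
    num = (norm.cdf((y - mu) / sigma) - norm.cdf((c - mu) / sigma)
           + norm.cdf((-c - mu) / sigma))
    den = 1.0 - norm.cdf((c - mu) / sigma) + norm.cdf((-c - mu) / sigma)
    return num / den

def selective_interval(y, c, alpha=0.05, sigma=1.0, span=20.0):
    """Equal-tailed interval for mu that is valid conditional on |y| > c."""
    lo = brentq(lambda mu: trunc_cdf(y, mu, c, sigma) - (1 - alpha / 2),
                y - span, y + span)
    hi = brentq(lambda mu: trunc_cdf(y, mu, c, sigma) - alpha / 2,
                y - span, y + span)
    return lo, hi

# An observation that barely cleared the cutoff gets a wide, shifted interval
print(selective_interval(y=2.5, c=2.0))
```

    The median of the same conditional distribution gives one natural post-selection point estimate in this framework.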