
    Operational risk management and new computational needs in banks

    Basel II banking regulation introduces new needs for computational schemes. These involve both optimal stochastic control and large-scale simulation of decision processes for preventing low-frequency, high-impact loss events. This paper first states the problem and presents its parameters. It then spells out the equations that represent rational risk-management behavior and link the variables together: Lévy processes are used to model operational risk losses where calibration against historical loss databases is possible; where it is not, qualitative variables such as the quality of the business environment and of internal controls can capture both cost-side and profit-side impacts. Other control variables include the business growth rate and the efficiency of risk mitigation. The economic value of a policy is maximized by solving the resulting Hamilton-Jacobi-Bellman (HJB) type equation. Computational complexity arises from embedded interactions between three levels:
    * programming a globally optimal dynamic expenditure budget in the Basel II context;
    * arbitraging between the cost of risk-reduction policies (as measured by organizational qualitative scorecards and insurance buying) and the impact of the incurred losses themselves, which implies modeling the efficiency of the process through which forward-looking measures of threat minimization actually reduce stochastic losses;
    * optimally allocating capital according to profitability across subsidiaries and business lines.
    The paper next reviews the different types of approaches that can be envisaged for deriving a sound budgetary policy for operational risk management from this HJB equation. It is argued that while this complex, high-dimensional problem can be solved under the usual simplifications (Galerkin approach, imposing Merton-form solutions, viscosity approach, ad hoc utility functions that yield closed-form solutions, etc.), the main interest of the model lies in exploring scenarios within an adaptive learning framework (MDP, partially observed MDP, Q-learning, neuro-dynamic programming, greedy algorithms, etc.). This makes more sense from a management point of view, and solutions are more easily communicated to, and accepted by, operational-level staff in banks through the explicit scenarios that can be derived. This kind of approach combines computational techniques such as POMDPs, stochastic control theory, and learning algorithms under uncertainty and incomplete information. The paper concludes by presenting the benefits of such a consistent computational approach to managing budgets, as opposed to an operational risk management policy made up of disconnected expenditures. Such consistency satisfies the qualifying criteria for banks to apply for the AMA (Advanced Measurement Approach), which allows large savings in the regulatory capital charge under the Basel II Accord.
    Keywords: operational risk management, HJB equation, Lévy processes, budget optimization, capital allocation
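
    The abstract does not reproduce the equation itself, so the following is only an illustrative sketch of the kind of HJB equation that arises when operational losses follow a jump (Lévy) process and a control u (mitigation expenditure, growth rate, insurance) is chosen to maximize discounted economic value; the notation is generic, not the paper's.

        % Generic HJB equation for a controlled jump-diffusion; illustrative only.
        \rho V(x) = \max_{u \in U} \Big\{ f(x,u) + b(x,u)\,V'(x)
            + \tfrac{1}{2}\,\sigma^{2}(x,u)\,V''(x)
            + \lambda(x,u) \int \big[ V(x - z) - V(x) \big]\, \nu(\mathrm{d}z) \Big\}

    Here ρ is the discount rate, f the running profit net of risk-management expenditure, b and σ the drift and diffusion of the state, and λ, ν the intensity and size distribution of operational loss jumps, the element that the Lévy-process modeling above would calibrate.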

    Why Inflation Rose and Fell: Policymakers' Beliefs and US Postwar Stabilization Policy

    This paper provides an explanation for the run-up of U.S. inflation in the 1960s and 1970s and the sharp disinflation in the early 1980s, which standard macroeconomic models have difficulty addressing. I present a model in which rational policymakers learn about the behavior of the economy in real time and set stabilization policy optimally, conditional on their current beliefs. The steady state associated with the self-confirming equilibrium of the model is characterized by low inflation. However, prolonged episodes of high inflation ending with rapid disinflations can occur when policymakers underestimate both the natural rate of unemployment and the persistence of inflation in the Phillips curve. I estimate the model using likelihood methods. The estimation results show that the model accounts remarkably well for the evolution of policymakers' beliefs, stabilization policy, and the postwar behavior of inflation and unemployment in the United States.
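
    As a rough illustration of the real-time belief updating described above (not the paper's estimation code), the sketch below applies constant-gain recursive least squares to a perceived Phillips curve; the regression specification, gain value, and variable names are assumptions made for the example.

        import numpy as np

        # Hypothetical perceived Phillips curve: pi_t = b0 + b1*u_t + b2*pi_{t-1} + e_t.
        # Constant-gain recursive least squares updates the policymaker's beliefs
        # about (b0, b1, b2) each period as new inflation data arrive.

        gain = 0.02            # constant learning gain (illustrative value)
        beta = np.zeros(3)     # beliefs: intercept, slope on unemployment, inflation persistence
        R = np.eye(3)          # estimate of the regressors' second-moment matrix

        def update_beliefs(pi_t, u_t, pi_lag, beta, R):
            """One period of constant-gain recursive least-squares learning."""
            x = np.array([1.0, u_t, pi_lag])         # regressor vector
            R = R + gain * (np.outer(x, x) - R)      # update the moment matrix
            forecast_error = pi_t - x @ beta         # surprise in observed inflation
            beta = beta + gain * np.linalg.solve(R, x) * forecast_error
            return beta, R

    Policy is then set optimally each period given the current beta, which is the sense in which decisions are conditional on beliefs; in this sketch, underestimating the natural rate or inflation persistence would correspond to persistently biased entries of beta.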

    A Novel Predictive-Coding-Inspired Variational RNN Model for Online Prediction and Recognition

    This study introduces PV-RNN, a novel variational RNN inspired by predictive-coding ideas. The model learns to extract the probabilistic structures hidden in fluctuating temporal patterns by dynamically changing the stochasticity of its latent states. Its architecture attempts to address two major concerns of variational Bayes RNNs: how latent variables can learn meaningful representations and how the inference model can transfer future observations to the latent variables. PV-RNN does both by introducing adaptive vectors mirroring the training data, whose values can then be adapted differently during evaluation. Moreover, prediction errors during backpropagation, rather than external inputs during the forward computation, are used to convey information to the network about the external data. For testing, we introduce an error-regression scheme for predicting unseen sequences, inspired by predictive coding, that leverages these mechanisms. The model introduces a weighting parameter, the meta-prior, to balance the optimization pressure placed on the two terms of a lower bound on the marginal likelihood of the sequential data. We test the model on two datasets with probabilistic structures and show that with high values of the meta-prior the network develops deterministic chaos through which the data's randomness is imitated. For low values, the model behaves as a random process. The network performs best at intermediate values and is able to capture the latent probabilistic structure with good generalization. Analyzing the meta-prior's impact on the network allows us to study precisely the theoretical value and practical benefits of incorporating stochastic dynamics in our model. We demonstrate better prediction performance on a robot imitation task with our model using error regression than with a standard variational Bayes model lacking such a procedure.
    Comment: The paper is accepted in Neural Computation
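
    A minimal sketch (not the authors' implementation) of the weighted objective described above: the meta-prior w balances the reconstruction term against the KL term of the per-step lower bound. The Gaussian parameterization and variable names are illustrative assumptions.

        import torch
        from torch.distributions import Normal, kl_divergence

        def step_loss(x_t, x_pred, q_mu, q_logvar, p_mu, p_logvar, w):
            """Negative per-step lower bound: reconstruction + w * KL(posterior || prior)."""
            posterior = Normal(q_mu, torch.exp(0.5 * q_logvar))  # inference distribution over z_t
            prior = Normal(p_mu, torch.exp(0.5 * p_logvar))      # learned prior over z_t
            recon = torch.sum((x_t - x_pred) ** 2)               # prediction (reconstruction) error
            kl = torch.sum(kl_divergence(posterior, prior))      # divergence between posterior and prior
            return recon + w * kl                                # meta-prior w trades off the two terms

    In this sketch, a large w pushes the posterior toward the prior so the dynamics become effectively deterministic, while a small w lets the latent states absorb the data's variability, matching the two regimes the abstract reports at high and low meta-prior values.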

    Neural Network Dynamics for Model-Based Deep Reinforcement Learning with Model-Free Fine-Tuning

    Model-free deep reinforcement learning algorithms have been shown to be capable of learning a wide range of robotic skills, but typically require a very large number of samples to achieve good performance. Model-based algorithms, in principle, can provide for much more efficient learning, but have proven difficult to extend to expressive, high-capacity models such as deep neural networks. In this work, we demonstrate that medium-sized neural network models can in fact be combined with model predictive control (MPC) to achieve excellent sample complexity in a model-based reinforcement learning algorithm, producing stable and plausible gaits to accomplish various complex locomotion tasks. We also propose using deep neural network dynamics models to initialize a model-free learner, in order to combine the sample efficiency of model-based approaches with the high task-specific performance of model-free methods. We empirically demonstrate on MuJoCo locomotion tasks that our pure model-based approach trained on just random action data can follow arbitrary trajectories with excellent sample efficiency, and that our hybrid algorithm can accelerate model-free learning on high-speed benchmark tasks, achieving sample efficiency gains of 3-5x on swimmer, cheetah, hopper, and ant agents. Videos can be found at https://sites.google.com/view/mbm
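
    A hedged sketch of the kind of planning loop such a model-based approach uses: a learned dynamics model is queried inside a simple random-shooting model predictive controller. The function names, horizon, candidate count, and action bounds below are placeholders, not the paper's actual settings.

        import numpy as np

        def mpc_action(state, dynamics_model, reward_fn, action_dim,
                       horizon=10, n_candidates=1000):
            """Return the first action of the best randomly sampled action sequence."""
            # Sample candidate action sequences uniformly (assumed action bounds [-1, 1]).
            candidates = np.random.uniform(-1.0, 1.0,
                                           size=(n_candidates, horizon, action_dim))
            returns = np.zeros(n_candidates)
            states = np.repeat(state[None, :], n_candidates, axis=0)
            for t in range(horizon):
                actions = candidates[:, t, :]
                next_states = dynamics_model(states, actions)        # batched one-step prediction
                returns += reward_fn(states, actions, next_states)   # accumulate predicted reward
                states = next_states
            best = np.argmax(returns)
            return candidates[best, 0, :]   # execute only the first action, then replan

    Replanning every step and executing only the first action limits the damage from model error; the same learned model, or the MPC controller built on it, can then be used to initialize a model-free learner, as the hybrid algorithm described above does.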

    Inflation Targeting and Q Volatility in Small Open Economies

    This paper examines the welfare implications of managing Q alongside inflation targeting by monetary authorities who have to "learn" the laws of motion of both inflation and the rate of growth of Q. Our results show that the central bank can achieve great success in reducing the volatility of GDP growth with essentially the same inflation volatility if it incorporates this additional target into its policy regime. However, the welfare effects, in terms of consumption, are generally lower when the monetary authority reacts to Q growth as well as to inflation.
    Keywords: Tobin's Q, monetary policy, learning
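
    The abstract does not state the policy rule, so the following is only a stylized example of how a reaction to Q growth can be appended to a standard inflation-targeting rule; the functional form and coefficients are assumptions.

        % Stylized augmented policy rule; illustrative only.
        i_t = \bar{r} + \pi_t + \phi_{\pi}\,(\pi_t - \pi^{\ast}) + \phi_{q}\,\Delta q_t

    Here i_t is the nominal policy rate, \bar{r} the equilibrium real rate, \pi^{\ast} the inflation target, \Delta q_t the growth rate of Tobin's Q, and \phi_{q} \ge 0 the additional reaction to Q growth whose stabilization and welfare effects the paper evaluates.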

    The dynamic effects of currency union on trade

    A currency union’s ability to increase international trade is one of the most debated questions in international macroeconomics. This paper studies the dynamics of these trade effects over time. First, empirical work with data from the European Monetary Union finds that the extensive margin of trade (entry of new firms or goods) responds several years ahead of overall trade volume and of the actual implementation of the monetary union. This implies a fall in the intensive margin (previously traded goods) in the run-up to EMU. A dynamic stochastic general equilibrium model of trade then studies the announcement of a future monetary union as a news shock lowering future trade costs, and finds that the early entry of new firms in anticipation can be explained as a rational forward-looking response under certain conditions; the required elements are sunk costs of exporting and ex-ante heterogeneity among firms. The findings help identify which types of trading frictions are reduced by adopting a currency union. They also indicate that a significant fraction of the welfare gains from a monetary union rests on expectations for the future, so that continued gains depend on the long-term credibility of the union.
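
    A stylized version (not the paper's exact formulation) of the sunk-cost entry condition behind the anticipatory entry: once a future reduction in trade costs is announced, the expected discounted stream of export profits can already cover the sunk cost today. The symbols below are generic placeholders.

        % Stylized export-entry condition under anticipated lower trade costs.
        \mathbb{E}_t \sum_{s=t}^{\infty} \beta^{\,s-t}\, \pi^{x}_{s}(z;\tau_s) \;\geq\; f_X

    Here \beta is the discount factor, \pi^{x}_{s}(z;\tau_s) the period-s export profit of a firm with productivity z under trade cost \tau_s (expected to fall once the union takes effect), and f_X the sunk cost of starting to export; ex-ante heterogeneity in z determines which firms find it worthwhile to enter early.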

    On the Solution of Markov-switching Rational Expectations Models

    This paper describes a method for solving a class of forward-looking Markov-switching rational expectations models under noisy measurement, by specifying the unobservable expectations component as a general measurable function of the observable states of the system, to be determined optimally via stochastic control and filtering theory. Existence of a solution is proved by setting this function equal to the regime-dependent feedback control that minimizes the mean-square deviation of the equilibrium path from the corresponding perfect-foresight autoregressive Markov-jump state motion. Since the exact expression of the conditional (rational) expectations term is derived in both finite- and infinite-horizon model formulations, no (asymptotic) stationarity assumptions are needed to solve the system forward; only knowledge of initial values is required. A simple sufficient condition for the mean-square stability of the resulting rational expectations equilibrium is also provided.
    Keywords: rational expectations, Markov-switching dynamic systems, dynamic programming, time-varying Kalman filter
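
    To make the setup concrete, here is an assumed, illustrative statement of the class of models and of the solution form described above; the matrices, timing convention, and symbols are not taken from the paper.

        % Illustrative Markov-switching rational expectations system and
        % regime-dependent feedback solution; notation is assumed.
        x_t = A(s_t)\,\mathbb{E}_t x_{t+1} + B(s_t)\,x_{t-1} + C(s_t)\,\varepsilon_t,
        \qquad s_t \in \{1,\dots,N\} \ \text{a Markov chain},
        \qquad \mathbb{E}_t x_{t+1} = F(s_t)\,x_t .

    The feedback matrices F(s) would be chosen to minimize the mean-square deviation of the implied equilibrium path from the perfect-foresight Markov-jump autoregressive motion, with the state estimated from noisy measurements by a time-varying Kalman filter.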