Cross-border lending contagion in multinational banks
We study, both theoretically and empirically, the interdependence of lending decisions in different country branches of a multinational bank. First, we model a bank that delegates the management of its foreign unit to a local manager with non-transferable skills. The bank differs from other international investors due to a liquidity threshold which, if reached, induces a depositor run and regulatory action. A separate channel of shock propagation exists since lending decisions are influenced by delegation and precautionary motives. This can entail “contagion”, i.e. parallel reactions of the loan volumes in both countries to a disturbance in the parent bank's home country. Second, we look for the presence of lending contagion by panel regression methods in a large sample of multinational banks and their affiliates. We find that the majority of multinational banks behave in line with the contagion effect. In addition, the presence of contagion seems to be related to the geographical location of subsidiaries. JEL Classification: F37, G21, G28, G31. Keywords: delegation, diversification, lending contagion, multinational bank, panel regression
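As an illustration of the kind of panel regression used in the empirical part, the sketch below runs a within (fixed-effects) estimator on simulated bank-level data; the data-generating process, variable names and coefficient are all hypothetical, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(8)

# Hypothetical panel: loan growth of affiliate i at time t responds to a
# parent-country shock with coefficient beta_true, plus a bank fixed effect.
N, T, beta_true = 50, 10, 0.6
shock = rng.standard_normal((N, T))
alpha_i = rng.standard_normal(N)[:, None]        # bank fixed effects
loans = alpha_i + beta_true * shock + 0.5 * rng.standard_normal((N, T))

# Within transformation: demean each bank's series to sweep out the fixed
# effect, then run pooled OLS on the demeaned data.
x = (shock - shock.mean(axis=1, keepdims=True)).ravel()
y = (loans - loans.mean(axis=1, keepdims=True)).ravel()
beta_hat = float(x @ y / (x @ x))
print(f"within estimate of contagion coefficient: {beta_hat:.2f}")
```

The demeaning step is what distinguishes the within estimator from pooled OLS: any time-invariant bank characteristic drops out before the slope is estimated.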
Neural networks in economic modelling: An empirical study
This dissertation addresses the statistical aspects of neural networks and their usability for solving problems in economics and finance. Neural networks are discussed in a framework of modelling which is generally accepted in econometrics. Within this framework a neural network is regarded as a statistical technique that implements a model-free regression strategy. Model-free regression seems particularly useful in situations where economic theory cannot provide sensible model specifications. Neural networks are applied in three case studies: modelling house prices; predicting the production of new mortgage loans; and predicting foreign exchange rates. From these case studies it is concluded that neural networks are a valuable addition to the econometrician's toolbox, but that they are no panacea.
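To make the model-free regression idea concrete, here is a minimal one-hidden-layer network fitted by gradient descent to a toy nonlinear relationship; everything about the setup (data, architecture, learning rate) is illustrative and not drawn from the dissertation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a nonlinear relationship with noise, standing in for e.g. a
# house-price surface. No functional form is assumed for the regression.
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)

# One hidden layer of tanh units: the flexible basis that makes the
# approach "model-free" in the econometric sense.
H = 16
W1 = rng.standard_normal((1, H)) * 0.5
b1 = np.zeros(H)
W2 = rng.standard_normal(H) * 0.5
b2 = 0.0
lr = 0.1

for _ in range(3000):
    Z = np.tanh(X @ W1 + b1)            # hidden activations
    yhat = Z @ W2 + b2                  # network output
    err = yhat - y
    # Backpropagation of the mean squared error.
    gW2 = Z.T @ err / len(y)
    gb2 = err.mean()
    dZ = np.outer(err, W2) * (1 - Z**2)
    gW1 = X.T @ dZ / len(y)
    gb1 = dZ.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse = float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2))
print(f"in-sample MSE: {mse:.3f}")
```

The fitted MSE should fall well below the raw variance of y, illustrating that the network recovers the unknown nonlinearity without a specified model.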
Model predictive controller tuning by machine learning and ordinal optimisation
While for the past several decades model predictive control (MPC) has been an established control strategy in chemical process industries, more recently there has been increased collaboration in MPC research between academia and automotive companies. Despite the promising work thus far, one particular challenge facing the widespread adoption of MPC in the automotive industry is the increased calibration requirement. The focus of the research in this thesis is to develop methods towards reducing the calibration effort in designing and implementing MPC in practice. The research is tailored by application to offline tuning of quadratic-cost MPC for an automotive diesel air-path, to address the limited time-availability to perform online tuning experiments.
Human preferences can be influential in automotive engine controller tuning. Some earlier work has proposed a machine learning controller tuning framework (MLCTF), which learns preferences from numeric data labelled by human experts, and as such, these learned preferences can be replicated in automated offline tuning. Work done in this thesis extends this capability by allowing for preferences to be learned from pairwise comparison data, with monotonicity constraints in the features. Two methods are proposed to address this: 1) an algorithm based around Gaussian process regression; and 2) a Bayesian estimation procedure using a Dirichlet prior. These methods are successfully demonstrated in learning monotonicity-constrained utility functions in time-domain features from data consisting of pairwise rankings for diesel air-path trajectories.
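A toy version of learning a monotone utility from pairwise comparisons can be sketched as a Bradley-Terry model with a nonnegativity (projection) constraint on the feature weights; this is a simplification of the thesis's Gaussian-process and Bayesian procedures, and all features and numbers below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical setup: each trajectory is summarised by two features (say,
# overshoot and settling time) where *lower* is better, so the utility
# u(x) = -x @ w with w >= 0 is monotonically decreasing in each feature.
n = 60
Xf = rng.uniform(0, 1, size=(n, 2))
true_w = np.array([2.0, 1.0])
u_true = -Xf @ true_w

# Generate pairwise comparisons with Bradley-Terry choice noise.
pairs = []
for _ in range(400):
    i, j = rng.integers(n), rng.integers(n)
    if i == j:
        continue
    p = 1.0 / (1.0 + np.exp(-(u_true[i] - u_true[j])))
    pairs.append((i, j) if rng.random() < p else (j, i))

# Projected gradient ascent on the Bradley-Terry log-likelihood; the
# projection onto w >= 0 enforces monotonicity of the learned utility.
w = np.ones(2)
lr = 0.1
for _ in range(300):
    g = np.zeros(2)
    for i, j in pairs:                 # i was preferred over j
        d = -(Xf[i] - Xf[j])           # utility difference direction
        p = 1.0 / (1.0 + np.exp(-(d @ w)))
        g += (1.0 - p) * d
    w += lr * g / len(pairs)
    w = np.maximum(w, 0.0)             # projection keeps utility monotone

print("learned weights:", w.round(2))
```

With enough comparisons the learned weights recover the relative importance of the two features while never violating the monotonicity constraint.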
The MLCTF also relies on a plant model, yet there will typically be some uncertainty in an engine model, especially if it has been identified from data collected within a limited amount of experimentation time. To address this, an active learning framework is proposed for selecting the next operating points in the design of experiments for identifying linear parameter-varying systems. The approach exploits the probabilistic features of Gaussian process regression to quantify the overall model uncertainty across locally identified models, resulting in a flexible methodology which accommodates various techniques for estimating the local linear models and their corresponding uncertainty. The framework is applied to the identification of a diesel engine air-path model, and it is demonstrated that measures of model uncertainty can be quantified and subsequently reduced.
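The variance-driven selection of the next operating point can be sketched with a small hand-rolled Gaussian process: fit a GP to the points visited so far and propose the candidate with the highest posterior variance. The kernel, data and grid below are illustrative assumptions, not the thesis's engine model.

```python
import numpy as np

def rbf(a, b, ell=1.0, sf=1.0):
    # Squared-exponential (RBF) kernel on 1-D inputs.
    d = a[:, None] - b[None, :]
    return sf**2 * np.exp(-0.5 * (d / ell) ** 2)

# Operating points already visited (1-D stand-in for engine speed/load).
X = np.array([0.0, 1.0, 4.0])
y = np.sin(X)              # local-model parameter observed at each point
noise = 1e-4

K = rbf(X, X) + noise * np.eye(len(X))
L = np.linalg.cholesky(K)

# Candidate grid for the next experiment.
Xs = np.linspace(0.0, 5.0, 101)
Ks = rbf(X, Xs)
v = np.linalg.solve(L, Ks)
var = rbf(Xs, Xs).diagonal() - np.sum(v**2, axis=0)  # GP posterior variance

# Active learning rule: experiment next where the model is least certain.
x_next = Xs[np.argmax(var)]
print(f"next operating point: {x_next:.2f}")
```

With visited points at 0, 1 and 4, the posterior variance peaks in the unexplored gap around 2.5, which is exactly where this rule sends the next experiment.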
To make the most of the limited availability for online tuning experiments, an ordinal optimisation (OO) approach is proposed, which seeks to ensure that offline tuned controllers can perform acceptably well, once tested online with the physical system. Via the use of copula models, an OO problem is formulated to be compatible with the tuning of controllers over an uncountable search space, such as quadratic-cost MPC. In particular, results are obtained which formally characterise the copula dependence conditions required for the OO success probability to be non-decreasing in the number of offline controllers sampled during OO.
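A Monte Carlo sketch of the OO intuition: when offline and online controller scores are linked through a Gaussian copula, the probability that the offline-best controller also performs well online grows with the number of offline samples. The correlation, threshold and trial counts below are arbitrary illustrations, not the formal dependence conditions derived in the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)

def success_prob(n, rho=0.7, tau=1.0, trials=4000):
    """Monte Carlo estimate of P(the offline-best of n controllers has an
    online score of at least tau), with offline/online scores linked by a
    Gaussian copula with correlation rho."""
    L = np.linalg.cholesky(np.array([[1.0, rho], [rho, 1.0]]))
    wins = 0
    for _ in range(trials):
        z = rng.standard_normal((n, 2)) @ L.T    # correlated (offline, online)
        offline, online = z[:, 0], z[:, 1]
        wins += online[np.argmax(offline)] >= tau
    return wins / trials

probs = [success_prob(n) for n in (5, 20, 80)]
print(probs)
```

Because offline and online scores are positively dependent, sampling more controllers offline pushes the estimated success probability upward, mirroring the non-decreasing property characterised in the thesis.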
A gain-scheduled MPC architecture was designed for the diesel air-path and implemented on an engine control unit (ECU). The aforementioned non-decreasing properties of the OO success probability are then specialised to the tuning of gain-scheduled controller architectures. Informed by these developments, the MPC architecture was first tuned offline via OO, and then tested online with an experimental diesel engine test rig over various engine drive-cycles. In the experimental results, it was found that some offline tuned controllers outperformed a manually tuned baseline MPC, which itself has performance comparable to proprietary production controllers. Upon additional manual tuning online, the performance of the offline tuned controllers could be further refined, which illustrates how offline tuning via OO may complement online tuning approaches.
Lastly, using an analytic lower bound developed for OO under a Gaussian copula model, a sequential learning algorithm is developed to address a probabilistically robust offline controller tuning problem. The algorithm is formally proven to yield a controller which meets a specified probabilistic performance specification, provided the underlying copula is not too unfavourably far from a Gaussian copula. A simulation study demonstrates that the algorithm successfully tunes a single controller to meet a desired performance threshold, even in the presence of probabilistic uncertainty in the diesel engine model. The approach is applied to two case studies: 1) `hot-starting' an online tuning procedure; and 2) tuning for uncertainty inherent across a fleet of vehicles.
Four Essays in Econometrics and Macroeconomics
Chapter 1 proposes simple and robust diagnostic tests for spatial dependence, specifically for spatial error autocorrelation and spatial lag dependence. The idea of our tests is to reformulate the testing problem such that the outer product of gradients (OPG) variant of the LM test can be employed. Our versions of the tests are based on simple auxiliary regressions, where ordinary regression t- and F-statistics can be used to test for spatial autocorrelation and lag dependence. Monte Carlo simulations show that while, under homoskedasticity, our tests perform similarly to the established LM tests, the latter suffer from severe size distortions under heteroskedasticity. Our approach therefore gives practitioners an easy-to-implement and robust alternative to existing tests. Chapter 2 proposes various tests for serial correlation in fixed-effects panel data regression models with a small number of time periods. First, a simplified version of the test for serial correlation suggested by Wooldridge (2002) and Drukker (2003) is considered. The second test is based on the LM statistic suggested by Baltagi and Li (1995), and the third test is a modification of the classical Durbin-Watson statistic. Under the null hypothesis of no serial correlation, all tests possess a standard normal limiting distribution as N → ∞ with T fixed. Analyzing the local power of the tests, we find that the LM statistic has superior power properties. Furthermore, a generalization to test for autocorrelation up to some given lag order and a test statistic that is robust against time-dependent heteroskedasticity are proposed. In chapter 3, we analyze the role of policy risk in explaining business cycle fluctuations by using an estimated New Keynesian model featuring policy risk as well as uncertainty about technology. The aftermath of the financial and economic crisis is clearly characterized by extraordinary uncertainty regarding U.S. economic policy.
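The simplified Wooldridge-Drukker test of Chapter 2 exploits the fact that, under the null of no serial correlation, first-differenced errors have first-order autocorrelation of exactly -0.5. A minimal simulation of that mechanic (panel dimensions and the no-intercept regression are illustrative choices, not the chapter's full procedure):

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate a small fixed-T panel with i.i.d. errors, i.e. the null of no
# serial correlation. First differencing then induces an MA(1) with
# corr(Δe_t, Δe_{t-1}) = -0.5, which is the quantity the test checks.
N, T = 500, 6
e = rng.standard_normal((N, T))
de = np.diff(e, axis=1)              # first differences, shape (N, T-1)

x = de[:, :-1].ravel()               # lagged differenced error
y = de[:, 1:].ravel()                # current differenced error
beta = float(x @ y / (x @ x))        # no-intercept pooled OLS slope
print(f"slope on lagged difference: {beta:.3f} (≈ -0.5 under the null)")
```

In practice the regression is run on residuals from the first-differenced model and the hypothesis beta = -0.5 is tested with a cluster-robust t-statistic; the simulation above only verifies the -0.5 benchmark itself.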
Hence, the argument that policy risk, i.e. uncertainty about monetary and fiscal policy, has been holding back the economic recovery in the U.S. during the Great Recession has large popular appeal. But the empirical literature is still inconclusive with respect to the aggregate effects of (mostly TFP) uncertainty: studies using different proxies and identification schemes to uncover the effects of uncertainty produce a variety of results. We directly measure uncertainty from aggregate time series using Sequential Monte Carlo methods. While we find considerable evidence of policy risk in the data, we show that the "pure uncertainty" effect of policy risk is unlikely to play a major role in business cycle fluctuations. In the estimated model, output effects are relatively small due to i) dampening general equilibrium effects that imply a low amplification and ii) counteracting partial effects of uncertainty. Finally, we show that policy risk has effects that are an order of magnitude larger than those of uncertainty about aggregate TFP. Central banks regularly communicate about financial stability issues, by publishing Financial Stability Reports (FSRs) and through speeches and interviews. Chapter 4 asks how such communications affect financial markets. For that purpose, we construct a unique and novel database on central bank communication comprising more than 1000 releases of FSRs and speeches/interviews by central bank governors from 37 central banks over the period from 1996 to 2009, i.e. spanning nearly one and a half decades. The degree of optimism expressed in these communications is determined using computerized textual-analysis software. We then use an event study approach to analyze how financial sector stock indices react to the release of such communication.
The findings suggest that FSRs have a significant and potentially long-lasting effect on stock market returns. At the same time, they tend to reduce stock market volatility. Speeches and interviews, in contrast, have little effect on market returns and do not generate a volatility reduction during tranquil times. However, they had a substantial effect during the 2007-10 financial crisis. It seems that financial stability communication by central banks is perceived by markets to contain relevant information, underlining the importance of differentiating between communication tools, their content, and the environment in which they are employed.
Temporal Issues in Market Inefficiency in Asset Prices with an Emphasis on Commodities
This summary provides an overview of the contributions made in this thesis to the literature. No references are included in the summary; these can be found in the Bibliography on page 156. This dissertation consists of 6 chapters. The first chapter acts as an introduction to the thesis and discusses the central theme of the dissertation along with providing a preview of what to expect in the following chapters. The contributions of the different chapters vary. Chapter 2 is a more introductory chapter and its contributions are perhaps less consequential than those in Chapters 3-5.
Chapter 2 makes contributions to the literature on testing for explosive roots or bubbles. By modifying the Bhargava test statistic, we show in Chapter 2 that the Bhargava test can address earlier criticisms levelled against it, namely that it has low power when multiple bubbles are present in a particular series. By introducing a rolling window approach, we show that the modified Bhargava test statistic achieves better power. We compare and contrast the power of the modified test with that of the GSADF test statistic, which has recently become popular in the bubble-testing literature. Another contribution made in this chapter is the application of these tests to a data set comprising 25 commodities. At the time of writing, this was believed to be the first attempt to perform bubble testing on a comprehensive commodity data set. Since commodities are often deemed to be targets of speculative behaviour, they are a natural universe for testing notions of market efficiency, as they tend to go through different regimes through natural economic processes. Using both tests we are able to detect bubbles in similar periods, with most of them concentrated around the two oil price crises (1972-73 and 1979-80) and the financial crisis (2005-2007). Our conclusion is that the modified Bhargava statistic works better than the original statistic and can be used to complement the results of other statistics.
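The rolling-window idea can be sketched with a right-tailed Dickey-Fuller-type statistic computed over every window and then maximised, in the spirit (though not the exact form) of the modified Bhargava and GSADF tests; the series length, window size and grafted explosive episode below are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def df_tstat(y):
    # Right-tailed Dickey-Fuller t-statistic for Δy_t = ρ·y_{t-1} + ε_t;
    # large positive values signal an explosive root.
    dy, ylag = np.diff(y), y[:-1]
    rho = ylag @ dy / (ylag @ ylag)
    resid = dy - rho * ylag
    se = np.sqrt((resid @ resid) / (len(dy) - 1) / (ylag @ ylag))
    return rho / se

def rolling_sup_stat(y, window=30):
    # Sup of the statistic over rolling windows: an explosive episode
    # anywhere in the sample can push the sup upward, which is what gives
    # rolling schemes power against multiple or localised bubbles.
    return max(df_tstat(y[s:s + window]) for s in range(len(y) - window + 1))

T = 200
rw = np.cumsum(rng.standard_normal(T))     # efficient benchmark: random walk
bubble = rw.copy()
bubble[120] = 10.0                          # graft a mildly explosive episode
for t in range(121, 160):
    bubble[t] = 1.05 * bubble[t - 1] + rng.standard_normal()

print(rolling_sup_stat(rw), rolling_sup_stat(bubble))
```

The series with the grafted explosive run produces a far larger sup statistic than the pure random walk, which is the qualitative behaviour the rolling tests rely on.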
The major contributions of Chapters 3 and 4 are the introduction of different methodologies that enable the user to assess how often asset markets are efficient. In Chapter 3 we argue that commodity prices can be estimated using switching-regression models, including hidden Markov state-switching models. Instead of estimating Markov transition matrices directly from the estimation procedure, we estimate the transition matrix separately using unit root tests. By restricting the transition matrix to our estimated matrix and then estimating a Markov state-switching regression, we show that we obtain more accurate smoothed probabilities, i.e. a high probability is assigned to explosive states when the price was actually explosive, and a high probability is assigned to the random walk/efficient state when the price exhibited efficient behaviour. This methodology is then extended to the three-state case, and it is argued that the transition matrices estimated this way will inform us of how often commodity markets are efficient. The methodology is empirically applied to non-ferrous metals with particular attention to copper; we believe this is an additional contribution of the article. Chapter 3 also presents a partial equilibrium model which leads to an estimable reduced-form expression for commodities and thereby motivates estimation by Markov switching-regressions.
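The restricted-transition-matrix idea can be sketched with a two-state Gaussian switching model: fix the transition matrix in advance (rather than estimating it inside the filter) and compute smoothed state probabilities by forward-backward recursions. The matrix, regime means and sample size below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

# Two regimes for the price change: state 0 "efficient" (zero mean) and
# state 1 "explosive" (positive drift). P is fixed in advance, mimicking
# the chapter's restriction of the transition matrix.
P = np.array([[0.95, 0.05],
              [0.10, 0.90]])
means, sd = np.array([0.0, 0.8]), 0.5

# Simulate a path whose true states are known, so accuracy can be checked.
T, s = 300, 0
states, y = np.zeros(T, dtype=int), np.zeros(T)
for t in range(T):
    s = rng.choice(2, p=P[s])
    states[t] = s
    y[t] = means[s] + sd * rng.standard_normal()

def gauss(obs, mu):
    # Gaussian emission density with common standard deviation sd.
    return np.exp(-0.5 * ((obs - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

# Forward-backward smoothing under the restricted transition matrix.
B = np.column_stack([gauss(y, means[0]), gauss(y, means[1])])
alpha, beta = np.zeros((T, 2)), np.ones((T, 2))
alpha[0] = np.array([0.5, 0.5]) * B[0]
alpha[0] /= alpha[0].sum()
for t in range(1, T):
    alpha[t] = (alpha[t - 1] @ P) * B[t]
    alpha[t] /= alpha[t].sum()
for t in range(T - 2, -1, -1):
    beta[t] = P @ (B[t + 1] * beta[t + 1])
    beta[t] /= beta[t].sum()
smooth = alpha * beta
smooth /= smooth.sum(axis=1, keepdims=True)

acc = float(np.mean((smooth[:, 1] > 0.5) == (states == 1)))
print(f"smoothed-state accuracy: {acc:.2f}")
```

Because the transition matrix is held at its (here, true) restricted value, the smoother assigns high probability to the explosive state mostly when the process really was explosive.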
Three major contributions are made in Chapter 4. First, we make a theoretical contribution to the literature on threshold auto-regressive models with exogenous triggers. Conditions for the existence of a mean and variance when a series follows a threshold auto-regressive (TAR) process with an exogenous trigger are derived. The second contribution is the use of TAR simulations to show that tests which try to detect bubbles in asset prices lose a substantial amount of power when the asset price spends some time in a mean-reverting state in addition to being in the explosive and random walk states. The third contribution of this article is the provision of a framework using TAR models which acts as a metric for market efficiency. By considering three states, an efficient/random walk state, a mean-reverting state and an explosive state, we show that estimating asset prices as TARs with exogenous triggers allows us to measure how often an asset market is efficient. This methodology uses a different class of models from those used in Chapter 3. The methodology is then applied to the S&P 500 and FTSE 100, and it is shown that under the most general model specification, the indices primarily exhibit market efficiency.
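A minimal simulation of a three-regime TAR process with an exogenous trigger, in the spirit of the framework described above (the thresholds, autoregressive coefficients and the trigger process are illustrative assumptions, not those estimated in the chapter):

```python
import numpy as np

rng = np.random.default_rng(5)

# Three AR(1) regimes selected by an exogenous trigger z_t:
# explosive (phi = 1.02), mean-reverting (phi = 0.7), random walk (phi = 1.0).
T = 400
z = rng.standard_normal(T)       # exogenous trigger, e.g. an inventory proxy
y = np.zeros(T)
regime_counts = {"revert": 0, "walk": 0, "explode": 0}
for t in range(1, T):
    if z[t] < -1.0:
        phi, k = 1.02, "explode"
    elif z[t] > 1.0:
        phi, k = 0.7, "revert"
    else:
        phi, k = 1.0, "walk"
    regime_counts[k] += 1
    y[t] = phi * y[t - 1] + rng.standard_normal()

# Time spent in the random-walk regime is the simulation's analogue of
# "how often the market is efficient".
share_walk = regime_counts["walk"] / (T - 1)
print(f"time in random-walk (efficient) regime: {share_walk:.2f}")
```

With a standard normal trigger and thresholds at ±1, roughly two thirds of the observations fall in the random-walk regime, so the regime shares are directly interpretable as an efficiency metric.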
Chapter 5 looks deeper into how commodity prices are determined, and its main contribution is thereby to the literature on commodity market pricing. By making three important changes to the commodity storage model of Williams and Wright (1991), we are able to show that our model captures essential features of commodity prices that have not been captured by previous iterations. The numerical solution for the model is obtained using the Parameterized Expectations Algorithm (PEA), and simulated series based on this solution are able to reproduce some statistical features of real commodity price series, including a high degree of first-order autocorrelation, skewness and kurtosis. A second contribution concerns the application of the model: we calibrate the model to match five real commodities and show that the model's solution is able to match real-life data. The model is also able to explain why we observe spikes (bubbles) in commodity prices, identifying the impact of storage as a probable contributor. Chapter 6 provides concluding remarks on the dissertation. Funding: Higher Education Commission of Pakistan, Cambridge Commonwealth Trust
Primary Wood-Using Mills and Forest Resources: Interactions between Wood Demand and Procurement Areas
It is a common belief that the presence of the forest industry and its associated wood demand will result in forest management of procurement areas. The following essays examine the relationship between mill demand and procurement areas by assessing the likelihood of forest management and the ability to predict future wood output. The first study investigates the likelihood of forest management given proximity to mills using a multivariate probit model, incorporating forest characteristics and primary wood-using mill information collected by the USDA Forest Service Forest Inventory and Analysis and the Timber Products Output (TPO) survey. The second essay explores the use of vector autoregressive methods to forecast county pulpwood output using pulpwood production data collected by TPO. We evaluated a group of forecasting methods in the vector autoregressive family and compared the models' forecast accuracy to that of the commonly used step-forward methodology. Results from the first study indicate that mill proximity has a low impact on private forest landowners' management decisions. This information may prove useful to industry and state foresters when dealing with increases in demand arising from new markets, such as bioenergy. Forecasts from the second essay highlight cross-county differences in pulpwood output in response to national demand. While the macroeconomic series helped predict output activity in some counties, a group of counties displayed no correlation between product output and demand as measured by the national variables. The results emphasize the need for disaggregated analysis to capture the dynamics of the procurement areas and primary mills.
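As a sketch of the vector autoregressive forecasting machinery the second essay relies on, the snippet below fits a VAR(1) by least squares to two simulated series and produces a one-step-ahead forecast; the coefficient matrix and series are invented stand-ins for the county output and national demand data.

```python
import numpy as np

rng = np.random.default_rng(6)

# Simulate two linked series (say, county pulpwood output and a national
# demand index) from a known VAR(1): Y_t = A_true Y_{t-1} + noise.
A_true = np.array([[0.5, 0.2],
                   [0.1, 0.6]])
T = 400
Y = np.zeros((T, 2))
for t in range(1, T):
    Y[t] = Y[t - 1] @ A_true.T + 0.1 * rng.standard_normal(2)

# Recover A by multivariate least squares (equation-by-equation OLS).
X, Z = Y[:-1], Y[1:]
A_hat = np.linalg.lstsq(X, Z, rcond=None)[0].T

# One-step-ahead forecast from the last observation.
fcast = A_hat @ Y[-1]
print("estimated A:\n", A_hat.round(2))
```

The cross-coefficients (off-diagonal entries of A) are what let the national demand series improve county-level forecasts; counties where those entries are near zero correspond to the "no correlation" cases the essay reports.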
Bayesian Applications in Empirical Monetary Policy Analysis
This dissertation investigates the dynamic effects of an unanticipated monetary policy action by the European Central Bank in the member countries of the monetary union. In addition, it compares the forecasting performance of an estimated New Keynesian dynamic stochastic general equilibrium model with that of traditional forecasting models on U.S. data.
The first three essays of the dissertation build on the premise that the monetary policy transmission channels of the member countries differ, even though the member countries met the EMU convergence criteria on time. A finding that member countries respond asymmetrically to unanticipated monetary policy conducted by the European Central Bank would support the claim that the euro area is not an optimum currency area. In light of the results obtained in the dissertation, unanticipated monetary policy conducted by the European Central Bank does cause asymmetric responses of price inflation series across the member countries of the monetary union.
Dynamic stochastic general equilibrium macro models offer an attractive tool for forecasting economic variables and for analysing economic policy measures. A stumbling block in the empirical application of this model class has been that fitting these models to data has proved difficult. The fourth essay therefore presents an easily applicable estimation method for such models. The estimated equilibrium model forecasts the U.S. interest rate, inflation and output well. This result is particularly interesting, since the literature holds that comparable forecasting performance is reached only by increasing the model size.
This thesis investigates the effects of sudden movements in the monetary policy stance in the euro area and assesses the forecasting performance of an estimated structural dynamic equilibrium model on United States data. In the first three essays the focus is on an inspection of the dynamic effects of sudden changes in the monetary policy conduct of the European Central Bank (ECB) in EMU member countries. We propose that asymmetric monetary policy responses would imply that domestic monetary policy transmission mechanisms have not necessarily integrated, even if the EMU convergence criteria were met on time, and would be at odds with the claim that the euro area constitutes an optimum currency area. The fourth essay assesses the forecasting performance of a modern macro model on U.S. data. The statistical inference of the thesis is Bayesian.
In the first essay, we describe the dynamics of year-on-year consumer price inflation responses to an unanticipated expansionary monetary policy shock in the euro area with a vector autoregressive (VAR) model. The variables and the statistically testable short-run restriction schemes ensuring identification are derivable from a new Keynesian macro model. A rather surprising finding is that traditional Cholesky identification is only weakly supported by the data. Impulse responses of year-on-year consumer price inflation to an expansionary monetary policy shock are calculated for a VAR model identified by the most probable identification scheme. In this identification scheme we allow EMU member-country information to affect the monetary policy instrument contemporaneously. The obtained results suggest asymmetric year-on-year price inflation responses to monetary policy conducted by the ECB.
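The mechanics of Cholesky (recursive) identification can be sketched for a stylised two-variable VAR(1): orthogonalising the shock covariance with its lower-triangular factor imposes a zero impact response of the first variable to the second shock. All numbers below are illustrative, not estimates from the essay.

```python
import numpy as np

# A stylised VAR(1) in (inflation, policy rate) with known coefficients
# and reduced-form shock covariance; the numbers are illustrative only.
A = np.array([[0.6, -0.1],
              [0.2,  0.8]])
Sigma = np.array([[1.0, 0.3],
                  [0.3, 0.5]])

# Cholesky identification: the lower-triangular factor orthogonalises the
# shocks, so variable 0 does not react to the second shock on impact.
P = np.linalg.cholesky(Sigma)

horizons = 8
irf = np.zeros((horizons, 2, 2))
irf[0] = P                      # impact responses to one-s.d. shocks
for h in range(1, horizons):
    irf[h] = A @ irf[h - 1]     # propagate through the VAR dynamics

# Response of variable 0 (inflation) to the orthogonalised shock in
# variable 1 (policy): zero on impact by construction, nonzero afterwards.
print(irf[:, 0, 1].round(3))
```

The zero in the impact period is exactly the recursive restriction whose empirical support the essay finds to be weak; alternative identification schemes relax it.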
In the second essay, we first survey, with the help of a variant of the Taylor rule, six information sets on which the ECB most likely bases its monetary policy decisions. This assessment of information sets is a clear addition to the literature on the conduct of monetary policy by the ECB. In the analysis we approximate the euro area's monetary conditions with a VAR model estimated for the suggested information set and calculate identified impulse responses of the difference in year-on-year producer price inflation between the euro area and a few peripheral EMU member countries. According to the results, an unexpected variation in the monetary policy instrument conditioned on the information pertaining to the three largest EMU member countries (Germany, France and Italy) has asymmetric effects on year-on-year producer price inflation across the EMU member countries.
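For reference, a baseline Taylor (1993) rule of the kind whose variants the essay surveys can be written in a few lines; the coefficients below are Taylor's original illustrative values, not the ECB reaction function estimated in the essay.

```python
# Baseline Taylor (1993) rule: i = r* + π + 0.5(π − π*) + 0.5·gap,
# with a 2% equilibrium real rate and a 2% inflation target.
def taylor_rate(pi, gap, r_star=2.0, pi_star=2.0):
    """Prescribed nominal policy rate (percent) given inflation pi and
    output gap, both in percent."""
    return r_star + pi + 0.5 * (pi - pi_star) + 0.5 * gap

print(taylor_rate(2.0, 0.0))   # inflation at target, zero gap
```

With inflation at target and a closed output gap, the rule prescribes the neutral nominal rate of 4 percent; the essay's variants differ in which information set feeds the inflation and gap terms.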
In the third essay, we apply a new Keynesian open economy macro model in setting identifying restrictions for impulse response analysis of consumer price inflation in the euro area. The relevance of the implied simultaneous parameter cross-equation restrictions is assessed by posterior estimation of the hyperparameter that measures prior beliefs about the identifying restrictions. The posterior evidence suggests that prior beliefs about the simultaneous effects of model variables are of relevance when identifying the VAR model with an open economy new Keynesian macro model for the euro area. Contrary to the outcome of the first essay, the drawn impulse responses support the claim that an expansionary monetary policy shock would not cause evident asymmetric price inflation responses in EMU member countries. However, the impulse responses from the recursively identified VAR model are in line with those reported in the first essay.
The fourth essay evaluates a closed-economy, log-linearized, 3-variable new Keynesian model with an easily implementable method for Bayesian analysis. It becomes evident that a small-scale modern macro model can rival commonly used forecasting tools, such as Bayesian VARs and forecasts based on random walks. According to the posterior evidence, the model captures the evolution of the U.S. macroeconomic variables (price inflation, the short-term nominal interest rate and a measure of the output gap) fairly well.