
    Three essays on credit supply

    This thesis consists of three independent essays on credit supply: how the exposure of financial institutions to systemic risk evolves, individually and collectively, over time; how credit supply shocks transmitted through different lending channels affect subsequent growth; and how different credit constraints shape firms' debt structure and productivity. Chapter 1: Its conceptual appeal has made the Conditional Value at Risk (CoVaR) one of the most influential systemic risk indicators. Despite its popularity, an outstanding methodological challenge may hamper the CoVaR's accuracy in measuring the time-series dimension of systemic risk: its dynamics are entirely driven by the state variables, and without their inclusion the CoVaR would be constant over time. The key contribution of this chapter is to relax the assumption of time-invariant tail dependence between the financial system's and each institution's losses, by allowing the estimated parameters of the model to vary over time, in addition to varying across quantiles and financial institutions. We find that the dynamic component we introduce does not affect the risk estimates for individual financial institutions, but it strongly affects the estimates of systemic risk, which exhibit more procyclicality than implied by the standard CoVaR. As expected, larger financial institutions have a greater effect on systemic risk, although they are also individually more robust. Adding balance sheet data introduces additional volatility into our model relative to the standard one. In terms of forecasting, the results depend on the horizon and the variables included: neither model clearly outperforms the other when balance sheet data are added, or at short horizons (under 12 weeks), but our model outperforms the standard one at medium (15-25 weeks) to long-term (30-40 weeks) horizons.
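
    To make the estimation strategy concrete, the following is a minimal sketch of a two-stage CoVaR estimation in the spirit of Adrian and Brunnermeier, with time variation introduced through rolling-window quantile regressions (one simple way to relax time-invariant tail dependence; the thesis's own dynamic specification may differ). Column names, the weekly-return convention, and the window length are illustrative assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.regression.quantile_regression import QuantReg

def rolling_covar(df, q=0.05, window=260):
    """df columns: 'sys' (system return), 'inst' (institution return),
    plus lagged state variables in the remaining columns."""
    state_cols = [c for c in df.columns if c not in ("sys", "inst")]
    covar = []
    for end in range(window, len(df)):
        sub = df.iloc[end - window:end]
        M = sm.add_constant(sub[state_cols])
        # Stage 1: the institution's VaR_q fitted on the state variables.
        fit_i = QuantReg(sub["inst"], M).fit(q=q)
        var_i = np.asarray(fit_i.predict(M.iloc[[-1]])).item()
        # Stage 2: the system's q-quantile given the institution's return.
        X = sm.add_constant(sub[["inst"] + state_cols])
        fit_s = QuantReg(sub["sys"], X).fit(q=q)
        x_last = X.iloc[[-1]].copy()
        x_last["inst"] = var_i   # condition on the institution at its VaR
        covar.append(np.asarray(fit_s.predict(x_last)).item())
    return pd.Series(covar, index=df.index[window:], name="CoVaR")
```
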
    Chapter 2: We evaluate the impact that different segments of lending to the private non-financial sector can have on subsequent GDP growth. We isolate the bank lending channel as one of the main components and group the remaining ones into a second segment which we classify as market-based finance (MBF). We also include the two segments of the borrowing sector, household debt and non-financial firm debt, to compare with the results of the standard model. We then examine the main source of these effects, focusing on credit demand and credit supply shocks alongside other alternatives. We find that a rise in the bank-credit- or household-debt-to-GDP ratio lowers subsequent GDP growth. The predictive power is large in magnitude and robust across time and space. Bank credit booms and household debt booms are associated with lower interest rate spreads and periods of better financial conditions. Although the overall impact on subsequent GDP growth is negative, we find contrasting evidence when using the Financial Conditions Index (FCI) as an instrument, pointing to potentially different effects of bank credit and household debt on future growth ("good booms" versus "bad booms") depending on the underlying cause of the boom. Our evidence is more consistent with models in which the fundamental source of changes in household debt or bank credit lies in credit supply shocks, rather than credit demand or other alternatives. This is likely connected to incorrect expectations formation by lenders and investors (what the literature often labels "credit market sentiment"), an important element in explaining shifts in credit supply. Although credit demand shocks could play an important role in prolonging or amplifying booms, they are unlikely to be the source, as that would produce results conflicting with the empirical evidence. Finally, we find differences in statistical significance and magnitude across scenarios: bank credit is more robust to different specifications than household debt. This implies that the significance of bank credit extends well beyond household debt, and that the main driver of the boom-bust cycle in GDP is bank credit, independent of its destination, rather than household debt, independent of its financing. Chapter 3: We construct a firm-year dataset by merging syndicated loan data from Refinitiv LPC DealScan ("DealScan") with firm-level data from the CRSP/Compustat Merged Database ("CCM"). We analyse firms subject to different covenants and find that firms with earnings-based constraints have lower total factor productivity (TFP) and less short-term debt than firms with asset-based constraints. The data also show that this is connected to an additional negative impact of short-term debt on productivity for firms with earnings-based constraints, which is absent for firms with asset-based constraints. Both findings are robust to three different TFP estimation methods, different subsamples, and additional controls, including firm age and size. We therefore build a quantitative dynamic stochastic partial equilibrium model with three main types of firms, distinguished by their constraints, to explore the impact of short-term and long-term borrowing on firms' balance sheets. We simulate this model and assess how well it fits our data. Our findings show that the constraints affect short-term borrowing but not the remaining variables: firms facing an earnings-based constraint show lower short-term borrowing than firms that are either unconstrained or asset-based constrained. The adjustment occurs through lower dividend distributions, as reflected in the lower values of the value function. The impact is also larger for firms with lower productivity shocks, in accordance with our empirical findings. Although our data show differences in some other variables (for example, long-term debt), these were not robust to some of the controls, including firm size.
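
    Chapter 2's headline predictive result can be illustrated with a hedged sketch of a Jordà-style local projection of future GDP growth on changes in the credit ratios. Variable names, the 3-year change convention, and the horizon are assumptions for illustration; the thesis's exact specification may differ.

```python
import pandas as pd
import statsmodels.api as sm

def credit_projection(df, horizon=3):
    """df (annual data, one country): 'y' = log real GDP,
    'bank_credit' and 'hh_debt' = ratios to GDP."""
    reg = pd.DataFrame({
        "dy_fwd": df["y"].shift(-horizon) - df["y"],  # growth from t to t+h
        "d_bank": df["bank_credit"].diff(3),          # 3-year change
        "d_hh": df["hh_debt"].diff(3),
    }).dropna()
    X = sm.add_constant(reg[["d_bank", "d_hh"]])
    # HAC errors account for the serial correlation induced by
    # overlapping forward-growth windows.
    return sm.OLS(reg["dy_fwd"], X).fit(cov_type="HAC",
                                        cov_kwds={"maxlags": horizon})
```

    In this setup, negative coefficients on the credit-change regressors correspond to the finding that booms predict lower subsequent growth.
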

    Characterizing, optimizing and backtesting metrics of risk

    Measures of risk, or riskmetrics, were proposed to quantify the risks faced in financial, statistical, and economic practice. They are widely studied in the literature on financial regulation, insurance, operations research, and statistics. Several major research topics on riskmetrics remain important in both academic study and industrial practice. First, characterization, especially axiomatic characterization, lays the theoretical foundation for specific classes of riskmetrics, explaining why they are widely adopted in practice and research; it usually involves challenging mathematical approaches and deep practical insights. Second, riskmetrics are used in optimization as the objective functionals of decision makers. This links riskmetrics to the literature of operations research and decision theory, and leads to wide applications in portfolio management, robust optimization, and insurance design. Third, statistical models for estimation and hypothesis testing of riskmetrics need to be established to serve practical risk management and financial regulation; in particular, risk forecasts and backtests of different riskmetrics are a central concern and challenge for risk managers and financial regulators. In this thesis, we investigate several important questions in the characterization, optimization, and backtesting of measures of risk, with focuses on both establishing theoretical frameworks and solving practical problems. To offer a comprehensive theoretical toolkit for future study, in Chapter 2 we propose the class of distortion riskmetrics defined through signed Choquet integrals. Distortion riskmetrics include many classic risk measures, deviation measures, and other functionals in the literature of finance and actuarial science. We obtain characterization, finiteness, convexity, and continuity results on general model spaces, extending various results in the existing literature on distortion risk measures and signed Choquet integrals. To explore deeper applications of distortion riskmetrics in optimization problems, in Chapter 3 we study optimization of distortion riskmetrics under distributional uncertainty. One of our central findings is a unifying result that allows us to convert an optimization of a non-convex distortion riskmetric with distributional uncertainty into a convex one, leading to practical tractability. A sufficient condition for the unifying equivalence result is the novel notion of closedness under concentration, a variation of which is also shown to be necessary for the equivalence. Our results cover many special cases well studied in the optimization literature, including optimizing probabilities, Value-at-Risk, Expected Shortfall, Yaari's dual utility, and differences between distortion risk measures, under various forms of distributional uncertainty. We illustrate our theoretical results via applications to portfolio optimization, optimization under moment constraints, and preference robust optimization.
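
    As a concrete illustration of the objects studied in Chapters 2 and 3, here is a minimal empirical evaluator for distortion riskmetrics computed as a Choquet integral: the decreasing order statistics of the sample are weighted by increments of the distortion function h (for risk measures h(0)=0 and h(1)=1; for signed Choquet integrals h need only be of bounded variation). The loss convention (large values are bad) and the examples are illustrative, not the thesis's estimators.

```python
import numpy as np

def distortion_riskmetric(x, h):
    """Empirical rho_h of a sample x for a distortion function h on [0, 1]."""
    x = np.sort(np.asarray(x, dtype=float))[::-1]  # decreasing order stats
    n = len(x)
    grid = np.arange(n + 1) / n
    weights = np.diff(h(grid))                     # h(i/n) - h((i-1)/n)
    return float(np.dot(weights, x))

# The mean, Expected Shortfall, and Value-at-Risk at level p are all
# recovered by suitable choices of h.
p = 0.95
mean_h = lambda t: t
es_h = lambda t: np.minimum(t / (1 - p), 1.0)
var_h = lambda t: (t > 1 - p).astype(float)

losses = np.random.default_rng(0).standard_normal(100_000)
print(distortion_riskmetric(losses, mean_h))  # ~ 0
print(distortion_riskmetric(losses, es_h))    # ~ ES_0.95 of N(0,1), about 2.06
print(distortion_riskmetric(losses, var_h))   # ~ VaR_0.95 of N(0,1), about 1.64
```
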
    In Chapter 4, we study the characterization of measures of risk in the context of statistical elicitation. Motivated by recent advances on the elicitability of risk measures and practical considerations of risk optimization, we introduce the notions of Bayes pairs and Bayes risk measures, the counterpart of the elicitable risk measures extensively studied in the recent literature. The Expected Shortfall (ES) is the most important coherent risk measure in both industry practice and academic research in finance, insurance, risk management, and engineering, and one of our central results is that, under a continuity condition, ES is the only class of coherent Bayes risk measures. We further show that entropic risk measures are the only risk measures that are both elicitable and Bayes. Several other theoretical properties and open questions on Bayes risk measures are discussed. In Chapter 5, we further study the characterization of measures of risk in insurance design, namely risk measures induced by efficient insurance contracts, i.e., those that are Pareto optimal for the insured and the insurer. One of our major results characterizes a mixture of the mean and ES as the risk measure of the insured and the insurer when contracts with deductibles are efficient. Characterization results for other risk measures, including the mean and distortion risk measures, are also presented by linking them to different sets of contracts. In Chapter 6, we focus on a larger class of riskmetrics, cash-subadditive risk measures, which we study without assuming quasi-convexity. One of our major results is that a general cash-subadditive risk measure can be represented as the lower envelope of a family of quasi-convex and cash-subadditive risk measures. Representation results for cash-subadditive risk measures with additional properties are also examined. We introduce the notion of quasi-star-shapedness, a natural analogue of star-shapedness, and obtain a corresponding representation result. In Chapter 7, we discuss backtesting riskmetrics. One of the most challenging tasks in risk modeling practice is to backtest ES forecasts provided by financial institutions. To design a model-free backtesting procedure for ES, we make use of the recently developed techniques of e-values and e-processes. Model-free e-statistics are introduced to formulate e-processes for risk measure forecasts, and unique forms of model-free e-statistics for VaR and ES are characterized using recent results on identification functions. For a given model-free e-statistic, optimal ways of constructing the e-process are studied. The proposed method can be naturally applied to many other risk measures and statistical quantities. We conduct extensive simulation studies and data analysis to illustrate the advantages of the model-free backtesting method and compare it with existing methods in the literature.
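
    A hedged sketch of the e-backtesting idea of Chapter 7: the statistic e = (x - v)+ / ((1 - p)(z - v)) has expectation at most one when the reported pair (v, z) = (VaR_p, ES_p) is correct or conservative (loss convention, large x = bad), so the running product of betting factors 1 - lam + lam*e is an e-process. The constant betting fraction below is an illustrative choice; the thesis studies optimal constructions.

```python
import numpy as np

def es_eprocess(losses, var_fc, es_fc, p=0.95, lam=0.2):
    """Anytime-valid evidence against reported VaR/ES forecasts."""
    losses, var_fc, es_fc = map(np.asarray, (losses, var_fc, es_fc))
    e_stats = np.maximum(losses - var_fc, 0.0) / ((1 - p) * (es_fc - var_fc))
    return np.cumprod(1.0 - lam + lam * e_stats)  # the e-process

# Evidence accumulates as the e-process grows; crossing 1/alpha = 20
# yields a level-0.05 rejection that is valid at any stopping time.
```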

    ECOS 2012

    The 8-volume set contains the Proceedings of the 25th ECOS International Conference, held in Perugia, Italy, June 26-29, 2012. ECOS is an acronym for Efficiency, Cost, Optimization and Simulation (of energy conversion systems and processes), summarizing the topics covered: Thermodynamics, Heat and Mass Transfer, Exergy and Second Law Analysis, Process Integration and Heat Exchanger Networks, Fluid Dynamics and Power Plant Components, Fuel Cells, Simulation of Energy Conversion Systems, Renewable Energies, Thermo-Economic Analysis and Optimisation, Combustion, Chemical Reactors, Carbon Capture and Sequestration, Building/Urban/Complex Energy Systems, Water Desalination and Use of Water Resources, Energy Systems - Environmental and Sustainability Issues, System Operation/Control/Diagnosis and Prognosis, and Industrial Ecology.

    Estimating Dependences and Risk between Gold Prices and S&P500: New Evidences from ARCH, GARCH, Copula and ES-VaR models

    This thesis examines the correlations and linkages between stock and commodity markets in order to quantify the risk investors face in those markets, using the Value at Risk (VaR) measure. The risk assessed is losses on investments in stocks (S&P500) and commodities (gold prices). The thesis is structured around three empirical chapters. The central risk factor is the constant fluctuation of commodity and stock prices. The thesis starts by measuring volatility, then dependence (correlation), and lastly Expected Shortfall and VaR. The research focuses on mitigating risk using VaR measures, assessing volatility models such as ARCH and GARCH together with basic VaR calculations, and measuring correlation using the copula method. Since volatility methods are limited to one security at a time, the second empirical chapter measures the interdependence of stock and commodity (S&P500 and Gold Price Index) using the time-varying copula method, investigating the risk transmission involved in investing in either and whether movements in the price of one affect the price of the other. Lastly, the third empirical chapter investigates Expected Shortfall and VaR for the S&P500 and the Gold Price Index using the ES-VaR method proposed by Patton, Ziegel and Chen (2018). Volatility is the most popular and traditional measure of risk, for which we use ARCH and GARCH models in the first empirical chapter. However, volatility does not account for the direction of an investment's movement: a stock can be volatile because it suddenly jumps higher, and investors are not distressed by gains. For investors, risk is about the odds of losing money, and VaR is built on that common-sense fact: it answers the question "What is my worst-case scenario?", or simply "How much could I lose in a really bad month?". The results of the thesis demonstrate that measuring volatility (ARCH/GARCH) alone is not sufficient to measure the risk involved in an investment, and that methodologies such as correlation modelling and VaR give better results. To measure interdependence, the time-varying copula is used, since the dynamic structure of the dependence between the data can be modelled by allowing either the copula function or the dependence parameter to be time-varying. Lastly, a hybrid model of the return on a risky asset is estimated, utilising Expected Shortfall (ES) jointly with VaR and quantile dependence. The Basel III Accord, phased in through 2019, focuses on ES rather than VaR, yet there is little existing work on modelling ES. The thesis builds on the model of Patton, Ziegel and Chen (2018), which is grounded in statistical decision theory and overcomes the elicitability problem for ES by modelling ES and VaR jointly in a new dynamic risk-measure model.
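
    As a baseline for the first empirical chapter, the sketch below shows how a one-day-ahead VaR can be read off a GARCH(1,1) fit, using the arch Python package with normal innovations for simplicity; the thesis's exact specification (data frequency, innovation distribution, confidence level) may differ.

```python
import numpy as np
from scipy.stats import norm
from arch import arch_model

def garch_var(returns, p_level=0.99):
    """One-step-ahead VaR (reported as a positive loss) for a
    pandas Series of percentage returns, e.g. gold or the S&P500."""
    res = arch_model(returns, mean="Constant", vol="GARCH",
                     p=1, q=1).fit(disp="off")
    sigma = np.sqrt(res.forecast(horizon=1).variance.values[-1, 0])
    return -(res.params["mu"] + sigma * norm.ppf(1 - p_level))
```
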
    This research contributes the findings that measuring risk by volatility alone is not enough, that interdependence measures capture the dependency of one variable on the other, and that the estimation and inference methods of the Patton, Ziegel and Chen (2018) ES-VaR model show that ARCH, GARCH, and other rolling-window models are not sufficient for risk forecasting. In the first empirical chapter, we document the volatility of gold prices and the S&P500. The second empirical chapter finds that the conditional dependence of the two indexes is strongly time-varying, with high correlation before 2008. The results further display a slightly stronger bivariate upper tail, signifying that the conditional dependence of the indexes is influenced by positive shocks. The last empirical chapter finds that forecasts from the ES-VaR model of Patton, Ziegel and Chen (2018) outperform forecasts based on a univariate GARCH model. Investors want to protect themselves from large losses, and the ES-VaR model discussed in the last chapter would help them manage their funds properly.
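
    The joint evaluation underlying that comparison can be sketched with the zero-homogeneous FZ loss that Patton, Ziegel and Chen use to estimate and rank joint VaR-ES models; a lower average loss indicates better joint forecasts. The returns convention (both forecasts negative in the left tail, with ES at or below VaR) is assumed here.

```python
import numpy as np

def fz0_loss(y, v, e, alpha=0.05):
    """Average FZ0 loss of VaR forecasts v and ES forecasts e for
    returns y; requires e <= v < 0 (left-tail, returns convention)."""
    y, v, e = map(np.asarray, (y, v, e))
    hit = (y <= v).astype(float)
    return float(np.mean(-hit * (v - y) / (alpha * e)
                         + v / e + np.log(-e) - 1.0))
```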

    Cumulative Distribution Functions As The Foundation For Probabilistic Models

    This thesis discusses applications of probabilistic and connectionist models for constructing and training cumulative distribution functions (CDFs). First, it is shown how existing tools from the copula literature can be combined to build probabilistic models. It is found that this simple construction leads to numerical and scalability issues that make training and inference challenging. Next, several ideas combining neural networks, automatic differentiation, and copula functions show how to assemble black-box probabilistic models. The basic building block is a cumulative distribution function that is straightforward to construct, composed of arithmetic operations and nonlinear functions. There is no need to assume any specific parametric probability density function (PDF), making the model flexible and normalisation unnecessary. The only requirement is to design a computational graph that parameterises monotonically non-decreasing functions with a constrained range. Training can then be performed using standard tools from any neural network software library. Finally, factorial hidden Markov models (FHMMs) for sequential data are presented. It is shown how to leverage cumulative distribution functions, in the form of the Gaussian copula and an amortised stochastic variational method, to encode hidden Markov chains coherently. This approach enables efficient learning and inference to model long sequences of high-dimensional data with long-range dependencies. Tackling such complex problems was impossible with the established FHMM approximate inference algorithm. It is empirically verified on several problems that some of the estimators introduced in this work can perform comparably to or better than currently popular models, especially for tasks requiring tail-area or marginal probabilities, which can be read directly from a cumulative distribution function.
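
    A minimal sketch of the kind of building block described above: a small network whose output is a valid CDF in its input because softplus keeps every effective weight positive (so the map is monotonically non-decreasing) and a final sigmoid constrains the range to (0, 1). The architecture is an illustrative assumption, not one of the thesis's exact models.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MonotoneCDF(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.w1 = nn.Parameter(torch.randn(1, hidden))
        self.b1 = nn.Parameter(torch.zeros(hidden))
        self.w2 = nn.Parameter(torch.randn(hidden, 1))
        self.b2 = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        # Positive weights plus increasing activations (tanh, sigmoid)
        # make the composition non-decreasing in x.
        h = torch.tanh(x @ F.softplus(self.w1) + self.b1)
        return torch.sigmoid(h @ F.softplus(self.w2) + self.b2)

# The density comes from automatic differentiation of the CDF, so
# maximum-likelihood training needs no explicit normalisation step.
x = torch.linspace(-3.0, 3.0, 200).reshape(-1, 1).requires_grad_(True)
cdf = MonotoneCDF()(x)
pdf, = torch.autograd.grad(cdf.sum(), x)
```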

    Sectoral portfolio optimization by judicious selection of financial ratios via PCA

    Embedding value investment in portfolio optimization models has always been a challenge. In this paper, we attempt to incorporate it by first employing principal component analysis (PCA) sector-wise to filter out dominant financial ratios from each sector and thereafter using a portfolio optimization model incorporating second-order stochastic dominance (SSD) criteria to derive the final optimal investment. We consider a total of 11 well-known financial ratios for each sector, representing four categories of ratios, namely liquidity, solvency, profitability, and valuation. PCA is then applied sector-wise over a period of 10 years, from April 2004 to March 2014, to extract dominant ratios from each sector in two ways: one from the component solution and the other from each category, on the basis of their communalities. The two-step Sectoral Portfolio Optimization (SPO) model, integrating the SSD criteria in its constraints, is then used to build an optimal portfolio. The strategy formed using the former extracted ratios is termed PCA-SPO(A), and the latter PCA-SPO(B). The results obtained from the proposed strategies are compared with the SPO model and two nominal SSD models, with and without financial ratios, in a computational study. Empirical performance of the proposed strategies is assessed over the 6-year period from April 2014 to March 2020 using a rolling-window scheme with out-of-sample horizons of 3, 6, 9, 12, and 24 months for the S&P BSE 500 market. We observe that the proposed strategy PCA-SPO(B) outperforms all other models in terms of downside deviation, CVaR, VaR, and the Sortino, Rachev, and STARR ratios over almost all out-of-sample periods. This highlights the importance of value investment, where ratios are carefully selected and embedded quantitatively in the portfolio selection process.
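
    The sector-wise filtering step can be sketched as follows: run PCA on the standardized ratio matrix of one sector and rank ratios by their communalities over the retained components. The Kaiser retention rule and the top-k cut-off are illustrative assumptions rather than necessarily the paper's exact choices.

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def dominant_ratios(ratios: pd.DataFrame, top_k=3):
    """ratios: firms x financial ratios for a single sector."""
    z = StandardScaler().fit_transform(ratios)
    pca = PCA().fit(z)
    keep = pca.explained_variance_ > 1.0  # Kaiser criterion (assumed)
    loadings = pca.components_.T * np.sqrt(pca.explained_variance_)
    communality = (loadings[:, keep] ** 2).sum(axis=1)
    order = np.argsort(communality)[::-1]
    return list(ratios.columns[order[:top_k]])
```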

    Forecasting VaR and ES using a joint quantile regression and implications in portfolio allocation

    In this paper, we propose a multivariate quantile regression framework to forecast Value at Risk (VaR) and Expected Shortfall (ES) of multiple financial assets simultaneously, extending Taylor (2019). We generalize the Multivariate Asymmetric Laplace (MAL) joint quantile regression of Petrella and Raponi (2019) to a time-varying setting, which allows us to specify a dynamic process for the evolution of both the VaR and the ES of each asset. The proposed methodology accounts for the dependence structure among asset returns. By exploiting the properties of the MAL distribution, we then propose a new portfolio optimization method that minimizes portfolio risk and controls for well-known characteristics of financial data. We evaluate the advantages of the proposed approach on both simulated and real data, using weekly returns on three major stock market indices. We show that our method outperforms other existing models and provides more accurate risk measure forecasts than univariate ones.