
    Information and optimisation in investment and risk measurement

    The thesis explores applications of optimisation in investment management and risk measurement. In investment management the information issues are largely concerned with generating optimal forecasts. It is difficult to get inputs that have the properties they are supposed to have, so optimisation is prone to 'Garbage In, Garbage Out', which leads to substantial biases in portfolio selection unless forecasts are suitably adjusted for estimation error. We consider three case studies in which we investigate the impact of forecast error on portfolio performance and examine ways of adjusting for the resulting bias.

    Treynor and Black (1973) first tried to make the best possible use of the information provided by security analysis, building on Markowitz (1952) portfolio selection. They established a relationship between the correlation of forecasts, the number of independent securities available, and the Sharpe ratio that can be obtained. Their analysis assumed that the correlation between forecasts and outcomes is known precisely. In practice, given the low levels of correlation attainable, an investor may believe the correlation of his forecasts to be different from what it actually is. Using two different metrics we explore how portfolio performance depends on both the anticipated and the realised correlation when these differ. One measure, the Sharpe ratio, captures the efficiency loss attributable to the change in reward for risk. The other, the Generalised Sharpe Ratio (GSR), introduced by Hodges (1997), quantifies the reduction in the welfare of a particular investor due to adopting an inappropriate risk profile. We show that these two metrics complement each other and in combination provide a fair ranking of existing investment opportunities.

    Bayesian adjustment is a popular way of dealing with estimation error in portfolio selection. In a Bayesian implementation, we study how to use non-sample information to infer the optimal scaling of unknown forecasts of asset returns in the presence of uncertainty about the quality of our information, and how the efficient use of information affects portfolio decisions. Optimal portfolios derived under full use of information differ strikingly from those derived from sample information only; the latter, unlike the former, are highly affected by estimation error and favour holdings several (up to ten) times larger.

    The impact of estimation error in a dynamic setting is particularly severe because of the complexity of the setting, in which time-varying forecasts are necessary. We take the structure of Brennan, Schwartz and Lagnado (1997) as a specific illustration of a generic problem and investigate the bias in long-term portfolio selection models that comes from optimising with (unadjusted) parameters estimated from historical data. Using Monte Carlo simulation, we quantify the degree of bias in their optimisation approach. We find that estimated parameters make an investor believe in investment opportunities five times larger than they actually are, and a mild real time-variation in opportunities is wildly inflated when measured with estimated parameters.

    In the latter part of the thesis we look at slightly less straightforward optimisation applications in risk measurement, which arise in reporting risk. We ask: what is the most efficient way of complying with the rules? In other words, we investigate how to report the smallest exposure within a rule. For this purpose we develop two efficient algorithms that calculate the minimal amount of position risk required to cover a firm's open positions and obligations, as required by the respective rules in the FSA (Financial Services Authority) Handbook. Both algorithms lead to interesting generalisations.
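    For reference, the two metrics compared above are conventionally defined as follows; the GSR form assumes the exponential-utility normalisation used by Hodges (1997), and this is a standard statement rather than a quotation from the thesis:

        SR = \frac{E[R_p] - r_f}{\sigma(R_p)},
        \qquad
        GSR = \sqrt{-2 \ln(-U^*)},

    where $U^*$ is the maximal expected exponential utility the investor can attain. For normally distributed returns the GSR reduces to the ordinary Sharpe ratio; for non-normal returns it additionally penalises an inappropriate risk profile, which is what makes the two measures complementary.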

    Copula-based statistical modelling of synoptic-scale climate indices for quantifying and managing agricultural risks in Australia

    Australia is an agricultural nation characterised by one of the most naturally diverse climates in the world, which translates into significant sources of risk for agricultural production and subsequent farm revenues. Extreme climatic events have significantly affected large parts of Australia in recent decades, increasing the vulnerability of crops and leading to higher risk for a large number of agricultural producers. Attempts at better managing climate-related risks in the agricultural sector have, however, confronted many challenges. First, crop insurance products, including classical claim-based and index-based insurance, are among the financial instruments that allow exposed individuals to pool resources to spread their risk. Classical claim-based insurance indemnifies according to a claim of crop loss from the insured customer, and so can easily manage idiosyncratic risk, the case where losses occur independently. Nevertheless, the existence of systemic weather risk (covariate risk), the spread of extreme events over locations and times (e.g., droughts and floods), has been identified as the main reason for the failure of private insurance markets, such as classical multi-peril crop insurance, for agricultural crops. Index-based insurance is appropriate for handling systemic but not idiosyncratic risk. Its indemnity payments are triggered by a predefined threshold of an index (e.g., rainfall) that is related to such losses. Given the covariate nature of climatic events, this allows insurers to predict losses and determine indemnities for a large number of insured customers across a wide geographical area. However, basis risk, which relates to the strength of the relationship between the predefined indices used to estimate the average loss of the insured community and the actual loss of insured assets by an individual, is a major barrier that hinders uptake of index-based insurance: high basis risk, i.e. a weak relationship between the index and the loss, destroys the willingness of potential customers to purchase this insurance product.

    Second, the impact of multiple synoptic-scale climate mode indices (e.g., the Southern Oscillation Index (SOI) and the Indian Ocean Dipole (IOD)) on precipitation and crop yield is not identical across spatial locations, times, or seasons on the Australian continent, since the influence of large-scale climate modes is heterogeneous over the different regions. The occurrence, role, and amplitude of synoptic-scale climate modes contributing to the variability of seasonal crop production have also shifted in recent decades. These factors complicate the relationship between climate and crop yield in ways that cannot be captured by the traditional modelling and analysis approaches commonly found in the published agronomic literature, such as linear regression. In addition, traditional linear analysis cannot model the nonlinear and asymmetric interdependence between extreme insurance losses that may occur in the case of systemic risk. Relying on linear methods is problematic because joint distributions with the same correlation coefficient can exhibit very different behaviour, particularly in the upper and lower tail regions. As a result, the likelihood of extreme insurance losses can be under- or overestimated, leading to inaccuracies in the pricing of insurance policies. Another alternative is the multivariate normal distribution, where the joint distribution is uniquely defined by the marginal distributions of the variables and their correlation matrix; in practice, however, phenomena are not always normally distributed. It is therefore important to develop new, scientifically verified, strategic measures to address these challenges and help mitigate the influence of climate-related risk on the agricultural sector.

    Copulas provide an advanced statistical approach to modelling the joint distribution of multivariate random variables. The technique allows the marginal distributions of individual variables to be estimated separately from their dependence structure, and it is superior to conventional linear regression in that it does not require variables to be normally distributed and can capture both linear and non-linear dependence. This doctoral thesis therefore adopts the copula technique within a statistical modelling framework that aims to model: (1) the compound influence of synoptic-scale climate indices (i.e., SOI and IOD) and climate variables (i.e., precipitation), to develop a probabilistic precipitation forecasting system in which the integrated roles of the different factors governing precipitation dynamics are considered; (2) the compound influence of synoptic-scale climate indices on wheat yield; (3) the stochastic interdependencies of systemic weather risks, with potential adaptation strategies evaluated accordingly; and (4) the risk-reduction efficiency of geographical diversification in wheat-farming portfolio optimisation. The study areas are Australia's agro-ecological zones (i.e., the wheat belt) where major seasonal wheat and other cereal crops are grown. The results of the first and second objectives can be used not only for forecasting but also for understanding basis risk when pricing climate index-based insurance products. The third and fourth objectives assess the interactions of drought events across locations and seasons and evaluate feasible adaptation tools. The findings can provide useful information for decision-makers in the agricultural sector.

    The first study found a significant relationship between SOI, IOD, and precipitation. The results suggest that spring precipitation in Australia, except in the western part, can be probabilistically forecast three months ahead; notably, combining SOI and IOD as predictors improves the performance of the forecast model. Similarly, the second study indicated that large-scale climate indices can provide predictive information on wheat yields up to six months in advance, although the influence of different climate indices varies over locations and times. Furthermore, the third study demonstrated the spatio-temporal stochastic dependence of drought events, and its results show that time diversification is potentially more effective in reducing systemic weather risk than a spatial diversification strategy. Finally, the fourth study revealed that a wheat-farming portfolio can be effectively optimised through geographical diversification.

    The outcomes of this study lead to new applications of advanced statistical tools that provide a better understanding of the compound influence of synoptic-scale climatic conditions on seasonal precipitation, and therefore on wheat crops, in key regions of the Australian continent. Furthermore, the comprehensive analysis of systemic weather risks performed with advanced copula-statistical models can help improve and develop novel agricultural adaptation strategies, not only in the selected study region but also globally, where extreme climate events pose a serious threat to the sustainability and survival of the agricultural industry. Finally, the evaluation of the effectiveness of the diversification strategies implemented in this study provides new evidence on whether risk-pooling methods could mitigate climate risks for the agricultural sector and, subsequently, help farmers prepare for uncertain climatic events.
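    To make the copula idea concrete, the following is a minimal sketch of fitting a Gaussian copula to two series, using synthetic stand-ins for a climate index and rainfall. The variable names and the choice of a Gaussian family are illustrative assumptions; the thesis itself may use other copula families.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        # Hypothetical stand-ins for a climate index and seasonal rainfall.
        n = 500
        soi = rng.normal(size=n)
        rain = np.exp(0.6 * soi + rng.normal(scale=0.8, size=n))  # skewed marginal

        def to_uniform(x):
            """Probability-integral transform via the empirical CDF (ranks)."""
            return stats.rankdata(x) / (len(x) + 1)

        # Step 1: strip the marginals, keeping only the dependence structure.
        u, v = to_uniform(soi), to_uniform(rain)

        # Step 2: fit a Gaussian copula by estimating the correlation of normal scores.
        z1, z2 = stats.norm.ppf(u), stats.norm.ppf(v)
        rho = np.corrcoef(z1, z2)[0, 1]
        print(f"Gaussian copula parameter rho = {rho:.3f}")

        # Step 3: simulate from the fitted copula; marginals can be re-attached
        # afterwards via the inverse CDF of any fitted marginal distribution.
        z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=1000)
        u_sim = stats.norm.cdf(z)  # uniform margins with the fitted dependence

    The key point is that the dependence parameter is estimated on rank-transformed data, so skewness or heavy tails in either marginal are handled separately from the dependence structure, exactly the separation the abstract describes.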

    The Causal Effect of Cognitive Abilities on Economic Behavior: Evidence from a Forecasting Task with Varying Cognitive Load

    We identify the causal effect of cognitive abilities on economic behavior in an experimental setting. Using a forecasting task with varying cognitive load, we identify the causal effect of working memory on subjects' forecasting performance, while also accounting for the effect of other cognitive, personality, and demographic characteristics. Establishing causality is important for understanding the nature of various decision-making errors, as well as for providing reliable policy implications in contexts such as student placement, personnel assignment, and public policy programs designed to augment the abilities of the disadvantaged. We further argue that establishing the causality of cognitive abilities is a prerequisite for studying their interaction with financial incentives, with implications for the design of efficient incentive schemes.
    Keywords: cognitive ability, causality, experiment, financial incentives, performance, working memory

    Conditioning the information in asset pricing

    This thesis analyzes different theoretical and empirical aspects of the use of information in asset pricing. As its main innovation, it extends the asset pricing literature by proposing a new, highly flexible technique for estimating the market's subjective distribution of future returns. Applying this technique to different problems, I address some long-standing puzzles in the literature. The contribution of this project is twofold: first, in line with the findings of Ross (2015) but from an entirely different perspective, I propose a new technique to estimate the market's subjective distribution of future returns using stock and options data jointly. Second, after studying the theoretical reasons behind the superiority of the proposed technique, I use it in different empirical applications.
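    The abstract does not spell out the estimator, but a standard building block for option-implied distributions, and the natural point of comparison for Ross (2015)-style recovery, is the Breeden and Litzenberger (1978) relation, which recovers the risk-neutral density $q$ of the terminal price from call prices $C(K, T)$:

        q(K) = e^{rT} \, \frac{\partial^2 C(K, T)}{\partial K^2} .

    Recovering the market's subjective distribution additionally requires separating this risk-neutral density from risk preferences, which is the problem Ross's recovery theorem addresses.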

    Essays on Machine Learning for Risk Analysis in Finance, Insurance and Energy

    This thesis provides research in the area of risk assessment. Specifically, it contributes to the fields of international finance and asset pricing, and to risk assessment in energy economics and transportation research. We present a generalization of spillover indexes to analyze interconnectedness at the firm level, and define the aggregate influence of a sector and a country on a firm. We also discuss which factors are relevant for predicting conditional quantiles across the distribution of returns and present a method for selecting factors based on investor interests. We study the performance of quantile regression against quantile time-series models. Finally, we present a regression framework that estimates VaR and CTE while ensuring non-crossing conditions for various quantile levels, and discuss results on energy and telematics data.
    Within the financial contagion literature, we aim to provide a better understanding of international spillovers and a method for visualizing which countries and sectors are their main drivers. We show that not all companies are driven by their own country or sector, which should be considered by investors and risk managers when assessing company risk and managing investments. We show that a large percentage of firms' stocks are driven by their country but that, contrary to the belief that country is the main driver of a company's return movements, a portion depends mainly on its sector. We note that 1) financial services and energy companies are positioned at the center of the network, and 2) northern and western Europe are highly interconnected, while eastern and southern Europe present lower spillovers. 3) For the British energy firms British Petroleum (BP) and Royal Dutch Shell, we find greater spillovers from France than from Great Britain itself. 4) We identify which non-Russian firms are most influenced by Russia, simulating a risk-management analysis in the event of turmoil such as the recent Ukrainian conflict. 5) We show that computing connectedness at the individual-firm level and aggregating spillovers afterwards improves spillover information relative to calculating spillovers directly from indexes. 6) Finally, we show that eastern Europe has become more interconnected with the rest of the continent since the Covid-19 pandemic.
    Regarding the asset pricing literature, we aim to understand the key elements that predict extreme quantile levels of a stock return. We study which factors in a 7-factor asset pricing specification are most relevant for each part of the distribution's tails. The 7-factor specification comprises the factors size, book-to-market, operating profitability, investment, momentum, market beta, and liquidity, and we present a method to add further factors depending on the investor's interests. We use quantile regression models to predict quantile levels 0.05, 0.25, 0.5, 0.75, and 0.95 of stock returns, using cross-sectional characteristics as covariates from the Open Source Cross-Sectional Asset Pricing Dataset (Chen and Zimmermann, 2021). We observe that the size factor changes sign from positive to negative as we move from predicting lower quantiles to higher quantiles. We show that extreme quantile-level estimations perform better than the median in terms of pseudo-R2. Regarding factor significance, the investment variable has lower predictive power than the other factors in terms of t-statistics at all tested quantile levels.
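    For reference, the quantile regressions above estimate, for each level $\tau$, coefficients minimising the standard pinball (check) loss of Koenker and Bassett:

        \hat{\beta}(\tau) = \arg\min_{\beta} \sum_{i} \rho_\tau\!\left(y_i - x_i'\beta\right),
        \qquad
        \rho_\tau(u) = u\left(\tau - \mathbf{1}\{u < 0\}\right),

    so that the fitted value $x_i'\hat{\beta}(\tau)$ estimates the conditional $\tau$-quantile of the return.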
    Liquidity gains significance as quantile levels increase. For book-to-market, profitability, momentum, and market beta, median predictions of returns are more significant than extreme quantile-level estimations; the opposite holds for size, which is more relevant for predicting extreme quantile levels of the returns' distribution. We observe that during crisis periods some factors lose significance: profitability and momentum for quantile levels 0.05 and 0.5, and size, book-to-market, and market beta for quantile level 0.5. We add additional factors individually and compare the weighted average pseudo-R2 obtained across all five quantile levels, with weights depending on the strategy the investor follows. For all strategies tested, the most relevant factors to add to the 7-factor specification are momentum seasonality and net operating assets. Further, for strategies focused on predicting the losers' tail (left part of the distribution), adding asset growth is recommended, whereas if the investor is interested in the winners' tail (right part of the distribution), the recommended addition is enterprise multiple.
    Within the asset pricing literature, we encourage the use of cross-sectional information over time-series factors for predicting extreme quantile levels of the right-hand side of the response distribution during periods of high volatility. This methodology does not restrict the information to panel-like datasets, which allows us to study more companies and to provide estimates for newly added firms. We use a quantile regression specification with cross-sectional characteristics obtained from the Open Source Cross-Sectional Asset Pricing Dataset (Chen and Zimmermann, 2021) and compare results against a CAViaR (Engle and Manganelli, 2004) specification. Fama and French (2020) show that average returns are better explained by cross-sectional factors than by time-series factors; we show that this holds only at extreme quantile levels during high-volatility periods. We show that individual-firm Hits (exceedances above VaR) calculated using time-series models tend to cluster, whereas using cross-sectional data avoids such concentration, and that cross-sectional information improves the prediction of Value-at-Risk (VaR) and Conditional Tail Expectations (CTE). We finally discuss changes to a firm's capital requirements: in general, using cross-sectional information implies higher capital requirements than using time-series information, while during turmoil periods the opposite happens and capital requirements decrease compared to the CAViaR specification.
    Within the area of non-crossing quantiles, we define the non-crossing property for VaR and CTE at several quantile levels. We define a regression framework based on neural networks that predicts VaR and CTE for several quantile levels while enforcing non-crossing conditions. The proposed neural network predicts VaR and CTE as positive excesses over the previous VaR and CTE. We prove that this definition satisfies the non-crossing property and show its improvement over the Monotone Composite Quantile Regression Neural Network (Cannon, 2018) and a linear quantile regression and CTE approach on energy consumption and telematics datasets. Using Murphy diagrams (Ehm et al., 2016), we show estimation improvements at extreme quantile levels in the right part of the distribution relative to the other tested models.
    We present examples with crossing predictions to demonstrate the infeasibility of such results in a business context, a problem the proposed model overcomes by construction.
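    As a minimal sketch of the positive-excess construction described above (assuming a softplus link for the excesses; the thesis' actual network architecture is not specified here):

        import numpy as np

        def softplus(x):
            """Numerically stable softplus; output is strictly positive."""
            return np.log1p(np.exp(-np.abs(x))) + np.maximum(x, 0.0)

        def noncrossing_quantiles(raw):
            """Map unconstrained network outputs to non-crossing quantile estimates.

            raw has shape (n_samples, n_levels): column 0 is the base quantile,
            and each further column is turned into a positive excess, so
            q_k = q_{k-1} + softplus(raw_k) can never cross q_{k-1}.
            """
            base = raw[:, :1]
            excess = softplus(raw[:, 1:])
            return np.concatenate([base, base + np.cumsum(excess, axis=1)], axis=1)

        # Even adversarial raw outputs yield monotone quantile estimates.
        raw = np.array([[0.10, -3.0, 0.5]])
        print(noncrossing_quantiles(raw))  # columns are non-decreasing

    The same device extends to CTE by predicting each CTE as the corresponding VaR plus a further positive excess, which is what makes the non-crossing property hold by construction rather than by penalty.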

    The Time Series Behavior of Intradaily Stock Prices.

    This dissertation investigates the time-series properties of intradaily stock prices. It provides a model of the return generating process that is capable of incorporating not only institutional constraints such as the specialist's bid-ask spread, but also the presence of dependence in the conditional variance. It extends the literature by introducing a conditional error distribution, the power-exponential, that adequately accounts for both leptokurtosis and peakedness in the empirical distribution. Evidence is presented suggesting that intradaily returns are best modelled as a mixture of distributions. Furthermore, it documents the inability of information proxies, such as volume or the number of trades, to account for the presence of autoregressive conditional heteroscedasticity in the data. Lastly, it examines the robustness of variance ratio statistics for testing the null hypothesis of a random walk in the presence of higher-order dependence.
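    For reference, one common parameterisation of the power-exponential (generalised error) density named above is

        f(x) = \frac{\beta}{2 \alpha \, \Gamma(1/\beta)} \exp\!\left[ -\left( \frac{|x - \mu|}{\alpha} \right)^{\beta} \right],

    where $\beta = 2$ recovers the normal and $\beta < 2$ yields the fatter tails and sharper peak documented here; and the variance ratio statistic tests a random walk via

        VR(q) = \frac{\operatorname{Var}(r_t + r_{t-1} + \cdots + r_{t-q+1})}{q \, \operatorname{Var}(r_t)},

    which equals one under the null of uncorrelated increments. (These are the standard forms; the dissertation's exact parameterisations may differ.)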

    Exploring the Law of Numbers: Evidence from China's Real Estate

    The renowned proverb 'numbers do not lie' underscores the reliability and insight that lie beneath numbers, a concept of undisputed importance in economics and finance. Despite the popularity of Benford's Law in first-digit analysis, its scope is not comprehensive enough for deciphering the laws of numbers. This paper delves into number laws by taking the financial statements of China's real estate sector as a representative case, quantitatively studying not only the first digit but also two further dimensions of numbers: frequency and length. The research outcomes go beyond reservations about data manipulation and open the door to discussions of number diversity and its usage insights. This study carries both economic significance and the capacity to foster a deeper comprehension of numerical phenomena.
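    For reference, the first-digit law invoked above states that under Benford's Law the leading digit $d$ occurs with probability

        P(D_1 = d) = \log_{10}\!\left(1 + \frac{1}{d}\right), \qquad d \in \{1, \dots, 9\},

    so '1' leads roughly 30.1% of conforming numbers while '9' leads only about 4.6%; deviations from these frequencies are the usual red flag for data manipulation.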

    Modelling for Pest Risk Analysis: Spread and Economic Impacts

    The introduction of invasive pests beyond their natural range is one of the main causes of the loss of biodiversity and leads to severe costs. Bioeconomic models that integrate biological invasion spread theory, economic impacts, and invasion management would be of great help in increasing the transparency of pest risk analysis (PRA) and providing more effective and efficient management of invasive pests. In this thesis, bioeconomic models of the management of invasive pests are developed and applied to three case studies. The main case looks at the invasion of Europe by the western corn rootworm (WCR), Diabrotica virgifera ssp. virgifera LeConte (Coleoptera: Chrysomelidae). A range of quantitative modelling approaches was employed: (i) dispersal kernels fitted to mark-release-recapture experimental data; (ii) optimal control models combined with info-gap theory; (iii) spatially explicit stochastic simulation models; and (iv) agent-based models. The application of the models yielded new insights into the management of invasive pests and the links between spread and economic impacts: (i) current official management measures to eradicate WCR were found to be ineffective; (ii) eradication and containment programmes that are economically optimal under no uncertainty were also found to be the policies most robustly immune to unacceptable outcomes under severe uncertainty; (iii) PRA focusing on a single invasive pest may lead to management alternatives that do not correspond to the optimal economic allocation once the other invasive pests sharing the same management budget are considered; (iv) the control of satellite colonies of an invasion occurring by stratified dispersal is ineffective when strong propagule pressure is generated from the main body of the invasion, an effect amplified by the presence of human-assisted long-distance dispersal; and (v) agent-based models were shown to be an adequate tool for integrating biological invasion spread models with economic analysis models.
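    As an illustration of approach (i), mark-release-recapture data of this kind are commonly fitted with a radially symmetric kernel such as the two-dimensional negative exponential,

        k(d) = \frac{1}{2 \pi \alpha^2} \, e^{-d/\alpha},

    where $d$ is the distance from the release point and $\alpha$ is a scale parameter governing the mean dispersal distance; the kernels actually fitted in the thesis may take a different functional form.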


    Risk and investment management in liberalized electricity markets
