
    Value-at-Risk (VaR) Application at Hypothetical Portfolios in Jakarta Islamic Index

    The paper is an exploratory study applying the method of historical simulation, based on the concept of Value at Risk (VaR), to hypothetical portfolios drawn from the Jakarta Islamic Index (JII). Value at Risk is a tool for measuring a portfolio's exposure to market risk. We construct four portfolios based on the frequency with which companies appeared in the Jakarta Islamic Index over the period 1 January 2008 to 2 August 2010: portfolio A has 12 companies, portfolio B has 9, portfolio C has 6 and portfolio D has 4. We set the initial investment at the equivalent of USD 100 and use an exchange rate of 1 USD = Rp 9,500. Historical simulation applied to the four portfolios shows a significant increase in risk in 2008 compared with 2009 and 2010. The number of companies in a portfolio also affects its VaR relative to portfolios with fewer members, and the 99% confidence level shows a larger loss than the 95% level. Historical simulation proves to be the simplest method for capturing the rise in risk in the Jakarta Islamic Index during the 2008 global crisis.
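    The VaR estimation described above follows the standard historical-simulation recipe: compute historical portfolio returns, take an empirical quantile, and scale by the invested amount. Below is a minimal Python sketch of that recipe for an equally weighted portfolio; the `prices` DataFrame, the equal weighting and the one-day horizon are illustrative assumptions, not details taken from the paper.

```python
import numpy as np
import pandas as pd

def historical_var(prices: pd.DataFrame, investment: float = 100.0,
                   confidence: float = 0.99) -> float:
    """One-day historical-simulation VaR, in currency units, for an
    equally weighted portfolio of the columns in `prices`."""
    returns = prices.pct_change().dropna()                # daily simple returns
    weights = np.full(prices.shape[1], 1.0 / prices.shape[1])
    port_returns = returns.to_numpy() @ weights           # portfolio return series
    # VaR is the loss at the (1 - confidence) quantile of the historical returns
    var_return = -np.quantile(port_returns, 1.0 - confidence)
    return investment * var_return

# e.g. var_99 = historical_var(prices, confidence=0.99)   # larger than the 95% figure
```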

    Models for Forecasting Value at Risk: A comparison of the predictive ability of different VaR models to capture market losses incurred during the 2020 pandemic recession

    Dissertation presented as a partial requirement for obtaining a Master's degree in Statistics and Information Management, specialization in Risk Analysis and Management. The purpose of this study is two-fold. First, it aims to provide a theoretical overview of the most widely adopted methods for forecasting Value-at-Risk (VaR). Second, through a practical implementation, it proposes a methodology to compare and evaluate the predictive ability of different parametric, non-parametric and semi-parametric models to capture the market losses incurred during the COVID-19 pandemic recession of 2020. To evaluate these models, a two-stage backtesting procedure based on accuracy statistical tests and loss functions is applied. VaR forecasts are evaluated over a volatile and a stable forecasting period. The results suggest that, for the volatile period, the Extreme Value Theory with a peaks-over-threshold (EVT-POT) approach produces the most accurate VaR forecasts across all methodologies. The Filtered Historical Simulation (FHS), Volatility Weighted Historical Simulation (VWHS) and the Glosten, Jagannathan and Runkle GARCH with skewed generalized error distribution (GJR GARCH–SGED) models also produce satisfactory forecasts. Other parametric approaches, namely GARCH and EWMA, though less accurate, also produce reliable results. Furthermore, the overall performance of all models improves significantly during the stable forecasting period. For instance, the Historical Simulation with exponentially decreasing weights (BRW HS), one of the worst performers during the volatile forecasting period, produces the most accurate VaR forecasts, with the lowest penalty scores, during the stable forecasting period. Lastly, it was also found that as the level of conservativeness of a model increases, overestimation of the actual incurred risk tends to be a recurrent event.
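    The two-stage backtesting idea mentioned above typically starts with an unconditional-coverage test of the exceedance count. The sketch below shows one common choice, Kupiec's proportion-of-failures test, purely as an illustration; the arrays `realized_returns` and `var_forecasts` and the 1% level are assumptions, and the dissertation's exact test battery and loss functions are not reproduced here.

```python
import numpy as np
from scipy.stats import chi2

def kupiec_pof(realized_returns: np.ndarray, var_forecasts: np.ndarray,
               alpha: float = 0.01) -> tuple[int, float]:
    """Return the number of VaR exceedances and the Kupiec POF test p-value.

    `var_forecasts` holds positive loss forecasts; an exceedance occurs when
    the realized return is worse than the negative of the forecast VaR.
    """
    exceed = realized_returns < -var_forecasts
    n, x = len(exceed), int(exceed.sum())
    pi_hat = x / n
    # Likelihood ratio of the observed exceedance rate vs the nominal rate alpha
    log_l0 = (n - x) * np.log(1 - alpha) + x * np.log(alpha)
    log_l1 = ((n - x) * np.log(1 - pi_hat) + x * np.log(pi_hat)) if 0 < x < n else 0.0
    lr = -2.0 * (log_l0 - log_l1)
    return x, 1.0 - chi2.cdf(lr, df=1)       # small p-value: reject correct coverage
```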

    Defining the hundred year flood: a Bayesian approach for using historic data to reduce uncertainty in flood frequency estimates

    This paper describes a Bayesian statistical model for estimating flood frequency by combining uncertain annual maximum (AMAX) data from a river gauge with estimates of flood peak discharge from various historic sources that predate the period of instrument records. Such historic flood records promise to extend the time series data needed to reduce uncertainty in return period estimates for extreme events, but the heterogeneity and uncertainty of historic records make them difficult to use alongside the Flood Estimation Handbook and other standard methods for generating flood frequency curves from gauge data. Using the flow of the River Eden in Carlisle, Cumbria, UK as a case study, this paper develops a Bayesian model for combining historic flood estimates since 1800 with gauge data since 1967 to estimate the probability of low-frequency flood events for the area, taking account of uncertainty in the discharge estimates. Results show a reduction in 95% confidence intervals of roughly 50% for annual exceedance probabilities of less than 0.0133 (return periods over 75 years) compared to standard flood frequency estimation methods using solely systematic data. Sensitivity analysis shows the model is sensitive to two model parameters, both of which concern the historic (pre-systematic) period of the time series. This highlights the importance of adequate consideration of historic channel and floodplain changes, and of possible bias in estimates of historic flood discharges. The next steps required to roll out this Bayesian approach for operational flood frequency estimation at other sites are also discussed.
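    A sketch of the kind of likelihood that lets gauged AMAX data and censored historic records inform a single flood-frequency distribution is given below, using a GEV model and a Bayesian posterior with flat priors. The perception threshold, record lengths and the GEV choice are illustrative assumptions and do not reproduce the paper's model for the River Eden.

```python
import numpy as np
from scipy.stats import genextreme

def log_posterior(theta, amax, historic_peaks, threshold, historic_years):
    """Log posterior for GEV parameters given gauged and historic flood data.

    theta          : (shape, loc, scale) in scipy's genextreme convention
    amax           : systematic annual-maximum discharges (m^3/s)
    historic_peaks : estimated discharges of recorded historic floods
    threshold      : perception threshold above which historic floods
                     were reliably recorded
    historic_years : length of the historic (pre-gauge) period in years
    """
    shape, loc, scale = theta
    if scale <= 0:
        return -np.inf                       # flat prior with positivity constraint
    dist = genextreme(shape, loc=loc, scale=scale)
    ll = dist.logpdf(amax).sum()             # gauged years: exact densities
    ll += dist.logpdf(historic_peaks).sum()  # historic floods with discharge estimates
    # remaining historic years: all we know is the annual maximum stayed below threshold
    n_below = historic_years - len(historic_peaks)
    ll += n_below * dist.logcdf(threshold)
    return ll                                # flat log-prior contributes only a constant

# The posterior can be explored with any MCMC sampler; return-period quantiles
# and their credible intervals then follow from the posterior draws.
```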

    The price of risk in construction projects: contingency approximation model (CAM)

    Little attention has been focussed on a precise definition and evaluation mechanism for project management risk specifically related to contractors. When bidding, contractors traditionally price risks using unsystematic approaches. The high business failure rate our industry records may indicate that the current unsystematic mechanisms contractors use for building up contingencies are inadequate. The reluctance of some contractors to include a price for risk in their tenders when bidding for work competitively may also not be a useful approach. Here, instead, we first define the meaning of contractor contingency, and then develop a straightforward quantitative technique that contractors can use to estimate a price for project risk. The model helps contractors analyse their exposure to project risks and express that risk in monetary terms for management action. When bidding for work, they can then decide how to allocate contingencies strategically in a way that balances risk and reward.
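    As a generic illustration of pricing risk into a bid, the sketch below computes an expected monetary value and a Monte Carlo percentile for a small, hypothetical risk register; the register entries, the lognormal impact spread and the 80th-percentile choice are assumptions, not the CAM formulation itself.

```python
import numpy as np

rng = np.random.default_rng(42)

# (description, probability of occurrence, most-likely cost impact in GBP)
risk_register = [
    ("ground conditions worse than survey", 0.30, 80_000),
    ("key subcontractor insolvency",        0.10, 150_000),
    ("design change after award",           0.25, 60_000),
]

def simulate_contingency(register, n_sims=100_000, percentile=80):
    totals = np.zeros(n_sims)
    for _, prob, impact in register:
        occurs = rng.random(n_sims) < prob
        # spread around the most-likely impact to reflect estimating uncertainty
        cost = rng.lognormal(mean=np.log(impact), sigma=0.25, size=n_sims)
        totals += occurs * cost
    expected = sum(p * i for _, p, i in register)   # simple expected-monetary-value baseline
    return expected, np.percentile(totals, percentile)

emv, p80 = simulate_contingency(risk_register)
print(f"Expected value of risk: £{emv:,.0f}; 80th-percentile contingency: £{p80:,.0f}")
```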

    Multi-hazard risk assessment using GIS in urban areas: a case study for the city of Turrialba, Costa Rica

    In the framework of the UNESCO-sponsored project on “Capacity Building for Natural Disaster Reduction”, a case study was carried out on multi-hazard risk assessment of the city of Turrialba, located in the central part of Costa Rica. The city, with a population of 33,000 people, lies in an area that is regularly affected by flooding, landslides and earthquakes. In order to assist the local emergency commission and the municipality, a pilot study was carried out on the development of a GIS-based system for risk assessment and management. The work used an orthophoto as a basis, on which all buildings, land parcels and roads within the city and its direct surroundings were digitized, resulting in a digital parcel map for which a number of hazard and vulnerability attributes were collected in the field. Based on historical information, a GIS database was generated and used to produce flood depth maps for different return periods. The seismic hazard was determined with a modified version of the Radius approach, and the landslide hazard was determined from the historical landslide inventory and a number of factor maps using a statistical approach. The cadastral database of the city was combined with the various hazard maps for different return periods to generate vulnerability maps for the city. To determine the cost of the elements at risk, a distinction was made between the cost of the constructions and the cost of the contents of the buildings. The cost maps were combined with the vulnerability maps and the hazard maps, per hazard type and return period, to obtain graphs of probability versus potential damage. The resulting database can serve as a tool for local authorities to determine the effect of particular mitigation measures, for which a cost-benefit analysis can be carried out. The database also serves as an important tool in the disaster preparedness phase of disaster management at the municipal level.
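    The final combination step, turning per-return-period damage estimates into probability-versus-loss curves and an expected annual loss per hazard, can be sketched as follows; the return periods and damage figures are placeholders, not the Turrialba results.

```python
import numpy as np

# per-hazard damage (USD) estimated for the modelled return periods (years)
return_periods = np.array([5, 10, 25, 50, 100])
damage = {
    "flood":      np.array([0.2e6, 0.9e6, 2.5e6, 4.0e6, 6.5e6]),
    "landslide":  np.array([0.0,   0.1e6, 0.4e6, 1.0e6, 1.8e6]),
    "earthquake": np.array([0.0,   0.0,   0.8e6, 3.0e6, 9.0e6]),
}
annual_exceedance = 1.0 / return_periods       # probability axis of the risk curve

def expected_annual_loss(probabilities, losses):
    """Trapezoidal integration of the loss-exceedance curve."""
    order = np.argsort(probabilities)          # ascending exceedance probability
    p, loss = probabilities[order], losses[order]
    return float(np.sum(np.diff(p) * (loss[:-1] + loss[1:]) / 2.0))

for hazard, d in damage.items():
    print(f"{hazard}: expected annual loss ≈ ${expected_annual_loss(annual_exceedance, d):,.0f}")
```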

    The end of the waterfall: Default resources of central counterparties

    Central counterparties (CCPs) have become pillars of the new global financial architecture following the financial crisis of 2008. The key role of CCPs in mitigating counterparty risk and contagion has in turn cast them as systemically important financial institutions whose eventual failure may have potentially serious consequences for financial stability, and has prompted discussions on CCP risk management standards and on safeguards for the recovery and resolution of CCPs in case of failure. We contribute to the debate on CCP default resources by focusing on the incentives that the CCP's loss allocation rules generate for the CCP and its members, and by discussing how the design of these rules may be used to align incentives in favor of outcomes that benefit financial stability. After reviewing the ingredients of the CCP loss waterfall and various proposals for loss recovery provisions for CCPs, we examine the risk management incentives created by different layers of the waterfall and discuss possible approaches for validating its design. We emphasize the importance of CCP stress tests and argue that such stress tests need to account for the interconnectedness of CCPs through common members and cross-margin agreements. A key proposal is that capital charges on assets held against CCP default funds should depend on the quality of the CCP's risk management, as assessed through independent stress tests.
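    For readers unfamiliar with the waterfall structure discussed above, the sketch below allocates a default loss across successive layers in seniority order. Layer names and sizes are illustrative, and recovery tools such as cash calls or variation-margin gains haircutting are deliberately left out.

```python
def allocate_default_loss(loss, layers):
    """Consume the loss layer by layer; return per-layer usage and any shortfall."""
    usage = {}
    remaining = loss
    for name, size in layers:                 # order matters: seniority of the waterfall
        used = min(remaining, size)
        usage[name] = used
        remaining -= used
        if remaining <= 0:
            break
    return usage, remaining                   # remaining > 0 means resources exhausted

waterfall = [
    ("defaulter initial margin",        120.0),
    ("defaulter default-fund share",     30.0),
    ("CCP skin-in-the-game",             25.0),
    ("survivors' default fund",         200.0),
]

usage, shortfall = allocate_default_loss(loss=310.0, layers=waterfall)
print(usage, "shortfall:", shortfall)
```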

    How Should We Prioritise Incident Management Deployment?

    With perpetual strains on resources and traffic increasing at a steady rate, transport agencies need to evaluate the road network and make informed decisions about which roads carry the greatest risk of adverse impacts, and therefore which roads have the strongest case for intervention. This is especially true for Intelligent Transport Systems (ITS), and in particular for incident management services, where decision-making techniques are immature relative to conventional road engineering. The problem is compounded by the fact that common evaluation tools are insufficient for ITS applications: historical information on ITS impacts is not always readily available, impacts are not transferable, and impacts are incremental to the individual user. For these reasons, this paper presents a new network evaluation framework for incident management deployment. The framework analyses the road network and prioritises roads with respect to two factors: the historical risk associated with incidents, and the cost-effectiveness of implementation. To assess the historical risk, the framework first converts social, economic and environmental impacts to a common monetary base, enabling the incident impacts to be added together; the resulting economic impact values must be treated as relative measures, not absolute costs. The second part of the framework assesses the historical risk, taking into account both the consequence of an event, measured in the economic terms described above, and the probability of an event occurring based on historical information. The third part applies a cost-effectiveness ratio comparing the reduced impacts with the project costs. The resulting economic risk analysis integrates safety, reliability and environmental impacts, providing an integrated decision-making tool for proactive ITS deployment.
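    A toy version of the framework's two scoring factors, monetised historical risk and a cost-effectiveness ratio, is sketched below. All figures, road names and the assumed 30% impact-reduction factor are illustrative, not values from the paper.

```python
roads = [
    # (name, incidents per year, avg monetised impact per incident, deployment cost)
    ("Motorway A",  45, 38_000, 350_000),
    ("Arterial B",  80, 12_000, 200_000),
    ("Ring Road C", 20, 55_000, 400_000),
]

IMPACT_REDUCTION = 0.30   # assumed share of incident impact avoided by ITS deployment

def prioritise(road_data):
    scored = []
    for name, freq, impact, cost in road_data:
        annual_risk = freq * impact                        # probability x consequence
        benefit = annual_risk * IMPACT_REDUCTION           # avoided impact per year
        scored.append((benefit / cost, annual_risk, name)) # cost-effectiveness ratio
    return sorted(scored, reverse=True)

for ratio, risk, name in prioritise(roads):
    print(f"{name}: annual risk ≈ {risk:,.0f}, benefit/cost ≈ {ratio:.2f}")
```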

    Unstable Slope Management Program

    INE/AUTC 11.1

    Estimating the historical and future probabilities of large terrorist events

    Quantities with right-skewed distributions are ubiquitous in complex social systems, including political conflict, economics and social networks, and these systems sometimes produce extremely large events. For instance, the 9/11 terrorist events produced nearly 3000 fatalities, nearly six times more than the next largest event. But was this enormous loss of life statistically unlikely given modern terrorism's historical record? Accurately estimating the probability of such an event is complicated by the large fluctuations in the empirical distribution's upper tail. We present a generic statistical algorithm for making such estimates, which combines semi-parametric models of tail behavior and a nonparametric bootstrap. Applied to a global database of terrorist events, we estimate the worldwide historical probability of observing at least one 9/11-sized or larger event since 1968 to be 11-35%. These results are robust to conditioning on global variations in economic development, domestic versus international events, the type of weapon used and a truncated history that stops at 1998. We then use this procedure to make a data-driven statistical forecast of at least one similar event over the next decade. Comment: Published at http://dx.doi.org/10.1214/12-AOAS614 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org).
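    The general recipe, a semi-parametric tail model combined with a nonparametric bootstrap, can be illustrated as follows. Here a simple continuous power law above a fixed threshold stands in for the paper's ensemble of tail models, and the synthetic catalogue, threshold and target size are placeholder assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def prob_at_least_one(severities, xmin, target, n_boot=2000):
    """Bootstrap the probability of at least one event >= `target` among
    len(severities) events, with a power-law tail fitted above `xmin`."""
    estimates = []
    n = len(severities)
    for _ in range(n_boot):
        sample = rng.choice(severities, size=n, replace=True)   # nonparametric bootstrap
        tail = sample[sample >= xmin]
        if len(tail) < 2:
            continue
        p_tail = len(tail) / n                                  # chance an event reaches the tail
        alpha = 1.0 + len(tail) / np.sum(np.log(tail / xmin))   # maximum-likelihood exponent
        p_big = p_tail * (target / xmin) ** (1.0 - alpha)       # P(single event >= target)
        estimates.append(1.0 - (1.0 - p_big) ** n)              # P(at least one in n events)
    return np.percentile(estimates, [5, 50, 95])

# synthetic heavy-tailed catalogue standing in for per-event fatality counts
events = (rng.pareto(2.0, size=13_000) + 1.0) * 10.0
print(prob_at_least_one(events, xmin=50.0, target=3000.0))
```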