
    Double Whammy - How ICT Projects are Fooled by Randomness and Screwed by Political Intent

    The cost-benefit analysis formulates the holy trinity of project management objectives: cost, schedule, and benefits. As our previous research has shown, ICT projects deviate from their initial cost estimate by more than 10% in 8 out of 10 cases. Academic research has argued that Optimism Bias and Black Swan Blindness cause forecasts to fall short of actual costs. First, optimism bias has been linked to the effects of deception and delusion, which arise from taking the inside view and ignoring distributional information when making decisions. Second, we have argued before that Black Swan Blindness makes decision-makers ignore outlying events even when decisions and judgements are based on the outside view. Using a sample of 1,471 ICT projects with a total value of USD 241 billion, we answer the question: can we show the different effects of Normal Performance, Delusion, and Deception? We calculated the cumulative distribution function (CDF) of (actual - forecast) / forecast. Our results show that the CDF changes at two tipping points: the first transforms an exponential function into a Gaussian bell curve; the second transforms the bell curve into a power-law distribution with exponent 2. We argue that these results show that project performance up to the first tipping point is politically motivated, and that project performance above the second tipping point indicates that project managers and decision-makers are fooled by random outliers because they are blind to thick tails. We then show that Black Swan ICT projects are a significant source of uncertainty to an organisation and that management needs to be aware of this exposure.
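    The central quantity here is the relative cost overrun (actual - forecast) / forecast and its empirical CDF. The Python sketch below shows one way to compute that CDF and crudely check whether the upper tail is consistent with a power law of exponent 2; the synthetic data, tail threshold, and distributional choices are placeholder assumptions, since the 1,471-project dataset is not reproduced here.

```python
# Minimal sketch (not the paper's code): empirical CDF of the relative
# cost overrun (actual - forecast) / forecast and a crude look at how heavy
# its upper tail is. All data below are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(0)
forecast = rng.lognormal(mean=3.0, sigma=1.0, size=1471)           # hypothetical forecasts
actual = forecast * (1.0 + 0.3 * rng.standard_t(df=2, size=1471))  # heavy-tailed overruns

overrun = np.sort((actual - forecast) / forecast)   # quantity whose CDF the paper studies
n = overrun.size
sf = 1.0 - (np.arange(1, n + 1) - 0.5) / n          # empirical survival function 1 - CDF

# On log-log axes a power-law tail with exponent 2 shows up as a line of slope -2.
mask = overrun > 1.0                                # arbitrary illustrative tail cut
if mask.sum() > 10:
    slope = np.polyfit(np.log(overrun[mask]), np.log(sf[mask]), 1)[0]
    print(f"estimated tail exponent ~ {-slope:.2f}")
```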

    A sequential sampling strategy for extreme event statistics in nonlinear dynamical systems

    We develop a method for the evaluation of extreme event statistics associated with nonlinear dynamical systems, using a small number of samples. From an initial dataset of design points, we formulate a sequential strategy that provides the 'next-best' data point (set of parameters) that, when evaluated, results in improved estimates of the probability density function (pdf) for a scalar quantity of interest. The approach utilizes Gaussian process regression to perform Bayesian inference on the parameter-to-observation map describing the quantity of interest. We then approximate the desired pdf along with uncertainty bounds utilizing the posterior distribution of the inferred map. The 'next-best' design point is sequentially determined through an optimization procedure that selects the point in parameter space that maximally reduces uncertainty between the estimated bounds of the pdf prediction. Since the optimization process utilizes only information from the inferred map, it has minimal computational cost. Moreover, the special form of the metric emphasizes the tails of the pdf. The method is practical for systems where the dimensionality of the parameter space is of moderate size, i.e., of order O(10). We apply the method to estimate the extreme event statistics for a very high-dimensional system with millions of degrees of freedom: an offshore platform subjected to three-dimensional irregular waves. It is demonstrated that the developed approach can accurately determine the extreme event statistics using a limited number of samples.
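    As a rough illustration of the kind of loop the abstract describes, the following Python sketch fits a Gaussian process to a toy parameter-to-observation map, pushes the parameter distribution through the posterior mean to estimate the output pdf, and picks the next design point with a simplified acquisition rule that favours poorly resolved, low-density (tail) outputs. The toy map, the acquisition score, and all numerical choices are assumptions, not the authors' actual criterion.

```python
# Minimal sketch of a sequential, GP-based strategy for estimating the pdf of a
# scalar quantity of interest q(theta) with emphasis on its tails. This is an
# illustrative simplification, not the authors' implementation.
import numpy as np
from scipy.stats import gaussian_kde
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(1)

def quantity_of_interest(theta):
    """Expensive parameter-to-observation map (toy stand-in)."""
    return np.sin(3 * theta) + 0.5 * theta**2

# Initial design points, a candidate pool, and samples of the parameter distribution.
theta_train = rng.uniform(-3, 3, size=6)
q_train = quantity_of_interest(theta_train)
candidates = np.linspace(-3, 3, 400)
theta_samples = rng.normal(0.0, 1.0, size=5000)

for _ in range(10):                                  # sequential refinement loop
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)
    gp.fit(theta_train.reshape(-1, 1), q_train)

    # Push p(theta) through the posterior mean for a cheap pdf estimate of q.
    q_mean = gp.predict(theta_samples.reshape(-1, 1))
    pdf_hat = gaussian_kde(q_mean)

    # Simplified acquisition: largest posterior std, downweighted by how probable
    # the predicted output already is, which crudely emphasizes the tails.
    mu, std = gp.predict(candidates.reshape(-1, 1), return_std=True)
    score = std / (pdf_hat(mu) + 1e-6)
    theta_next = candidates[np.argmax(score)]

    theta_train = np.append(theta_train, theta_next)
    q_train = np.append(q_train, quantity_of_interest(theta_next))

print("final design points:", np.sort(theta_train))
```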

    Extreme event quantification in dynamical systems with random components

    A central problem in uncertainty quantification is how to characterize the impact that our incomplete knowledge about models has on the predictions we make from them. This question naturally lends itself to a probabilistic formulation, by making the unknown model parameters random with given statistics. Here this approach is used in concert with tools from large deviation theory (LDT) and optimal control to estimate the probability that some observables in a dynamical system go above a large threshold after some time, given the prior statistical information about the system's parameters and/or its initial conditions. Specifically, it is established under which conditions such extreme events occur in a predictable way, as the minimizer of the LDT action functional. It is also shown how this minimization can be numerically performed in an efficient way using tools from optimal control. These findings are illustrated on the examples of a rod with random elasticity pulled by a time-dependent force, and the nonlinear Schrödinger equation (NLSE) with random initial conditions.
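    For standard Gaussian parameters, the LDT recipe sketched in the abstract reduces to minimizing the action I(z) = |z|^2 / 2 over the set where the observable exceeds the threshold, with the probability estimated as exp(-I(z*)). Below is a minimal Python sketch of that constrained minimization, using a toy observable in place of the rod and NLSE examples; everything here is an illustrative assumption.

```python
# Minimal sketch of the large-deviation estimate: for z ~ N(0, I), the
# probability that an observable F(z) exceeds a large threshold a is dominated
# by the minimizer of I(z) = |z|^2 / 2 over the set {F(z) >= a}.
import numpy as np
from scipy.optimize import NonlinearConstraint, minimize

def observable(z):
    """Toy nonlinear parameter-to-observable map (assumption, not the paper's)."""
    return z[0] + 0.5 * z[1]**2

a = 4.0                                              # large threshold
action = lambda z: 0.5 * np.dot(z, z)                # LDT rate function for N(0, I)
constraint = NonlinearConstraint(observable, a, np.inf)

res = minimize(action, x0=np.array([1.0, 1.0]),
               method="trust-constr", constraints=[constraint])
z_star = res.x                                       # most likely way to exceed a

print("optimal fluctuation z*      :", z_star)
print("log-probability ~ -I(z*)    :", -action(z_star))
```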

    Quantifying Systemic Risk


    Data-Driven Methods and Applications for Optimization under Uncertainty and Rare-Event Simulation

    For most decisions or system designs in practice, there is a chance of severe hazards or system failures that can be catastrophic. The occurrence of such hazards is usually uncertain, so it is important to measure and analyze the associated risks. As a powerful tool for estimating risks, rare-event simulation techniques are used to improve the efficiency of estimation when the risk occurs with an extremely small probability. Furthermore, one can utilize the risk measurements to achieve better decisions or designs. This can be done by modeling the task as a chance-constrained optimization problem, which optimizes an objective with a controlled risk level. However, recent problems in practice have become more data-driven and hence brought new challenges to the existing literature in these two domains. In this dissertation, we discuss challenges and remedies in data-driven problems for rare-event simulation and chance-constrained problems. We propose a robust-optimization-based framework for approaching chance-constrained optimization problems under a data-driven setting. We also analyze the impact of tail uncertainty in data-driven rare-event simulation tasks. On the other hand, due to recent breakthroughs in machine learning techniques, the development of intelligent physical systems, e.g. autonomous vehicles, has been actively investigated. Since these systems can cause catastrophes to public safety, the evaluation of their machine learning components and overall system performance is crucial. This dissertation also covers problems arising in the evaluation of such systems. We propose an importance sampling scheme for estimating rare events defined by machine learning predictors. Lastly, we discuss an application project on evaluating the safety of autonomous vehicle driving algorithms.
    PhD dissertation, Industrial & Operations Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/163270/1/zhyhuang_1.pd
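    To make the rare-event simulation idea concrete, here is a minimal, textbook-style Python sketch of importance sampling for P(X > a) with Gaussian X, using a mean-shifted proposal and likelihood-ratio reweighting. It illustrates only the general mechanism; the dissertation's scheme for machine-learning-defined rare events is more involved.

```python
# Importance sampling for the rare event P(X > a), X ~ N(0, 1): sample from a
# mean-shifted proposal N(a, 1) and reweight by the likelihood ratio.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
a, n = 4.0, 100_000

# Crude Monte Carlo: almost no samples land in the rare set.
crude = (rng.standard_normal(n) > a).mean()

# Importance sampling: draw from N(a, 1) and reweight.
y = rng.normal(loc=a, scale=1.0, size=n)
weights = norm.pdf(y) / norm.pdf(y, loc=a)
is_estimate = np.mean(weights * (y > a))

print(f"exact      : {norm.sf(a):.3e}")
print(f"crude MC   : {crude:.3e}")
print(f"importance : {is_estimate:.3e}")
```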

    Basel II and Operational Risk: Implications for risk measurement and management in the financial sector

    This paper proposes a methodology to analyze the implications of the Advanced Measurement Approach (AMA) for the assessment of operational risk put forward by the Basel II Accord. The methodology relies on an integrated procedure for the construction of the distribution of aggregate losses, using internal and external loss data. It is illustrated on a 2x2 matrix of two selected business lines and two event types, drawn from a database of 3000 losses obtained from a large European banking institution. For each cell, the method calibrates three truncated distribution functions for the body of internal data, the tail of internal data, and external data. When the dependence structure between aggregate losses and the non-linear adjustment of external data are explicitly taken into account, the regulatory capital computed with the AMA method proves to be substantially lower than with less sophisticated approaches allowed by the Basel II Accord, although the effect is not uniform across business lines and event types. In a second phase, our models are used to estimate the effects of operational risk management actions on bank profitability, through a measure of RAROC adapted to operational risk. The results suggest that substantial savings can be achieved through active management techniques, although the estimated effect of a reduction of the number, frequency or severity of operational losses crucially depends on the calibration of the aggregate loss distributions.
    Keywords: operational risk management, Basel II, Advanced Measurement Approach, copulae, external data, EVT, RAROC, cost-benefit analysis.
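    As a rough illustration of how an aggregate loss distribution and a 99.9% capital figure can be built for a single business line / event type cell, the Python sketch below combines a Poisson frequency with a lognormal severity by Monte Carlo. The frequency and severity parameters are placeholders, not values calibrated from the paper's loss database, and the copula-based dependence and external-data adjustments central to the paper are omitted.

```python
# Minimal sketch of an aggregate loss distribution for one cell: Poisson
# frequency, lognormal severity, capital read off as the 99.9% quantile.
# All parameters below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)
n_years = 100_000
lam = 25.0                      # expected number of losses per year (assumed)
mu, sigma = 9.0, 2.0            # lognormal severity parameters (assumed)

counts = rng.poisson(lam, size=n_years)
aggregate = np.array([rng.lognormal(mu, sigma, size=k).sum() for k in counts])

expected_loss = aggregate.mean()
capital_999 = np.quantile(aggregate, 0.999)    # Basel II style 99.9% quantile
print(f"expected annual loss : {expected_loss:,.0f}")
print(f"99.9% quantile       : {capital_999:,.0f}")
```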

    Characterization of the frequency of extreme events by the Generalized Pareto Distribution

    Based on recent results in extreme value theory, we use a new technique for the statistical estimation of distribution tails. Specifically, we use the Gnedenko-Pickands-Balkema-de Haan theorem, which gives a natural limit law for peak-over-threshold values in the form of the Generalized Pareto Distribution (GPD). While this approach has proven useful in finance, insurance, and hydrology, we investigate here the earthquake energy distribution described by the Gutenberg-Richter seismic moment-frequency law, analyzing shallow earthquakes (depth h < 70 km) in the Harvard catalog over the period 1977-2000 in 18 seismic zones. The GPD is found to approximate the tails of the seismic moment distributions quite well above moment-magnitude mW = 5.3, and no statistically significant regional difference is found between subduction and transform seismic zones. We confirm that the b-value is very different in mid-ocean ridges compared to other zones (b = 1.50 ± 0.09 versus b = 1.00 ± 0.05, corresponding to a power-law exponent close to 1 versus 2/3) with very high statistical confidence. We propose a physical mechanism for this, contrasting slow healing ruptures in mid-ocean ridges with fast healing ruptures in other zones. Deviations from the GPD at the very end of the tail are detected in the sample containing earthquakes from all major subduction zones (sample size of 4985 events). We propose a new statistical test of the significance of such deviations based on the bootstrap method. The number of events deviating from the GPD tails in the studied data sets (15-20 at most) is not sufficient for determining the functional form of those deviations. Thus, it is practically impossible to give preference to one of the previously suggested parametric families describing the ends of the tails of seismic moment distributions.
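    A minimal peaks-over-threshold fit of the kind described here can be sketched in a few lines of Python: exceedances over a high threshold are fitted with the Generalized Pareto Distribution. Synthetic Pareto data stand in for the seismic moment catalogue and the threshold choice is arbitrary; this illustrates the technique, not the paper's analysis.

```python
# Peaks-over-threshold sketch: fit a GPD to exceedances over a high threshold,
# as suggested by the Gnedenko-Pickands-Balkema-de Haan theorem.
import numpy as np
from scipy.stats import genpareto, pareto

rng = np.random.default_rng(4)
data = pareto.rvs(b=1.5, size=5000, random_state=rng)   # stand-in for seismic moments

threshold = np.quantile(data, 0.95)                      # high threshold (arbitrary)
exceedances = data[data > threshold] - threshold         # peak-over-threshold values

# Fit the GPD with the location pinned at zero.
shape, loc, scale = genpareto.fit(exceedances, floc=0.0)
print(f"GPD shape xi = {shape:.3f}  (tail exponent ~ {1.0 / shape:.2f})")
print(f"GPD scale    = {scale:.3f}")
```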