
    Analysis of operational risk of banks – catastrophe modelling

    Nowadays, driven by regulation and internal motivations, financial institutions pay closer attention to their risks. Besides the previously dominant market and credit risk, the new trend is to handle operational risk systematically. Operational risk is the risk of loss resulting from inadequate or failed internal processes, people and systems, or from external events. We first present the basic features of operational risk, its modelling, and the regulatory approaches, and then analyse operational risk in a simulation model framework of our own. Our approach is based on the analysis of the latent risk process instead of the manifest risk process that is widely used in the risk literature. In our model the latent risk process is a stochastic process, the Ornstein-Uhlenbeck process, which is mean-reverting. In the model framework we define a catastrophe as the breach of a critical barrier by the process. We analyse the distributions of catastrophe frequency, severity and first hitting time, not only for a single process but for a dual process as well. Based on our first results we could not falsify the Poisson character of the frequency or the long-tailed character of the severity; the distribution of the first hitting time requires more sophisticated analysis. At the end of the paper we examine the advantages of simulation-based forecasting, and finally we conclude with possible directions for further research.
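    As an illustration of the latent-risk idea, here is a minimal sketch (not the paper's own code): an Euler-Maruyama discretisation of an Ornstein-Uhlenbeck process in which a catastrophe is recorded at each upward crossing of the critical barrier, together with the first hitting time. All parameter values (theta, mu, sigma, barrier) are assumptions chosen for the example.

        import numpy as np

        def simulate_ou_catastrophes(theta=1.0, mu=0.0, sigma=0.5,
                                     barrier=1.5, T=100.0, dt=0.01, seed=0):
            """Simulate dX = theta*(mu - X) dt + sigma dW and count barrier breaches."""
            rng = np.random.default_rng(seed)
            x, above = mu, False
            breaches, first_hit = 0, None
            for i in range(int(T / dt)):
                x += theta * (mu - x) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
                if x > barrier and not above:   # upward crossing = one catastrophe
                    breaches += 1
                    above = True
                    if first_hit is None:
                        first_hit = (i + 1) * dt
                elif x <= barrier:
                    above = False
            return breaches, first_hit

    Repeating the simulation over many seeds yields empirical distributions of the catastrophe frequency and the first hitting time, which is the kind of evidence on which the Poisson and long-tail findings above rest.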

    Modelling Censored Losses Using Splicing: a Global Fit Strategy With Mixed Erlang and Extreme Value Distributions

    In risk analysis, a global fit that appropriately captures the body and the tail of the distribution of losses is essential. Modelling the whole range of losses with a single standard distribution is usually very hard, and often impossible, due to the specific characteristics of the body and the tail of the loss distribution. A possible solution is to combine two distributions in a splicing model: a light-tailed distribution for the body, which covers light and moderate losses, and a heavy-tailed distribution for the tail to capture large losses. We propose a splicing model with a mixed Erlang (ME) distribution for the body and a Pareto distribution for the tail. This combines the flexibility of the ME distribution with the ability of the Pareto distribution to model extreme values. We extend our splicing approach to censored and/or truncated data; relevant examples of such data can be found in financial risk analysis. We illustrate the flexibility of this splicing model using practical examples from risk measurement.
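    A minimal sketch of the splicing construction, assuming a single Erlang component for the body (the simplest mixed Erlang), truncated at a splicing point t, with a Pareto tail above t; the weight p, Erlang parameters, splicing point and tail index below are illustrative choices, not fitted values from the paper.

        import numpy as np
        from scipy import stats

        def spliced_pdf(x, p=0.8, k=3, rate=1.0, t=5.0, alpha=2.0):
            """Density of a splicing model: Erlang body on [0, t], Pareto tail on (t, inf).

            p       -- probability mass assigned to the body
            k, rate -- Erlang shape and rate (a gamma with integer shape)
            t       -- splicing point
            alpha   -- Pareto tail index
            """
            x = np.atleast_1d(np.asarray(x, dtype=float))
            body = stats.gamma(a=k, scale=1.0 / rate)
            pdf = np.empty_like(x)
            lo = x <= t
            pdf[lo] = p * body.pdf(x[lo]) / body.cdf(t)                    # body, renormalised to [0, t]
            pdf[~lo] = (1 - p) * alpha * t**alpha / x[~lo] ** (alpha + 1)  # Pareto(alpha) above t
            return pdf

    Because each piece is renormalised on its own range and weighted by p and 1 - p, the spliced density integrates to one, and the tail index can be estimated separately from the body parameters.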

    Development of a virtual methodology based on physical and data-driven models to optimize engine calibration

    Virtual engine calibration exploiting fully physical plant models is the most promising solution for reducing the time and cost of the traditional calibration process based on experimental testing. However, accuracy issues in the estimation of pollutant emissions are still unresolved. In this context, the paper shows how a virtual test rig can be built by combining a fully physical engine model, featuring predictive combustion and NOx sub-models, with data-driven soot and particle number models. To this aim, a dedicated experimental campaign was carried out on a 1.6 liter EU6 diesel engine. A limited subset of the measured data was used to calibrate the predictive combustion and NOx sub-models. The measured data were also used to develop data-driven models that estimate soot and particulate emissions in terms of Filter Smoke Number (FSN) and Particle Number (PN), respectively. Inputs from engine calibration parameters (e.g., fuel injection timing and pressure) and combustion-related quantities computed by the physical model (e.g., combustion duration) were then merged. In this way, thanks to the combination of the two different datasets, the accuracy of the abovementioned models was improved by 20% for the FSN and 25% for the PN. The coupled physical and data-driven model was then used to optimize the engine calibration (fuel injection, air management) with the Non-dominated Sorting Genetic Algorithm. The calibration obtained with the virtual methodology was then adopted on the engine test bench, achieving a BSFC improvement of 10 g/kWh and a combustion noise reduction of 3.0 dB in comparison with the starting calibration.
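    The multi-objective step can be illustrated with a plain Pareto-dominance filter over candidate calibrations; this is a simplified stand-in for the Non-dominated Sorting Genetic Algorithm used in the paper, and the (BSFC, FSN) objective values below are synthetic.

        import numpy as np

        def pareto_front(objectives):
            """Indices of non-dominated candidates, all objectives to be minimised."""
            n = len(objectives)
            front = []
            for i in range(n):
                dominated = any(
                    np.all(objectives[j] <= objectives[i]) and
                    np.any(objectives[j] < objectives[i])
                    for j in range(n) if j != i
                )
                if not dominated:
                    front.append(i)
            return front

        # Synthetic stand-in for model evaluations: columns are (BSFC, FSN).
        rng = np.random.default_rng(0)
        candidates = rng.random((50, 2))
        print(pareto_front(candidates))

    In the full methodology each row would instead come from the coupled model: the physical model supplies combustion quantities, and the data-driven models turn them into FSN and PN estimates.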

    Bootstrap-based tolerance intervals for nested two-way random effects models

    Variance component, or random effects, models are frequently used by manufacturers to model the variance present in a manufacturing process. By applying tolerance intervals to variance component models, manufacturers are able to set upper and lower limits with which to monitor the variance within a process. Existing methods for constructing tolerance intervals are constrained by the requirement that the data be normally distributed. Recently, non-parametric bootstrap-based methods were developed by Deyzel (2018) to obtain α-expectation and (α, β) two-sided tolerance intervals for the two-way nested random effects model. Classical and non-parametric methods for obtaining tolerance intervals for the one-way random effects model have been assessed following Rebafka et al. (2007). The present study assesses and compares classical, Bayesian and non-parametric methods for obtaining tolerance intervals for the two-way nested random effects model under different assumptions about the underlying distribution. Results show that the non-parametric methods provide relatively narrow intervals and generally retain the nominal content and guarantee levels, regardless of the underlying distribution.
    Thesis (MSc) -- Faculty of Science, Mathematical Statistics, 202
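    A minimal sketch of a non-parametric bootstrap in this spirit, assuming a two-way nested layout y_ijk = mu + a_i + b_ij + e_ijk with effects estimated by group and subgroup means; this is an illustrative construction under those assumptions, not the Deyzel (2018) procedure itself.

        import numpy as np

        def bootstrap_tolerance_interval(y, groups, subgroups, alpha=0.90,
                                         beta=0.95, B=2000, seed=0):
            """Approximate (alpha, beta) tolerance interval via resampled effects.

            Group effects, subgroup effects and residuals are resampled with
            replacement; the endpoints are then widened to the beta-level
            bootstrap quantiles, so the interval covers a fraction alpha of the
            values in roughly a fraction beta of the bootstrap worlds.
            """
            rng = np.random.default_rng(seed)
            y, groups, subgroups = map(np.asarray, (y, groups, subgroups))
            mu = y.mean()
            a = {g: y[groups == g].mean() - mu for g in np.unique(groups)}
            # subgroup labels are assumed unique across groups
            b = {s: y[subgroups == s].mean() - mu - a[groups[subgroups == s][0]]
                 for s in np.unique(subgroups)}
            e = y - np.array([mu + a[g] + b[s] for g, s in zip(groups, subgroups)])
            a_vals = np.array(list(a.values()))
            b_vals = np.array(list(b.values()))
            lowers, uppers = [], []
            for _ in range(B):
                ystar = (mu + rng.choice(a_vals, size=y.size)
                            + rng.choice(b_vals, size=y.size)
                            + rng.choice(e, size=y.size))
                lowers.append(np.quantile(ystar, (1 - alpha) / 2))
                uppers.append(np.quantile(ystar, 1 - (1 - alpha) / 2))
            return np.quantile(lowers, 1 - beta), np.quantile(uppers, beta)

    Being built from resampled effects rather than a normal model, the interval inherits whatever shape the estimated effects and residuals have, which is what lets it hold up under non-normal underlying distributions.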

