Analysis of operational risk of banks – catastrophe modelling
Driven by regulation and internal motivations, financial institutions nowadays pay closer attention to their risks. Alongside the traditionally dominant market and credit risks, the trend is to handle operational risk systematically. Operational risk is the risk of loss resulting from inadequate or failed internal processes, people and systems, or from external events. We first present the basic features of operational risk and its modelling and regulatory approaches, and then analyse operational risk in a simulation framework of our own development. Our approach is based on the analysis of the latent risk process rather than the manifest risk process that is widely used in the risk literature. In our model the latent risk process is a stochastic, mean-reverting process, the Ornstein-Uhlenbeck process. Within this framework we define a catastrophe as the breach of a critical barrier by the process. We analyse the distributions of catastrophe frequency, severity and first hitting time, not only for a single process but for a dual process as well. Based on our first results we could not reject the Poisson character of the frequency distribution or the long-tailed character of the severity distribution; the distribution of the first hitting time requires more sophisticated analysis. At the end of the paper we examine the advantages of simulation-based forecasting and conclude with possible directions for further research.
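The abstract does not include the simulation code; the sketch below is a minimal illustration of the setup it describes: an Ornstein-Uhlenbeck latent risk process simulated with an Euler-Maruyama scheme, with a "catastrophe" recorded whenever the process crosses a critical barrier from below. All parameter values (theta, mu, sigma, the barrier level) are illustrative assumptions, not the paper's calibration.

```python
import numpy as np

def simulate_ou_catastrophes(theta=1.0, mu=0.0, sigma=0.5, barrier=1.0,
                             x0=0.0, T=100.0, dt=0.01, seed=0):
    """Simulate an Ornstein-Uhlenbeck latent risk process
    dX = theta*(mu - X) dt + sigma dW  (Euler-Maruyama scheme)
    and record each breach of the critical barrier as one catastrophe."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    x = x0
    first_hit = None           # first time the barrier is breached
    catastrophe_times = []     # times of up-crossings of the barrier
    for i in range(1, n + 1):
        x_new = x + theta * (mu - x) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        # an up-crossing of the barrier counts as one catastrophe event
        if x < barrier <= x_new:
            t = i * dt
            catastrophe_times.append(t)
            if first_hit is None:
                first_hit = t
        x = x_new
    return first_hit, catastrophe_times

first_hit, events = simulate_ou_catastrophes()
print(f"first time to hit: {first_hit}, number of catastrophes: {len(events)}")
```

Repeating such runs many times yields empirical distributions of the catastrophe count per period (to check the Poisson hypothesis) and of the first hitting time.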
Modelling Censored Losses Using Splicing: a Global Fit Strategy With Mixed Erlang and Extreme Value Distributions
In risk analysis, a global fit that appropriately captures the body and the
tail of the distribution of losses is essential. Modelling the whole range of
the losses using a standard distribution is usually very hard and often
impossible due to the specific characteristics of the body and the tail of the
loss distribution. A possible solution is to combine two distributions in a
splicing model: a light-tailed distribution for the body which covers light and
moderate losses, and a heavy-tailed distribution for the tail to capture large
losses. We propose a splicing model with a mixed Erlang (ME) distribution for
the body and a Pareto distribution for the tail. This combines the flexibility
of the ME distribution with the ability of the Pareto distribution to model
extreme values. We extend our splicing approach for censored and/or truncated
data. Relevant examples of such data can be found in financial risk analysis.
We illustrate the flexibility of this splicing model using practical examples
from risk measurement.
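The abstract gives no formulas; the sketch below is a hedged illustration of a spliced density with a mixed Erlang (mixture of integer-shape gamma) body up to a splicing point and a Pareto tail beyond it. The weights, Erlang shapes, scale, splicing point and tail index are invented for illustration, not fitted values from the paper.

```python
import numpy as np
from scipy.stats import gamma

def spliced_density(x, body_weight=0.95, erlang_weights=(0.6, 0.4),
                    erlang_shapes=(2, 5), erlang_scale=1.0,
                    splice_point=10.0, pareto_alpha=1.8):
    """Spliced density: mixed Erlang body on (0, t], Pareto tail on (t, inf).

    f(x) = body_weight * f_ME(x) / F_ME(t)                        for x <= t
         = (1 - body_weight) * alpha * t^alpha / x^(alpha + 1)    for x > t
    """
    x = np.asarray(x, dtype=float)
    t = splice_point
    # mixed Erlang density, and its CDF evaluated at the splicing point
    f_me = sum(w * gamma.pdf(x, a=k, scale=erlang_scale)
               for w, k in zip(erlang_weights, erlang_shapes))
    F_me_t = sum(w * gamma.cdf(t, a=k, scale=erlang_scale)
                 for w, k in zip(erlang_weights, erlang_shapes))
    body = body_weight * f_me / F_me_t
    tail = (1 - body_weight) * pareto_alpha * t**pareto_alpha / x**(pareto_alpha + 1)
    return np.where(x <= t, body, tail)

losses = np.array([0.5, 3.0, 9.0, 15.0, 50.0])
print(spliced_density(losses))
```

In the paper's setting the parameters would instead be estimated from (possibly censored or truncated) loss data, for example with an EM algorithm for the mixed Erlang component.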
Development of a virtual methodology based on physical and data-driven models to optimize engine calibration
Virtual engine calibration exploiting fully-physical plant models is the most promising solution for reducing the time and cost of the traditional calibration process based on experimental testing. However, accuracy issues in the estimation of pollutant emissions remain unresolved. In this context, the paper shows how a virtual test rig can be built by combining a fully-physical engine model, featuring predictive combustion and NOx sub-models, with data-driven soot and particle number models. To this aim, a dedicated experimental campaign was carried out on a 1.6 liter EU6 diesel engine. A limited subset of the measured data was used to calibrate the predictive combustion and NOx sub-models. The measured data were also used to develop data-driven models that estimate soot and particulate emissions in terms of Filter Smoke Number (FSN) and Particle Number (PN), respectively. Inputs from engine calibration parameters (e.g., fuel injection timing and pressure) and combustion-related quantities computed by the physical model (e.g., combustion duration) were then merged. Thanks to the combination of the two datasets, the accuracy of the above-mentioned models was improved by 20% for the FSN and 25% for the PN. The coupled physical and data-driven model was then used to optimize the engine calibration (fuel injection, air management) with the Non-dominated Sorting Genetic Algorithm. The calibration obtained with the virtual methodology was then adopted on the engine test bench, achieving a BSFC improvement of 10 g/kWh and a combustion noise reduction of 3.0 dB compared with the starting calibration.
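The abstract does not detail the optimization step; as an illustrative sketch of the non-dominated sorting idea behind the Non-dominated Sorting Genetic Algorithm, the snippet below extracts the Pareto-optimal set from a handful of hypothetical candidate calibrations scored on two competing objectives (for instance BSFC and a soot proxy, both minimized). The candidate values are invented, not the paper's results.

```python
import numpy as np

def pareto_front(objectives):
    """Return indices of non-dominated candidates (all objectives minimized).

    A candidate i is dominated if some other candidate j is at least as good
    on every objective and strictly better on at least one."""
    obj = np.asarray(objectives, dtype=float)
    n = obj.shape[0]
    keep = []
    for i in range(n):
        dominated = any(
            np.all(obj[j] <= obj[i]) and np.any(obj[j] < obj[i])
            for j in range(n) if j != i
        )
        if not dominated:
            keep.append(i)
    return keep

# hypothetical candidate calibrations: (BSFC [g/kWh], soot proxy [FSN])
candidates = [(205.0, 1.2), (210.0, 0.8), (202.0, 1.6), (212.0, 0.9), (215.0, 0.7)]
print("non-dominated candidates:", pareto_front(candidates))
```

A full NSGA-II run would additionally rank dominated candidates into successive fronts and use crowding distance to preserve diversity along the front.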
Bayesian inference and failure analysis for risk assessment in quality engineering
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University London.
Failure is the state of not achieving a desired or intended goal. Failure analysis
planning in the context of risk assessment is an approach that helps to reduce total
cost, increase production capacity, and produce higher-quality products. One of
the most common issues that businesses confront is defective products. This issue
results not only in monetary loss but also in a loss of reputation. Companies must
improve their production quality and reduce the quantity of faulty products in order
to continue operating in a healthy and profitable manner in today's highly competitive
environment. At the same time, the ongoing COVID-19 pandemic, designated a Public
Health Emergency of International Concern by the World Health Organization, has
thrown the world's natural order into disarray, and the demand for quality control is
rapidly increasing. Failure analysis is thus a useful tool for identifying common
failures, their likely causes and their impact on the health system, as well as for
devising strategies to limit COVID-19 transmission. It is now more vital than ever to
enhance failure analysis methods.
The traditional FMEA (failure mode and effects analysis) is one of the most
widely used approaches for identifying and classifying failure modes (FMs) and
failure causes (FCs). It is a risk analysis tool for coping with possible failures and is
widely used in reliability engineering, safety engineering and quality engineering.
To prioritize the risks of different failure modes, FMEA uses the risk priority number
(RPN), which is the product of three risk measures: severity (S), occurrence (O) and
detection (D). Traditional FMEA, however, has drawbacks: it cannot cope with
uncertain failure data such as subjective expert evaluations or the conditionality of
failure events, the RPN is highly subjective, comparing different RPNs is difficult,
and potential errors may be ignored in the conventional FMEA process. To overcome
these limitations, I present an integrated Bayesian approach to FMEA in this thesis.
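As a small illustration of the conventional RPN ranking that the thesis seeks to improve on, the sketch below scores a few hypothetical failure modes on 1-10 scales for S, O and D and orders them by RPN = S * O * D. The failure modes and ratings are invented for illustration.

```python
# Hypothetical failure modes rated on 1-10 scales for severity (S),
# occurrence (O) and detection (D); RPN = S * O * D.
failure_modes = {
    "probe failure":       {"S": 7, "O": 3, "D": 4},
    "mechanical failure":  {"S": 8, "O": 2, "D": 6},
    "environmental drift": {"S": 5, "O": 6, "D": 5},
}

for name, r in sorted(failure_modes.items(),
                      key=lambda kv: kv[1]["S"] * kv[1]["O"] * kv[1]["D"],
                      reverse=True):
    rpn = r["S"] * r["O"] * r["D"]
    print(f"{name}: RPN = {rpn}")
```

Identical RPNs can arise from very different (S, O, D) combinations, which is one of the subjectivity issues the Bayesian reformulation is meant to address.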
In this proposed approach, I worked with experts in quality engineering and
used Bayesian inference to estimate the FMEA risk parameters S, O and D. The
proposed approach is intended to become more practical and less subjective as more
data are added. Bayesian statistics is based on the Bayesian interpretation of
probability, in which probability expresses a degree of belief or knowledge about an
event. It addresses uncertainties that frequentist statistics leaves open, such as the
distribution of contributing factors and the implications of using specific
distributions, and it makes the prior probability explicit. A prior can be derived
from previous information, such as earlier experiments, but it can also come from
the purely subjective assessment of a trained subject-matter expert. Frequentist
(classical) statistics has several limitations, including a lack of uncertainty
information in predictions, no built-in regularisation, and no consideration of prior
knowledge. Thanks to the availability of powerful computers and new algorithms,
Bayesian methods have seen increased use within statistics in the twenty-first
century, and this thesis highlights the effective use of Bayesian analyses to address
the shortcomings of the current FMEA with a revamped Bayesian FMEA. As a
demonstration of the approach, three case studies are presented.
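The thesis code is not reproduced in the abstract; as a minimal, hedged sketch of estimating one FMEA risk parameter by Bayesian inference, the snippet below updates a Beta prior on a failure-occurrence probability with observed count data and maps the posterior median onto a 1-10 occurrence rating. The prior, the data and the rating thresholds are illustrative assumptions.

```python
from scipy.stats import beta

# Beta-Binomial model for the probability p of a failure occurring per item:
# prior p ~ Beta(a0, b0); data: k failures observed in n inspected items.
a0, b0 = 1.0, 9.0    # weakly informative prior (prior mean of p around 0.1)
k, n = 4, 200        # illustrative inspection data

posterior = beta(a0 + k, b0 + n - k)
p_median = posterior.median()

# Map the posterior median onto a 1-10 occurrence (O) rating via
# illustrative probability thresholds.
thresholds = [0.001, 0.002, 0.005, 0.01, 0.02, 0.05, 0.1, 0.2, 0.5]
O = 1 + sum(p_median > t for t in thresholds)

print(f"posterior median of p: {p_median:.4f}, occurrence rating O = {O}")
```

The same idea extends to S and D by placing priors on the corresponding quantities and updating them with expert judgement and process data.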
The first case study is a Bayesian risk assessment approach based on a modified SEIR
(susceptible-exposed-infectious-recovered) model for the transmission dynamics of
COVID-19 with exponentially distributed waiting times between compartments. The
effective reproduction number is estimated from laboratory-confirmed cases and death
data using Bayesian inference, and the impact of the community spread of COVID-19
across the United Kingdom is analysed. The effective reproduction number is the
average number of infections caused by one case of an infectious disease in a
population that is not entirely susceptible. FMEA is then applied to evaluate the
effectiveness of the measures taken to manage the COVID-19 pandemic; the focus is on
COVID-19 infections, so the failure mode is taken to be positive cases. The model is
applied to COVID-19 data and shows the effectiveness of the interventions adopted to
control the epidemic by reducing the effective reproduction number. The risk measures
were estimated from the case fatality rate (S), the posterior median of the effective
reproduction number (O) and the current corrective measures used in government
policies (D).
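The thesis estimates the effective reproduction number from case and death data by Bayesian inference; as a much simpler, hedged sketch of the underlying compartmental model, the snippet below integrates a deterministic SEIR model with exponential waiting times using a forward Euler scheme and tracks the effective reproduction number R_t = R0 * S(t)/N. All parameter values and the population size are illustrative assumptions, not the fitted UK values.

```python
import numpy as np

def seir(beta=0.3, sigma=1/5.2, gamma=1/7.0, N=1_000_000,
         E0=100, I0=50, days=200, dt=0.1):
    """Deterministic SEIR model (exponential waiting times in E and I),
    integrated with a forward Euler scheme; returns R_t over time."""
    S, E, I, R = N - E0 - I0, E0, I0, 0.0
    R0 = beta / gamma                       # basic reproduction number
    Rt_series = []
    for _ in range(int(days / dt)):
        new_exposed = beta * S * I / N * dt     # S -> E
        new_infectious = sigma * E * dt         # E -> I
        new_recovered = gamma * I * dt          # I -> R
        S -= new_exposed
        E += new_exposed - new_infectious
        I += new_infectious - new_recovered
        R += new_recovered
        Rt_series.append(R0 * S / N)            # effective reproduction number
    return np.array(Rt_series)

Rt = seir()
print(f"R_t at start: {Rt[0]:.2f}, R_t at end: {Rt[-1]:.2f}")
```

In the Bayesian setting, beta (and hence R_t) would be treated as unknown and given a prior, with the posterior obtained from the observed case and death counts.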
The second case study is a Bayesian risk assessment of a coordinate measuring
machine (CMM) process using failure mode, effects and criticality analysis (FMECA)
and an augmented form error model. The form error is defined as the deviation of a
manufactured part from its design or ideal shape, and it is a key characteristic to
evaluate in quality engineering and manufacturing. The form error is presented as
a probabilistic model using symmetric unimodal distributions. Bayesian inference
is then used to identify influence factors associated with the measurement process
due to form error, environmental, human and random effects. A risk assessment is
then performed by combining Bayesian inference, FMECA and conformity testing, to
quantify and minimise the risk of wrong decisions. In the FMECA, the focus was on
the CMM measurement process, and I identified four major FMs that can occur: probe,
mechanical, environmental and measurement performance failure. Eleven FCs were
also observed, each of which was linked to one of the four FMs. The risk measures
were estimated from the posterior probability of the failure causes associated with
the CMM measurement process (O), the severity of a specific consumer's risk (S) and
the detectability of failures from the posterior standard deviation of the form error
model (D).
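To make the conformity-testing side of this case study concrete, here is a minimal sketch, under assumed values, of a consumer's-risk style calculation: the true form error is modelled with a symmetric unimodal (normal) distribution, the CMM adds measurement noise, and Monte Carlo simulation estimates the probability that an out-of-tolerance part is nevertheless accepted. The tolerance, process spread and measurement uncertainty are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

tol = 0.05              # tolerance limit on the form error (assumed, in mm)
sigma_process = 0.03    # spread of the true form error (assumed)
sigma_meas = 0.01       # CMM measurement uncertainty (assumed)
n = 1_000_000

true_error = rng.normal(0.0, sigma_process, n)            # symmetric unimodal model
measured = true_error + rng.normal(0.0, sigma_meas, n)    # CMM measurement

accepted = np.abs(measured) <= tol
nonconforming = np.abs(true_error) > tol

# consumer's risk: the part is out of tolerance but the measurement accepts it
consumers_risk = np.mean(accepted & nonconforming)
print(f"estimated consumer's risk: {consumers_risk:.4%}")
```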
The third case study is a Bayesian risk assessment of a CMM measurement
process using an autoregressive (AR) form error model and a combined Fault tree
analysis (FTA) and FMEA approach to predict significant failure modes and causes.
The main idea is to estimate and predict the form error based on CMM data using
Gibbs sampling and to analyse the impact of the CMM measurement process on product
conformity testing. The FTA is used to compare the actual and predicted form error
data from the Bayesian AR plot to determine the likelihood of the CMM measurement
process failing using binary data. The acquired binary data is then classified into
four states (true positive, true negative, false positive, and false negative) using
a confusion matrix, which is subsequently utilized to calculate key classification
measures (i.e., error rate, prediction rate, prevalence rate, sensitivity rate, etc.).
The classification measures were then used to assess the FMEA risk measures S, O and
D, which are critical for determining the RPN and making decisions.
Analytical and numerical methods are used in all case studies to highlight the
practical implications of the findings and are meant to be practical without requiring
complex computing. The proposed methodologies can find applications in numerous
disciplines and broadly across quality engineering.
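As a small sketch of the final step of the third case study above, the snippet below derives classification measures from a binary confusion matrix of actual versus predicted failure indications, the kind of quantities that are then mapped onto the FMEA risk measures S, O and D. The binary data are invented, and the exact list of measures used in the thesis may differ.

```python
import numpy as np

def classification_measures(actual, predicted):
    """Classification measures from a binary confusion matrix.
    Inputs are 0/1 arrays, where 1 means the measurement process is failing."""
    actual = np.asarray(actual)
    predicted = np.asarray(predicted)
    tp = np.sum((actual == 1) & (predicted == 1))
    tn = np.sum((actual == 0) & (predicted == 0))
    fp = np.sum((actual == 0) & (predicted == 1))
    fn = np.sum((actual == 1) & (predicted == 0))
    total = tp + tn + fp + fn
    return {
        "error rate": (fp + fn) / total,
        "accuracy": (tp + tn) / total,
        "prevalence": (tp + fn) / total,
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }

# hypothetical binary outcomes (1 = failure) for actual vs. predicted form error
actual    = [0, 0, 1, 0, 1, 0, 0, 1, 0, 0]
predicted = [0, 1, 1, 0, 0, 0, 0, 1, 0, 0]
print(classification_measures(actual, predicted))
```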
Bootstrap-based tolerance intervals for nested two-way random effects models
Variance component, or random effects, models are frequently used by manufacturers to model the variance present in a manufacturing process. By applying tolerance intervals to variance component models, manufacturers are able to set upper and lower limits to monitor the variance within a process. Existing methods for constructing tolerance intervals are constrained by the necessity for data to be normally distributed. Recently, non-parametric bootstrap-based methods were developed by Deyzel (2018) to obtain α-expectation and (α, β) two-sided tolerance intervals for the two-way nested random effects model. Classical and non-parametric methods for obtaining tolerance intervals for the one-way random effects model have been assessed in accordance with Rebafka et al. (2007). The present study assesses and compares classical, Bayesian and non-parametric methods for obtaining tolerance intervals for the two-way nested random effects model under different assumptions about the underlying distribution. Results show that the non-parametric methods provided relatively narrow intervals and generally retained the nominal content and guarantee levels, regardless of the underlying distribution.
Thesis (MSc) -- Faculty of Science, Mathematical Statistics, 202
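The abstract contains no code; as a hedged sketch of the non-parametric bootstrap idea for a two-sided (content, confidence) tolerance interval on nested data, the snippet below resamples batches and sub-batches with replacement and takes empirical quantiles of each resample. The simulated data, the resampling scheme and the content/confidence levels are illustrative assumptions, not the specific methods of Deyzel (2018).

```python
import numpy as np

rng = np.random.default_rng(2)

# simulate a two-way nested layout: batches -> sub-batches -> replicates
n_batch, n_sub, n_rep = 10, 4, 5
batch_eff = rng.normal(0, 1.0, n_batch)
sub_eff = rng.normal(0, 0.5, (n_batch, n_sub))
data = (batch_eff[:, None, None] + sub_eff[:, :, None]
        + rng.normal(0, 0.3, (n_batch, n_sub, n_rep)))

def bootstrap_tolerance_interval(data, content=0.95, conf=0.90, B=2000):
    """Non-parametric bootstrap two-sided tolerance interval sketch:
    resample batches (and sub-batches within them) with replacement,
    take the central `content` quantiles of each resample, and report
    conservative limits over the bootstrap replicates at level `conf`."""
    lowers, uppers = [], []
    nb, ns, _ = data.shape
    for _ in range(B):
        b_idx = rng.integers(0, nb, nb)          # resample batches
        s_idx = rng.integers(0, ns, (nb, ns))    # resample sub-batches within each
        resampled = np.concatenate(
            [data[b, s_idx[i]].ravel() for i, b in enumerate(b_idx)])
        lowers.append(np.quantile(resampled, (1 - content) / 2))
        uppers.append(np.quantile(resampled, 1 - (1 - content) / 2))
    lower = np.quantile(lowers, 1 - conf)   # conservative in the lower tail
    upper = np.quantile(uppers, conf)       # conservative in the upper tail
    return lower, upper

print(bootstrap_tolerance_interval(data))
```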