
    Modeling operational risk data reported above a time-varying threshold

    Typically, operational risk losses are reported above a threshold. Fitting data reported above a constant threshold is a well-known and well-studied problem. In practice, however, the losses are scaled for business and other factors before fitting, so the threshold varies across the scaled data sample. A reporting level may also change when a bank changes its reporting policy. We present both maximum likelihood and Bayesian Markov chain Monte Carlo approaches to fitting the frequency and severity loss distributions using data with a time-varying threshold. Estimation of the annual loss distribution accounting for parameter uncertainty is also presented.
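    A minimal sketch of the severity side of this setup under maximum likelihood, assuming lognormal severities and one reporting threshold per loss; the data, names, and parameter values are illustrative, not taken from the paper. Each observed loss contributes a truncated density f(x_i)/(1 - F(u_i)) to the likelihood:

```python
import numpy as np
from scipy import stats, optimize

# Hypothetical data: losses are only observed when above a time-varying
# reporting threshold u[i] (one threshold per loss).
rng = np.random.default_rng(0)
u = rng.choice([5e3, 1e4], size=400)               # thresholds vary over time
x_full = stats.lognorm(s=1.5, scale=2e4).rvs(400, random_state=rng)
observed = x_full > u
x, u = x_full[observed], u[observed]

def neg_loglik(theta):
    """Truncated lognormal log-likelihood: log f(x; mu, sigma) - log(1 - F(u))."""
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)                      # keep sigma positive
    dist = stats.lognorm(s=sigma, scale=np.exp(mu))
    return -(dist.logpdf(x) - dist.logsf(u)).sum()

res = optimize.minimize(neg_loglik, x0=[np.log(2e4), 0.0], method="Nelder-Mead")
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
print(f"mu = {mu_hat:.3f}, sigma = {sigma_hat:.3f}")
```

    The frequency side would be handled analogously, e.g. a Poisson rate thinned by the reporting probability 1 - F(u_t) in each period.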

    A Bayesian spatial random effects model characterisation of tumour heterogeneity implemented using Markov chain Monte Carlo (MCMC) simulation

    The focus of this study is the development of a statistical modelling procedure for characterising intra-tumour heterogeneity, motivated by recent clinical literature indicating that a variety of tumours exhibit a considerable degree of genetic spatial variability. A formal spatial statistical model has been developed and used to characterise the structural heterogeneity of a number of supratentorial primitive neuroectodermal tumours (PNETs), based on diffusion-weighted magnetic resonance imaging (DWI). Particular attention is paid to the spatial dependence of diffusion close to the tumour boundary, in order to determine whether the data provide statistical evidence to support the proposition that water diffusivity in the boundary region of some tumours exhibits a deterministic dependence on distance from the boundary, in excess of an underlying random 2D spatial heterogeneity in diffusion. Tumour spatial heterogeneity measures were derived from the diffusion parameter estimates obtained using a Bayesian spatial random effects model. The analyses were implemented using Markov chain Monte Carlo (MCMC) simulation. Posterior predictive simulation was used to assess the adequacy of the statistical model. The main observations are that the previously reported relationship between diffusion and boundary proximity remains observable and achieves statistical significance after adjusting for an underlying random 2D spatial heterogeneity in the diffusion model parameters. A comparison of the magnitude of the boundary-distance effect with the underlying random 2D boundary heterogeneity suggests that both are important sources of variation in the vicinity of the boundary. No consistent pattern emerges from a comparison of the boundary and core spatial heterogeneity, with no indication of a consistently greater level of heterogeneity in one region compared with the other. The results raise the possibility that DWI might provide a surrogate marker of intra-tumour genetic regional heterogeneity, which would provide a powerful tool with applications in both patient management and in cancer research.
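    The MCMC machinery behind such an analysis can be illustrated with a toy random-walk Metropolis sampler for a boundary-distance effect on diffusivity. This is a deliberately simplified stand-in for the paper's model (no 2D spatial random effects), with synthetic data and illustrative priors:

```python
import numpy as np

# Toy model: mean diffusivity depends on distance-to-boundary d, plus
# residual noise. We sample the posterior of (alpha, beta, log_sigma)
# with random-walk Metropolis; all values below are synthetic.
rng = np.random.default_rng(1)
d = rng.uniform(0, 5, 200)                              # distance from boundary
y = 1.0 + 0.3 * np.exp(-d) + rng.normal(0, 0.1, 200)    # synthetic diffusivity

def log_post(theta):
    alpha, beta, log_sigma = theta
    sigma = np.exp(log_sigma)
    resid = y - (alpha + beta * np.exp(-d))
    loglik = -0.5 * np.sum((resid / sigma) ** 2) - len(y) * np.log(sigma)
    logprior = -0.5 * (alpha**2 + beta**2) / 100        # vague N(0, 10^2) priors
    return loglik + logprior

theta, lp, samples = np.array([0.0, 0.0, 0.0]), None, []
lp = log_post(theta)
for _ in range(20000):
    prop = theta + rng.normal(0, 0.02, 3)               # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:            # Metropolis accept step
        theta, lp = prop, lp_prop
    samples.append(theta.copy())
post = np.array(samples[5000:])                         # discard burn-in
print("posterior mean boundary effect (beta):", post[:, 1].mean())
```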

    Malware in the Future? Forecasting of Analyst Detection of Cyber Events

    There have been extensive efforts in government, academia, and industry to anticipate, forecast, and mitigate cyber attacks. A common approach is time-series forecasting of cyber attacks based on data from network telescopes, honeypots, and automated intrusion detection/prevention systems. This research has uncovered key insights such as systematicity in cyber attacks. Here, we propose an alternative perspective on this problem by forecasting attacks that are analyst-detected and analyst-verified occurrences of malware. We call these instances of malware cyber event data. Specifically, our dataset consists of analyst-detected incidents from a large operational Computer Security Service Provider (CSSP) for the U.S. Department of Defense, which rarely relies only on automated systems. The dataset comprises weekly counts of cyber events over approximately seven years. Since all cyber events were validated by analysts, our dataset is unlikely to contain the false positives that are often endemic in other data sources. Further, the higher-quality data could be used for a number of purposes, including resource allocation, estimation of security resources, and the development of effective risk-management strategies. We used a Bayesian state space model for forecasting and found that events one week ahead could be predicted. To quantify bursts, we used a Markov model. Our findings of systematicity in analyst-detected cyber attacks are consistent with previous work using other sources. The advance information provided by a forecast may help with threat awareness by providing a probable value and range for future cyber events one week ahead. Other potential applications of cyber event forecasting include proactive allocation of resources and capabilities for cyber defense (e.g., analyst staffing and sensor configuration) in CSSPs. Enhanced threat awareness may improve cybersecurity.
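    As a rough illustration of one-week-ahead forecasting with a state space model, the sketch below runs a Kalman filter for a local-level model on synthetic weekly counts. The paper's actual Bayesian model is richer; the variances q and r and the counts here are assumed values:

```python
import numpy as np

# Local-level state space model: level_t = level_{t-1} + w_t,  y_t = level_t + v_t.
# Each loop iteration emits the one-step-ahead (next-week) forecast before
# updating on the new observation.
def kalman_one_step(y, q=4.0, r=25.0):
    m, p = y[0], 1e4                        # diffuse-ish initial state
    forecasts = []
    for obs in y:
        m_pred, p_pred = m, p + q           # predict step
        forecasts.append(m_pred)            # one-step-ahead forecast
        k = p_pred / (p_pred + r)           # Kalman gain
        m = m_pred + k * (obs - m_pred)     # update step
        p = (1 - k) * p_pred
    return np.array(forecasts)

weekly_counts = np.array([12, 15, 9, 14, 22, 18, 17, 25, 20, 16], float)
print(kalman_one_step(weekly_counts))
```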

    Status and Future Perspectives for Lattice Gauge Theory Calculations to the Exascale and Beyond

    In this and a set of companion whitepapers, the USQCD Collaboration lays out a program of science and computing for lattice gauge theory. These whitepapers describe how calculations using lattice QCD (and other gauge theories) can aid the interpretation of ongoing and upcoming experiments in particle and nuclear physics, as well as inspire new ones.

    Decision support model for condition-based maintenance of production equipment

    Doctoral thesis for the PhD degree in Industrial and Systems Engineering. Introduction: This thesis describes a methodology that combines a Bayesian control chart with CBM (Condition-Based Maintenance) to develop a new integrated model. In maintenance management, making appropriate and accurate decisions is a challenging task, and well-designed CBM models are valuable for supporting such decisions. The integration of a Bayesian control chart with CBM is regarded as an intelligent model and a suitable strategy for forecasting item failures while keeping maintenance costs effective. CBM models lower inventory costs for spare parts, reduce unplanned outages, and minimize the risk of catastrophic failure, avoiding the high penalties associated with production losses or delays and increasing availability. However, CBM models require new approaches and the integration of new types of information into maintenance modelling to improve their results. Objective: The thesis aims to develop a new methodology, based on a Bayesian control chart, for predicting item failures by simultaneously incorporating two types of data: key quality-control measurements and equipment condition parameters. In other words, the research questions are directed at lowering maintenance costs for real process control. Method: The mathematical approach used to develop an optimal Condition-Based Maintenance policy includes Weibull analysis to verify the Markov property, the delay-time concept for deterioration modelling, PSO, and Monte Carlo simulation. These models are used to find the upper control limit and the monitoring interval that minimize the maintenance cost function. Result: The main contribution of this thesis is that the proposed model outperforms previous models, supporting the hypothesis that simultaneously using data on equipment condition parameters and quality-control measurements improves the effectiveness of the integrated Bayesian control chart model for Condition-Based Maintenance.
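    To illustrate the kind of optimization the Method describes, the toy Monte Carlo search below scans a grid of (upper control limit, monitoring interval) pairs and picks the pair with the lowest simulated cost rate. The gamma deterioration process and all cost figures are invented for the sketch and do not come from the thesis:

```python
import numpy as np

# Illustrative costs and failure level (not from the thesis).
rng = np.random.default_rng(2)
C_INSP, C_PREV, C_FAIL, FAIL_LEVEL = 10.0, 100.0, 1000.0, 30.0

def cost_rate(limit, interval, n_runs=2000):
    """Simulate inspect-until-replace cycles; return mean cost per unit time."""
    total_cost = total_time = 0.0
    for _ in range(n_runs):
        level, t = 0.0, 0.0
        while True:
            t += interval
            level += rng.gamma(2.0, 1.0) * interval   # stochastic deterioration
            total_cost += C_INSP                      # inspection at each visit
            if level >= FAIL_LEVEL:                   # failed before detection
                total_cost += C_FAIL
                break
            if level >= limit:                        # preventive replacement
                total_cost += C_PREV
                break
        total_time += t
    return total_cost / total_time

grid = [(h, tau) for h in (15, 20, 25) for tau in (0.5, 1.0, 2.0)]
best = min(grid, key=lambda p: cost_rate(*p))
print("best (control limit, monitoring interval):", best)
```

    In the thesis this search is performed with PSO rather than a grid, but the objective being minimized has the same structure.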

    Condition Assessment Models for Sewer Pipelines

    Underground pipeline systems are complex infrastructure with significant social, environmental, and economic impact, and sewer pipeline networks are an extremely expensive asset. This study aims to develop condition assessment models for sewer pipeline networks. Seventeen factors affecting the condition of a sewer network were considered for gravity pipelines, with operating pressure added for pressurized pipelines. Two methodologies were adopted for model development: the first uses an integrated Fuzzy Analytic Network Process (FANP) and Monte Carlo simulation, and the second uses FANP, fuzzy set theory (FST), and Evidential Reasoning (ER). The models' output is the assessed pipeline condition. To collect the necessary data, questionnaires were distributed among sewer pipeline experts in the state of Qatar; in addition, actual data for an existing sewage network in Qatar was used to validate the models' outputs. The "Ground Disturbance" factor was found to be the most influential, followed by "Location", with weights of 10.6% and 9.3% for gravity pipelines and 8.8% and 8.6% for pressurized pipelines, respectively. The least influential factor was "Length", followed by "Diameter", with weights of 2.2% and 2.5% for gravity pipelines and 2.5% and 2.6% for pressurized pipelines. The developed models assessed the condition of deteriorating sewer pipelines satisfactorily, with an average validity of approximately 85% for the first approach and 86% for the second. They are expected to be a useful tool for decision makers to plan inspections properly and provide effective rehabilitation of sewer networks. This work was supported by NPRP grant #NPRP6-357-2-150 from the Qatar National Research Fund (a member of Qatar Foundation); the authors thank Tarek Zayed, Professor of Civil Engineering at Concordia University, for his support in the analysis, and the Public Works Authority of Qatar (ASHGAL) for their support in data collection.
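    The Monte Carlo stage of the first approach can be sketched as follows: given factor weights (such as those an FANP analysis would produce) and uncertain factor scores, sample a distribution for the condition index. The two largest and two smallest weights below are taken from the abstract's gravity-pipeline results; the score distributions, scale, and lumped "other" bucket are placeholders:

```python
import numpy as np

rng = np.random.default_rng(3)
weights = {"ground_disturbance": 0.106, "location": 0.093,
           "diameter": 0.025, "length": 0.022}      # gravity weights from abstract
# Remaining thirteen factors lumped into one bucket for this sketch.
weights["other"] = 1.0 - sum(weights.values())

def sample_condition(n=10_000):
    """Sample uncertain 0-10 factor scores and return the weighted index."""
    scores = {k: rng.triangular(3, 6, 9, size=n) for k in weights}
    return sum(w * scores[k] for k, w in weights.items())

cond = sample_condition()
print(f"condition index: mean {cond.mean():.2f}, 90% interval "
      f"[{np.percentile(cond, 5):.2f}, {np.percentile(cond, 95):.2f}]")
```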

    Statistical Inference for a Virtual Age Reliability Model

    During the lifetime of a system, repairs may be performed when the system fails. It is most common to assume either perfect repair or minimal repair; in reality, however, a repair often falls between the two, which is called imperfect repair. The Kijima type I virtual age model can be used to model such repairable systems; it contains a parameter that reflects the restoration level after each repair. This thesis considers statistical inference for the Kijima type I model, which deals with repairable systems that can be restored to the operating state through replacement or repair after failure. We present Bayesian analysis for the Kijima type I virtual age model, including consideration of the system's overall time to failure when a given number of repairs is possible. We use both a standard Bayesian analysis, which specifies a single prior distribution, and a robust Bayesian approach. In the robust analysis, a set of prior distributions is used in order to handle uncertainty about prior knowledge of the Kijima type I model parameters in a flexible way and to enhance the objectivity of the analysis within an imprecise Bayesian framework, by computing bounds on the posterior predictive reliability function of the system. Finally, we discuss the use of the developed methods to decide on optimal replacement, i.e., replacing a system component at the most advantageous or efficient moment in order to increase its performance and minimize overall expected costs. Two policies are introduced, with cost functions based on time and on number of failures, to decide on the optimal replacement time or the optimal number of failures under the Kijima type I model with a Weibull distribution. These policies illustrate how Bayesian and robust Bayesian analysis can be used for inference about optimal replacement and the expected total cost.
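    For concreteness, the Kijima type I recursion V_n = V_{n-1} + q * X_n can be simulated with a Weibull baseline by inverse-transform sampling of each inter-failure time conditional on the current virtual age; the parameter values below are illustrative:

```python
import numpy as np

# Kijima type I with Weibull baseline S(t) = exp(-(t/eta)^beta).
# q in [0, 1] is the restoration parameter: q=0 is perfect repair
# ("as good as new"), q=1 is minimal repair ("as bad as old").
rng = np.random.default_rng(4)

def kijima1_failures(eta, beta, q, n_failures):
    """Return successive inter-failure times X_1..X_n."""
    v, times = 0.0, []
    for _ in range(n_failures):
        u = rng.uniform()
        # Solve S(v + x) / S(v) = u for x (conditional Weibull sampling).
        x = eta * ((v / eta) ** beta - np.log(u)) ** (1 / beta) - v
        times.append(x)
        v += q * x                        # Kijima type I age accumulation
    return np.array(times)

print(kijima1_failures(eta=100.0, beta=2.0, q=0.5, n_failures=5))
```

    With beta > 1 (ageing system), successive inter-failure times shrink as the virtual age accumulates, which is the behaviour the replacement policies trade off against replacement cost.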

    Machine Learning-Based Data- and Model-Driven Bayesian Uncertainty Quantification of Inverse Problems for Suspended Non-structural Systems

    Inverse problems involve extracting the internal structure of a physical system from noisy measurement data. In many fields, Bayesian inference is used to address the ill-conditioned nature of the inverse problem by incorporating prior information through a prior distribution. In the nonparametric Bayesian framework, surrogate models such as Gaussian processes or deep neural networks are used as flexible and effective probabilistic modeling tools to overcome the curse of dimensionality and reduce computational costs. In practical systems and computer models, uncertainties can be addressed through parameter calibration, sensitivity analysis, and uncertainty quantification, improving the reliability and robustness of decision and control strategies based on simulation or prediction results. In surrogate modeling, however, preventing overfitting while incorporating reasonable prior knowledge of the embedded physics and models remains a challenge. Suspended Nonstructural Systems (SNS) pose a significant challenge in the inverse problem, and research on their seismic performance and mechanical models, particularly regarding inverse problems and uncertainty quantification, is still lacking. To address this, the author conducts full-scale shaking-table dynamic experiments, monotonic and cyclic tests, and simulations of different types of SNS to investigate their mechanical behavior. To quantify the uncertainty of the inverse problem, the author proposes a new framework that adopts machine learning-based, data- and model-driven stochastic Gaussian process model calibration, quantifying uncertainty via a new black-box variational inference that accounts for a geometric complexity measure, Minimum Description Length (MDL), through Bayesian inference. The framework is validated on SNS and yields optimal generalizability and computational scalability.
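    A bare-bones sketch of black-box variational inference with the reparameterization trick: a mean-field Gaussian q(theta) is fitted to the posterior of a single calibration parameter of a stand-in linear "simulator". The thesis's GP surrogate and MDL complexity term are beyond this toy; all names and values are assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)
x = rng.uniform(0, 1, 50)
y = 2.0 * x + rng.normal(0, 0.3, 50)      # synthetic calibration data
NOISE_VAR = 0.3 ** 2

def grad_log_joint(theta):
    """d/dtheta [log p(y|theta) + log p(theta)], with a N(0,1) prior."""
    return np.sum((y - theta * x) * x) / NOISE_VAR - theta

# Variational family q(theta) = N(m, s^2); maximize the ELBO by stochastic
# gradient ascent using reparameterized draws theta = m + s * eps.
m, log_s, lr = 0.0, 0.0, 1e-3
for step in range(5000):
    eps = rng.normal(size=32)             # Monte Carlo noise samples
    theta = m + np.exp(log_s) * eps       # reparameterized draws
    g = np.array([grad_log_joint(t) for t in theta])
    m += lr * g.mean()                                       # dELBO/dm
    log_s += lr * ((g * eps).mean() * np.exp(log_s) + 1.0)   # dELBO/dlog s
print(f"q(theta) = N({m:.3f}, {np.exp(log_s):.3f}^2)")
```

    For this conjugate toy problem the fitted q matches the exact Gaussian posterior, which makes it a convenient sanity check before moving to non-conjugate surrogates.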