
    New variance reduction methods in Monte Carlo rare event simulation

    For systems that provide some kind of service while they are operational and stop providing it when they fail, it is of interest to determine parameters such as the probability of finding the system failed at an arbitrary instant, the mean time between failures, or any other measure that reflects the system's capacity to provide service. The determination of these measures (known as dependability measures) is affected by a variety of factors, including the size of the system and the rarity of failures. This thesis studies methods designed to determine these measures on large and highly reliable systems, i.e. systems formed by a large number of components, in which system failures are rare events. Either directly or indirectly, part of the expressions used to determine the measures of interest correspond to the probability that the system is in some failed state. In one way or another, these expressions evaluate the ratio, weighted by the probability distribution of the system's configurations, between the number of configurations in which the system fails and the total number of possible configurations. If the system is large, the exact calculation of these probabilities, and consequently of the measures of interest, may be unfeasible. An alternative is to estimate these probabilities by simulation. One mechanism for doing so is Monte Carlo simulation, whose simplest version is crude or standard simulation. The problem is that if failures are rare, the number of iterations required to estimate these probabilities by standard simulation with acceptable accuracy may be extremely large. This thesis analyzes some existing methods that improve on standard simulation in the context of rare events, presents variance analyses, and tests the methods empirically on a variety of models. In all cases the improvement is achieved by reducing the variance of the estimator with respect to the variance of the standard estimator. Thanks to this variance reduction, the probability of occurrence of rare events can be estimated with acceptable accuracy in a reasonable number of iterations.
    As the central part of this work, two new methods are proposed, one related to Splitting and the other to Conditional Monte Carlo. Splitting is a method of proven efficiency in combined performance and dependability (performability) analysis, but it is scarcely used for simulating highly reliable systems over static models (models with no temporal evolution). In its basic formulation, Splitting tracks the trajectories of a stochastic process through its state space and multiplies their number at each threshold crossing, for a given set of thresholds distributed between the initial and the final state. One of the proposals of this thesis is an adaptation of Splitting to a static network reliability model: a stochastic process is built over a fictitious time in which the network links keep changing state, and Splitting is applied to this process. The method proves to be highly accurate and robust. Conditional Monte Carlo, in turn, is a classical variance reduction technique whose use is not widespread in the context of rare events. In its basic formulation, Conditional Monte Carlo evaluates the probabilities of the events of interest by conditioning the indicator variables on events that are not rare and are easy to detect. The problem is that part of this evaluation requires the exact calculation of some probabilities of the model. The second method proposed in this thesis is an adaptation of Conditional Monte Carlo to the analysis of highly reliable Markovian systems: the probabilities whose exact values are needed are themselves estimated by a recursive application of Conditional Monte Carlo. Some features of this method are discussed and its efficiency is verified experimentally.
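    To make the fictitious-time construction concrete, here is a minimal Python sketch under invented assumptions (a five-link bridge network, per-link reliability 0.99, two thresholds), not the thesis's implementation: links start failed and come up at exponential times calibrated so that each is operational at time 1 with its nominal reliability, and fixed-effort Splitting multiplies the trajectories that are still disconnected at each threshold.

```python
import math
import random

random.seed(0)

# Hypothetical bridge network: terminals s = 0 and t = 3.
EDGES = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]
R = 0.99                     # per-link reliability (invented; failure is rare)
LAM = -math.log(1.0 - R)     # link i comes up at T_i ~ Exp(LAM), so P(T_i <= 1) = R

def connected(up, s=0, t=3):
    """s-t connectivity using only the links whose indices are in `up`."""
    adj = {}
    for i in up:
        a, b = EDGES[i]
        adj.setdefault(a, []).append(b)
        adj.setdefault(b, []).append(a)
    stack, seen = [s], {s}
    while stack:
        v = stack.pop()
        if v == t:
            return True
        for w in adj.get(v, []):
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return False

def splitting_unreliability(thresholds=(0.5, 1.0), n=20_000):
    """Fixed-effort Splitting estimate of P(network disconnected at time 1)."""
    p_hat, t_prev = 1.0, 0.0
    states = [frozenset()]          # up-link sets of surviving trajectories
    for t_k in thresholds:
        survivors = []
        for _ in range(n):
            up = set(random.choice(states))   # resample a surviving state
            # memorylessness: links still down at t_prev need a fresh
            # Exp(LAM) delay to come up before t_k
            for i in range(len(EDGES)):
                if i not in up and t_prev + random.expovariate(LAM) <= t_k:
                    up.add(i)
            if not connected(up):   # still disconnected: survives this level
                survivors.append(frozenset(up))
        if not survivors:
            return 0.0
        p_hat *= len(survivors) / n
        states, t_prev = survivors, t_k
    return p_hat

print("unreliability estimate:", splitting_unreliability())
```

    For these invented parameters the exact bridge unreliability is roughly 2.0e-4, which gives a sanity check on the two-level estimate; splitting works here because the up-set only grows over the fictitious time, so connectivity is monotone and "disconnected at t_k" is a nested sequence of events.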

    Online Sequential Monte Carlo smoother for partially observed stochastic differential equations

    This paper introduces a new algorithm to approximate smoothed additive functionals for partially observed stochastic differential equations. The method relies on a recent procedure that computes such approximations online, i.e. as the observations are received, with a computational complexity growing linearly in the number of Monte Carlo samples. This online smoother cannot be used directly for partially observed stochastic differential equations, since the transition density of the latent process is usually unknown. We prove that a similar algorithm may still be defined for partially observed continuous processes by replacing this unknown quantity with an unbiased estimator, obtained for instance using general Poisson estimators. We prove that the resulting estimator is consistent, and its performance is illustrated using data from two models.
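    For concreteness, here is a sketch of the forward-only smoothing recursion that such online smoothers build on, for an invented linear-Gaussian toy model in which the transition density is available in closed form; the paper's setting replaces that density, unknown for an SDE, by an unbiased estimator, and reduces the quadratic per-step cost of this naive version to linear.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model (all parameters invented):
#   X_t = phi X_{t-1} + sigma V_t,   Y_t = X_t + tau_obs W_t
phi, sigma, tau_obs = 0.9, 0.5, 1.0
T, N = 200, 300

def q(xp, xn):
    # transition density q(x_{t-1}, x_t), unnormalized (constant cancels)
    return np.exp(-0.5 * ((xn - phi * xp) / sigma) ** 2)

# simulate synthetic data
x = np.zeros(T)
for t in range(1, T):
    x[t] = phi * x[t - 1] + sigma * rng.standard_normal()
y = x + tau_obs * rng.standard_normal(T)

# bootstrap particle filter, smoothing the additive functional sum_t X_t
parts = rng.standard_normal(N)
w = np.exp(-0.5 * ((y[0] - parts) / tau_obs) ** 2)
w /= w.sum()
stat = parts.copy()                      # running per-particle statistics
for t in range(1, T):
    anc = rng.choice(N, size=N, p=w)     # multinomial resampling
    new = phi * parts[anc] + sigma * rng.standard_normal(N)
    # forward smoothing: mix past statistics with weights w_j q(x_j, x_i);
    # this is the O(N^2) step the paper accelerates
    mix = w[:, None] * q(parts[:, None], new[None, :])
    stat = (mix * (stat[:, None] + new[None, :])).sum(0) / mix.sum(0)
    parts = new
    w = np.exp(-0.5 * ((y[t] - parts) / tau_obs) ** 2)
    w /= w.sum()

print("online estimate of E[sum_t X_t | y]:", (w * stat).sum())
```

    The per-step mixing keeps the estimate available at every t without storing whole trajectories, which is what "online" buys over backward smoothing passes.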

    CAPP_DYN: A Dynamic Microsimulation Model for the Italian Social Security System

    We present the technical structure of CAPP_DYN, a population-based dynamic microsimulation model for the analysis of the long-term redistributive effects of social policies, developed at CAPP (Centro di Analisi delle Politiche Pubbliche) to study the intergenerational and intragenerational redistributive effects of reforms in the social security system. The model probabilistically simulates the socio-demographic and economic evolution of a representative sample of the Italian population over the period 2005-2050. After a short review of existing comparable models for the Italian economy, we offer a rather detailed analysis and discussion of how the model works, as well as a description of the estimation procedures employed in each of its modules.
    Keywords: dynamic microsimulation; lifetime and intragenerational redistribution; social security systems
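    Purely for illustration, here is a toy annual-transition step in the style of a dynamic microsimulation; the states, transition probabilities and retirement rule below are invented placeholders, not CAPP_DYN's econometrically estimated modules.

```python
import random

random.seed(1)

# Invented transition parameters (placeholders, not estimates)
P_DEATH_BASE = 0.002
P_EMP_TO_UNEMP = 0.05
P_UNEMP_TO_EMP = 0.30
RETIREMENT_AGE = 65

# representative synthetic sample
population = [{"age": random.randint(20, 60),
               "state": random.choice(["employed", "unemployed"]),
               "pension_credits": 0.0}
              for _ in range(10_000)]

for year in range(2005, 2051):
    survivors = []
    for p in population:
        # mortality risk grows (crudely) with age
        if random.random() < P_DEATH_BASE * 1.09 ** (p["age"] - 20):
            continue
        p["age"] += 1
        if p["age"] >= RETIREMENT_AGE:
            p["state"] = "retired"
        elif p["state"] == "employed":
            p["pension_credits"] += 1.0   # accrue a contribution year
            if random.random() < P_EMP_TO_UNEMP:
                p["state"] = "unemployed"
        elif p["state"] == "unemployed" and random.random() < P_UNEMP_TO_EMP:
            p["state"] = "employed"
        survivors.append(p)
    population = survivors

retired = [p for p in population if p["state"] == "retired"]
print(len(population), "alive in 2050;", len(retired), "retired; mean credits",
      sum(p["pension_credits"] for p in retired) / max(1, len(retired)))
```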

    A new algorithm for prognostics using subset simulation

    This work presents an efficient computational framework for prognostics, named PFP-SubSim, that combines particle filter-based prognostics principles with the Subset Simulation technique first developed in S.K. Au and J.L. Beck [Probabilistic Engrg. Mech., 16 (2001), pp. 263-277]. The idea behind the PFP-SubSim algorithm is to split the multi-step-ahead predicted trajectories into multiple branches of selected samples at various stages of the process, corresponding to increasingly close approximations of the critical threshold. Following the theoretical development, a discussion, and an illustrative example demonstrating its efficacy, we report on experience using the algorithm to predict end-of-life and remaining useful life in the challenging application of fatigue damage propagation in carbon-fibre composite coupons, using structural health monitoring data. Results show that the PFP-SubSim algorithm outperforms the traditional particle filter-based prognostics approach in computational efficiency while achieving the same or better accuracy in the prognostics estimates. The PFP-SubSim algorithm reaches its highest efficiency when dealing with rare-event simulation.
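    As background, here is a minimal sketch of plain Subset Simulation on an invented scalar toy problem; this is the level-splitting mechanism that PFP-SubSim embeds in particle-filter prognostics, not the PFP-SubSim algorithm itself, and the tuning parameters are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Target: P(g(X) >= b) with X ~ N(0,1), g the identity, b = 4,
# so the exact answer is 1 - Phi(4) ~ 3.17e-5.
N, p0, b, rho = 2000, 0.1, 4.0, 0.8
g = lambda x: x                     # response / limit-state function

x = rng.standard_normal(N)
p_hat = 1.0
while True:
    gx = g(x)
    b_k = np.quantile(gx, 1 - p0)   # intermediate threshold
    if b_k >= b:                    # final level reached
        p_hat *= np.mean(gx >= b)
        break
    p_hat *= p0                     # by construction of the quantile
    seeds = x[gx >= b_k]
    # conditional sampling: an autoregressive Gaussian proposal that
    # leaves N(0,1) invariant, rejected if it exits the current level
    steps = -(-N // len(seeds))     # ceil division
    chains = []
    for cur in seeds:
        for _ in range(steps):
            cand = rho * cur + np.sqrt(1 - rho ** 2) * rng.standard_normal()
            if g(cand) >= b_k:
                cur = cand
            chains.append(cur)
    x = np.array(chains[:N])

print("subset simulation estimate:", p_hat)   # exact ~ 3.167e-5
```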

    Free energy reconstruction from steered dynamics without post-processing

    Various methods that achieve importance sampling in ensembles of nonequilibrium trajectories make it possible to estimate free energy differences and, through maximum-likelihood post-processing, to reconstruct free energy landscapes. Here, based on Bayes' theorem, we propose a more direct method in which a posterior likelihood function is used both to construct the steered dynamics and to infer the contribution to equilibrium of all the sampled states. The method is implemented with two steering schedules. First, using non-autonomous steering, we calculate the migration barrier of the vacancy in alpha-Fe. Second, using an autonomous schedule related to metadynamics and equivalent to temperature-accelerated molecular dynamics, we accurately reconstruct the two-dimensional free energy landscape of the 38-atom Lennard-Jones cluster as a function of an orientational bond-order parameter and energy, down to the cluster's solid-solid structural transition temperature and without maximum-likelihood post-processing.
    Comment: Accepted manuscript in Journal of Computational Physics, 7 figures
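    This is not the paper's Bayesian scheme, but a minimal sketch of the reweighting identity that any such equilibrium-inference step relies on: states sampled under a known bias V(x) contribute to equilibrium averages with weight exp(+V(x)/kT). The one-dimensional double well and all parameters below are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

kT = 0.3
U = lambda x: (x ** 2 - 1.0) ** 2            # double well, barrier ~ 1
V = lambda x: -0.9 * (x ** 2 - 1.0) ** 2     # bias cancels 90% of the barrier

# Metropolis sampling of the biased potential U + V
x, samples = 0.0, []
for _ in range(200_000):
    prop = x + 0.3 * rng.standard_normal()
    if rng.random() < np.exp(-(U(prop) + V(prop) - U(x) - V(x)) / kT):
        x = prop
    samples.append(x)
s = np.array(samples[5_000:])                # drop burn-in

# reweight to equilibrium and bin into a free-energy profile
w = np.exp(V(s) / kT)                        # undo the known bias
hist, edges = np.histogram(s, bins=40, range=(-1.8, 1.8),
                           weights=w, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
F = -kT * np.log(np.maximum(hist, 1e-300))
F -= F.min()                                 # F should track U up to noise

print(np.round(F[::5], 2))
print(np.round(U(centers[::5]), 2))          # reference profile
```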

    Numerical approximation of BSDEs using local polynomial drivers and branching processes

    We propose a new numerical scheme for backward stochastic differential equations (BSDEs) based on branching processes. We approximate an arbitrary (Lipschitz) driver by local polynomials and then use a Picard iteration scheme. Each step of the Picard iteration can be solved by using a representation in terms of branching diffusion systems, thus avoiding the need for a fine time discretization. In contrast to the previous literature on the numerical resolution of BSDEs based on branching processes, we prove the convergence of our numerical scheme without limitation on the time horizon. Numerical simulations illustrate the performance of the algorithm.
    Comment: 28 pages
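    As an illustration of the kind of branching representation a polynomial driver admits, here is a sketch for the special case f(u) = beta (u^2 - u), McKean's classical representation via binary branching Brownian motion; the equation, terminal condition and parameters are chosen for the example and are not taken from the paper.

```python
import math
import random

random.seed(42)

# Semilinear PDE:  du/dt + (1/2) d2u/dx2 + beta (u^2 - u) = 0,  u(T, x) = g(x),
# with the branching-diffusion representation
#   u(t, x) = E[ prod over particles alive at T of g(X_T^i) ],
# where particles follow Brownian motions and branch in two at rate beta.
beta, T = 0.5, 1.0
g = lambda x: math.tanh(x)          # terminal condition with |g| <= 1

def branching_estimate(x0, n_mc=50_000):
    total = 0.0
    for _ in range(n_mc):
        particles = [(x0, 0.0)]     # (position, birth time)
        prod = 1.0
        while particles:
            x, t = particles.pop()
            tau = random.expovariate(beta)          # particle lifetime
            if t + tau >= T:                        # survives to the horizon
                xT = x + math.sqrt(T - t) * random.gauss(0.0, 1.0)
                prod *= g(xT)
            else:                                   # branches into two
                xb = x + math.sqrt(tau) * random.gauss(0.0, 1.0)
                particles.append((xb, t + tau))
                particles.append((xb, t + tau))
        total += prod
    return total / n_mc

print("u(0, 0) ~", branching_estimate(0.0))
```

    Note that no time grid appears anywhere: the representation replaces time discretization by simulation of random branching times, which is the feature the paper exploits within each Picard step.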

    Rare event simulation using reversible shaking transformations

    We introduce random transformations called reversible shaking transformations, which we use to design two schemes for estimating rare event probabilities. One is based on interacting particle systems (IPS) and the other on time averages over a single path (POP), via the ergodic theorem. We discuss their convergence rates and provide numerical experiments covering continuous stochastic processes and jump processes. Our examples address important situations related to, for instance, insurance, queueing systems and random graphs. Both schemes perform well, with POP appearing slightly better.
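    To fix ideas, here is a sketch of the single-path (POP-style) idea with a Gaussian shaking transformation on an invented toy target; the levels, correlation parameter and chain lengths are arbitrary choices for illustration, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy target: P(X > 4) for X ~ N(0,1), exact ~ 3.17e-5.
# shake(x) = rho x + sqrt(1 - rho^2) Z is reversible w.r.t. N(0,1);
# rejecting moves that leave the current nested event A_k = {x > b_k}
# keeps the conditional law invariant, so the occupation fraction of
# A_{k+1} along a single path estimates P(A_{k+1} | A_k).
rho, n_steps = 0.9, 100_000
levels = [-np.inf, 1.0, 2.0, 3.0, 4.0]

def shake(x):
    return rho * x + np.sqrt(1 - rho ** 2) * rng.standard_normal()

p_hat, x = 1.0, 0.0
for b_cur, b_next in zip(levels[:-1], levels[1:]):
    hits = 0
    for _ in range(n_steps):
        cand = shake(x)
        if cand > b_cur:            # stay inside A_k
            x = cand
        hits += x > b_next          # time average via the ergodic theorem
    p_hat *= hits / n_steps
    while x <= b_next:              # walk the path into the next level
        cand = shake(x)
        if cand > b_cur:
            x = cand

print("POP-style estimate of P(X > 4):", p_hat)
```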