8 research outputs found

    Inference for Differential Equation Models using Relaxation via Dynamical Systems

    Statistical regression models whose mean functions are represented by ordinary differential equations (ODEs) can be used to describe phenomena that are dynamical in nature, which are abundant in areas such as biology, climatology and genetics. Estimating the parameters of ODE-based models is essential for understanding their dynamics, but the lack of an analytical solution of the ODE makes parameter estimation challenging. The aim of this paper is to propose a general and fast framework of statistical inference for ODE-based models by relaxation of the underlying ODE system. Relaxation is achieved by a properly chosen numerical procedure, such as a Runge-Kutta method, and by introducing additive Gaussian noise with small variance. Consequently, filtering methods can be applied to obtain the posterior distribution of the parameters in the Bayesian framework. The main advantage of the proposed method is computational speed: in a simulation study, the proposed method was at least 14 times faster than the other methods. Theoretical results which guarantee the convergence of the posterior of the approximated dynamical system to the posterior of the true model are presented. Explicit expressions are given that relate the order and the mesh size of the Runge-Kutta procedure to the rate of convergence of the approximated posterior as a function of sample size.
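The relaxation idea described above can be sketched in a few lines of Python: a deterministic numerical step (here a classical fourth-order Runge-Kutta step for a logistic-growth ODE) is perturbed with small additive Gaussian noise, turning the ODE into a Markov state-space model to which standard filtering applies. The logistic example and all parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def logistic_rhs(x, theta):
    """ODE right-hand side: logistic growth dx/dt = theta * x * (1 - x)."""
    return theta * x * (1.0 - x)

def rk4_step(x, theta, h):
    """One classical fourth-order Runge-Kutta step with mesh size h."""
    k1 = logistic_rhs(x, theta)
    k2 = logistic_rhs(x + 0.5 * h * k1, theta)
    k3 = logistic_rhs(x + 0.5 * h * k2, theta)
    k4 = logistic_rhs(x + h * k3, theta)
    return x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def relaxed_trajectory(x0, theta, h, n_steps, sigma, rng):
    """Relaxation: RK4 transition plus small additive Gaussian noise,
    so the deterministic ODE becomes a stochastic state-space model."""
    xs = [x0]
    for _ in range(n_steps):
        xs.append(rk4_step(xs[-1], theta, h) + sigma * rng.normal())
    return np.array(xs)

rng = np.random.default_rng(0)
traj = relaxed_trajectory(x0=0.1, theta=1.5, h=0.1, n_steps=50,
                          sigma=1e-3, rng=rng)
```

With the noise variance small, the relaxed trajectory stays close to the deterministic ODE solution while admitting a well-defined transition density for particle filtering.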

    Particle learning for Bayesian non-parametric Markov Switching Stochastic Volatility model

    This paper designs a Particle Learning (PL) algorithm for the estimation of Bayesian nonparametric Stochastic Volatility (SV) models for financial data. The performance of this particle method is then compared with standard Markov Chain Monte Carlo (MCMC) methods for nonparametric SV models. PL performs as well as MCMC, and at the same time allows for on-line inference: the posterior distributions are updated as new data are observed, which is prohibitively costly using MCMC. Further, a new nonparametric SV model is proposed that incorporates Markov switching jumps. The proposed model is estimated using PL and tested on simulated data. Finally, the performance of the two nonparametric SV models, with and without Markov switching, is compared using real financial time series. The results show that including a Markov switching specification provides higher predictive power in the tails of the distribution. Virbickaite, A. and Ausín, C.M. are grateful for the financial support from MEC grant ECO2011-25706. Galeano, P. acknowledges financial support from MEC grant ECO2012-3844.
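For intuition on sequential inference for SV models, here is a minimal bootstrap particle filter for the latent log-volatility of a simple parametric SV model. This is an illustrative sketch only: the paper's PL algorithm additionally tracks sufficient statistics for static parameters and uses nonparametric (Dirichlet process) innovations, neither of which is attempted here. All model settings are assumed.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative parametric SV model (assumed, not the paper's nonparametric one):
# h_t = mu + phi * (h_{t-1} - mu) + sigma_eta * eta_t,  y_t = exp(h_t / 2) * eps_t
mu, phi, sigma_eta, T = -1.0, 0.95, 0.2, 200
h = np.full(T, mu)
for t in range(1, T):
    h[t] = mu + phi * (h[t - 1] - mu) + sigma_eta * rng.normal()
y = np.exp(h / 2) * rng.normal(size=T)

# Bootstrap particle filter for the latent log-volatility h_t.
N = 1000
particles = mu + sigma_eta / np.sqrt(1 - phi**2) * rng.normal(size=N)
filtered = np.empty(T)
for t in range(T):
    # Propagate through the state equation, then weight by the observation density.
    particles = mu + phi * (particles - mu) + sigma_eta * rng.normal(size=N)
    var = np.exp(particles)
    logw = -0.5 * (np.log(2 * np.pi * var) + y[t] ** 2 / var)
    w = np.exp(logw - logw.max())
    w /= w.sum()
    filtered[t] = np.sum(w * particles)
    # Multinomial resampling to avoid weight degeneracy.
    particles = particles[rng.choice(N, size=N, p=w)]
```

Each new observation updates the filtering distribution in a single pass, which is the on-line property contrasted with MCMC in the abstract.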

    Neural decoding with visual attention using sequential Monte Carlo for leaky integrate-and-fire neurons

    How the brain makes sense of a complicated environment is an important question, and a first step is to be able to reconstruct the stimulus that gives rise to an observed brain response. Neural coding relates neurobiological observations to external stimuli using computational methods. Encoding refers to how a stimulus affects the neuronal output, and entails constructing a neural model and estimating its parameters. Decoding refers to reconstructing the stimulus that led to a given neuronal output. Existing decoding methods rarely explain neuronal responses to complicated stimuli in a principled way. Here we perform neural decoding for a mixture of multiple stimuli using the leaky integrate-and-fire model for neural spike trains, under the visual attention hypothesis of probability mixing, in which the neuron attends to only a single stimulus at any given time. We assume either a parallel or a serial processing visual search mechanism when decoding multiple simultaneous neurons. We consider one or multiple stochastic stimuli following Ornstein-Uhlenbeck processes, and dynamic neuronal attention that switches following discrete Markov processes. To decode stimuli in such situations, we develop various sequential Monte Carlo particle methods in different settings. The likelihood of the observed spike trains is obtained through first-passage time probabilities, computed by solving the Fokker-Planck equations. We show that the stochastic stimuli can be successfully decoded by sequential Monte Carlo, and that different particle methods perform differently depending on the number of observed spike trains, the number of stimuli, model complexity, and so on. The proposed novel decoding methods, which analyze the neural data through psychological visual attention theories, provide new perspectives for understanding the brain.
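The generative side of this setup can be sketched as follows: an Ornstein-Uhlenbeck stimulus is simulated by Euler-Maruyama and drives a leaky integrate-and-fire membrane, which emits a spike and resets each time it crosses a threshold. All parameter values are illustrative assumptions; the decoding itself would run a particle filter over trajectories like these, which this sketch does not attempt.

```python
import numpy as np

rng = np.random.default_rng(2)

dt, T = 1e-3, 2.0                       # time step (s) and simulation horizon
n = int(T / dt)
tau_s, mu_s, sigma_s = 0.5, 1.5, 0.3    # OU stimulus parameters (assumed)
tau_m, v_th, v_reset = 0.02, 1.0, 0.0   # LIF membrane parameters (assumed)

s, v = mu_s, v_reset
spikes = []                             # recorded spike times
for i in range(n):
    # Ornstein-Uhlenbeck stimulus: mean-reverting diffusion (Euler-Maruyama).
    s += (mu_s - s) / tau_s * dt + sigma_s * np.sqrt(dt) * rng.normal()
    # Leaky integrate-and-fire membrane driven by the stimulus.
    v += (s - v) / tau_m * dt
    if v >= v_th:                       # threshold crossing: spike and reset
        spikes.append(i * dt)
        v = v_reset
```

The inter-spike intervals produced this way are first-passage times of the membrane process, which is where the Fokker-Planck likelihood computations mentioned in the abstract enter.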

    Vehicle level health assessment through integrated operational scalable prognostic reasoners

    Today’s aircraft are very complex in design and need constant monitoring of their systems to establish the overall health status. Integrated Vehicle Health Management (IVHM) is a major component in a new asset management paradigm in which a conscious effort is made to shift asset maintenance from a schedule-based approach to a more proactive and predictive one. Its goal is to maximise asset operational availability while minimising downtime and the logistics footprint by monitoring the deterioration of component conditions. IVHM involves data processing which comprehensively consists of capturing data related to assets, monitoring parameters, assessing current or future health conditions through a prognostics and diagnostics engine, and providing recommended maintenance actions. Data-driven prognostics methods usually use a large amount of data to learn the degradation pattern (the nominal model) and predict future health. Usually the run-to-failure data used are accelerated data produced in laboratory environments, which is hardly the case in real life. The nominal model is therefore far from the present condition of the vehicle, and the predictions will not be very accurate. The prediction model will try to follow the nominal model, which means larger prediction errors; this is a major drawback of data-driven techniques. This research primarily presents two novel techniques of adaptive data-driven prognostics to capture the vehicle operational scalability degradation. Secondly, the degradation information has been used as a health index and in the Vehicle Level Reasoning System (VLRS). Novel VLRS are also presented in this research study. The research described here proposes condition-adaptive prognostics reasoning along with a VLRS.
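The nominal-model workflow described above can be illustrated with a toy sketch: a decay rate is fitted to historical run-to-failure data, and the fitted model is then used to map a current condition reading to a health index and a remaining-useful-life estimate. The exponential degradation form, thresholds, and all numbers here are hypothetical and are not the thesis's adaptive methods.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical nominal degradation data: a condition indicator decaying
# exponentially toward a failure threshold, observed with sensor noise.
t_hist = np.linspace(0, 100, 200)
y_hist = np.exp(-0.03 * t_hist) + 0.01 * rng.normal(size=t_hist.size)

# Fit the nominal decay rate by least squares on the log scale.
rate = -np.polyfit(t_hist, np.log(np.clip(y_hist, 1e-6, None)), 1)[0]

def health_index(y_now, y_new=1.0, y_fail=0.05):
    """Map the current condition indicator to a [0, 1] health index:
    1 = as new, 0 = at the failure threshold."""
    return float(np.clip((y_now - y_fail) / (y_new - y_fail), 0.0, 1.0))

def remaining_useful_life(y_now, y_fail=0.05):
    """Time until the fitted nominal model reaches the failure threshold."""
    return max(0.0, (np.log(y_now) - np.log(y_fail)) / rate)
```

The drawback noted in the abstract is visible here: both functions trust the fitted nominal `rate`, so an asset degrading faster or slower than the laboratory data would be mis-predicted unless the model is adapted on-line.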

    Multivariate stochastic loss reserving with common shock approaches

    Outstanding claims liability is usually one of the largest liabilities on the balance sheet of a general insurer. Therefore, it is critical for insurers to accurately estimate their outstanding claims. Furthermore, a general insurer typically operates in multiple business lines whose risks are not perfectly dependent. This results in "diversification benefits", the consideration of which is crucial due to their effects on the aggregate reserves and capital. It is then essential to consider the dependence across business lines in the estimation of outstanding claims. The goal of this thesis is to develop new approaches to assessing outstanding claims for portfolios of dependent lines. We explore the common shock technique for model development, a very popular dependence modelling technique with distinctive strengths, such as an explicit dependence structure, ease of interpretation, and parsimonious construction of correlation matrices.
    We also aim to enhance the practicality of our approaches by incorporating realistic and desirable model features. Motivated by the richness of the Tweedie distribution family, which covers Poisson distributions, gamma distributions and many more, we introduce a common shock Tweedie framework with dependence across business lines. Desirable properties of this framework are studied, including its marginal flexibility, tractable moments, and ability to handle masses at 0. To overcome the complex distributional structure of the Tweedie framework, we formulate a Bayesian approach for model estimation and perform a real data illustration. Remarks on practical features of the framework are drawn. Loss reserving data possesses an unbalanced nature: claims from different positions within and between loss triangles can vary widely, as more claims typically develop in early development periods. We account for this feature explicitly in common shock models with a parsimonious common shock adjustment. Theoretical and real data illustrations are performed using the multivariate Tweedie framework. Finally, in the last part of this thesis, we develop a dynamic framework with evolutionary factors to account for claims development patterns that change over time. Calendar year dependence is introduced using common shocks. We also formulate an estimation approach that is tailored to the structure of loss reserving data and perform a real data illustration.
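The common shock construction itself can be sketched simply: each line's claims combine a shared shock with an idiosyncratic component, so the shock alone induces positive dependence across lines. Gamma components are used below as a tractable member of the Tweedie family (power parameter p = 2); the weights and parameters are illustrative assumptions, not the thesis's calibrated model.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200_000  # number of simulated cells

# Shared shock and line-specific idiosyncratic components, all gamma-distributed
# (a Tweedie family member); shapes, scales and weights are illustrative.
shock = rng.gamma(shape=2.0, scale=1.0, size=n)
line_1 = 0.5 * shock + rng.gamma(shape=3.0, scale=1.0, size=n)
line_2 = 0.8 * shock + rng.gamma(shape=1.5, scale=1.0, size=n)

# The common shock is the only source of cross-line dependence, so the
# implied correlation follows directly from the shock weights and variance.
corr = np.corrcoef(line_1, line_2)[0, 1]
```

Because the dependence enters only through the shock weights, the implied correlation matrix has an explicit, parsimonious parametrisation, which is the strength the abstract highlights.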

    Bayesian non-parametrics for time-varying volatility models

    International Mention in the doctoral degree. Official Doctoral Programme in Business Economics and Quantitative Methods. President: Michael Peter Wiper; Secretary: María Pilar Muñoz Gracia; Committee member: Roberto Casarí

    Sequential Bayesian Inference for Dynamic Linear Models of Sensor Data

    Ph.D. Thesis. We develop a spatio-temporal model to analyse pairs of observations on temperature and humidity. The data consist of six months of observations at five locations collected from a sensor network deployed in North East England. The model for the temporal component takes the form of two coupled dynamic linear models (DLMs), specified marginally for temperature and conditionally for humidity given temperature. To account for dependence at nearby locations, the governing system equations include spatial effects, specified using a Gaussian process. To understand the stochastic nature of the data, we perform fully Bayesian estimation of the model parameters and check the model fit via posterior distributions. The intractability of the posterior distribution necessitates the use of computationally intensive methods such as Markov chain Monte Carlo (MCMC). The main disadvantage of MCMC is computational inefficiency when dealing with large datasets. Therefore, we exploit a class of sequential Monte Carlo (SMC) algorithms known as particle filters, which sequentially approximate the posterior through a series of reweighting and resampling steps. The tractability of the observed data likelihood under the DLM admits the implementation of an iterated batch importance sampling (IBIS) scheme, which additionally uses a resample-move step to circumvent the particle degeneracy problem. To alleviate the computational burden brought by the resample-move step of IBIS, we develop a novel online version of IBIS by modifying the resample-move step, approximating the posterior over an observation window whose pre-specified length trades off accuracy and computational cost. Furthermore, performing the resampling step independently for batches of parameter samples allows a parallel implementation of the algorithm on a powerful multi-core high performance computing system.
    A comparison of observed measurements with their one-step and two-step forecast distributions shows that the model gives a good description of the underlying process and provides reasonable forecast accuracy.
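As a minimal illustration of DLM filtering and one-step forecasting, here is a Kalman filter for a univariate local-level DLM. This is a sketch under assumed parameters: the thesis's actual model couples two DLMs with Gaussian-process spatial effects and infers the static parameters via IBIS, none of which is attempted here.

```python
import numpy as np

rng = np.random.default_rng(5)

# Local-level DLM: theta_t = theta_{t-1} + w_t,  y_t = theta_t + v_t,
# with system variance W and observation variance V (values assumed).
W, V, T = 0.1, 0.5, 300
theta = np.cumsum(np.sqrt(W) * rng.normal(size=T))
y = theta + np.sqrt(V) * rng.normal(size=T)

# Kalman filter: closed-form sequential update of the posterior mean m
# and variance C, producing the one-step forecast (f_t, Q_t) at each time.
m, C = 0.0, 10.0
forecasts = np.empty(T)
for t in range(T):
    a, R = m, C + W              # prior for theta_t given y_{1:t-1}
    f, Q = a, R + V              # one-step forecast distribution for y_t
    forecasts[t] = f
    K = R / Q                    # Kalman gain
    m, C = a + K * (y[t] - f), (1 - K) * R

# One-step forecast accuracy, skipping the diffuse-prior burn-in.
rmse = np.sqrt(np.mean((forecasts[10:] - y[10:]) ** 2))
```

The tractable forecast density (f_t, Q_t) is exactly what makes the observed-data likelihood available in closed form, which is the property the IBIS scheme exploits.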