
    Bayesian inference for indirectly observed stochastic processes, applications to epidemic modelling

    Stochastic processes are mathematical objects that offer a probabilistic representation of how some quantities evolve in time. In this thesis we focus on estimating the trajectory and parameters of dynamical systems in cases where only indirect observations of the driving stochastic process are available. We first explored means of using weekly recorded numbers of Influenza cases to capture how the frequency and nature of contacts with infected individuals evolved in time. The latter was modelled with diffusions and can be used to quantify the impact of varying drivers of epidemics such as holidays, climate, or prevention interventions. Following this idea, we estimated how the frequency of condom use evolved during the Gates Foundation's intervention against HIV in India. In this setting, the available estimates of the proportion of individuals infected with HIV were not only indirect but also very scarce, leading to specific difficulties. Finally, we developed a methodology for fractional Brownian motions (fBM), here a fractional stochastic volatility model, indirectly observed through market prices. The intractability of the likelihood function, which requires augmenting the parameter space with the diffusion path, is ubiquitous in this thesis. We aimed for inference methods robust to refinements of the time discretisation, made necessary to enforce the accuracy of Euler schemes. The particle marginal Metropolis-Hastings (PMMH) algorithm exhibits this mesh-free property. We propose the use of fast approximate filters as a pre-exploration tool to estimate the shape of the target density, for a quicker and more robust adaptation phase of the asymptotically exact algorithm. The fBM problem could not be treated with the PMMH and required an alternative methodology based on reparameterisation and advanced Hamiltonian Monte Carlo techniques on the diffusion path space, which would also be applicable in the Markovian setting.
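As a rough illustration of the mesh-free PMMH idea mentioned above, the sketch below runs particle marginal Metropolis-Hastings on a toy linear-Gaussian state-space model. The model, parameter values, proposal step size, and particle count are illustrative assumptions, not the thesis's epidemic models:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy state-space model: x_t = phi * x_{t-1} + N(0, 1), observed as y_t = x_t + N(0, 1).
# Simulate observations with a hypothetical true phi = 0.8.
T, true_phi = 100, 0.8
x = np.zeros(T)
for t in range(1, T):
    x[t] = true_phi * x[t - 1] + rng.normal()
y = x + rng.normal(size=T)

def log_marginal_likelihood(phi, y, n_particles=200):
    """Bootstrap particle filter estimate of log p(y | phi)."""
    particles = rng.normal(size=n_particles)
    loglik = 0.0
    for t in range(len(y)):
        particles = phi * particles + rng.normal(size=n_particles)
        logw = -0.5 * (y[t] - particles) ** 2 - 0.5 * np.log(2 * np.pi)
        m = logw.max()
        w = np.exp(logw - m)
        loglik += m + np.log(w.mean())
        # Multinomial resampling keeps the likelihood estimator unbiased.
        idx = rng.choice(n_particles, size=n_particles, p=w / w.sum())
        particles = particles[idx]
    return loglik

# PMMH: random-walk Metropolis on phi, using the noisy likelihood estimate.
n_iters, step = 200, 0.1
phi, ll = 0.5, log_marginal_likelihood(0.5, y)
chain, accepted = [], 0
for _ in range(n_iters):
    prop = phi + step * rng.normal()
    if abs(prop) < 1:  # flat prior on the stationary region (-1, 1)
        ll_prop = log_marginal_likelihood(prop, y)
        if np.log(rng.uniform()) < ll_prop - ll:
            phi, ll, accepted = prop, ll_prop, accepted + 1
    chain.append(phi)

acceptance_rate = accepted / n_iters
```

Because the acceptance ratio uses an unbiased likelihood estimate rather than the intractable exact likelihood, the chain still targets the exact posterior; this is the property that survives refinements of the time discretisation.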

    Nonlocal Models in Biology and Life Sciences: Sources, Developments, and Applications

    Nonlocality is important in realistic mathematical models of physical and biological systems at small length scales. It characterizes interactions between individuals located at different positions. This review illustrates different nonlocal mathematical models applied to biology and the life sciences, with a major focus on the sources, developments, and applications of such models. Among other things, a systematic discussion is provided of the conditions for pattern formation in biological systems of population dynamics. Special attention is also given to nonlocal interactions on networks, network coupling and integration, including models for brain dynamics that provide us with an important tool to better understand neurodegenerative diseases. In addition, we discuss nonlocal modelling approaches for cancer stem cells and tumor cells that are widely applied to cell migration processes, growth, and avascular tumors in any organ. Furthermore, the discussed nonlocal continuum models can be applied at sufficiently small scales in nanotechnology to build biosensors that sense biomaterials and their concentrations. Piezoelectric and other smart materials are among them, and these devices are becoming increasingly important in the digital and physical world that is intrinsically interconnected with biological systems. Additionally, we review the nonlocal theory of peridynamics, which deals with continuous and discrete media and is applied to model the relationship between fracture and healing in cortical bone, tissue growth and shrinkage, and other areas increasingly important in biomedical and bioengineering applications. Finally, we provide a comprehensive summary of emerging trends and highlight future directions in this rapidly expanding field.

    Scalable Computational Algorithms for Geo-spatial Covid-19 Spread in High Performance Computing

    A nonlinear partial differential equation (PDE) based compartmental model of COVID-19 provides a continuous trace of infection over space and time. Finer resolutions in the spatial discretization, the inclusion of additional model compartments, and model stratifications based on clinically relevant categories push the number of unknowns to the order of millions. We adopt a parallel scalable solver that allows faster solutions for these high-fidelity models. The solver combines domain decomposition and algebraic multigrid preconditioners at multiple levels to achieve the desired strong and weak scalability. As a numerical illustration of this general methodology, a five-compartment susceptible-exposed-infected-recovered-deceased (SEIRD) model of COVID-19 is used to demonstrate the scalability and effectiveness of the proposed solver for a large geographical domain (Southern Ontario). It is possible to predict infections up to three months ahead for a system of 92 million unknowns (using 1780 processes) within 7 hours, saving months of computational effort compared with conventional solvers.
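The compartmental structure underlying such models can be shown with a minimal well-mixed SEIRD sketch; the paper's PDE model additionally resolves space with diffusion terms, and the rates, population size, and horizon below are illustrative assumptions:

```python
# Minimal well-mixed SEIRD sketch integrated with explicit Euler.
# beta: transmission, sigma: incubation, gamma: recovery, mu: death rate.
beta, sigma, gamma, mu = 0.3, 1 / 5, 1 / 10, 0.01  # illustrative values
N = 1_000_000.0
dt, days = 0.1, 90          # three-month horizon, as in the paper's prediction window
steps = round(days / dt)

S, E, I, R, D = N - 10, 0.0, 10.0, 0.0, 0.0
history = []
for _ in range(steps):
    new_exposed = beta * S * I / N
    dS = -new_exposed
    dE = new_exposed - sigma * E
    dI = sigma * E - (gamma + mu) * I
    dR = gamma * I
    dD = mu * I
    S, E, I, R, D = (S + dt * dS, E + dt * dE, I + dt * dI,
                     R + dt * dR, D + dt * dD)
    history.append((S, E, I, R, D))

total = S + E + I + R + D  # conserved up to floating-point error
```

Discretizing the same compartments over a fine spatial mesh, with millions of such unknowns coupled by diffusion, is what makes the parallel preconditioned solvers in the paper necessary.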

    Predicting population extinction in lattice-based birth-death-movement models

    The question of whether a population will persist or go extinct is of key interest throughout ecology and biology. Various mathematical techniques allow us to generate knowledge regarding individual behaviour, which can be analysed to obtain predictions about the ultimate survival or extinction of the population. A common model employed to describe population dynamics is the lattice-based random walk model with crowding (exclusion). This model can incorporate behaviour such as birth, death and movement, while including natural phenomena such as finite-size effects. Performing sufficiently many realisations of the random walk model to extract representative population behaviour is computationally intensive. Therefore, continuum approximations of random walk models are routinely employed. However, standard continuum approximations are notoriously incapable of making accurate predictions about population extinction. Here, we develop a new continuum approximation, the state space diffusion approximation, which explicitly accounts for population extinction. Predictions from our approximation faithfully capture the behaviour of the random walk model and provide additional information compared to standard approximations. We examine the influence of the number of lattice sites and the initial number of individuals on the long-term population behaviour, and demonstrate the reduction in computation time between the random walk model and our approximation.
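A minimal sketch of the kind of lattice-based birth-death-movement model with exclusion described above is given below. The 1D lattice, event probabilities, and initial condition are illustrative assumptions; the paper's model and its state space diffusion approximation are not reproduced here:

```python
import random

random.seed(1)

# 1D lattice exclusion process: agents move, proliferate into empty
# neighbouring sites, and die. Crowding blocks birth and movement.
L = 100                                 # number of lattice sites
p_move, p_birth, p_death = 0.5, 0.05, 0.02  # illustrative per-sweep probabilities
occupied = set(range(40, 60))           # initial population of 20 agents

def sweep(occupied):
    agents = list(occupied)
    random.shuffle(agents)
    for site in agents:
        if site not in occupied:        # agent removed earlier this sweep
            continue
        r = random.random()
        if r < p_death:
            occupied.discard(site)
        elif r < p_death + p_birth:
            target = site + random.choice((-1, 1))
            if 0 <= target < L and target not in occupied:
                occupied.add(target)    # birth blocked by crowding (exclusion)
        elif r < p_death + p_birth + p_move:
            target = site + random.choice((-1, 1))
            if 0 <= target < L and target not in occupied:
                occupied.discard(site)
                occupied.add(target)
    return occupied

populations = []
for _ in range(200):
    occupied = sweep(occupied)
    populations.append(len(occupied))
```

Averaging `populations` over many independent realisations is exactly the computationally intensive step that motivates continuum approximations.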

    The Effect of Malaysia General Election on Financial Network: An Evidence from Shariah-Compliant Stocks on Bursa Malaysia

    Rather than focusing only on market volatility, market participants should consider how the general election affected the correlations between stocks. Malaysia's 14th general election was held on 9 May 2018, and this event had a great impact on the stocks listed on Bursa Malaysia. Thus, this study investigates the effect of the 14th Malaysian general election on the correlations between stocks on Bursa Malaysia, specifically Shariah-compliant stocks. In addition, this paper examines changes in network topology over the six months before and after the election. A minimum spanning tree was used to visualize the correlations between the stocks, and centrality measures, namely degree, closeness and betweenness, were computed to identify any changes in which stocks play a crucial role in the network before and after the election.
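The minimum spanning tree construction used in such studies can be sketched as follows, here on synthetic returns via the Mantegna distance d_ij = sqrt(2(1 - rho_ij)) and a simple Prim's algorithm. The data, stock count, and sample length are illustrative assumptions, not Bursa Malaysia series:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic daily returns; real use would take log-returns of listed stocks.
n_stocks, n_days = 8, 250
returns = rng.normal(size=(n_days, n_stocks))
rho = np.corrcoef(returns, rowvar=False)
# Mantegna distance: strongly correlated stocks are "close".
dist = np.sqrt(np.maximum(2.0 * (1.0 - rho), 0.0))

# Prim's algorithm on the complete distance graph.
in_tree = {0}
mst_edges = []
while len(in_tree) < n_stocks:
    best = None
    for i in in_tree:
        for j in range(n_stocks):
            if j not in in_tree and (best is None or dist[i, j] < best[2]):
                best = (i, j, dist[i, j])
    mst_edges.append(best[:2])
    in_tree.add(best[1])

# Degree centrality: number of MST links per stock; hub stocks dominate the tree.
degree = [0] * n_stocks
for i, j in mst_edges:
    degree[i] += 1
    degree[j] += 1
```

Building one tree from the six months before the election and one from the six months after, then comparing centralities, is the kind of comparison the study performs.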

    An Initial Framework Assessing the Safety of Complex Systems

    Work presented at the Conference on Complex Systems, held online, 7-11 December 2020. Atmospheric blocking events, that is, large-scale, nearly stationary atmospheric pressure patterns, are often associated with extreme weather in the mid-latitudes, such as heat waves and cold spells, which have significant consequences for ecosystems, human health and the economy. The high impact of blocking events has motivated numerous studies. However, there is not yet a comprehensive theory explaining their onset, maintenance and decay, and their numerical prediction remains a challenge. In recent years, a number of studies have successfully employed complex network descriptions of fluid transport to characterize dynamical patterns in geophysical flows. The aim of the current work is to investigate the potential of so-called Lagrangian flow networks for the detection and perhaps forecasting of atmospheric blocking events. The network is constructed by associating nodes with regions of the atmosphere and establishing links based on the flux of material between these nodes during a given time interval. One can then use effective tools and metrics developed in the context of graph theory to explore the atmospheric flow properties. In particular, Ser-Giacomi et al. [1] showed how optimal paths in a Lagrangian flow network highlight distinctive circulation patterns associated with atmospheric blocking events. We extend these results by studying the behavior of selected network measures (such as degree, entropy and harmonic closeness centrality) at the onset of and during blocking situations, demonstrating their ability to trace the spatio-temporal characteristics of these events. This research was conducted as part of the CAFE (Climate Advanced Forecasting of sub-seasonal Extremes) Innovative Training Network, which has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 813844.
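A Lagrangian flow network of the kind described can be sketched from a matrix of material fluxes between regions; the matrix below is random and purely illustrative, not derived from real wind fields, and the degree threshold is an assumption:

```python
import numpy as np

rng = np.random.default_rng(0)

# Nodes are atmospheric regions; P[i, j] is the fraction of material moving
# from region i to region j over a fixed time interval.
n = 6
flux = rng.random((n, n)) + 0.01        # strictly positive synthetic fluxes
P = flux / flux.sum(axis=1, keepdims=True)  # row-stochastic transition matrix

# Out-degree: number of regions each node exports to above a small threshold.
threshold = 0.05
out_degree = (P > threshold).sum(axis=1)

# Shannon entropy of each node's out-links: low entropy indicates a
# concentrated, channel-like flow, the kind of signature examined around
# blocking situations.
entropy = -(P * np.log(P)).sum(axis=1)
```

Tracking how these per-node measures evolve as successive transport matrices are built from consecutive time intervals is the spirit of the analysis described above.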

    Flexible estimation of temporal point processes and graphs

    Handling complex data types with spatial structures, temporal dependencies, or discrete values is generally a challenge in statistics and machine learning. In recent years, there has been an increasing need for methodological and theoretical work to analyse non-standard data types, for instance, data collected on protein structures, gene interactions, social networks or physical sensors. In this thesis, I will propose a methodology and provide theoretical guarantees for analysing two general types of discrete data emerging from interactive phenomena, namely temporal point processes and graphs. On the one hand, temporal point processes are stochastic processes used to model event data, i.e., data that comes as discrete points in time or space where some phenomenon occurs. Some of the most successful applications of these discrete processes include online messages, financial transactions, earthquake strikes, and neuronal spikes. The popularity of these processes notably comes from their ability to model unobserved interactions and dependencies between temporally and spatially distant events. However, statistical methods for point processes generally rely on estimating a latent, unobserved, stochastic intensity process. In this context, designing flexible models and consistent estimation methods is often a challenging task. On the other hand, graphs are structures made of nodes (or agents) and edges (or links), where an edge represents an interaction or relationship between two nodes. Graphs are ubiquitous for modelling real-world social, transport, and mobility networks, where edges can correspond to virtual exchanges, physical connections between places, or migrations across geographical areas. Besides, graphs are used to represent correlations and lead-lag relationships between time series, and local dependence between random objects. 
Graphs are typical examples of non-Euclidean data, where adequate distance measures, similarity functions, and generative models need to be formalised. In the deep learning community, graphs have become particularly popular within the field of geometric deep learning. Structure and dependence can both be modelled by temporal point processes and graphs, although predominantly, the former act on the temporal domain while the latter conceptualise spatial interactions. Nonetheless, some statistical models combine graphs and point processes in order to account for both spatial and temporal dependencies. For instance, temporal point processes have been used to model the birth times of edges and nodes in temporal graphs. Moreover, some multivariate point process models have a latent graph parameter governing the pairwise causal relationships between the components of the process. In this thesis, I will notably study such a model, called the Hawkes model, as well as graphs evolving in time. This thesis aims at designing inference methods that provide flexibility in the contexts of temporal point processes and graphs. This manuscript is presented in an integrated format, with four main chapters and two appendices. Chapters 2 and 3 are dedicated to the study of Bayesian nonparametric inference methods in the generalised Hawkes point process model. While Chapter 2 provides theoretical guarantees for existing methods, Chapter 3 also proposes, analyses, and evaluates a novel variational Bayes methodology. The other main chapters introduce and study model-free inference approaches for two estimation problems on graphs, namely spectral methods for the signed graph clustering problem in Chapter 4, and a deep learning algorithm for the network change point detection task on temporal graphs in Chapter 5. Additionally, Chapter 1 provides an introduction and background preliminaries on point processes and graphs. 
Chapter 6 concludes this thesis with a summary and critical reflection on the works in this manuscript, and proposals for future research. Finally, the appendices contain two supplementary papers. The first one, in Appendix A, initiated after the COVID-19 outbreak in March 2020, is an application of a discrete-time Hawkes model to COVID-related death counts during the first wave of the pandemic. The second work, in Appendix B, was conducted during an internship at Amazon Research in 2021, and proposes an explainability method for anomaly detection models acting on multivariate time series.
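A univariate Hawkes process of the self-exciting kind studied in this thesis can be simulated with Ogata's thinning algorithm; the exponential kernel and parameter values below are illustrative assumptions, not the generalised model analysed in Chapters 2 and 3:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hawkes intensity: lambda(t) = mu + alpha * beta * sum_i exp(-beta * (t - t_i)),
# so each past event temporarily raises the rate of future events.
mu, alpha, beta, T = 0.5, 0.8, 1.2, 50.0  # baseline, excitation, decay, horizon

def intensity(t, events):
    return mu + alpha * beta * sum(np.exp(-beta * (t - s)) for s in events if s < t)

# Ogata's thinning: propose candidates from a dominating constant rate,
# accept each with probability lambda(candidate) / lambda_bar.
events = []
t = 0.0
while t < T:
    # Upper bound on the intensity just after t (the +alpha*beta term covers
    # the jump caused by an event occurring exactly at t).
    lam_bar = intensity(t, events) + alpha * beta
    t += rng.exponential(1.0 / lam_bar)
    if t < T and rng.uniform() * lam_bar <= intensity(t, events):
        events.append(t)
```

The branching ratio alpha < 1 keeps the process stable; the inference problem treated in the thesis is the reverse direction, recovering the intensity structure from observed event times.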