622 research outputs found

    Bayesian inference for indirectly observed stochastic processes, applications to epidemic modelling

    Stochastic processes are mathematical objects that offer a probabilistic representation of how quantities evolve in time. In this thesis we focus on estimating the trajectories and parameters of dynamical systems in cases where only indirect observations of the driving stochastic process are available. We first explored ways to use weekly recorded numbers of influenza cases to capture how the frequency and nature of contacts with infected individuals evolved over time. The latter was modelled with diffusions and can be used to quantify the impact of time-varying drivers of epidemics such as holidays, climate, or prevention interventions. Following this idea, we estimated how the frequency of condom use evolved during the Gates Foundation's intervention against HIV in India. In this setting, the available estimates of the proportion of individuals infected with HIV were not only indirect but also very scarce, leading to specific difficulties. Finally, we developed a methodology for fractional Brownian motions (fBM), here a fractional stochastic volatility model indirectly observed through market prices. The intractability of the likelihood function, which requires augmenting the parameter space with the diffusion path, is ubiquitous in this thesis. We aimed for inference methods robust to refinements of the time discretisation, which are necessary to ensure the accuracy of Euler schemes. The particle marginal Metropolis-Hastings (PMMH) algorithm exhibits this mesh-free property. We propose the use of fast approximate filters as a pre-exploration tool to estimate the shape of the target density, yielding a quicker and more robust adaptation phase for the asymptotically exact algorithm. The fBM problem could not be treated with the PMMH and required an alternative methodology based on reparameterisation and advanced Hamiltonian Monte Carlo techniques on the diffusion path space, which would also be applicable in the Markovian setting.
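The PMMH algorithm described in this abstract can be sketched on a toy problem. The following is a minimal illustration, not the thesis's epidemic diffusion model: a latent AR(1) state observed with Gaussian noise, a bootstrap particle filter supplying the unbiased likelihood estimate, and a random-walk Metropolis step on a single parameter. All model choices, names, and settings here are illustrative assumptions.

```python
import numpy as np

def particle_loglik(y, phi, sigma_x, sigma_y, n_particles=200, rng=None):
    """Bootstrap particle filter log-likelihood estimate for the toy model
    x_t = phi*x_{t-1} + sigma_x*eps_t,  y_t = x_t + sigma_y*nu_t."""
    rng = np.random.default_rng() if rng is None else rng
    x = rng.normal(0.0, sigma_x, n_particles)          # initial particle cloud
    loglik = 0.0
    for yt in y:
        x = phi * x + sigma_x * rng.normal(size=n_particles)   # propagate
        logw = -0.5 * ((yt - x) / sigma_y) ** 2 - np.log(sigma_y)
        m = logw.max()
        w = np.exp(logw - m)
        loglik += m + np.log(w.mean())                 # log of the mean weight
        x = rng.choice(x, size=n_particles, p=w / w.sum())     # resample
    return loglik

def pmmh(y, n_iters=500, step=0.2, rng=None):
    """Random-walk PMMH on theta = log(sigma_x), with phi and sigma_y held
    fixed for brevity; the likelihood estimate is retained on rejection."""
    rng = np.random.default_rng(0) if rng is None else rng
    theta = 0.0
    ll = particle_loglik(y, 0.9, np.exp(theta), 0.5, rng=rng)
    chain = []
    for _ in range(n_iters):
        prop = theta + step * rng.normal()
        ll_prop = particle_loglik(y, 0.9, np.exp(prop), 0.5, rng=rng)
        # flat prior on theta: accept with the estimated likelihood ratio
        if np.log(rng.uniform()) < ll_prop - ll:
            theta, ll = prop, ll_prop
        chain.append(theta)
    return np.array(chain)
```

Because the particle filter's likelihood estimate is unbiased, this chain targets the exact posterior regardless of the number of particles, which is what makes PMMH attractive as an "asymptotically exact" method.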

    Modelling the effects of ecology on wildlife disease surveillance

    Surveillance is the first line of defence against disease, whether to monitor endemic cycles or to detect emergent epidemics. Knowledge of disease in wildlife is of considerable importance for managing risks to humans, livestock and wildlife species. Recent public health concerns (e.g. Highly Pathogenic Avian Influenza, West Nile Virus, Ebola) have increased interest in wildlife disease surveillance. However, current practice is based on protocols developed for livestock systems that do not account for the potentially large fluctuations in host population density and disease prevalence seen in wildlife. A generic stochastic modelling framework was developed in which surveillance of wildlife disease systems is characterised in terms of key demographic, epidemiological and surveillance parameters. Discrete and continuous state-space representations are simulated using the Gillespie algorithm and numerical solution of stochastic differential equations, respectively. Mathematical analysis and these simulation tools are deployed to show that demographic fluctuations and stochasticity in transmission dynamics can reduce disease detection probabilities and lead to bias and reduced precision in the estimates of prevalence obtained from wildlife disease surveillance. This suggests that surveillance designs based on current practice may lead to underpowered studies and provide poor characterisations of the risks posed by disease in wildlife populations. By parameterising the framework for specific wildlife host species, these generic conclusions are shown to be relevant to disease systems of current interest. The generic framework was extended to incorporate spatial heterogeneity, and the impact of design on the ability of spatially distributed surveillance networks to detect emergent disease at a regional scale was then assessed. Results show that dynamic spatial reallocation of a fixed level of surveillance effort led to more rapid detection of disease than static designs.
This thesis has shown that spatio-temporal heterogeneities affect the efficacy of surveillance and should therefore be considered when undertaking surveillance of wildlife disease systems.
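The Gillespie algorithm used here for the discrete state-space representation can be illustrated with a minimal stochastic SIR epidemic. This is a generic toy sketch with assumed parameters, not the thesis's wildlife surveillance framework.

```python
import numpy as np

def gillespie_sir(beta, gamma, s0, i0, t_max, rng=None):
    """Exact stochastic simulation (Gillespie algorithm) of an SIR epidemic.
    Two events: infection at rate beta*S*I/N and recovery at rate gamma*I."""
    rng = np.random.default_rng() if rng is None else rng
    s, i, r = s0, i0, 0
    n = s0 + i0
    t, times, infected = 0.0, [0.0], [i0]
    while i > 0 and t < t_max:
        rate_inf = beta * s * i / n
        rate_rec = gamma * i
        total = rate_inf + rate_rec
        t += rng.exponential(1.0 / total)        # exponential waiting time
        if rng.uniform() < rate_inf / total:     # pick which event fires
            s, i = s - 1, i + 1                  # infection
        else:
            i, r = i - 1, r + 1                  # recovery
        times.append(t)
        infected.append(i)
    return np.array(times), np.array(infected)
```

Repeated runs of such a simulator expose the demographic stochasticity the abstract refers to: with a single initial case, many realisations fade out before detection is possible, which is exactly why detection probabilities fall below deterministic expectations.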

    Validation and noise robustness assessment of microscopic anisotropy estimation with clinically feasible double diffusion encoding MRI

    Purpose: Double diffusion encoding (DDE) MRI enables the estimation of microscopic diffusion anisotropy, yielding valuable information on tissue microstructure. A recent study proposed that the acquisition of rotationally invariant DDE metrics, typically obtained using a spherical "5-design," could be greatly simplified by assuming Gaussian diffusion, facilitating reduced acquisition times that are more compatible with clinical settings. Here, we aim to validate the new minimal acquisition scheme against the standard DDE 5-design and to quantify the proposed method's noise robustness to facilitate future clinical use. / Theory and Methods: DDE MRI experiments were performed on both ex vivo and in vivo rat brains at 9.4 T using the 5-design and the proposed minimal design, taking into account the difference in the number of acquisitions. The ensuing microscopic fractional anisotropy (μFA) maps were compared over a range of b-values up to 5000 s/mm². Noise robustness was studied using analytical calculations and numerical simulations. / Results: The minimal protocol quantified μFA at an accuracy comparable to the estimates obtained by means of the more theoretically robust DDE 5-design. μFA's sensitivity to noise was found to depend strongly, and nonlinearly, on compartment anisotropy and tensor magnitude. When μFA < 0.75 or when mean diffusivity is particularly low, a very high signal-to-noise ratio is required for precise quantification of μFA. / Conclusion: Our work supports using DDE for quantifying microscopic diffusion anisotropy in clinical settings but raises hitherto overlooked precision issues when measuring μFA with DDE at typical clinical signal-to-noise ratios.

    Friction, Vibration and Dynamic Properties of Transmission System under Wear Progression

    This reprint focuses on wear and fatigue analysis, the dynamic properties of coating surfaces in transmission systems, and non-destructive condition monitoring for the health management of transmission systems. Transmission systems play a vital role in many industrial structures, including wind turbines, vehicles, mining and material-handling equipment, offshore vessels, and aircraft. Surface wear is an inevitable phenomenon during the service life of transmission systems (such as gearboxes, bearings, and shafts), and wear propagation can reduce the durability of the contact coating surface. As a result, the performance of the transmission system can degrade significantly, which can cause sudden shutdown of the whole system and lead to unexpected economic losses and accidents. Therefore, to ensure adequate health management of the transmission system, it is necessary to investigate the friction, vibration, and dynamic properties of its contact coating surface and to monitor its operating conditions.

    VI Workshop on Computational Data Analysis and Numerical Methods: Book of Abstracts

    The VI Workshop on Computational Data Analysis and Numerical Methods (WCDANM) will be held on June 27-29, 2019, in the Department of Mathematics of the University of Beira Interior (UBI), Covilhã, Portugal. It is a unique opportunity to disseminate scientific research related to Mathematics in general, with particular relevance to Computational Data Analysis and Numerical Methods in theoretical and/or practical fields, using new techniques and giving special emphasis to applications in Medicine, Biology, Biotechnology, Engineering, Industry, Environmental Sciences, Finance, Insurance, Management and Administration. The meeting will provide a forum for discussion and debate of ideas of interest to the scientific community in general. New scientific collaborations among colleagues, namely in Masters and PhD projects, are expected. The event is open to the entire scientific community (with or without a communication/poster).

    Multilevel Monte Carlo approach for estimating reliability of electric distribution systems

    Most of the power outages experienced by customers are due to failures in electric distribution systems. The ultimate goal of a distribution system is to meet customer electricity demand while maintaining a satisfactory level of reliability, with low interruption frequency and duration as well as low outage costs. Quantitative evaluation of reliability is therefore a significant aspect of the decision-making process when planning and designing future network expansion or reinforcement. The simulation approach to reliability evaluation is generally based on the sequential Monte Carlo (MC) method, which can capture the random behaviour of system components. Using the MC method to obtain accurate estimates of reliability can be computationally costly, particularly when dealing with rare events (i.e. when high accuracy is required). This thesis proposes a simple and effective methodology for accelerating MC simulation in distribution system reliability evaluation. The proposed method is based on a novel Multilevel Monte Carlo (MLMC) simulation approach. MLMC is a variance reduction technique for MC simulation that can reduce the computational burden of the MC method dramatically while accounting for both sampling and discretisation errors and converging to a controllable accuracy level. The idea of MLMC is to consider a hierarchy of computational meshes (levels) instead of the single time-discretisation level used in the MC method. Most of the computational effort in the MLMC method is transferred from the finest level to the coarsest one, leading to substantial computational savings. Because the simulations are conducted using multiple approximations, the less accurate estimate on each coarse level can be sequentially corrected by averaging the differences between the estimates on consecutive levels of the hierarchy.
This dissertation addresses the following questions: can the MLMC method be used for reliability evaluation? If so, how are MLMC estimators for reliability evaluation constructed? Finally, how much computational saving can be expected from the MLMC method relative to the MC method? The MLMC approach is implemented by solving the stochastic differential equations of random variables related to the reliability indices. The differential equations are solved using different discretisation schemes; in this work, the performance of two such schemes, Euler-Maruyama and Milstein, is investigated. The benchmark Roy Billinton Test System is used as the test system. Based on the proposed MLMC method, a number of reliability studies of distribution systems have been carried out in this thesis, including customer interruption frequency and duration based reliability assessment; cost/benefit estimation; and reliability evaluation incorporating time-varying factors such as weather-dependent failure rates and restoration times of components, and time-varying load and cost models of supply points. Numerical results demonstrating the computational performance of the proposed method are presented, and the performances of the MLMC and MC methods are compared. The results show that the MLMC method is computationally efficient compared with the standard MC method while retaining an acceptable level of accuracy. The novel computational tool, including the examples presented in this thesis, will help system planners and utility managers obtain useful information on the reliability of distribution networks, with which they can take the necessary steps to speed up the decision-making process for reliability improvement.
Thesis (Ph.D.) -- University of Adelaide, School of Electrical and Electronic Engineering, 201
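The telescoping idea behind MLMC can be sketched for a scalar SDE. The example below is a generic illustration on geometric Brownian motion, not the thesis's reliability estimators: coupled Euler-Maruyama paths share the same Brownian increments across fine and coarse levels, so each level only estimates a small correction.

```python
import numpy as np

def euler_level_diff(level, n_samples, T=1.0, mu=0.05, sigma=0.2, x0=1.0, rng=None):
    """Sample Y_l = P_fine - P_coarse for the payoff P = X_T of
    dX = mu*X dt + sigma*X dW, with 2**level fine Euler steps.
    Fine and coarse paths share Brownian increments (the MLMC coupling)."""
    rng = np.random.default_rng() if rng is None else rng
    n_fine = 2 ** level
    dt = T / n_fine
    dw = rng.normal(0.0, np.sqrt(dt), (n_samples, n_fine))
    xf = np.full(n_samples, x0)
    for k in range(n_fine):                      # fine Euler-Maruyama path
        xf = xf + mu * xf * dt + sigma * xf * dw[:, k]
    if level == 0:
        return xf                                # coarsest level: P_0 itself
    xc = np.full(n_samples, x0)
    for k in range(n_fine // 2):                 # coarse path, double step
        dwc = dw[:, 2 * k] + dw[:, 2 * k + 1]    # coarse increment = sum of fine
        xc = xc + mu * xc * (2 * dt) + sigma * xc * dwc
    return xf - xc

def mlmc_estimate(max_level, n_samples, rng=None):
    """Telescoping estimator: E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}]."""
    rng = np.random.default_rng(0) if rng is None else rng
    return sum(euler_level_diff(l, n_samples, rng=rng).mean()
               for l in range(max_level + 1))
```

Because the coupled differences have small variance, most samples can be taken on the cheap coarse levels, which is the source of the computational saving claimed in the abstract (in practice, the number of samples per level is optimised rather than held constant as in this sketch).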

    Forecasting stylised features of electricity prices in the Australian National Electricity Market

    This thesis tests whether forecast accuracy improves when models that explicitly capture the stylised features of the Australian National Electricity Market (NEM) are employed to generate predictions. It is believed that by explicitly modelling these features of electricity wholesale spot prices, the accuracy of price forecast models can be improved relative to a standard alternative. The stylised features identified in the data are mean-reversion, sudden short-lived and consecutive jumps, and heavy tails. When models are employed to capture the stylised features of electricity prices, they necessarily become more complex and often contain a greater number of parameters, which combine to mimic the characteristics observed in the price series. Throughout this thesis, the principle of parsimony (Makridakis et al., page 609) is maintained: if two models generate effectively the same forecast performance, the simpler model is preferred, whether or not it contains the stylised features. This is also known as Occam's razor. This investigation is important because a better understanding of which models are more useful has the potential to lead to more accurate price forecasts, which may result in less volatility in market prices and hence more efficient markets. Further, by assessing models that capture various stylised features, it may be possible to infer the importance of particular features. Given that wholesale prices are a major determinant of how much end users pay for powering their homes and businesses, a better understanding of which forecasting models work (and which do not) will allow market participants to develop more successful business strategies for adjusting supply to meet demand and will assist with the valuation of financial assets as part of risk management.
Additionally, a better understanding of the dynamics of electricity prices and their implications for successful forecasting is important for government policy makers, as government sets the rules that govern the production and distribution of electricity. It is believed that by explicitly modelling the stylised features of electricity wholesale prices, forecast accuracy can be improved upon the baseline models commonly used in quantitative finance. This thesis investigates the forecasting ability of two distinct modelling approaches which by construction capture the stylised characteristics of electricity prices, namely linear continuous-time and non-linear modelling methods. The AR-GARCH model is chosen as the standard approach to forecasting price series (Engle, 2001) and is taken as the benchmark model in this thesis. More specifically, this thesis aims to answer the following research questions: Does the application of continuous-time models in capturing the stylised features of Australian electricity wholesale spot prices improve forecasting ability over the traditional AR-GARCH model? Does the application of non-linear forecast models in capturing these features improve forecasting ability over the traditional AR-GARCH model? The continuous-time models examined in this thesis are Geometric Brownian Motion (GBM), Mean-Reverting, and Mean-Reverting Jump-Diffusion processes. GBM is included because it is the foundation for the Mean-Reverting and Jump-Diffusion models considered in this thesis. Continuous-time models capture some of the main stylised features of electricity prices: the Mean-Reverting process captures mean-reversion (the tendency of electricity prices to revert to their long-term average over time), while the Mean-Reverting Jump-Diffusion process models the sudden jumps prevalent in Australian electricity prices.
The models are ordered such that each successive model extends the one preceding it. Each extension addresses a stylised feature of the data, so the a priori expectation is that forecasting performance will improve. The non-linear approach to forecasting Australian electricity prices is implemented through a Markov Regime-Switching model and the application of Extreme Value Theory (EVT) to electricity price modelling. The Markov Regime-Switching model is a non-linear modelling tool that can capture the consecutive spikes prevalent in electricity prices that Mean-Reverting and Jump-Diffusion processes fail to capture. EVT is included so that the heavy tails present in electricity prices can be adequately captured. Copulas are considered as a method for modelling the dependence structure of the data; the forecasts based on the EVT model build on Copula functions, as these model the interdependence of prices across the separate regions of the Australian electricity markets. The models examined in this thesis are:
1. AR(1)-GARCH(1)
2. Geometric Brownian Motion
3. Mean-Reverting model
4. Mean-Reverting Jump-Diffusion model
5. Markov Regime-Switching model, with spike distributions modelled with:
6. a Gaussian distribution
7. a Log-Gaussian distribution
8. Extreme Value Theory and Copula functions
Each model under investigation mimics a known characteristic of electricity prices. Comparative performance evaluations of each model investigated in this thesis showed that the benchmark model provides superior short-term forecasting ability.
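A mean-reverting jump-diffusion of the kind described above can be simulated with a simple Euler scheme. The sketch below uses an assumed toy parameterisation for a log spot price and is purely illustrative; it is not the thesis's fitted NEM model.

```python
import numpy as np

def simulate_mrjd(x0, kappa, mu, sigma, jump_rate, jump_scale, T, n_steps, rng=None):
    """Euler discretisation of a mean-reverting jump-diffusion for a log price:
    dX = kappa*(mu - X) dt + sigma dW + J dN,
    where N is a Poisson process of intensity jump_rate and jump sizes are
    N(0, jump_scale**2) -- a stylised model for electricity spot prices."""
    rng = np.random.default_rng() if rng is None else rng
    dt = T / n_steps
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        jumps = rng.poisson(jump_rate * dt)      # spikes this step (usually 0 or 1)
        jump_size = jumps * rng.normal(0.0, jump_scale)
        x[k + 1] = (x[k]
                    + kappa * (mu - x[k]) * dt   # pull back to long-run level mu
                    + sigma * np.sqrt(dt) * rng.normal()
                    + jump_size)
    return x
```

Mean-reversion (the `kappa` term) makes spikes short-lived, which mimics the sudden, quickly decaying price jumps the thesis identifies in the NEM data; without it, a jump would shift the price level permanently.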