
    A novel particle filtering method for estimation of pulse pressure variation during spontaneous breathing

    Background: We describe the first automatic algorithm designed to estimate the pulse pressure variation (PPV) from arterial blood pressure (ABP) signals under spontaneous breathing conditions. While a few publicly available algorithms can automatically estimate PPV accurately and reliably in mechanically ventilated subjects, there is currently no automatic algorithm for estimating PPV in spontaneously breathing subjects. The algorithm utilizes our recently developed sequential Monte Carlo method (SMCM), called a maximum a-posteriori adaptive marginalized particle filter (MAM-PF). We report the performance assessment results of the proposed algorithm on real ABP signals from spontaneously breathing subjects. Results: Our assessment results indicate good agreement between the automatically estimated PPV and the gold-standard PPV obtained with manual annotations. All of the automatically estimated PPV index measurements (PPVauto) agreed with the manual gold-standard measurements (PPVmanu) to within ±4 %. Conclusion: The proposed automatic algorithm is able to give reliable estimates of PPV from ABP signals alone during spontaneous breathing.
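
    The PPV index itself is computed from the maximal and minimal beat-to-beat pulse pressures over a respiratory cycle. A minimal sketch of that calculation is given below; it assumes beat annotations (systolic/diastolic pressures) are already available and is not the MAM-PF algorithm described in the abstract, whose purpose is precisely to automate this estimation.

```python
# Illustrative sketch (not the authors' MAM-PF algorithm): computing the PPV
# index from beat-by-beat pulse pressures over one respiratory cycle, assuming
# systolic/diastolic pressures have already been annotated on the ABP signal.
import numpy as np

def ppv_index(systolic, diastolic):
    """PPV (%) from per-beat systolic/diastolic pressures within one breath."""
    pp = np.asarray(systolic) - np.asarray(diastolic)   # per-beat pulse pressure
    pp_max, pp_min = pp.max(), pp.min()
    return 100.0 * (pp_max - pp_min) / ((pp_max + pp_min) / 2.0)

# Hypothetical beat annotations (mmHg) spanning a single respiratory cycle
sys_p = [122, 118, 114, 116, 121, 124]
dia_p = [ 78,  76,  75,  76,  78,  79]
print(f"PPV = {ppv_index(sys_p, dia_p):.1f} %")
```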

    Analysing time dependent problems

    Inverse analysis for time dependent problems is discussed in this chapter. When time dependent processes are analysed, further uncertainties come from initial conditions as well as from time dependent boundary conditions and loads, in addition to model parameters. Inverse modelling techniques have been specifically developed for this class of problems, which exploit the availability of a set of measurement and/or monitoring data at given locations at subsequent time instants. Sequential Bayesian data assimilation is introduced, and a brief review of filtering techniques is given. In filtering, the problem unknown is the time evolution of the probability density function of the system state, described by means of appropriate time dependent variables and time invariant parameters, conditioned on all previous observations. Particle filtering is chosen to conceptually illustrate the methodology, by means of two simple introductory examples.
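
    The filtering idea summarized above can be made concrete with a generic bootstrap particle filter. The sketch below is a minimal illustration under assumed random-walk state dynamics and a Gaussian measurement model; it is not taken from the chapter's own examples.

```python
# Minimal bootstrap particle filter sketch for sequential Bayesian data
# assimilation. The random-walk state model, noise levels, and synthetic
# observations are illustrative assumptions, not the chapter's examples.
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_pf(observations, n_particles=500, proc_std=0.1, meas_std=0.5):
    particles = rng.normal(0.0, 1.0, n_particles)        # prior draw for the state
    estimates = []
    for y in observations:
        # Predict: propagate each particle through the (assumed) dynamics
        particles = particles + rng.normal(0.0, proc_std, n_particles)
        # Update: weight particles by the Gaussian measurement likelihood
        w = np.exp(-0.5 * ((y - particles) / meas_std) ** 2)
        w /= w.sum()
        estimates.append(np.sum(w * particles))           # posterior mean estimate
        # Resample: multinomial resampling to avoid weight degeneracy
        particles = rng.choice(particles, size=n_particles, p=w)
    return np.array(estimates)

# Synthetic monitoring data: slowly drifting state observed with noise
truth = np.cumsum(rng.normal(0, 0.1, 100))
obs = truth + rng.normal(0, 0.5, 100)
print(bootstrap_pf(obs)[-5:])
```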

    Review of double beta experiments

    This paper is the first part of the manuscript written in April 2012 for my academic Accreditation to supervise research. It offers a review of double beta decay experimental techniques. My purpose is to detail, for each technique, the different origins of background, how they can be identified, and how they can be reduced. Advantages and limitations are discussed. This review is organized as follows. First, the question of the possible Majorana nature of the neutrino is presented and the physics of neutrinoless double beta decay is summarized. Then I begin by presenting the tracko-calo NEMO-3 and SuperNEMO experiments. I have worked on these two experiments for 15 years, so it was natural to start with them with a relatively more exhaustive description. I will then present the germanium technique, followed by the bolometer technique. I will describe in detail the recent progress in scintillating bolometers because I think that it is one of the most promising techniques. Finally, I will review the large liquid scintillator detectors and Xenon TPCs. The last chapter offers a summary of the different techniques and projects. Comment: 100 pages; Manuscript for Accreditation to supervise research (Univ. Paris-Sud 11), May 201

    Time-Varying Modeling of Glottal Source and Vocal Tract and Sequential Bayesian Estimation of Model Parameters for Speech Synthesis

    Abstract: Speech is generated by articulators acting on a phonatory source. Identification of this phonatory source and of the articulatory geometry are individually challenging and ill-posed problems, called speech separation and articulatory inversion, respectively. There exists a trade-off between the decomposition and the recovered articulatory geometry due to the multiple possible mappings between an articulatory configuration and the speech produced. When measurements are obtained only from a microphone sensor, the lack of any invasive insight adds a further challenge to an already difficult problem. A joint non-invasive estimation strategy that couples articulatory and phonatory knowledge would lead to better articulatory speech synthesis. In this thesis, a joint estimation strategy for speech separation and articulatory geometry recovery is studied. Unlike previous periodic/aperiodic decomposition methods that use stationary speech models within a frame, the proposed model presents a non-stationary speech decomposition method. A parametric glottal source model and an articulatory vocal tract response are represented in a dynamic state-space formulation. The unknown parameters of the speech generation components are estimated using sequential Monte Carlo methods under some specific assumptions. The proposed approach is compared with other glottal inverse filtering methods, including iterative adaptive inverse filtering, state-space inverse filtering, and the quasi-closed phase method. Dissertation/Thesis: Masters Thesis, Electrical Engineering, 201
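
    The source-filter structure underlying this kind of model can be shown with a deliberately simplified sketch: an impulse-train glottal source driving an all-pole vocal tract filter. The pitch, formant frequencies, and bandwidths below are illustrative assumptions, not the parametric glottal model or the estimated quantities from the thesis.

```python
# Simplified source-filter sketch of the speech production model: an
# impulse-train glottal source driving an all-pole vocal tract filter.
# All numeric values are assumptions chosen for illustration.
import numpy as np
from scipy.signal import lfilter

fs = 16000                                   # sampling rate (Hz)
f0 = 120                                     # glottal source fundamental (Hz)
dur = 0.5                                    # duration (s)

# Glottal source: periodic impulse train (a crude stand-in for a glottal model)
n = int(fs * dur)
source = np.zeros(n)
source[::int(fs / f0)] = 1.0

# Vocal tract: all-pole filter built from two assumed formants (resonances)
formants = [(700, 80), (1200, 100)]          # (center frequency Hz, bandwidth Hz)
a = np.array([1.0])
for fc, bw in formants:
    r = np.exp(-np.pi * bw / fs)
    theta = 2 * np.pi * fc / fs
    a = np.convolve(a, [1.0, -2 * r * np.cos(theta), r ** 2])

speech = lfilter([1.0], a, source)           # synthetic vowel-like waveform
print(speech.shape, speech[:5])
```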

    Tracking Rhythmicity in Biomedical Signals using Sequential Monte Carlo methods

    Cyclical patterns are common in signals that originate from natural systems such as the human body and from man-made machinery. Often these cyclical patterns are not perfectly periodic. In that case, the signals are called pseudo-periodic or quasi-periodic and can be modeled as a sum of time-varying sinusoids, whose frequencies, phases, and amplitudes change slowly over time. Each time-varying sinusoid represents an individual rhythmical component, called a partial, that can be characterized by three parameters: frequency, phase, and amplitude. Quasi-periodic signals often contain multiple partials that are harmonically related. In that case, the frequencies of the other partials are exact integer multiples of that of the slowest partial. These signals are referred to as multi-harmonic signals. Examples of such signals are the electrocardiogram (ECG), arterial blood pressure (ABP), and the human voice. A Markov process is a mathematical model for a random system whose future and past states are independent conditional on the present state. Multi-harmonic signals can be modeled as a stochastic process with the Markov property. The Markovian representation of multi-harmonic signals enables us to use state-space tracking methods to continuously estimate the frequencies, phases, and amplitudes of the partials. Several research groups have proposed various signal analysis methods, such as hidden Markov models (HMM), the short-time Fourier transform (STFT), and the Wigner-Ville distribution, to solve this problem. Recently, a few groups of researchers have proposed Monte Carlo methods which sequentially estimate the posterior distribution of the fundamental frequency in multi-harmonic signals. However, multi-harmonic tracking is more challenging than single-frequency tracking, though the reason for this has not been well understood. The main objectives of this dissertation are to elucidate the fundamental obstacles to multi-harmonic tracking and to develop a reliable multi-harmonic tracker that can track cyclical patterns in multi-harmonic signals.
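
    The multi-harmonic signal model described above can be written down directly: each partial's frequency is an integer multiple of a slowly drifting fundamental. The sketch below synthesizes such a signal; the drift model, amplitudes, sampling rate, and noise level are assumptions chosen only for illustration.

```python
# Sketch of the multi-harmonic signal model: a sum of partials whose
# frequencies are exact integer multiples of a slowly drifting fundamental.
# All numeric values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
fs = 250                                     # sampling rate (Hz), ECG-like
t = np.arange(0, 10, 1 / fs)

# Fundamental frequency performs a slow random walk around 1.2 Hz (~72 bpm)
f0 = 1.2 + np.cumsum(rng.normal(0, 0.002, t.size))
phase0 = 2 * np.pi * np.cumsum(f0) / fs      # integrate frequency to get phase

amplitudes = [1.0, 0.5, 0.25]                # partial k+1 has amplitude a_k
signal = sum(a_k * np.sin((k + 1) * phase0) for k, a_k in enumerate(amplitudes))
signal += rng.normal(0, 0.05, t.size)        # additive measurement noise

print(signal[:5])
```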

    AFIT School of Engineering Contributions to Air Force Research and Technology. Calendar Year 1971

    This report contains abstracts of Master of Science theses and Doctoral Dissertations completed during the 1971 calendar year at the School of Engineering, Air Force Institute of Technology

    Physics-based prognostic modelling of filter clogging phenomena

    In industry, contaminant filtration is a common process used to achieve a desired level of purification, since contaminants in liquids such as fuel may lead to performance drops and rapid wear propagation. Generally, filter clogging is the primary failure mode leading to the replacement or cleaning of the filter. Cascading failures and poor system performance are the unfortunate outcomes of a clogged filter. Even though filtration and clogging phenomena and their effects on several observable parameters have been studied for quite some time in the literature, the progression of clogging and its use for prognostic purposes have not been addressed yet. In this work, a physics-based clogging progression model is presented. The proposed model, which is based on a well-known pressure drop equation, is able to model three phases of the clogging phenomenon, the last of which has not been modelled in the literature before. In addition, the presented model is integrated with particle filters to predict future clogging levels and to estimate the remaining useful life of fuel filters. The presented model has been implemented on data collected from an experimental rig in a laboratory environment. In the rig, the pressure drop across the filter, the flow rate, and filter mesh images are recorded throughout accelerated degradation experiments. The physics-based model has been applied to the data obtained from the rig, and the remaining useful lives of the filters used in the experimental rig are reported in the paper. The results show that the presented methodology provides accurate and precise prognostic results.
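
    The prognostic use of particle filters outlined above can be illustrated generically: track the pressure drop together with an unknown growth rate, then propagate the particles forward until they cross a failure threshold to estimate remaining useful life. The exponential growth model, noise levels, and threshold below are assumptions for demonstration, not the paper's physics-based clogging model.

```python
# Illustrative sketch of particle-filter prognostics for filter clogging.
# The growth dynamics, noise levels, threshold, and measurements are assumed.
import numpy as np

rng = np.random.default_rng(2)
n_p, threshold = 1000, 5.0                   # particles, failure threshold (bar)

# Each particle carries a state [pressure_drop, growth_rate]
particles = np.column_stack([
    rng.normal(1.0, 0.05, n_p),              # initial pressure drop (bar)
    rng.uniform(0.01, 0.10, n_p),            # unknown per-step growth rate
])

def step(p):
    """Propagate the assumed clogging dynamics dP_{k+1} = dP_k * (1 + r)."""
    p[:, 0] *= 1.0 + p[:, 1]
    p[:, 0] += rng.normal(0, 0.01, len(p))
    return p

def update(p, measurement, meas_std=0.05):
    """Reweight and resample the particles on a new pressure-drop measurement."""
    w = np.exp(-0.5 * ((measurement - p[:, 0]) / meas_std) ** 2)
    w /= w.sum()
    return p[rng.choice(len(p), size=len(p), p=w)]

def remaining_useful_life(p, max_steps=500):
    """Median number of future steps until each particle crosses the threshold."""
    sims = p.copy()
    alive = np.full(len(sims), max_steps)
    for k in range(max_steps):
        sims = step(sims)
        crossed = (sims[:, 0] >= threshold) & (alive == max_steps)
        alive[crossed] = k + 1
    return np.median(alive)

# Assimilate a few hypothetical pressure-drop measurements, then predict RUL
for z in [1.05, 1.12, 1.20, 1.31]:
    particles = update(step(particles), z)
print("estimated RUL (steps):", remaining_useful_life(particles))
```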

    Load management strategy for Particle-In-Cell simulations in high energy particle acceleration

    In the wake of the intense effort made for the experimental CILEX project, numerical simulation campaigns have been carried out in order to finalize the design of the facility and to identify optimal laser and plasma parameters. These simulations bring, of course, important insight into the fundamental physics at play. As a by-product, they also characterize the quality of our theoretical and numerical models. In this paper, we compare the results given by different codes and point out algorithmic limitations both in terms of physical accuracy and computational performance. These limitations are illustrated in the context of electron laser wakefield acceleration (LWFA). The main limitation we identify in state-of-the-art Particle-In-Cell (PIC) codes is computational load imbalance. We propose an innovative algorithm to deal with this specific issue as well as milestones towards a modern, accurate high-performance PIC code for high energy particle acceleration.
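
    The load-imbalance problem can be illustrated with a generic 1D rebalancing scheme: given per-cell particle counts, place subdomain boundaries so that every rank receives roughly the same particle load. The prefix-sum partitioning below is a standard illustration, not the algorithm proposed in the paper.

```python
# Minimal sketch of 1D domain-decomposition load balancing for a PIC code:
# cut the cell array so each rank gets a roughly equal number of particles.
# The load profile and rank count are illustrative assumptions.
import numpy as np

def balance_boundaries(particles_per_cell, n_ranks):
    """Return cell indices where each subdomain should start/end."""
    cumulative = np.cumsum(particles_per_cell)
    total = cumulative[-1]
    targets = total * np.arange(1, n_ranks) / n_ranks     # ideal cut points
    cuts = np.searchsorted(cumulative, targets)            # nearest cell cuts
    return np.concatenate(([0], cuts, [len(particles_per_cell)]))

rng = np.random.default_rng(3)
# Hypothetical strongly non-uniform load, e.g. particles bunched in a wakefield
load = rng.poisson(lam=np.linspace(1, 200, 256))
bounds = balance_boundaries(load, n_ranks=8)
per_rank = [load[a:b].sum() for a, b in zip(bounds[:-1], bounds[1:])]
print("particles per rank:", per_rank)
```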

    Design, Commissioning and Performance of the PIBETA Detector at PSI

    We describe the design, construction and performance of the PIBETA detector built for the precise measurement of the branching ratio of pion beta decay, pi+ -> pi0 e+ nu, at the Paul Scherrer Institute. The central part of the detector is a 240-module spherical pure CsI calorimeter covering 3*pi sr solid angle. The calorimeter is supplemented with an active collimator/beam degrader system, an active segmented plastic target, a pair of low-mass cylindrical wire chambers and a 20-element cylindrical plastic scintillator hodoscope. The whole detector system is housed inside a temperature-controlled lead brick enclosure which in turn is lined with cosmic muon plastic veto counters. Commissioning and calibration data were taken during two three-month beam periods in 1999/2000 with pi+ stopping rates between 1.3*10^3 pi+/s and 1.3*10^6 pi+/s. We examine the timing, energy and angular detector resolution for photons, positrons and protons in the energy range of 5-150 MeV, as well as the response of the detector to cosmic muons. We illustrate the detector signatures for the assorted rare pion and muon decays and their associated backgrounds. Comment: 117 pages, 48 Postscript figures, 5 tables, Elsevier LaTeX, submitted to Nucl. Instrum. Meth.

    On-line probabilistic classification with particle filters
