
    Particle-filtering approaches for nonlinear Bayesian decoding of neuronal spike trains

    The number of neurons that can be simultaneously recorded doubles every seven years. This ever-increasing number of recorded neurons opens up the possibility to address new questions and extract higher-dimensional stimuli from the recordings. Modeling neural spike trains as point processes, this task of extracting dynamical signals from spike trains is commonly set in the context of nonlinear filtering theory. Particle filter methods relying on importance weights are generic algorithms that solve the filtering task numerically, but exhibit a serious drawback when the problem dimensionality is high: they are known to suffer from the 'curse of dimensionality' (COD), i.e. the number of particles required for a given performance scales exponentially with the number of observable dimensions. Here, we first briefly review the theory on filtering with point-process observations in continuous time. Based on this theory, we investigate both analytically and numerically the reason for the COD of weighted particle filtering approaches: similarly to particle filtering with continuous-time observations, the COD with point-process observations is due to the decay of the effective number of particles, an effect that grows stronger as the number of observable dimensions increases. Given the success of unweighted particle filtering approaches in overcoming the COD for continuous-time observations, we introduce an unweighted particle filter for point-process observations, the spike-based Neural Particle Filter (sNPF), and show that it exhibits a similarly favorable scaling as the number of dimensions grows. Further, we derive learning rules for the parameters of the sNPF from a maximum-likelihood approach. We finally employ a simple decoding task to illustrate the capabilities of the sNPF and to highlight one possible future application of our inference and learning algorithm.
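
    The weight-degeneracy effect described in this abstract can be illustrated with a minimal sketch (this is not the authors' sNPF): a weighted bootstrap particle filter with Poisson spike-count observations, tracking the effective sample size (ESS) as the number of observed neurons grows. The OU latent dynamics, exponential rate link, and all constants below are illustrative assumptions, not taken from the paper.

```python
# Weighted bootstrap particle filter for Poisson spike-count observations.
# Illustrates how the effective sample size (ESS) decays as the observed
# dimension increases. Model choices here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def simulate(dim, T=200, dt=0.01, tau=1.0, rate0=10.0):
    """Simulate an OU latent state and conditionally Poisson spike counts."""
    x = np.zeros((T, dim))
    spikes = np.zeros((T, dim), dtype=int)
    for t in range(1, T):
        x[t] = x[t - 1] - x[t - 1] / tau * dt + np.sqrt(dt) * rng.normal(size=dim)
        rate = rate0 * np.exp(x[t])            # per-neuron firing rate
        spikes[t] = rng.poisson(rate * dt)     # spike counts in this time bin
    return x, spikes

def weighted_pf(spikes, n_particles=1000, dt=0.01, tau=1.0, rate0=10.0):
    """Bootstrap particle filter; returns the ESS trajectory over time."""
    T, dim = spikes.shape
    particles = np.zeros((n_particles, dim))
    ess = np.zeros(T)
    for t in range(T):
        # Propagate particles through the (assumed) OU prior dynamics.
        particles += -particles / tau * dt + np.sqrt(dt) * rng.normal(size=particles.shape)
        # Importance weights from the Poisson point-process likelihood.
        mean_counts = rate0 * np.exp(particles) * dt
        logw = (spikes[t] * np.log(mean_counts) - mean_counts).sum(axis=1)
        w = np.exp(logw - logw.max())
        w /= w.sum()
        ess[t] = 1.0 / np.sum(w ** 2)          # effective number of particles
        # Multinomial resampling.
        idx = rng.choice(n_particles, size=n_particles, p=w)
        particles = particles[idx]
    return ess

for dim in (1, 5, 20):
    _, spikes = simulate(dim)
    print(f"dim={dim:2d}  mean ESS={weighted_pf(spikes).mean():8.1f}")
```

    With a fixed particle budget, the mean ESS drops sharply as `dim` increases, which is the degeneracy that unweighted approaches such as the sNPF are designed to avoid.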

    Synergistic combination of systems for structural health monitoring and earthquake early warning for structural health prognosis and diagnosis

    Earthquake early warning (EEW) systems are currently operating nationwide in Japan and are in beta-testing in California. Such a system detects an earthquake's initiation using online signals from a seismic sensor network and broadcasts a warning of the predicted location and magnitude a few seconds to a minute or so before the strong shaking hits a site. Such a system can be used synergistically with installed structural health monitoring (SHM) systems to enhance pre-event prognosis and post-event diagnosis of structural health. For pre-event prognosis, the EEW information can be used to make probabilistic predictions of the anticipated damage to a structure using seismic loss estimation methodologies from performance-based earthquake engineering. These predictions can support decision-making regarding the activation of appropriate mitigation systems, such as stopping traffic from entering a bridge that has a high predicted probability of damage. Since the time between the warning and the arrival of strong shaking is very short, the probabilistic predictions must be calculated rapidly and the decision-making for mitigation actions automated. For post-event diagnosis, the SHM sensor data can be used in Bayesian updating of the probabilistic damage predictions, with the EEW predictions serving as a prior. Appropriate Bayesian methods for SHM have been published. In this paper, we use pre-trained surrogate models (or emulators) based on machine learning methods to make fast damage and loss predictions that are then used in a cost-benefit decision framework for activation of a mitigation measure. A simple illustrative example of an infrastructure application is presented.
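
    A minimal sketch of the kind of cost-benefit rule the abstract describes: given an EEW-based probability of structural damage, activate a mitigation measure (e.g. closing a bridge to traffic) when its expected cost is lower than the expected loss of doing nothing. The helper name `should_mitigate` and all numbers are hypothetical illustrations, not the paper's model.

```python
def should_mitigate(p_damage: float,
                    loss_if_damage: float,
                    mitigation_cost: float,
                    damage_reduction: float) -> bool:
    """Return True if activating the mitigation minimizes expected loss.

    p_damage          -- probability of damage predicted from the EEW alert
    loss_if_damage    -- monetized loss if damage occurs and nothing is done
    mitigation_cost   -- direct cost of the mitigation action (e.g. closure)
    damage_reduction  -- fraction of the loss avoided by mitigating (0..1)
    """
    expected_loss_no_action = p_damage * loss_if_damage
    expected_loss_action = (mitigation_cost
                            + p_damage * loss_if_damage * (1.0 - damage_reduction))
    return expected_loss_action < expected_loss_no_action

# Example: a 20% damage probability, $5M potential loss, $0.1M closure cost,
# and a mitigation that avoids 80% of the loss -> activate the mitigation.
print(should_mitigate(p_damage=0.2, loss_if_damage=5e6,
                      mitigation_cost=1e5, damage_reduction=0.8))
```

    In the paper's setting, `p_damage` and `loss_if_damage` would come from pre-trained surrogate models evaluated on the EEW prediction, so the comparison can be made within the few seconds available before shaking arrives.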