
    AReS and MaRS - Adversarial and MMD-Minimizing Regression for SDEs

    Stochastic differential equations are an important modeling class in many disciplines. Consequently, there exist many methods relying on various discretization and numerical integration schemes. In this paper, we propose a novel, probabilistic model for estimating the drift and diffusion given noisy observations of the underlying stochastic system. Using state-of-the-art adversarial and moment matching inference techniques, we avoid the discretization schemes of classical approaches. This leads to significant improvements in parameter accuracy and robustness given random initial guesses. On four established benchmark systems, we compare the performance of our algorithms to state-of-the-art solutions based on extended Kalman filtering and Gaussian processes. Comment: Published at the Thirty-sixth International Conference on Machine Learning (ICML 2019)
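The core idea of moment-matching inference can be illustrated with a toy sketch: choose the drift parameter whose simulated samples minimize the maximum mean discrepancy (MMD) to the observed samples. This is not the paper's AReS/MaRS method (which notably avoids discretization); the example below uses a simple Euler-Maruyama simulator of a hypothetical Ornstein-Uhlenbeck process purely to show the MMD-minimization principle.

```python
import numpy as np

def mmd2(x, y, bandwidth=1.0):
    """Squared MMD between 1-D samples x and y, using a Gaussian kernel."""
    def k(a, b):
        d = a[:, None] - b[None, :]
        return np.exp(-d**2 / (2 * bandwidth**2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

# Hypothetical setup: OU process dX = -theta * X dt + sigma dW, observed at t = 2.
theta_true, sigma, dt, n_steps, n_paths = 1.5, 0.5, 0.01, 200, 500

def simulate(theta, rng):
    """Euler-Maruyama endpoint samples (the paper itself avoids such schemes)."""
    x = np.ones(n_paths)
    for _ in range(n_steps):
        x += -theta * x * dt + sigma * np.sqrt(dt) * rng.normal(size=n_paths)
    return x

observed = simulate(theta_true, np.random.default_rng(1))

# Grid search over the drift parameter; common random numbers (fixed seed)
# make the MMD values comparable across candidates.
candidates = np.linspace(0.5, 3.0, 11)
losses = [mmd2(simulate(th, np.random.default_rng(2)), observed)
          for th in candidates]
theta_hat = candidates[np.argmin(losses)]
```

With enough paths, the minimizer of the MMD lands at or near the true drift parameter.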

    Semi-Supervised Learning for Sparsely-Labeled Sequential Data: Application to Healthcare Video Processing

    Labeled data is a critical resource for training and evaluating machine learning models. However, many real-life datasets are only partially labeled. We propose a semi-supervised machine learning training strategy to improve event detection performance on sequential data, such as video recordings, when only sparse labels are available, such as event start times without their corresponding end times. Our method uses noisy guesses of the events' end times to train event detection models. Depending on how conservative these guesses are, mislabeled false positives may be introduced into the training set (i.e., negative sequences mislabeled as positives). We further propose a mathematical model for estimating how many inaccurate labels a model is exposed to, based on how noisy the end time guesses are. Finally, we show that neural networks can improve their detection performance by leveraging more training data with less conservative approximations, despite the higher proportion of incorrect labels. We adapt sequential versions of MNIST and CIFAR-10 to empirically evaluate our method, and find that our risk-tolerant strategy outperforms conservative estimates by 12 points of mean average precision for MNIST, and 3.5 points for CIFAR. Then, we leverage the proposed training strategy to tackle a real-life application: processing continuous video recordings of epilepsy patients to improve seizure detection, and show that our method outperforms baseline labeling methods by 10 points of average precision.
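The conservative-versus-aggressive trade-off can be sketched numerically. The example below is not the paper's model; it assumes a hypothetical dataset of events with exponentially distributed true durations, labels all frames up to a guessed duration after each known start time, and measures the fraction of labeled frames that are mislabeled false positives (frames past the true end).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical events: start times are known, true durations are not observed.
n_events = 1000
true_durations = rng.exponential(scale=10.0, size=n_events)

def mislabel_rate(guessed_duration):
    """Fraction of labeled frames that fall after the true event end."""
    extra = np.maximum(guessed_duration - true_durations, 0.0)
    labeled = np.full(n_events, guessed_duration)
    return extra.sum() / labeled.sum()

conservative = mislabel_rate(2.0)   # short guess: few wrong labels, little data
aggressive = mislabel_rate(20.0)    # long guess: more data, more wrong labels
```

A longer guessed duration yields more positive training frames at the cost of a higher mislabel rate, which is the trade-off the abstract's risk-tolerant strategy exploits.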

    Basin structure of optimization based state and parameter estimation

    Most data-based state and parameter estimation methods require suitable initial values or guesses to achieve convergence to the desired solution, which typically is a global minimum of some cost function. Unfortunately, however, other stable solutions (e.g., local minima) may exist and provide suboptimal or even wrong estimates. Here we demonstrate for a 9-dimensional Lorenz-96 model how to characterize the basin size of the global minimum when applying some particular optimization based estimation algorithm. We compare three different strategies for generating suitable initial guesses and we investigate the dependence of the solution on the given trajectory segment (underlying the measured time series). To address the question of how many state variables have to be measured for optimal performance, different types of multivariate time series are considered consisting of 1, 2, or 3 variables. Based on these time series the local observability of state variables and parameters of the Lorenz-96 model is investigated and confirmed using delay coordinates. This result is in good agreement with the observation that correct state and parameter estimation results are obtained if the optimization algorithm is initialized with initial guesses close to the true solution. In contrast, initialization with other exact solutions of the model equations (different from the true solution used to generate the time series) typically fails, i.e., the optimization procedure ends up in local minima different from the true solution. Initialization using random values in a box around the attractor exhibits success rates depending on the number of observables and the available time series (trajectory segment). Comment: 15 pages, 2 figures
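The random-box initialization strategy from the abstract can be sketched on a much simpler problem than Lorenz-96: estimating the basin fraction of the global minimum by drawing random initial guesses from a box and counting how many converge to the true parameter. The toy cost below (fitting the frequency of a sinusoid) is chosen only because it has many local minima; the solver and model are assumptions, not the paper's setup.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 200)
omega_true = 2.0
y = np.sin(omega_true * t)

def cost(omega):
    """Least-squares misfit; highly multimodal in the frequency parameter."""
    return np.mean((np.sin(omega[0] * t) - y) ** 2)

# Draw random initial guesses from a box and record the success rate,
# i.e., the empirical fraction of the box in the global minimum's basin.
inits = rng.uniform(0.5, 5.0, size=100)
hits = 0
for w0 in inits:
    res = minimize(cost, x0=[w0], method="Nelder-Mead")
    if abs(res.x[0] - omega_true) < 1e-2:
        hits += 1
basin_fraction = hits / len(inits)
```

Only initial guesses sufficiently close to the true frequency converge to the global minimum; the remainder are trapped in local minima, so the success rate directly estimates the basin size.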

    Inferring rate coefficients of biochemical reactions from noisy data with KInfer

    Dynamical models of inter- and intra-cellular processes contain the rate constants of the biochemical reactions. These kinetic parameters are often not accessible directly through experiments, but they can be inferred from time-resolved data. Time-resolved data, that is, measurements of reactant concentration at a series of time points, are usually affected by different types of error, whose source can be both experimental and biological. The noise in the input data makes the estimation of the model parameters a very difficult task: if the inference method is not sufficiently robust to the noise, the resulting estimates are not reliable. Therefore "noise-robust" methods that estimate rate constants with the maximum precision and accuracy are needed. In this report we present the probabilistic generative model of parameter inference implemented by the software prototype KInfer, and we show the ability of this tool to estimate the rate coefficients of models of biochemical networks with good accuracy, even from very noisy input data.
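The inference task KInfer addresses can be illustrated in its simplest form: recovering a rate constant from noisy concentration measurements. The sketch below uses plain nonlinear least squares on a hypothetical first-order decay reaction, not KInfer's probabilistic generative model.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

# Hypothetical first-order reaction A -> B with rate constant k:
# the concentration follows C(t) = C0 * exp(-k * t).
k_true, C0 = 0.8, 2.0
t = np.linspace(0, 5, 30)
noisy = C0 * np.exp(-k_true * t) + rng.normal(scale=0.05, size=t.size)

def model(t, c0, k):
    return c0 * np.exp(-k * t)

# Fit both the initial concentration and the rate constant to the noisy data.
popt, _ = curve_fit(model, t, noisy, p0=[1.0, 0.1])
C0_hat, k_hat = popt
```

Even with measurement noise, the least-squares estimate recovers the rate constant closely; KInfer's contribution is to make such estimates robust when the noise is far larger and its structure is more complex.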

    On an adaptive regularization for ill-posed nonlinear systems and its trust-region implementation

    In this paper we address the stable numerical solution of nonlinear ill-posed systems by a trust-region method. We show that an appropriate choice of the trust-region radius gives rise to a procedure that has the potential to approach a solution of the unperturbed system. This regularizing property is shown theoretically and validated numerically. Comment: arXiv admin note: text overlap with arXiv:1410.278
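The setting can be sketched with a generic trust-region least-squares solver; this uses SciPy's off-the-shelf `trf` method on a hypothetical nearly-singular nonlinear system, not the paper's adaptive radius rule. The point of the sketch is only that bounding the step length at each iteration keeps the iterates stable on a noisy, ill-conditioned problem.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)

# Hypothetical ill-conditioned nonlinear system F(x) = 0 with a noisy
# right-hand side: the linear part A is nearly singular.
A = np.array([[1.0, 1.0], [1.0, 1.0001]])
x_true = np.array([1.0, 2.0])
noise = rng.normal(scale=1e-4, size=2)

def F(x):
    return A @ x + 0.1 * x**3 - (A @ x_true + 0.1 * x_true**3) + noise

# 'trf' is a trust-region reflective solver: each step is confined to a
# region where the local model is trusted, which bounds the step size.
sol = least_squares(F, x0=np.zeros(2), method="trf")
residual = np.linalg.norm(F(sol.x))
```

The solver drives the residual to (near) zero, although on an ill-posed system the recovered `sol.x` may still differ noticeably from `x_true`; choosing the trust-region radius so that the iterates approach the *unperturbed* solution is exactly the regularization question the paper studies.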