
    Incremental Sparse GP Regression for Continuous-time Trajectory Estimation & Mapping

    Recent work on simultaneous trajectory estimation and mapping (STEAM) for mobile robots has found success by representing the trajectory as a Gaussian process. Gaussian processes can represent a continuous-time trajectory, elegantly handle asynchronous and sparse measurements, and allow the robot to query the trajectory to recover its estimated position at any time of interest. A major drawback of this approach is that STEAM is formulated as a batch estimation problem. In this paper we provide the critical extensions necessary to transform the existing batch algorithm into an extremely efficient incremental algorithm. In particular, we vastly reduce solution time through efficient variable reordering and incremental sparse updates, which we believe will greatly increase the practicality of Gaussian process methods for robot mapping and localization. Finally, we demonstrate the approach and its advantages on both synthetic and real datasets.

    Comment: 10 pages, 10 figures
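
    The continuous-time querying the abstract describes can be illustrated with ordinary GP regression: condition on sparse, asynchronous position measurements and evaluate the posterior at an arbitrary query time. Below is a minimal sketch; scikit-learn, the RBF kernel, and the toy data are illustrative assumptions, not the paper's sparse incremental solver:

```python
# Minimal sketch: a GP over time lets a robot recover its estimated 1-D
# position at ANY query time, even between sparse, asynchronous samples.
# The paper's method uses a sparse GP prior with incremental updates;
# this batch fit only illustrates the continuous-time querying idea.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

t_meas = np.array([0.0, 0.7, 1.9, 3.2, 4.0]).reshape(-1, 1)  # asynchronous timestamps
x_meas = np.sin(t_meas).ravel() + 0.05 * np.random.randn(5)  # noisy positions

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=0.05**2)
gp.fit(t_meas, x_meas)

# Query the trajectory at a time where no measurement was taken.
mean, std = gp.predict(np.array([[2.5]]), return_std=True)
print(f"x(2.5) ~ {mean[0]:.3f} +/- {std[0]:.3f}")
```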

    Stochastic 2-microlocal analysis

    A lot is known about the Hölder regularity of stochastic processes, in particular in the case of Gaussian processes. Recently, a finer analysis of the local regularity of functions, termed 2-microlocal analysis, has been introduced in a deterministic frame: through the computation of the so-called 2-microlocal frontier, it makes it possible, in particular, to predict the evolution of regularity under the action of (pseudo-)differential operators. In this work, we develop a 2-microlocal analysis for the study of certain stochastic processes. We show that, under fairly general conditions, moments of the increments yield almost sure lower bounds for the 2-microlocal frontier. In the case of Gaussian processes, more precise results can be obtained: the incremental covariance yields the almost sure value of the 2-microlocal frontier. As an application, we obtain new and refined regularity properties of fractional Brownian motion, multifractional Brownian motion, stochastic generalized Weierstrass functions, and Wiener and stable integrals.

    Comment: 35 pages
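
    The role of the incremental covariance can be made concrete on fractional Brownian motion, the abstract's first application. The display below collects standard fBm facts for orientation; it is not the paper's general theorem:

```latex
% Fractional Brownian motion B_H with Hurst index H in (0,1) has covariance
%   E[B_H(t) B_H(s)] = (1/2)( |t|^{2H} + |s|^{2H} - |t-s|^{2H} ),
% so the incremental second moment is |t-s|^{2H}. By Kolmogorov's continuity
% criterion the paths are a.s. locally Hölder of every order below H, and the
% pointwise Hölder exponent (read off the 2-microlocal frontier) equals H.
\[
  \mathbb{E}\bigl[\,|B_H(t) - B_H(s)|^{2}\bigr] = |t-s|^{2H},
  \qquad
  \alpha_{B_H}(t) = H \quad \text{a.s. for every } t .
\]
```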

    Learning in the Wild with Incremental Skeptical Gaussian Processes

    The ability to learn from human supervision is fundamental for personal assistants and other interactive applications of AI. Two central challenges for deploying interactive learners in the wild are the unreliable nature of the supervision and the varying complexity of the prediction task. We address a simple but representative setting, incremental classification in the wild, where the supervision is noisy and the number of classes grows over time. To tackle this task, we propose a redesign of skeptical learning centered around Gaussian Processes (GPs). Skeptical learning is a recent interactive strategy in which, if the machine is sufficiently confident that an example is mislabeled, it asks the annotator to reconsider her feedback; this is often enough to obtain clean supervision. Our redesign, dubbed ISGP, leverages the uncertainty estimates supplied by GPs to better allocate labeling and contradiction queries, especially in the presence of noise. Our experiments on synthetic and real-world data show that, as a result, while the original formulation of skeptical learning produces over-confident models that can fail completely in the wild, ISGP works well at varying levels of noise and as new classes are observed.

    Comment: 7 pages, 3 figures, code: https://gitlab.com/abonte/incremental-skeptical-g
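
    The skeptical decision the abstract describes, contradicting the annotator only when the model is confident enough, can be sketched as a simple rule on GP class probabilities. The classifier, the helper function, and the 0.9 threshold below are illustrative assumptions, not ISGP's actual query-allocation policy:

```python
# Sketch of one skeptical-learning step: if the GP classifier is sufficiently
# confident in a prediction that disagrees with the annotator's label, ask
# the annotator to reconsider instead of silently accepting the label.
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier

def skeptical_step(gp: GaussianProcessClassifier, x, y_annotator, threshold=0.9):
    """Return the label to store, flagging a contradiction query if skeptical."""
    proba = gp.predict_proba(x.reshape(1, -1))[0]
    y_model = gp.classes_[np.argmax(proba)]
    skeptical = proba.max() >= threshold and y_model != y_annotator
    if skeptical:
        print(f"Skeptical: model predicts {y_model} (p={proba.max():.2f}); "
              f"asking annotator to reconsider label {y_annotator}.")
        # ...in a real loop, the (possibly revised) answer would come back here.
    return y_model if skeptical else y_annotator
```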

    Extremes of Independent Gaussian Processes

    For every $n\in\mathbb{N}$, let $X_{1n},\dots,X_{nn}$ be independent copies of a zero-mean Gaussian process $X_n=\{X_n(t),\ t\in T\}$. We describe all processes which can be obtained as limits, as $n\to\infty$, of the process $a_n(M_n-b_n)$, where $M_n(t)=\max_{i=1,\dots,n} X_{in}(t)$ and $a_n, b_n$ are normalizing constants. We also provide an analogous characterization for the limits of the process $a_n L_n$, where $L_n(t)=\min_{i=1,\dots,n} |X_{in}(t)|$.

    Comment: 19 pages
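
    For orientation, the classical scalar case already pins down the normalizing constants: at a single fixed $t$ with i.i.d. standard normal values, the normalized maximum has a Gumbel limit. This is standard extreme-value background, not the paper's functional result:

```latex
% Classical fact for i.i.d. N(0,1) random variables: with
%   a_n = \sqrt{2\log n},
%   b_n = \sqrt{2\log n} - \frac{\log\log n + \log(4\pi)}{2\sqrt{2\log n}},
% the normalized maximum M_n converges in distribution to the Gumbel law.
\[
  \mathbb{P}\bigl( a_n (M_n - b_n) \le x \bigr)
  \;\longrightarrow\; \exp\!\bigl(-e^{-x}\bigr),
  \qquad n \to \infty .
\]
```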

    Mutual Information and Minimum Mean-square Error in Gaussian Channels

    This paper deals with arbitrarily distributed finite-power input signals observed through an additive Gaussian noise channel. It presents a new formula connecting the input-output mutual information and the minimum mean-square error (MMSE) achievable by optimal estimation of the input given the output. That is, the derivative of the mutual information (in nats) with respect to the signal-to-noise ratio (SNR) is equal to half the MMSE, regardless of the input statistics. This relationship holds for both scalar and vector signals, as well as for discrete-time and continuous-time noncausal MMSE estimation. This fundamental information-theoretic result has an unexpected consequence in continuous-time nonlinear estimation: for any input signal with finite power, the causal filtering MMSE achieved at SNR is equal to the average of the noncausal smoothing MMSE achieved with a channel whose signal-to-noise ratio is uniformly distributed between 0 and SNR.
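
    The derivative relationship can be sanity-checked on the standard Gaussian-input example, where both sides are available in closed form (a textbook check for orientation, not the paper's general proof):

```latex
% I-MMSE relation: dI/dsnr = (1/2) mmse(snr), in nats.
% Check with a standard Gaussian input X ~ N(0,1) over Y = sqrt(snr) X + N:
%   I(snr)    = (1/2) ln(1 + snr),
%   mmse(snr) = 1 / (1 + snr),
% and indeed
\[
  \frac{d}{d\,\mathrm{snr}} \; \frac{1}{2}\ln(1+\mathrm{snr})
  \;=\; \frac{1}{2(1+\mathrm{snr})}
  \;=\; \frac{1}{2}\,\mathrm{mmse}(\mathrm{snr}).
\]
```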

    A general approach to small deviation via concentration of measures

    We provide a general approach to obtain upper bounds for small deviations $\mathbb{P}(\Vert y \Vert \le \epsilon)$ in different norms, namely the supremum and $\beta$-Hölder norms. The large class of processes $y$ under consideration takes the form $y_t = X_t + \int_0^t a_s \, ds$, where $X$ and $a$ are two possibly dependent stochastic processes. Our approach provides an upper bound for small deviations whenever upper bounds for the \textit{concentration of measures} of $L^p$-norms of random vectors built from increments of the process $X$ and \textit{large deviation} estimates for the process $a$ are available. Using our method, among others, we obtain the optimal rates of small deviations in supremum and $\beta$-Hölder norms for fractional Brownian motion with Hurst parameter $H \le \frac{1}{2}$. As an application, we discuss the usefulness of our upper bounds for small deviations in the pathwise stochastic integral representation of random variables, motivated by the hedging problem in mathematical finance.
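
    The fractional Brownian motion case gives a concrete sense of what "optimal rates" means here; the sup-norm rate below is the classical benchmark, stated for orientation rather than as the paper's new bound:

```latex
% Classical small-deviation rate for fBm with Hurst parameter H in sup-norm:
% the probability that the whole path stays in an epsilon-ball decays like
% exp(-c * epsilon^{-1/H}), i.e., up to constants as epsilon -> 0,
\[
  -\log \mathbb{P}\Bigl( \sup_{0 \le t \le 1} |B_H(t)| \le \epsilon \Bigr)
  \;\asymp\; \epsilon^{-1/H},
  \qquad \epsilon \downarrow 0 .
\]
```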