
    Transitioning with confidence during contact/non-contact scenarios

    In this work, we propose a dynamical-system-based strategy for establishing stable contact with convex-shaped surfaces during non-contact/contact scenarios. A contact is called stable if the impact occurs only once and the robot remains in contact with the surface after the impact. Realizing a stable contact is particularly challenging, as the impact leaves only a very short time window for the robot to react properly to the impact force. Our strategy consists of locally modulating the robot's motion so that it aligns with the surface before making contact. We show theoretically and empirically that, using this modulation framework, the contact is stable and the robot stays in contact with the surface after the first impact.
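    A minimal sketch of the idea described above, assuming a planar point robot, a circular convex surface, a linear nominal dynamical system, and an illustrative distance-based scaling law (none of which are taken from the paper): the velocity component along the surface normal is shrunk as the robot approaches, so the motion aligns with the surface before contact.

```python
# Illustrative sketch of modulating a dynamical system (DS) near a convex surface.
# The circular surface, the linear nominal DS, and the scaling law are assumptions,
# not the paper's exact formulation.
import numpy as np

CENTER = np.array([0.0, 0.0])      # assumed circular (convex) surface
RADIUS = 1.0
ATTRACTOR = np.array([0.0, 1.0])   # target point lying on the surface

def nominal_ds(x):
    """Simple linear DS converging to the attractor."""
    return -(x - ATTRACTOR)

def modulated_ds(x):
    """Shrink the velocity component along the surface normal as the distance
    to the surface shrinks, so the approach becomes tangential near contact."""
    r = x - CENTER
    dist = np.linalg.norm(r) - RADIUS          # signed distance to the surface
    n = r / np.linalg.norm(r)                  # outward unit normal
    t = np.array([-n[1], n[0]])                # unit tangent
    E = np.column_stack([n, t])                # local basis at the closest point
    gamma = np.clip(dist, 0.0, 1.0)            # illustrative scaling: 0 at contact, 1 far away
    M = E @ np.diag([gamma, 1.0]) @ E.T        # modulation matrix: damps only the normal part
    return M @ nominal_ds(x)

# Example: just outside the surface, the modulated velocity is almost purely tangential.
x = np.array([0.6, 0.9])
print(modulated_ds(x))
```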

    Asymptotic properties of the maximum likelihood estimator in autoregressive models with Markov regime

    An autoregressive process with Markov regime is an autoregressive process for which the regression function at each time point is given by a nonobservable Markov chain. In this paper we consider the asymptotic properties of the maximum likelihood estimator in a possibly nonstationary process of this kind for which the hidden state space is compact but not necessarily finite. Consistency and asymptotic normality are shown to follow from uniform exponential forgetting of the initial distribution for the hidden Markov chain conditional on the observations. Published at http://dx.doi.org/10.1214/009053604000000021 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
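    As a rough illustration of the estimator studied here, the sketch below evaluates the likelihood of the simplest special case, a Gaussian AR(1) process whose coefficient switches according to a 2-state hidden Markov chain, via the forward recursion conditioning on the observations, and maximizes it numerically. The 2-state restriction, the parameterization, and the optimizer are assumptions for illustration only; the paper treats a general compact hidden state space.

```python
# Hedged sketch: maximum likelihood for a 2-regime Gaussian AR(1) with Markov regime.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def log_likelihood(params, y):
    """Forward-algorithm log-likelihood of a 2-regime Gaussian AR(1) model."""
    a0, a1, sigma, p_stay = params
    if sigma <= 0 or not (0.0 < p_stay < 1.0):
        return -np.inf
    P = np.array([[p_stay, 1 - p_stay],
                  [1 - p_stay, p_stay]])      # regime transition matrix
    alpha = np.array([0.5, 0.5])              # initial regime distribution
    ll = 0.0
    for t in range(1, len(y)):
        # regime-specific one-step densities p(y_t | y_{t-1}, regime)
        dens = np.array([norm.pdf(y[t], a0 * y[t - 1], sigma),
                         norm.pdf(y[t], a1 * y[t - 1], sigma)])
        alpha = (alpha @ P) * dens            # predict the regime, then weight by the data
        c = alpha.sum()
        ll += np.log(c)
        alpha /= c                            # filtering distribution of the regime
    return ll

# Simulate from the model and recover the parameters numerically (illustrative only).
rng = np.random.default_rng(0)
true = (0.2, 0.9, 0.5, 0.95)
y, s = [0.0], 0
for _ in range(500):
    s = s if rng.random() < true[3] else 1 - s
    y.append((true[0], true[1])[s] * y[-1] + true[2] * rng.standard_normal())
y = np.array(y)

res = minimize(lambda p: -log_likelihood(p, y), x0=[0.0, 0.5, 1.0, 0.8],
               method="Nelder-Mead")
print(res.x)  # close to the true values for a long series (regimes may come out relabelled)
```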

    Stochastic Sampling Algorithms for State Estimation of Jump Markov Linear Systems

    Jump Markov linear systems are linear systems whose parameters evolve with time according to a finite-state Markov chain. Given a set of observations, our aim is to estimate the states of the finite-state Markov chain and the continuous (in space) states of the linear system. The computational cost of computing conditional mean or maximum a posteriori (MAP) state estimates of the Markov chain or of the state of the jump Markov linear system grows exponentially with the number of observations.
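    The sketch below shows one sampling-based workaround for that exponential cost, purely for illustration: a Rao-Blackwellised particle filter that samples the discrete modes from the Markov chain and handles the continuous state exactly with a per-particle Kalman filter. It is not the paper's own sampler, and all model matrices are assumptions.

```python
# Hedged sketch: Rao-Blackwellised particle filter for a scalar jump Markov linear system.
import numpy as np

rng = np.random.default_rng(1)
P = np.array([[0.95, 0.05], [0.10, 0.90]])   # mode transition matrix (assumed)
A = [np.array([[0.9]]), np.array([[0.3]])]   # per-mode dynamics (assumed)
Q, R = np.array([[0.1]]), np.array([[0.5]])  # process / measurement noise (assumed)
C = np.array([[1.0]])

def simulate(T=100):
    """Generate observations from the assumed jump Markov linear system."""
    x, s, ys = np.zeros(1), 0, []
    for _ in range(T):
        s = rng.choice(2, p=P[s])
        x = A[s] @ x + rng.multivariate_normal(np.zeros(1), Q)
        ys.append(C @ x + rng.multivariate_normal(np.zeros(1), R))
    return np.array(ys)

def rbpf(ys, n_particles=200):
    """Sample modes per particle; run a Kalman filter for the continuous state."""
    modes = np.zeros(n_particles, dtype=int)
    means = np.zeros((n_particles, 1))
    covs = np.tile(np.eye(1), (n_particles, 1, 1))
    est = []
    for y in ys:
        w = np.zeros(n_particles)
        for i in range(n_particles):
            modes[i] = rng.choice(2, p=P[modes[i]])            # sample the discrete mode
            m = A[modes[i]] @ means[i]                         # Kalman predict given the mode
            S = A[modes[i]] @ covs[i] @ A[modes[i]].T + Q
            Sy = C @ S @ C.T + R
            K = S @ C.T @ np.linalg.inv(Sy)
            innov = y - C @ m
            means[i] = m + K @ innov                           # Kalman update
            covs[i] = (np.eye(1) - K @ C) @ S
            w[i] = np.exp(-0.5 * innov @ np.linalg.inv(Sy) @ innov) \
                   / np.sqrt(np.linalg.det(2 * np.pi * Sy))    # particle weight
        w /= w.sum()
        est.append((w[:, None] * means).sum(axis=0))           # conditional mean estimate
        idx = rng.choice(n_particles, n_particles, p=w)        # resample
        modes, means, covs = modes[idx], means[idx], covs[idx]
    return np.array(est)

print(rbpf(simulate())[:5])
```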

    Robust learning of probabilistic hybrid models

    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 2008. Includes bibliographical references (p. 125-127).

    Autonomy, in the fields of control, estimation, and diagnosis, has advanced immensely, as seen in spacecraft that navigate toward pinpoint landings or speech recognition enabled on hand-held devices. Arguably the most important step toward controlling and improving a system is to understand that system. For this reason, accurate models are essential for continued advances in the field of autonomy. Hybrid stochastic models, such as JMLS and LPHA, can accurately represent a broad scope of problems. The goal of this thesis is to develop a robust method for learning accurate hybrid models automatically from data. A robust method should learn a set of model parameters, but should also avoid convergence to locally optimal solutions that reduce accuracy, and should be less sensitive to sparse or poor-quality observation data. These three goals are the focus of this thesis.

    We present the HML-LPHA algorithm, which uses approximate EM to learn maximum-likelihood model parameters of LPHA given a sequence of control inputs {u}_0^T and outputs {y}_1^{T+1}. We implement the algorithm in a scenario that simulates the mechanical failure of a wheel on the MER Spirit rover and demonstrate empirical convergence of the algorithm.

    Local convergence is a limitation of many optimization approaches for multimodal functions, including EM; for model learning, this can mean a severe compromise in accuracy. We present the kMeans-EM algorithm, which iteratively learns the locations and shapes of explored local maxima of the model likelihood function and focuses the search away from these areas of the solution space, toward undiscovered maxima that are promising a priori. We find that kMeans-EM shows iteratively increasing improvement over a random-restarts method, both in learning sets of model parameters with higher likelihood values and in reducing the Euclidean distance to the true set of model parameters.

    Lastly, the AHML-LPHA algorithm is an active hybrid model learning approach that augments sparse and/or very noisy training data with limited queries of the discrete state. We use an active approach for adding data to the training set, querying at the points that yield the greatest reduction in uncertainty of the distribution over hybrid state trajectories. Empirical evidence indicates that querying only 6% of the time reduces continuous-state squared error and the MAP mode-estimate error of the discrete state. We also find that when the passive learner, HML-LPHA, diverges due to poor initialization or training data, the AHML-LPHA algorithm is capable of convergence; at times a single query allows for convergence, demonstrating a vast improvement in learning capacity with a very limited amount of data augmentation.

    by Stephanie Gil. S.M.
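    A toy sketch of the restart idea behind kMeans-EM: remember the local maxima already found and start the next optimization far away from them. A simple multimodal function and a generic local optimizer stand in for the model likelihood and for EM, and a plain distance rule replaces the thesis's clustering of explored maxima, so everything below is an illustrative assumption rather than the thesis's algorithm.

```python
# Hedged sketch: restart a local optimizer away from local maxima found so far.
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(theta):
    """Toy multimodal surrogate for a model likelihood over 2 parameters."""
    peaks = np.array([[0.0, 0.0], [3.0, 3.0], [-3.0, 2.0]])
    heights = np.array([1.0, 2.0, 1.5])
    vals = heights * np.exp(-np.sum((theta - peaks) ** 2, axis=1))
    return -np.log(vals.sum() + 1e-12)

rng = np.random.default_rng(2)
found = []                                        # local maxima discovered so far
for restart in range(5):
    if not found:
        start = rng.uniform(-5, 5, size=2)
    else:
        # propose candidate starts and keep the one farthest from all known maxima
        cands = rng.uniform(-5, 5, size=(200, 2))
        d = np.linalg.norm(cands[:, None, :] - np.array(found)[None, :, :], axis=2)
        start = cands[np.argmax(d.min(axis=1))]
    res = minimize(neg_log_likelihood, start, method="Nelder-Mead")
    found.append(res.x)
    print(f"restart {restart}: maximum near {np.round(res.x, 2)}, value {-res.fun:.3f}")
```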