
    Robotic navigation of smooth contours

    Thesis (S.B.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2007. Includes bibliographical references (leaf 10). The goal of this work is to develop a method for robotic navigation along smooth contours determined by the robot's current and desired positions and orientations. Efficient trajectory generation is an essential capability for many autonomous mobile robots operating in settings such as military, medical, and home environments. In this thesis, we propose a method based on fitting a spline curve that passes from the robot's initial position and orientation to a goal position and orientation. The spline is continually recomputed as the robot moves through space, yielding a simple and efficient method for robot navigation. The method has been implemented and tested in simulation using Matlab, and good performance has been demonstrated. Future work should evaluate the method on a real robot and introduce obstacle detection and avoidance. by Justin C. Moore. S.B.
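    As a rough illustration of the approach described in this abstract, a spline can be fit from the robot's current pose to the goal pose and refit at every step. The cubic Hermite form, the tangent-scaling parameter, and all names below are assumptions for the sketch, not details taken from the thesis.

        import numpy as np

        def hermite_path(p0, theta0, p1, theta1, scale=1.0, n=50):
            """Cubic Hermite spline from pose (p0, theta0) to pose (p1, theta1).

            Headings are encoded as tangent vectors; 'scale' is an assumed
            tuning knob controlling how strongly the path bends to honour
            the start and goal orientations.
            """
            p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
            d = np.linalg.norm(p1 - p0) * scale
            m0 = d * np.array([np.cos(theta0), np.sin(theta0)])  # start tangent
            m1 = d * np.array([np.cos(theta1), np.sin(theta1)])  # goal tangent
            t = np.linspace(0.0, 1.0, n)[:, None]
            h00 = 2*t**3 - 3*t**2 + 1
            h10 = t**3 - 2*t**2 + t
            h01 = -2*t**3 + 3*t**2
            h11 = t**3 - t**2
            return h00*p0 + h10*m0 + h01*p1 + h11*m1  # (n, 2) waypoints

        # Replanning loop idea: refit the spline from the robot's latest pose.
        pose = ((0.0, 0.0), 0.0)          # current (position, heading)
        goal = ((4.0, 2.0), np.pi / 2)    # desired (position, heading)
        path = hermite_path(pose[0], pose[1], goal[0], goal[1])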

    Scaling up a Boltzmann machine model of hippocampus with visual features for mobile robots

    Previous papers [4], [5] have described a detailed mapping between biological hippocampal navigation and a temporal restricted Boltzmann machine [20] with unitary coherent particle filtering. These models focused on the biological structures and used simplified microworlds in their implemented examples. As a first step in scaling the model up towards practical bio-inspired robotic navigation, we present new results with the model applied to real-world visual data, though still limited to a discretized configuration space. To extract useful features from the visual input we apply the SURF transform followed by a new lamellae-based winner-take-all Dentate Gyrus model. This new visual processing stream allows the navigation system to function without the simplifying data assumptions required by the previous models, and brings the hippocampal model closer to being a practical robotic navigation system.
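    A minimal sketch of a winner-take-all sparsification step of the kind described above: within each group of descriptor dimensions (a stand-in for a "lamella"), only the strongest unit stays active. The grouping scheme and every name below are illustrative assumptions, not the paper's implementation.

        import numpy as np

        def winner_take_all(features, n_groups):
            """Sparsify a feature vector: keep only the maximum response
            within each group and zero out the rest."""
            groups = np.array_split(np.asarray(features, float), n_groups)
            out = []
            for g in groups:
                mask = np.zeros_like(g)
                mask[np.argmax(g)] = g.max()   # single winner per group
                out.append(mask)
            return np.concatenate(out)

        # Example: a 64-D SURF-like descriptor reduced to 8 active units.
        desc = np.random.rand(64)
        sparse_code = winner_take_all(desc, n_groups=8)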

    Simultaneous localisation and mapping: A stereo vision based approach

    With limited dynamic range and poor noise performance, cameras still pose considerable challenges when used as range sensors for robotic navigation, especially in implementations of Simultaneous Localisation and Mapping (SLAM) with sparse features. This paper presents a combination of methods for solving the SLAM problem in a constricted indoor environment using small-baseline stereo vision. The main contributions include a feature selection and tracking algorithm, a stereo noise filter, a robust feature validation algorithm, and a multiple-hypotheses adaptive window positioning method for 'closing the loop'. These methods take a novel approach in that information from the image processing and robotic navigation domains is used in tandem to augment each other. Experimental results, including a real-time implementation in an office-like environment, are also presented. © 2006 IEEE.
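    For background on the small-baseline stereo setting, the standard pinhole depth-from-disparity relation below (general stereo geometry, not a contribution of this paper) shows why a small baseline amplifies disparity noise, which is the problem the paper's noise filter and feature validation address. The function and variable names are illustrative.

        def stereo_depth(disparity_px, focal_px, baseline_m):
            """Pinhole stereo: depth Z = f * B / d for a matched feature pair.
            A small baseline B makes Z very sensitive to disparity noise."""
            if disparity_px <= 0:
                raise ValueError("disparity must be positive for a valid match")
            return focal_px * baseline_m / disparity_px

        # Example: 700 px focal length, 9 cm baseline, 12 px disparity -> 5.25 m.
        z = stereo_depth(12.0, 700.0, 0.09)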

    Hierarchical Deep Learning Architecture For 10K Objects Classification

    The evolution of visual object recognition architectures based on Convolutional Neural Networks and Convolutional Deep Belief Networks has revolutionized artificial vision science. These architectures extract and learn real-world hierarchical visual features using supervised and unsupervised learning approaches respectively, yet neither approach scales realistically to recognition of a very large number of objects, on the order of 10K. We propose a two-level hierarchical deep learning architecture, inspired by the divide-and-conquer principle, that decomposes the large-scale recognition task into root-level and leaf-level model architectures. Each root-level and leaf-level model is trained exclusively, providing better results than are possible with any single-level deep learning architecture prevalent today. The proposed architecture classifies objects in two steps. In the first step, the root-level model classifies the object into a high-level category. In the second step, the leaf-level recognition model for that category is selected from among all the leaf models and is presented with the same input image, which it classifies into a specific category. We also propose a blend of leaf-level models trained with either supervised or unsupervised learning; unsupervised learning is suitable whenever labelled data is scarce for a particular leaf-level model. Training of the leaf-level models is in progress: 25 of the 47 leaf-level models have been trained so far, with a best-case top-5 error rate of 3.2% on the validation set for those models. We also demonstrate that the validation error of the leaf-level models saturates towards this accuracy as the number of epochs is increased beyond sixty. Comment: As appeared in the proceedings of CS & IT 2015 - Second International Conference on Computer Science & Engineering (CSEN 2015).
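    A minimal sketch of the two-step root/leaf inference described above, assuming generic callable classifiers; the names and stub models are placeholders for illustration, not the authors' code.

        from typing import Callable, Dict

        def hierarchical_classify(
            image,
            root_model: Callable[[object], str],
            leaf_models: Dict[str, Callable[[object], str]],
        ) -> str:
            """Step 1: the root model picks a coarse category.
            Step 2: the leaf model for that category sees the same image
            and returns the fine-grained label."""
            coarse = root_model(image)      # e.g. "vehicle"
            leaf = leaf_models[coarse]      # pick the matching leaf model
            return leaf(image)              # e.g. "pickup truck"

        # Usage with stubs standing in for trained CNN / CDBN classifiers.
        root = lambda img: "vehicle"
        leaves = {"vehicle": lambda img: "pickup truck"}
        label = hierarchical_classify(None, root, leaves)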