11 research outputs found

    Monocular navigation for long-term autonomy

    Get PDF
    We present a reliable and robust monocular navigation system for an autonomous vehicle. The proposed method is computationally efficient, requires only off-the-shelf equipment and does not need any additional infrastructure such as radio beacons or GPS. Contrary to traditional localization algorithms, which use advanced mathematical methods to determine vehicle position, our method takes a more practical approach: an image-feature-based monocular vision technique determines only the heading of the vehicle, while the vehicle's odometry is used to estimate the distance traveled. We present a mathematical proof and experimental evidence indicating that the localization error of a robot guided by this principle is bounded. The experiments demonstrate that the method can cope with variable illumination, lighting deficiency and both short- and long-term environment changes. This makes the method especially suitable for deployment in scenarios that require long-term autonomous operation.
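The bounded-error claim above can be illustrated with a toy simulation (all names and parameters below are hypothetical, not the authors' code): a robot whose heading correction steers it back toward the taught path behaves like a contraction on its lateral offset, so odometry noise perturbs the error but never lets it diverge.

```python
import random

def heading_only_traversal(steps=2000, step_len=0.05, gain=0.8,
                           noise=0.01, seed=0):
    """Simulate the lateral error of a robot that only corrects heading.

    The taught path is the x-axis; each step, the camera-based heading
    correction shrinks the lateral offset proportionally, while odometry
    noise perturbs the motion.  Returns the maximum absolute lateral
    error observed over the whole run.
    """
    rng = random.Random(seed)
    y = 0.5                      # initial lateral displacement [m]
    max_err = abs(y)
    for _ in range(steps):
        # contraction toward the path plus small odometric disturbance
        y += -gain * y * step_len + rng.gauss(0.0, noise) * step_len
        max_err = max(max_err, abs(y))
    return max_err
```

With a positive gain the error never grows beyond roughly its initial value, which is the qualitative content of the boundedness proof.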

    Navigation without localisation: reliable teach and repeat based on the convergence theorem

    Full text link
    We present a novel concept for teach-and-repeat visual navigation. The proposed concept is based on a mathematical model, which indicates that in teach-and-repeat navigation scenarios, mobile robots do not need to perform explicit localisation. Instead, a mobile robot that repeats a previously taught path can simply `replay' the learned velocities, using its camera information only to correct its heading relative to the intended path. To support our claim, we establish a position error model of a robot that traverses a taught path by only correcting its heading. Then, we outline a mathematical proof which shows that this position error does not diverge over time. Based on the insights from the model, we present a simple monocular teach-and-repeat navigation method. The method is computationally efficient, it does not require camera calibration, and it can learn and autonomously traverse arbitrarily-shaped paths. In a series of experiments, we demonstrate that the method can reliably guide mobile robots in realistic indoor and outdoor conditions, and can cope with imperfect odometry, landmark deficiency, illumination variations and naturally-occurring environment changes. Furthermore, we provide the navigation system and the datasets gathered at http://www.github.com/gestom/stroll_bearnav.
    Comment: The paper will be presented at IROS 2018 in Madrid.
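The replay idea described above can be sketched as a single control step (a simplified illustration under assumed names; the actual method lives in the linked stroll_bearnav repository): the taught map stores velocities and expected landmark bearings indexed by odometric distance, and the repeat phase replays the velocity while steering by the bearing difference.

```python
from dataclasses import dataclass

@dataclass
class MapPoint:
    distance: float         # odometric distance along the taught path [m]
    velocity: float         # forward velocity recorded while teaching [m/s]
    feature_bearing: float  # expected horizontal bearing of a landmark [rad]

def repeat_step(map_points, travelled, observed_bearing, k=0.5):
    """One control step of the repeat phase: replay the taught velocity
    and steer by the difference between the expected and the currently
    observed landmark bearing -- no explicit localisation involved."""
    # pick the map point closest to the current odometric distance
    ref = min(map_points, key=lambda p: abs(p.distance - travelled))
    forward = ref.velocity
    turn = k * (ref.feature_bearing - observed_bearing)
    return forward, turn
```

Note that the camera contributes only to the `turn` command; the forward motion comes entirely from the recording, which is exactly the "navigation without localisation" premise.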

    Review and classification of vision-based localisation techniques in unknown environments

    Get PDF
    This study presents a review of the state of the art and a novel classification of current vision-based localisation techniques in unknown environments. Thanks to progress in computer vision, vision-based systems can now be considered promising navigation means that complement traditional navigation sensors such as global navigation satellite systems (GNSSs) and inertial navigation systems. This study reviews techniques employing a camera as a localisation sensor, provides a classification of these techniques and introduces schemes that exploit video information within a multi-sensor system. A general model is needed to better compare existing techniques, in order to decide which approach is appropriate and where the axes of innovation lie. Moreover, existing classifications only consider vision as a standalone tool and do not treat video as one sensor among others. The focus is on scenarios where no a priori knowledge of the environment is provided; these are the most challenging, since the system has to cope with objects as they appear in the scene without any prior information about their expected position.

    Vision-Based Path Following Without Calibration

    Get PDF

    Large scale vision-based navigation without an accurate global reconstruction

    Get PDF
    Autonomous cars will likely play an important role in the future. A vision system designed to support outdoor navigation for such vehicles has to deal with large dynamic environments, changing imaging conditions, and temporary occlusions by other moving objects. This paper presents a novel appearance-based navigation framework relying on a single perspective vision sensor, aimed at resolving the above issues. The solution is based on a hierarchical environment representation created during a teaching stage, when the robot is controlled by a human operator. At the top level, the representation contains a graph of key-images with extracted 2D features enabling robust navigation by visual servoing. The information stored at the bottom level makes it possible to efficiently predict the locations of features which are currently not visible and, if needed, to (re-)start their tracking. The outstanding property of the proposed framework is that it enables robust and scalable navigation without requiring a globally consistent map, even in interconnected environments. This result has been confirmed by realistic off-line experiments and successful real-time navigation trials in public urban areas.
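The top level of such a hierarchical representation can be sketched as a plain graph of key-images (a minimal illustration with invented names, not the paper's implementation): nodes hold the 2D features of each key-image, edges record traversability, and routing needs only graph search, never a globally consistent metric map.

```python
from collections import deque

class KeyImageGraph:
    """Top level of a hierarchical appearance map: key-images as nodes,
    traversability between key-images as undirected edges."""

    def __init__(self):
        self.features = {}  # key-image id -> list of 2D feature descriptors
        self.edges = {}     # key-image id -> set of adjacent key-image ids

    def add_key_image(self, node_id, features):
        self.features[node_id] = features
        self.edges.setdefault(node_id, set())

    def connect(self, a, b):
        self.edges[a].add(b)
        self.edges[b].add(a)

    def route(self, start, goal):
        """Breadth-first search over key-images: returns a shortest
        sequence of key-images to traverse, or None if unreachable."""
        prev = {start: None}
        queue = deque([start])
        while queue:
            node = queue.popleft()
            if node == goal:
                path = []
                while node is not None:
                    path.append(node)
                    node = prev[node]
                return path[::-1]
            for nxt in self.edges[node]:
                if nxt not in prev:
                    prev[nxt] = node
                    queue.append(nxt)
        return None
```

Because only adjacency is stored, loops and interconnections in the environment cost nothing extra, which is what makes the approach scale without global reconstruction.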

    Energy-Efficient Digital Signal Processing Hardware Design.

    Full text link
    As CMOS technology has developed considerably over the last few decades, many SoCs have been implemented across different application areas thanks to reduced area and power consumption. Digital signal processing (DSP) algorithms are frequently employed in these systems to achieve more accurate operation or faster computation. However, CMOS technology scaling has recently started to slow down, and relatively large systems consume too much power to rely on scaling alone, while system power budgets such as battery capacity improve only slowly. In addition, there is an increasing need for miniaturized computing systems, including sensor nodes, that can accomplish similar operations with a significantly smaller power budget. Voltage scaling is one of the most promising power-saving techniques because of its quadratic effect on switching power, making it a necessary feature even for high-end processors. However, to achieve the maximum possible energy efficiency, systems should operate in near- or sub-threshold regimes, where leakage takes a significant portion of the power. In this dissertation, a few key energy-aware design approaches are described. Considering the prominent leakage and larger PVT variability at low operating voltages, multi-level energy-saving techniques are applied to key building blocks in DSP applications: architecture study, algorithm-architecture co-optimization, and robust yet low-power memory design. Finally, the described approaches are applied to design examples including a visual navigation accelerator, an ultra-low-power biomedical SoC and a face detection/recognition processor, achieving power savings of 2 to 100 times over the state of the art.
    PhD thesis, Electrical Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/110496/1/djeon_1.pd
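The "quadratic switching power reduction" mentioned above follows from the standard dynamic-power expression for CMOS logic, P = a·C·V²·f; a minimal numerical illustration (the capacitance and frequency values below are made up for the example):

```python
def dynamic_power(c_eff, vdd, freq, activity=1.0):
    """Dynamic switching power of CMOS logic: P = a * C * Vdd^2 * f."""
    return activity * c_eff * vdd ** 2 * freq

# Example numbers (illustrative only): 1 nF effective switched
# capacitance at 500 MHz, comparing nominal vs. halved supply voltage.
nominal = dynamic_power(c_eff=1e-9, vdd=1.0, freq=500e6)
scaled = dynamic_power(c_eff=1e-9, vdd=0.5, freq=500e6)
# halving Vdd cuts switching power to one quarter
```

The catch, as the abstract notes, is that at near- and sub-threshold voltages this formula no longer dominates: leakage power, which it does not model, becomes a significant fraction of the total.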

    Study of Future On-board GNSS/INS Hybridization Architectures

    Get PDF
    The quick development and densification of air traffic have led to improved approach and landing operations using more flexible flight paths and more demanding minima. Most aircraft navigation operations are currently supported by GNSS augmented with GBAS, SBAS and ABAS; GBAS and SBAS support operations down to precision approaches. However, these augmentations require an expensive network of reference receivers and constant real-time broadcasts to airborne users. To overcome these constraints, the ABAS system integrates on-board information provided by an inertial navigation system (INS), thereby enhancing navigation performance. In that scheme, the INS is coupled with a GPS receiver in a GPS/baro-INS hybridization solution already deployed on current commercial aircraft. This solution reaches better performance in terms of accuracy, integrity, availability and continuity than either system alone. However, the most stringent requirements for precision approaches or automatic landings cannot yet be fully met by current hybridization solutions. The main idea of this PhD study is to extend the hybridization process by including other sensors or systems, whether or not they are currently available on board, and to assess the performance reached by this global hybridization filter. The goal is to provide most of the navigation parameters for the most critical operations with the level of performance required by ICAO. The operations targeted during the study were precision approaches (in particular CAT III approaches) and roll-out on the runway.
    Particular attention was paid to video sensors during the thesis. Video-based navigation is an increasingly used, fully autonomous navigation option based only on sensors that measure the motion of the vehicle and observe the surrounding scenery. From compensating for the loss or degradation of a navigation system to improving the existing navigation solution during the most critical operations, the benefits of using video are numerous.
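One elementary building block of any such GNSS/INS hybridization is the measurement update that fuses an INS-propagated position with a GNSS fix. The scalar Kalman update below is a deliberately simplified, one-dimensional illustration (function and variable names are invented), not the multi-sensor filter developed in the thesis:

```python
def kalman_fuse(position_pred, var_pred, gnss_meas, gnss_var):
    """One scalar Kalman update: fuse an INS-propagated position
    (with its predicted variance) with a GNSS position fix."""
    k = var_pred / (var_pred + gnss_var)            # Kalman gain
    position = position_pred + k * (gnss_meas - position_pred)
    variance = (1.0 - k) * var_pred                 # fused uncertainty
    return position, variance
```

The fused estimate always lies between the prediction and the measurement, weighted by their variances, and the fused variance is smaller than either input; this variance reduction is what lets the hybrid solution outperform GNSS or INS alone.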

    Design and Development of Robotic Part Assembly System under Vision Guidance

    Get PDF
    Robots are widely used for part assembly across manufacturing industries to attain high productivity through automation. The automated mechanical part assembly system contributes a major share of the production process. An appropriate vision-guided robotic assembly system further minimizes the lead time and improves the quality of the end product through suitable object detection methods and robot control strategies. An approach is presented for the development of a robotic part assembly system with the aid of an industrial vision system. This approach is accomplished in three phases. The first phase of the research focuses on feature extraction and object detection techniques. A hybrid edge detection method is developed by combining fuzzy inference rules and the wavelet transformation. The performance of this edge detector is quantitatively analysed and compared with widely used edge detectors such as Canny, Sobel, Prewitt, mathematical-morphology-based, Robert, Laplacian of Gaussian and wavelet-transformation-based detectors. A comparative study is performed for choosing a suitable corner detection method; the corner detection techniques considered are curvature scale space, Wang-Brady and the Harris method. The successful implementation of a vision-guided robotic system depends on the system configuration, such as eye-in-hand or eye-to-hand. In these configurations, the captured images of the parts may be corrupted by geometric transformations such as scaling, rotation, translation and blurring due to camera or robot motion. Considering this issue, an image reconstruction method is proposed using orthogonal Zernike moment invariants. The suggested method uses a selection process over moment orders to reconstruct the affected image, which keeps the object detection method efficient. In the second phase, the proposed system is developed by integrating the vision system and the robot system.
    The proposed feature extraction and object detection methods are tested and found efficient for the purpose. In the third phase, robot navigation based on visual feedback is proposed. In the control scheme, general moment invariants, Legendre moments and Zernike moment invariants are used. The best combination of visual features is selected by measuring the Hamming distance between all possible combinations of visual features; this identifies the combination that makes the image-based visual servoing control efficient. An indirect method is employed to determine the Legendre and Zernike moment invariants, which are used because they are robust to noise. The control laws, based on these three global image features, perform efficiently in navigating the robot in the desired environment.
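The control scheme described above is an instance of image-based visual servoing, whose core is a proportional law driving the measured image features s toward their desired values s*. The sketch below shows that law in its simplest form (a generic textbook formulation with invented names, not the thesis's moment-based controller, which additionally maps the feature error through an interaction matrix):

```python
def ibvs_control(features, desired, gain=0.5):
    """Proportional image-based visual servoing law: command each
    feature-space velocity as v = -lambda * (s - s*), so the feature
    error decays exponentially toward zero."""
    return [-gain * (s - sd) for s, sd in zip(features, desired)]
```

When the features (here, the global moment invariants of the observed image) match their desired values, the commanded velocity is zero and the robot holds its pose; any error produces a corrective command proportional to it.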