
    Recent Developments in Monocular SLAM within the HRI Framework

    This chapter describes an approach to improving the feature initialization process in delayed inverse-depth feature initialization for monocular Simultaneous Localisation and Mapping (SLAM), using data provided by a robot’s camera plus an additional monocular sensor mounted in the headwear of the human member of a human-robot collaborative exploratory team. The robot and the human carry a set of sensors that, once combined, provide the data required to localize the secondary camera worn by the human. The approach and its implementation are described, along with experimental results demonstrating its performance. A discussion of the sensors commonly used in robotics, especially in SLAM, provides background on the advantages and capabilities of the system implemented in this research.
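
    The chapter's specific delayed-initialization procedure is not given in the abstract, so the following Python/NumPy sketch only illustrates the standard inverse-depth parameterization it builds on: a newly observed feature is stored as the camera's optical centre at first sighting, two bearing angles, and an inverse depth. The function names and the rho0 prior are illustrative assumptions, not the chapter's actual code.

```python
import numpy as np

def init_inverse_depth_feature(cam_center, R_wc, pixel, K, rho0=0.1):
    """Store a new landmark as (anchor, azimuth, elevation, inverse depth).

    cam_center : (3,) camera optical centre in the world frame
    R_wc       : (3,3) rotation taking camera-frame vectors to the world frame
    pixel      : (u, v) first observation of the feature
    K          : (3,3) camera intrinsic matrix
    rho0       : assumed initial inverse depth (1/m), a deliberately vague prior
    """
    u, v = pixel
    # Back-project the pixel to a ray in the camera frame, then rotate it
    # into the world frame.
    ray_c = np.linalg.inv(K) @ np.array([u, v, 1.0])
    ray_w = R_wc @ ray_c
    # Bearing angles of the ray: azimuth theta and elevation phi.
    theta = np.arctan2(ray_w[0], ray_w[2])
    phi = np.arctan2(-ray_w[1], np.hypot(ray_w[0], ray_w[2]))
    # 6-vector landmark state: anchor position, bearings, inverse depth.
    return np.hstack([cam_center, theta, phi, rho0])

def inverse_depth_to_point(y):
    """Recover the Euclidean 3D point from the 6-vector landmark state."""
    cx, cy, cz, theta, phi, rho = y
    m = np.array([np.cos(phi) * np.sin(theta),
                  -np.sin(phi),
                  np.cos(phi) * np.cos(theta)])
    return np.array([cx, cy, cz]) + m / rho
```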

    Reliable Navigation for SUAS in Complex Indoor Environments

    Indoor environments are a particular challenge for Unmanned Aerial Vehicles (UAVs). Effective navigation through these GPS-denied environments requires alternative localization systems, as well as methods of sensing and avoiding obstacles while remaining on-task. Additionally, the relatively small clearances and human presence characteristic of indoor spaces necessitate a higher level of precision and adaptability than is common in traditional UAV flight planning and execution. This research blends the optimization of individual technologies, such as state estimation and environmental sensing, with system integration and high-level operational planning. The combination of AprilTag visual markers, multi-camera Visual Odometry, and IMU data can be used to create a robust state estimator that describes the position, velocity, and rotation of a multicopter within an indoor environment. However, these data sources have unique, nonlinear characteristics that must be understood in order to plan effectively for their use in an automated environment. The research described herein begins by analyzing the unique characteristics of these data streams in order to create a highly accurate, fault-tolerant state estimator. Upon this foundation, the system built, tested, and described herein uses visual markers as navigation anchors, uses visual odometry for motion estimation and control, and then uses depth sensors to maintain an up-to-date map of the UAV's immediate surroundings. It develops and continually refines navigable routes through a novel combination of pre-defined and sensory environmental data. Emphasis is placed on the real-world development and testing of the system, through discussion of computational resource management and risk reduction.
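
    As a rough illustration of the fusion idea described above, and not the thesis's actual estimator (which is multi-camera and three-dimensional), the minimal single-axis Kalman filter below treats IMU acceleration as the prediction input and corrects with an absolute AprilTag position fix and a visual-odometry velocity estimate. The class name and all noise values are assumptions.

```python
import numpy as np

class SimpleFusionKF:
    """Toy 1-axis constant-velocity Kalman filter: IMU acceleration drives
    the prediction; AprilTag fixes and visual-odometry velocities correct it."""

    def __init__(self, dt):
        self.x = np.zeros(2)                  # state: [position, velocity]
        self.P = np.eye(2)                    # state covariance
        self.F = np.array([[1.0, dt], [0.0, 1.0]])  # constant-velocity model
        self.B = np.array([0.5 * dt**2, dt])  # acceleration input matrix
        self.Q = 0.05 * np.eye(2)             # process noise (assumed)

    def predict(self, accel):
        self.x = self.F @ self.x + self.B * accel
        self.P = self.F @ self.P @ self.F.T + self.Q

    def update(self, z, H, r):
        H = np.atleast_2d(np.asarray(H, dtype=float))
        S = H @ self.P @ H.T + np.atleast_2d(r)   # innovation covariance
        K = self.P @ H.T @ np.linalg.inv(S)       # Kalman gain
        self.x = self.x + K @ np.atleast_1d(z - H @ self.x)
        self.P = (np.eye(2) - K @ H) @ self.P

kf = SimpleFusionKF(dt=0.01)
kf.predict(accel=0.3)                        # IMU propagation step
kf.update(z=1.02, H=[1.0, 0.0], r=0.01)      # AprilTag absolute position fix
kf.update(z=0.28, H=[0.0, 1.0], r=0.04)      # visual-odometry velocity
```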

    Survey of computer vision algorithms and applications for unmanned aerial vehicles

    This paper presents a complete review of computer vision algorithms and vision-based intelligent applications developed in the field of Unmanned Aerial Vehicles (UAVs) over the last decade. During this time, the evolution of relevant technologies for UAVs, such as component miniaturization, the increase in computational capabilities, and the evolution of computer vision techniques, has enabled important advances in UAV technologies and applications. In particular, computer vision technologies integrated into UAVs make it possible to develop cutting-edge solutions to aerial perception difficulties, such as visual navigation algorithms, obstacle detection and avoidance, and aerial decision-making. These technologies have opened a wide spectrum of applications for UAVs beyond classic military and defense purposes. Unmanned Aerial Vehicles and computer vision are common topics in expert systems, and thanks to recent advances in perception technologies, modern intelligent applications are being developed to enhance autonomous UAV positioning or to avoid aerial collisions automatically, among others. The presented survey is therefore based on artificial perception applications that represent important recent advances in the expert system field related to Unmanned Aerial Vehicles. The paper presents the most significant advances in this field, capable of addressing fundamental technical limitations such as visual odometry, obstacle detection, mapping, and localization. These advances are analyzed in terms of their capabilities and potential utility, and the applications and UAVs are divided and categorized according to different criteria. This research is supported by the Spanish Government through the CICYT projects (TRA2015-63708-R and TRA2013-48314-C3-1-R).

    Quantitative Analysis of Non-Linear Probabilistic State Estimation Filters for Deployment on Dynamic Unmanned Systems

    The work conducted in this thesis is part of an EU Horizon 2020 research initiative known as DigiArt. This part of the DigiArt project presents and explores the design, formulation, and implementation of probabilistically orientated state estimation algorithms, with a focus on unmanned system positioning and three-dimensional (3D) mapping. State estimation algorithms are an influential aspect of any dynamic system with autonomous capabilities: the ability to predictively estimate future conditions enables effective decision-making and anticipation of possible changes in the environment. Initial experimental procedures utilised a wireless ultra-wide band (UWB) communication network. This system functioned through statically situated beacon nodes used to localise a dynamically operating node. The simultaneous deployment of this UWB network, an unmanned system, and a Robotic Total Station (RTS) with active and remote tracking features enabled the characterisation of the range measurement errors associated with the UWB network. These range error metrics were then integrated into a Range-based Extended Kalman Filter (R-EKF) state estimation algorithm with active outlier identification, which outperformed the native approach used by the UWB system for two-dimensional (2D) pose estimation.

    The study was then expanded to focus on state estimation in 3D, where a Six Degree-of-Freedom EKF (6DOF-EKF) was designed using Light Detection and Ranging (LiDAR) as its primary observation source. A two-step method was proposed which extracted information between consecutive LiDAR scans. Firstly, motion estimation concerning the Cartesian states x and y and the unmanned system’s heading (ψ) was achieved through a 2D feature matching process. Secondly, the extraction and alignment of ground planes from the LiDAR scans enabled motion extraction for the Cartesian position z and the attitude angles roll (θ) and pitch (φ). Results showed that the ground plane alignment failed when two scans were offset by 10.5°. To overcome this limitation, an Error State Kalman Filter (ES-KF) was formulated and deployed as a sub-system within the 6DOF-EKF, enabling the successful tracking of roll and pitch and the calculation of z. The 6DOF-EKF was seen to outperform the R-EKF and the native UWB approach, as it was much more stable, produced less noise in its position estimates, and provided 3D pose estimation.
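
    The abstract pairs range measurements with "active outlier identification". A minimal sketch of that general pattern, assuming a planar position state, a known beacon location, and a chi-square gate on the innovation (the thesis's actual formulation is not given in the abstract):

```python
import numpy as np

def ekf_range_update(x, P, z, beacon, sigma_r, gate=6.63):
    """One EKF correction from a single UWB range measurement.

    x       : state vector whose first two entries are planar position
    P       : state covariance
    z       : measured range to the beacon
    beacon  : (2,) known anchor position
    sigma_r : range measurement standard deviation
    gate    : chi-square threshold (6.63 = 99% for a 1-DoF innovation)
    """
    diff = x[:2] - beacon
    r_pred = np.linalg.norm(diff)            # predicted range
    H = np.zeros((1, len(x)))
    H[0, :2] = diff / r_pred                 # Jacobian of range w.r.t. position
    innov = z - r_pred
    S = float(H @ P @ H.T) + sigma_r**2      # innovation variance
    if innov**2 / S > gate:
        return x, P                          # gated out: treat as an outlier
    K = (P @ H.T / S).ravel()                # Kalman gain
    x = x + K * innov
    P = (np.eye(len(x)) - np.outer(K, H)) @ P
    return x, P
```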

    PHROG: A Multimodal Feature for Place Recognition

    Long-term place recognition in outdoor environments remains a challenge due to strong appearance changes in the environment. The problem becomes even more difficult when the matching between two scenes must be made with information coming from different visual sources, particularly with different spectral ranges. For instance, an infrared camera is helpful for night vision in combination with a visible camera. In this paper, we focus on testing standard feature point extractors under both constraints: repeatability across spectral ranges and long-term appearance change. We develop a new feature extraction method designed to improve repeatability across spectral ranges. We conduct an evaluation of feature robustness on long-term datasets coming from different imaging sources (optics, sensor sizes, and spectral ranges) with a Bag-of-Words approach. The tests we perform demonstrate that our method brings a significant improvement to image retrieval in a visual place recognition context, particularly when images from various spectral ranges, such as infrared and visible, must be associated: we have evaluated our approach using visible, Near InfraRed (NIR), Short Wavelength InfraRed (SWIR), and Long Wavelength InfraRed (LWIR) imagery.
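
    The evaluation described above relies on a Bag-of-Words retrieval pipeline. The sketch below shows that generic pipeline in Python with scikit-learn; the PHROG descriptor itself is not specified in the abstract, so any feature extractor is assumed to supply the descriptor arrays, and the vocabulary size is arbitrary.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_vocabulary(all_descriptors, k=500):
    """Cluster local descriptors from many images into k visual words."""
    return KMeans(n_clusters=k, n_init=10).fit(np.vstack(all_descriptors))

def bow_histogram(descriptors, vocab):
    """Quantise one image's descriptors into a normalised word histogram."""
    words = vocab.predict(descriptors)
    hist, _ = np.histogram(words, bins=np.arange(vocab.n_clusters + 1))
    return hist / max(hist.sum(), 1)

def retrieve(query_hist, db_hists):
    """Rank database images by cosine similarity to the query histogram."""
    db = np.vstack(db_hists)
    sims = db @ query_hist / (np.linalg.norm(db, axis=1)
                              * np.linalg.norm(query_hist) + 1e-12)
    return np.argsort(-sims)                 # indices, best match first
```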

    Planning, Estimation and Control for Mobile Robot Localization with Application to Long-Term Autonomy

    Two kinds of challenges arise in mobile robot localization: (i) a robot may have an a priori map of its environment, in which case the localization problem reduces to estimating the robot pose relative to a global frame, or (ii) no a priori map information is given, in which case the robot must estimate a model of its environment and localize within it. In the case of a known map, planning while simultaneously localizing is a crucial ability for operating under uncertainty. We first address this problem by designing a method to dynamically replan as the localization uncertainty or environment map is updated. Extensive simulations are conducted to compare the proposed method with FIRM (Feedback-based Information RoadMap). A shortcoming of this method, however, is its reliance on a Gaussian assumption for the probability density function (pdf) on the robot state. This assumption may be violated during autonomous operation when the robot visits parts of the environment that appear similar to others. Such situations cause ambiguity in data association between what is seen and the robot’s map, leading to a non-Gaussian pdf on the robot state. We address this challenge by developing a motion planning method to resolve situations where ambiguous data associations result in a multimodal hypothesis on the robot state. A Receding Horizon approach is developed to plan actions that sequentially disambiguate a multimodal belief and achieve tight localization on the correct pose in finite time. In our method, disambiguation is achieved through active data association: we pick target states in the map at which distinctive information can be observed for each belief mode, and we create local feedback controllers to visit those targets. Experiments are conducted with a kidnapped physical ground robot operating in an artificial maze-like environment.

    The hardest challenge arises when no a priori information is present. In long-term tasks where a robot must drive for long durations before closing loops, our goal is to minimize the localization error growth rate such that (i) accurate data associations can be made for loop closure, or (ii) in cases where loop closure is not possible, the localization error stays within desired bounds. We analyze this problem and show that accurate heading estimation is key to limiting localization error drift. We make three contributions in this domain. First, we present a method for accurate long-term localization using absolute orientation measurements, and we analyze the underlying structure of the SLAM problem and how it is affected by unbiased heading measurements. We show that consistent estimates over a 100 km trajectory are possible and that the error growth rate can be controlled with active data acquisition. We then study the more general problem in which orientation measurements may not be present, and we develop a SLAM technique that separates orientation and position estimation. We show that our method's accuracy degrades gracefully compared to the standard nonlinear-optimization-based SLAM approach and avoids the catastrophic failures that can occur due to a bad initial guess in nonlinear optimization. Finally, we take our understanding of orientation sensing into the physical world and demonstrate a 2D SLAM technique that leverages absolute orientation sensing based on naturally occurring structural cues. We demonstrate our method using both high-fidelity simulations and a real-world experiment in a 66,000 square foot warehouse. Empirical studies show that maps generated by our approach never suffer catastrophic failure, whereas existing scan-matching-based SLAM methods fail ≈ 50% of the time.
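
    As a toy illustration of the disambiguation idea only (not the thesis's receding-horizon planner), the sketch below keeps a weighted set of belief modes, reweights them by observation likelihood, and picks the candidate target whose predicted observations differ most across modes. `predict_obs` is a hypothetical sensor model supplied by the caller.

```python
import numpy as np

def update_mode_weights(weights, likelihoods):
    """Bayesian reweighting of a multimodal belief: scale each mode's
    weight by how well it explains the latest observation."""
    w = np.asarray(weights, dtype=float) * np.asarray(likelihoods, dtype=float)
    return w / w.sum()                       # assumes at least one mode fits

def pick_disambiguating_target(candidate_targets, predict_obs, modes):
    """Choose the candidate target whose predicted observations differ the
    most across belief modes, i.e. the most informative place to drive to.
    predict_obs(target, mode) -> expected observation vector (assumed API)."""
    def spread(target):
        obs = [predict_obs(target, m) for m in modes]
        return sum(np.linalg.norm(a - b)
                   for i, a in enumerate(obs) for b in obs[i + 1:])
    return max(candidate_targets, key=spread)
```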

    Computer vision based navigation for spacecraft proximity operations

    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 2010. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Cataloged from student-submitted PDF version of thesis. Includes bibliographical references (p. 219-226).

    The use of computer vision for spacecraft relative navigation and proximity operations within an unknown environment is an enabling technology for a number of future commercial and scientific space missions. This thesis presents three first steps towards a larger research initiative to develop and mature these technologies. The first step is the design and development of a "flight-traceable" upgrade to the Synchronized Position Hold Engage Reorient Experimental Satellites, known as the SPHERES Goggles. This upgrade enables experimental research and maturation of computer vision based navigation technologies on the SPHERES satellites. The second step is the development of an algorithm for vision based relative spacecraft navigation that uses a fiducial marker with the minimum number of known point correspondences. An experimental evaluation of this algorithm is presented that determines an upper bound on the accuracy and precision of the system. The third step towards vision based relative navigation in an unknown environment is a preliminary investigation into the computational issues associated with high-performance embedded computing. The computational characteristics of vision based relative navigation algorithms are discussed along with the requirements that they impose on computational hardware. A trade study is performed which compares a number of different commercially available hardware architectures to determine which would provide the best computational performance per unit of electrical power. By Brent Edward Tweddle, S.M.
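
    The thesis's algorithm is not detailed in the abstract, but for a planar square fiducial the "minimum number of known point correspondences" is the four corners, which is enough for a unique PnP solution. A hedged sketch using OpenCV, with the marker side length and calibration inputs as placeholders:

```python
import numpy as np
import cv2

SIDE = 0.10  # assumed marker side length in metres (placeholder)

# Corner positions in the marker frame, in the order SOLVEPNP_IPPE_SQUARE
# expects: top-left, top-right, bottom-right, bottom-left.
OBJECT_PTS = np.array([[-SIDE / 2,  SIDE / 2, 0.0],
                       [ SIDE / 2,  SIDE / 2, 0.0],
                       [ SIDE / 2, -SIDE / 2, 0.0],
                       [-SIDE / 2, -SIDE / 2, 0.0]])

def marker_pose(image_pts, K, dist_coeffs):
    """Camera pose relative to the marker from four detected corner pixels.

    image_pts   : (4, 2) float array of detected corners, same order as above
    K           : (3, 3) camera intrinsic matrix
    dist_coeffs : lens distortion coefficients from calibration
    """
    ok, rvec, tvec = cv2.solvePnP(OBJECT_PTS, image_pts, K, dist_coeffs,
                                  flags=cv2.SOLVEPNP_IPPE_SQUARE)
    if not ok:
        raise RuntimeError("PnP failed to converge")
    R, _ = cv2.Rodrigues(rvec)               # rotation vector -> matrix
    return R, tvec                           # marker-to-camera transform
```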

    Characterisation of a nuclear cave environment utilising an autonomous swarm of heterogeneous robots

    As nuclear facilities come to the end of their operational lifetime, safe decommissioning becomes an increasingly pressing issue. Many such facilities contain ‘nuclear caves’: areas that may have been entered infrequently, or not at all, since the construction of the facility. As a result, the topography and contents of these nuclear caves may be unknown in a number of critical respects, such as the location of dangerous substances or of significant physical blockages to movement around the cave. To aid safe decommissioning, autonomous robotic systems capable of characterising nuclear cave environments are desired. The research put forward in this thesis seeks to answer the question: is it possible to utilise a heterogeneous swarm of autonomous robots for the remote characterisation of a nuclear cave environment? This is achieved through examination of the three key components of a heterogeneous swarm: sensing, locomotion, and control. It will be shown that a heterogeneous swarm is not only capable of performing this task but is preferable to a homogeneous swarm, owing to its increased sensory and locomotive capabilities coupled with more efficient exploration.