7 research outputs found

    Multi-objective Mapping and Path Planning using Visual SLAM and Object Detection

    Path planning is one of the crucial tasks an autonomous mobile robot must accomplish to navigate its environment intelligently. Robot paths are typically planned on whatever map is available at the time, with a single optimization objective such as minimizing travel distance or time. This thesis proposes a multi-objective path planning approach that integrates Simultaneous Localization And Mapping (SLAM) with a graph-based optimization approach and an object detection algorithm. The proposed approach aims not only to find a path that minimizes travel distance but also to minimize the number of obstacles along the path to be followed. The thesis uses Visual SLAM (VSLAM) as the basis for generating graphs for global path planning. VSLAM produces a trajectory network, usually in the form of a sparse graph (if odometry based) or of probabilistic relations on landmark estimates relative to the robot. An object detection algorithm runs in parallel to add information to the trajectory network graphs generated by VSLAM, which is then used in multi-objective path planning. VSLAM, object detection, and path planning are typically studied independently; this thesis links these fields to solve the multi-objective path planning problem. The first part of the thesis presents the connections and the methodology for using VSLAM and object detection to generate trajectory network graphs. Nodes are inserted into the graph whenever VSLAM requires a new keyframe. The distance travelled between nodes is the first criterion to minimize and is computed while traversing. In parallel with VSLAM, the object detection component quantifies the number of objects detected between nodes. Only pre-trained object classes are counted; in this thesis the trained classes are cars and trucks. These object counts are added to the graph as two additional pieces of edge information. The thesis then presents multi-objective path planning on the generated graphs: the objective is not only to minimize the distance travelled but also to minimize the number of cars and trucks passed along the way. The proposed design is tested on the KITTI dataset, which is specialized for autonomous driving and contains many cars and trucks. The design is not limited to autonomous driving applications and can be applied to other fields such as surveillance and rescue, with different objects to detect.
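
    As a rough illustration of the planning step described above, the sketch below (Python, not taken from the thesis) builds a small keyframe graph whose edges carry both travel distance and an object count, scalarizes the two objectives with assumed weights, and searches it with Dijkstra's algorithm; the node names, weights, and edge values are all hypothetical.

```python
# Minimal sketch (not the thesis implementation): multi-objective path search
# over a VSLAM-style trajectory graph whose edges carry both travel distance
# and a count of detected objects (e.g. cars/trucks). The two objectives are
# scalarized with assumed weights and searched with Dijkstra's algorithm.
import heapq

# Hypothetical keyframe graph: node -> list of (neighbor, distance_m, object_count)
graph = {
    "k0": [("k1", 12.0, 3), ("k2", 20.0, 0)],
    "k1": [("k3", 15.0, 4)],
    "k2": [("k3", 18.0, 1)],
    "k3": [],
}

def plan(graph, start, goal, w_dist=1.0, w_obj=5.0):
    """Return (cost, path) minimizing w_dist*distance + w_obj*objects."""
    frontier = [(0.0, start, [start])]
    best = {start: 0.0}
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        for nbr, dist, objs in graph[node]:
            new_cost = cost + w_dist * dist + w_obj * objs
            if new_cost < best.get(nbr, float("inf")):
                best[nbr] = new_cost
                heapq.heappush(frontier, (new_cost, nbr, path + [nbr]))
    return float("inf"), []

print(plan(graph, "k0", "k3"))  # prefers the longer but emptier route via k2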

    UAS Flight Path Planning and Collision Avoidance Based on Markov Decision Process

    The growing interest in and trend toward deploying unmanned aircraft systems (UAS) in civil applications require robust traffic management approaches that can safely integrate the unmanned platforms into the airspace. Although there have been significant advances in autonomous navigation, especially in the ground-vehicle domain, there are still challenges to address for navigation in the dynamic 3D environment that the airspace presents. An integrated approach that facilitates semi-autonomous operations in dynamic environments while allowing operators to stay in the loop for intervention may provide a workable and practical solution for safe UAS integration in the airspace. This thesis proposes a new path planning method for UAS flying in a dynamic 3D environment shared by multiple aerial vehicles posing potential conflict risks. This capability is referred to as de-confliction in drone traffic management. It primarily targets applications such as UAM [1] where multiple manned and/or unmanned aircraft may be flying. A new multi-stage algorithm is designed that combines the AFP method and harmonic functions with an AKF and a Markov decision process (MDP) for dynamic path planning. It starts by predicting aircraft traffic density in the area and then generates the UAS flight path so as to minimize the risk of encounters and potential conflicts. Hardware-in-the-loop simulations of the algorithm in various scenarios are presented, with an RGB-D camera and a Pixhawk autopilot used to track the target. Numerical simulations show satisfactory path planning results in various scenarios, considerably reducing the risk of conflict with other static and dynamic obstacles. A comparison with the potential field method is provided that illustrates the robustness and speed of the MDP-based algorithm.
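
    The abstract does not detail the MDP formulation, so the sketch below only illustrates the general idea under assumed choices: value iteration on a small grid where cells with high predicted traffic density carry an extra penalty, so the resulting policy trades path length against conflict risk. Grid size, penalties, and discount factor are placeholders.

```python
# Minimal sketch (assumed formulation, not the thesis algorithm): value iteration
# on a small grid MDP where cells with high predicted traffic density carry a
# penalty, so the optimal policy trades path length against conflict risk.
import numpy as np

H, W = 5, 5
risk = np.zeros((H, W)); risk[2, 1:4] = 5.0       # assumed traffic-density penalty band
goal = (4, 4)
actions = [(-1, 0), (1, 0), (0, -1), (0, 1)]      # up, down, left, right
gamma, step_cost = 0.95, 1.0

V = np.zeros((H, W))
for _ in range(200):                               # value iteration to convergence
    V_new = V.copy()
    for r in range(H):
        for c in range(W):
            if (r, c) == goal:
                continue
            q = []
            for dr, dc in actions:
                nr = min(max(r + dr, 0), H - 1)    # deterministic, clipped transition
                nc = min(max(c + dc, 0), W - 1)
                q.append(-(step_cost + risk[nr, nc]) + gamma * V[nr, nc])
            V_new[r, c] = max(q)
    V = V_new

print(np.round(V, 1))  # values drop near the risky band, steering paths around it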

    UAV or Drones for Remote Sensing Applications in GPS/GNSS Enabled and GPS/GNSS Denied Environments

    The design of novel UAV systems and the use of UAV platforms integrated with robotic sensing and imaging techniques, as well as the development of processing workflows and the capacity for ultra-high temporal and spatial resolution data, have enabled a rapid uptake of UAVs and drones across several industries and application domains. This book provides a forum for high-quality peer-reviewed papers that broaden awareness and understanding of single- and multiple-UAV developments for remote sensing applications, and associated developments in sensor technology, data processing and communications, and UAV system design and sensing capabilities in GPS-enabled and, more broadly, Global Navigation Satellite System (GNSS)-enabled and GPS/GNSS-denied environments. Contributions include: UAV-based photogrammetry, laser scanning, multispectral imaging, hyperspectral imaging, and thermal imaging; UAV sensor applications; spatial ecology; pest detection; reef; forestry; volcanology; precision agriculture; wildlife species tracking; search and rescue; target tracking; atmosphere monitoring; chemical, biological, and natural disaster phenomena; fire prevention; flood prevention; volcanic monitoring; pollution monitoring; microclimates; and land use; wildlife and target detection and recognition from UAV imagery using deep learning and machine learning techniques; and UAV-based change detection.

    Robust non-Gaussian semantic simultaneous localization and mapping

    Submitted in partial fulfillment of the requirements for the degree of Master of Science in Aeronautics and Astronautics at the Massachusetts Institute of Technology and the Woods Hole Oceanographic Institution, September 2019. The recent success of object detection systems motivates object-based representations for robot navigation, i.e. semantic simultaneous localization and mapping (SLAM), in which we aim to jointly estimate the pose of the robot over time as well as the location and semantic class of observed objects. A solution to the semantic SLAM problem necessarily addresses the continuous inference problems "where am I?" and "where are the objects?", but also the discrete inference problem "what are the objects?". We consider the problem of semantic SLAM under non-Gaussian uncertainty. The most prominent case in which this arises is data association uncertainty, where we do not know with certainty which objects in the environment caused the measurements made by our sensor. The semantic class of an object can help to inform data association; a detection classified as a door is unlikely to be associated with a chair object. However, detectors are imperfect, and incorrect classification of objects can be detrimental to data association. While previous approaches seek to eliminate such measurements, we instead model the robot and landmark state uncertainty induced by data association, in the hope that new measurements may disambiguate state estimates, and that we may provide representations useful for developing decision-making strategies in which a robot can take actions to mitigate multimodal uncertainty. The key insight we leverage is that the semantic SLAM problem with unknown data association can be reframed as a non-Gaussian inference problem. We present two solutions to the resulting problem: we first assume Gaussian measurement models, with non-Gaussianity arising only from data association uncertainty. We then relax this assumption and provide a method that can cope with arbitrary non-Gaussian measurement models. We show quantitatively on both simulated and real data that both proposed methods have robustness advantages over traditional solutions when data associations are uncertain. This work was partially supported by the Office of Naval Research under grants N00014-18-1-2832 and N00014-16-2628, as well as the National Science Foundation (NSF) Graduate Research Fellowship.
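
    As a toy illustration of how unknown data association makes the inference non-Gaussian (not the thesis's solver), the sketch below forms a measurement likelihood as a mixture of Gaussians over candidate landmarks, weighted by assumed detector confidences; all positions, weights, and noise values are placeholders.

```python
# Minimal sketch (assumed toy setup, not the thesis solver): with unknown data
# association, a single relative-position measurement induces a *mixture* of
# Gaussian likelihoods over the candidate landmarks, weighted by the detector's
# class scores - i.e. a non-Gaussian factor rather than a unimodal one.
import numpy as np

landmarks = np.array([[2.0, 1.0], [2.2, 3.0]])     # two candidate landmark positions
class_weights = np.array([0.7, 0.3])               # assumed detector confidence per candidate
sigma = 0.3                                        # assumed measurement noise (m)

def mixture_likelihood(z, robot_xy):
    """p(z | x), marginalized over the discrete association hypotheses."""
    like = 0.0
    for lm, w in zip(landmarks, class_weights):
        pred = lm - robot_xy                       # predicted relative measurement
        err = z - pred
        like += w * np.exp(-0.5 * err @ err / sigma**2) / (2 * np.pi * sigma**2)
    return like

z = np.array([1.0, 1.5])                           # observed relative position
print(mixture_likelihood(z, robot_xy=np.array([1.0, 0.0])))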

    Visual Perception System for Aerial Manipulation: Methods and Implementations

    Technology is advancing fast, and autonomous systems are becoming a reality. Companies are increasingly demanding robotized solutions to improve the efficiency of their operations. This is also the case for aerial robots: their unique capability of moving freely in space makes them suitable for many tasks that are tedious and even dangerous for human operators. Nowadays, the vast range of sensors and commercial drones makes them highly appealing. However, strong manual effort is still required to customize existing solutions to each particular task, given the number of possible environments, robot designs, and missions. Researchers usually design different vision algorithms, hardware devices, and sensor setups to tackle specific tasks.
    Currently, aerial manipulation is being intensively studied to extend the number of applications aerial robots can perform, such as inspection, maintenance, or even operating valves and other machines. This thesis presents an aerial manipulation system and a set of perception algorithms for the automation of aerial manipulation tasks. The complete design of the system is presented, along with modular frameworks that facilitate the development of this kind of operation. First, research on object analysis for manipulation and grasp planning considering different object models is presented. Depending on the object model, different state-of-the-art grasp analysis methods are reviewed, and planning algorithms for both single and dual manipulators are shown. Second, the development of perception algorithms for object detection and pose estimation is presented. These allow the system to identify many kinds of objects in any scene and locate them in order to perform manipulation tasks; they produce the information required by the manipulation analyses described above. Third, the thesis presents how vision is used to localize the robot in the environment while simultaneously building local maps, which are beneficial for the manipulation tasks. These maps are enhanced with semantic information from the perception algorithms mentioned above. Finally, the thesis presents the development of the hardware of the aerial platform, which includes lightweight manipulators and the invention of a novel tool that allows the aerial robot to operate in contact with rigid surfaces while also serving as an estimator of the robot's position. All the techniques presented in this thesis have been validated through extensive experimentation with real aerial robotic platforms.
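
    As one possible illustration of the detection-then-pose-estimation step (an assumption, not the pipeline used in the thesis), the sketch below recovers an object's 6-DoF pose from 2D-3D correspondences with a standard PnP solve; the model points, pixel coordinates, and camera intrinsics are placeholders.

```python
# Minimal sketch (assumed, not the thesis pipeline): once an object is detected,
# its 6-DoF pose for grasp planning can be estimated from 2D-3D correspondences
# with a standard PnP solve.
import numpy as np
import cv2

model_pts = np.array([[0, 0, 0], [0.1, 0, 0], [0.1, 0.1, 0], [0, 0.1, 0]], dtype=np.float32)
image_pts = np.array([[320, 240], [380, 242], [378, 300], [318, 298]], dtype=np.float32)
K = np.array([[600, 0, 320], [0, 600, 240], [0, 0, 1]], dtype=np.float32)  # assumed intrinsics

ok, rvec, tvec = cv2.solvePnP(model_pts, image_pts, K, None)
R, _ = cv2.Rodrigues(rvec)          # rotation of the object in the camera frame
print(ok, tvec.ravel())             # translation of the object in the camera frame (m)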

    Gaze-Based Human-Robot Interaction by the Brunswick Model

    We present a new paradigm for human-robot interaction based on social signal processing, and in particular on the Brunswick model. Originally, the Brunswick model describes face-to-face dyadic interaction, assuming that the interactants communicate through a continuous exchange of non-verbal social signals in addition to the spoken messages. Social signals have to be interpreted through a proper recognition phase that considers visual and audio information. The Brunswick model makes it possible to quantitatively evaluate the quality of the interaction using statistical tools that measure how effective the recognition phase is. In this paper we cast this theory in the setting where one of the interactants is a robot; in this case, the recognition phases performed by the robot and by the human have to be revised with respect to the original model. The model is applied to Berrick, a recent open-source, low-cost robotic head platform, where gaze is the social signal considered.
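
    The paper's statistical analysis is not specified in this abstract; as a purely illustrative and assumed way to score the recognition phase, the sketch below compares recognized gaze targets against annotated ground truth using accuracy and a confusion matrix, with hypothetical labels and data.

```python
# Minimal sketch (an assumed evaluation, not the paper's statistical analysis):
# score the recognition phase by comparing the gaze targets the robot recognizes
# against annotated ground truth.
import numpy as np

labels = ["face", "object", "away"]                     # hypothetical gaze targets
truth      = np.array([0, 0, 1, 2, 1, 0, 2, 1, 0, 2])   # annotated gaze per frame
recognized = np.array([0, 1, 1, 2, 1, 0, 2, 0, 0, 2])   # robot's recognized gaze

accuracy = np.mean(truth == recognized)
confusion = np.zeros((3, 3), dtype=int)
for t, r in zip(truth, recognized):
    confusion[t, r] += 1                                # rows: truth, cols: recognized

print(f"accuracy = {accuracy:.2f}")
print(confusion)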

    Reconstruction and Scalable Detection and Tracking of 3D Objects

    The task of detecting objects in images is essential for an autonomous system to categorize, comprehend, and eventually navigate or manipulate its environment. Since many applications demand not only the detection of objects but also the estimation of their exact poses, 3D CAD models can prove helpful, since they provide means for feature extraction and hypothesis refinement. This work therefore explores two paths: first, we look into methods for creating richly textured and geometrically accurate models of real-life objects. Using these reconstructions as a basis, we then investigate how to improve 3D object detection and pose estimation, focusing especially on scalability, i.e. the problem of dealing with multiple objects simultaneously.
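
    As a loose illustration of the scalability concern (an assumed scheme, not the thesis method), the sketch below stores descriptors from several reconstructed models in a single shared bank, so that each query descriptor is matched once and votes for an object ID instead of being matched against every model separately; all descriptors are random placeholders.

```python
# Minimal sketch (assumed, not the thesis method): descriptors from all
# reconstructed CAD models go into one shared bank, and each match is mapped
# back to an object ID via a voting scheme.
import numpy as np

rng = np.random.default_rng(0)
models = {obj_id: rng.normal(size=(200, 32)) for obj_id in range(5)}  # 5 hypothetical models

# Stack all model descriptors once and remember which object each row came from.
bank = np.vstack(list(models.values()))
owner = np.repeat(list(models.keys()), [len(d) for d in models.values()])

def match(query_desc):
    """Return the object ID whose descriptors best explain the query descriptors."""
    votes = np.zeros(len(models), dtype=int)
    for q in query_desc:
        nearest = np.argmin(np.linalg.norm(bank - q, axis=1))   # brute-force NN lookup
        votes[owner[nearest]] += 1
    return int(np.argmax(votes)), votes

query = models[3][:20] + rng.normal(scale=0.05, size=(20, 32))  # noisy view of object 3
print(match(query))                                             # should vote for object 3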