
    High-Performance Testbed for Vision-Aided Autonomous Navigation for Quadrotor UAVs in Cluttered Environments

    This thesis presents the development of an aerial robotic testbed based on the Robot Operating System (ROS). The purpose of this high-performance testbed is to develop a system capable of performing robust navigation tasks using vision tools such as a stereo camera. While computing the robot's odometry, the system is also capable of sensing the environment with the same stereo camera; hence, all navigation tasks are performed using a stereo camera and an inertial measurement unit (IMU) as the main sensor suite. ROS is used as the software integration framework because of its efficient communication and sensor interfaces. Moreover, it allows the use of C++, which performs well, especially on embedded platforms. Combining ROS and C++ provides the computational efficiency and tools needed to handle fast, real-time image processing and planning, which are vital parts of navigation and obstacle avoidance at this scale. The main application of this work is a real-time, efficient way to demonstrate vision-based navigation in UAVs. The proposed approach is developed for a quadrotor UAV that performs defensive maneuvers whenever obstacles are in its way, while constantly moving towards a user-defined final destination. Stereo depth computation adds a third axis to the two-dimensional image coordinate frame; the result can be referred to as the depth image space or depth image coordinate frame. Planning in this frame of reference is combined with certain precomputed action primitives, whose formulation leads to a hybrid control law for feasible trajectory generation. A proof of stability of this system is also presented. The proposed approach accounts for the fact that, when performing fast maneuvers and obstacle avoidance simultaneously, many standard optimization approaches may not run in real time on board due to time and resource limitations. This motivates the development of real-time techniques for vision-based autonomous navigation.
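    As a minimal illustration of the depth image coordinate frame mentioned above, the sketch below lifts a rectified stereo pixel into (column, row, depth) coordinates using the standard disparity relation z = fB/d. The focal length and baseline values are hypothetical placeholders, not parameters from the thesis.

        #include <cmath>
        #include <cstdio>

        struct DepthPoint { double u, v, z; };  // image column, row, metric depth

        // Depth from disparity in a rectified stereo pair: z = f * B / d.
        DepthPoint liftPixel(double u, double v, double disparity,
                             double f = 425.0 /* px, assumed */,
                             double B = 0.12  /* m, assumed  */) {
            double z = (disparity > 0.0) ? f * B / disparity : INFINITY;
            return {u, v, z};  // a point in the depth image coordinate frame
        }

        int main() {
            DepthPoint p = liftPixel(320.0, 240.0, 16.0);
            std::printf("depth-image coords: (%.0f, %.0f, %.3f m)\n", p.u, p.v, p.z);
        }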

    INTELLIGENT VISION-BASED NAVIGATION SYSTEM

    This thesis presents a complete vision-based navigation system that can plan and follow an obstacle-avoiding path to a desired destination on the basis of an internal map updated with information gathered from its visual sensor. For vision-based self-localization, the system uses new floor-edge-specific filters for detecting floor edges and their pose, a new algorithm for determining the orientation of the robot, and a new procedure for selecting the initial positions in the self-localization procedure. Self-localization is based on matching visually detected features with those stored in a prior map. For planning, the system demonstrates for the first time a real-world application of the neural-resistive grid method to robot navigation. The neural-resistive grid is modified with a new connectivity scheme that allows the representation of the collision-free space of a robot with finite dimensions via divergent connections between the spatial memory layer and the neural-resistive grid layer. A new control system is also proposed. It uses a Smith Predictor architecture modified for navigation applications and for the intermittent, delayed feedback typical of artificial vision. A receding-horizon control strategy is implemented using Normalised Radial Basis Function nets as path encoders, to ensure continuous motion during the delay between measurements. The system is tested in a simplified environment where an obstacle placed anywhere is detected visually and integrated into the path-planning process. The results show the validity of the control concept and the crucial importance of a robust vision-based self-localization process.
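    To make the resistive-grid idea concrete, the following hedged sketch relaxes node "voltages" on a small grid (goal clamped high, obstacles clamped to zero) and then follows the voltage gradient uphill from a start node. The grid size and obstacle layout are invented for illustration, and the thesis's divergent connectivity scheme for a robot of finite dimensions is not reproduced here.

        #include <array>
        #include <cstdio>

        constexpr int W = 8, H = 8;

        int main() {
            std::array<std::array<double, W>, H> v{};   // node "voltages"
            std::array<std::array<bool, W>, H> wall{};  // obstacle mask
            for (int y = 2; y <= 5; ++y) wall[y][4] = true;  // a vertical wall
            const int gx = 6, gy = 3;                        // goal node

            // Relaxation: free nodes settle to the mean of their 4 neighbours;
            // the goal is clamped high, obstacle nodes are clamped to zero.
            for (int it = 0; it < 2000; ++it)
                for (int y = 1; y < H - 1; ++y)
                    for (int x = 1; x < W - 1; ++x) {
                        if (wall[y][x]) { v[y][x] = 0.0; continue; }
                        if (x == gx && y == gy) { v[y][x] = 1.0; continue; }
                        v[y][x] = 0.25 * (v[y-1][x] + v[y+1][x] + v[y][x-1] + v[y][x+1]);
                    }

            // Follow the voltage gradient uphill from a start node to the goal.
            int x = 1, y = 6;
            while (!(x == gx && y == gy)) {
                int bx = x, by = y;
                const int dx[4] = {1, -1, 0, 0}, dy[4] = {0, 0, 1, -1};
                for (int k = 0; k < 4; ++k)
                    if (v[y + dy[k]][x + dx[k]] > v[by][bx]) { bx = x + dx[k]; by = y + dy[k]; }
                if (bx == x && by == y) break;  // no uphill neighbour (should not happen here)
                x = bx; y = by;
                std::printf("-> (%d,%d)\n", x, y);
            }
        }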

    A 64mW DNN-based Visual Navigation Engine for Autonomous Nano-Drones

    Fully autonomous miniaturized robots (e.g., drones) with artificial intelligence (AI)-based visual navigation capabilities are extremely challenging drivers of Internet-of-Things edge intelligence. Visual navigation based on AI approaches such as deep neural networks (DNNs) is becoming pervasive for standard-size drones, but is considered out of reach for nano-drones with a size of a few cm². In this work, we present the first (to the best of our knowledge) demonstration of a navigation engine for autonomous nano-drones capable of closed-loop, end-to-end DNN-based visual navigation. To achieve this goal, we developed a complete methodology for parallel execution of complex DNNs directly on board resource-constrained, milliwatt-scale nodes. Our system is based on GAP8, a novel parallel ultra-low-power computing platform, and a 27 g commercial, open-source CrazyFlie 2.0 nano-quadrotor. As part of our general methodology, we discuss the software mapping techniques that enable the state-of-the-art deep convolutional neural network presented in [1] to be fully executed on board within a strict 6 fps real-time constraint with no compromise in terms of flight results, while all processing is done with only 64 mW on average. Our navigation engine is flexible and can span a wide performance range: at its peak performance corner it achieves 18 fps while still consuming, on average, just 3.5% of the power envelope of the deployed nano-aircraft. Comment: 15 pages, 13 figures, 5 tables, 2 listings; accepted for publication in the IEEE Internet of Things Journal (IEEE IOTJ).
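    Assuming, as in DroNet-style networks, that the on-board DNN outputs a steering angle and a collision probability, the hedged sketch below shows one plausible way to close the control loop from those two outputs. The smoothing constant and speed limit are illustrative placeholders, not values from the GAP8 deployment.

        #include <algorithm>
        #include <cstdio>

        struct Cmd { double forward_mps, yaw_rate_rps; };

        Cmd toCommand(double steer, double p_coll,
                      double v_max = 1.0 /* m/s, assumed */,
                      double alpha = 0.7 /* filter gain, assumed */) {
            static double v_prev = 0.0, yaw_prev = 0.0;
            // Slow down as the predicted collision probability rises...
            double v = (1.0 - alpha) * v_prev + alpha * (1.0 - p_coll) * v_max;
            // ...and low-pass filter the steering output into a yaw-rate command.
            double yaw = (1.0 - alpha) * yaw_prev + alpha * std::clamp(steer, -1.0, 1.0);
            v_prev = v; yaw_prev = yaw;
            return {v, yaw};
        }

        int main() {
            Cmd c = toCommand(/*steer=*/0.2, /*p_coll=*/0.8);
            std::printf("v=%.2f m/s, yaw=%.2f rad/s\n", c.forward_mps, c.yaw_rate_rps);
        }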

    Autonomous Navigation of Mobile Robot Using Modular Architecture for Unstructured Environment

    This article proposes a solution for autonomous navigation of a mobile robot based on a distributed control architecture. In this architecture, each stage of the algorithm is divided into separate software modules capable of interfacing with each other to obtain an effective global solution. The work starts with the selection of sensors suited to the task; for the present work, a stereo vision module and a laser range finder are used. These sensors are integrated with the robot controller via Ethernet/USB, and the sensory feedback is used to control and navigate the robot. Using this architecture, an algorithm has been developed and implemented to intelligently avoid dynamic obstacles and optimally re-plan the path to reach the target location. The algorithm has been successfully tested on a Summit_XL mobile robot. The thesis describing the present research work is divided into eight chapters. The subject, its contextual relevance, and related matters, including the objectives of the work, are presented in Chapter 1. Reviews of several diverse streams of literature on different aspects of the topic, such as autonomous navigation using various combinations of sensor networks, SLAM, and obstacle detection and avoidance, are presented in Chapter 2. In Chapter 3, the selected methodologies are explained. Chapter 4 presents a detailed description of the sensors, the mobile platform, and the software tools used to implement the developed methodology. In Chapter 5, a detailed view of the experimental setup is provided. Procedures and parametric evaluations are given in Chapter 6. Successful indoor test results are described in Chapter 7. Finally, Chapter 8 presents the conclusions and the future scope of the research work.
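    As a purely illustrative reading of the modular pattern described above, the sketch below defines one common interface that hypothetical stereo and laser modules implement, and a loop that triggers re-planning as soon as any module reports an obstacle. All names and behaviors are invented placeholders, not the article's actual software.

        #include <cstdio>
        #include <memory>
        #include <vector>

        // Common interface that every sensing module exposes to the controller.
        struct SensorModule {
            virtual bool obstacleAhead() const = 0;
            virtual ~SensorModule() = default;
        };
        // Hypothetical stand-ins for the laser range finder and stereo modules.
        struct LaserModule  : SensorModule { bool obstacleAhead() const override { return true;  } };
        struct StereoModule : SensorModule { bool obstacleAhead() const override { return false; } };

        int main() {
            std::vector<std::unique_ptr<SensorModule>> modules;
            modules.push_back(std::make_unique<LaserModule>());
            modules.push_back(std::make_unique<StereoModule>());
            // Navigation loop body: re-plan as soon as any module flags an obstacle.
            for (const auto& m : modules)
                if (m->obstacleAhead()) { std::printf("obstacle: re-planning path\n"); break; }
        }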

    Implementation of target tracking in Smart Wheelchair Component System

    Independent mobility is critical to individuals of any age. While the needs of many individuals with disabilities can be satisfied with power wheelchairs, some members of the disabled community find it difficult or impossible to operate a standard power wheelchair. This population includes, but is not limited to, individuals with low vision, visual field neglect, spasticity, tremors, or cognitive deficits. To meet the needs of this population, our group is developing cost-effective, modularly designed Smart Wheelchairs. Our objective is to develop an assistive navigation system that will seamlessly integrate into the lifestyles of individuals with disabilities and provide safe, independent mobility and navigation without imposing an excessive physical or cognitive load. The Smart Wheelchair Component System (SWCS) can be added to a variety of commercial power wheelchairs with minimal modification to provide navigation assistance. Previous versions of the SWCS used acoustic and infrared rangefinders to identify and avoid obstacles, but these sensors do not lend themselves to many desirable higher-level behaviors. To achieve these higher-level behaviors, we integrated a Continuously Adaptive Mean Shift (CAMSHIFT) target tracking algorithm into the SWCS, along with the Minimal Vector Field Histogram (MVFH) obstacle avoidance algorithm. The target tracking algorithm provides the basis for two distinct operating modes: (1) a "follow-the-leader" mode and (2) a "move to stationary target" mode. The ability to track a stationary or moving target will make smart wheelchairs more useful as a mobility aid and is also expected to be useful for wheeled mobility training and evaluation. Beyond wheelchair users themselves, the caregivers, clinicians, and transporters who assist them will also benefit, as safe and independent mobility reduces the level of assistance wheelchair users need.
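    A hedged sketch of CAMSHIFT target tracking with OpenCV, in the spirit of the "follow-the-leader" mode, is shown below; the SWCS integration and the MVFH obstacle avoider are not shown, and the initial target region is hard-coded rather than user-selected.

        #include <opencv2/opencv.hpp>
        #include <cstdio>

        int main() {
            cv::VideoCapture cap(0);                    // wheelchair-mounted camera
            if (!cap.isOpened()) return 1;
            cv::Mat frame, hsv, backproj;
            cap >> frame;
            cv::Rect window(200, 150, 80, 80);          // assumed initial target region

            // Hue histogram of the target region, used for back-projection.
            cv::cvtColor(frame, hsv, cv::COLOR_BGR2HSV);
            int histSize = 30; float hranges[] = {0, 180};
            const float* ranges[] = {hranges};
            int channels[] = {0};
            cv::Mat roi = hsv(window), hist;
            cv::calcHist(&roi, 1, channels, cv::Mat(), hist, 1, &histSize, ranges);
            cv::normalize(hist, hist, 0, 255, cv::NORM_MINMAX);

            while (cap.read(frame)) {
                cv::cvtColor(frame, hsv, cv::COLOR_BGR2HSV);
                cv::calcBackProject(&hsv, 1, channels, hist, backproj, ranges);
                // CAMSHIFT adapts the search window's size and orientation each frame.
                cv::RotatedRect box = cv::CamShift(
                    backproj, window,
                    cv::TermCriteria(cv::TermCriteria::EPS | cv::TermCriteria::COUNT, 10, 1));
                // A steering cue for "follow the leader": horizontal offset from centre.
                double err = box.center.x - frame.cols / 2.0;
                std::printf("steering error: %+.1f px\n", err);
            }
        }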

    Collision Avoidance on Unmanned Aerial Vehicles using Deep Neural Networks

    Unmanned Aerial Vehicles (UAVs), although hardly a new technology, have recently gained a prominent role in many industries. They are widely used not only by enthusiastic consumers but also in highly demanding professional situations, and they will have a massive societal impact over the coming years. However, the operation of UAVs carries serious safety risks, such as collisions with dynamic obstacles (birds, other UAVs, or randomly thrown objects). These collision scenarios are complex to analyze in real time and are sometimes computationally impossible to solve with existing State of the Art (SoA) algorithms, making the use of UAVs an operational hazard and therefore significantly reducing their commercial applicability in urban environments. In this work, a conceptual framework for both stand-alone and swarm (networked) UAVs is introduced, focusing on the architectural requirements of the collision avoidance subsystem needed to achieve acceptable levels of safety and reliability. First, the SoA principles for collision avoidance against stationary objects are reviewed. Afterward, a novel image processing approach that uses deep learning and optical flow is presented. This approach is capable of detecting potential collisions with dynamic objects and generating escape trajectories. Finally, novel combinations of models and algorithms were tested, providing a new approach to UAV collision avoidance using Deep Neural Networks. The feasibility of the proposed approach was demonstrated through experimental tests using a UAV created from scratch with the developed framework.
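    The sketch below illustrates the optical-flow half of such an approach under stated assumptions: a looming (expansion) cue, computed as the mean divergence of a dense flow field, flags a fast-approaching object. The deep-learning detector and the escape-trajectory generator are omitted, and the threshold is an invented placeholder.

        #include <opencv2/opencv.hpp>
        #include <cstdio>

        int main() {
            cv::VideoCapture cap(0);
            if (!cap.isOpened()) return 1;
            cv::Mat prev, next, flow;
            cap >> prev;
            cv::cvtColor(prev, prev, cv::COLOR_BGR2GRAY);

            while (cap.read(next)) {
                cv::cvtColor(next, next, cv::COLOR_BGR2GRAY);
                cv::calcOpticalFlowFarneback(prev, next, flow, 0.5, 3, 15, 3, 5, 1.2, 0);

                // Divergence of the flow field: positive values mean the image is
                // expanding around the focus of expansion, i.e. something is looming.
                double div = 0.0; int n = 0;
                for (int y = 1; y < flow.rows - 1; ++y)
                    for (int x = 1; x < flow.cols - 1; ++x) {
                        const cv::Point2f& fx1 = flow.at<cv::Point2f>(y, x + 1);
                        const cv::Point2f& fx0 = flow.at<cv::Point2f>(y, x - 1);
                        const cv::Point2f& fy1 = flow.at<cv::Point2f>(y + 1, x);
                        const cv::Point2f& fy0 = flow.at<cv::Point2f>(y - 1, x);
                        div += 0.5 * ((fx1.x - fx0.x) + (fy1.y - fy0.y));
                        ++n;
                    }
                if (n > 0 && div / n > 0.05)            // hypothetical looming threshold
                    std::printf("looming object: trigger escape trajectory\n");
                prev = next.clone();
            }
        }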

    Autonomous navigation and mapping of mobile robots based on 2D/3D cameras combination

    Due to the steadily increasing demand for systems that support everyday living, there is currently great interest in intelligent autonomous systems. Autonomous systems are used in homes, offices, and museums as well as in factories. They can take on tasks such as cleaning, household assistance, transportation, security, education, and shop assistance, because they can keep processing time under control and deliver precise, reliable results. One research field of autonomous systems is mobile robot navigation and map generation: the mobile robot should carry out its tasks autonomously while generating a map of the environment in order to navigate. The main problem is that the mobile robot has to explore an unknown environment, in which no additional reference information is available, and generate a three-dimensional map of it. The robot has to estimate its position and pose within the map, which requires finding distinguishable objects; the selected sensors and the registration algorithm therefore play a decisive role. Sensors that can provide both depth and image data are still scarce. A new 3D sensor, the Photonic Mixer Device (PMD), captures the surrounding scene in real time at a high frame rate, delivering depth and grayscale data. However, higher-quality three-dimensional exploration requires surface details and textures, which can only be obtained from a high-resolution CCD camera. This work therefore presents mobile robot exploration using a combination of CCD and PMD cameras to create a three-dimensional map of the environment. In addition, a high-performance algorithm for real-time 3D mapping and pose estimation, using the Simultaneous Localization and Mapping (SLAM) technique, is proposed. The autonomously operating mobile robot should also handle tasks such as recognizing objects in its environment in order to accomplish various practical missions. Visual input from the CCD camera not only delivers high-resolution texture data for the depth data but is also used for object recognition. The Iterative Closest Point (ICP) algorithm uses two point clouds to determine the translation and rotation between two scans. Finally, the evaluation of the correspondences and the reconstruction of the map to resemble the real environment are included in this thesis.
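    The rigid-alignment core of the ICP algorithm mentioned above can be sketched as follows: given matched point pairs from two scans, the rotation and translation are recovered from the pair centroids and an SVD of the cross-covariance matrix. The nearest-neighbour matching and the iteration loop of full ICP are omitted, and the test data are synthetic.

        #include <Eigen/Dense>
        #include <iostream>
        #include <vector>

        int main() {
            using Eigen::Matrix3d; using Eigen::Vector3d;
            const double kHalfPi = 1.5707963267948966;
            std::vector<Vector3d> P = {{0,0,0}, {1,0,0}, {0,1,0}, {0,0,1}};
            std::vector<Vector3d> Q;                      // P rotated 90° about z, then shifted
            Matrix3d Rtrue; Rtrue = Eigen::AngleAxisd(kHalfPi, Vector3d::UnitZ());
            for (const auto& p : P) Q.push_back(Rtrue * p + Vector3d(0.5, 0.0, 0.0));

            // Centroids of the two matched point sets.
            Vector3d cp = Vector3d::Zero(), cq = Vector3d::Zero();
            for (size_t i = 0; i < P.size(); ++i) { cp += P[i]; cq += Q[i]; }
            cp /= P.size(); cq /= Q.size();

            // Cross-covariance of the centred pairs.
            Matrix3d H = Matrix3d::Zero();
            for (size_t i = 0; i < P.size(); ++i) H += (P[i] - cp) * (Q[i] - cq).transpose();

            Eigen::JacobiSVD<Matrix3d> svd(H, Eigen::ComputeFullU | Eigen::ComputeFullV);
            Matrix3d R = svd.matrixV() * svd.matrixU().transpose();
            if (R.determinant() < 0) {                    // guard against a reflection
                Matrix3d V = svd.matrixV(); V.col(2) *= -1.0;
                R = V * svd.matrixU().transpose();
            }
            Vector3d t = cq - R * cp;
            std::cout << "R =\n" << R << "\nt = " << t.transpose() << "\n";
        }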

    Mobile Robots Navigation

    Mobile robot navigation includes several interrelated activities: (i) perception, as obtaining and interpreting sensory information; (ii) exploration, as the strategy that guides the robot in selecting the next direction to go; (iii) mapping, involving the construction of a spatial representation using the sensory information perceived; (iv) localization, as the strategy to estimate the robot's position within the spatial map; (v) path planning, as the strategy to find a path, optimal or not, towards a goal location; and (vi) path execution, where motor actions are determined and adapted to environmental changes. The book addresses these activities by integrating results from the research work of several authors from all over the world. Research cases are documented in 32 chapters organized into 7 categories, described next.
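    As a schematic of how these six activities interlock in a single control cycle, consider the placeholder skeleton below; every type and function in it is invented for illustration and stands in for the concrete methods surveyed in the book.

        #include <cstdio>

        struct Scan {}; struct Map {}; struct Pose {}; struct Path {};

        Scan perceive()                                 { return {}; }   // (i)   sensing
        Pose localize(const Map&, const Scan&)          { return {}; }   // (iv)  pose estimation
        void integrate(Map&, const Pose&, const Scan&)  {}               // (iii) mapping
        Pose explore(const Map&, const Pose&)           { return {}; }   // (ii)  next direction
        Path plan(const Map&, const Pose&, const Pose&) { return {}; }   // (v)   path planning
        bool execute(const Path&)                       { return true; } // (vi)  motor actions

        int main() {
            Map map;
            for (int cycle = 0; cycle < 3; ++cycle) {
                Scan s = perceive();
                Pose p = localize(map, s);
                integrate(map, p, s);
                Pose goal = explore(map, p);
                if (!execute(plan(map, p, goal)))
                    std::printf("cycle %d: adapting to an environmental change\n", cycle);
            }
        }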