7 research outputs found

    Visual Monocular Obstacle Avoidance for Small Unmanned Vehicles

    This paper presents and extensively evaluates a visual obstacle avoidance method that uses frames from a single camera and is intended for small devices (ground or aerial robots, or even smartphones). It is based on image region classification using so-called relative focus maps, requires no a priori training, and is applicable in both indoor and outdoor environments, which we demonstrate through evaluations on both simulated and real data.
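    The abstract does not specify how the relative focus maps are computed; a common proxy for per-region focus is the variance of the Laplacian. The following is a minimal sketch under that assumption, with the grid size and threshold as illustrative values, not figures from the paper.

        import cv2
        import numpy as np

        def relative_focus_map(frame, grid=(8, 8)):
            """Approximate a relative focus map: variance of the Laplacian per cell,
            normalized across the frame. High values ~ sharp (in-focus) regions.
            This is an assumed proxy; the paper's exact focus measure may differ."""
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            lap = cv2.Laplacian(gray, cv2.CV_64F)
            h, w = gray.shape
            gh, gw = grid
            fmap = np.zeros(grid)
            for i in range(gh):
                for j in range(gw):
                    cell = lap[i * h // gh:(i + 1) * h // gh,
                               j * w // gw:(j + 1) * w // gw]
                    fmap[i, j] = cell.var()
            return fmap / (fmap.max() + 1e-9)  # relative, in [0, 1]

        # Usage sketch on a synthetic frame; cells above the (assumed) threshold
        # are flagged as candidate obstacle regions.
        frame = np.random.randint(0, 255, (480, 640, 3), np.uint8)
        obstacle_cells = relative_focus_map(frame) > 0.5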

    Obstacle avoidance and distance measurement for unmanned aerial vehicles using monocular vision

    Unmanned Aerial Vehicles (UAVs), commonly known as drones, are better suited than manned aircraft for "dull, dirty, or dangerous" missions. A drone can be remotely controlled, or it can follow a predefined path using automation algorithms built in during development. In general, a UAV system combines the drone in the air with a control system on the ground. Designing a UAV means integrating hardware, software, sensors, actuators, communication systems, and payloads into a single unit for the intended application. The most challenging problem UAVs face in becoming completely autonomous is obstacle avoidance. In this paper, a novel method for detecting frontal obstacles with a monocular camera is proposed. Computer vision algorithms, namely Scale-Invariant Feature Transform (SIFT) and Speeded-Up Robust Features (SURF), are used to detect frontal obstacles, and the distance of each obstacle from the camera is then calculated. To meet the defined objectives, the designed system is tested on self-recorded videos captured with a DJI Phantom 4 Pro.
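    The abstract does not give the distance formula. A common monocular approach, sketched below under the assumption of a known object width W and focal length f in pixels, uses the pinhole model Z = f * W / w, where w is the object's apparent width in pixels. The SIFT matching step follows standard OpenCV usage; all parameter values are illustrative, not the paper's.

        import cv2

        def detect_and_match(img1, img2, ratio=0.75):
            """Detect SIFT keypoints in two frames and keep matches that pass
            Lowe's ratio test (standard OpenCV usage, assumed comparable to the paper's)."""
            sift = cv2.SIFT_create()
            k1, d1 = sift.detectAndCompute(img1, None)
            k2, d2 = sift.detectAndCompute(img2, None)
            matcher = cv2.BFMatcher(cv2.NORM_L2)
            matches = matcher.knnMatch(d1, d2, k=2)
            good = [m for m, n in matches if m.distance < ratio * n.distance]
            return good, k1, k2

        def pinhole_distance(focal_px, real_width_m, pixel_width):
            """Pinhole-model range estimate Z = f * W / w (an assumption,
            not necessarily the paper's distance calculation)."""
            return focal_px * real_width_m / pixel_width

        # Example: f = 700 px, a 0.5 m wide obstacle spanning 80 px is ~4.4 m away.
        print(pinhole_distance(700, 0.5, 80))  # 4.375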

    Fast, Accurate Thin-Structure Obstacle Detection for Autonomous Mobile Robots

    Safety is paramount for mobile robotic platforms such as self-driving cars and unmanned aerial vehicles. This work is devoted to a task that is indispensable for safety yet was largely overlooked in the past: detecting obstacles with very thin structures, such as wires, cables, and tree branches. This is a challenging problem, as thin objects can be problematic for active sensors such as lidar and sonar, and even for stereo cameras. In this work, we propose to use video sequences for thin obstacle detection. We represent obstacles with edges in the video frames and reconstruct them in 3D using efficient edge-based visual odometry techniques. We provide both a monocular camera solution and a stereo camera solution. The former incorporates Inertial Measurement Unit (IMU) data to resolve scale ambiguity, while the latter enjoys a novel, purely vision-based solution. Experiments demonstrate that the proposed methods are fast and able to detect thin obstacles robustly and accurately under various conditions. Comment: Appeared at the IEEE CVPR 2017 Workshop on Embedded Vision.
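    The edge-based visual odometry pipeline is not detailed in the abstract; the fragment below only sketches the 3D reconstruction step it relies on: given matched edge pixels in two frames and a known relative pose (which, in the monocular solution, would get its scale from the IMU), points are triangulated with OpenCV. The intrinsics, pose, and pixel coordinates are placeholder assumptions.

        import cv2
        import numpy as np

        # Assumed intrinsics and relative pose (placeholders; in the paper's
        # monocular solution the translation scale would come from IMU data).
        K = np.array([[700.0, 0, 320], [0, 700.0, 240], [0, 0, 1]])
        R = np.eye(3)                        # rotation, frame 1 -> frame 2
        t = np.array([[0.1], [0.0], [0.0]])  # translation component of the extrinsics

        P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])  # projection matrix, frame 1
        P2 = K @ np.hstack([R, t])                         # projection matrix, frame 2

        def triangulate_edge_points(pts1, pts2):
            """Triangulate matched edge pixels (2xN arrays) into Nx3 3D points."""
            homog = cv2.triangulatePoints(P1, P2, pts1, pts2)  # 4xN homogeneous
            return (homog[:3] / homog[3]).T

        # Usage sketch: in the full pipeline these pixels would come from matched
        # edges (e.g. Canny) in consecutive frames; here they are made up.
        pts1 = np.array([[100.0, 150.0], [120.0, 160.0]]).T
        pts2 = np.array([[102.0, 150.0], [122.0, 160.0]]).T
        xyz = triangulate_edge_points(pts1, pts2)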

    CNN-Based Vision Model for Obstacle Avoidance of Mobile Robot


    How hard is it to cross the room? -- Training (Recurrent) Neural Networks to steer a UAV

    This work explores the feasibility of steering a drone with a (recurrent) neural network, based on input from a forward-looking camera, in the context of a high-level navigation task. We set up a generic framework for training a network to perform navigation tasks based on imitation learning. It can be applied to both aerial and land vehicles. As a proof of concept we apply it to a UAV (Unmanned Aerial Vehicle) in a simulated environment, learning to cross a room containing a number of obstacles. So far, only feedforward neural networks (FNNs) have been used to train UAV control. To cope with more complex tasks, we propose the use of recurrent neural networks (RNNs) instead and successfully train an LSTM (Long Short-Term Memory) network for controlling UAVs. Vision-based control is a sequential prediction problem, known for its highly correlated input data. The correlation makes training a network hard, especially an RNN. To overcome this issue, we investigate an alternative sampling method during training, namely window-wise truncated backpropagation through time (WW-TBPTT). Further, end-to-end training requires a large amount of data, which is often not available. Therefore, we compare the performance of retraining only the Fully Connected (FC) and LSTM control layers against networks trained end-to-end. Performing the relatively simple task of crossing a room already reveals important guidelines and good practices for training neural control networks. Different visualizations help to explain the learned behavior. Comment: 12 pages, 30 figures.
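    The exact WW-TBPTT variant is defined in the paper itself; the sketch below shows the general idea as read from the abstract: instead of unrolling over whole, highly correlated flight sequences, training samples fixed-length windows at random offsets and truncates gradients at the window boundary. The model, window length, and data shapes are illustrative stand-ins (PyTorch), not the paper's architecture.

        import torch
        import torch.nn as nn

        class SteeringLSTM(nn.Module):
            """Toy stand-in for a control network: visual features -> LSTM -> steering."""
            def __init__(self, feat_dim=64, hidden=128):
                super().__init__()
                self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
                self.fc = nn.Linear(hidden, 1)  # steering command

            def forward(self, x, state=None):
                out, state = self.lstm(x, state)
                return self.fc(out), state

        def train_wwtbptt(model, sequences, targets, window=20, steps=50):
            """Window-wise truncated BPTT (as sketched from the abstract): sample a
            random fixed-length window per step and backpropagate within it only."""
            opt = torch.optim.Adam(model.parameters(), lr=1e-3)
            loss_fn = nn.MSELoss()
            T = sequences.shape[1]
            for _ in range(steps):
                start = torch.randint(0, T - window, (1,)).item()
                x = sequences[:, start:start + window]   # (batch, window, feat_dim)
                y = targets[:, start:start + window]     # (batch, window, 1)
                pred, _ = model(x)  # fresh state: gradients truncated at window edge
                loss = loss_fn(pred, y)
                opt.zero_grad()
                loss.backward()
                opt.step()

        # Usage sketch with random stand-in data: 4 sequences, 200 steps, 64-dim features.
        model = SteeringLSTM()
        train_wwtbptt(model, torch.randn(4, 200, 64), torch.randn(4, 200, 1))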

    FFAU: Framework for Fully Autonomous UAVs

    Unmanned Aerial Vehicles (UAVs), although hardly a new technology, have recently gained a prominent role in many industries, being widely used not only among enthusiastic consumers but also in highly demanding professional situations, and will have a massive societal impact over the coming years. However, the operation of UAVs is fraught with serious safety risks, such as collisions with dynamic obstacles (birds, other UAVs, or randomly thrown objects). These collision scenarios are complex to analyze in real time, sometimes being computationally impossible to solve with existing State of the Art (SoA) algorithms, making the use of UAVs an operational hazard and therefore significantly reducing their commercial applicability in urban environments. In this work, a conceptual framework for both stand-alone and swarm (networked) UAVs is introduced, with a focus on the architectural requirements of the collision avoidance subsystem needed to achieve acceptable levels of safety and reliability. The SoA principles for collision avoidance against stationary objects are reviewed, and a novel approach is described that uses deep learning techniques to solve the computationally intensive problem of real-time collision avoidance with dynamic objects. The proposed framework includes a web interface allowing full control of UAVs as remote clients through a supervisor cloud-based platform. The feasibility of the proposed approach was demonstrated through experimental tests using a UAV developed from scratch with the proposed framework. Test flight results are presented for an autonomous UAV monitored from multiple countries across the world.

    Automatic Drone Perching Based on Computer Vision (Perchage automatique de drones basé sur la vision artificielle)

    The use of artificial intelligence and computer vision is growing rapidly in the drone industry, particularly for picking up or dropping off objects and for landing. Drone perching, closely related to these tasks, is also beginning to emerge. This capability would allow drones to perform new tasks but also to mitigate their drawbacks, such as short flight time and fragility; for example, it would let a drone land when its battery runs low or weather conditions deteriorate. Current perching and object-grasping approaches, carried out using computer vision and a gripper added to the drone, rely only on object detection: the target objects or supports are selected in advance by the researchers to match the geometry of the gripper. Thus, when a support has a complex shape or dimensions too different from those of the gripper, the drone will detect it and attempt to perch on it without success. The objective of this research is to develop a computer-vision object detection system that, given the characteristics of the drone's gripper, detects objects, assigns each a matching score, and returns the ideal support. The matching score, which we established and named "CSP" (Concordance Support-Préhenseur, i.e., support-gripper concordance) in this thesis, is determined by comparing the gripper's opening interval with the real dimensions of the objects. Performing this comparison, and using a matching score to determine a suitable support, is a procedure we devised. Starting from a detection algorithm based on classification and segmentation of detected objects, the proposed solution was developed in three phases: 1. Supervised training of the detection algorithm's neural network on new object classes suited to drone perching. 2. Design of an algorithm that compares the dimensions of detected objects with those of the grasping system in order to determine the ideal support for perching. 3. Evaluation of the performance of the complete detection model, combining the two algorithms, through tests on a set of objects with different parameters and environmental conditions in each test. This evaluation demonstrated the precision and reliability of our system in determining the ideal support from computer vision. We carried out 36 tests, each with a different parameter configuration, and each test was repeated 10 times with identical parameters to obtain more reliable results. The tests were run with a 640 x 480 camera and a GTX 1080Ti graphics card (GPU). Overall, our system achieves a success rate of 84.17% in determining the ideal support; this rate even reaches 100% when the detected objects' diameters differ by at least 20 mm. With a single detected object per image, the model takes on average 0.42 seconds to analyze an image, corresponding to 2.38 fps. This efficient per-object execution time keeps detection fluid while the camera moves, helping to avoid incorrect detections and/or segmentations.
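    The thesis does not give the CSP formula here; the sketch below encodes one plausible reading of the abstract, in which a detected object's diameter is compared against the gripper's opening interval and the best-scoring object is returned as the ideal support. The scoring function, gripper limits, and detection fields are illustrative assumptions.

        def csp_score(object_diameter_mm, grip_min_mm, grip_max_mm):
            """Assumed support-gripper concordance (CSP) score in [0, 1]:
            1.0 when the diameter sits at the center of the gripper's opening
            interval, 0.0 when it falls outside. The thesis' exact formula may differ."""
            if not (grip_min_mm <= object_diameter_mm <= grip_max_mm):
                return 0.0
            center = (grip_min_mm + grip_max_mm) / 2
            half_range = (grip_max_mm - grip_min_mm) / 2
            return 1.0 - abs(object_diameter_mm - center) / half_range

        def ideal_support(detections, grip_min_mm=20, grip_max_mm=60):
            """Return the detected object with the highest CSP score, or None."""
            scored = [(csp_score(d["diameter_mm"], grip_min_mm, grip_max_mm), d)
                      for d in detections]
            best_score, best = max(scored, key=lambda s: s[0])
            return best if best_score > 0 else None

        # Usage sketch: diameters would come from the detector's segmentation masks.
        print(ideal_support([{"name": "branch", "diameter_mm": 35},
                             {"name": "pole", "diameter_mm": 90}]))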