
    Transfer Learning-Based Crack Detection by Autonomous UAVs

    Full text link
    Unmanned Aerial Vehicles (UAVs) have recently shown strong performance in collecting visual data through autonomous exploration and mapping for building inspection. However, few studies address the post-processing of these data and their integration with autonomous UAVs, both of which are essential steps toward fully automated building inspection. In this regard, this work presents a decision-making tool for revisiting tasks in visual building inspection by autonomous UAVs. The tool fine-tunes a pretrained Convolutional Neural Network (CNN) for surface crack detection and offers an optional mechanism for planning revisits to pinpointed locations during inspection. It is integrated into a quadrotor UAV system that can autonomously navigate in GPS-denied environments and is equipped with onboard sensors and computers for autonomous localization, mapping, and motion planning. The integrated system is tested through simulations and real-world experiments. The results show that the system achieves crack detection and autonomous navigation in GPS-denied environments for building inspection.
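    As a rough illustration of the fine-tuning approach the abstract describes (not the authors' actual implementation), the sketch below adapts a pretrained ResNet-18 backbone to a binary crack / no-crack classifier. The choice of backbone, the dataset folder layout, and the hyperparameters are assumptions.

```python
# Hedged sketch: fine-tune a pretrained CNN for binary crack detection.
# ResNet-18, the dataset layout, and hyperparameters are illustrative assumptions,
# not the paper's actual configuration.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Replace the ImageNet classifier head with a 2-class (crack / no-crack) head.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)
model = model.to(device)

# Standard ImageNet-style preprocessing for the fine-tuning data.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# Hypothetical folder layout: crack_dataset/train/{crack,no_crack}/*.jpg
train_set = datasets.ImageFolder("crack_dataset/train", transform=preprocess)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()
for epoch in range(5):  # a small number of fine-tuning epochs
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```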

    Computational intelligence approaches to robotics, automation, and control [Volume guest editors]

    Get PDF
    No abstract available

    A survey on fractional order control techniques for unmanned aerial and ground vehicles

    Get PDF
    In recent years, numerous science and engineering applications of fractional calculus to the modeling and control of unmanned aerial vehicle (UAV) and unmanned ground vehicle (UGV) systems have been realized. The extra fractional-order derivative terms provide additional degrees of freedom for optimizing system performance. The review presented in this paper focuses on UAV and UGV control problems that have been addressed with fractional-order techniques over the last decade.
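    For readers unfamiliar with the "extra fractional-order derivative terms", the sketch below shows a standard Grünwald-Letnikov discretization of a fractional derivative and a fractional-order PI^λD^μ control law built on it. This is background illustration only; the gains, orders, and the toy error signal are assumptions, not values from the survey.

```python
# Hedged sketch of a fractional-order PI^lambda D^mu control law using a
# Grunwald-Letnikov approximation of the fractional derivative/integral.
# Gains, orders, and the toy signal are illustrative assumptions.
import numpy as np

def gl_weights(alpha, n):
    """Grunwald-Letnikov weights w_j = (-1)^j * C(alpha, j), j = 0..n."""
    w = np.empty(n + 1)
    w[0] = 1.0
    for j in range(1, n + 1):
        w[j] = w[j - 1] * (1.0 - (alpha + 1.0) / j)
    return w

def gl_fractional_derivative(signal, alpha, h):
    """Approximate D^alpha of a uniformly sampled signal at its last sample.
    alpha > 0 gives a fractional derivative, alpha < 0 a fractional integral."""
    n = len(signal) - 1
    w = gl_weights(alpha, n)
    # Weighted sum over the signal history, most recent sample first.
    return (h ** -alpha) * np.dot(w, np.asarray(signal)[::-1])

def fopid_control(error_history, h, kp=1.0, ki=0.5, kd=0.2, lam=0.8, mu=0.7):
    """Fractional-order PID: u = Kp*e + Ki*D^(-lambda) e + Kd*D^(mu) e."""
    e = np.asarray(error_history)
    return (kp * e[-1]
            + ki * gl_fractional_derivative(e, -lam, h)
            + kd * gl_fractional_derivative(e, mu, h))

# Toy usage: constant error history sampled every 10 ms.
errors = [1.0] * 50
print(fopid_control(errors, h=0.01))
```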

    Vision Science and Technology at NASA: Results of a Workshop

    Get PDF
    A broad review is given of vision science and technology within NASA. The subject is defined and its applications in both NASA and the nation at large are noted. A survey of current NASA efforts is given, noting strengths and weaknesses of the NASA program.

    Obstacle detection and avoidance using hybrid convolutional and recurrent neural networks

    Full text link
    [ES] The terms "sense and avoid" refer to the essential requirement for a pilot to "see and avoid" air-to-air collisions. To introduce UAVs into everyday use, this pilot function must be replicated by the UAV. In small UAVs, such as those intended for package delivery, there are limiting factors in terms of size, weight, and power, so cooperative systems such as TCAS or ADS-B cannot be used; instead, other systems such as electro-optical cameras are potential candidates for effective solutions. In this type of application, the solution must avoid not only other aircraft but also other obstacles that may be present near the surface, where the vehicle will most likely operate most of the time. In this project, hybrid neural networks have been used that combine convolutional neural networks as a first stage to classify objects, followed by recurrent neural networks to determine the sequence of events and act accordingly. This type of neural network is very recent and has not been investigated extensively to date, so the main objective of the project is to study whether it could be applied to "sense and avoid" systems. Openly available algorithms have been merged and improved to create a new model capable of working in this type of application. In addition to the detection and tracking algorithm, the collision avoidance part was also developed. An Extended Kalman Filter was used to estimate the relative range between an obstacle and the UAV. To decide on the possibility of a conflict, a stochastic approach was considered. Finally, a geometric avoidance manoeuvre was designed to be used if necessary. This second part was evaluated through a simulation that was also created for the project. Additionally, an experimental test was carried out to integrate the two parts of the algorithm. Measurement noise data were obtained experimentally, and it was verified that collisions could be avoided satisfactorily with that value. The main conclusions were that this new type of network runs faster than the more common neural-network-based methods, so further research on it is recommended. With the designed technique, multiple design parameters are available that can be adapted to different circumstances and factors. The main limitations found concern obstacle detection and relative range estimation, so it is suggested that future research be directed in these directions.

    [EN] A Sense and Avoid technique has been developed in this master thesis. A method for small UAVs that use only an electro-optical camera as the sensor has been considered. This method is based on a sophisticated processing solution using hybrid Convolutional and Recurrent Neural Networks. The aim is to study the feasibility of this kind of neural network in Sense and Avoid applications. First, the detection and tracking part of the algorithm is presented. Two models were used for this purpose: a Convolutional Neural Network called YOLO and a hybrid Convolutional and Recurrent Neural Network called Re3. After that, the collision avoidance part was designed. This consisted of the obstacle relative range estimation using an Extended Kalman Filter, the conflict probability calculation using an analytical approach, and the geometric avoidance manoeuvre generation. Both parts were assessed separately, by videos and simulations respectively, and then an experimental test was carried out to integrate them. Measurement noise was experimentally characterized, and simulations were performed again to check that collisions were avoided with the considered detection and tracking approach. Results showed that the considered approach can track objects faster than the most common neural-network-based computer vision methods. Furthermore, the conflict was successfully avoided with the proposed technique. Design parameters allow speed and manoeuvres to be adjusted according to the expected environment or the required level of safety. The main conclusion was that this kind of neural network could be successfully applied to Sense and Avoid systems. Vidal Navarro, D. (2018). Sense and avoid using hybrid convolutional and recurrent neural networks. Universitat Politècnica de València. http://hdl.handle.net/10251/142606
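    The abstract mentions estimating the obstacle's relative range with an Extended Kalman Filter fed by camera observations. The sketch below is a minimal bearing-only EKF in that spirit; the state layout, constant-velocity motion model, noise values, and initial guess are all assumptions, not the thesis' actual filter.

```python
# Hedged sketch: Extended Kalman Filter estimating the relative state of an
# obstacle from bearing-only camera measurements. State layout, motion model,
# and noise values are illustrative assumptions.
import numpy as np

class BearingOnlyEKF:
    def __init__(self, dt=0.1):
        self.x = np.array([50.0, 10.0, -5.0, 0.0])    # [px, py, vx, vy] initial guess
        self.P = np.diag([100.0, 100.0, 25.0, 25.0])  # initial covariance
        self.F = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], dtype=float)  # constant-velocity model
        self.Q = 0.1 * np.eye(4)      # process noise (assumed)
        self.R = np.array([[0.01]])   # bearing noise (assumed, rad^2)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q

    def update(self, bearing):
        px, py = self.x[0], self.x[1]
        r2 = px ** 2 + py ** 2
        H = np.array([[-py / r2, px / r2, 0.0, 0.0]])  # Jacobian of atan2(py, px)
        innovation = np.array([bearing - np.arctan2(py, px)])
        innovation = (innovation + np.pi) % (2 * np.pi) - np.pi  # wrap to [-pi, pi)
        S = H @ self.P @ H.T + self.R
        K = self.P @ H.T @ np.linalg.inv(S)
        self.x = self.x + K @ innovation
        self.P = (np.eye(4) - K @ H) @ self.P

    def relative_range(self):
        return float(np.hypot(self.x[0], self.x[1]))

# Toy usage with a fixed bearing measurement per step.
ekf = BearingOnlyEKF()
for _ in range(20):
    ekf.predict()
    ekf.update(bearing=0.2)
print(ekf.relative_range())
```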

    Fast, Autonomous Flight in GPS-Denied and Cluttered Environments

    Full text link
    One of the most challenging tasks for a flying robot is to autonomously navigate between target locations quickly and reliably while avoiding obstacles in its path, with little to no a priori knowledge of the operating environment. This challenge is addressed in the present paper. We describe the system design and software architecture of our proposed solution, and showcase how all the distinct components can be integrated to enable smooth robot operation. We provide critical insight on hardware and software component selection and development, and present results from extensive experimental testing in real-world warehouse environments. Experimental testing reveals that our proposed solution can deliver fast and robust aerial robot autonomous navigation in cluttered, GPS-denied environments. Comment: Pre-peer-reviewed version of the article accepted in the Journal of Field Robotics.

    Fusion Based Safety Application for Pedestrian Detection with Danger Estimation

    Get PDF
    Proceedings of: 14th International Conference on Information Fusion (FUSION 2011), Chicago, Illinois, USA, 5-8 July 2011. Road safety applications require the most reliable data. In recent years, data fusion has become one of the main technologies for Advanced Driver Assistance Systems (ADAS), overcoming the limitations of using the available sensors in isolation and fulfilling demanding safety requirements. In this paper, a real application of data fusion for road safety, pedestrian detection, is presented. Two sets of vehicle-mounted sensors, a laser scanner and a stereovision system, are used to detect pedestrians in urban environments. Both systems are mounted on the automobile research platform IVVI 2.0 to test the algorithms in real situations. The safety issues involved in developing this fusion application are described. Context information such as vehicle velocity and GPS data is also used to provide a danger estimate for the detected pedestrians. This work was supported by the Spanish Government through the Cicyt projects FEDORA (grant TRA2010-20225-C03-01) and VIDAS-Driver (grant TRA2010-21371-C03-02).
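    To make the fusion-plus-danger-estimation idea concrete, the sketch below pairs laser and vision detections with a simple nearest-neighbour gate and scores danger from a crude time-to-collision using the ego speed. The association scheme, gate, thresholds, and function names are hypothetical illustrations, not the paper's actual method.

```python
# Hedged sketch of detection-level fusion and danger estimation for pedestrians.
# The nearest-neighbour association and the time-to-collision danger metric are
# illustrative assumptions, not the paper's scheme.
import math

def fuse_detections(laser_dets, vision_dets, gate=1.0):
    """Pair laser and vision detections (x, y in metres, vehicle frame) that lie
    within `gate` metres of each other; the fused position is their average."""
    fused = []
    for lx, ly in laser_dets:
        best, best_d = None, gate
        for vx, vy in vision_dets:
            d = math.hypot(lx - vx, ly - vy)
            if d < best_d:
                best, best_d = (vx, vy), d
        if best is not None:
            fused.append(((lx + best[0]) / 2.0, (ly + best[1]) / 2.0))
    return fused

def danger_level(pedestrian_xy, ego_speed_mps):
    """Crude danger score from time-to-collision (longitudinal distance / speed)."""
    x, _ = pedestrian_xy
    if ego_speed_mps <= 0.1:
        return "low"
    ttc = x / ego_speed_mps
    if ttc < 1.5:
        return "high"
    if ttc < 3.0:
        return "medium"
    return "low"

# Toy usage: one pedestrian seen by both sensors, ego vehicle at 10 m/s.
for ped in fuse_detections([(12.0, 1.1)], [(11.7, 0.9)]):
    print(ped, danger_level(ped, ego_speed_mps=10.0))
```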

    Vision-based localization methods under GPS-denied conditions

    Full text link
    This paper reviews vision-based localization methods in GPS-denied environments and classifies the mainstream methods into Relative Vision Localization (RVL) and Absolute Vision Localization (AVL). For RVL, we discuss the broad application of optical flow in feature-extraction-based Visual Odometry (VO) solutions and introduce advanced optical flow estimation methods. For AVL, we review recent advances in Visual Simultaneous Localization and Mapping (VSLAM) techniques, from optimization-based methods to Extended Kalman Filter (EKF) based methods. We also introduce the application of offline map registration and lane vision detection schemes to achieve Absolute Visual Localization. This paper compares the performance and applications of mainstream methods for visual localization and provides suggestions for future studies. Comment: 32 pages, 15 figures.
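    As a small illustration of the feature-based Visual Odometry family the review covers, the sketch below computes one frame-to-frame relative pose with OpenCV: ORB features, brute-force matching, an essential matrix with RANSAC, and pose recovery. The camera intrinsics and the image paths are placeholders, and the monocular translation is only recovered up to scale.

```python
# Hedged sketch of one feature-based visual-odometry step: ORB features,
# descriptor matching, and relative pose from the essential matrix.
# Camera intrinsics and image paths are placeholders.
import cv2
import numpy as np

K = np.array([[718.0, 0.0, 320.0],   # assumed pinhole intrinsics
              [0.0, 718.0, 240.0],
              [0.0, 0.0, 1.0]])

def relative_pose(img_prev, img_curr):
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(img_prev, None)
    kp2, des2 = orb.detectAndCompute(img_curr, None)

    # Brute-force Hamming matching with cross-check to reject weak matches.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # Essential matrix with RANSAC, then recover rotation R and unit translation t.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t  # translation is only up to scale in monocular VO

# Toy usage with two consecutive grayscale frames (paths are placeholders).
frame0 = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
frame1 = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)
R, t = relative_pose(frame0, frame1)
print(R, t)
```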