11 research outputs found

    Obstacle detection technique using multi sensor integration for small unmanned aerial vehicle

    Achieving a robust obstacle detection system for a small UAV is very challenging: due to size and weight constraints, only a limited set of detection sensors can be carried. Prior works focused on a single sensing device, either camera based or range-sensor based; however, each of these sensors has its own advantages and disadvantages in detecting obstacles. In this paper, a combination of both sensor types is proposed for a small-UAV obstacle detection system. A small lidar sensor is used as the initial detector and as the cue for image capture by the camera. Next, the SURF algorithm is applied to estimate obstacle sizes by searching for connected feature points in the image frame. Finally, a safe avoidance path for the UAV is determined from the exterior feature points of the estimated obstacle width. The proposed method was evaluated in real-time experiments in an indoor environment, in which we successfully detected and determined a safe avoidance path for the UAV on 6 obstacles of different sizes and textures, including a textureless obstacle
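The size-estimation and path-selection steps described above can be sketched as follows. This is a minimal illustration assuming a pinhole camera model; the function names, parameter values and corridor geometry are hypothetical, not taken from the paper:

```python
import numpy as np

def obstacle_width(keypoints_px, range_m, focal_px):
    """Estimate metric obstacle width from the exterior (leftmost/rightmost)
    feature points in the image, given the lidar range to the obstacle.

    keypoints_px : (N, 2) array of (x, y) pixel coordinates on the obstacle
    range_m      : lidar-measured distance to the obstacle, in metres
    focal_px     : camera focal length, in pixels
    """
    xs = keypoints_px[:, 0]
    width_px = xs.max() - xs.min()      # span between the exterior points
    # Pinhole model: metric size = pixel size * depth / focal length
    return width_px * range_m / focal_px

def safe_gap_centres(width_m, obstacle_centre_x_m, corridor_half_m, margin_m):
    """Candidate lateral offsets that clear the obstacle plus a safety
    margin, while staying inside the flyable corridor (hypothetical)."""
    half = width_m / 2 + margin_m
    left = obstacle_centre_x_m - half
    right = obstacle_centre_x_m + half
    return [g for g in (left, right) if abs(g) <= corridor_half_m]

# Example: feature points spanning 120 px, obstacle 3 m away, f = 600 px
pts = np.array([[260.0, 200.0], [320.0, 190.0], [380.0, 210.0]])
w = obstacle_width(pts, range_m=3.0, focal_px=600.0)   # 0.6 m
```

The exterior feature points bound the obstacle in the image, so only their horizontal extremes matter for the width estimate.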

    Controlling docking, altitude and speed in a circular high-roofed tunnel thanks to the optic flow

    The new robot we have developed, called BeeRotor, is a tandem rotorcraft that mimics optic-flow-based behaviors previously observed in flies and bees. This tethered miniature robot (80 g), which is autonomous in terms of its computational power requirements, is equipped with a 13.5-g quasi-panoramic visual system consisting of 4 individual visual motion sensors responding to the optic flow generated by photographs of natural scenes, thanks to the bio-inspired "time of travel" scheme. Based on recent findings on insects' sensing abilities and control strategies, the BeeRotor robot was designed to use optic flow to perform complex tasks such as ground and ceiling following while automatically adjusting its forward speed on the basis of the ventral or dorsal optic flow. In addition, the BeeRotor robot can perform tricky maneuvers such as automatic ceiling docking simply by regulating its dorsal or ventral optic flow in a high-roofed tunnel lined with natural scenes. Although it was built as a proof of concept, the BeeRotor robot is one step further towards a fully autonomous micro-helicopter capable of navigating mainly on the basis of optic flow
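The core of the optic-flow regulation strategy can be illustrated with a toy simulation: ventral optic flow is forward speed divided by height above ground, and the regulator commands a climb or descent to hold it at a setpoint. The gains, time step and setpoint below are illustrative assumptions, not BeeRotor's actual parameters:

```python
def simulate_of_regulation(v=1.0, h0=2.0, omega_set=1.0, kp=0.5,
                           dt=0.01, steps=2000):
    """Hold the ventral optic flow omega = v / h at omega_set by commanding
    a vertical speed proportional to the optic-flow error (a minimal sketch
    of an optic-flow regulator; ceiling following works the same way with
    the dorsal flow). Returns the final height."""
    h = h0
    for _ in range(steps):
        omega = v / h                       # ventral optic flow (rad/s)
        h_dot = kp * (omega - omega_set)    # too much flow -> climb
        h += h_dot * dt
    return h
```

Starting at 2 m with a 1 m/s forward speed and a 1 rad/s setpoint, the regulator settles at the equilibrium height h = v / omega_set = 1 m, which is why regulating optic flow simultaneously fixes the ratio of speed to clearance.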

    Obstacle avoidance based-visual navigation for micro aerial vehicles

    This paper describes an obstacle avoidance system for low-cost Unmanned Aerial Vehicles (UAVs) that uses vision as the principal source of information, through a monocular onboard camera. To detect obstacles, the proposed system compares the image obtained in real time from the UAV with a database of obstacles that must be avoided. Our proposal includes the Speeded Up Robust Features (SURF) point detector for fast obstacle detection and a control law to avoid the detected obstacles; our research also includes a path recovery algorithm. The method is attractive for compact MAVs in which no other sensors can be fitted. The system was tested in real time on a Micro Aerial Vehicle (MAV) to detect and avoid obstacles in an unknown, controlled environment, and we compared our approach with related works
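Matching live-image descriptors against an obstacle database is conventionally done with nearest-neighbour search plus Lowe's ratio test; the sketch below assumes SURF-like float descriptors and an illustrative ratio threshold (the paper's exact matching criterion is not specified here):

```python
import numpy as np

def ratio_test_matches(query_desc, db_desc, ratio=0.7):
    """Brute-force nearest-neighbour matching with Lowe's ratio test, the
    standard way to match SURF-style descriptors against a database.
    A match is kept only when the best distance is clearly smaller than
    the second-best, i.e. the match is distinctive.

    Returns a list of (query_index, db_index) pairs that pass the test."""
    matches = []
    for qi, q in enumerate(query_desc):
        d = np.linalg.norm(db_desc - q, axis=1)  # distances to all db entries
        order = np.argsort(d)
        best, second = d[order[0]], d[order[1]]
        if best < ratio * second:
            matches.append((qi, int(order[0])))
    return matches

# Toy 2-D "descriptors": the query clearly matches database entry 0
db = np.array([[0.0, 0.0], [10.0, 10.0], [20.0, 0.0]])
query = np.array([[0.1, 0.1]])
m = ratio_test_matches(query, db)
```

With real SURF output the descriptors would be 64- or 128-dimensional, but the matching logic is unchanged.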

    A two-directional 1-gram visual motion sensor inspired by the fly's eye

    Optic-flow-based autopilots for Micro-Aerial Vehicles (MAVs) need lightweight, low-power sensors to be able to fly safely through unknown environments. The new tiny 6-pixel visual motion sensor presented here meets these demanding requirements in terms of mass, size and power consumption. This 1-gram, low-power, fly-inspired sensor accurately gauges visual motion using only its 6-pixel array, tested with two different panoramas and illuminance conditions. The sensor's output results from a smart combination of the information collected by several 2-pixel Local Motion Sensors (LMSs), based on the "time of travel" scheme originally inspired by the common housefly's Elementary Motion Detector (EMD) neurons. The proposed sensory fusion method enables the new visual sensor to measure the visual angular speed and determine the main direction of the visual motion without any prior knowledge. By computing the median value of the output from several LMSs, we also obtain a more robust, more accurate and more frequently refreshed measurement of the 1-D angular speed
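In essence, the "time of travel" scheme divides the fixed inter-receptor angle by the measured travel time of a contrast feature between two neighbouring photoreceptors, and the median over several LMSs provides the fused output. The sketch below is a simplified model with illustrative numbers, not the sensor's actual signal processing:

```python
from statistics import median

def local_motion_speed(t_cross_a, t_cross_b, delta_phi_deg):
    """'Time of travel' scheme for one 2-pixel Local Motion Sensor: a
    contrast feature crosses photoreceptor A at t_cross_a and the
    neighbouring photoreceptor B at t_cross_b; angular speed is the
    inter-receptor angle divided by the travel time. Returns
    (speed_deg_per_s, direction), direction +1 for A-to-B motion."""
    dt = t_cross_b - t_cross_a
    direction = 1 if dt > 0 else -1
    return delta_phi_deg / abs(dt), direction

def fused_speed(lms_speeds):
    """Median fusion over several LMS outputs, giving a more robust and
    more frequently refreshed 1-D angular speed estimate."""
    return median(lms_speeds)

# A feature takes 25 ms to travel between receptors 4 degrees apart
speed, sign = local_motion_speed(0.100, 0.125, 4.0)
```

The median makes a single outlier LMS reading (e.g. from a low-contrast patch) harmless, which is the robustness gain the abstract describes.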

    Object detection technique for small unmanned aerial vehicle

    Obstacle detection and avoidance is desirable for UAVs, especially lightweight micro aerial vehicles, and is a challenging problem: because of payload constraints, only a limited number of sensors can be attached to the vehicle. Usually the sensors incorporated in such a system are either vision based (monocular or stereo camera) or laser based. However, each sensor has its own advantages and disadvantages, so we built an obstacle detection and avoidance system based on multi-sensor integration of a monocular camera and a LIDAR. On top of that, we combine the SURF algorithm with the Harris corner detector to determine the approximate size of the obstacles. In the initial experiment, we successfully detected and determined the sizes of 3 different obstacles. The differences in length between the real obstacles and our algorithm's estimates, about -0.4 to 3.6, are considered acceptable
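The Harris corner detector used alongside SURF scores each pixel with the response R = det(M) - k * trace(M)^2, computed from a smoothed matrix of image-gradient products. A minimal NumPy sketch follows; the 3x3 box smoothing and k = 0.04 are conventional choices, not values from the paper:

```python
import numpy as np

def harris_response(img, k=0.04):
    """Harris corner response computed from image gradients; large R marks
    corners, which supplement SURF points when estimating an obstacle's
    extent. `img` is a 2-D float array."""
    Iy, Ix = np.gradient(img)               # gradients along rows, cols
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy

    def box(a):
        # 3x3 box filter via edge-padded neighbourhood sums (sketch-grade)
        p = np.pad(a, 1, mode='edge')
        return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3)) / 9.0

    Sxx, Syy, Sxy = box(Ixx), box(Iyy), box(Ixy)
    det = Sxx * Syy - Sxy * Sxy
    trace = Sxx + Syy
    return det - k * trace * trace

# A step corner at (5, 5): strong response there, weak on the plain edge
img = np.zeros((10, 10))
img[5:, 5:] = 1.0
R = harris_response(img)
```

On the test image, the response is positive at the corner and non-positive along the straight edge, which is exactly the discrimination the detector provides.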

    Optimization of mobile robot speed control with obstacle avoidance based on viability theory

    The navigation efficiency of wheeled robots needs to be further improved. Although related research has proposed various approaches, most describe the relationship between the robot and the obstacle only roughly. Viability theory concerns the dynamic adaptation of evolutionary systems to their environment. Based on viability, we explore a method that involves the robot's dynamic model, environmental constraints and navigation control, and that can raise the efficiency of navigation. We treat the environment as line segments to reduce the computational difficulty of building the viability-condition constraints. Although there exist many control values that can drive the robot safely to the goal, it is necessary to build an optimization model to select a more efficient control value for navigation. Our simulation shows that viability theory can precisely describe the link between the robot's dynamics and the obstacle, and thus can help the robot achieve radically higher-speed navigation in an unknown environment
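The viability constraint described above — a state is viable only if the robot's dynamics still admit a safe evolution with respect to the line-segment obstacles — can be sketched as a stopping-distance test. The braking model and margin below are illustrative assumptions, not the paper's full optimization model:

```python
import math

def point_segment_distance(p, a, b):
    """Distance from point p to segment ab (the environment is treated as
    line segments, as in the paper)."""
    px, py = p
    ax, ay = a
    bx, by = b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
    t = max(0.0, min(1.0, t))               # clamp to the segment
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def is_viable(p, v, a_max, segments, margin=0.0):
    """A state (position p, speed v) is viable if the robot can still brake
    to a stop before any obstacle segment: stopping distance v^2 / (2 a_max)
    must not exceed the clearance minus a safety margin."""
    stop = v * v / (2.0 * a_max)
    return all(point_segment_distance(p, a, b) - margin > stop
               for a, b in segments)
```

An optimizer can then pick, among all controls whose successor states pass this test, the one that maximizes progress toward the goal — which is how viability enables aggressive yet safe speeds.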

    Spatial combination of sensor data deriving from mobile platforms for precision farming applications

    This thesis combines optical sensors on a ground and an aerial platform for field measurements in wheat, to identify nitrogen (N) levels, estimate biomass (BM) and predict yield. The Multiplex Research (MP) fluorescence sensor was used in wheat for the first time. The individual objectives were: (i) evaluation of the different sensors and sensor platforms available in Precision Farming (PF) for quantifying crop nutrition status; (ii) acquisition of ground and aerial sensor data with two ground spectrometers, an aerial spectrometer and a ground fluorescence sensor; (iii) development of effective post-processing methods for correcting the sensor data; (iv) analysis and evaluation of the sensors with regard to mapping biomass, yield and nitrogen content in the plant; and (v) yield simulation as a function of different sensor signals. The thesis contains three papers published in international peer-reviewed journals. The first publication is a literature review of sensor platforms used in agricultural research. Sensors and their applications were subdivided based on a detailed categorization model; strengths and weaknesses were evaluated, and research results gathered with aerial and ground platforms carrying different sensors were discussed. Autonomous robots and swarm technologies suitable for PF tasks were also reviewed. The second publication focuses on spectral and fluorescence sensors for BM, yield and N detection. The ground sensors were mounted on the Hohenheim research sensor platform Sensicle; a further spectrometer was installed in a fixed-wing Unmanned Aerial Vehicle (UAV). In this study, the sensors of the Sensicle and the UAV were used to determine plant characteristics and yield in three-year field trials at the research station Ihinger Hof, Renningen (Germany), an institution of the University of Hohenheim, Stuttgart (Germany). Winter wheat (Triticum aestivum L.) was sown on three research fields, with different N levels applied to each field. The measurements in the field were geo-referenced and logged with an absolute GPS accuracy of ±2.5 cm. The GPS data of the UAV were corrected based on the pitch and roll position of the UAV at each measurement. In the first step of the data analysis, the raw sensor data were post-processed and converted into indices and ratios relating to plant characteristics. The converted ground sensor data were analysed, and the correlation results were interpreted with respect to the dependent variables (DVs) BM weight, wheat yield and available N. The results showed significant positive correlations between the DVs and the Sensicle sensor data. For the third paper, the UAV sensor data were included in the evaluations. The UAV data analysis revealed only weakly significant results, for a single field in 2011; a multirotor UAV, allowing more precision and a higher payload, was considered a more suitable aerial platform. The ground sensors showed their strength through a close measuring distance to the plant and a smaller measurement footprint. The results of the two ground spectrometers showed significant positive correlations between yield and the indices from CropSpec, NDVI (Normalised Difference Vegetation Index) and REIP (Red-Edge Inflection Point). FERARI and SFR (Simple Fluorescence Ratio) of the MP fluorescence sensor were also chosen for the yield prediction model analysis. CropSpec and REIP correlated significantly with the available N. The BM weight correlated with REIP even at a very early growth stage (Z 31), and with SAVI (Soil-Adjusted Vegetation Index) at the ripening stage (Z 85). REIP, FERARI and SFR showed high correlations with the available N, especially in June and July. The ratios and signals of the MP sensor correlated highly significantly with BM weight above Z 85.
    Both ground spectrometers are suitable for data comparison and data combination with the active MP fluorescence sensor. Through a combination of fluorescence ratios and spectrometer indices, linear models for the prediction of wheat yield were generated, correlating significantly over the course of the vegetative period for the research field Lammwirt (LW) in 2012. The best model for field LW in 2012 was selected for cross-validation against the measurements of the fields Inneres Täle (IT) and Riech (RI) in 2011 and 2012; however, it was not significant. By exchanging only one spectral index for a fluorescence ratio in a similar linear model, significant correlations were obtained. This work successfully demonstrates the combination of different sensor ratios and indices for the detection of plant characteristics, offering better and more robust predictions and quantifications of field parameters without employing destructive methods. The MP sensor proved to be universally applicable, showing significant correlations with the investigated characteristics BM weight, wheat yield and available N
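Two of the vegetation indices named above have compact closed forms: NDVI is the normalised difference of near-infrared and red reflectance, and REIP is commonly computed by the Guyot-Baret four-band linear interpolation. The thesis does not state which exact REIP formulation its spectrometers used, so the version below is an assumption:

```python
def ndvi(nir, red):
    """Normalised Difference Vegetation Index from near-infrared and red
    reflectances (both in [0, 1])."""
    return (nir - red) / (nir + red)

def reip(r670, r700, r740, r780):
    """Red-Edge Inflection Point, Guyot-Baret linear-interpolation form:
    the wavelength (nm) where reflectance crosses the midpoint between the
    red minimum (670 nm) and the NIR plateau (780 nm), interpolated
    between the 700 nm and 740 nm bands."""
    midpoint = (r670 + r780) / 2.0
    return 700.0 + 40.0 * (midpoint - r700) / (r740 - r700)

# Typical healthy-wheat reflectances (illustrative values)
n = ndvi(0.5, 0.1)
r = reip(0.05, 0.1, 0.3, 0.45)
```

A REIP shift toward longer wavelengths tracks increasing chlorophyll, which is why it correlates with available N in the thesis.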

    Vision-Based navigation system for unmanned aerial vehicles

    The main objective of this dissertation is to provide Unmanned Aerial Vehicles (UAVs) with a robust navigation system that allows them to perform complex tasks autonomously and in real time. The proposed algorithms solve the navigation problem for outdoor as well as indoor environments, mainly based on visual information captured by monocular cameras. In addition, this dissertation presents the advantages of using visual sensors as the main source of data, or as a complement to other sensors, in order to improve the accuracy and robustness of sensing. The dissertation covers several research topics based on computer vision techniques: (I) Pose Estimation, which provides a solution for estimating the 6D pose of the UAV. This algorithm is based on the combination of the SIFT detector and the FREAK descriptor, which maintains the performance of feature-point matching while decreasing the computational time; the pose estimation problem is then solved through the decomposition of the world-to-frame and frame-to-frame homographies. (II) Obstacle Detection and Collision Avoidance, in which the UAV senses and detects the frontal obstacles situated in its path. The detection algorithm mimics human behavior in detecting approaching obstacles: it analyzes the size changes of the detected feature points, combined with the expansion ratios of the convex hull constructed around those feature points in consecutive frames. Then, by comparing the area ratio of the obstacle and the position of the UAV, the method decides whether the detected obstacle may cause a collision. Finally, the algorithm extracts the collision-free zones around the obstacle and, combining them with the tracked waypoints, the UAV performs the avoidance maneuver.
    (III) Navigation Guidance, which generates the waypoints that determine the flight path based on the environment and the situated obstacles, and then provides a strategy to follow the path segments efficiently and perform the flight maneuver smoothly. (IV) Visual Servoing, which offers different control solutions (Fuzzy Logic Control (FLC) and PID) based on the obtained visual information, in order to achieve flight stability, perform the correct maneuvers, avoid possible collisions and track the waypoints. All the proposed algorithms have been verified in real flights in both indoor and outdoor environments, taking visual conditions such as illumination and texture into consideration. The obtained results have been validated against other systems, such as the VICON motion capture system and DGPS in the case of the pose estimation algorithm. In addition, the proposed algorithms have been compared with several previous works in the state of the art, and the results prove the improvement in accuracy and robustness of the proposed algorithms. Finally, this dissertation concludes that visual sensors have the advantages of light weight and low consumption while providing reliable information, which makes them a powerful tool in navigation systems for increasing the autonomy of UAVs in real-world applications.
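The collision test in topic (II) — comparing the convex hull of matched feature points across consecutive frames — can be sketched as an area-expansion check. The hull is assumed to be given as an ordered vertex list, and the threshold value is illustrative:

```python
def polygon_area(pts):
    """Shoelace area of a convex hull given as an ordered (x, y) vertex
    list."""
    n = len(pts)
    s = 0.0
    for i in range(n):
        x1, y1 = pts[i]
        x2, y2 = pts[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

def is_approaching(hull_prev, hull_curr, expansion_threshold=1.2):
    """Looming cue: if the convex hull built around the matched feature
    points grows frame to frame beyond a threshold, the obstacle is treated
    as approaching on a possible collision course.
    Returns (approaching, area_ratio)."""
    ratio = polygon_area(hull_curr) / polygon_area(hull_prev)
    return ratio > expansion_threshold, ratio

# A hull that grows by 1.5x per side has 2.25x the area: approaching
square = [(0.0, 0.0), (2.0, 0.0), (2.0, 2.0), (0.0, 2.0)]
bigger = [(0.0, 0.0), (3.0, 0.0), (3.0, 3.0), (0.0, 3.0)]
near, ratio = is_approaching(square, bigger)
```

Because area grows with the square of apparent size, the area ratio is a more sensitive looming signal than the per-point size change alone.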