79 research outputs found

    Grasping, Perching, And Visual Servoing For Micro Aerial Vehicles

    Micro Aerial Vehicles (MAVs) have seen dramatic growth in the consumer market because of their ability to provide new vantage points for aerial photography and videography. However, there is little consideration for physical interaction with the environment surrounding them. Onboard manipulators are absent, and onboard perception, where present, is used to avoid obstacles and maintain a minimum distance from them. There are many applications, however, that would benefit greatly from aerial manipulation or flight in close proximity to structures. This work focuses on facilitating these types of close interactions between quadrotors and surrounding objects. We first explore high-speed grasping, enabling a quadrotor to quickly grasp an object while moving at a high relative velocity. Next, we discuss planning and control strategies, empowering a quadrotor to perch on vertical surfaces using a downward-facing gripper. Then, we demonstrate that such interactions can be achieved using only onboard sensors by incorporating vision-based control and vision-based planning. In particular, we show how a quadrotor can use a single camera and an Inertial Measurement Unit (IMU) to perch on a cylinder. Finally, we generalize our approach to objects in motion, presenting relative pose estimation and planning that enable tracking of a moving sphere using only an onboard camera and IMU.
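
    The final contribution, tracking a moving sphere of known size, can rely on standard pinhole geometry: the sphere's apparent radius in the image encodes its depth. A minimal sketch of that recovery, under the usual approximation that the sphere projects to a circle (function and parameter names are illustrative, not the thesis implementation):

```python
import numpy as np

def sphere_relative_position(center_px, radius_px, f, cx, cy, R):
    """Recover the camera-frame 3D position of a sphere of known radius R
    from its image projection, approximated as a circle.

    center_px : (u, v) pixel coordinates of the projected circle center
    radius_px : apparent radius in pixels
    f, cx, cy : pinhole intrinsics (focal length and principal point)
    R         : true sphere radius in meters
    """
    u, v = center_px
    Z = f * R / radius_px        # depth from apparent size
    X = (u - cx) * Z / f         # back-project the pixel ray
    Y = (v - cy) * Z / f
    return np.array([X, Y, Z])
```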

    Visual SLAM for Autonomous Navigation of MAVs

    This thesis focuses on developing onboard visual simultaneous localization and mapping (SLAM) systems to enable autonomous navigation of micro aerial vehicles (MAVs), which remains a challenging topic given the limited payload and computational capability of a typical MAV. In MAV applications, visual SLAM systems must be highly efficient, especially when other visual tasks have to be done in parallel. Furthermore, robust pose tracking is highly desired to enable safe autonomous navigation of an MAV in three-dimensional (3D) space. These challenges motivate the work in this thesis in the following aspects. Firstly, the problem of visual pose estimation for MAVs using an artificial landmark is addressed. An artificial neural network (ANN) is used to robustly recognize the visual marker in cluttered environments. A projective-geometry method then computes the relative pose from the retrieved geometry of the marker. The presented vision system can be used not only for pose control of MAVs, but also to provide accurate pose estimates to a monocular visual SLAM system, serving as an automatic initialization module in both indoor and outdoor environments. Secondly, autonomous landing on an arbitrarily textured landing site during autonomous navigation of an MAV is achieved. By integrating an efficient local-feature-based object detection algorithm within a monocular visual SLAM system, the MAV can search for the landing site autonomously along a predefined path and land on it once it has been found. The proposed monocular solution thus enables autonomous navigation in parallel with landing-site detection, relaxing the assumption made in conventional vision-guided landing systems that the landing site must lie inside the field of view (FOV) of the vision system before the landing task is initiated. The third problem addressed in this thesis is multi-camera visual SLAM for robust pose tracking of MAVs. Due to the limited FOV of a single camera, pose tracking using monocular visual SLAM can easily fail when the MAV navigates in unknown environments. Previous work addresses this problem mainly by fusing information from other sensors, such as an inertial measurement unit (IMU), to achieve robustness of the whole system; this does not improve the robustness of visual SLAM itself. This thesis investigates how to improve the pose tracking robustness of a visual SLAM system by utilizing multiple cameras. A mathematical analysis shows how measurements from multiple cameras should be integrated into the optimization of visual SLAM, and the resulting theory allows those measurements to be used both for robust pose tracking and for map updating. Furthermore, this multi-camera visual SLAM system is modified into a robust constant-time visual odometry. By integrating this visual odometry with an efficient back-end consisting of loop-closure detection and pose-graph optimization, a near-constant-time multi-camera visual SLAM system is achieved for autonomous navigation of MAVs in large-scale environments.
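
    The multi-camera integration can be summarized by a single optimization in which every camera contributes reprojection residuals to the same body pose through its fixed extrinsic calibration. A plausible form of this objective, assuming known body-to-camera transforms and standard pinhole projection (the notation is illustrative, not taken from the thesis):

```latex
\hat{T}_{WB} \;=\; \arg\min_{T_{WB}} \;
\sum_{c=1}^{C} \; \sum_{i \in \mathcal{M}_c}
\rho\!\left( \left\| \pi_c\!\left( T_{BC_c}^{-1}\, T_{WB}^{-1}\, \mathbf{p}_i^{W} \right) - \mathbf{u}_{ci} \right\|^2 \right)
```

    Here \(T_{WB}\) is the body pose in the world frame, \(T_{BC_c}\) the fixed pose of camera \(c\) in the body frame, \(\mathbf{p}_i^W\) a mapped landmark observed at pixel \(\mathbf{u}_{ci}\), \(\pi_c\) the projection function of camera \(c\), and \(\rho\) a robust kernel. Because all cameras constrain the same body pose, tracking can survive even when one camera loses its features.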

    Autonomous wireless self-charging for multi-rotor unmanned aerial vehicles

    Rotary-wing unmanned aerial vehicles (UAVs) can operate in confined spaces and hover over points of interest, but they have limited flight time and endurance. Conventional contact-based charging systems for UAVs have been used, but they require high landing accuracy for proper docking. Instead, this paper proposes an autonomous wireless battery charging system for UAVs in outdoor conditions. With the proposed system, UAVs can be charged wirelessly regardless of the yaw angle between the UAV and the wireless charging pad, which further reduces the control complexity of autonomous landing. The increased overall mission time eventually relaxes the limitations on payload and flight time. This paper proposes a cost-effective automatic recharging solution for UAVs in outdoor environments using wireless power transfer (WPT), together with a global positioning system (GPS) and vision-based closed-loop target detection and tracking system for precise landing of quadcopters outdoors. The system uses the onboard camera to detect the shape, color, and position of the defined target in the image frame. Based on the offset of the target from the center of the image frame, control commands are generated to track the target and keep it centered. A commercially available AR.Drone, equipped with a bottom-facing camera and GPS, was used to demonstrate the proposed concept. Experiments and analyses showed good performance, and an average WPT efficiency of about 75% was achieved.
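
    The tracking loop described here is essentially proportional control on the target's pixel offset from the image center. A minimal sketch under that reading, assuming a downward-facing camera and a body-frame velocity interface (gains, sign conventions, and limits are placeholders, not the paper's values):

```python
def track_target(target_px, frame_w, frame_h, kp=0.002, v_max=0.5):
    """Map the target's pixel offset from the image center to horizontal
    velocity commands that re-center it under the vehicle.

    target_px : (u, v) detected target position in the image
    returns   : (vx, vy) body-frame velocity commands in m/s
    """
    ex = target_px[0] - frame_w / 2.0   # + means target right of center
    ey = target_px[1] - frame_h / 2.0   # + means target below center
    # Proportional law, saturated to keep commands gentle near the pad.
    vx = max(-v_max, min(v_max, -kp * ey))  # move forward/back to null ey
    vy = max(-v_max, min(v_max,  kp * ex))  # move laterally to null ex
    return vx, vy
```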

    Enabling technologies for precise aerial manufacturing with unmanned aerial vehicles

    The construction industry is currently experiencing a revolution with automation techniques such as additive manufacturing and robot-enabled construction. Additive Manufacturing (AM) is a key technology that can offer productivity improvements in the construction industry by means of off-site prefabrication and on-site construction with automated systems. The key benefit is that building elements can be fabricated with less material and higher design freedom compared to traditional manual methods. Off-site prefabrication with AM has been investigated for some time already, but it has limitations in terms of the logistics of component transportation and its lack of design flexibility on-site. On-site construction with automated systems, such as static gantry systems and mobile ground robots performing AM tasks, can offer additional benefits over off-site prefabrication, but it needs further research before it becomes practical and economical. Ground-based automated construction systems also have the limitation that they cannot extend the construction envelope beyond their physical size. Using aerial robots to liberate the process from this constrained construction envelope has been suggested, albeit with technological challenges including precision of operation, uncertainty in environmental interaction, and energy efficiency. This thesis investigates methods of precise manufacturing with aerial robots. In particular, the work focuses on stabilisation mechanisms and origami-based structural elements that allow aerial robots to operate in challenging environments. An integrated aerial self-aligning delta manipulator has been utilised to increase the positioning accuracy of the aerial robots, and a Material Extrusion (ME) process has been developed for Aerial Additive Manufacturing (AAM). A 28-layer tower has been additively manufactured by aerial robots to demonstrate the feasibility of AAM. Rotorigami and a bioinspired landing mechanism demonstrate the ability to overcome uncertainty in environmental interaction, providing impact protection and improved robustness for UAVs. Design principles using tensile anchoring methods have been explored, enabling low-power aerial stabilisation. The results demonstrate that precise aerial manufacturing must consider not only the robotic aspects, such as flight control algorithms and mechatronics, but also material behaviour and environmental interaction as factors for its success.

    Precision Landing of a Quadrotor UAV on a Moving Target Using Low-Cost Sensors

    With the use of unmanned aerial vehicles (UAVs) becoming more widespread, a need for precise autonomous landings has arisen. In the maritime setting, precise autonomous landings will help provide a safe way to recover UAVs deployed from a ship. On land, numerous applications have been proposed for UAV and unmanned ground vehicle (UGV) teams where autonomous docking is required so that the UGVs can either recover or service a UAV in the field. Current state-of-the-art approaches rely on expensive inertial measurement sensors and RTK or differential GPS systems, a solution that is not practical for many UAV systems. This thesis proposes a framework for performing precision landings on a moving target using low-cost sensors. Vision from a downward-facing camera is used to track a target on the landing platform and generate high-quality relative pose estimates. The landing procedure consists of three stages. First, a rendezvous stage commands the quadrotor on a path to intercept the target. A target acquisition stage then ensures that the quadrotor is tracking the landing target. Finally, visual measurements of the relative pose to the landing target are used in the target tracking stage, where control and estimation are performed in a body-planar frame without the use of GPS or magnetometer measurements. A comprehensive overview of the control and estimation required to realize the three-stage landing approach is presented. Critical parts of the landing framework were implemented on an AscTec Pelican testbed. The AprilTag visual fiducial system is chosen as the landing target, and implementation details to improve the AprilTag detection pipeline are presented. Simulated and experimental results validate key portions of the landing framework. The novel relative estimation scheme is evaluated in an indoor positioning system. Tracking and landing on a moving target is demonstrated in an indoor environment, and outdoor tests validate the target tracking performance in the presence of wind.
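
    Since the AprilTag fiducial supplies the relative pose measurements, a detection step of the kind described can be sketched with the `pupil_apriltags` Python bindings (the library choice, tag family, and calibration inputs are assumptions for illustration, not the thesis code):

```python
from pupil_apriltags import Detector

detector = Detector(families="tag36h11")  # assumed tag family

def relative_pose_from_tag(gray, fx, fy, cx, cy, tag_size):
    """Return the camera-frame pose (R, t) of the first detected tag,
    or None when no tag is visible.

    gray     : 8-bit grayscale image from the downward-facing camera
    tag_size : physical edge length of the tag in meters
    """
    detections = detector.detect(
        gray,
        estimate_tag_pose=True,
        camera_params=(fx, fy, cx, cy),
        tag_size=tag_size,
    )
    if not detections:
        return None
    d = detections[0]
    return d.pose_R, d.pose_t  # rotation (3x3) and translation (3x1)
```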

    TOWARDS AUTONOMOUS VERTICAL LANDING ON SHIP-DECKS USING COMPUTER VISION

    The objective of this dissertation is to develop and demonstrate autonomous ship-board landing with computer vision. The problem is hard primarily due to the unpredictable, stochastic nature of deck motion. The work involves a fundamental understanding of how vision works, what is needed to implement it, how it interacts with aircraft controls, the necessary and sufficient hardware and software, how it differs from human vision, its limits, and finally the avenues of growth in the context of aircraft landing. The ship-deck motion dataset is provided by the U.S. Navy. This data is analyzed to gain fundamental understanding and is then used to replicate stochastic deck motion in a laboratory setting on a six-degrees-of-freedom motion platform, also called a Stewart platform. The method uses a shaping filter derived from the dataset to excite the platform. An autonomous quadrotor UAV is designed and fabricated for experimental testing of vision-based landing methods. The entire structure, avionics architecture, and flight controls for the aircraft are developed in-house, providing the flexibility and fundamental understanding needed for this research. A fiducial-based vision system is first designed for detection and tracking of the ship-deck. This is then utilized to design a tracking controller with the best possible bandwidth to track the deck with minimum error. Systematic experiments are conducted with static, sinusoidal, and stochastic motions to quantify the tracking performance. A feature-based vision system is designed next. Simple experiments are used to quantitatively and qualitatively evaluate the superior robustness of feature-based vision under various degraded visual conditions, including (1) partial occlusion, (2) illumination variation, (3) glare, and (4) water distortion. The weight and power penalties for using feature-based vision are also determined. The results show that it is possible to autonomously land on a ship-deck using computer vision alone. An autonomous aircraft can be constructed with only an IMU and visual odometry software running on a stereo camera. The aircraft then needs only a monocular, global-shutter, high-frame-rate camera as an extra sensor to detect the ship-deck and estimate its relative position. The relative velocity, however, needs to be derived by running a Kalman filter on the position signal. For the filter, knowledge of the disturbance/motion spectrum is not needed; a white-noise disturbance model is sufficient. For control, a minimum bandwidth of 0.15 Hz is required. For vision, a fiducial is not needed; a feature-rich landing area is all that is required. The limits of the algorithm are set by occlusion (80% tolerable), illumination (20,000 lux to 0.01 lux), angle of landing (up to 45 degrees), the 2D nature of features, and motion blur. Future research should extend the capability to 3D features and the use of event-based cameras. Feature-based vision is more versatile and human-like than fiducial-based vision, but at the cost of roughly 20 times higher computing power, which is increasingly available with modern processors. The goal is not imitation of nature but to derive inspiration from it and overcome its limitations. Feature-based landing opens a window towards emulating the best of human training and cognition, without the burden of latency, fatigue, and divided attention.
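
    The estimation recipe stated here, a Kalman filter on the vision-derived position with a white-noise disturbance model, can be sketched for one axis as a constant-velocity filter (noise magnitudes are placeholders):

```python
import numpy as np

def make_cv_filter(dt, q=1.0, r=0.05):
    """One-axis constant-velocity Kalman filter: state [position, velocity],
    driven by white acceleration noise, measuring position only."""
    F = np.array([[1.0, dt], [0.0, 1.0]])        # constant-velocity dynamics
    Q = q * np.array([[dt**4 / 4, dt**3 / 2],    # white-noise acceleration
                      [dt**3 / 2, dt**2]])
    H = np.array([[1.0, 0.0]])                   # only position is measured
    R = np.array([[r]])
    x = np.zeros((2, 1))
    P = np.eye(2)

    def step(z):
        nonlocal x, P
        x = F @ x                                # predict
        P = F @ P @ F.T + Q
        y = np.array([[z]]) - H @ x              # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
        x = x + K @ y                            # update
        P = (np.eye(2) - K @ H) @ P
        return x[0, 0], x[1, 0]                  # position, velocity estimates

    return step
```

    Feeding the filter the relative position at each camera frame yields a smoothed relative velocity without any model of the deck-motion spectrum, consistent with the finding above.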

    Autonomous High-Precision Landing on an Unmanned Surface Vehicle

    The main goal of this thesis is the development of an autonomous high-precision landing system for a UAV on an autonomous boat. In this dissertation, a collaborative method for the autonomous landing of Multi Rotor Vertical Takeoff and Landing (MR-VTOL) Unmanned Aerial Vehicles (UAVs) is presented. The majority of common UAV autonomous landing systems adopt an approach in which the UAV scans the landing zone for a predetermined pattern, establishes relative positions, and uses those positions to execute the landing. These techniques have shortcomings, such as extensive processing being carried out by the UAV itself, which requires substantial computational power. An additional issue is that most of these techniques only work while the UAV is already flying at low altitude, since the pattern's elements must be plainly visible to the UAV's camera. The methodology described throughout this dissertation is instead based on an RGB camera positioned in the landing zone and pointed up at the sky. Because the sky is a very static and homogeneous background, Convolutional Neural Networks and Inverse Kinematics approaches can be used to isolate and analyse the distinctive motion patterns the UAV presents. Following real-time visual analysis, a terrestrial or maritime robotic system can transmit orders to the UAV. The end result is a model-free technique, i.e., one not based on predetermined patterns, that can help the UAV perform its landing manoeuvre. The method is reliable enough to be used independently or in conjunction with more established techniques to create a more robust system. According to experimental simulation results derived from a dataset comprising three different films, the object detection neural network was able to detect the UAV in 91.57% of the assessed frames with a tracking error under 8%. A high-level relative position control system was also created that makes use of the idea of an approach zone to the helipad: every potential three-dimensional point within the zone corresponds to a UAV velocity command with a certain orientation and magnitude. The control system worked flawlessly, conducting the UAV's landing to within 6 cm of the target during testing in a simulated setting.
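
    The approach-zone controller maps every three-dimensional point inside the zone to a velocity command with a specific orientation and magnitude. One plausible such mapping, pointing the UAV at the helipad with distance-proportional speed (the gain, saturation, and geometry are assumptions, not the dissertation's exact law):

```python
import numpy as np

def approach_velocity(uav_pos, pad_pos, k=0.4, v_max=1.0):
    """Velocity command for a UAV inside the approach zone.

    uav_pos, pad_pos : 3D positions in a shared frame (meters)
    returns          : velocity vector aimed at the pad, with magnitude
                       proportional to distance and saturated at v_max
    """
    error = pad_pos - uav_pos
    dist = np.linalg.norm(error)
    if dist < 1e-6:
        return np.zeros(3)          # already on target
    speed = min(v_max, k * dist)    # slow down on final approach
    return speed * error / dist     # orientation: straight at the pad
```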