
    Insect inspired visual motion sensing and flying robots

    Flying insects are masters of visual motion sensing: they use dedicated motion-processing circuits at low energy and computational cost. Building on observations of insect visual guidance, we developed visual motion sensors and bio-inspired autopilots dedicated to flying robots. Optic flow-based visuomotor control systems have been implemented on an increasingly large number of sighted autonomous robots. In this chapter, we present how we designed and constructed local motion sensors and how we implemented bio-inspired visual guidance schemes on board several micro-aerial vehicles. A hyperacute sensor, in which retinal micro-scanning movements are performed by a small piezo-bender actuator, was mounted onto a miniature aerial robot. The OSCAR II robot is able to track a moving target accurately by exploiting the micro-scanning movement imposed on its eye's retina. We also present two interdependent control schemes: one driving the eye's angular position relative to the robot's body, and one driving the robot's body angular position with respect to a visual target, without any knowledge of the robot's orientation in the global frame. This "steering-by-gazing" control strategy, implemented on this lightweight (100 g) miniature sighted aerial robot, demonstrates the effectiveness of this biomimetic visual/inertial heading control strategy.
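The steering-by-gazing idea can be sketched as two coupled proportional loops: one re-centres the target on the retina, the other steers the body toward the current gaze direction. The gains, function names, and toy point-target setup below are illustrative assumptions, not the robot's actual controller:

```python
# Sketch of a "steering-by-gazing" controller: two coupled proportional
# loops. Gains and the point-target setup are illustrative assumptions.

def steering_by_gazing_step(retinal_error, eye_angle, k_eye=4.0, k_head=1.5):
    """retinal_error: target offset on the retina (rad);
    eye_angle: eye orientation relative to the body (rad).
    Returns (eye_rate, body_yaw_rate)."""
    eye_rate = k_eye * retinal_error      # centre the target on the retina
    body_yaw_rate = k_head * eye_angle    # steer the body toward the gaze
    return eye_rate, body_yaw_rate

# Toy simulation: a fixed target sits 0.3 rad from the initial heading.
target_bearing = 0.3
eye = heading = 0.0
dt = 0.01
for _ in range(1000):
    retinal_error = (target_bearing - heading) - eye
    eye_rate, yaw_rate = steering_by_gazing_step(retinal_error, eye)
    eye += eye_rate * dt
    heading += yaw_rate * dt
# heading converges toward the target bearing with the eye re-centred,
# even though the robot never uses its orientation in the global frame.
```

The point of the coupling is that the body loop consumes only the eye-in-body angle, which is measurable on board, so no global heading estimate is needed.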

    Taking Inspiration from Flying Insects to Navigate inside Buildings

    These days, flying insects are seen as genuinely agile micro air vehicles fitted with smart sensors and parsimonious in their use of brain resources. They are able to visually navigate in unpredictable and GPS-denied environments. Understanding how such tiny animals work would help engineers to address various issues relating to drone miniaturization and navigation inside buildings. To turn a drone of ~1 kg into a robot, miniaturized conventional avionics can be employed; however, this results in a loss of flight autonomy. On the other hand, turning a drone with a mass between ~1 g (or less) and ~500 g into a robot requires an innovative approach that takes inspiration from flying insects, both with regard to their flapping-wing propulsion system and to their sensory system, based mainly on motion vision, in order to avoid obstacles in three dimensions or to navigate on the basis of visual cues. This chapter provides a snapshot of the current state of the art in the field of bioinspired optic flow sensors and optic flow-based direct feedback loops applied to micro air vehicles flying inside buildings.
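One optic flow-based direct feedback loop of the kind surveyed here is the classic corridor-centring rule: translational optic flow scales as speed over distance, so steering to equalize left and right flow keeps the vehicle near the corridor midline. The gain and geometry below are hypothetical, for illustration only:

```python
# Classic optic-flow balance rule for corridor centring (illustrative
# gain and geometry; not the chapter's exact control scheme).

def lateral_flow(speed, distance):
    """Translational optic flow (rad/s) seen abeam a wall."""
    return speed / distance

def centring_yaw_command(speed, d_left, d_right, k=0.5):
    """Turn away from the side where optic flow is stronger."""
    imbalance = lateral_flow(speed, d_left) - lateral_flow(speed, d_right)
    return -k * imbalance

# 1 m from the left wall, 3 m from the right, flying at 2 m/s:
cmd = centring_yaw_command(2.0, 1.0, 3.0)
# left flow (2.0 rad/s) exceeds right flow (~0.67 rad/s), so the command
# is negative: turn toward the right wall and restore the balance.
```

Because the rule only needs flow ratios, it works without any range sensor or GPS, which is what makes it attractive for navigation inside buildings.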

    Distributed Control for Collective Behaviour in Micro-unmanned Aerial Vehicles

    The work presented herein focuses on the design of distributed autonomous controllers for collective behaviour of Micro-unmanned Aerial Vehicles (MAVs). Two alternative approaches to this topic are introduced: one based upon the Evolutionary Robotics (ER) paradigm, the other upon flocking principles. Three computer simulators have been developed in order to carry out the required experiments, all of them focusing on the modelling of fixed-wing aircraft flight dynamics. The employment of fixed-wing aircraft, rather than the omni-directional robots typically used in collective robotics, significantly increases the complexity of the challenges that an autonomous controller has to face. This is mostly due to the strict motion constraints of fixed-wing platforms, which demand a high degree of accuracy from the controller. Concerning the ER approach, the experimental setups elaborated have resulted in controllers evolved in simulation with the following capabilities: (1) navigation across unknown environments, (2) obstacle avoidance, (3) tracking of a moving target, and (4) execution of cooperative and coordinated behaviours based on implicit communication strategies. The design methodology based upon flocking principles has involved tests in computer simulation and subsequent experimentation on real-world robotic platforms. A customised implementation of Reynolds' flocking algorithm has been developed and successfully validated through flight tests performed with the swinglet MAV. It has notably been demonstrated how the Evolutionary Robotics approach can be successfully extended to the domain of fixed-wing aerial robotics, which has received little attention in the past. The investigations performed have also shown that complex, real-physics-based computer simulators are not a compulsory requirement when approaching the domain of aerial robotics, as long as proper autopilot systems (taking care of the "reality gap" issue) are used on the real robots. EOARD (European Office of Aerospace Research & Development), euCognitio
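The customised flocking controller itself is not spelled out in the abstract; the following is a generic sketch of Reynolds' three rules (separation, alignment, cohesion) on 2-D point-mass agents, with illustrative weights. A fixed-wing MAV would further clamp the result to its minimum airspeed and bounded turn rate:

```python
# Generic sketch of Reynolds' three flocking rules on 2-D point-mass
# agents; the weights and radius are illustrative, and a fixed-wing MAV
# would additionally enforce its airspeed and turn-rate limits.
import math

def flocking_accel(me, neighbours, r_sep=2.0, w_sep=1.5, w_ali=1.0, w_coh=0.5):
    """me = (pos, vel); neighbours = list of (pos, vel); all 2-D tuples."""
    (px, py), (vx, vy) = me
    sep = [0.0, 0.0]; ali = [0.0, 0.0]; coh = [0.0, 0.0]
    for (qx, qy), (ux, uy) in neighbours:
        dx, dy = qx - px, qy - py
        dist = math.hypot(dx, dy)
        if 0 < dist < r_sep:                    # separation: move away
            sep[0] -= dx / dist; sep[1] -= dy / dist
        ali[0] += ux; ali[1] += uy              # alignment: match velocity
        coh[0] += qx; coh[1] += qy              # cohesion: head for centroid
    n = len(neighbours)
    if n:
        ali = [ali[0] / n - vx, ali[1] / n - vy]
        coh = [coh[0] / n - px, coh[1] / n - py]
    return (w_sep * sep[0] + w_ali * ali[0] + w_coh * coh[0],
            w_sep * sep[1] + w_ali * ali[1] + w_coh * coh[1])

# Agent at the origin flying along +x, flanked by two neighbours moving
# along +y: separation and cohesion cancel, alignment pulls it toward +y.
accel = flocking_accel(((0.0, 0.0), (1.0, 0.0)),
                       [((1.0, 0.0), (0.0, 1.0)),
                        ((-1.0, 0.0), (0.0, 1.0))])
```

Each rule is purely local, which is what makes the behaviour distributable across MAVs with only neighbour-to-neighbour sensing or communication.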

    Cooperation of unmanned systems for agricultural applications: A theoretical framework

    Agriculture 4.0 comprises a set of technologies that combines sensors, information systems, enhanced machinery, and informed management with the objective of optimising production by accounting for variabilities and uncertainties within agricultural systems. Autonomous ground and aerial vehicles can lead to favourable improvements in management by performing in-field tasks in a time-effective way. In particular, greater benefits can be achieved by allowing cooperation and collaborative action among unmanned vehicles, both aerial and ground, to perform in-field operations precisely and efficiently. In this work, the preliminary and crucial step of analysing and understanding the technical and methodological challenges concerning the main problems involved is performed. An overview of the agricultural scenarios that can benefit from using collaborative machines and the corresponding cooperative schemes typically adopted in this framework is presented. A collection of kinematic and dynamic models for different categories of autonomous aerial and ground vehicles is provided, which represents a crucial step in understanding the vehicles' behaviour when full autonomy is desired. Lastly, a collection of the state-of-the-art technologies for the autonomous guidance of drones is provided, summarising their peculiar characteristics and highlighting their advantages and shortcomings, with a specific focus on the Agriculture 4.0 framework. A companion paper reports the application of some of these techniques in a complete case study in sloped vineyards, applying the multi-phase collaborative scheme introduced here.
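As an example of the kinematic models such a collection covers, the unicycle model is the simplest description of a ground vehicle; the numbers below are arbitrary and the snippet is only a sketch, not the paper's formulation:

```python
# Unicycle kinematic model, the simplest of the ground-vehicle models
# such a survey covers (arbitrary example numbers):
#   x' = v cos(theta),  y' = v sin(theta),  theta' = omega
import math

def unicycle_step(x, y, theta, v, omega, dt):
    """One forward-Euler step of the unicycle kinematics."""
    return (x + v * math.cos(theta) * dt,
            y + v * math.sin(theta) * dt,
            theta + omega * dt)

# Drive straight for 1 s at 1 m/s with zero turn rate:
state = (0.0, 0.0, 0.0)
for _ in range(100):
    state = unicycle_step(*state, v=1.0, omega=0.0, dt=0.01)
# the vehicle ends up roughly 1 m down the x axis, heading unchanged
```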

    Intelligent flight control systems

    The capabilities of flight control systems can be enhanced by designing them to emulate functions of natural intelligence. Intelligent control functions fall into three categories. Declarative actions involve decision-making, providing models for system monitoring, goal planning, and system/scenario identification. Procedural actions concern skilled behavior and have parallels in guidance, navigation, and adaptation. Reflexive actions are spontaneous, inner-loop responses for control and estimation. Intelligent flight control systems acquire knowledge of the aircraft and its mission and adapt to changes in the flight environment. Cognitive models form an efficient basis for integrating outer-loop/inner-loop control functions and for developing robust parallel-processing algorithms.

    Aerial Vehicles

    This book contains 35 chapters written by experts in developing techniques for making aerial vehicles more intelligent, more reliable, more flexible in use, and safer in operation. It will also serve as an inspiration for further improvement of the design and application of aerial vehicles. The advanced techniques and research described here may also be applicable to other high-tech areas such as robotics, avionics, vetronics, and space.

    Generating Road Network Graph with Vision-Based Unmanned Vehicle

    As technology advances and becomes cheaper, robotic vehicles have gained a large number of applications. Their use is also spreading because they are getting smaller, lighter, and easier to build. In this paper we present a simple and effective way to map a road network with the help of a driverless vehicle. Our approach consists of only three parts: vision-based segmentation, angle variation, and travelled distance. A video camera attached to a Lego® NXT Mindstorm vehicle guides it, by image segmentation using the Matlab® Image Processing Toolbox, along a road network represented by black tape on a white floor. The algorithm makes the vehicle travel the whole road network, memorizing the main coordinates needed to identify all crossroads by keeping track of the travelled distance and the current heading angle. The crossroads and road ends are the nodes of the graph. Several simulations were performed, and the modelling proved successful in this small-scale setting. Consequently, there are good chances that driverless cars and UAVs can also make use of these strategies to map road networks. The algorithm presented in this paper is useful when there is no localization signal such as GPS, for example in navigation on water, in tunnels, or inside buildings. Faculty Sponsor: Francisco de Assis Zampiroll
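The graph bookkeeping behind such an approach can be sketched as dead reckoning from travelled distance and heading, with a node recorded at each crossroad or road end. Class names and the merge threshold below are illustrative, not the paper's implementation:

```python
# Sketch of the graph bookkeeping: dead-reckon the pose from travelled
# distance and heading, and record a node at each crossroad or road end.
# Names and thresholds are illustrative, not the paper's implementation.
import math

class RoadGraph:
    def __init__(self):
        self.nodes = []    # (x, y) node coordinates
        self.edges = []    # (node_index_a, node_index_b)

    def add_node(self, x, y, merge_radius=0.1):
        """Reuse an existing node if one lies within merge_radius,
        so revisiting a crossroad does not duplicate it."""
        for i, (nx, ny) in enumerate(self.nodes):
            if math.hypot(nx - x, ny - y) < merge_radius:
                return i
        self.nodes.append((x, y))
        return len(self.nodes) - 1

    def add_edge(self, a, b):
        if a != b and (a, b) not in self.edges and (b, a) not in self.edges:
            self.edges.append((a, b))

# L-shaped road: 1 m forward, a 90-degree left turn, then 1 m forward.
g = RoadGraph()
x = y = heading = 0.0
prev = g.add_node(x, y)                  # road start
for distance, turn in [(1.0, math.pi / 2), (1.0, 0.0)]:
    x += distance * math.cos(heading)
    y += distance * math.sin(heading)
    node = g.add_node(x, y)              # crossroad or road end
    g.add_edge(prev, node)
    prev = node
    heading += turn
# g.nodes is roughly [(0, 0), (1, 0), (1, 1)]; g.edges is [(0, 1), (1, 2)]
```

Merging nearby nodes is what lets dead reckoning close loops despite small odometry drift, as long as the drift stays below the merge radius.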

    Using learning from demonstration to enable automated flight control comparable with experienced human pilots

    Modern autopilots fall under the domain of control theory, which typically utilizes Proportional-Integral-Derivative (PID) controllers that can provide relatively simple autonomous control of an aircraft, such as maintaining a certain trajectory. However, PID controllers cannot cope with uncertainties due to their non-adaptive nature. In addition, robustness issues in the modern autopilots of airliners have contributed to several air catastrophes. The aviation industry is therefore seeking solutions that would enhance safety. A potential solution is to develop intelligent autopilots that can learn how to pilot aircraft in a manner comparable with experienced human pilots. This work proposes the Intelligent Autopilot System (IAS), which provides a comprehensive level of autonomy and intelligent control to the aviation industry. The IAS learns piloting skills by observing experienced teachers while they provide demonstrations in simulation. A robust Learning from Demonstration approach is proposed in which human pilots demonstrate the task to be learned in a flight simulator while training datasets are captured. The datasets are then used by Artificial Neural Networks (ANNs) to generate control models automatically. The control models imitate the skills of the experienced pilots when performing the different piloting tasks, while handling flight uncertainties such as severe weather conditions and emergency situations. Experiments show that the IAS performs the learned skills and tasks with high accuracy even after being presented with limited examples. This suits the proposed approach, which relies on many single-hidden-layer ANNs instead of one or a few large deep ANNs, since the latter produce a black box that cannot be explained to the aviation regulators.
    The results demonstrate that the IAS is capable of imitating low-level sub-cognitive skills, such as rapid and continuous stabilization attempts in stormy weather conditions, and high-level strategic skills, such as the sequence of sub-tasks necessary to take off, land, and handle emergencies.
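The core ingredient, a single-hidden-layer ANN fitted to captured demonstration pairs, can be sketched in miniature. The toy "demonstration" below (elevator command as a saturated linear function of pitch error), the network size, and the learning rate are all hypothetical, not the IAS's actual data or architecture:

```python
# Miniature sketch of the IAS ingredient: a single-hidden-layer ANN fitted
# to (flight state -> pilot command) demonstration pairs. The dataset,
# network size, and learning rate are all hypothetical.
import math, random

random.seed(0)
H = 8                                           # hidden tanh units
w1 = [random.uniform(-0.5, 0.5) for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-0.5, 0.5) for _ in range(H)]
b2 = 0.0

def forward(x):
    """One input -> hidden tanh layer -> one linear output."""
    h = [math.tanh(w1[j] * x + b1[j]) for j in range(H)]
    return sum(w2[j] * h[j] for j in range(H)) + b2, h

# Captured "demonstration": elevator = clip(0.8 * pitch_error, -1, 1).
data = [(e / 10.0, max(-1.0, min(1.0, 0.8 * e / 10.0)))
        for e in range(-10, 11)]

lr = 0.05
for _ in range(2000):                           # plain SGD backpropagation
    for x, target in data:
        y, h = forward(x)
        err = y - target
        for j in range(H):
            grad_h = err * w2[j] * (1.0 - h[j] ** 2)
            w2[j] -= lr * err * h[j]
            w1[j] -= lr * grad_h * x
            b1[j] -= lr * grad_h
        b2 -= lr * err

mse = sum((forward(x)[0] - t) ** 2 for x, t in data) / len(data)
# after training, the tiny network reproduces the demonstrated command
```

Keeping each network this small and single-purpose is what makes the learned mapping inspectable, which is the explainability argument the abstract makes against one large deep network.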

    Autonomous UAV Battery Swapping

    One of the main hindrances of unmanned aerial vehicle (UAV) technology is power constraints. One way to alleviate some power constraints would be for two UAVs to exchange batteries while both are in flight. Autonomous mid-air battery swapping would expand the scope of UAV technology by allowing for indefinite flight times and longer missions. A single-board computer will control each UAV's flight software, responding to inputs to align the two UAVs with each other mid-flight. When the two UAVs have joined, mechanical components will exchange the depleted battery on the worker UAV for a freshly charged battery from the battery supply UAV. After the exchange, the drones will detach from each other, and the worker UAV will resume its mission while the battery supply UAV returns to the ground control station.