10 research outputs found

    Behaviour Trees for Evolutionary Robotics: Reducing the Reality Gap

    No full text
    Evolutionary Robotics allows robots with limited sensors and processing to tackle complex tasks by means of sensory-motor coordination. In this paper we show the first application of the Behaviour Tree framework to a real robotic platform using the Evolutionary Robotics methodology. This framework is used to improve the intelligibility of the emergent robotic behaviour as compared to the traditional Neural Network formulation. As a result, the behaviour is easier to comprehend and manually adapt when crossing the reality gap from simulation to reality. This functionality is shown by performing real-world flight tests with the 20-gram DelFly Explorer flapping wing UAV equipped with a 4-gram onboard stereo vision system. The experiments show that the DelFly can fully autonomously search for and fly through a window with only its onboard sensors and processing. The success rate of the learnt behaviour in simulation is 88% and the corresponding real-world performance is 54% after user adaptation. Although this leaves room for improvement, it is higher than the 46% success rate from a tuned user-defined controller.
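    As a rough illustration of the Behaviour Tree framework the abstract refers to, here is a minimal Python sketch. The node types (Sequence, Selector, Condition, Action) are the standard ones; the example policy and all names are hypothetical and are not taken from the evolved DelFly tree.

```python
class Node:
    def tick(self, bb):
        raise NotImplementedError

class Condition(Node):
    """Leaf that checks a predicate on the blackboard."""
    def __init__(self, predicate):
        self.predicate = predicate
    def tick(self, bb):
        return "SUCCESS" if self.predicate(bb) else "FAILURE"

class Action(Node):
    """Leaf that applies a side effect to the blackboard."""
    def __init__(self, effect):
        self.effect = effect
    def tick(self, bb):
        self.effect(bb)
        return "SUCCESS"

class Sequence(Node):
    """Ticks children in order; fails on the first failure."""
    def __init__(self, *children):
        self.children = children
    def tick(self, bb):
        for c in self.children:
            status = c.tick(bb)
            if status != "SUCCESS":
                return status
        return "SUCCESS"

class Selector(Node):
    """Ticks children in order; succeeds on the first success."""
    def __init__(self, *children):
        self.children = children
    def tick(self, bb):
        for c in self.children:
            status = c.tick(bb)
            if status == "SUCCESS":
                return status
        return "FAILURE"

# Hypothetical policy: fly through the window if one is seen, else search.
tree = Selector(
    Sequence(Condition(lambda bb: bb["window_visible"]),
             Action(lambda bb: bb.update(command="fly_through"))),
    Action(lambda bb: bb.update(command="search")),
)

bb = {"window_visible": False}
tree.tick(bb)
print(bb["command"])  # search
```

    One appeal of this representation over a neural network, as the paper argues, is that each branch can be read and edited by hand when the behaviour must be adapted after transfer to the real vehicle.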

    Abstraction as a Tool to Bridge the Reality Gap in Evolutionary Robotics

    No full text
    Automatically optimizing robotic behavior to solve complex tasks has been one of the main, long-standing goals of Evolutionary Robotics (ER). When successful, this approach will likely fundamentally change the rate of development and deployment of robots in everyday life. Performing this optimization on real robots can be risky and time consuming. As a result, much of the work in ER is done using simulations which can operate many times faster than real time. The only downside is that, due to the limited fidelity of the simulated environment, the optimized robotic behavior is typically different when transferred to a robot in the real world. This difference is referred to as the reality gap...

    Abstraction, Sensory-Motor Coordination, and the Reality Gap in Evolutionary Robotics

    No full text
    One of the major challenges of evolutionary robotics is to transfer robot controllers evolved in simulation to robots in the real world. In this article, we investigate abstraction of the sensory inputs and motor actions as a tool to tackle this problem. Abstraction in robots is simply the use of preprocessed sensory inputs and low-level closed-loop control systems that execute higher-level motor commands. To demonstrate the impact abstraction could have, we evolved two controllers with different levels of abstraction to solve a task of forming an asymmetric triangle with a homogeneous swarm of micro air vehicles. The results show that although both controllers can effectively complete the task in simulation, the controller with the lower level of abstraction is not effective on the real vehicle, due to the reality gap. The controller with the higher level of abstraction is, however, effective both in simulation and in reality, suggesting that abstraction can be a useful tool in making evolved behavior robust to the reality gap. Additionally, abstraction aided in reducing the computational complexity of the simulation environment, speeding up the optimization process. Preeminently, we show that the optimized behavior exploits the environment (in this case the identical behavior of the other robots) and performs input shaping to allow the vehicles to fly into and maintain the required formation, demonstrating clear sensory-motor coordination. This shows that the power of the genetic optimization to find complex correlations is not necessarily lost through abstraction as some have suggested.
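    The abstraction described here can be sketched as a pipeline: preprocessed sensory inputs feed an evolved high-level policy, whose commands a closed-loop low-level controller executes. All function names, gains, and the toy policy below are hypothetical illustrations, not the evolved controller from the paper.

```python
import math

def preprocess(own_pos, neighbour_pos):
    """Abstracted sensory input: range and bearing to a neighbour,
    instead of raw sensor readings."""
    dx = neighbour_pos[0] - own_pos[0]
    dy = neighbour_pos[1] - own_pos[1]
    return math.hypot(dx, dy), math.atan2(dy, dx)

def high_level_policy(rng, bearing, desired_range=2.0):
    """The evolved policy would live here; sketched as a simple rule:
    approach or back off to hold a desired range."""
    speed = 0.5 * (rng - desired_range)
    return {"speed": speed, "heading": bearing}

def low_level_controller(cmd, state, kp=1.0):
    """Closed-loop layer turning the high-level command into
    motor-level thrust and yaw rate (proportional-control sketch)."""
    yaw_rate = kp * (cmd["heading"] - state["heading"])
    return {"thrust": cmd["speed"], "yaw_rate": yaw_rate}

rng, bearing = preprocess((0.0, 0.0), (3.0, 0.0))
cmd = high_level_policy(rng, bearing)
out = low_level_controller(cmd, {"heading": 0.0})
print(out)  # {'thrust': 0.5, 'yaw_rate': 0.0}
```

    The design choice the article studies is where to draw the line between these layers: evolution then only has to search over the high-level policy, while the low-level loop absorbs much of the simulation-to-reality mismatch.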

    Evolution of robust high speed optical-flow-based landing for autonomous MAVs

    No full text
    Automatic optimization of robotic behavior has been the long-standing goal of Evolutionary Robotics. Allowing the problem at hand to be solved by automation often leads to novel approaches and new insights. A common problem encountered with this approach is that when this optimization occurs in a simulated environment, the optimized policies are subject to the reality gap when implemented in the real world. This often results in sub-optimal behavior, if it works at all. This paper investigates the automatic optimization of neurocontrollers to perform quick but safe landing maneuvers for a quadrotor micro air vehicle using the divergence of the optical flow field of a downward-looking camera. The optimized policies showed that a piece-wise linear control scheme is more effective than the simple linear scheme commonly used, something not yet considered by human designers. Additionally, we show the utility in using abstraction on the input and output of the controller as a tool to improve the robustness of the optimized policies to the reality gap by testing our policies optimized in simulation on real world vehicles. We tested the neurocontrollers using two different methods to generate and process the visual input, one using a conventional CMOS camera and one a dynamic vision sensor, both of which perform significantly differently than the simulated sensor. The use of the abstracted input resulted in near seamless transfer to the real world with the controllers showing high robustness to a clear reality gap.
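    The piece-wise linear scheme mentioned above can be illustrated with a small sketch: the divergence error maps to a thrust adjustment through a gentle gain near the setpoint and a steeper gain beyond a breakpoint, while staying continuous. The setpoint, breakpoint, and gains here are made-up illustrative values, not the evolved ones.

```python
def thrust_command(divergence, setpoint=0.3, breakpoint=0.1,
                   k_low=0.5, k_high=2.0):
    """Map optical-flow divergence error to a thrust adjustment.

    A plain linear controller would use a single gain; this piece-wise
    linear scheme reacts more aggressively once the error magnitude
    exceeds `breakpoint`, while remaining continuous there.
    """
    err = divergence - setpoint
    if abs(err) <= breakpoint:
        return k_low * err
    sign = 1.0 if err > 0 else -1.0
    return sign * (k_low * breakpoint + k_high * (abs(err) - breakpoint))

print(thrust_command(0.35))  # small error: gentle correction
print(thrust_command(0.60))  # large error: aggressive correction
```

    The qualitative point is the one the paper makes: the optimizer discovered that two slopes outperform the single slope a human designer would typically start from.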

    Abstraction as a Mechanism to Cross the Reality Gap in Evolutionary Robotics

    No full text
    One of the major challenges of Evolutionary Robotics is to transfer robot controllers evolved in simulation to robots in the real world. In this article, we investigate abstraction on the sensory inputs and motor actions as a potential solution to this problem. Abstraction means that the robot uses preprocessed sensory inputs and closed loop low-level controllers that execute higher level motor commands. We apply abstraction to the task of forming an asymmetric triangle with a homogeneous swarm of MAVs. The results show that the evolved behavior is effective both in simulation and reality, suggesting that abstraction can be a useful tool in making evolved behavior robust to the reality gap. Furthermore, we study the evolved solution, showing that it exploits the environment (in this case the identical behavior of the other robots) and creates behavioral attractors resulting in the creation of the required formation. Hence, the analysis suggests that by using abstraction, sensory-motor coordination is not necessarily lost but rather shifted to a higher level of abstraction.

    Unsupervised Learning of a Hierarchical Spiking Neural Network for Optical Flow Estimation: From Events to Global Motion Perception

    No full text
    The combination of spiking neural networks and event-based vision sensors holds the potential of highly efficient and high-bandwidth optical flow estimation. This paper presents the first hierarchical spiking architecture in which motion (direction and speed) selectivity emerges in an unsupervised fashion from the raw stimuli generated with an event-based camera. A novel adaptive neuron model and stable spike-timing-dependent plasticity formulation are at the core of this neural network governing its spike-based processing and learning, respectively. After convergence, the neural architecture exhibits the main properties of biological visual motion systems, namely feature extraction and local and global motion perception. Convolutional layers with input synapses characterized by single and multiple transmission delays are employed for feature and local motion perception, respectively; while global motion selectivity emerges in a final fully-connected layer. The proposed solution is validated using synthetic and real event sequences. Along with this paper, we provide the cuSNN library, a framework that enables GPU-accelerated simulations of large-scale spiking neural networks. Source code and samples are available at https://github.com/tudelft/cuSNN.
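    To make the learning rule concrete, here is a minimal pairwise spike-timing-dependent plasticity (STDP) sketch of the textbook form. The paper's stable STDP formulation is more elaborate; the time constant and learning rates below are generic illustrative values.

```python
import math

def stdp_dw(t_pre, t_post, a_plus=0.05, a_minus=0.05, tau=20.0):
    """Weight change for one pre/post spike pair (times in ms).

    Pre-synaptic spike before post-synaptic spike -> potentiation;
    post before pre -> depression; both decay exponentially with the
    spike-time difference.
    """
    dt = t_post - t_pre
    if dt >= 0:
        return a_plus * math.exp(-dt / tau)
    return -a_minus * math.exp(dt / tau)

print(stdp_dw(10.0, 15.0) > 0)  # causal pair strengthens the synapse: True
print(stdp_dw(15.0, 10.0) < 0)  # anti-causal pair weakens it: True
```

    Repeated over many event-camera spikes, updates of this kind are what let direction and speed selectivity emerge without labels.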

    On-board communication-based relative localization for collision avoidance in Micro Air Vehicle teams

    No full text
    To avoid collisions, Micro Air Vehicles (MAVs) flying in teams require estimates of their relative locations, preferably with minimal mass and processing burden. We present a relative localization method where MAVs need only to communicate with each other using their wireless transceiver. The MAVs exchange on-board states (velocity, height, orientation) while the signal strength indicates range. Fusing these quantities provides a relative location estimate. We used this for collision avoidance in tight areas, testing with up to three AR.Drones in a (Formula presented.) area and with two miniature drones ((Formula presented.)) in a (Formula presented.) area. The MAVs could localize each other and fly several minutes without collisions. In our implementation, MAVs communicated using Bluetooth antennas. The results were robust to the high noise and disturbances in signal strength. They could improve further by using transceivers with more accurate signal strength readings.
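    The "signal strength indicates range" step is commonly modelled with a log-distance path-loss model, sketched below. The constants are generic Bluetooth-like values, not the ones identified in the paper, and the paper additionally fuses this range cue with the exchanged on-board states, which is omitted here.

```python
def rssi_to_range(rssi_dbm, rssi_at_1m=-60.0, path_loss_exp=2.0):
    """Invert the log-distance model:

        RSSI = RSSI_1m - 10 * n * log10(d)

    where n is the path-loss exponent and RSSI_1m the received
    strength at one metre. Returns an estimated range in metres."""
    return 10 ** ((rssi_at_1m - rssi_dbm) / (10.0 * path_loss_exp))

print(rssi_to_range(-60.0))  # 1.0 (metre, by definition of rssi_at_1m)
print(rssi_to_range(-80.0))  # 10.0 (metres, with n = 2)
```

    In practice RSSI is noisy, which is why the fused estimate, rather than the raw range, is what the collision-avoidance logic relies on.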

    Behavior Trees for Evolutionary Robotics

    No full text
    Evolutionary Robotics allows robots with limited sensors and processing to tackle complex tasks by means of sensory-motor coordination. In this article we show the first application of the Behavior Tree framework on a real robotic platform using the evolutionary robotics methodology. This framework is used to improve the intelligibility of the emergent robotic behavior over that of the traditional neural network formulation. As a result, the behavior is easier to comprehend and manually adapt when crossing the reality gap from simulation to reality. This functionality is shown by performing real-world flight tests with the 20-g DelFly Explorer flapping wing micro air vehicle equipped with a 4-g onboard stereo vision system. The experiments show that the DelFly can fully autonomously search for and fly through a window with only its onboard sensors and processing. The success rate of the optimized behavior in simulation is 88%, and the corresponding real-world performance is 54% after user adaptation. Although this leaves room for improvement, it is higher than the 46% success rate from a tuned user-defined controller.

    First autonomous multi-room exploration with an insect-inspired flapping wing vehicle

    No full text
    One of the emerging tasks for Micro Air Vehicles (MAVs) is autonomous indoor navigation. While commonly employed platforms for such tasks are micro-quadrotors, insect-inspired flapping wing MAVs can offer many advantages, such as being inherently safe due to their low inertia, reciprocating wings bouncing off objects, or potentially lower noise levels compared to rotary wings. Here, we present the first flapping wing MAV to perform an autonomous multi-room exploration task. Equipped with an on-board autopilot and a 4 g stereo vision system, the DelFly Explorer succeeded in combining the two most common tasks of an autonomous indoor exploration mission: room exploration and door passage. During the room exploration, the vehicle uses a stereo-vision-based droplet algorithm to avoid and navigate along the walls and obstacles. Simultaneously, it is running a newly developed monocular color-based Snake-gate algorithm to locate doors. A successful detection triggers the heading-based door passage algorithm. In the real-world test, the vehicle could successfully navigate, multiple times in a row, between two rooms separated by a corridor, demonstrating the potential of flapping wing vehicles for autonomous exploration tasks.
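    The mission logic described, room exploration running until a door detection triggers the passage behaviour, can be sketched as a small state machine. State and event names are hypothetical; the on-board implementation runs the droplet and Snake-gate algorithms concurrently in real time.

```python
def step(state, door_detected, passage_done):
    """One transition of a hypothetical exploration/passage state machine."""
    if state == "EXPLORE_ROOM" and door_detected:
        return "DOOR_PASSAGE"   # Snake-gate detection triggers the passage
    if state == "DOOR_PASSAGE" and passage_done:
        return "EXPLORE_ROOM"   # resume exploration in the next room
    return state                # otherwise keep doing what we were doing

s = "EXPLORE_ROOM"
s = step(s, door_detected=True, passage_done=False)
print(s)  # DOOR_PASSAGE
s = step(s, door_detected=False, passage_done=True)
print(s)  # EXPLORE_ROOM
```

    Cycling through these two states repeatedly is what allows the vehicle to pass between rooms multiple times in a row, as in the reported test.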

    A Tailless Flapping Wing MAV Performing Monocular Visual Servoing Tasks

    No full text
    In the field of robotics, a major challenge is achieving high levels of autonomy with small vehicles that have limited mass and power budgets. The main motivation for designing such small vehicles is that, compared to their larger counterparts, they have the potential to be safer, and hence be available and work together in large numbers. One of the key components in micro robotics is efficient software design to optimally utilize the computing power available. This paper describes the computer vision and control algorithms used to achieve autonomous flight with the ~30-gram tailless flapping wing robot, used to participate in the IMAV 2018 indoor micro air vehicle competition. Several tasks are discussed: line following, and circular gate detection and fly-through. The emphasis throughout this paper is on augmenting traditional techniques with the goal to make these methods work with limited computing power while obtaining robust behaviour.
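    A lightweight line-following step of the kind such compute-constrained vehicles use can be sketched as: threshold the image for the line colour, find its horizontal offset from image centre, and steer proportionally. The function, gain, and data below are hypothetical illustrations, not the competition code.

```python
def steer_from_mask(mask_row, gain=0.01):
    """mask_row: sequence of booleans, True where the line colour is seen
    in one image row. Returns a yaw command proportional to the line's
    offset from the image centre (positive = line right of centre)."""
    cols = [i for i, on in enumerate(mask_row) if on]
    if not cols:
        return 0.0                          # no line in view: keep heading
    centre = (len(mask_row) - 1) / 2.0
    offset = sum(cols) / len(cols) - centre
    return gain * offset

row = [False] * 101
for i in range(70, 81):                     # line detected right of centre
    row[i] = True
print(steer_from_mask(row) > 0)             # steer right: True
```

    The appeal of such pixel-counting schemes is that they need no floating-point-heavy feature extraction, which matches the paper's emphasis on robustness under tight computing budgets.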