16 research outputs found

    How to control MAVs in cluttered environments?


    Redundant neural vision systems: competing for collision recognition roles

    The ability to detect collisions is vital for future robots that interact with humans in complex visual environments. Lobula giant movement detectors (LGMD) and directional selective neurons (DSNs) are two types of identified neurons found in the visual pathways of insects such as locusts. Recent modelling studies showed that the LGMD or grouped DSNs could each be tuned for collision recognition. In both biological and artificial vision systems, however, it is not clear which one should play the collision recognition role, or how the two types of specialized visual neurons could function together. In this modelling study, we compared the competence of the LGMD and the DSNs, and also investigated the cooperation of the two neural vision systems for collision recognition via artificial evolution. We implemented three types of collision recognition neural subsystems in each individual agent: the LGMD, the DSNs, and a hybrid system that combines the LGMD and DSN subsystems. A switch gene determines which of the three redundant neural subsystems plays the collision recognition role. We found that, in both robotic and driving environments, the LGMD was able to build up its ability for collision recognition quickly and robustly, thereby reducing the chance for the other neural subsystems to take over the same role. The results suggest that the LGMD neural network could be the ideal model to be realized in hardware for collision recognition.
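    The switch-gene mechanism described in the abstract can be sketched as follows. This is our own minimal illustration, not the authors' code: the genome layout, parameter counts, and mutation rates are all hypothetical placeholders, chosen only to show how a single gene can select which redundant subsystem is expressed while all three remain encoded and subject to mutation.

    ```python
    import random

    # Hypothetical sketch of the "switch gene" idea: each agent's genome
    # carries one gene selecting which redundant neural subsystem (LGMD,
    # DSNs, or a hybrid) handles collision recognition for that agent.
    SUBSYSTEMS = ("LGMD", "DSN", "HYBRID")

    def make_agent(rng):
        # Genome = switch gene + per-subsystem parameters (placeholder floats).
        return {
            "switch": rng.choice(SUBSYSTEMS),
            "params": {name: [rng.uniform(-1, 1) for _ in range(4)]
                       for name in SUBSYSTEMS},
        }

    def mutate(agent, rng, p_switch=0.1):
        # All subsystems mutate, even unexpressed ones; the switch gene
        # occasionally flips, letting another subsystem compete for the role.
        child = {
            "switch": agent["switch"],
            "params": {k: [w + rng.gauss(0, 0.1) for w in v]
                       for k, v in agent["params"].items()},
        }
        if rng.random() < p_switch:
            child["switch"] = rng.choice(SUBSYSTEMS)
        return child

    rng = random.Random(42)
    population = [make_agent(rng) for _ in range(20)]
    offspring = [mutate(a, rng) for a in population]
    print(all(c["switch"] in SUBSYSTEMS for c in offspring))
    ```

    Under selection for collision-free behaviour, lineages whose switch gene expresses the quickest-to-tune subsystem would dominate, which is how the paper measures which network "wins" the role.
    
    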

    Autonomous flight at low altitude with vision-based collision avoidance and GPS-based path following

    The ability to fly at low altitude while actively avoiding collisions with the terrain and other objects is a great challenge for small unmanned aircraft. This paper builds on top of a control strategy called optiPilot, whereby a series of optic-flow detectors pointed at divergent viewing directions around the aircraft's main axis are linearly combined into roll and pitch commands using two sets of weights. This control strategy has already proved successful at controlling flight and avoiding collisions in reactive navigation experiments. This paper shows how optiPilot can be coupled with a GPS in order to provide goal-directed, nap-of-the-earth flight control in the presence of static obstacles. Two fully autonomous flights of 25 minutes each are described in which a 400-gram unmanned aircraft flies at approx. 9 m above the terrain on a circular path including two copses of trees requiring efficient collision avoidance actions.
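    The core of optiPilot as described above is two weighted sums over the same optic-flow vector. The sketch below is our illustration under stated assumptions: the number of detectors, the weight values, and the sign conventions are hypothetical, chosen only to show the linear-combination structure.

    ```python
    # Minimal sketch of optiPilot's linear mapping (not the authors' code):
    # optic-flow magnitudes from detectors at divergent viewing directions
    # are combined into roll and pitch commands via two weight sets.

    def optipilot_commands(optic_flow, w_roll, w_pitch):
        """Two weighted sums of the same optic-flow vector give the commands."""
        assert len(optic_flow) == len(w_roll) == len(w_pitch)
        roll = sum(f * w for f, w in zip(optic_flow, w_roll))
        pitch = sum(f * w for f, w in zip(optic_flow, w_pitch))
        return roll, pitch

    # Example with 7 hypothetical detectors spread around the main axis:
    # strong flow on the right-side detectors dominates the roll command.
    flow = [0.1, 0.1, 0.2, 0.9, 0.8, 0.1, 0.1]
    w_roll = [0.0, -0.5, -1.0, 1.0, 1.0, 0.5, 0.0]    # hypothetical weights
    w_pitch = [1.0, 0.5, 0.2, 0.2, 0.2, 0.5, 1.0]
    roll_cmd, pitch_cmd = optipilot_commands(flow, w_roll, w_pitch)
    print(roll_cmd, pitch_cmd)
    ```

    The appeal of this structure is its cost: one multiply-accumulate pass per command, with no mapping or state, which is what makes it suitable for lightweight platforms.
    
    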

    The AirBurr: A Flying Robot That Can Exploit Collisions

    Research over the past decade shows the use of increasingly complex methods and heavy platforms to achieve autonomous flight in cluttered environments. However, efficient behaviors can be found in nature where limited sensing is used, such as in insects progressing toward a light at night. Interestingly, their success is based on their ability to recover from the numerous collisions happening along their imperfect flight path. The goal of the AirBurr project is to take inspiration from these insects and develop a new class of flying robots that can recover from collisions and even exploit them. Such robots are designed to be robust to crashes and can take off again without human intervention. They navigate in a reactive way and, unlike conventional approaches, they don't need heavy modelling in order to fly autonomously. We believe that this new paradigm will bring flying robots out of the laboratory environment and allow them to tackle unstructured, cluttered environments. This paper aims at presenting the vision of the AirBurr project, as well as the latest results in the design of a platform capable of sustaining collisions and self-recovering after crashes.

    Contact-based navigation for an autonomous flying robot

    Autonomous navigation in obstacle-dense indoor environments is very challenging for flying robots due to the high risk of collisions, which may lead to mechanical damage of the platform and eventual failure of the mission. While conventional approaches to autonomous navigation favor obstacle avoidance strategies, recent work showed that collision-robust flying robots could hit obstacles without breaking and even self-recover after a crash to the ground. This approach is particularly interesting for autonomous navigation in complex environments where collisions are unavoidable, or for reducing the sensing and control complexity involved in obstacle avoidance. This paper aims at showing that collision-robust platforms can go a step further and exploit contacts with the environment to achieve useful navigation tasks based on the sense of touch. This approach is typically useful when weight restrictions prevent the use of heavier sensors, or as a low-level detection mechanism supplementing other sensing modalities. In this paper, a solution based on force and inertial sensors used to detect obstacles all around the robot is presented. Eight miniature force sensors, weighing 0.9 g each, are integrated into the structure of a collision-robust flying platform without affecting its robustness. A proof-of-concept experiment demonstrates the use of contact sensing for autonomously exploring a room in 3D, showing significant advantages compared to a previous strategy. To our knowledge, this is the first fully autonomous flying robot using touch sensors as its only exteroceptive sensors.
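    A contact-based reaction of the kind the abstract describes can be sketched in a few lines. This is our own assumption of one plausible scheme, not the paper's implementation: the sensor layout (eight sensors evenly spaced around the body), the force threshold, and the escape rule are all hypothetical.

    ```python
    import math

    # Hedged sketch of contact-based obstacle detection: eight force sensors
    # are assumed evenly spaced around the platform; a contact above threshold
    # yields the obstacle bearing, from which an escape heading is chosen.
    NUM_SENSORS = 8
    THRESHOLD = 0.5  # hypothetical normalised force threshold

    def contact_bearing(forces, threshold=THRESHOLD):
        """Return bearing (radians) of the strongest contact, or None."""
        idx, peak = max(enumerate(forces), key=lambda kv: kv[1])
        if peak < threshold:
            return None
        return 2 * math.pi * idx / len(forces)

    def escape_heading(bearing):
        """Head directly away from the contact point."""
        return (bearing + math.pi) % (2 * math.pi)

    forces = [0.0, 0.1, 0.9, 0.2, 0.0, 0.0, 0.0, 0.0]  # contact on sensor 2
    bearing = contact_bearing(forces)
    if bearing is not None:
        print(escape_heading(bearing))
    ```

    In a room-exploration behaviour, each contact event would supply one such bearing, letting the robot bounce through the space using touch alone, consistent with the exteroception-free claim above.
    
    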

    Method for fabricating an artificial compound eye

    A method for fabricating an imaging system, the method comprising providing a flexible substrate (200), a first layer (220) comprising a plurality of microlenses (232) and a second layer (240) comprising a plurality of image sensors (242). The method further comprises stacking the first and the second layer (220; 240) onto the flexible substrate (200) by attaching the plurality of image sensors (242) to the flexible substrate, such that each of the plurality of microlenses (232) and image sensors (242) are aligned to form a plurality of optical channels (300), each optical channel comprising at least one microlens and at least one associated image sensor, and mechanically separating the optical channels (300) such that the separated optical channels remain attached to the flexible substrate (200) to form a mechanically flexible imaging system.

    Indoor Navigation with a Swarm of Flying Robots

    Swarms of flying robots are promising for many applications due to their rapid terrain coverage. However, there are numerous challenges in realising autonomous operation in unknown indoor environments. A new autonomous flight methodology is presented using relative positioning sensors in reference to nearby static robots. The entirely decentralised approach relies solely on local sensing, without requiring absolute positioning, environment maps, powerful computation or long-range communication. The swarm deploys as a robotic network facilitating navigation and goal-directed flight. Initial validation tests with quadrotors demonstrated autonomous flight within a confined indoor environment, indicating that they could traverse a large network of static robots across expansive environments.
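    The local-sensing navigation idea can be illustrated with a toy waypoint selector. This is purely our sketch of one plausible scheme under stated assumptions, not the paper's method: we assume each flying robot measures only range and bearing to nearby static robots and hops toward the nearest one, with no map or absolute position involved.

    ```python
    import math

    # Illustrative sketch (hypothetical scheme): a flying robot navigates the
    # network of static robots using only on-board relative range-and-bearing
    # measurements, with no absolute positioning or environment map.

    def next_waypoint(neighbours):
        """Pick the nearest sensed static robot as the next local waypoint.

        `neighbours` maps robot id -> (range_m, bearing_rad), both measured
        in the flying robot's own body frame.
        """
        rid, (range_m, bearing) = min(neighbours.items(),
                                      key=lambda kv: kv[1][0])
        # Convert to a body-frame displacement a low-level controller can track.
        dx = range_m * math.cos(bearing)
        dy = range_m * math.sin(bearing)
        return rid, (dx, dy)

    neighbours = {"r1": (3.0, 0.0), "r2": (1.5, math.pi / 2)}
    rid, (dx, dy) = next_waypoint(neighbours)
    print(rid)
    ```

    Because every quantity here is relative and locally sensed, the rule stays decentralised: each robot makes the same decision from its own measurements, which is the property the abstract emphasises.
    
    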

    Autonomous flight at low altitude using light sensors and little computational power

    The ability to fly at low altitude while actively avoiding collisions with the terrain and objects such as trees and buildings is a great challenge for small unmanned aircraft. This paper builds on top of a control strategy called optiPilot, whereby a series of optic-flow detectors pointed at divergent viewing directions around the aircraft's main axis are linearly combined into roll and pitch commands using two sets of weights. This control strategy has already proved successful at controlling flight and avoiding collisions in reactive navigation experiments. This paper describes how optiPilot can efficiently steer a flying platform during the critical phases of hand-launched take-off and landing. It then shows how optiPilot can be coupled with a GPS in order to provide goal-directed, nap-of-the-earth flight control in the presence of obstacles. Two fully autonomous flights of 25 minutes each are described in which a 400-gram unmanned aircraft flies at approx. 10 m above ground on a circular path including two copses of trees requiring efficient collision avoidance actions.

    Aerial Locomotion in Cluttered Environments

    Many environments where robots are expected to operate are cluttered with objects, walls, debris, and different horizontal and vertical structures. In this chapter, we present four design features that allow small robots to rapidly and safely move in three dimensions through cluttered environments: a perceptual system capable of detecting obstacles in the robot's surroundings, including the ground, with minimal computation, mass, and energy requirements; a flexible and protective framework capable of withstanding collisions and even using collisions to learn about the properties of the surroundings when light is not available; a mechanism for temporarily perching on vertical structures in order to monitor the environment or communicate with other robots before taking off again; and a self-deployment mechanism for getting into the air and performing repetitive jumps or glided flight. We conclude the chapter by suggesting future avenues for the integration of multiple features within the same robotic platform.