
    A contribution to vision-based autonomous helicopter flight in urban environments

    A navigation strategy that exploits optic flow and inertial information to continuously avoid collisions with both lateral and frontal obstacles has been used to control a simulated helicopter flying autonomously in a textured urban environment. Experimental results demonstrate that the corresponding controller generates cautious behavior, whereby the helicopter tends to stay in the middle of narrow corridors while its forward velocity is automatically reduced when the obstacle density increases. When confronted with a frontal obstacle, the controller is also able to generate a tight U-turn that ensures the UAV's survival. The paper provides comparisons with related work and discusses the applicability of the approach to real platforms.
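    The corridor-centring, speed-reduction, and U-turn behaviours described above can be pictured with a minimal optic-flow balance controller. This is a hedged sketch, not the authors' controller: the function name, gains, thresholds, and three-detector layout are all assumptions made for illustration.

```python
# Illustrative optic-flow balance controller (all names and gains assumed).
def steer_and_throttle(of_left, of_right, of_frontal,
                       k_steer=1.0, k_speed=0.5,
                       frontal_limit=2.0, cruise_speed=3.0):
    """Map optic-flow magnitudes (rad/s) to yaw-rate and speed commands."""
    # Turning toward the lower-flow side centres the vehicle in a corridor.
    yaw_rate = k_steer * (of_right - of_left)
    # Forward speed shrinks as total lateral flow grows (denser obstacles).
    speed = cruise_speed / (1.0 + k_speed * (of_left + of_right))
    if of_frontal > frontal_limit:
        # Imminent frontal obstacle: saturate into a tight U-turn, slow down.
        yaw_rate, speed = 3.0, 0.5 * speed
    return yaw_rate, speed

print(steer_and_throttle(of_left=0.8, of_right=1.4, of_frontal=0.3))
```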

    A MAV that flies like an airplane and hovers like a helicopter

    IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM), Monterey, CA, pp. 699-704, July 2005. Near-Earth environments such as forests, caves, tunnels, and urban structures make reconnaissance, surveillance, and search-and-rescue missions difficult and dangerous to accomplish. Micro Air Vehicles (MAVs) equipped with wireless cameras can assist in such missions by providing real-time situational awareness. This paper describes an additional flight modality that enables fixed-wing MAVs to supplement their existing endurance superiority with hovering capabilities. This secondary flight mode can also be used to avoid imminent collisions by quickly transitioning from cruise to hover flight. A sensor suite that allows autonomous hovering by regulating the aircraft's yaw, pitch, and roll angles is also described.
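    The hover regulation such a sensor suite enables is, in essence, an independent attitude loop per axis. The sketch below is an illustrative PD formulation under assumed gains and sign conventions; the AxisPD class is hypothetical and is not the paper's design.

```python
# Hypothetical PD attitude loops for hover (gains and conventions assumed).
class AxisPD:
    def __init__(self, kp, kd):
        self.kp, self.kd = kp, kd

    def command(self, angle_error, angular_rate):
        # Proportional on attitude error, derivative on the gyro rate.
        return self.kp * angle_error - self.kd * angular_rate

yaw, pitch, roll = AxisPD(2.0, 0.4), AxisPD(3.0, 0.6), AxisPD(3.0, 0.6)
# One control step while holding a nose-up hover attitude (pitch ~ pi/2 rad).
print(yaw.command(0.05, 0.01),
      pitch.command(1.57 - 1.50, 0.02),
      roll.command(-0.03, 0.00))
```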

    Design of Cloud Based Robots using Big Data Analytics and Neuromorphic Computing

    Understanding the brain is perhaps one of the greatest challenges facing twenty-first century science. While a traditional computer excels in precision and unbiased logic, its ability to interact socially lags behind that of biological neural systems. Recent technologies, such as neuromorphic engineering, cloud infrastructure, and big data analytics, have emerged that can narrow the gap between traditional robots and human intelligence. Neuromorphic robotics mimicking brain functions can contribute to developing intelligent machines capable of learning and making autonomous decisions. Cloud-based robotics takes advantage of remote resources for parallel computation and large-scale information sharing, while benefiting from the analysis of massive sensor data streamed from robots. In this paper, we survey recent advances in neuromorphic computing, cloud-based robotics, and big data analytics, and list the most important challenges faced by robot architects. We also propose a novel dual-system architecture in which robots have a brain-centered cloud with access to big data analytics.
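    The dual-system idea can be pictured as a fast on-board reflex loop paired with a slower remote analysis path. The sketch below is only a schematic under assumed interfaces: the queue stands in for a real network link, and the onboard_loop/cloud_loop functions are hypothetical.

```python
# Schematic dual-system loop: fast on-board reflexes, slow remote analysis.
# The queue stands in for a network link; everything here is assumed.
import queue
import threading
import time

to_cloud = queue.Queue()

def onboard_loop():
    for step in range(3):
        sample = {"step": step, "range_m": 2.0 - 0.75 * step}
        if sample["range_m"] < 1.0:
            print("onboard: brake")        # reflex: act locally, immediately
        to_cloud.put(sample)               # ship raw data for heavy analysis
        time.sleep(0.01)

def cloud_loop():
    for _ in range(3):
        sample = to_cloud.get()
        # Placeholder for big-data / neuromorphic analysis of pooled data.
        print("cloud: analysed", sample)

t = threading.Thread(target=cloud_loop)
t.start()
onboard_loop()
t.join()
```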

    Neuromimetic Robots inspired by Insect Vision

    Equipped with a less-than-one-milligram brain, insects fly autonomously in complex environments without resorting to radar, lidar, sonar, or GPS. The knowledge gained during recent decades about insects' sensory-motor abilities and the neuronal substrates involved provides a rich source of inspiration for designing tomorrow's self-guided vehicles and micro-vehicles, which must cope with unforeseen events on the ground, in the air, under water, or in space. Insects have been in the business of sensory-motor integration for several hundred million years and can therefore teach us useful tricks for designing agile autonomous vehicles at various scales. Constructing a "biorobot" first requires an exact formulation of the signal-processing principles at work in the animal; in return, it gives us a unique opportunity to check the soundness and robustness of those principles by bringing them face to face with the real physical world. Here we describe some of the visually guided terrestrial and aerial robots we have developed on the basis of our biological findings. These robots (Robot Fly, SCANIA, FANIA, OSCAR, OCTAVE and LORA) all react to optic flow (i.e., the angular speed of the retinal image). Optic flow is sensed onboard the robots by miniature vision sensors called Elementary Motion Detectors (EMDs). The principle of these electro-optical velocity sensors was derived from optical and electrophysiological studies in which we recorded the responses of single neurons to optical microstimulation of single photoreceptor cells in a model visual system: the fly's compound eye. Optic-flow sensors rely solely on the contrast provided by sunlight reflected (or scattered) from any kind of celestial body in a given spectral range. These nonemissive, power-lean sensors offer potential applications to manned or unmanned aircraft. Applications can also be envisaged for spacecraft, from robotic landers and rovers to asteroid explorers and space-station dockers, with interesting prospects for reductions in weight and power consumption.
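    The delay-and-correlate principle behind elementary motion detection is classically modelled as a Reichardt correlator. The sketch below is a generic textbook version of that model, not necessarily the EMD circuit used on these robots; the sampling rate, delay, and test grating are illustrative assumptions.

```python
# Generic Reichardt-style correlator, a classic EMD model (parameters assumed).
import numpy as np

def emd_response(photo_a, photo_b, delay=1):
    """Correlate each photoreceptor signal with a delayed copy of its
    neighbour; the output's sign encodes motion direction and its magnitude
    grows with the angular speed of the image (within the sensor's range)."""
    a_delayed = np.roll(photo_a, delay)
    b_delayed = np.roll(photo_b, delay)
    a_delayed[:delay] = 0.0                # discard wrapped-around samples
    b_delayed[:delay] = 0.0
    # Opponent subtraction of the two mirror-symmetric correlations.
    return float(np.mean(a_delayed * photo_b - b_delayed * photo_a))

t = np.linspace(0.0, 1.0, 200)
grating = lambda phase: np.sin(2.0 * np.pi * 5.0 * t + phase)
# A pattern moving from receptor A to receptor B reaches B slightly later.
print(emd_response(grating(0.0), grating(-0.3)))   # positive: A -> B motion
print(emd_response(grating(-0.3), grating(0.0)))   # negative: B -> A motion
```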

    Optic-flow-based steering and altitude control for ultra-light indoor aircraft

    Our goal is to demonstrate the ability of bio-inspired techniques to solve the problem of piloting an autonomous 40-gram aircraft within textured indoor environments. Because of severe weight and energy constraints, inspiration has been taken from the fly, and only visual and vestibular-like sensors are employed. This paper describes the models and algorithms that will be used for altitude control and frontal obstacle avoidance, mainly relying on optic flow. For experimental convenience, both mechanisms have first been implemented and tested on a small wheeled robot featuring the same hardware as the targeted aircraft.
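    Optic-flow altitude control rests on the relation that ventral optic flow is roughly ground speed over height (omega ≈ v/h), so holding the flow at a setpoint holds a height proportional to speed. A minimal sketch under assumed gains and sign conventions, not the paper's algorithm:

```python
# Illustrative ventral optic-flow altitude hold (gains and signs assumed).
def altitude_command(of_ventral, of_setpoint=0.5, k=0.8):
    """Return a pitch correction from ventral optic flow (rad/s): flow above
    the setpoint means the ground is too close, so command a pitch-up."""
    return k * (of_ventral - of_setpoint)

v, h = 2.0, 3.0                 # forward speed (m/s) and height (m)
of = v / h                      # ideal ventral optic flow over flat ground
print(altitude_command(of))     # ~0.13: gentle pitch-up, slightly too low
```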

    Autonomous flight at low altitude with vision-based collision avoidance and GPS-based path following

    The ability to fly at low altitude while actively avoiding collisions with the terrain and other objects is a great challenge for small unmanned aircraft. This paper builds on a control strategy called optiPilot, whereby a series of optic-flow detectors pointed in divergent viewing directions around the aircraft's main axis are linearly combined into roll and pitch commands using two sets of weights. This control strategy has already proved successful at controlling flight and avoiding collisions in reactive navigation experiments. This paper shows how optiPilot can be coupled with a GPS in order to provide goal-directed, nap-of-the-earth flight control in the presence of static obstacles. Two fully autonomous 25-minute flights are described in which a 400-gram unmanned aircraft flies at approximately 9 m above the terrain on a circular path that includes two copses of trees, requiring efficient collision-avoidance actions.
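    The weighted-combination step at the core of optiPilot can be written in a few lines. The sketch below uses a placeholder detector layout and placeholder weights; the published weight sets, detector count, and sign conventions are not reproduced here.

```python
# optiPilot-style linear mapping with placeholder weights (not the published ones).
import numpy as np

N = 7                                     # assumed number of detectors
azimuths = np.linspace(-np.pi / 2, np.pi / 2, N)
w_roll = np.sin(azimuths)                 # left/right asymmetry -> roll
w_pitch = np.cos(azimuths)                # ventral/frontal flow -> pitch

def optipilot_step(flow):
    """flow: N optic-flow magnitudes (rad/s), one per viewing direction."""
    return float(w_roll @ flow), float(w_pitch @ flow)

flow = np.array([0.1, 0.2, 0.4, 0.9, 0.4, 0.2, 0.1])   # terrain looms ahead
print(optipilot_step(flow))   # symmetric flow: ~zero roll, positive pitch-up
```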

    Novel Roaming and Stationary Tethered Aerial Robots for Continuous Mobile Missions in Nuclear Power Plants

    In this paper, new tethered aerial robots, including roaming tethered aerial robots (RTARs) for radioactive-material sampling and stationary tethered aerial robots (STARs) for environment monitoring, are proposed to meet the extremely long endurance requirements of missions in nuclear power plants. The flight of the proposed tethered aerial robots may last for a few days or even a few months, as long as the tethered cable provides continuous power. A high-voltage AC or DC power system was adopted to reduce the mass of the tethered cable. The RTAR uses a tethered cable spooled from the aerial robot and an aerial tension control system. The aerial tension control system applies the appropriate tension to the tethered cable, which is accordingly laid down on the ground as the RTAR roams. The STAR includes a tethered cable spooled from the ground and a ground tension control system, which enables the STAR to reach high altitudes. Prototypes of the RTAR and STAR were designed and successfully demonstrated in outdoor environments, where the load power, power type, operating frequency, and flight altitude of the RTAR and STAR were 180 W, AC, 100 kHz, and 20 m; and 300 W, AC or DC, 100 kHz, and 80 m, respectively.
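    Both tension control systems come down to regulating measured cable tension around a setpoint by winding the spool in or paying it out. A generic proportional-integral sketch with an assumed setpoint, gains, and motor interface, not the paper's controller:

```python
# Generic PI tension regulator for a tether spool (setpoint and gains assumed).
def make_tension_controller(t_set=5.0, kp=2.0, ki=0.5, dt=0.02):
    state = {"integral": 0.0}

    def step(tension_measured):
        err = t_set - tension_measured     # N; positive means cable too slack
        state["integral"] += err * dt
        # Positive command winds the spool in (raises tension); negative pays out.
        return kp * err + ki * state["integral"]

    return step

ctrl = make_tension_controller()
for tension in (4.0, 4.5, 5.2):           # successive tension measurements (N)
    print(ctrl(tension))
```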

    Honeybees' Speed Depends on Dorsal as Well as Lateral, Ventral and Frontal Optic Flows

    Flying insects use optic flow to navigate safely in unfamiliar environments, especially by adjusting their speed and their clearance from surrounding objects. It has not yet been established, however, which specific parts of the optic flow field insects use to control their speed. With a view to answering this question, freely flying honeybees were trained to fly along a specially designed tunnel comprising two successive tapering parts: the first tapered in the vertical plane and the second in the horizontal plane. The honeybees were found to adjust their speed on the basis of the optic flow they perceived not only in the lateral and ventral parts of their visual field, but also in the dorsal part. More specifically, the honeybees' speed varied monotonically with the minimum cross-section of the tunnel, regardless of whether the narrowing occurred in the horizontal or the vertical plane: their speed decreased or increased whenever the minimum cross-section decreased or increased. In other words, the larger of the two sums of opposite optic flows (horizontal and vertical) was kept practically constant, thanks to the speed control the honeybees performed upon encountering a narrowing of the tunnel. The previously described ALIS ("AutopiLot using an Insect-based vision System") model nicely matches the present behavioral findings. The ALIS model is based on a feedback control scheme that explains how honeybees may keep their speed proportional to the minimum local cross-section of a tunnel, based solely on optic flow processing, without any need for speedometers or rangefinders. The present behavioral findings suggest how flying insects may succeed in adjusting their speed in their complex foraging environments, while at the same time adjusting their distance not only to lateral and ventral objects but also to those located in their dorsal visual field.
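    A toy version of the feedback scheme the ALIS model describes: adjust speed so that the larger of the two sums of opposite optic flows stays at a setpoint, which makes speed settle in proportion to the minimum cross-section. The tunnel geometry, gain, and setpoint below are illustrative assumptions, not the published model parameters.

```python
# Toy ALIS-like speed loop (geometry, gain and setpoint are assumptions).
def speed_update(v, width, height, of_setpoint=4.0, k=0.05):
    """One step of speed control for centred flight in a rectangular tunnel.

    For a centred agent the summed opposite flows are ~4v/width laterally
    and ~4v/height vertically; regulating the larger sum to a setpoint makes
    speed track the tunnel's minimum cross-section dimension."""
    flow_sum = max(4.0 * v / width, 4.0 * v / height)
    return v + k * (of_setpoint - flow_sum)   # slow down when flow is too high

v = 2.0
for w, h in [(1.0, 1.0), (1.0, 0.5), (0.4, 1.0)]:   # tunnel narrows
    for _ in range(50):                              # let the loop settle
        v = speed_update(v, w, h)
    print(round(v, 2))   # converges to of_setpoint * min(w, h) / 4
```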

    Autonomous flight at low altitude using light sensors and little computational power

    The ability to fly at low altitude while actively avoiding collisions with the terrain and with objects such as trees and buildings is a great challenge for small unmanned aircraft. This paper builds on a control strategy called optiPilot, whereby a series of optic-flow detectors pointed in divergent viewing directions around the aircraft's main axis are linearly combined into roll and pitch commands using two sets of weights. This control strategy has already proved successful at controlling flight and avoiding collisions in reactive navigation experiments. This paper describes how optiPilot can efficiently steer a flying platform during the critical phases of hand-launched take-off and landing. It then shows how optiPilot can be coupled with a GPS in order to provide goal-directed, nap-of-the-earth flight control in the presence of obstacles. Two fully autonomous 25-minute flights are described in which a 400-gram unmanned aircraft flies at approximately 10 m above the ground on a circular path that includes two copses of trees, requiring efficient collision-avoidance actions.
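    Coupling a reactive autopilot with GPS guidance can be pictured as blending a slow waypoint-bearing correction into the reactive roll command, with avoidance keeping most of the authority. The blend below is a hedged illustration; the weighting scheme and function names are assumptions, not the paper's coupling method.

```python
# Hedged GPS + reactive blend (weights and function names are assumptions).
import math

def blended_roll(of_roll_cmd, pos, waypoint, heading, k_nav=0.5, w_avoid=0.8):
    """Combine a reactive optic-flow roll command with a GPS bearing correction."""
    bearing = math.atan2(waypoint[1] - pos[1], waypoint[0] - pos[0])
    # Wrap the heading error to [-pi, pi] so the turn takes the short way round.
    heading_err = math.atan2(math.sin(bearing - heading),
                             math.cos(bearing - heading))
    nav_roll = k_nav * heading_err
    # Reactive avoidance keeps most of the authority; navigation only nudges.
    return w_avoid * of_roll_cmd + (1.0 - w_avoid) * nav_roll

print(blended_roll(of_roll_cmd=0.0, pos=(0.0, 0.0),
                   waypoint=(100.0, 50.0), heading=0.0))
```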