1,324 research outputs found

    A fly-robot interface to investigate the dynamics of closed-loop visuo-motor control in the blowfly

    The blowfly Calliphora is one of the most sophisticated fliers in the animal kingdom. It displays a broad repertoire of visually guided behaviours that can readily be quantified, including gaze and flight stabilization reflexes, male chasing flights, collision avoidance and landing responses. The fly achieves such robust visuo-motor control based on a comparatively simple nervous system that is highly accessible for electrophysiological recordings. The ability to investigate the fly’s performance at both the behavioural and electrophysiological levels makes this animal an ideal model system for studying closed-loop visuo-motor control. The aim of this thesis was to develop a fly-robot interface (FRI) and characterize its dynamics while a fly performs a closed-loop visual stabilization task. A novel experimental setup involving an FRI was developed that allowed simultaneous measurement of neural activity in the fly and the behavioural performance of the robot. In the setup, the activity of an identified visual interneuron, the H1-cell, was recorded and its action potentials were used to control the motion of a mobile robot that was free to rotate about its vertical axis. External visual perturbations were introduced into the closed-loop system through a rotating turntable, with the robot using the neural activity to counter-rotate and minimize the observed visual motion. The closed-loop control delay of the FRI was 50 ms, which is well within the range of visual response delays observed in fly behaviour. With the FRI, the closed-loop dynamics of a static-gain proportional controller were characterized. The results explain significant oscillations in the closed-loop responses, which were also observed but never fully interpreted in previous behavioural studies, as a possible consequence of a high controller gain. Varying the controller gain also offers competing control benefits to the fly, with different gains maximizing performance for different input frequency ranges and thus different behavioural tasks. Results with the proportional controller indicate that the FRI frequency response depends on the angular acceleration of the visual motion. An adaptive controller designed to dynamically scale the feedback gain was found to increase the bandwidth of the frequency response compared with the static-gain proportional controller. The image velocities observed under closed-loop conditions using the proportional and adaptive controllers were correlated with the spiking activity of the H1-cell. A remarkable qualitative similarity was found between the response dynamics of the cell under closed-loop conditions and those obtained in previous open-loop experiments. Specifically, (i) the peak spike rate decreased when the mean image velocity was increased; (ii) the relationship between spike rate and image velocity depended on the standard deviation of the image velocities, suggesting adaptive scaling of the cell’s signalling range; and (iii) the cell’s gain decreased linearly with increasing image accelerations. Although several sensory modalities, including the motion vision pathway, process information in a non-linear fashion, signal integration at stages one to two synapses away from the motor systems, as well as the behavioural output itself, has been shown to be linear. Quantifying the closed-loop dynamics of visuo-motor control at both the behavioural and neuronal levels may provide a starting point for discovering the neural mechanisms by which complementary non-linear processes are combined to yield linear performance of the overall system.
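
    As a rough illustration of the static-gain proportional control law characterized in the thesis, the sketch below (Python) maps deviations of the H1 spike rate from a baseline onto a counter-rotation command for the robot. Only the 50 ms loop delay comes from the abstract; the bin width, baseline rate, gain and function names are illustrative assumptions, not the thesis's actual parameters.

        LOOP_DELAY_S = 0.050   # closed-loop control delay reported above
        BIN_S = 0.010          # spike-count bin width (assumed)

        def spike_rate(spike_times, t, window=BIN_S):
            # Estimate the H1 spike rate (Hz) from spikes within the last
            # `window` seconds before time t.
            recent = [s for s in spike_times if t - window <= s <= t]
            return len(recent) / window

        def proportional_command(rate_hz, baseline_hz=50.0, gain=0.5):
            # Static-gain proportional law: deviation of the spike rate from
            # its baseline drives a counter-rotation command (deg/s). A high
            # gain speeds up the response but, together with the 50 ms loop
            # delay, can produce the closed-loop oscillations described above.
            return -gain * (rate_hz - baseline_hz)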

    An Experimental Platform to Study the Closed-loop Performance of Brain-machine Interfaces

    The non-stationary nature and variability of neuronal signals is a fundamental problem in brain-machine interfacing. We developed a brain-machine interface to assess the robustness of different control laws applied to a closed-loop image stabilization task. Taking advantage of the well-characterized fly visuomotor pathway, we record the electrical activity of an identified motion-sensitive neuron, H1, to control the yaw rotation of a two-wheeled robot. The robot is equipped with two high-speed video cameras providing visual motion input to a fly placed in front of two CRT computer monitors. The activity of the H1 neuron indicates the direction and relative speed of the robot's rotation. The neural activity is filtered and fed back into the steering system of the robot by means of proportional and proportional/adaptive control. Our goal is to test and optimize the performance of various control laws under closed-loop conditions for broader application in other brain-machine interfaces.
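
    A minimal sketch of how the adaptive variant might scale the feedback gain with the observed angular acceleration, the quantity the first abstract links to the FRI's frequency response. The functional form and constants are assumptions for illustration, not the control law actually implemented on the platform.

        def adaptive_gain(angular_accel, k0=0.5, alpha=0.1):
            # Lower the gain for fast transients (large |angular_accel|) and
            # let it approach k0 for slow stimuli, widening the usable
            # bandwidth relative to a single static gain.
            return k0 / (1.0 + alpha * abs(angular_accel))

        def control_command(error, angular_accel):
            # Proportional command with an adaptively scaled gain.
            return -adaptive_gain(angular_accel) * error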

    Modeling visual-based pitch, lift and speed control strategies in hoverflies

    To avoid crashing onto the floor, a free-falling fly needs to trigger its wingbeats quickly and control the orientation of its thrust accurately and swiftly to stabilize its pitch and hence its speed. Behavioural data have suggested that the vertical optic flow produced by the fall and crossing the visual field plays a key role in this anti-crash response. Free-fall behaviour analyses have also suggested that flying insects may not rely on graviception to stabilize their flight. Based on these two assumptions, we have developed a model which accounts for hoverflies' position and pitch orientation recorded in 3D with a fast stereo camera during experimental free falls. Our dynamic model shows that optic-flow-based control combined with closed-loop control of the pitch suffices to stabilize the flight properly. In addition, our model sheds new light on the vision-based feedback control of the fly's pitch, lift and thrust. Since graviceptive cues are possibly not used by flying insects, the use of a vertical reference to control the pitch is discussed, based on the results obtained with a complete dynamic model of a virtual fly falling in a textured corridor. This model would provide a useful tool for understanding more clearly how insects may or may not estimate their absolute attitude.
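
    The core idea can be reduced to a one-dimensional sketch: a falling fly regulates its thrust so that the ventral optic flow (downward speed divided by height above ground) tracks a set point, with no graviceptive input at all. This is a deliberately simplified optic-flow regulator with assumed mass, gain and set-point values, not the authors' full 3D pitch/lift/thrust model.

        G, M, DT = 9.81, 1e-5, 1e-4      # gravity (m/s^2), mass (kg), step (s)

        def step(h, v, k_thrust=2e-4, omega_ref=2.0):
            # Ventral optic flow omega = v / h; thrust rises when the flow
            # exceeds the set point, so the speed v decays in proportion to
            # the height h and the fall is arrested without graviception.
            omega = v / max(h, 1e-3)
            thrust = max(0.0, k_thrust * (omega - omega_ref))
            return h - v * DT, v + (G - thrust / M) * DT

        h, v = 1.0, 0.0                  # released from 1 m, at rest
        while h > 0.01:                  # integrate until near touchdown
            h, v = step(h, v)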

    Aerial Vehicles

    This book contains 35 chapters written by experts in developing techniques for making aerial vehicles more intelligent, more reliable, more flexible in use, and safer in operation. It will also serve as an inspiration for further improvement of the design and application of aerial vehicles. The advanced techniques and research described here may also be applicable to other high-tech areas such as robotics, avionics, vetronics, and space.

    A 64mW DNN-based Visual Navigation Engine for Autonomous Nano-Drones

    Fully-autonomous miniaturized robots (e.g., drones) with artificial intelligence (AI) based visual navigation capabilities are extremely challenging drivers of Internet-of-Things edge intelligence. Visual navigation based on AI approaches, such as deep neural networks (DNNs), is becoming pervasive for standard-size drones, but is considered out of reach for nano-drones with a size of a few cm². In this work, we present the first (to the best of our knowledge) demonstration of a navigation engine for autonomous nano-drones capable of closed-loop, end-to-end, DNN-based visual navigation. To achieve this goal, we developed a complete methodology for the parallel execution of complex DNNs directly on board resource-constrained, milliwatt-scale nodes. Our system is based on GAP8, a novel parallel ultra-low-power computing platform, and a 27 g commercial, open-source CrazyFlie 2.0 nano-quadrotor. As part of our general methodology, we discuss the software mapping techniques that enable the state-of-the-art deep convolutional neural network presented in [1] to be fully executed on board within a strict 6 fps real-time constraint with no compromise in terms of flight results, while all processing is done with only 64 mW on average. Our navigation engine is flexible and can be used to span a wide performance range: at its peak performance corner, it achieves 18 fps while still consuming on average just 3.5% of the power envelope of the deployed nano-aircraft. Comment: 15 pages, 13 figures, 5 tables, 2 listings; accepted for publication in the IEEE Internet of Things Journal (IEEE IOTJ).
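
    The closed-loop requirement in such an engine boils down to a fixed per-frame time budget. The pacing loop below is an illustrative sketch only: grab_frame, run_dnn and send_steering are hypothetical placeholders for the camera driver, the on-board DNN forward pass and the flight-controller interface, and only the 6 fps budget is taken from the abstract.

        import time

        FRAME_BUDGET_S = 1.0 / 6.0   # ~167 ms per frame at 6 fps

        def control_loop(grab_frame, run_dnn, send_steering):
            while True:
                t0 = time.monotonic()
                frame = grab_frame()
                steering, collision_prob = run_dnn(frame)  # DNN forward pass
                send_steering(steering, collision_prob)
                # Sleep off any slack so that commands reach the flight
                # controller at a steady cadence.
                slack = FRAME_BUDGET_S - (time.monotonic() - t0)
                if slack > 0:
                    time.sleep(slack)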

    Flying Drosophila stabilize their vision-based velocity controller by sensing wind with their antennae

    Flies and other insects use vision to regulate their groundspeed in flight, enabling them to fly in varying wind conditions. Compared with mechanosensory modalities, however, vision requires a long processing delay (~100 ms) that might introduce instability if operated at high gain. Flies also sense air motion with their antennae, but how this is used in flight control is unknown. We manipulated the antennal function of fruit flies by ablating their aristae, forcing them to rely on vision alone to regulate groundspeed. Arista-ablated flies in flight exhibited significantly greater groundspeed variability than intact flies. We then subjected them to a series of controlled, impulsive wind gusts delivered by an air piston, and experimentally manipulated antennal and visual feedback. The results show that an antenna-mediated response alters wing motion to cause flies to accelerate in the same direction as the gust. This response opposes flying into a headwind, yet flies regularly fly upwind. To resolve this discrepancy, we obtained a dynamic model of the fly’s velocity regulator by fitting parameters of candidate models to our experimental data. The model suggests that the groundspeed variability of arista-ablated flies is the result of unstable feedback oscillations caused by the delay and high gain of visual feedback. The antennal response provides active damping with a shorter delay (~20 ms) that stabilizes this regulator, in exchange for increasing the effect of rapid wind disturbances. This provides insight into flies’ multimodal sensory feedback architecture and constitutes a previously unknown role for the antennae.
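
    The proposed architecture can be caricatured as a delay-differential equation: the groundspeed error is corrected by slow (~100 ms) visual feedback and damped by fast (~20 ms) antennal feedback (airspeed equals groundspeed here, i.e., still air is assumed). The gains and first-order dynamics below are assumptions, not the paper's fitted model; the sketch only illustrates why removing the fast damping term lets the high-gain, long-delay visual loop oscillate.

        import numpy as np

        DT = 0.001
        TAU_VIS, TAU_ANT = 0.100, 0.020   # feedback delays (s), from above
        K_VIS, K_ANT = 16.0, 20.0         # feedback gains (1/s), assumed

        def simulate(t_end=4.0, antennae=True):
            n = int(t_end / DT)
            v = np.zeros(n)
            v[0] = 1.0                    # initial groundspeed error (m/s)
            for i in range(1, n):
                vis = v[max(i - int(TAU_VIS / DT), 0)]  # delayed visual term
                ant = v[max(i - int(TAU_ANT / DT), 0)] if antennae else 0.0
                v[i] = v[i - 1] - DT * (K_VIS * vis + K_ANT * ant)
            return v

        # Without antennal damping the visual loop alone sits beyond its
        # stability margin (K_VIS * TAU_VIS > pi/2), so v oscillates instead
        # of settling, mirroring the variability of arista-ablated flies.
        v_intact, v_ablated = simulate(antennae=True), simulate(antennae=False)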

    On-board Obstacle Avoidance in the Teleoperation of Unmanned Aerial Vehicles

    The teleoperation of unmanned aerial vehicles (UAVs), especially in cramped, GPS-restricted environments, poses many challenges. The presence of obstacles in an unfamiliar environment requires reliable state estimation and active algorithms to prevent collisions. In this dissertation, we present a collision-free indoor navigation system for a teleoperated quadrotor UAV. The platform is equipped with an on-board miniature computer and a minimal set of sensors, and is self-sufficient with respect to external tracking systems and computation. It is capable of highly accurate state estimation, tracking of the velocity commanded by the user, and collision-free navigation. The robot estimates its state in a cascade architecture: the attitude of the platform is calculated with a complementary filter, and its linear velocity through a Kalman-filter integration of inertial and optical-flow measurements. An RGB-D camera provides visual feedback to the operator and depth measurements to build a probabilistic, robot-centric obstacle state with a bin-occupancy filter. The algorithm tracks obstacles after they leave the field of view of the sensor by updating their positions with the estimate of the robot's motion. The avoidance part of our navigation system is based on the model predictive control approach: by predicting possible future obstacle states, the UAV filters the operator's commands, altering them to prevent collisions. Experiments in obstacle-rich indoor and outdoor environments validate the efficiency of the proposed setup. Flying robots are highly prone to damage in cases of control errors, as these will most likely cause them to fall to the ground; the development of algorithms for UAVs therefore entails a considerable amount of time and resources. In this dissertation we present two simulation methods, software-in-the-loop and hardware-in-the-loop simulation, to facilitate this process. Software-in-the-loop testing was used for the development and tuning of the state estimator, using both simulated sensors and pre-recorded datasets of sensor measurements, e.g., from real robotic experiments. With hardware-in-the-loop simulation, we are able to command a robot simulated in Gazebo, a popular open-source ROS-enabled physical simulator, using the computational units embedded on our quadrotor UAVs; hence we can test in simulation not only the correct execution of algorithms, but also their computational feasibility directly on the robot's hardware. Lastly, we analyze the influence of the robot's motion on the visual feedback provided to the operator. While some UAVs can carry mechanically stabilized camera equipment, weight limits or other constraints may make mechanical stabilization impractical. With a fixed camera, the video stream is often unsteady due to the multirotor's movement and can impair the operator's situation awareness. There has been significant research on stabilizing video by using feature tracking to determine camera movement, which in turn is used to manipulate frames and stabilize the camera stream; we show that this process can be greatly simplified by using data from the UAV's on-board inertial measurement unit. Our results show that our algorithm successfully stabilizes the camera stream with the added benefit of requiring less computational power. We also propose a novel quadrotor design concept that decouples the orientation of the quadrotor from its lateral motion: the tilt angles of the propellers with respect to the quadrotor body are simultaneously controlled with two additional actuators by employing the parallelogram principle. After deriving the dynamic model of this design, we propose a controller for the platform based on feedback linearization. Simulation results confirm our theoretical findings, highlighting the improved motion capabilities of this novel design with respect to standard quadrotors.
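
    A generic sketch of the complementary-filter attitude update used in the cascade described above: the integrated gyro rate is trusted at high frequencies and the accelerometer's gravity direction at low frequencies. The blend constant and function names are assumptions, not the dissertation's implementation.

        import math

        ALPHA = 0.98   # gyro/accelerometer blend factor (assumed)

        def accel_pitch(ax, az):
            # Pitch angle implied by the gravity components measured in the
            # body frame (valid only while external accelerations are small).
            return math.atan2(ax, az)

        def update_pitch(pitch_prev, gyro_rate, ax, az, dt):
            # High-pass the integrated gyro rate and low-pass the
            # accelerometer angle: drift-free at low frequency, responsive
            # at high frequency.
            return (ALPHA * (pitch_prev + gyro_rate * dt)
                    + (1.0 - ALPHA) * accel_pitch(ax, az))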