Insect Bio-inspired Neural Network Provides New Evidence on How Simple Feature Detectors Can Enable Complex Visual Generalization and Stimulus Location Invariance in the Miniature Brain of Honeybees
This work was supported by a Human Frontier Science Program Grant RGP0022/2014 to LC and a Queen Mary University of London Scholarship to MR. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Honeybee visual cognition: a miniature brain's simple solutions to complex problems
PhD
In recent decades we have seen a string of remarkable discoveries detailing the
impressive cognitive abilities of bees (social learning, concept learning and even
counting). But should these discoveries be regarded as spectacular because bees manage
to achieve human-like computations of visual image analysis and reasoning? Here I
offer a radically different explanation. Using theoretical bee brain models and detailed
flight analysis of bees undergoing behavioural experiments I counter the widespread
view that complex visual recognition and classification requires animals to not only
store representations of images, but also perform advanced computations on them.
Using a bottom-up approach I created theoretical models inspired by the known
anatomical structures and neuronal responses within the bee brain and assessed how
much neural complexity is required to accomplish behaviourally relevant tasks. Model
simulations of just eight large-field orientation-sensitive neurons from the optic ganglia
and a single layer of simple neuronal connectivity within the mushroom bodies
(learning centres) generated performance remarkably similar to the empirical results of
real bees in both discrimination and generalisation experiments with orientation patterns.
My models also predicted that complex "above and below" conceptual learning,
often used to exemplify how "clever" bees are, could instead be accomplished by very
simple inspection of the target patterns. Analysis of the bees' flight paths during
training on this task found that bees utilised an even simpler mechanism than anticipated,
demonstrating how the insects use unique and elegant solutions to deal with complex
visual challenges. The true impact of my research is therefore not merely in showing a
model that can solve a particular set of generalisation experiments, but in providing a
fundamental shift in how we should perceive visual recognition problems. Across animals, equally simple neuronal architectures may well underlie the cognitive
affordances that we currently assume to be required for more complex conceptual and
discrimination tasks
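As a rough illustration of the architecture described above (a handful of large-field orientation-sensitive units feeding a single trainable layer), the following sketch pools oriented-edge energy over the whole image into eight channels and trains a one-layer readout on a toy discrimination. The gradient-based filters, delta-rule learning, and grating stimuli are illustrative assumptions, not the thesis model.

```python
import numpy as np

ANGLES = np.arange(8) * np.pi / 8          # eight preferred orientations

def oriented_energy(image, theta):
    """Pool edge energy along orientation theta over the full visual field."""
    gy, gx = np.gradient(image.astype(float))
    # Edges oriented at theta produce gradients perpendicular to theta,
    # so project the gradient onto that perpendicular direction.
    proj = gx * np.cos(theta + np.pi / 2) + gy * np.sin(theta + np.pi / 2)
    return np.mean(proj ** 2)

def encode(image):
    """Eight-dimensional large-field orientation code (normalised)."""
    v = np.array([oriented_energy(image, a) for a in ANGLES])
    return v / (v.sum() + 1e-9)            # crude gain control

def train_readout(patterns, labels, lr=0.5, epochs=200):
    """Single trainable layer (delta rule) standing in for mushroom-body connectivity."""
    w, b = np.zeros(len(ANGLES)), 0.0
    for _ in range(epochs):
        for x, y in zip(patterns, labels):
            err = y - (w @ x + b)
            w += lr * err * x
            b += lr * err
    return w, b

# Toy discrimination: horizontal vs vertical sinusoidal gratings.
xs = np.arange(32)
horizontal = np.sin(xs[:, None] * 0.8) * np.ones((1, 32))
vertical = np.ones((32, 1)) * np.sin(xs[None, :] * 0.8)
w, b = train_readout([encode(horizontal), encode(vertical)], [1.0, -1.0])
```

The point of the sketch is only that eight coarse channels plus one learned layer suffice for this kind of discrimination; no image storage or advanced computation is involved.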
Does spatial navigation have a blind-spot? Visiocentrism is not enough to explain the navigational behavior comprehensively
Preparation of the manuscript was supported by the research grant 2015/19/B/HS1/03310 "Mechanisms of geometric cognition" funded by the National Science Centre, Poland.
In this paper, we argue that the issues described arise not because of a lack of theoretical
inspiration, but rather from an insufficient understanding of the subtleties of insect behavior. In
our view, applying insect models of navigation to explain vertebrate
spatial behavior omits some important aspects, notably multimodal integration. Thus, we want to
revisit the initial question posed by Wystrach and Graham (2012b), pointing to significant
progress in insect research which suggests that we may have underestimated insects'
cognitive abilities (Loukola et al., 2017; Peng and Chittka, 2017). Those results demonstrated
insects' capacity to obtain abstract information from multimodal input during complex tasks.
Movement through a real environment provides a variety of cues, not only visual ones; thus, in
the following article we argue that multimodal integration is crucial to navigation.
Bio-inspired Neural Networks for Angular Velocity Estimation in Visually Guided Flights
Executing delicate flight maneuvers using visual information is a huge challenge
for future robotic vision systems. As a source of inspiration, insects are adept at
navigating through woods and landing on surfaces, tasks which require delicate visual perception
and flight control. The exquisite sensitivity of insects to image motion speed, as revealed recently, arises from a class of specific neurons called descending neurons.
Some of these descending neurons demonstrate angular velocity selectivity as the
image motion speed on the retina varies. Building a quantitative angular velocity detection
model is the first step not only toward further understanding of the biological visual system, but also toward providing robust and economical solutions for visual motion perception
in artificial visual systems. This thesis explores biological image processing
methods for motion speed detection in visually guided flight. The major contributions
are summarized as follows.
We have presented an angular velocity decoding model (AVDM), which estimates
visual motion speed by combining both textural and temporal information from input signals. The model consists of three parts: elementary motion detection circuits, a
wide-field texture estimation pathway, and an angular velocity decoding layer. When first tested with moving sinusoidal gratings, the model
estimates the angular velocity very well, with improved spatial frequency independence
compared to state-of-the-art angular velocity detection models. This spatial independence is vital to account for
the honeybee's flight behaviors. We have also investigated the spatial and temporal
resolutions of honeybee vision to obtain a bio-plausible parameter setting for explaining these
behaviors.
To investigate whether the model can account for observations of tunnel centering
behavior in honeybees, the model has been implemented in a virtual bee simulated in
the game engine Unity. The simulation results of a series of experiments show that the
agent can adjust its position to fly through patterned tunnels by balancing the angular
velocities estimated by both eyes under several circumstances. All tunnel simulations
reproduce behaviors similar to those of real bees, indicating that our model provides
a possible explanation of image velocity estimation and can be used for regulating an MAV's
flight course in tunnels. Moreover, to further verify the robustness of the
model, visually guided terrain-following simulations have been carried out with a
closed-loop control scheme that restores a preset angular velocity during flight. The
simulated flights successfully traverse undulating terrain, verifying the feasibility and robustness of the AVDM in various application scenarios and showing
its potential for micro aerial vehicle terrain following.
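The centring rule described above (steer away from the side whose image moves faster across the eye) can be sketched in a few lines. Here the AVDM estimate is replaced by an idealised omega = v / d (flight speed over wall distance); the tunnel geometry and steering gain are illustrative assumptions, not the thesis parameters.

```python
def centring_step(y, v=1.0, tunnel_width=2.0, gain=0.2):
    """One lateral update for an agent at distance y from the left wall."""
    d_left = max(y, 1e-6)                  # distance to left wall
    d_right = max(tunnel_width - y, 1e-6)  # distance to right wall
    omega_left = v / d_left                # image speed seen by the left eye
    omega_right = v / d_right              # image speed seen by the right eye
    # The nearer wall appears to move faster, so this term pushes
    # the agent toward the midline where the two speeds balance.
    return y + gain * (omega_left - omega_right)

# Starting off-centre, the agent settles onto the tunnel midline.
y = 0.4
for _ in range(200):
    y = centring_step(y)
```

The equilibrium is exactly the midline, since omega_left equals omega_right only when both wall distances are equal; no explicit distance measurement is needed.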
In addition, we have applied the AVDM to grazing landing using only visual
information. An LGMD neuron is also introduced to avoid collision and to trigger the
hover phase, which ensures a safe landing. Applying the honeybee's landing
strategy of keeping the angular velocity constant, we have designed a closed-loop control
scheme with an adaptive gain that controls the landing dynamics using the AVDM response as
input. A series of controlled trials designed in the Unity platform demonstrates
the effectiveness of the proposed model and control scheme for visual landing under
various conditions. The proposed model could be implemented on real small robots
to investigate its robustness in real landing scenarios in the near future.
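The constant-angular-velocity landing strategy can be illustrated with a minimal closed-loop sketch: holding the ventral image angular velocity omega = v / h at a set-point forces speed to shrink in proportion to height, so the vehicle decelerates smoothly toward the surface. The gain, set-point, time step, and hand-off height (standing in for the LGMD-triggered hover phase) are illustrative assumptions, not the thesis values.

```python
def landing_trajectory(h0=10.0, omega_set=0.5, dt=0.01, t_max=60.0):
    """Descend from height h0, regulating omega = v/h with a P-controller."""
    h, v = h0, 0.0
    gain = 5.0   # hypothetical fixed stand-in for the adaptive gain
    t = 0.0
    # Below 0.05 m a hover/touchdown phase would take over (triggered by
    # the LGMD neuron in the thesis), so the visual controller stops there.
    while h > 0.05 and t < t_max:
        omega = v / h                          # perceived ventral angular velocity
        v += gain * (omega_set - omega) * dt   # speed up or slow down to hold omega
        h -= v * dt                            # descend at the current speed
        t += dt
    return h, v

h, v = landing_trajectory()   # reaches the hand-off height nearly stopped
```

Because v tracks omega_set * h, height decays roughly exponentially and speed vanishes with it, which is why the strategy yields soft landings without any explicit height sensing.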
An Insect-Inspired Target Tracking Mechanism for Autonomous Vehicles
Target tracking is a complicated task from an engineering perspective, especially where targets are small and seen against complex natural environments. Due to the high demand for robust target tracking algorithms, a great deal of research has focused on this area. However, most engineering solutions developed for this purpose are either unreliable in real-world conditions or too computationally expensive to be used in real-time applications. While engineering methods try to solve the problem of target detection and tracking by using high-resolution input images, fast processors, and typically computationally expensive methods, a quick glance at nature provides evidence that practical real-world solutions for target tracking exist. Many animals track targets for predation, territorial or mating purposes, and with millions of years of evolution behind them, it seems reasonable to assume that these solutions are highly efficient. For instance, despite their low-resolution compound eyes and tiny brains, many flying insects have evolved superb abilities to track targets in visual clutter, even in the presence of other distracting stimuli such as swarms of prey and conspecifics. The accessibility of the dragonfly for stable electrophysiological recordings makes this insect an ideal and tractable model system for investigating the neuronal correlates of complex tasks such as target pursuit. Studies on dragonflies have identified and characterized a set of neurons likely to mediate target detection and pursuit, referred to as "small target motion detector" (STMD) neurons. These neurons are selective for tiny targets, are velocity-tuned and contrast-sensitive, and respond robustly to targets even against background motion. They exhibit several higher-order properties which may contribute to the dragonfly's ability to pursue prey with over a 97% success rate.
These include the recent electrophysiological observations of response "facilitation" (a slow build-up of response to targets that move on long, continuous trajectories) and "selective attention", a competitive mechanism that selects one target from alternatives. In this thesis, I adopted a bio-inspired approach to develop a solution to the problem of target tracking and pursuit. Directly inspired by recent physiological breakthroughs in understanding the insect brain, I developed a closed-loop target tracking system that uses an active saccadic gaze fixation strategy inspired by insect pursuit. First, I tested this model in virtual-world simulations using MATLAB/Simulink. The results of these simulations show robust performance of this insect-inspired model, achieving high prey-capture success even amid complex background clutter, low contrast, and high relative speed of the pursued prey. Additionally, these results show that the inclusion of facilitation not only substantially improves success even for short-duration pursuits, it also enhances the ability to "attend" to one target in the presence of distracters. This insect-inspired system has a relatively simple image processing strategy compared to state-of-the-art trackers recently developed for computer vision applications. Traditional machine vision approaches incorporate elaborations to handle challenges and non-idealities in natural environments, such as local flicker and illumination changes and non-smooth, non-linear target trajectories. Therefore, the question arises as to whether this insect-inspired tracker can match their performance when given similar challenges. I investigated this question by testing both the efficacy and efficiency of this insect-inspired model in open loop, using a widely used set of videos recorded under natural conditions.
I directly compared the performance of this model with several state-of-the-art engineering algorithms using the same hardware, software environment, and stimuli. The insect-inspired model exhibits robust performance in tracking small moving targets even in very challenging natural scenarios, outperforming the best of the engineered approaches. Furthermore, it operates more efficiently than the other approaches, in some cases dramatically so. The computer vision literature traditionally tests target tracking algorithms only in open loop. However, one of the main purposes of developing these algorithms is implementation in real-time robotic applications. Therefore, it is still unclear how these algorithms might perform in closed-loop real-world applications, where the inclusion of sensors and actuators on a physical robot introduces additional latency which can affect the stability of the feedback process. Additionally, studies show that animals interact with the target by changing eye or body movements, which then modulate the visual inputs underlying the detection and selection task (via closed-loop feedback). This active vision system may be key to how the simple insect brain exploits visual information for complex tasks such as target tracking. Therefore, I implemented this insect-inspired model, along with insect active vision, on a robotic platform. I tested this robotic implementation in both indoor and outdoor environments against the challenges that exist in real-world conditions, such as vibration, illumination variation, and distracting stimuli. The experimental results show that the robotic implementation is capable of handling these challenges and robustly pursuing a target even in highly challenging scenarios.
Thesis (Ph.D.) -- University of Adelaide, School of Mechanical Engineering, 201
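The "facilitation" property described above (a response gain that builds up along long, continuous target trajectories) can be sketched as a decaying gain map that is locally boosted at each detection. The spread, decay, and boost constants here are illustrative assumptions, not values fitted to the recordings.

```python
import numpy as np

def make_facilitation(shape, decay=0.9, boost=0.5, sigma=2.0):
    """Return an update function maintaining a multiplicative gain map."""
    gain = np.ones(shape)
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]

    def update(detection):
        nonlocal gain
        r, c = detection
        # Gaussian bump of extra gain around the latest detection.
        bump = np.exp(-((yy - r) ** 2 + (xx - c) ** 2) / (2 * sigma ** 2))
        # Everywhere else the gain relaxes back toward the baseline of 1.
        gain = 1.0 + decay * (gain - 1.0) + boost * bump
        return gain

    return update

# A target sweeping smoothly along one row accumulates local gain,
# while remote locations stay near baseline.
update = make_facilitation((32, 32))
for step in range(10):
    gain = update((16, 5 + step))
```

A target on a continuous trajectory keeps landing inside its own boosted region, so its responses grow over time, while a transient distracter elsewhere sees only baseline gain; this is the advantage the simulations above attribute to facilitation.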
Bumblebees Use Sequential Scanning of Countable Items in Visual Patterns to Solve Numerosity Tasks.
Most research in comparative cognition focuses on measuring whether animals manage certain tasks; fewer studies explore how animals might solve them. We investigated bumblebees' scanning strategies in a numerosity task, distinguishing patterns with two items from four and one from three, and subsequently transferring numerical information to novel numbers, shapes, and colors. Video analyses of flight paths indicate that bees do not determine the number of items by a rapid assessment of number (as mammals do in "subitizing"); instead, they rely on sequential enumeration even when items are presented simultaneously and in small quantities. This process, equivalent to the motor tagging ("pointing") found for large-number tasks in some primates, results in longer scanning times for patterns containing larger numbers of items. Bees used a highly accurate working memory, remembering which items had already been scanned, such that fewer than 1% of items were re-inspected before a decision was made. Our results indicate that the small brain of bees, with less parallel processing capacity than mammals, might constrain them to use sequential pattern evaluation even for small quantities.
The neural mechanisms underlying bumblebee visual learning and memory
PhD
Learning and memory offer animals the ability to modify their behavior in response to
changes in the environment. A central goal of neuroscience is to understand the mechanisms
underlying learning, memory formation and memory maintenance. Honeybees and
bumblebees exhibit remarkable learning and memory abilities with a small brain, which
makes them popular models for studying the neurobiological basis of learning and memory.
However, almost all previous molecular-level research on bees' learning and memory
has focused on the olfactory domain. Our understanding of the neurobiological basis
underlying bee visual learning and memory is limited. In this thesis, I explore how synaptic
organization and gene expression change in the context of visual learning.
In Chapter 2, I investigate the effects of color learning and experience on synaptic
connectivity and find that color learning results in an increased density of synaptic
complexes (microglomeruli; MG), while exposure to color information may play a large
role in experience-dependent increases in microglomerular density. In addition,
microglomerular surface area increases as a result of long-term memory formation. In
Chapter 3, I investigate the correlations between synaptic organization and individual
performance; the results show that bees with a higher density of microglomeruli in
visual association areas of the brain are predisposed to faster learning and better long-term
memory during a visual discrimination task. In Chapter 4, I explore the genes involved in
visual learning and memory via transcriptome sequencing and show the distinct gene
expression patterns at different times after visual learning.
In summary, my findings shed light on the relationship between synaptic connections and
visual learning and memory in bees at the group and individual levels and reveal new
candidate genes involved in visual learning, providing new avenues for future study.
China Scholarship Council and Queen Mary University of London