Biolocomotion Detection in Videos
Animals locomote for various reasons: to search for food, to find suitable habitat, to pursue prey, to escape from predators, or to seek a mate. The grand scale of biodiversity contributes to great diversity in locomotory design and mode. In this dissertation, the locomotion of general biological species is referred to as biolocomotion. The goal of this dissertation is to develop a computational approach to detect biolocomotion in any unprocessed video.
The ways biological entities locomote through an environment are extremely diverse: various creatures make use of legs, wings, fins, and other means to move through the world. Significantly, the motion exhibited by the body parts used to navigate through an environment can be modelled as a combination of an overall positional advance with an overlaid asymmetric oscillatory pattern, a distinctive signature that tends to be absent in non-biological objects in locomotion. In this dissertation, this key trait of positional advance with asymmetric oscillation, along with differences between an object's common motion (extrinsic motion) and the localized motion of its parts (intrinsic motion), is exploited to detect biolocomotion. In particular, a computational algorithm is developed to measure the presence of these traits in tracked objects and determine whether they correspond to a biological entity in locomotion. An alternative algorithm, based on generic handcrafted features combined with learning and assembled from components of allied areas of investigation, is also presented as a basis of comparison to the main proposed algorithm.
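As a rough illustration of the signature described above (a toy sketch, not the dissertation's actual algorithm), a tracked point's trajectory can be split into a linear positional advance plus a residual oscillation, and the oscillation's asymmetry summarized by the skewness of its velocity:

```python
import numpy as np

def biolocomotion_cues(xs, dt=1.0):
    """Toy cues for 'positional advance with asymmetric oscillation'.
    xs: 1-D positions of a tracked point over time. Returns
    (advance_rate, oscillation_strength, velocity_skew)."""
    t = np.arange(len(xs)) * dt
    slope, intercept = np.polyfit(t, xs, 1)   # overall positional advance
    residual = xs - (slope * t + intercept)   # overlaid oscillation
    strength = residual.std()
    v = np.diff(residual) / dt                # velocity of the oscillation
    skew = ((v - v.mean()) ** 3).mean() / (v.std() ** 3 + 1e-12)
    return slope, strength, skew

# A sawtooth-like gait: steady advance with slow build-up and abrupt reset.
t = np.arange(200)
walker = 0.1 * t + np.mod(t, 10) / 10.0
adv, osc, skew = biolocomotion_cues(walker)

# A rigid object at constant velocity shows essentially no oscillation.
_, osc_rigid, _ = biolocomotion_cues(0.1 * t)
```

The gait-like track yields a clear positional advance, substantial oscillation, and strongly skewed (asymmetric) oscillation velocity, whereas the constant-velocity track yields oscillation strength near zero.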
A novel biolocomotion dataset encompassing a wide range of moving biological and non-biological objects in natural settings is provided. Additionally, biolocomotion annotations for an extant camouflaged-animals dataset are provided. Quantitative results indicate that the proposed algorithm considerably outperforms the alternative approach, supporting the hypothesis that biolocomotion can be detected reliably from its distinct signature of positional advance with asymmetric oscillation and extrinsic/intrinsic motion dissimilarity.
Collision Avoidance for UAVs Using Optic Flow Measurement with Line of Sight Rate Equalization and Looming
A series of simplified scenarios is investigated whereby an optical-flow-balancing guidance law is used to steer an air vehicle between fixed obstacles. These obstacles are registered as specific points that can be representative of features in a scene, and they appear in the field of view of a single forward-looking camera. First, a 2-D analysis is presented in which the rate of the line of sight (LOS) from the vehicle to each of the obstacles to be avoided is measured. The analysis proceeds by initially assuming no field-of-view (FOV) limitations, then applying FOV restrictions, and then adding features or obstacles to the scene. These analyses show that a guidance law that equalizes the line-of-sight rates with no FOV limitations actually steers the vehicle into one of the objects for all initial conditions. The research next develops an obstacle avoidance strategy based on equilibrating the optic flow generated by the obstacles, and presents an analysis that leads to a different conclusion: balancing the optic flows does avoid the obstacles. The paper then describes a set of guidance methods that, with real FOV limitations, produce a favorable result. Finally, the looming of an object in the camera's FOV can be measured and used to synthesize a collision avoidance guidance law. For the simple 2-D case, looming is quantified as an increase in the LOS angle between two features on a wall in front of the air vehicle. The 2-D guidance law for equalizing the optic flow and detecting looming is then extended to the 3-D case, and a set of 3-D scenarios is further explored using a decoupled two-channel approach. In addition, a comparison of two image segmentation techniques used to find optic flow vectors is presented.
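The flow-balancing idea can be sketched in its simplest form, the classic corridor-centering case rather than the paper's point-obstacle scenarios (the corridor geometry, gains, and damping term below are illustrative assumptions): translational optic flow from a wall scales as speed over lateral distance, and turning away from the side with the larger flow centers the vehicle.

```python
import math

def centering_sim(y0, steps=400, v=1.0, half_width=10.0, k=5.0, c=0.5, dt=0.1):
    """Steer down a corridor with walls at y = +/- half_width by balancing
    the lateral optic flow (~ v / lateral distance) from the two walls."""
    y, heading = y0, 0.0
    for _ in range(steps):
        flow_left = v / (half_width - y)    # wall at y = +half_width
        flow_right = v / (half_width + y)   # wall at y = -half_width
        # Turn away from the larger flow; c*heading damps the oscillation.
        heading += (-k * (flow_left - flow_right) - c * heading) * dt
        heading = max(-1.0, min(1.0, heading))  # limited turn authority
        y += v * math.sin(heading) * dt
    return y

# Starting well off-center on either side, the vehicle settles near y = 0.
y_end = centering_sim(6.0)
```

Near the centerline this reduces to a damped oscillator in the lateral offset, which is why the vehicle converges rather than weaving indefinitely.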
Exploring behavioral circuits with holographic optogenetics and network imaging
Included works: This thesis contains three previously published works: Semmelhack, Donovan, et al., eLife 2014; Temizer, Donovan, et al., Current Biology 2015; Thiele, Donovan, Baier, Neuron 2014; and one full manuscript, soon to be in the second round of review: Dal Maschio*, Donovan*, et al. (*equal contributions). The work presented in these manuscripts is equivalent to a standard thesis.
3D motion: encoding and perception
The visual system supports perception and inferences about events in a dynamic, three-dimensional (3D) world. While remarkable progress has been made in the study of visual information processing, the existing paradigms for examining visual perception and its relation to neural activity often fail to generalize to perception in the real world which has complex dynamics and 3D spatial structure. This thesis focuses on the case of 3D motion, developing dynamic tasks for studying visual perception and constructing a neural coding framework to relate neural activity to perception in a 3D environment.
First, I introduce target-tracking as a psychophysical method and develop an analysis framework based on state-space models and the Kalman filter. I demonstrate that target-tracking in conjunction with a Kalman filter analysis framework produces estimates of visual sensitivity comparable to those obtained with a traditional forced-choice task and a signal detection theory analysis. Next, I use the target-tracking paradigm in a series of experiments examining 3D motion perception, specifically comparing the perception of frontoparallel motion with the perception of motion-through-depth. I find that continuous tracking of motion-through-depth is selectively impaired, owing to the relatively small retinal projections produced by motion-through-depth and to the slower processing of binocular disparities.
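The flavor of a Kalman filter tracking analysis can be sketched for a scalar random-walk target (illustrative parameters, not the thesis's actual model; in this style of analysis the observation-noise variance r plays the role of the observer's perceptual noise):

```python
import numpy as np

rng = np.random.default_rng(0)

def kalman_track(obs, q=1.0, r=4.0):
    """Scalar Kalman filter for a random-walk state:
    x_t = x_{t-1} + w_t (var q), observed as y_t = x_t + v_t (var r)."""
    xhat, p = 0.0, 1.0
    estimates = []
    for y in obs:
        p = p + q                     # predict: state variance grows
        k = p / (p + r)               # Kalman gain
        xhat = xhat + k * (y - xhat)  # update toward the observation
        p = (1 - k) * p
        estimates.append(xhat)
    return np.array(estimates)

n = 2000
x = np.cumsum(rng.normal(0, 1.0, n))   # target position (random walk, q = 1)
y = x + rng.normal(0, 2.0, n)          # noisy 'perceptual' observations (r = 4)
xhat = kalman_track(y, q=1.0, r=4.0)
```

The filtered estimate has lower error than the raw observations; fitting r to a subject's continuous tracking data is, in spirit, how tracking behavior is converted into a sensitivity estimate.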
The thesis then turns to the neural representation of 3D motion and how it underlies perception. First, I introduce a theoretical framework that extends the standard neural coding approach by incorporating the environment-to-retina transformation. Neural coding typically treats the visual stimulus as a direct proxy for the pattern of stimulation that falls on the retina. Incorporating the environment-to-retina transformation results in a neural representation fundamentally shaped by the projective geometry of the world onto the retina. This model explains substantial anomalies in existing neurophysiological recordings from primate visual cortical neurons during presentations of 3D motion, and in psychophysical studies of human perception. In a series of psychophysical experiments, I systematically examine the model's predictions for human perception by observing how perceptual performance changes as a function of viewing distance and eccentricity. Performance in these experiments suggests a reliance on a neural representation similar to the one described by the model.
Taken together, the experimental and theoretical findings reported here advance the understanding of the neural representation and perception of the dynamic 3D world, and add to the behavioral tools available to vision scientists.
Mapping nonlinear receptive field structure in primate retina at single cone resolution
The function of a neural circuit is shaped by the computations performed by its interneurons, which in many cases are not easily accessible to experimental investigation. Here, we elucidate the transformation of visual signals flowing from the input to the output of the primate retina, using a combination of large-scale multi-electrode recordings from an identified ganglion cell type, visual stimulation targeted at individual cone photoreceptors, and a hierarchical computational model. The results reveal nonlinear subunits in the circuitry of OFF midget ganglion cells, which subserve high-resolution vision. The model explains light responses to a variety of stimuli more accurately than a linear model, including stimuli targeted to cones within and across subunits. The recovered model components are consistent with the known anatomical organization of midget bipolar interneurons. These results reveal the spatial structure of linear and nonlinear encoding, at the resolution of single cells and at the scale of complete circuits.
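A toy contrast of the two model classes (not the paper's full hierarchical model): a linear receptive field sums its inputs before any output nonlinearity, whereas a subunit model rectifies each subunit's pooled input first, so it responds to patterns whose halves cancel in a linear sum.

```python
import numpy as np

def linear_rf(s):
    """Linear receptive field: sum all inputs, rectify only at the output."""
    return max(0.0, s.sum())

def subunit_rf(s, half=2):
    """Subunit model: each half of the input is pooled and rectified before
    summation, like bipolar-cell subunits feeding a ganglion cell."""
    return max(0.0, s[:half].sum()) + max(0.0, s[half:].sum())

# A contrast-reversing pattern: the two halves modulate in antiphase.
stim = np.array([+1.0, +1.0, -1.0, -1.0])
lin = linear_rf(stim)        # the halves cancel in the linear sum
sub = subunit_rf(stim)       # rectified subunits still respond
sub_rev = subunit_rf(-stim)  # ...and to the reversed phase as well
```

The subunit model responds to both phases of the reversing pattern while the linear model responds to neither, the classic signature used to detect nonlinear spatial summation.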
A bio-inspired computational model for motion detection
Doctoral Thesis (Doctoral Programme in Biomedical Engineering). Recent years have witnessed considerable interest in research dedicated to showing that solutions to challenges in autonomous robot navigation can be found by taking inspiration from biology.
Despite their small size and relatively simple nervous systems, insects have evolved vision systems able to perform the computations required for safe navigation in dynamic and unstructured environments, using simple, elegant and computationally efficient strategies. Invertebrate neuroscience thus provides engineers with many neural circuit diagrams that can potentially be used to solve complicated engineering control problems.
One major and as yet unsolved problem encountered by visually guided robotic platforms is collision avoidance in complex, dynamic environments with inconstant lighting. The main aim of this dissertation is to draw inspiration from recent and future findings on insect collision avoidance in dynamic environments, and on the visual light-adaptation strategies applied by diurnal insects, to develop a computationally efficient model for robotic control able to work even in adverse light conditions.
We first present a comparative analysis of three leading collision avoidance models based on a neural pathway responsible for signalling collisions, the Lobula Giant Movement Detector / Descending Contralateral Movement Detector (LGMD/DCMD), found in the locust visual system. The models are described and simulated, and the results are compared with biological data from the literature.
Because little is known about how this collision-detecting neuron deals with dynamic environments, new visual stimuli were developed. Locusts (Locusta migratoria) were stimulated with computer-generated discs that travelled along a combination of non-colliding and colliding trajectories, placed over one static and two distinct moving backgrounds, while DCMD activity was simultaneously recorded extracellularly.
Based on these results, an innovative model was developed. This model was tested in specially designed computer simulations replicating the visual conditions used for the biological recordings, and is shown to be sufficient to give rise to the experimentally observed neural responses.
Using a different approach, and based on recent findings, we present a direct method for estimating potential collisions through sequential computation of the image's power spectrum. This approach has been implemented on a real robotic platform, showing that distance-dependent variations in image statistics are likely to be functionally significant.
Maintaining collision detection performance at lower light levels is not a trivial task. Nevertheless, some insect visual systems have developed strategies to optimize visual performance over a wide range of light intensities. In this dissertation we address the neural adaptation mechanisms responsible for improving light capture in a day-active insect, the bumblebee Bombus terrestris. Behavioural analyses enabled us to investigate and draw inferences about the extent of spatial and temporal neural summation applied by these insects to improve image reliability at different light levels.
As future work, the collision avoidance model may be coupled with a bio-inspired
light adaptation mechanism and used for robotic autonomous navigation.
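The idea that image statistics vary with distance can be illustrated with a one-dimensional sketch (a single sinusoidal "texture" and a power-weighted mean frequency stand in for the dissertation's actual image power-spectrum computation):

```python
import numpy as np

def spectral_centroid(profile):
    """Power-weighted mean spatial frequency of a 1-D luminance profile."""
    f = np.fft.rfftfreq(profile.size)                      # cycles per pixel
    p = np.abs(np.fft.rfft(profile - profile.mean())) ** 2  # power spectrum
    return (f * p).sum() / p.sum()

x = np.linspace(0, 1, 512, endpoint=False)
far = np.sin(2 * np.pi * 32 * x)   # distant surface: fine image texture
near = np.sin(2 * np.pi * 8 * x)   # same texture magnified by approach
```

As a textured surface looms, its image features enlarge and spectral power shifts toward lower spatial frequencies, so a sequential drop in this centroid can flag an approaching obstacle.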
Motor patterns during active electrosensory acquisition
Hofmann V, Geurten B, Sanguinetti-Scheck JI, Gomez-Senna L, Engelmann J. Motor patterns during active electrosensory acquisition. Frontiers in Behavioral Neuroscience. 2014;8:186. Motor patterns displayed during active electrosensory acquisition of information seem to be an essential part of a sensory strategy by which weakly electric fish actively generate and shape sensory flow. These active sensing strategies are expected to adaptively optimize ongoing behavior with respect to either motor efficiency or the sensory information gained. The tight link between the motor domain and sensory perception in active electrolocation makes weakly electric fish such as Gnathonemus petersii an ideal system for studying sensory-motor interactions in the form of active sensing strategies. Analyzing the movements and electric signals of solitary fish during unrestrained exploration of objects in the dark, we here present the first formal quantification of the motor patterns used by fish during electrolocation. Based on a cluster analysis of the kinematic values, we categorized the basic units of motion. These were then analyzed for their associative grouping to identify and extract short coherent chains of behavior. This enabled the description of sensory behavior at different levels of complexity: from single movements, through short behaviors, to more complex behavioral sequences during which the kinematics alternate between different behaviors. We present detailed data for three classified patterns and provide evidence that these can be considered motor components of active sensing strategies. In accordance with the idea of active sensing strategies, we found the categorical motor patterns to be modified by the sensory context. In addition, these motor patterns were linked with changes in temporal sampling, in the form of differing electric organ discharge frequencies, and with differing spatial distributions.
The ability to detect such strategies quantitatively will allow future research to investigate the impact of such behaviors on sensing
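The cluster-analysis step can be sketched with a minimal k-means on made-up kinematic feature vectors (the features, category names, and parameters here are hypothetical, chosen only to show how basic units of motion might be categorized):

```python
import numpy as np

rng = np.random.default_rng(1)

def kmeans(X, k=2, iters=50):
    """Minimal k-means (Lloyd's algorithm) for k = 2: initialize with the
    first point and the point farthest from it, then iterate."""
    centers = np.stack([X[0], X[((X - X[0]) ** 2).sum(1).argmax()]])
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None]) ** 2).sum(-1)  # squared distances
        labels = d.argmin(1)                                # nearest center
        centers = np.stack([X[labels == j].mean(0) for j in range(k)])
    return labels, centers

# Hypothetical kinematic samples (forward speed, yaw rate) for two
# made-up movement categories, e.g. 'cruising' vs 'turning'.
cruise = rng.normal([2.0, 0.1], 0.2, (100, 2))
turn = rng.normal([0.5, 1.5], 0.2, (100, 2))
X = np.vstack([cruise, turn])
labels, centers = kmeans(X)
```

With well-separated categories the recovered labels align with the generating groups; chains of such labels over time are what get analyzed as behavioral sequences.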
Change blindness: eradication of gestalt strategies
Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task where there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this we changed the spatial position of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4)=2.565, p=0.185]. This may suggest two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) it gives further weight to the argument that objects may be stored in and retrieved from a pre-attentional store during this task.
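The spoke-shift manipulation is simple polar geometry: each rectangle keeps its angle about the central fixation point while its eccentricity changes by ±1 degree. A sketch (positions in degrees of visual angle; the 4-degree baseline eccentricity is an assumed example, not a value from the study):

```python
import math

def shift_along_spoke(x, y, delta):
    """Move a point along its imaginary spoke from fixation at (0, 0):
    eccentricity changes by delta while the polar angle is preserved."""
    r = math.hypot(x, y)
    theta = math.atan2(y, x)
    return (r + delta) * math.cos(theta), (r + delta) * math.sin(theta)

# Eight rectangles equally spaced on a circle of 4 deg eccentricity,
# each shifted outward by +1 deg for the second presentation.
positions = [(4 * math.cos(2 * math.pi * i / 8),
              4 * math.sin(2 * math.pi * i / 8)) for i in range(8)]
shifted = [shift_along_spoke(px, py, +1.0) for px, py in positions]
```

The shift disrupts the global configuration (inter-item distances change) while leaving each item on its original radial line, which is what makes it a test of Gestalt grouping strategies.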
Proceedings of the 3rd International Mobile Brain/Body Imaging Conference : Berlin, July 12th to July 14th 2018
The 3rd International Mobile Brain/Body Imaging (MoBI) conference in Berlin 2018 brought together researchers from various disciplines interested in understanding the human brain in its natural environment and during active behavior. MoBI is a new imaging modality that employs mobile brain imaging methods, such as the electroencephalogram (EEG) or near-infrared spectroscopy (NIRS), synchronized to motion capture and other data streams to investigate brain activity while participants actively move in and interact with their environment. Mobile Brain/Body Imaging makes it possible to investigate the brain dynamics accompanying more natural cognitive and affective processes, as it allows humans to interact with the environment without restrictions on physical movement. By overcoming the movement restrictions of established imaging modalities such as functional magnetic resonance imaging (fMRI), MoBI can provide new insights into human brain function in mobile participants. This imaging approach will lead to new insights into the brain functions underlying active behavior and the impact of behavior on brain dynamics and vice versa, and it can be used for the development of more robust human-machine interfaces as well as for state assessment in mobile humans. Funding: DFG, GR2627/10-1, 3rd International MoBI Conference 201
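The core synchronization step, bringing EEG and motion-capture streams onto a common time base, can be sketched as simple resampling (the sampling rates and shared clock zero below are illustrative assumptions; real MoBI setups align hardware clocks, e.g. via Lab Streaming Layer):

```python
import numpy as np

# Hypothetical recordings: EEG at 500 Hz and motion capture at 120 Hz,
# here assumed to share a clock zero for simplicity.
t_eeg = np.arange(0, 2, 1 / 500)
eeg = np.sin(2 * np.pi * 10 * t_eeg)   # stand-in 10 Hz EEG trace
t_mocap = np.arange(0, 2, 1 / 120)

# Resample the EEG onto the motion-capture time base for joint analysis.
eeg_on_mocap = np.interp(t_mocap, t_eeg, eeg)
```

Once both streams share one time base, each motion-capture frame can be paired with the concurrent brain activity sample for event-related or continuous analyses.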