1,115 research outputs found

    A bio-inspired computational model for motion detection

    Doctoral thesis (Programa Doutoral em Engenharia Biomédica). Recent years have witnessed considerable interest in research dedicated to showing that solutions to challenges in autonomous robot navigation can be found by taking inspiration from biology. Despite their small size and relatively simple nervous systems, insects have evolved vision systems able to perform the computations required for safe navigation in dynamic and unstructured environments, using simple, elegant and computationally efficient strategies. Invertebrate neuroscience therefore provides engineers with many neural circuit diagrams that can potentially be used to solve complicated engineering control problems. One major and as yet unsolved problem encountered by visually guided robotic platforms is collision avoidance in complex, dynamic environments under variable lighting. The main aim of this dissertation is to draw inspiration from recent and future findings on insect collision avoidance in dynamic environments, and from the visual light-adaptation strategies of diurnal insects, to develop a computationally efficient model for robotic control that works even in adverse light conditions. We first present a comparative analysis of three leading collision avoidance models based on a neural pathway responsible for signalling collisions, the Lobula Giant Movement Detector/Descending Contralateral Movement Detector (LGMD/DCMD), found in the locust visual system. The models are described and simulated, and the results are compared with biological data from the literature. Owing to the lack of information on how this collision-detecting neuron deals with dynamic environments, new visual stimuli were developed. Locusts (Locusta migratoria) were stimulated with computer-generated discs that travelled along combinations of non-colliding and colliding trajectories, over a static background and two distinct moving backgrounds, while DCMD activity was simultaneously recorded extracellularly.
    Based on these results, an innovative model was developed. The model was tested in specially designed computer simulations replicating the visual conditions used for the biological recordings, and is shown to be sufficient to give rise to the experimentally observed neural responses. Using a different approach, and based on recent findings, we present a direct method for estimating potential collisions through sequential computation of the power spectra of captured images. This approach has been implemented on a real robotic platform, showing that distance-dependent variations in image statistics are likely to be functionally significant. Maintaining collision detection performance at lower light levels is not a trivial task. Nevertheless, some insect visual systems have developed several strategies to optimize visual performance over a wide range of light intensities. In this dissertation we address the neural adaptation mechanisms responsible for improving light capture in a day-active insect, the bumblebee Bombus terrestris. Behavioural analyses enabled us to investigate and infer the extent of the spatial and temporal neural summation applied by these insects to improve image reliability at different light levels. As future work, the collision avoidance model may be coupled with a bio-inspired light adaptation mechanism and used for autonomous robot navigation.
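    The power-spectrum approach described above can be sketched in a few lines. This is a toy illustration under our own assumptions (1-D luminance rows, a naive DFT, and an arbitrary growth threshold), not the dissertation's implementation: as an object looms it expands in the image, concentrating spectral power at low spatial frequencies.

```python
import cmath

def power_spectrum(signal):
    """Naive DFT power spectrum of a 1-D luminance row (fine for short rows)."""
    n = len(signal)
    return [abs(sum(signal[t] * cmath.exp(-2j * cmath.pi * f * t / n)
                    for t in range(n))) ** 2
            for f in range(n // 2 + 1)]

def low_freq_fraction(signal, cutoff=3):
    """Fraction of (DC-excluded) spectral power held by frequencies below `cutoff`."""
    spec = power_spectrum(signal)
    total = sum(spec[1:]) or 1.0
    return sum(spec[1:cutoff]) / total

def looming_alarm(frames, growth=1.5):
    """Flag a potential collision when the low-frequency power fraction
    grows by more than `growth` between consecutive frames."""
    fracs = [low_freq_fraction(f) for f in frames]
    return any(b > growth * a for a, b in zip(fracs, fracs[1:]) if a > 0)
```

    A wide dark bar on a bright row holds a larger low-frequency fraction than a narrow one, so consecutive frames of an expanding (approaching) object trip the alarm while a shrinking object does not.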

    Biomimetic vision-based collision avoidance system for MAVs.

    This thesis proposes a secondary collision avoidance algorithm for micro aerial vehicles based on the luminance-difference processing exhibited by the Lobula Giant Movement Detector (LGMD), a wide-field visual neuron located in the lobula layer of the locust's nervous system. In particular, we address the design, modulation, hardware implementation, and testing of a computationally simple yet robust collision avoidance algorithm based on the novel concept of quadfurcated luminance-difference processing (QLDP). The micro and nano classes of unmanned robots are the primary target applications of this algorithm; however, it could also be implemented on advanced robots as a fail-safe redundant system. The algorithm addresses some of the major detection challenges, such as obstacle proximity, collision threat potentiality, and contrast correction within the robot's field of view, to generate a precise yet simple collision-free motor control command in real time. Additionally, it has proven effective in detecting edges independently of background or obstacle colour, size, and contour. To achieve this, the proposed QLDP executes a series of image enhancement and edge detection algorithms to estimate the collision threat level (spike), which determines whether the robot's field of view must be dissected into four quarters, where each quadrant's response is analysed and interpreted against the others to determine the most secure path.
    Ultimately, the computational load and the performance of the model are assessed against an eclectic set of offline as well as real-time, real-world collision scenarios in order to validate the proposed model's capability to avoid obstacles at more than 670 mm prior to collision (real-world) while moving at 1.2 m s⁻¹, with a successful avoidance rate of 90% and processing at a frequency of 120 Hz, which to the best of our knowledge is much superior to the results reported in the contemporary related literature.
    MSc by Research
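    The quadrant logic described above can be caricatured as follows. Every name and threshold here is our own simplification of the QLDP idea (inter-frame luminance difference standing in for the "spike" cue), not the thesis's algorithm:

```python
def frame_diff_energy(prev, curr):
    """Sum of absolute per-pixel luminance differences between two frames."""
    return sum(abs(a - b) for row_p, row_c in zip(prev, curr)
               for a, b in zip(row_p, row_c))

def quadrants(frame):
    """Split a 2-D frame (list of rows) into NW, NE, SW, SE quadrants."""
    h, w = len(frame) // 2, len(frame[0]) // 2
    return {
        "NW": [row[:w] for row in frame[:h]],
        "NE": [row[w:] for row in frame[:h]],
        "SW": [row[:w] for row in frame[h:]],
        "SE": [row[w:] for row in frame[h:]],
    }

def steer(prev, curr, spike_threshold=4.0):
    """Return 'ahead' while inter-frame change stays low; otherwise dissect
    the view and pick the quadrant with the least luminance change as the
    most secure path."""
    if frame_diff_energy(prev, curr) < spike_threshold:
        return "ahead"
    qp, qc = quadrants(prev), quadrants(curr)
    return min(qp, key=lambda k: frame_diff_energy(qp[k], qc[k]))
```

    A looming obstacle in one quadrant raises that quadrant's difference energy, so the command steers toward one of the quieter quadrants.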

    Ultra high frequency (UHF) radio-frequency identification (RFID) for robot perception and mobile manipulation

    Personal robots with autonomy, mobility, and manipulation capabilities have the potential to dramatically improve quality of life for various user populations, such as older adults and individuals with motor impairments. Unfortunately, unstructured environments present many challenges that hinder robot deployment in ordinary homes. This thesis seeks to address some of these challenges through a new robotic sensing modality that leverages a small amount of environmental augmentation in the form of Ultra High Frequency (UHF) Radio-Frequency Identification (RFID) tags. Previous research has demonstrated the utility of infrastructure tags (affixed to walls) for robot localization; in this thesis, we specifically focus on tagging objects. Owing to their low cost and passive (battery-free) operation, users can apply UHF RFID tags to hundreds of objects throughout their homes. The tags provide two valuable properties for robots: a unique identifier and a received signal strength indicator (RSSI, the strength of a tag's response). This thesis explores robot behaviors and radio frequency perception techniques using robot-mounted UHF RFID readers that enable a robot to efficiently discover, locate, and interact with UHF RFID tags applied to objects and people of interest. The behaviors and algorithms explicitly rely on the robot's mobility and manipulation capabilities to provide multiple opportunistic views of the complex electromagnetic landscape inside a home environment. The electromagnetic properties of RFID tags change when they are applied to common household objects: objects can have varied material properties, can be placed in diverse orientations, and can be relocated to completely new environments. We present a new class of optimization-based techniques for RFID sensing that are robust to the variation in tag performance caused by these complexities.
    We discuss a hybrid global-local search algorithm in which a robot employing long-range directional antennas searches for tagged objects by maximizing expected RSSI measurements; that is, the robot attempts to position itself (1) near a desired tagged object and (2) oriented towards it. The robot first performs a sparse, global RFID search to locate a pose in the neighborhood of the tagged object, followed by a series of local search behaviors (bearing estimation and RFID servoing) to refine the robot's state within the local basin of attraction. We report on RFID search experiments performed in Georgia Tech's Aware Home (a real home). Our optimization-based approach yields superior performance compared to state-of-the-art tag localization algorithms, does not require RF sensor models, is easy to implement, and generalizes to other short-range RFID sensor systems embedded in a robot's end effector. We demonstrate proof-of-concept applications, such as medication delivery and multi-sensor fusion, using these techniques. Through our experimental results, we show that UHF RFID is a complementary sensing modality that can assist robots in unstructured human environments.
    PhD
    Committee Chair: Kemp, Charles C.; Committee Member: Abowd, Gregory; Committee Member: Howard, Ayanna; Committee Member: Ingram, Mary Ann; Committee Member: Reynolds, Matt; Committee Member: Tentzeris, Emmanouil
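    The global-then-local RSSI maximization can be illustrated with a toy simulation. Note that the thesis's approach explicitly avoids requiring an RF sensor model; the cosine antenna pattern and quadratic path loss below are purely our assumptions, used only to make the sketch self-contained:

```python
import math

def rssi(robot_xy, heading, tag_xy):
    """Toy RSSI model (an assumption, not the thesis's sensor model): signal
    falls off with distance and with misalignment of the directional antenna."""
    dx, dy = tag_xy[0] - robot_xy[0], tag_xy[1] - robot_xy[1]
    dist = math.hypot(dx, dy) or 1e-9
    bearing = math.atan2(dy, dx)
    misalign = abs((bearing - heading + math.pi) % (2 * math.pi) - math.pi)
    gain = max(0.0, math.cos(misalign))          # directional antenna pattern
    return gain / (1.0 + dist ** 2)              # path loss

def global_search(poses, tag_xy):
    """Sparse global step: pick the sampled (x, y, heading) pose with the
    highest expected RSSI."""
    return max(poses, key=lambda p: rssi(p[:2], p[2], tag_xy))

def bearing_servo(robot_xy, tag_xy, steps=72):
    """Local step: sweep candidate headings and face the direction of
    maximum RSSI (bearing estimation / RFID servoing)."""
    headings = [2 * math.pi * i / steps for i in range(steps)]
    return max(headings, key=lambda h: rssi(robot_xy, h, tag_xy))
```

    The same hill-climbing structure applies to a real reader: the robot only ever compares measured RSSI values, never inverts the propagation model.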

    Insect-Inspired Visual Perception for Flight Control and Collision Avoidance

    Flying robots are increasingly used for tasks such as aerial mapping, fast exploration, video footage, and monitoring of buildings. Autonomous flight at low altitude in cluttered and unknown environments is an active research topic because it poses challenging perception and control problems. Traditional methods for collision-free navigation at low altitude require heavy resources to deal with the complexity of natural environments, which limits the autonomy and payload of flying robots. Flying insects, however, are able to navigate safely and efficiently using vision as their main sensory modality. Flying insects rely on low-resolution, high-refresh-rate, wide-angle compound eyes to extract angular image motion and move in unstructured environments. These strategies result in systems that are physically and computationally lighter than those often found in high-definition stereovision. Taking inspiration from insects therefore offers great potential for building small flying robots capable of navigating cluttered environments using lightweight vision sensors. In this thesis, we investigate insect perception of visual motion and insect-inspired vision-based flight control in cluttered environments. We use the knowledge gained through the modelling of neural circuits and through behavioural experiments to develop flying robots with insect-inspired control strategies for goal-oriented navigation in complex environments. We start by exploring insect perception of visual motion. We present a study that reconciles an apparent contradiction in the literature on insect visual control: current models developed to explain insect flight behaviour rely on the measurement of optic flow, yet the most prominent neural model for visual motion extraction (the Elementary Motion Detector, or EMD) does not measure optic flow. We propose a model for unbiased optic flow estimation that relies on comparing the output of multiple EMDs pointed in varying viewing directions.
    Our model is of interest to both engineers and biologists because it is computationally more efficient than other optic flow estimation algorithms, and because it represents a biologically plausible model for optic flow extraction in insect neural systems. We then focus on insect flight control strategies in the presence of obstacles. By recording the trajectories of bumblebees (Bombus terrestris), and by comparing them to simulated flights, we show that bumblebees rely primarily on the frontal part of their field of view, and that they pool optic flow in two different manners for the control of flight speed and of lateral position. For the control of lateral position, our results suggest that bumblebees selectively react to the portions of the visual field where optic flow is highest, which correspond to the closest obstacles. Finally, we tackle goal-oriented navigation with a novel algorithm that combines aspects of insect perception and flight control presented in this thesis -- such as the detection of the fastest-moving objects in the frontal visual field -- with other aspects of insect flight known from the literature, such as the saccadic flight pattern. Through simulations, we demonstrate autonomous navigation in forest-like environments using only local optic flow information and assuming knowledge of the direction to the navigation goal.
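    The Elementary Motion Detector discussed above is the classic Hassenstein-Reichardt correlator, which can be sketched in discrete time with a few lines (the toy signals and naming are our own; the thesis's model compares many such units across viewing directions):

```python
def emd_response(left, right, delay=1):
    """Hassenstein-Reichardt elementary motion detector: correlate each
    photoreceptor's signal with the delayed signal of its neighbour, in both
    directions, and subtract the two arms. A positive sum indicates motion
    from `left` towards `right`; note the output depends on contrast and
    spatial structure, not on image velocity alone (hence the EMD does not
    measure optic flow directly)."""
    resp = 0.0
    for t in range(delay, len(left)):
        resp += left[t - delay] * right[t] - right[t - delay] * left[t]
    return resp
```

    An edge sweeping from the left photoreceptor to the right one yields a positive response, the reverse sweep a negative one, and a static pattern yields zero.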

    Visual flight control in the honeybee


    Space-Time Continuous Models of Swarm Robotic Systems: Supporting Global-to-Local Programming

    A generic model, in mathematical closed form as far as possible, was developed that predicts the behavior of large self-organizing robot groups (robot swarms) based on their control algorithm. In addition, an extensive survey of the relatively young and distinctive interdisciplinary research field of swarm robotics is given. The connections to many related fields are highlighted, and the concepts and methods borrowed from these fields are described briefly.
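    Space-time continuous swarm models of this kind are typically partial differential equations for robot density rather than simulations of individual robots. As a minimal sketch under our own assumptions (a 1-D periodic arena, unit grid and time step, an explicit Euler update), the density can evolve by advection-diffusion:

```python
def step_density(rho, diffusion=0.1, drift=0.0):
    """One explicit Euler step of a 1-D advection-diffusion equation for
    swarm density `rho` on a periodic domain: diffusion models random robot
    motion, drift models a directed component of the control algorithm."""
    n = len(rho)
    new = []
    for i in range(n):
        lap = rho[(i - 1) % n] - 2 * rho[i] + rho[(i + 1) % n]   # discrete Laplacian
        adv = (rho[(i + 1) % n] - rho[(i - 1) % n]) / 2.0        # central gradient
        new.append(rho[i] + diffusion * lap - drift * adv)
    return new
```

    Because the update only redistributes density, the number of robots (the sum of `rho`) is conserved, which is the global-to-local consistency property such models are built around.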

    Annotated Bibliography: Anticipation


    A neurobiological and computational analysis of target discrimination in visual clutter by the insect visual system.

    Some insects have the capability to detect and track small moving objects, often against cluttered moving backgrounds. Determining how this task is performed is an intriguing challenge, both from a physiological and a computational perspective. Previous research has characterized higher-order neurons within the fly brain, known as 'small target motion detectors' (STMDs), that respond selectively to targets, even within complex moving surrounds. Interestingly, these cells still respond robustly when the velocity of the target is matched to the velocity of the background (i.e. with no relative motion cues). We performed intracellular recordings from intermediate-order neurons in the fly visual system (the medulla). These full-wave rectifying, transient cells (RTCs) reveal independent adaptation to luminance changes of opposite signs (suggesting separate 'on' and 'off' channels) and fast adaptive temporal mechanisms (as seen in some previously described cell types). We show, via electrophysiological experiments, that the RTC is temporally responsive to rapidly changing stimuli and is well suited to serving an important function in a proposed target-detecting pathway. To model this target discrimination, we use high dynamic range (HDR) natural images to represent 'real-world' luminance values that serve as inputs to a biomimetic representation of photoreceptor processing. Adaptive spatiotemporal high-pass filtering (1st-order interneurons) shapes the transient 'edge-like' responses, useful for feature discrimination. Following this, a model for the RTC implements a nonlinear facilitation between the rapidly adapting and independent polarity contrast channels, each with centre-surround antagonism. The recombination of the channels results in increased discrimination of small targets, of approximately the size of a single pixel, without the need for relative motion cues.
    This method of feature discrimination contrasts with traditional target and background motion-field computations. We show that our RTC-based target detection model is well matched to the properties described for the higher-order STMD neurons, such as contrast sensitivity, height tuning and velocity tuning. The model output shows that the spatiotemporal profile of small targets is sufficiently rare within natural scene imagery to allow our highly nonlinear 'matched filter' to successfully detect many targets against the background. The model produces robust target discrimination across a biologically plausible range of target sizes and a range of velocities. We show that the output of the small target motion detection model is highly correlated with the velocity of the stimulus, but not with other background statistics, such as local brightness or local contrast, which normally influence target detection tasks. From an engineering perspective, we examine model elaborations for improved target discrimination via inhibitory interactions from correlation-type motion detectors, using a form of antagonism between our feature correlator and the more typical motion correlator. We also observe that the changing optimal threshold is highly correlated with observer ego-motion. We present an elaborated target detection model that allows implementation of a static optimal threshold, by scaling the target discrimination mechanism with a model-derived velocity estimate of ego-motion. Finally, we investigate the physiological relevance of this target discrimination model. We show that, via very subtle manipulation of the visual stimulus, our model accurately predicts dramatic changes in observed electrophysiological responses from STMD neurons.
    Thesis (Ph.D.) - University of Adelaide, School of Molecular and Biomedical Science, 200
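    The ON/OFF facilitation stage can be caricatured for a single pixel's luminance trace. The function below is our own toy reduction of the pathway (temporal high-pass, half-wave rectified polarity channels, delayed cross-multiplication), not the thesis model, which also includes photoreceptor adaptation and centre-surround antagonism:

```python
def target_response(signal, delay=2):
    """Toy single-pixel target discriminant: high-pass the luminance trace,
    split it into ON/OFF half-wave rectified channels, and correlate the ON
    channel with a delayed OFF channel. A small dark target sweeping past a
    pixel produces an OFF edge followed by an ON edge, which this nonlinear
    pairing detects; slow full-field luminance drift excites only one
    channel and yields zero."""
    hp = [b - a for a, b in zip(signal, signal[1:])]   # temporal high-pass
    on = [max(x, 0.0) for x in hp]                     # brightening edge channel
    off = [max(-x, 0.0) for x in hp]                   # darkening edge channel
    return max((on[t] * off[t - delay] for t in range(delay, len(hp))),
               default=0.0)
```

    This is why the mechanism needs no relative motion cue: the target's signature is its rare temporal profile at each pixel, not its motion against the background.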

    An Insect-Inspired Target Tracking Mechanism for Autonomous Vehicles

    Target tracking is a complicated task from an engineering perspective, especially where targets are small and seen against complex natural environments. Due to the high demand for robust target tracking algorithms, a great deal of research has focused on this area. However, most engineering solutions developed for this purpose are often unreliable in real-world conditions or too computationally expensive to be used in real-time applications. While engineering methods try to solve the problem of target detection and tracking by using high-resolution input images and fast processors, with typically computationally expensive methods, a quick glance at nature provides evidence that practical real-world solutions for target tracking exist. Many animals track targets for predation, territorial or mating purposes, and with millions of years of evolution behind them, it seems reasonable to assume that these solutions are highly efficient. For instance, despite their low-resolution compound eyes and tiny brains, many flying insects have evolved superb abilities to track targets in visual clutter, even in the presence of other distracting stimuli such as swarms of prey and conspecifics. The accessibility of the dragonfly for stable electrophysiological recordings makes this insect an ideal and tractable model system for investigating the neuronal correlates of complex tasks such as target pursuit. Studies on dragonflies have identified and characterized a set of neurons likely to mediate target detection and pursuit, referred to as 'small target motion detector' (STMD) neurons. These neurons are selective for tiny targets, are velocity-tuned, are contrast-sensitive, and respond robustly to targets even against background motion. They exhibit several higher-order properties that can contribute to the dragonfly's ability to robustly pursue prey with over a 97% success rate.
    These include the recent electrophysiological observations of response 'facilitation' (a slow build-up of the response to targets that move on long, continuous trajectories) and 'selective attention', a competitive mechanism that selects one target from alternatives. In this thesis, I adopted a bio-inspired approach to develop a solution to the problem of target tracking and pursuit. Directly inspired by recent physiological breakthroughs in understanding the insect brain, I developed a closed-loop target tracking system that uses an active saccadic gaze fixation strategy inspired by insect pursuit. First, I tested this model in virtual-world simulations using MATLAB/Simulink. The results of these simulations show robust performance of this insect-inspired model, which achieves high prey-capture success even with complex background clutter, low contrast, and high relative speed of the pursued prey. Additionally, these results show that the inclusion of facilitation not only substantially improves success for even short-duration pursuits, but also enhances the ability to 'attend' to one target in the presence of distracters. This insect-inspired system has a relatively simple image processing strategy compared to state-of-the-art trackers developed recently for computer vision applications. Traditional machine vision approaches incorporate elaborations to handle challenges and non-idealities in natural environments, such as local flicker and illumination changes, and non-smooth and non-linear target trajectories. The question therefore arises as to whether this insect-inspired tracker can match their performance when given similar challenges. I investigated this question by testing both the efficacy and the efficiency of this insect-inspired model in open loop, using a widely used set of videos recorded under natural conditions.
    I directly compared the performance of this model with that of several state-of-the-art engineering algorithms using the same hardware, software environment, and stimuli. The insect-inspired model exhibits robust performance in tracking small moving targets, even in very challenging natural scenarios, outperforming the best of the engineered approaches. Furthermore, it operates more efficiently than the other approaches, in some cases dramatically so. The computer vision literature traditionally tests target tracking algorithms only in open loop. However, one of the main purposes of developing these algorithms is implementation in real-time robotic applications. It is therefore still unclear how these algorithms might perform in closed-loop real-world applications, where the inclusion of sensors and actuators on a physical robot introduces additional latency that can affect the stability of the feedback process. Additionally, studies show that animals interact with the target by changing eye or body movements, which then modulate the visual inputs underlying the detection and selection task (via closed-loop feedback). This active vision system may be a key to how the simple insect brain exploits visual information for complex tasks such as target tracking. Therefore, I implemented this insect-inspired model, along with insect active vision, on a robotic platform. I tested this robotic implementation in both indoor and outdoor environments against different challenges that exist in real-world conditions, such as vibration, illumination variation, and distracting stimuli. The experimental results show that the robotic implementation is capable of handling these challenges and robustly pursuing a target even in highly challenging scenarios.
    Thesis (Ph.D.) -- University of Adelaide, School of Mechanical Engineering, 201
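    The interplay of facilitation and selective attention described above can be caricatured in a few lines. All names, the neighbourhood spread of the gain, and the decay constant below are our own assumptions for illustration, not the dragonfly model itself:

```python
def track(detections, decay=0.5, boost=1.0):
    """Toy facilitation + winner-take-all: a gain map builds up around the
    recently attended position (so responses along a long, continuous
    trajectory are amplified) and decays elsewhere; each frame, a
    winner-take-all step attends to the facilitated maximum."""
    if not detections:
        return []
    size = len(detections[0])
    gain = [0.0] * size
    attended = []
    for frame in detections:                    # frame: raw response per position
        scored = [r * (1.0 + g) for r, g in zip(frame, gain)]
        win = max(range(size), key=scored.__getitem__)   # selective attention
        attended.append(win)
        gain = [g * decay for g in gain]                 # decay everywhere
        for j, amt in ((win - 1, 0.5), (win, 1.0), (win + 1, 0.5)):
            if 0 <= j < size:                            # facilitate winner's
                gain[j] += boost * amt                   # neighbourhood
    return attended
```

    With this gain map, a target moving smoothly across positions keeps winning the competition even when a distracter elsewhere produces a stronger raw response for a frame, mirroring the 'attend to one target' behaviour described above.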