
    A modified model for the Lobula Giant Movement Detector and its FPGA implementation

    The Lobula Giant Movement Detector (LGMD) is a wide-field visual neuron located in the lobula layer of the locust nervous system. The LGMD increases its firing rate in response to both the velocity and the proximity of an approaching object. It has been found to respond to looming stimuli very quickly and to trigger avoidance reactions, and it has been successfully applied in visual collision avoidance systems for vehicles and robots. This paper introduces a modified neural model for the LGMD that provides additional depth-direction information for the movement. The proposed model retains the simplicity of the previous model, adding only a few new cells. It has been simplified and implemented on a Field Programmable Gate Array (FPGA), taking advantage of the inherent parallelism exhibited by the LGMD, and tested on real-time video streams. Experimental results demonstrate its effectiveness as a fast motion detector.
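
    The layered structure of LGMD models lends itself to a compact software sketch. Below is a minimal, illustrative Python sketch of a generic LGMD-style looming detector of the kind this paper builds on (excitation from frame-to-frame luminance change, delayed lateral inhibition, summation and thresholding), assuming grayscale float frames; the weights, kernel size and threshold are assumed values, not those of the paper's modified model.

        # Minimal sketch of a generic LGMD-style looming detector.
        # Parameters (inhibition weight, blur size, threshold) are assumed
        # illustrative values, not those of the paper's modified model.
        import numpy as np
        from scipy.ndimage import uniform_filter

        class LGMD:
            def __init__(self, shape, w_inhib=0.7, threshold=0.15):
                self.prev_frame = np.zeros(shape)
                self.prev_excitation = np.zeros(shape)
                self.w_inhib = w_inhib        # inhibition weight (assumed)
                self.threshold = threshold    # spike threshold (assumed)

            def step(self, frame):
                # P layer: absolute luminance change between consecutive frames
                excitation = np.abs(frame - self.prev_frame)
                # I layer: previous excitation, delayed by one frame and
                # spread laterally by a small blur
                inhibition = uniform_filter(self.prev_excitation, size=3)
                # S layer: rectified excitation minus weighted inhibition
                s = np.maximum(excitation - self.w_inhib * inhibition, 0.0)
                self.prev_frame = frame
                self.prev_excitation = excitation
                # Membrane potential: normalised sum over the visual field;
                # a looming object drives it up as expanding edges accumulate
                return s.sum() / s.size > self.threshold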

    Insect-vision inspired collision warning vision processor for automobiles

    Vision is expected to play an important role in enhancing car safety. Imaging systems can be used to enlarge the driver's field of vision, for instance by capturing and displaying views of hidden areas around the car which the driver can analyze for safer decision-making. Vision systems go a step further: they can autonomously analyze the visual information, identify dangerous situations and prompt the delivery of warning signals, for instance in case of road lane departure, when an overtaking car is in the blind spot, or when an object is approaching on a collision course. Processing capabilities are also needed for applications viewing the car interior, such as "intelligent airbag systems" that base deployment decisions on passenger features. On-line processing of visual information for car safety involves multiple sensors and views, huge amounts of data per view and large frame rates. The associated computational load may be prohibitive for conventional processing architectures, and dedicated systems with embedded local processing capabilities may be needed to confront these challenges. This paper describes a dedicated sensory-processing architecture for collision warning which is inspired by insect vision. In particular, the paper relies on the exploitation of knowledge about the behavior of Locusta migratoria to develop dedicated chips and systems which are integrated into model cars as well as into a commercial car (Volvo XC90) and tested to deliver collision warnings in real traffic scenarios.
    Gobierno de España TEC2006-15722; European Community IST:2001-3809
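
    Looming-based warning rules of this family can be stated compactly: an approaching object's subtended angle grows, and for small angles the time-to-contact equals that angle divided by its rate of expansion, so a warning can be raised from image measurements alone. A hedged sketch of such a generic rule follows; it is not the chip's actual algorithm, and the warning horizon is an assumed value.

        # Generic looming-based collision warning, NOT the paper's chip
        # algorithm: for small angles, time-to-contact ≈ theta / (dtheta/dt).
        def collision_warning(theta_prev, theta_curr, dt, ttc_limit=2.0):
            """theta_*: angular size of the tracked object (radians);
            dt: time between measurements (s); ttc_limit: assumed horizon (s)."""
            d_theta = (theta_curr - theta_prev) / dt
            if d_theta <= 0:       # not expanding: receding object, no threat
                return False
            return theta_curr / d_theta < ttc_limit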

    Towards Computational Models and Applications of Insect Visual Systems for Motion Perception: A Review

    Motion perception is a critical capability underpinning many aspects of insects' lives, including avoiding predators and foraging. A good number of motion detectors have been identified in insect visual pathways. Computational modelling of these motion detectors has not only provided effective solutions for artificial intelligence but also benefited the understanding of complicated biological visual systems. These biological mechanisms, shaped through millions of years of evolution, form solid modules for constructing dynamic vision systems for future intelligent machines. This article reviews the computational motion perception models in the literature that originate from biological research on insect visual systems. These motion perception models or neural networks comprise the looming-sensitive neuronal models of lobula giant movement detectors (LGMDs) in locusts, the translation-sensitive neural systems of direction selective neurons (DSNs) in fruit flies, bees and locusts, and the small target motion detectors (STMDs) in dragonflies and hoverflies. We also review the applications of these models to robots and vehicles. Through these modelling studies, we summarise the methodologies that generate different direction and size selectivities in motion perception. Finally, we discuss the integration of multiple systems and the hardware realisation of these bio-inspired motion perception models.
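
    Among the mechanisms surveyed, the correlation-type elementary motion detector (EMD) underlying many DSN models illustrates how direction selectivity can arise from just two photoreceptors and a delay. A minimal Python sketch follows; the first-order low-pass delay and its time constant are illustrative choices.

        # Hassenstein-Reichardt elementary motion detector (EMD): each
        # receptor signal is correlated with a delayed copy of its
        # neighbour's, and the two half-detectors are subtracted (opponency).
        import numpy as np

        def emd_response(left, right, dt=0.01, tau=0.05):
            """left, right: luminance samples from adjacent receptors;
            tau: delay-filter time constant (illustrative). Positive output
            signals left-to-right motion, negative the reverse."""
            left = np.asarray(left, dtype=float)
            right = np.asarray(right, dtype=float)
            alpha = dt / (tau + dt)
            d_l, d_r = np.zeros_like(left), np.zeros_like(right)
            for t in range(1, len(left)):  # first-order low-pass as delay
                d_l[t] = d_l[t-1] + alpha * (left[t] - d_l[t-1])
                d_r[t] = d_r[t-1] + alpha * (right[t] - d_r[t-1])
            return d_l * right - d_r * left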

    Sensors for Autonomous Systems

    This Major Qualifying Project seeks to develop a functional scale model of a fully autonomous ferry. Using sensory systems commonly found in autonomous vehicles, we developed an autonomous surface vehicle capable of avoiding obstacles while traveling between docks, as well as docking without any human input. We believe that the development of a fully autonomous ferry could help the marine industry as a whole move towards a safer and more autonomous future.

    Platform-based design, test and fast verification flow for mixed-signal systems on chip

    This research provides methodologies to enhance the design phase, from architectural space exploration and system study to verification of the whole mixed-signal system. At the beginning of the work, innovative digital IPs were designed to provide efficient signal conditioning for on-chip sensor systems; these have been included in commercial products. The main focus then moved to creating a re-usable and versatile test of the device after tape-out, which is close to becoming one of the major cost factors for IC companies, strongly linking it to the model's test benches to avoid re-design phases and multi-environment scenarios, and producing a very effective approach to a single, fast and reliable multi-level verification environment. These works generated several publications in the scientific literature.

    The compound scenario concerning the development of sensor systems is presented in Chapter 1, together with an overview of the related market, with particular focus on the latest MEMS and MOEMS technology devices and their applications in various segments.

    Chapter 2 introduces the state of the art for sensor interfaces: the generic sensor interface concept (based on sharing the same electronics among similar applications, achieving cost savings at the expense of area and performance) versus the Platform Based Design methodology, which overcomes the drawbacks of the classic solution by keeping generality at the highest design layers and customizing the platform for a target sensor to achieve optimized performance. An evolution of Platform Based Design, achieved by implementing the ISIF (Intelligent Sensor InterFace) platform in silicon, is then presented. ISIF is a highly configurable mixed-signal chip which allows designers to perform an effective design space exploration and to evaluate system performance directly on silicon, avoiding the critical and time-consuming analysis required by the standard platform-based approach.

    Chapter 3 describes the design of a smart sensor interface for conditioning next-generation MOEMS. The adoption of a new, high-performance, highly integrated technology allows us to integrate not only a versatile platform but also a powerful ARM processor and various IPs, so the platform can be used not only for conditioning but also as a processing unit for the application. The chapter describes the various blocks, with particular emphasis on the IPs developed to grant the highest degree of flexibility with minimum area occupation. Architectural space evaluation and application prototyping with ISIF have enabled the effective, rapid and low-risk development of a new high-performance platform: a flexible sensor system for MEMS and MOEMS monitoring and conditioning. The platform has been designed to cover very challenging test benches, such as a laser-based projector device; in this way it can effectively handle not only the sensor but also the whole system built around it, reducing the need for further electronics and providing an efficient test bench for the algorithms developed to drive the system.

    The high costs of ASIC development are mainly related to re-design phases caused by the lack of complete top-level tests, with the analog and digital design flows verified separately.
    Starting from these considerations, the last chapter presents a complete test environment for complex mixed-signal chips. A semi-automatic VHDL-AMS flow that provides a fully matching top level is described, and an evolution toward fast self-checking test development for both model and real-chip verification is then proposed. Through the introduction of a Python interface, the designer can easily perform interactive tests covering the verification of all features (e.g. calibration and trimming) during the design phase, and then check them all with the same environment on the real chip after tape-out. This strategy has been tested on a 3D gyroscope for consumer applications, in collaboration with SensorDynamics AG.
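
    The core idea of the flow, running one self-checking test unchanged against both the model and the silicon, can be sketched as below. All class names, methods and register addresses are hypothetical placeholders, not the actual framework's API.

        # Sketch of a dual-target self-checking test: the same test drives
        # either the VHDL-AMS model (via a simulator bridge) or the real chip
        # (via bench instruments). Names and registers are hypothetical.
        from abc import ABC, abstractmethod

        class DeviceUnderTest(ABC):
            @abstractmethod
            def write_register(self, addr: int, value: int): ...
            @abstractmethod
            def read_register(self, addr: int) -> int: ...

        class SimulatedDUT(DeviceUnderTest):
            """Stand-in for the VHDL-AMS model; a real implementation would
            drive the mixed-signal simulator."""
            def __init__(self):
                self.regs = {}
            def write_register(self, addr, value):
                self.regs[addr] = value
                self.regs[0x11] = 0x1   # pretend trimming always converges
            def read_register(self, addr):
                return self.regs.get(addr, 0)

        def test_trimming(dut: DeviceUnderTest):
            TRIM_REG, STATUS_REG = 0x10, 0x11   # hypothetical register map
            for code in range(16):
                dut.write_register(TRIM_REG, code)
                assert dut.read_register(STATUS_REG) & 0x1, \
                    f"trim code {code} failed"

        test_trimming(SimulatedDUT())   # same call would take a silicon DUT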

    A Survey of User Interfaces for Robot Teleoperation

    Robots are used today to accomplish many tasks in society, be it in industry, at home, or as assistive tools in tragic incidents. The human-robot systems currently developed span a broad variety of applications and are typically very different from one another. The interaction techniques designed for each system are also very different, although some effort has been directed at defining common properties and strategies for guiding human-robot interaction (HRI) development. This work presents the state of the art in teleoperation interaction techniques between robots and their users. By presenting potentially useful design models and motivating discussion of topics to which the research community has paid little attention lately, we also suggest solutions to some of the design and operational problems being faced in this area.

    Biomimetic vision-based collision avoidance system for MAVs.

    This thesis proposes a secondary collision avoidance algorithm for micro aerial vehicles based on the luminance-difference processing exhibited by the Lobula Giant Movement Detector (LGMD), a wide-field visual neuron located in the lobula layer of a locust's nervous system. In particular, we address the design, modulation, hardware implementation, and testing of a computationally simple yet robust collision avoidance algorithm based on the novel concept of quadfurcated luminance-difference processing (QLDP). Micro and nano classes of unmanned robots are the primary target applications of this algorithm; however, it could also be implemented on advanced robots as a fail-safe redundant system. The algorithm addresses some of the major detection challenges, such as obstacle proximity, collision threat potentiality, and contrast correction within the robot's field of view, to generate a precise yet simple collision-free motor control command in real time. Additionally, it has proven effective in detecting edges independently of background or obstacle colour, size, and contour. To achieve this, the proposed QLDP executes a series of image enhancement and edge detection algorithms to estimate the collision threat level (spike), which in turn determines whether the robot's field of view must be dissected into four quarters, where each quadrant's response is analysed and interpreted against the others to determine the most secure path. Finally, the computational load and performance of the model are assessed against an eclectic set of off-line as well as real-time real-world collision scenarios to validate the model's capability to avoid obstacles at more than 670 mm before collision (real-world) while moving at 1.2 m s⁻¹, with a successful avoidance rate of 90% at a processing frequency of 120 Hz, results that, to the best of our knowledge, are much superior to those reported in the contemporary related literature.
    MSc by Research
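
    The quadrant analysis at the heart of QLDP can be sketched in a few lines; the response measure and steering rule below are simplified stand-ins for the thesis's full pipeline.

        # Simplified sketch of quadfurcated analysis: dissect the view into
        # four quarters of luminance change and steer toward the quietest one.
        import numpy as np

        def safest_quadrant(prev_frame, frame):
            """prev_frame, frame: consecutive grayscale frames (2-D floats).
            Returns the label of the least-active quadrant."""
            diff = np.abs(frame - prev_frame)   # luminance-difference image
            h2, w2 = diff.shape[0] // 2, diff.shape[1] // 2
            quads = {
                'up-left': diff[:h2, :w2],   'up-right': diff[:h2, w2:],
                'down-left': diff[h2:, :w2], 'down-right': diff[h2:, w2:],
            }
            # Less summed luminance change is read as fewer expanding edges,
            # hence the more secure direction
            return min(quads, key=lambda k: quads[k].sum())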

    Towards a Dynamic Vision System - Computational Modelling of Insect Motion Sensitive Neural Systems

    For motion perception, vision plays an irreplaceable role: compared to other sensing modalities, it can extract more abundant and useful movement features from an unpredictable, dynamic environment. Building a dynamic vision system for motion perception that is both reliable and efficient nevertheless remains an open challenge. Millions of years of evolutionary development have provided, in nature, animals that possess robust vision systems capable of motion perception to deal with a variety of aspects of life. Insects, in particular, have relatively few visual neurons compared to vertebrates and humans, yet can still navigate smartly through visually cluttered and dynamic environments. Understanding insects' visual processing pathways and methods is thus not only attractive to neural system modellers but also critical in providing effective solutions for future intelligent machines. Originating from biological research on insect visual systems, this thesis investigates computational modelling of motion sensitive neural systems and potential applications to robotics. It proposes novel models of the locust and fly visual systems for sensing looming and translating stimuli. Specifically, the proposed models comprise collision-selective neural networks of two lobula giant movement detectors (LGMD1 and LGMD2) in locusts, translation-sensitive neural networks of direction selective neurons (DSNs) in flies, and hybrid visual neural systems combining the two. In all these proposed models, the functionality of the ON and OFF pathways, which separate visual processing into parallel computations, is highlighted. This structure works effectively to realise the neural characteristics of both the LGMD1 and the LGMD2 in locusts and plays a crucial role in producing the different looming selectivities of the two visual neurons. Such a biologically plausible structure can also implement the fly DSNs for perceiving translational movements and guide fast motion tracking with a behavioural response of visual fixation. The effectiveness and flexibility of the proposed motion sensitive neural systems have been validated by systematic and comparative experiments ranging from off-line synthetic and real-world tests to on-line bio-robotic tests. The underlying characteristics and functionality of the locust LGMDs and the fly DSNs are reproduced by the proposed models, all of which have been successfully realised on the embedded system of a vision-based ground mobile robot. The robot tests have verified the computational simplicity and efficiency of the proposed bio-inspired methodologies, which hint at the great potential of neuromorphic sensors in autonomous machines for fast, reliable and low-energy motion perception.
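
    The ON/OFF separation that the thesis highlights amounts to half-wave rectifying the signed luminance change into two parallel channels; a minimal sketch follows, with the downstream LGMD/DSN processing omitted.

        # ON/OFF pathway split: brightening and darkening edges are routed
        # into separate channels that are processed in parallel downstream.
        import numpy as np

        def split_on_off(prev_frame, frame):
            change = frame.astype(float) - prev_frame.astype(float)
            on_channel = np.maximum(change, 0.0)    # brightening (ON)
            off_channel = np.maximum(-change, 0.0)  # darkening (OFF)
            return on_channel, off_channel

        # An LGMD2-like unit, which prefers dark looming objects, would then
        # weight the OFF channel far more strongly than the ON channel.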

    Insect inspired visual motion sensing and flying robots

    Flying insects are masters of visual motion sensing, using dedicated motion processing circuits at low energy and computational cost. Drawing on observations of insect visual guidance, we developed visual motion sensors and bio-inspired autopilots dedicated to flying robots. Optic flow-based visuomotor control systems have been implemented on an increasingly large number of sighted autonomous robots. In this chapter, we present how we designed and constructed local motion sensors and how we implemented bio-inspired visual guidance schemes on board several micro-aerial vehicles. A hyperacute sensor, in which retinal micro-scanning movements are performed via a small piezo-bender actuator, was mounted onto a miniature aerial robot. The OSCAR II robot is able to track a moving target accurately by exploiting the micro-scanning movement imposed on its eye's retina. We also present two interdependent control schemes that drive the eye's angular position in the robot and the robot's body angular position with respect to a visual target, without any knowledge of the robot's orientation in the global frame. This "steering-by-gazing" control strategy, implemented on a lightweight (100 g) miniature sighted aerial robot, demonstrates the effectiveness of this biomimetic visual/inertial heading control approach.
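
    The steering-by-gazing loop can be sketched as two nested proportional controllers: the eye cancels the retinal error, and the body is servoed to the eye. The gains and measurement model below are illustrative, not OSCAR II's actual values.

        # One step of a steering-by-gazing controller: fixate with the eye,
        # then turn the body toward wherever the eye points, with no global
        # heading information. Gains are illustrative, not OSCAR II's.
        def steering_by_gazing_step(retinal_error, gaze_angle, dt,
                                    k_eye=4.0, k_body=1.5):
            """retinal_error: target offset from the fovea (rad);
            gaze_angle: eye-in-body orientation (rad).
            Returns (new_gaze_angle, body_yaw_rate_command)."""
            # Inner loop: rotate the eye to cancel the retinal error
            new_gaze_angle = gaze_angle + k_eye * retinal_error * dt
            # Outer loop: steer the body to re-centre the eye, i.e. the
            # body turns toward the gaze direction
            body_yaw_rate = k_body * new_gaze_angle
            return new_gaze_angle, body_yaw_rate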