76 research outputs found

    Analogue VLSI for temporal frequency analysis of visual data

    Get PDF

    Event-driven Vision and Control for UAVs on a Neuromorphic Chip

    Full text link
    Event-based vision sensors achieve up to three orders of magnitude better speed-versus-power-consumption trade-off in high-speed control of UAVs compared to conventional image sensors. Event-based cameras produce a sparse stream of events that can be processed more efficiently and with lower latency than images, enabling ultra-fast vision-driven control. Here, we explore how an event-based vision algorithm can be implemented as a spiking neuronal network on a neuromorphic chip and used in a drone controller. We show how seamless integration of event-based perception on chip leads to even faster control rates and lower latency. In addition, we demonstrate how online adaptation of the SNN controller can be realised using on-chip learning. Our spiking neuronal network on chip is the first example of a neuromorphic vision-based controller on chip solving a high-speed UAV control task. The excellent scalability of processing in neuromorphic hardware opens the possibility to solve more challenging visual tasks in the future and to integrate visual perception in fast control loops.
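The abstract's on-chip SNN is not described in enough detail here to reproduce, but the basic building block of such a controller, a leaky integrate-and-fire (LIF) neuron driven by an incoming event stream, can be sketched. All parameters (time constant, threshold, weight) are illustrative assumptions, not the paper's values.

```python
# Minimal LIF neuron: incoming events add weighted charge, the membrane
# potential leaks over time, and crossing the threshold emits an output spike.
def lif_response(event_times, tau=10.0, threshold=1.0, weight=0.4, dt=1.0, t_end=50.0):
    """Simulate one LIF neuron receiving unit-weight input spikes.

    event_times : iterable of input spike times (ms, assumed)
    Returns the list of output spike times.
    """
    v = 0.0
    spikes = []
    events = sorted(event_times)
    t = 0.0
    i = 0
    while t < t_end:
        v *= (1.0 - dt / tau)            # exponential leak
        while i < len(events) and events[i] <= t:
            v += weight                  # integrate incoming event
            i += 1
        if v >= threshold:               # fire and reset
            spikes.append(t)
            v = 0.0
        t += dt
    return spikes
```

A dense burst of input events drives the neuron above threshold, while two widely separated events leak away without producing any output, which is the sparsity-exploiting behaviour the abstract relies on.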

    Low Latency Event-Based Filtering and Feature Extraction for Dynamic Vision Sensors in Real-Time FPGA Applications

    Get PDF
    Dynamic Vision Sensor (DVS) pixels produce an asynchronous, variable-rate address-event output that represents brightness changes at the pixel. Since these sensors produce frame-free output, they are ideal for real-time dynamic vision applications with tight latency and power constraints. Event-based filtering algorithms have been proposed to post-process the asynchronous event output to reduce sensor noise, extract low-level features, and track objects, among other tasks. These post-processing algorithms help to increase the performance and accuracy of further processing for tasks such as classification using spike-based learning (i.e. ConvNets), stereo vision, and visually-servoed robots. This paper presents an FPGA-based library of these post-processing event-based algorithms with implementation details; specifically background activity (noise) filtering, pixel masking, object motion detection and object tracking. The latencies of these filters on the Field Programmable Gate Array (FPGA) platform are below 300 ns, with an average latency reduction of 188% (maximum of 570%) over the software versions running on a desktop PC CPU. This open-source event-based filter IP library for FPGA has been tested on two different platforms and scenarios using different synthesis and implementation tools for Lattice and Xilinx vendors.
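The background-activity filter mentioned above is commonly implemented with a simple rule: keep an event only if a neighbouring pixel fired recently. The sketch below shows that rule in software; the event tuple layout, window size, and function name are illustrative assumptions, not the FPGA IP's actual interface.

```python
# Background-activity (noise) filter: an event survives only if one of its
# 8-neighbour pixels produced an event within the last `dt` time units.
def ba_filter(events, width, height, dt=2000):
    """events: iterable of (t, x, y, polarity) with t non-decreasing."""
    last = [[-10**18] * width for _ in range(height)]  # last event time per pixel
    kept = []
    for t, x, y, p in events:
        support = any(
            0 <= x + dx < width and 0 <= y + dy < height
            and t - last[y + dy][x + dx] <= dt
            for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)
        )
        if support:
            kept.append((t, x, y, p))
        last[y][x] = t                 # record this event either way
    return kept
```

Correlated activity (two events at adjacent pixels close in time) passes the filter, while an isolated event, the signature of shot noise, is dropped; the hardware version achieves the same effect with per-pixel timestamp memories.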

    Bio-Inspired Optic Flow Sensors for Artificial Compound Eyes.

    Full text link
    Compound eyes in flying insects have been studied to reveal the cues behind vision-based flight in the smallest flying creatures in nature. In particular, researchers in robotics have made efforts to transfer these findings into their less-than-palm-sized unmanned air vehicles, micro air vehicles (MAVs). The miniaturized artificial compound eye is one of the key components of such a system, providing visual information for navigation. Multi-directional sensing and motion estimation capabilities can give wide field-of-view (FoV) optic flows covering up to a 360° surround. By deciphering the wide-FoV optic flows, relevant information on the self-status of flight is parsed and utilized for flight command generation. In this work, we realize wide-field optic flow sensing in a pseudo-hemispherical configuration by mounting a number of 2D-array optic flow sensors on a flexible PCB module. The flexible PCBs can be bent into a compound-eye shape by origami packaging. In this scheme, the multiple 2D optic flow sensors provide a modular, expandable configuration that meets low-power constraints. The 2D optic flow sensors satisfy the low-power constraint by employing a novel bio-inspired algorithm: we have modified the conventional elementary motion detector (EMD), which is known to be a basic operational unit in the insect's visual pathways, and implemented a bio-inspired time-stamp-based algorithm in mixed-mode circuits for robust operation. By optimally partitioning the analog and digital signal domains, we realize the algorithm mostly in the digital domain in column-parallel circuits; only the feature extraction algorithm is incorporated inside a pixel in analog circuits. In addition, the sensors integrate digital peripheral circuits to provide modular expandability.
The on-chip data compressor can reduce the data rate by a factor of 8, so that a total of 25 optic flow sensors can share a 4-wire Serial Peripheral Interface (SPI) bus. The packaged compound eye can transmit full-resolution optic flow data through the single 3 MB/sec SPI bus. The fabricated 2D optic flow prototype sensor achieves a power consumption of 243.3 pJ/pixel and a maximum detectable optic flow of 1.96 rad/sec at 120 fps and 60° FoV.
PhD, Electrical Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies
http://deepblue.lib.umich.edu/bitstream/2027.42/108841/1/sssjpark_1.pd
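Two pieces of the abstract lend themselves to a quick sketch: the core idea of time-stamp-based motion estimation (velocity follows from the time a feature takes to cross adjacent pixels), and the bus-budget arithmetic behind fitting 25 sensors on one 3 MB/s SPI bus with 8x compression. The function names and the per-sensor raw data rate below are invented for illustration; only the 25-sensor, 8x, and 3 MB/s figures come from the abstract.

```python
# Time-stamp-based motion estimation: a feature detected at one pixel at t1
# and at the adjacent pixel at t2 implies an angular velocity of one pixel
# pitch per transit time.
def flow_from_timestamps(t1, t2, pixel_pitch_rad):
    """Angular optic flow (rad/s) from two adjacent-pixel detection times (s)."""
    if t2 == t1:
        raise ValueError("zero transit time")
    return pixel_pitch_rad / (t2 - t1)

# Bus-budget check: total compressed traffic from all sensors must fit on
# the shared SPI bus.
def fits_on_bus(n_sensors, raw_bytes_per_s, compression, bus_bytes_per_s):
    return n_sensors * raw_bytes_per_s / compression <= bus_bytes_per_s
```

With a hypothetical 960 kB/s of raw optic flow data per sensor, 25 sensors compressed 8x come to exactly 3 MB/s, i.e. the bus is fully utilised; without compression the same traffic would exceed the bus eightfold.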

    Analog VLSI circuits for inertial sensory systems

    Get PDF
    Supervised by Rahul Sarpeshkar. Also issued as Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2001. Includes bibliographical references (leaves 67-68). By Maziar Tavakoli Dastjerdi.

    Neuromorphic perception for greenhouse technology using event-based sensors

    Get PDF
    Event-Based Cameras (EBCs), unlike conventional cameras, feature independent pixels that asynchronously generate outputs upon detecting changes in their field of view. Short calculations are performed on each event to mimic the brain. The output is a sparse sequence of events with high temporal precision. Conventional computer vision algorithms do not leverage these properties; thus, a new paradigm has been devised. While event cameras are very efficient in representing sparse sequences of events with high temporal precision, many approaches are challenged in applications where a large amount of spatially-temporally rich information must be processed in real-time. In reality, most tasks in everyday life take place in complex and uncontrollable environments, which require sophisticated models and intelligent reasoning. Typical hard problems in real-world scenes are detecting various non-uniform objects or navigating an unknown and complex environment. In addition, colour perception is an essential fundamental property in distinguishing objects in natural scenes. Colour is a new aspect of event-based sensors, which work fundamentally differently from standard cameras, measuring per-pixel brightness changes per colour filter asynchronously rather than measuring "absolute" brightness at a constant rate. This thesis explores neuromorphic event-based processing methods for high-noise and cluttered environments with imbalanced classes. A fully event-driven processing pipeline was developed for agricultural applications to perform fruit detection and classification, unlocking the outstanding properties of event cameras. The nature of features in such data was explored, and methods to represent and detect features were demonstrated. A framework for detecting and classifying features was developed and evaluated on the N-MNIST and Dynamic Vision Sensor (DVS) gesture datasets.
The same network was evaluated on laboratory-recorded and real-world data with various internal variations for fruit detection, such as overlap and variation in size and appearance. In addition, a method to handle highly imbalanced data was developed. We examined the characteristics of spatio-temporal patterns for each colour filter to help expand our understanding of this novel data, and explored their applications in classification tasks where colours were more relevant features than shapes and appearances. The results presented in this thesis demonstrate the potential and efficacy of event-based systems, showing the applicability of colour event data and the viability of event-driven classification.
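The thesis's own feature representation is not spelled out in the abstract, but a common way to expose the spatio-temporal structure of event data to a classifier is a "time surface": each pixel holds an exponentially decayed trace of its most recent event. The sketch below shows that generic technique under assumed units (microsecond timestamps, decay constant tau), not necessarily the method used in the work above.

```python
import math

# Time surface: per-pixel exponential decay of the most recent event time,
# evaluated at reference time t_ref. Recent activity -> values near 1,
# stale or absent activity -> values near 0.
def time_surface(events, width, height, t_ref, tau=50_000.0):
    """events: iterable of (t, x, y, polarity), timestamps <= t_ref."""
    last = [[None] * width for _ in range(height)]
    for t, x, y, p in events:
        last[y][x] = t                    # later events overwrite earlier ones
    return [
        [0.0 if last[y][x] is None else math.exp(-(t_ref - last[y][x]) / tau)
         for x in range(width)]
        for y in range(height)]
```

The resulting dense grid can be fed to a conventional classifier while still reflecting event timing: a pixel that fired 10 ms ago scores higher than one that fired 100 ms ago, and silent pixels score zero.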

    Complexity, the auditory system, and perceptual learning in naïve users of a visual-to-auditory sensory substitution device.

    Get PDF
    PhD thesis. Sensory substitution devices (SSDs) are non-invasive visual prostheses that use sound or touch to aid functioning in the blind. Algorithms informed by natural crossmodal correspondences convert and transmit sensory information attributed to an impaired modality back to the user via an unimpaired modality, and utilise multisensory networks to activate visual areas of cortex. While behavioural success has been demonstrated in non-visual tasks using SSDs, how they utilise a metamodal brain organised for function remains an open research question. While imaging studies have shown activation of visual cortex in trained users, it is likely that naïve users rely on auditory characteristics of the output signal for functionality, and that it is perceptual learning that facilitates crossmodal plasticity. In this thesis I investigated visual-to-auditory sensory substitution in naïve sighted users to assess whether signal complexity and processing in the auditory system facilitate and limit simple recognition tasks. In four experiments evaluating signal complexity, object resolution, harmonic interference and information load, I demonstrate above-chance performance by naïve users in all tasks, an increase in generalised learning, limitations in recognition due to principles of auditory scene analysis, and capacity limits that hinder performance. Results are considered from both theoretical and applied perspectives, with solutions designed to further inform theory on a multisensory perceptual brain and provide effective training to aid visual rehabilitation.
Queen Mary University of London

    Fuzzy Mouse Cursor Control System for Computer Users with Spinal Cord Injuries

    Get PDF
    People with severe motor impairments due to Spinal Cord Injury (SCI) or Spinal Cord Dysfunction (SCD) often experience difficulty with accurate and efficient control of pointing devices (Keates et al., 02). This often limits their integration into society as well as their unassisted control over the environment. The questions "How can someone with severe motor impairments perform mouse pointer control as accurately and efficiently as an able-bodied person?" and "How can these interactions be advanced through use of Computational Intelligence (CI)?" are the driving forces behind the research described in this paper. Through this research, a novel fuzzy mouse cursor control system (FMCCS) is developed. The goal of this system is to simplify and improve the efficiency of cursor control and its interactions on the computer screen by applying fuzzy logic in its decision-making, so that Internet users with disabilities can use a networked computer conveniently and easily. The FMCCS core consists of several fuzzy control functions, which define different user interactions with the system. The development of the novel cursor control system is based on motor functions that are still available to most complete paraplegics: limited vision and breathing control. One of the biggest obstacles in developing human-computer interfaces for disabled people that rely primarily on eyesight and breath control is the user's limited strength, stamina, and reaction time. Within the FMCCS developed in this research, these limitations are minimized through the use of a novel pneumatic input device and intelligent control algorithms for soft data analysis, fuzzy logic, and user feedback assistance during operation. The new system is developed using a reliable and inexpensive sensory system and readily available computing techniques.
Initial experiments with healthy and SCI subjects have clearly demonstrated the benefits and promising performance of the new system: the FMCCS is accessible to people with severe SCI; it is adaptable to user-specific capabilities and wishes; it is easy to learn and operate; and point-to-point movement is responsive, precise and fast. The sophisticated interaction features, good movement control without strain or clinical risks, and the fact that quadriplegics whose breathing is assisted by a respirator machine still possess enough control to use the new system with ease provide a promising framework for future FMCCS applications. The strongest motivation for further FMCCS development, however, is the positive feedback from the persons who tested the first system prototype.
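The paper's actual fuzzy control functions are not given in the abstract, but the general pattern of a fuzzy controller mapping a single input to a cursor command can be sketched: fuzzify the input into overlapping sets, apply rules, and defuzzify by weighted average. The breath-pressure input, the membership shapes, and the output speeds below are all invented for illustration.

```python
# Triangular membership function peaking at b, zero outside (a, c).
def tri(x, a, b, c):
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Hypothetical rule base: soft breath -> slow cursor, hard breath -> fast.
def cursor_speed(pressure):
    """pressure in [0, 1] -> cursor speed in px/s (weighted-average defuzzification)."""
    rules = [
        (tri(pressure, -0.5, 0.0, 0.5), 20.0),   # soft   -> slow
        (tri(pressure,  0.0, 0.5, 1.0), 100.0),  # medium -> moderate
        (tri(pressure,  0.5, 1.0, 1.5), 300.0),  # hard   -> fast
    ]
    num = sum(w * s for w, s in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0
```

Because the sets overlap, the output varies smoothly with input rather than jumping between discrete speeds, which is what makes this style of controller forgiving of the imprecise, low-stamina inputs the abstract describes.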

    Design and Implementation of Bio-inspired Underwater Electrosense

    Get PDF
    Underwater electrosense, the manipulation of an underwater electric field for sensing purposes, is a growing technology inspired by weakly electric fish, which can navigate in dark or cluttered water. We studied its theoretical foundations and developed sophisticated sensing algorithms, including first-introduced techniques such as the discrete dipole approximation (DDA) and convolutional neural networks (CNNs), which were tested and validated in simulation and on a planar sensor prototype. This work paves a solid way toward applications on practical underwater robots.