
    A spiking neural model of HT3D for corner detection

    Obtaining good quality image features is of remarkable importance for most computer vision tasks. It has been demonstrated that the first layers of the human visual cortex are devoted to feature detection. The need for these features has made line, segment, and corner detection one of the most studied topics in computer vision. HT3D is a recent variant of the Hough transform for the combined detection of corners and line segments in images. It uses a 3D parameter space that enables the detection of segments instead of whole lines. This space also encloses canonical configurations of image corners, transforming corner detection into a pattern search problem. Spiking neural networks (SNN) have previously been proposed for multiple image processing tasks, including corner and line detection using the Hough transform. Following these ideas, this paper presents and describes in detail a model to implement HT3D as a Spiking Neural Network for corner detection. The results obtained from a thorough testing of its implementation using real images evince the correctness of the Spiking Neural Network HT3D implementation. Such results are comparable to those obtained with the regular HT3D implementation, which are in turn superior to other corner detection algorithms. Funding: European Union, Interreg project, grant 0043_EUROAGE_4_E; Government of Spain, grant TIN2015-65686-C5-5-R; Junta de Extremadura, grants GR15120 and IB16090.
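
    The HT3D formulation itself is detailed in the paper; as a rough illustration of the shared core idea only, the sketch below implements plain 2D Hough line voting viewed as threshold ("integrate-and-fire") neurons in NumPy. All names and parameter values are illustrative assumptions; the real HT3D additionally encodes segment extents and corner patterns in a 3D parameter space.

        # Minimal sketch: Hough voting viewed as integrate-and-fire neurons.
        # Each edge pixel injects one "spike" per (rho, theta) cell it lies on;
        # a cell "fires" once its accumulated potential crosses the threshold.
        import numpy as np

        def spiking_hough_lines(edge_points, img_diag, n_theta=180, threshold=50):
            thetas = np.deg2rad(np.arange(n_theta))
            potential = np.zeros((2 * img_diag + 1, n_theta))  # membrane potentials
            cols = np.arange(n_theta)
            for (x, y) in edge_points:
                rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
                potential[rhos + img_diag, cols] += 1.0        # one vote per angle
            fired = np.argwhere(potential >= threshold)        # neurons that spiked
            return [(rho - img_diag, thetas[t]) for rho, t in fired]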

    Event-based Vision: A Survey

    Event cameras are bio-inspired sensors that differ from conventional frame cameras: instead of capturing images at a fixed rate, they asynchronously measure per-pixel brightness changes and output a stream of events that encode the time, location, and sign of the brightness changes. Event cameras offer attractive properties compared to traditional cameras: high temporal resolution (on the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low power consumption, and high pixel bandwidth (on the order of kHz), resulting in reduced motion blur. Hence, event cameras have large potential for robotics and computer vision in scenarios that are challenging for traditional cameras, such as those requiring low latency, high speed, and high dynamic range. However, novel methods are required to process the unconventional output of these sensors in order to unlock their potential. This paper provides a comprehensive overview of the emerging field of event-based vision, with a focus on the applications and the algorithms developed to unlock the outstanding properties of event cameras. We present event cameras from their working principle, the actual sensors that are available, and the tasks they have been used for, from low-level vision (feature detection and tracking, optic flow, etc.) to high-level vision (reconstruction, segmentation, recognition). We also discuss the techniques developed to process events, including learning-based techniques, as well as specialized processors for these novel sensors, such as spiking neural networks. Additionally, we highlight the challenges that remain to be tackled and the opportunities that lie ahead in the search for a more efficient, bio-inspired way for machines to perceive and interact with the world.
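
    The survey covers many event representations; as a minimal, hedged sketch of what such an output stream looks like and the simplest way to aggregate it, the snippet below defines an event tuple and sums signed events into a frame. The field names are illustrative assumptions, not any specific camera SDK's API.

        # One event = (timestamp, pixel location, polarity of brightness change).
        from dataclasses import dataclass
        import numpy as np

        @dataclass
        class Event:
            t: float  # timestamp in microseconds
            x: int    # pixel column
            y: int    # pixel row
            p: int    # polarity: +1 brightness increase, -1 decrease

        def events_to_frame(events, height, width):
            """Sum signed events per pixel over a time window into a 2D frame."""
            frame = np.zeros((height, width), dtype=np.int32)
            for e in events:
                frame[e.y, e.x] += e.p
            return frame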

    Hough Transform Implementation For Event-Based Systems: Concepts and Challenges

    The Hough transform (HT) is one of the best-known techniques in computer vision and has been the basis of many practical image processing algorithms. HT, however, is designed to work with frame-based systems such as conventional digital cameras. Recently, event-based systems such as Dynamic Vision Sensor (DVS) cameras have become popular among researchers. Event-based cameras have a significantly higher temporal resolution (1 μs), but each pixel can only detect change, not color. As such, conventional image processing algorithms cannot be readily applied to event-based output streams. Therefore, it is necessary to adapt the conventional image processing algorithms for event-based cameras. This paper provides a systematic explanation, starting from extending the conventional HT to a 3D HT, its adaptation to event-based systems, and the implementation of the 3D HT using Spiking Neural Networks (SNNs). Using SNNs enables the proposed solution to be realized easily in hardware using an FPGA, without requiring a CPU or additional memory. In addition, we discuss techniques for an optimal SNN-based implementation that uses an efficient number of neurons for the required accuracy and resolution along each dimension, without increasing the overall computational complexity. We hope that this will help to reduce the gap between event-based and frame-based systems.
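
    The paper's exact SNN construction is described therein; as a hedged sketch of the general event-driven idea, the class below lets each incoming event vote immediately into a (rho, theta) array of leaky integrate-and-fire neurons, so stale votes decay and no frame buffer is needed. Thresholds and time constants are illustrative assumptions, not the paper's values.

        import numpy as np

        class EventHoughSNN:
            """Event-driven Hough line voting with leaky integrate-and-fire cells."""
            def __init__(self, img_diag, n_theta=180, threshold=30.0, tau_us=10_000.0):
                self.thetas = np.deg2rad(np.arange(n_theta))
                self.offset = img_diag
                self.v = np.zeros((2 * img_diag + 1, n_theta))  # membrane potentials
                self.last_t = 0.0
                self.threshold, self.tau = threshold, tau_us

            def on_event(self, t, x, y):
                """Process one event at time t (microseconds); return fired lines."""
                self.v *= np.exp(-(t - self.last_t) / self.tau)  # exponential leak
                self.last_t = t
                rhos = np.round(x * np.cos(self.thetas)
                                + y * np.sin(self.thetas)).astype(int)
                self.v[rhos + self.offset, np.arange(len(self.thetas))] += 1.0
                fired = np.argwhere(self.v >= self.threshold)
                self.v[self.v >= self.threshold] = 0.0           # reset after spiking
                return [(r - self.offset, self.thetas[c]) for r, c in fired]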

    Application of Image Processing Techniques for Autonomous Cars

    This paper aims to implement different image processing techniques that help control an autonomous car. A multistage pre-processing technique is used to detect lanes, street signs, and obstacles accurately. The images captured from the autonomous car are processed by the proposed system, which is used to control the autonomous vehicle. Canny edge detection was applied to the captured image for detecting edges, and the Hough transform was used to detect and mark the lanes immediately to the left and right of the car. This work attempts to highlight the importance of autonomous cars, which can drastically increase road safety and improve driving efficiency compared to human drivers. The performance of the proposed system is demonstrated by the implementation of an autonomous car that is able to detect and classify stop signs and other vehicles.
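
    The Canny-plus-Hough lane step described above maps directly onto standard OpenCV calls; the sketch below is a minimal version with illustrative thresholds and an assumed trapezoidal region of interest, not the paper's exact parameters.

        import cv2
        import numpy as np

        def detect_lane_lines(bgr_frame):
            gray = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2GRAY)
            blurred = cv2.GaussianBlur(gray, (5, 5), 0)
            edges = cv2.Canny(blurred, 50, 150)                 # edge map

            # Keep only a trapezoid in front of the car.
            h, w = edges.shape
            mask = np.zeros_like(edges)
            roi = np.array([[(0, h), (w // 2 - 50, h // 2 + 50),
                             (w // 2 + 50, h // 2 + 50), (w, h)]], dtype=np.int32)
            cv2.fillPoly(mask, roi, 255)
            edges = cv2.bitwise_and(edges, mask)

            # Probabilistic Hough transform returns segments as (x1, y1, x2, y2).
            return cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=40,
                                   minLineLength=40, maxLineGap=20)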

    High speed event-based visual processing in the presence of noise

    Standard machine vision approaches are challenged in applications where large amounts of noisy temporal data must be processed in real time. This work aims to develop neuromorphic event-based processing systems for such challenging, high-noise environments. The novel, application-focused event-based algorithms developed are primarily designed for implementation in digital neuromorphic hardware, with a focus on noise robustness, ease of implementation, operationally useful ancillary signals, and processing speed in embedded systems.
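
    The thesis's specific algorithms are not reproduced here; as a hedged baseline for the kind of denoising such systems rely on, the sketch below implements a classic background-activity filter that keeps an event only if a neighbouring pixel fired recently. The window size and timing constant are illustrative assumptions.

        import numpy as np

        class BackgroundActivityFilter:
            """Pass an event only if an 8-neighbour pixel saw one within dt_us."""
            def __init__(self, height, width, dt_us=5_000):
                self.last = np.full((height, width), -np.inf)  # last event time per pixel
                self.dt = dt_us

            def accept(self, t, x, y):
                y0, y1 = max(0, y - 1), min(self.last.shape[0], y + 2)
                x0, x1 = max(0, x - 1), min(self.last.shape[1], x + 2)
                self.last[y, x] = -np.inf                      # ignore the pixel's own history
                supported = bool(np.any(t - self.last[y0:y1, x0:x1] <= self.dt))
                self.last[y, x] = t                            # record this event
                return supported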

    Perception understanding action: adding understanding to the perception action cycle with spiking segmentation

    Traditionally, the Perception Action cycle is the first stage of building an autonomous robotic system and a practical way to implement a low-latency reactive system within a low Size, Weight and Power (SWaP) package. However, within complex scenarios, this method can lack contextual understanding of the scene, such as object recognition-based tracking or system attention. Object detection, identification, and tracking, along with semantic segmentation and attention, are all modern computer vision tasks in which Convolutional Neural Networks (CNN) have shown significant success, although such networks often have a large computational overhead and power requirements, which are not ideal in smaller robotics tasks. Furthermore, cloud computing and massively parallel processing like in Graphic Processing Units (GPUs) are outside the specification of many tasks due to their respective latency and SWaP constraints. In response to this, Spiking Convolutional Neural Networks (SCNNs) look to provide the feature extraction benefits of CNNs, while maintaining low latency and power overhead thanks to their asynchronous spiking event-based processing. A novel Neuromorphic Perception Understanding Action (PUA) system is presented that aims to combine the feature extraction benefits of CNNs with the low-latency processing of SCNNs. The PUA utilizes a Neuromorphic Vision Sensor for Perception that facilitates asynchronous processing within a Spiking fully Convolutional Neural Network (SpikeCNN) to provide semantic segmentation and Understanding of the scene. The output is fed to a spiking control system providing Actions. With this approach, the aim is to bring features of deep learning into the lower levels of autonomous robotics, while maintaining a biologically plausible spike-timing-dependent plasticity (STDP) rule throughout the learned encoding part of the network. The network is shown to provide a more robust and predictable management of spiking activity with an improved thresholding response. The reported experiments show that this system can deliver robust results of over 96% and 81% for accuracy and Intersection over Union, respectively, ensuring that such a system can be successfully used within object recognition, classification, and tracking problems. This demonstrates that the attention of the system can be tracked accurately, while the asynchronous processing means the controller can give precise track updates with minimal latency.
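
    The paper keeps a biologically plausible STDP rule in the learned encoding layers; as a hedged sketch of what a standard pair-based STDP update looks like (the time constants and learning rates below are illustrative, not the paper's), consider:

        import numpy as np

        def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012,
                        tau_plus=20.0, tau_minus=20.0):
            """Pair-based STDP: potentiate when the presynaptic spike precedes
            the postsynaptic one, depress otherwise; weight clamped to [0, 1]."""
            dt = t_post - t_pre
            if dt >= 0:
                w += a_plus * np.exp(-dt / tau_plus)    # pre before post: strengthen
            else:
                w -= a_minus * np.exp(dt / tau_minus)   # post before pre: weaken
            return float(np.clip(w, 0.0, 1.0))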

    Retina-Based Pipe-Like Object Tracking Implemented Through Spiking Neural Network on a Snake Robot

    Vision-based target tracking ability is crucial for bio-inspired snake robots exploring unknown environments. However, it is difficult for the traditional vision modules of snake robots to overcome the image blur resulting from periodic swings. A promising approach is to use a neuromorphic vision sensor (NVS), which mimics the biological retina to detect a target at a higher temporal frequency and over a wider dynamic range. In this study, an NVS and a spiking neural network (SNN) were deployed on a snake robot for the first time to achieve pipe-like object tracking. An SNN based on the Hough transform was designed to detect a target from the asynchronous event stream fed by the NVS. A tracking framework was then proposed that combines this detection with the state of snake motion analyzed from the joint position sensors. The experimental results obtained from the simulator demonstrate the validity of our framework and the autonomous locomotion ability of our snake robot. Comparing the performance of the SNN model on CPUs and GPUs, the model ran fastest on a GPU under a simplified, synchronous update rule, while it achieved higher precision on a CPU with asynchronous updates.
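
    The paper's network detects pipe-like (circular) targets from the event stream; as a hedged sketch of the underlying Hough voting for a circle of known radius, each event below votes for candidate centres on a ring around it, and accumulator cells past a threshold "fire". The fixed radius and threshold are illustrative simplifications, not the paper's exact network.

        import numpy as np

        def hough_circle_votes(events, height, width, radius,
                               threshold=25, n_angles=64):
            """Return (row, col) centres whose accumulated votes crossed threshold."""
            acc = np.zeros((height, width))
            angles = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
            dx = np.round(radius * np.cos(angles)).astype(int)
            dy = np.round(radius * np.sin(angles)).astype(int)
            for (x, y) in events:                     # event pixel coordinates
                cx, cy = x + dx, y + dy               # candidate centres on a ring
                ok = (cx >= 0) & (cx < width) & (cy >= 0) & (cy < height)
                acc[cy[ok], cx[ok]] += 1
            return np.argwhere(acc >= threshold)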