
    On Synthetic AER Generation

    In this paper, several software methods for generating synthetic AER streams from images stored in a computer's memory are proposed and evaluated. The evaluation criteria cover execution time, distribution error, and how the methods perform with two receiver cell models. A hardware PCI-to-AER interface is also presented.
    Funding: Ministerio de Ciencia y Tecnología TIC1999-0446-C02-02, TIC2000-0406-P4-05, FIT-07000/2002/921, TIC2002-10878-E, TIC-2003-08164-C03-01; Commission of the European Communities IST-2001-3412
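The abstract does not detail the individual generation methods, but the general idea of turning a stored image into an AER stream can be illustrated with a simple rate-coding sketch, where each pixel emits events in proportion to its intensity. This is a hedged illustration, not the paper's exact algorithms; the function name and parameters are assumptions.

```python
# Hypothetical sketch of software synthetic AER generation: a pixel with
# intensity v (0..255) emits round(v / 255 * max_events) events, spread
# evenly over a notional frame period, merged into one timestamped stream.
from typing import List, Tuple

def image_to_aer(image: List[List[int]], max_events: int = 8) -> List[Tuple[int, int, int]]:
    """Return a list of (timestamp, x, y) AER events for a grayscale image."""
    events = []
    for y, row in enumerate(image):
        for x, v in enumerate(row):
            n = round(v / 255 * max_events)
            for k in range(n):
                t = k * max_events // n      # even spacing over the frame period
                events.append((t, x, y))
    events.sort(key=lambda e: e[0])          # interleave pixels into one stream
    return events

stream = image_to_aer([[255, 0], [128, 64]])
```

The distribution error mentioned in the abstract would measure how far such a generated stream deviates from the ideal per-pixel event counts and spacing.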

    Two Hardware Implementations of the Exhaustive Synthetic AER Generation Method

    Address-Event Representation (AER) is a communications protocol for transferring images between chips, originally developed for bio-inspired image processing systems. In [6], [5], various software methods for synthetic AER generation were presented; but in the neuro-inspired research field, hardware methods are needed to generate AER from laptop computers. In this paper, two real-time implementations of the exhaustive method proposed in [6], [5] are presented. These implementations can transmit images stored in a computer through the AER bus, using the USB-AER board developed by our RTCAR group for the CAVIAR EU project.
    Funding: Commission of the European Communities IST-2001-34124 (CAVIAR); Comisión Interministerial de Ciencia y Tecnología TIC-2003-08164-C03-0
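The hardware details of the exhaustive method are not given in the abstract, but the exhaustive-scan idea can be sketched in software: sweep the image repeatedly in address order, accumulate each pixel's intensity, and emit an event whenever the accumulator overflows. This is a plausible model under stated assumptions, not a reproduction of the papers' implementations.

```python
# Hedged sketch of an exhaustive-scan style AER generator: with 255
# sweeps and 8-bit pixels, a pixel of intensity v emits exactly v events,
# naturally interleaved with other pixels across the sweeps.
from typing import List, Tuple

def exhaustive_scan(image: List[List[int]], sweeps: int = 255) -> List[Tuple[int, int]]:
    """Return pixel addresses (x, y) in emission order."""
    h, w = len(image), len(image[0])
    acc = [[0] * w for _ in range(h)]
    events = []
    for _ in range(sweeps):
        for y in range(h):
            for x in range(w):
                acc[y][x] += image[y][x]
                if acc[y][x] >= 255:     # accumulator overflow -> fire
                    acc[y][x] -= 255
                    events.append((x, y))
    return events

ev = exhaustive_scan([[255, 51]])
```

Because the emission decision is a compare-and-subtract per pixel per sweep, the scheme maps naturally onto the kind of real-time FPGA pipeline a USB-AER board provides.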

    AER tools for Communications and Debugging

    Address-Event Representation (AER) is a communications protocol for transferring spikes between bio-inspired chips. Such systems may consist of a hierarchical structure with several chips that transmit spikes among themselves in real time while performing some processing. To develop and test AER-based systems, it is convenient to have a set of instruments that allow one to: generate AER streams, monitor the output produced by neural chips, and modify the spike stream produced by an emitting chip to adapt it to the requirements of the receiving elements. In this paper we present a set of tools, developed in the CAVIAR EU project, that implement these functions.
    Funding: Unión Europea IST-2001-34124 (CAVIAR); Ministerio de Ciencia y Tecnología TIC-2003-08164-C03-0
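Of the three roles listed (generate, monitor, modify), the "modify" role can be sketched as an address remapper that adapts an emitting chip's event addresses to a receiver's address space. This is a minimal software illustration, not the CAVIAR hardware; the function and its signature are assumptions.

```python
# Hypothetical sketch of stream modification between AER chips: events
# are (timestamp, address) pairs; addresses are translated through a
# lookup table, and events with no mapping are dropped.
from typing import Dict, List, Tuple

def remap_stream(events: List[Tuple[int, int]],
                 mapping: Dict[int, int]) -> List[Tuple[int, int]]:
    """Translate event addresses for the receiving element."""
    return [(t, mapping[a]) for t, a in events if a in mapping]

out = remap_stream([(0, 5), (1, 9), (2, 5)], {5: 100})
```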

    Inter-spike-intervals Analysis of Poisson Like Hardware Synthetic AER Generation

    Address-Event Representation (AER) is a communication protocol for transferring images between chips, originally developed for bio-inspired image processing systems. Such systems may consist of a complicated hierarchical structure with many chips that transmit images among them in real time while performing some processing (for example, convolutions). In developing AER-based systems, it is very convenient to have some means of generating AER streams from images stored on a computer. In this paper we present a hardware method for generating AER streams in real time from a sequence of images stored in a computer's memory. The Kolmogorov-Smirnov test has been applied to verify that the spikes generated by this method follow a Poisson distribution. A USB-AER board and a PCI-AER board, developed by our RTCAR group, have been used.
    Funding: European Commission IST-2001-34124; Ministerio de Ciencia y Tecnología TIC-2003-08164-C03-0
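A Poisson-like spike stream has exponentially distributed inter-spike intervals (ISIs), and a one-sample Kolmogorov-Smirnov statistic against the exponential CDF quantifies the fit. The sketch below shows this analysis in software under stated assumptions; the paper's exact test setup and parameters are not reproduced.

```python
# Hedged sketch: draw exponential ISIs (the Poisson model for spike
# trains) and compute the K-S statistic D, the maximum distance between
# the empirical CDF and the exponential CDF 1 - exp(-rate * x).
import math
import random

def poisson_isis(rate_hz: float, n: int, seed: int = 42) -> list:
    rng = random.Random(seed)
    return [rng.expovariate(rate_hz) for _ in range(n)]

def ks_statistic_exponential(isis: list, rate_hz: float) -> float:
    """Max |empirical CDF - exponential CDF| over the sample."""
    xs = sorted(isis)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        cdf = 1.0 - math.exp(-rate_hz * x)
        d = max(d, abs((i + 1) / n - cdf), abs(i / n - cdf))
    return d

isis = poisson_isis(rate_hz=1000.0, n=5000)
d = ks_statistic_exponential(isis, rate_hz=1000.0)
```

For a genuinely exponential sample of size n, D is expected to stay below roughly 1.36/sqrt(n) at the 5% significance level; a hardware generator whose ISIs pass this bound behaves Poisson-like in this sense.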

    Address-event imagers for sensor networks: evaluation and modeling


    Evaluation and analysis of an approach to neural sensory fusion using spiking vision/audio sensors and convolutional neural networks

    This work aims to advance the knowledge and possible hardware implementations of Deep Learning mechanisms, as well as the efficient use of sensory fusion through such mechanisms. First, an analysis and study is carried out of current parallel programming languages and of Deep Learning mechanisms for audiovisual sensory fusion using neuromorphic sensors on FPGA platforms. Based on these studies, solutions implemented in OpenCL as well as in dedicated hardware, described in SystemVerilog, are first proposed for the acceleration of Deep Learning algorithms, starting with the use of a vision sensor as input. The results are analysed and compared. Next, an audio sensor is added and classical statistical mechanisms are proposed which, without providing learning capacity, allow the integration of information from both sensors; the results obtained are analysed along with their limitations. Finally, to provide the system with learning capacity, Deep Learning mechanisms, in particular CNNs, are used to fuse the audiovisual information and train the model to perform a specific task. In the end, the performance and efficiency of these mechanisms are evaluated, drawing conclusions and proposing improvements that are left indicated as future work.