
    Event-based neuromorphic stereo vision


    Parallel computing for brain simulation

    Background: The human brain is the most complex system in the known universe and is therefore one of its greatest mysteries. It provides human beings with extraordinary abilities, yet how and why most of these abilities arise is still not understood. Aims: For decades, researchers have been trying to make computers reproduce these abilities, focusing both on understanding the nervous system and on processing data more efficiently than before; the aim is to make computers process information similarly to the brain. Important technological developments and vast multidisciplinary projects have made it possible to create the first simulation with a number of neurons similar to that of a human brain. Conclusion: This paper presents an up-to-date review of the main research projects that are trying to simulate and/or emulate the human brain. They employ different types of computational models using parallel computing: digital models, analog models and hybrid models. The review includes the current applications of these works as well as future trends. It focuses both on works that pursue advanced progress in neuroscience and on others that seek new discoveries in computer science (neuromorphic hardware, machine learning techniques). Their most outstanding characteristics are summarized and the latest advances and future plans are presented. In addition, this review points out the importance of considering not only neurons: computational models of the brain should also include glial cells, given the proven importance of astrocytes in information processing.
    Funding: Galicia. Consellería de Cultura, Educación e Ordenación Universitaria; GRC2014/049. Galicia. Consellería de Cultura, Educación e Ordenación Universitaria; R2014/039. Instituto de Salud Carlos III; PI13/0028
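    As a rough illustration of the digital, parallel neuron models the reviewed projects rely on, the sketch below simulates a small population of leaky integrate-and-fire neurons across several worker processes in Python. The network size, time constants, and noisy input are arbitrary assumptions made for this example and are not taken from any of the surveyed projects.

        # Minimal sketch (illustrative assumptions only): a population of leaky
        # integrate-and-fire neurons split across parallel worker processes.
        import numpy as np
        from multiprocessing import Pool

        DT, T_STEPS = 1e-3, 1000              # 1 ms step, 1 s of simulated time
        TAU, V_TH, V_RESET = 20e-3, 1.0, 0.0  # membrane time constant, threshold, reset

        def simulate_chunk(args):
            """Integrate one chunk of independent neurons; return spike counts."""
            seed, n_neurons = args
            rng = np.random.default_rng(seed)
            v = np.zeros(n_neurons)
            spikes = np.zeros(n_neurons, dtype=int)
            for _ in range(T_STEPS):
                i_ext = rng.normal(1.1, 0.5, n_neurons)  # noisy external drive
                v += DT / TAU * (-v + i_ext)             # leaky integration
                fired = v >= V_TH
                spikes += fired
                v[fired] = V_RESET                       # reset after each spike
            return spikes

        if __name__ == "__main__":
            chunks = [(seed, 250) for seed in range(4)]  # 4 workers x 250 neurons
            with Pool(processes=4) as pool:
                counts = np.concatenate(pool.map(simulate_chunk, chunks))
            print("mean firing rate (Hz):", counts.mean() / (T_STEPS * DT))

    Projects of the scale reviewed in the paper distribute many millions of such units over dedicated digital, analog or hybrid hardware rather than over CPU processes.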

    Low-power CMOS digital-pixel Imagers for high-speed uncooled PbSe IR applications

    This PhD dissertation describes the research and development of a new low-cost medium wavelength infrared (MWIR) monolithic imager technology for high-speed uncooled industrial applications. It builds on the latest technological advances in the field of vapour phase deposition (VPD) PbSe-based MWIR detection accomplished by the industrial partner NIT S.L., adding fundamental knowledge on the investigation of novel VLSI analog and mixed-signal design techniques at circuit and system levels for the development of the readout integrated device attached to the detector. The work rests on the hypothesis that, by the use of the preceding design techniques, current standard inexpensive CMOS technologies fulfill all operational requirements of the VPD PbSe detector in terms of connectivity, reliability, functionality and scalability to integrate the device. The resulting monolithic PbSe-CMOS camera must consume very low power, operate at kHz frequencies, exhibit good uniformity and fit the CMOS read-out active pixels in the compact pitch of the focal plane, all while addressing the particular characteristics of the MWIR detector: high dark-to-signal ratios, large input parasitic capacitance values and remarkable mismatching in PbSe integration. In order to meet these demands, this thesis proposes null inter-pixel crosstalk vision sensor architectures based on a digital-only focal plane array (FPA) of configurable pixel sensors. Each digital pixel sensor (DPS) cell is equipped with fast communication modules, self-biasing, offset cancellation, an analog-to-digital converter (ADC) and fixed pattern noise (FPN) correction. In-pixel power consumption is minimized by extensive use of MOSFET subthreshold operation. The main aim is to promote the integration of PbSe-based infrared (IR) image sensing technologies so as to widen their use, not only in distinct scenarios, but also at different stages of PbSe-CMOS integration maturity. For this purpose, we propose to investigate a comprehensive set of functional blocks distributed in two parallel approaches:
    • Frame-based “Smart” MWIR imaging based on new DPS circuit topologies with gain and offset FPN correction capabilities. This research line exploits the detector pitch to offer fully-digital programmability at pixel level and complete functionality with input parasitic capacitance compensation and internal frame memory.
    • Frame-free “Compact”-pitch MWIR vision based on a novel DPS lossless analog integrator and configurable temporal difference, combined with asynchronous communication protocols inside the focal plane. This strategy is conceived to allow extensive pitch compaction and readout speed increase by the suppression of in-pixel digital filtering, and the use of dynamic bandwidth allocation in each pixel of the FPA.
    In order to make the electrical validation of first prototypes independent of the expensive PbSe deposition processes at wafer level, the investigation is extended as well to the development of affordable sensor emulation strategies and integrated test platforms specifically oriented to image read-out integrated circuits. DPS cells, imagers and test chips have been fabricated and characterized in standard 0.15 μm 1P6M, 0.35 μm 2P4M and 2.5 μm 2P1M CMOS technologies, all as part of research projects with industrial partnership.
    The research has led to the first high-speed uncooled frame-based IR quantum imager monolithically fabricated in a standard VLSI CMOS technology, and has given rise to the Tachyon series [1], a new line of commercial IR cameras used in real-time industrial, environmental and transportation control systems. The frame-free architectures investigated in this work represent a firm step towards pushing pixel pitch and system bandwidth further, up to the limits imposed by the evolving PbSe detector in future generations of the device.
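    The per-pixel gain and offset fixed pattern noise (FPN) correction mentioned above can be pictured as a two-point calibration: each pixel's raw response is mapped onto a common scale using a gain and an offset estimated from two uniform reference exposures. The sketch below is a generic software illustration of that idea with made-up array sizes and reference levels; it does not reproduce the in-pixel digital circuitry developed in the thesis.

        # Generic two-point fixed-pattern-noise (FPN) correction sketch.
        # Frames, sizes and reference levels are illustrative assumptions only.
        import numpy as np

        def calibrate_fpn(dark_frame, flat_frame, flat_level):
            """Estimate per-pixel gain and offset from two uniform reference frames."""
            gain = flat_level / (flat_frame - dark_frame)  # per-pixel gain map
            offset = dark_frame                            # per-pixel offset map
            return gain, offset

        def correct(raw_frame, gain, offset):
            """Map every pixel onto a common response scale."""
            return (raw_frame - offset) * gain

        # Toy 4x4 focal plane with mismatched pixel gains and offsets.
        rng = np.random.default_rng(0)
        true_gain = rng.normal(1.0, 0.2, (4, 4))
        true_offset = rng.normal(10.0, 3.0, (4, 4))
        scene = 100.0                                      # uniform test illumination

        dark = true_offset                                 # response with no signal
        flat = scene * true_gain + true_offset             # response at a known signal
        g, o = calibrate_fpn(dark, flat, flat_level=scene)
        print(correct(scene * true_gain + true_offset, g, o))  # ~100.0 everywhere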

    Event-Driven Technologies for Reactive Motion Planning: Neuromorphic Stereo Vision and Robot Path Planning and Their Application on Parallel Hardware

    Robotics is increasingly becoming a key factor in technological progress. Despite impressive advances in recent decades, mammalian brains still outperform even the most powerful machines in vision and motion planning. Industrial robots are very fast and precise, but their planning algorithms are not capable enough for highly dynamic environments such as those required for human-robot collaboration (HRC). Without fast and adaptive motion planning, safe HRC cannot be guaranteed. Neuromorphic technologies, including visual sensors and hardware chips, operate asynchronously and thus process spatio-temporal information very efficiently. Event-based visual sensors in particular are already superior to conventional, synchronous cameras in many applications. Event-based methods therefore have great potential to enable faster and more energy-efficient algorithms for motion control in HRC. This thesis presents an approach to flexible, reactive motion control of a robot arm, in which exteroception is achieved through event-based stereo vision and path planning is implemented in a neural representation of the configuration space. The multi-view 3D reconstruction is evaluated by a qualitative analysis in simulation and transferred to a stereo system of event-based cameras. A demonstrator with an industrial robot is used to evaluate the reactive, collision-free online planning, and also serves for a comparative study with sampling-based planners. This is complemented by a benchmark of parallel hardware solutions, with robot path planning chosen as the test scenario. The results show that the proposed neural solutions are an effective way to realize robot control for dynamic scenarios. This work establishes a foundation for neural solutions in adaptive manufacturing processes, including collaboration with humans, without sacrificing speed or safety, and thus paves the way for integrating brain-inspired hardware and algorithms into industrial robotics and HRC
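    One way to picture path planning in a neural representation of the configuration space is a wavefront expansion over a discretised configuration space: activity spreads outwards from the goal cell through free cells, and the path is recovered by descending the resulting cost field from the start. The sketch below is a generic grid-based illustration of that idea with a toy map; it is not the spiking, parallel-hardware implementation evaluated in the thesis.

        # Generic wavefront (breadth-first) planner on a discretised configuration
        # space. The 2D grid, obstacles, start and goal are toy assumptions.
        from collections import deque

        def wavefront_plan(grid, start, goal):
            """Expand a cost wave from the goal, then descend it from the start."""
            rows, cols = len(grid), len(grid[0])
            cost = {goal: 0}
            queue = deque([goal])
            while queue:                                   # breadth-first expansion
                r, c = queue.popleft()
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nr, nc = r + dr, c + dc
                    if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 \
                            and (nr, nc) not in cost:
                        cost[(nr, nc)] = cost[(r, c)] + 1
                        queue.append((nr, nc))
            if start not in cost:
                return None                                # goal unreachable
            path, cell = [start], start
            while cell != goal:                            # follow decreasing cost to the goal
                cell = min(((cell[0] + dr, cell[1] + dc)
                            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                            if (cell[0] + dr, cell[1] + dc) in cost),
                           key=cost.get)
                path.append(cell)
            return path

        grid = [[0, 0, 0, 0],                              # 0 = free, 1 = obstacle
                [1, 1, 0, 1],
                [0, 0, 0, 0],
                [0, 1, 1, 0]]
        print(wavefront_plan(grid, start=(3, 0), goal=(0, 0)))

    Reactive planners of the kind studied in the thesis re-run such an expansion continuously as obstacles move, which is what makes the parallel, event-driven hardware implementations attractive.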

    Real-time Neuromorphic Visual Pre-Processing and Dynamic Saliency

    The human brain is by far the most computationally complex, efficient, and reliable computing system to operate under such low-power, small-size, and light-weight specifications. Within the field of neuromorphic engineering, we seek to design systems that mimic the human brain as a means of attaining its desirable properties. In this doctoral work, the focus is within the realm of vision, specifically visual saliency and related visual tasks with bio-inspired, real-time processing. The human visual system, from the retina through the visual cortical hierarchy, is responsible for extracting visual information and processing it, forming our visual perception. This visual information is transmitted through the various layers of the visual system via spikes (or action potentials), representing information in the temporal domain. The objective is to exploit this neurological communication protocol and functionality within the systems we design. This approach is essential for the advancement of autonomous, mobile agents (e.g. drones/MAVs, cars), which must perform visual tasks under size and power constraints for which traditional CPU or GPU implementations do not suffice. Although the high-level objective is to design a complete visual processor with direct physical and functional correlates to the human visual system, we focus on three specific tasks. The first focus of this thesis is the integration of motion into a biologically plausible proto-object-based visual saliency model. Laurent Itti, one of the pioneers in the field, defines visual saliency as "the distinct subjective perceptual quality which makes some items in the world stand out from their neighbors and immediately grab our attention." From humans to insects, visual saliency is important for extracting only the interesting regions of visual stimuli for further processing. Prior to this doctoral work, Russell et al. (2014) designed a model of proto-object-based visual saliency with biological correlates. That model was designed for computing saliency only on static images. However, motion is a naturally occurring phenomenon that plays an essential role in both human and animal visual processing. Hence, an ideal model of visual saliency should consider motion exhibited within the visual scene. In this work, a novel dynamic proto-object-based visual saliency model is described which extends the Russell et al. model to consider not only static but also temporal information. The model was validated using metrics that determine how accurately it predicts human eye fixations and saccades on a public dataset of videos with attached eye-tracking data. It outperformed other state-of-the-art visual saliency models in computing dynamic visual saliency. A model that can accurately predict where humans look can serve as a front-end component to other visual processors performing tasks such as object detection and recognition, or object tracking; in doing so it can reduce the required data throughput and increase processing speed for such tasks. Furthermore, it has more obvious applications in artificial intelligence in mimicking the functionality of the human visual system. The second focus of this thesis is the implementation of this visual saliency model on an FPGA (Field Programmable Gate Array) for real-time processing.
    Initially, this model was designed within MATLAB, a software-based approach running on a CPU, which limits the processing speed and consumes unnecessary amounts of power due to overhead. This is detrimental for integration with an autonomous, mobile system that must operate in real time. The novel FPGA implementation allows for a low-power, high-speed approach to computing visual saliency. There are a few existing FPGA-based implementations of visual saliency, and of those, none are based on the notion of proto-objects. This work presents the first, to our knowledge, FPGA implementation of an object-based visual saliency model. Such an FPGA implementation meets the low-power, light-weight, and small-size specifications that we seek within the field of neuromorphic engineering. For validating the FPGA model, the same metrics are used to determine the extent to which it predicts human eye saccades and fixations, and the hardware implementation is compared to the software model for validation. The third focus of this thesis is the design of a generic neuromorphic platform, both on FPGA and in VLSI (Very-Large-Scale Integration) technology, for performing visual tasks, including those necessary in the computation of visual saliency. Visual processing tasks such as image filtering and image dewarping are demonstrated on this novel neuromorphic technology, which consists of an array of hardware-based generalized integrate-and-fire neurons, and the visual saliency model's computation can be offloaded onto this hardware-based architecture. We first demonstrate an emulation of this neuromorphic system on FPGA, showing its capability for dewarping and filtering tasks as well as its integration with a neuromorphic camera called the ATIS (Asynchronous Time-based Image Sensor). We then demonstrate the neuromorphic platform implemented in CMOS technology, specifically designed for low mismatch, high density, and low power. Such a VLSI technology-based platform further bridges the gap between engineering and biology and moves us closer towards developing a complete neuromorphic visual processor
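    A standard way to score how well a saliency map predicts human fixations is the Normalized Scanpath Saliency (NSS): the map is z-scored and then sampled at the recorded fixation locations. The sketch below shows that computation on synthetic data; the actual video dataset, eye-tracking recordings, and full metric suite used in the thesis are not reproduced here.

        # Normalized Scanpath Saliency (NSS) sketch on synthetic data.
        # The saliency map and fixation coordinates are made-up assumptions.
        import numpy as np

        def nss(saliency_map, fixations):
            """Mean z-scored saliency value sampled at human fixation locations."""
            s = (saliency_map - saliency_map.mean()) / saliency_map.std()
            return float(np.mean([s[r, c] for r, c in fixations]))

        rng = np.random.default_rng(1)
        saliency = rng.random((48, 64))            # toy saliency map
        saliency[20:28, 30:40] += 2.0              # region the model marks as salient
        fixations = [(22, 33), (25, 38), (10, 5)]  # toy fixation points (row, col)
        print("NSS:", nss(saliency, fixations))    # > 0 means better than chance

    Scores well above zero indicate that fixated locations received higher-than-average saliency, which is the property both the software and the FPGA models are validated against.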

    Advanced Applications of Rapid Prototyping Technology in Modern Engineering

    Rapid prototyping (RP) technology is widely known and appreciated for its flexible and customized manufacturing capabilities. The widely studied RP techniques include stereolithography apparatus (SLA), selective laser sintering (SLS), three-dimensional printing (3DP), fused deposition modeling (FDM), 3D plotting, solid ground curing (SGC), multiphase jet solidification (MJS), and laminated object manufacturing (LOM). Different techniques are associated with different materials and/or processing principles and thus are devoted to specific applications. RP technology is no longer used only for prototype building; rather, it has been extended to real industrial manufacturing solutions. Today, RP technology has contributed to almost all engineering areas, including mechanical, materials, industrial, aerospace, electrical and, most recently, biomedical engineering. This book aims to present the advanced development of RP technologies in various engineering areas as solutions to real-world engineering problems

    Technology 2000, volume 1

    The purpose of the conference was to increase awareness of existing NASA-developed technologies that are available for immediate use in the development of new products and processes, and to lay the groundwork for the effective utilization of emerging technologies. There were sessions on the following: Computer technology and software engineering; Human factors engineering and life sciences; Information and data management; Material sciences; Manufacturing and fabrication technology; Power, energy, and control systems; Robotics; Sensors and measurement technology; Artificial intelligence; Environmental technology; Optics and communications; and Superconductivity

    First Annual Workshop on Space Operations Automation and Robotics (SOAR 87)

    Several topics related to automation and robotics technology are discussed. Automation of checkout, ground support, and logistics; automated software development; man-machine interfaces; neural networks; systems engineering and distributed/parallel processing architectures; and artificial intelligence/expert systems are among the topics covered