
    Review of CMOS implementations of the CNN universal machine-type visual microprocessors

    While in most application areas digital processors can solve problems initially, in some fields their capabilities are very limited. A typical example is vision. Simple animals outperform supercomputers in basic vision tasks. To overcome the limitations of these conventional systems, a fundamentally different array architecture is needed. This architecture is based on the paradigm of analogic cellular (CNN) computing, whose most advanced implementation is the so-called CNN universal machine (CNN-UM). Its main components are: a) a parallel architecture consisting of an array of locally connected analog processors; b) a means of storing intermediate computation results locally, pixel by pixel; and c) stored on-chip programmability. When implemented as a mixed-signal VLSI chip, the CNN-UM is capable of image processing at rates of trillions of operations per second with very small size and low power consumption. Moreover, when an adaptive multi-sensor array is integrated into the CNN-UM, the resulting sensor+computer array offers unprecedented capabilities. This paper reviews the latest results on CNN-UM chips and systems, and outlines the envisaged roadmap for these computers. (Funding: European Union IST-1999-19007; Comisión Interministerial de Ciencia y Tecnología TIC99-082)
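    For reference, the cell dynamics underlying analogic CNN computing (standard in the CNN literature, not restated in the abstract) are, in the usual Chua-Yang formulation:

        \dot{x}_{ij} = -x_{ij}
            + \sum_{(k,l) \in S_r(i,j)} A_{kl}\, y_{kl}
            + \sum_{(k,l) \in S_r(i,j)} B_{kl}\, u_{kl} + z,
        \qquad
        y_{ij} = \tfrac{1}{2}\bigl( |x_{ij} + 1| - |x_{ij} - 1| \bigr)

    Here x is the cell state, u the input, y the output, A and B the feedback and control templates, z the bias, and S_r(i,j) the r-neighbourhood of cell (i,j); an analogic program is a stored sequence of such template operations interleaved with local memory transfers.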

    ACE16K: The Third Generation of Mixed-Signal SIMD-CNN ACE Chips Toward VSoCs

    Today, with 0.18-μm technologies mature and stable enough for mixed-signal design, with a large variety of CMOS-compatible optical sensors available, and with 0.09-μm technologies knocking at the door of designers, we can face the design of integrated systems, instead of just integrated circuits. In fact, significant progress has been made in the last few years toward the realization of vision systems on chips (VSoCs). Such VSoCs are eventually targeted to integrate within a semiconductor substrate the functions of optical sensing, image processing in space and time, high-level processing, and the control of actuators. The consecutive generations of ACE chips define a roadmap toward flexible VSoCs. These chips consist of arrays of mixed-signal processing elements (PEs) which operate in accordance with single instruction multiple data (SIMD) computing architectures and exhibit the functional features of CNN Universal Machines. They have been conceived to cover the early stages of the visual processing path in a fully parallel manner, and hence more efficiently than DSP-based systems. Across the different generations, improvements and modifications have been made seeking to converge with the newest findings of neurobiologists regarding the behavior of natural retinas. This paper presents considerations pertaining to the design of a member of the third generation of ACE chips, namely the so-called ACE16k chip. This chip, designed in a 0.35-μm standard CMOS technology, contains about 3.75 million transistors and exhibits peak computing figures of 330 GOPS, 3.6 GOPS/mm² and 82.5 GOPS/W. Each PE in the array contains a reconfigurable computing kernel capable of calculating linear convolutions on 3×3 neighborhoods in less than 1.5 μs, imagewise Boolean combinations in less than 200 ns, imagewise arithmetic operations in about 5 μs, and CNN-like temporal evolutions with a time constant of about 0.5 μs. Since the many ideas underlying the design of this chip cannot be covered in a single paper, this paper focuses on, first, placing the ACE16k in the ACE chip roadmap and, then, discussing the most significant modifications of ACE16k versus its predecessors in the family. (Funding: LOCUST IST2001-38097; VISTA TIC2003-09817-C02-01; Office of Naval Research N00014021088)
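    A back-of-the-envelope check of the quoted peak figures (the derived area and power below are inferences from the abstract's own numbers, not values stated in it):

        # Consistency check of the ACE16k peak figures quoted above.
        peak_gops = 330.0        # peak throughput, GOPS
        gops_per_mm2 = 3.6       # areal efficiency, GOPS/mm^2
        gops_per_watt = 82.5     # energy efficiency, GOPS/W

        implied_area_mm2 = peak_gops / gops_per_mm2   # ~91.7 mm^2 of active silicon
        implied_power_w = peak_gops / gops_per_watt   # ~4.0 W at peak throughput
        print(f"implied area:  {implied_area_mm2:.1f} mm^2")
        print(f"implied power: {implied_power_w:.1f} W")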

    Split and Shift Methodology: Overcoming Hardware Limitations on Cellular Processor Arrays for Image Processing

    In the multimedia era, image processing has become an element of singular importance in electronic devices. From communications (e.g., telemedicine), security (e.g., retinal recognition) and quality and industrial process control (e.g., orientation of articulated arms, detection of product defects), through research (e.g., tracking of elementary particles) and medical diagnosis (e.g., detection of abnormal cells, identification of retinal veins), there are countless applications where automatic image processing and interpretation are fundamental. The ultimate goal is the design of vision systems with decision-making capability. Current trends further require combining these capabilities in small, portable devices with real-time response. This poses new challenges in both hardware and software design for image processing, in search of new structures or architectures with the smallest possible area and energy consumption without compromising functionality or performance.
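    The abstract does not spell out the methodology itself, but the split-and-shift idea named in the title (decomposing a convolution whose kernel exceeds the array's native 3×3 neighbourhood into 3×3 convolutions applied to shifted copies of the image) can be sketched as follows. This is an illustrative reconstruction under our own assumptions (function names, zero-padded boundaries), not code from the thesis:

        import numpy as np
        from scipy.ndimage import correlate

        def split_and_shift(image, kernel, tile=3):
            """Emulate a large-kernel correlation on hardware limited to 3x3
            neighbourhoods: split the kernel into 3x3 tiles, apply each tile
            to a shifted copy of the image, and accumulate the results."""
            kh, kw = kernel.shape
            acc = np.zeros(image.shape, dtype=float)
            for r in range(0, kh, tile):
                for c in range(0, kw, tile):
                    sub = np.zeros((tile, tile))
                    block = kernel[r:r + tile, c:c + tile]
                    sub[:block.shape[0], :block.shape[1]] = block
                    # Align this tile's centre with the full kernel's centre.
                    dr = r + tile // 2 - kh // 2
                    dc = c + tile // 2 - kw // 2
                    # np.roll wraps at the borders; a real processor array
                    # would apply its own boundary condition instead.
                    shifted = np.roll(image, (-dr, -dc), axis=(0, 1))
                    acc += correlate(shifted, sub, mode='constant')
            return acc

        # Interior pixels match a direct 5x5 correlation.
        img = np.random.rand(32, 32)
        k5 = np.random.rand(5, 5)
        ref = correlate(img, k5, mode='constant')
        assert np.allclose(split_and_shift(img, k5)[4:-4, 4:-4], ref[4:-4, 4:-4])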

    A versatile sensor interface for programmable vision systems-on-chip

    This paper describes an optical sensor interface designed for a programmable mixed-signal vision chip. This chip has been designed and manufactured in a standard 0.35-μm n-well CMOS technology with one poly layer and five metal layers. It contains a digital shell for control and data interchange, and a central array of 128 × 128 identical cells, each cell corresponding to a pixel. Die size is 11.885 × 12.230 mm² and cell size is 75.7 μm × 73.3 μm. Each cell contains 198 transistors dedicated to functions like processing, storage, and sensing. The system is oriented to real-time, single-chip image acquisition and processing. Since each pixel performs the basic functions of sensing, processing and storage, data transfers are fully parallel (image-wide). The programmability of the processing functions enables the realization of complex image processing functions based on the sequential application of simpler operations. This paper provides a general overview of the system architecture and functionality, with special emphasis on the optical interface. (Funding: European Commission IST-1999-19007; Office of Naval Research (USA) N00014021088)
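    A quick geometric sanity check of the quoted dimensions (the array/die fill factor below is our inference, not a figure from the paper):

        # How much of the die does the 128 x 128 cell array fill?
        cells = 128
        cell_w_um, cell_h_um = 75.7, 73.3      # cell pitch, micrometres
        die_w_mm, die_h_mm = 11.885, 12.230    # die size, millimetres

        array_w_mm = cells * cell_w_um / 1000.0   # ~9.69 mm
        array_h_mm = cells * cell_h_um / 1000.0   # ~9.38 mm
        fill = (array_w_mm * array_h_mm) / (die_w_mm * die_h_mm)
        # ~63%; the remainder plausibly holds the digital shell and I/O.
        print(f"array: {array_w_mm:.2f} x {array_h_mm:.2f} mm ({fill:.0%} of die)")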

    A mixed-signal early vision chip with embedded image and programming memories and digital I/O

    From a system-level perspective, this paper presents a 128 × 128 flexible and reconfigurable Focal-Plane Analog Programmable Array Processor, which has been designed as a single chip in a 0.35-μm standard digital 1P-5M CMOS technology. The core processing array has been designed to achieve high speed of operation and large enough accuracy (~7 bits) with low power consumption. The chip includes on-chip program memory to allow for the execution of complex, sequential and/or bifurcation-flow image processing algorithms. It also includes the structures and circuits needed to guarantee its embedding into conventional digital hosting systems: external data interchange and control are completely digital. The chip contains close to four million transistors, 90% of them working in analog mode. The chip features up to 330 GOPS (giga-operations per second), and uses the power supply (180 GOP/J) and the silicon area (3.8 GOPS/mm²) efficiently, as it is able to maintain VGA processing throughputs of 100 frames/s with about 15 basic image processing tasks on each frame.
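    The operation and power budget implied by these figures (a back-of-the-envelope derivation from the abstract's own numbers, not values stated in it):

        # Budget implied by 330 GOPS at 100 frames/s with ~15 tasks per frame.
        ops_per_s = 330e9
        frames_per_s = 100
        tasks_per_frame = 15
        vga_pixels = 640 * 480

        ops_per_frame = ops_per_s / frames_per_s           # 3.3e9 ops/frame
        ops_per_task = ops_per_frame / tasks_per_frame     # ~2.2e8 ops/task
        ops_per_pixel = ops_per_task / vga_pixels          # ~716 ops/pixel/task
        power_w = 330e9 / 180e9                            # GOPS / (GOP/J) ~ 1.8 W
        print(f"{ops_per_pixel:.0f} ops per pixel per task, ~{power_w:.1f} W at peak")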

    A 64mW DNN-based Visual Navigation Engine for Autonomous Nano-Drones

    Fully autonomous miniaturized robots (e.g., drones) with artificial intelligence (AI) based visual navigation capabilities are extremely challenging drivers of Internet-of-Things edge intelligence. Visual navigation based on AI approaches, such as deep neural networks (DNNs), is becoming pervasive for standard-size drones, but is considered out of reach for nano-drones a few cm² in size. In this work, we present the first (to the best of our knowledge) demonstration of a navigation engine for autonomous nano-drones capable of closed-loop, end-to-end, DNN-based visual navigation. To achieve this goal we developed a complete methodology for parallel execution of complex DNNs directly on board resource-constrained, milliwatt-scale nodes. Our system is based on GAP8, a novel parallel ultra-low-power computing platform, and a 27 g commercial, open-source CrazyFlie 2.0 nano-quadrotor. As part of our general methodology we discuss the software mapping techniques that enable the state-of-the-art deep convolutional neural network presented in [1] to be fully executed on board within a strict 6 fps real-time constraint with no compromise in terms of flight results, while all processing is done with only 64 mW on average. Our navigation engine is flexible and can be used to span a wide performance range: at its peak performance corner it achieves 18 fps while still consuming on average just 3.5% of the power envelope of the deployed nano-aircraft. (Comment: 15 pages, 13 figures, 5 tables, 2 listings; accepted for publication in the IEEE Internet of Things Journal.)
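    The energy cost per processed frame implied by these numbers (our derivation, not a figure stated in the abstract):

        # Energy per frame at the 6 fps real-time operating point.
        avg_power_w = 0.064      # 64 mW average while meeting the constraint
        fps = 6                  # real-time requirement
        mj_per_frame = avg_power_w / fps * 1000   # ~10.7 mJ per frame
        print(f"~{mj_per_frame:.1f} mJ per frame at {fps} fps")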

    Image Processing: towards a System on Chip


    Near-field Perception for Low-Speed Vehicle Automation using Surround-view Fisheye Cameras

    Cameras are the primary sensor in automated driving systems. They provide high information density and are optimal for detecting the road infrastructure cues laid out for human vision. Surround-view camera systems typically comprise four fisheye cameras with a 190°+ field of view, covering the entire 360° around the vehicle and focused on near-field sensing. They are the principal sensors for low-speed, high-accuracy, close-range sensing applications such as automated parking, traffic-jam assistance, and low-speed emergency braking. In this work, we provide a detailed survey of such vision systems, setting up the survey in the context of an architecture that can be decomposed into four modular components, namely Recognition, Reconstruction, Relocalization, and Reorganization. We jointly call this the 4R Architecture. We discuss how each component accomplishes a specific aspect and provide a positional argument that they can be synergized to form a complete perception system for low-speed automation. We support this argument by presenting results from previous works and by presenting architecture proposals for such a system. Qualitative results are presented in the video at https://youtu.be/ae8bCOF77uY. (Comment: accepted for publication in IEEE Transactions on Intelligent Transportation Systems.)
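    As a rough illustration of how the 4R decomposition might be expressed in software (our own sketch; the paper defines the components conceptually, and this API, including all names below, is an assumption), one perception tick could be wired as:

        from dataclasses import dataclass
        from typing import Any, List, Protocol

        @dataclass
        class Frame:
            """One synchronized capture from the four fisheye cameras."""
            images: List[Any]

        class Recognition(Protocol):
            def detect(self, frame: Frame) -> List[Any]: ...   # semantic detections

        class Reconstruction(Protocol):
            def geometry(self, frame: Frame) -> Any: ...       # near-field 3D structure

        class Relocalization(Protocol):
            def pose(self, frame: Frame) -> Any: ...           # ego-pose against a map

        class Reorganization(Protocol):
            def fuse(self, detections: List[Any],
                     geometry: Any, pose: Any) -> Any: ...     # unified world model

        def perceive(frame: Frame, rec: Recognition, geo: Reconstruction,
                     loc: Relocalization, org: Reorganization) -> Any:
            """Run the three front-end Rs on a frame, then fuse their outputs."""
            return org.fuse(rec.detect(frame), geo.geometry(frame), loc.pose(frame))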