
    A CMOS image processing sensor for the detection of image features

    A compact CMOS vision sensor for the detection of higher-level image features, such as corners, junctions (T-, X-, Y-type) and line-stops, is presented. The on-chip detection of these features significantly reduces the amount of data and hence facilitates subsequent pattern-recognition processing. The sensor performs a series of template matching operations in an analog/digital mixed mode to realize various kinds of image filtering operations, including thinning, orientation decomposition, error correction, set operations, and others. The analog operations are carried out in the current domain. A design procedure based on a formulation of transistor mismatch is applied to fulfill both accuracy and speed requirements. The architecture resembles a CNN-UM and can be programmed by a 30-bit word. The results of an experimental 16x16-pixel chip demonstrate that the sensor is able to detect features at high speed due to its pixel-parallel operation: over 270 individual processing operations are performed in about 54 µs.
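    The abstract does not give the sensor's template set, but a minimal software sketch can illustrate the kind of binary 3x3 template matching it performs in pixel-parallel hardware. The line-stop template (end point of a thinned stroke) and all names below are illustrative assumptions, not taken from the paper.

    import numpy as np

    def match_template(img, template):
        """Mark pixels whose 3x3 neighbourhood matches the template.
        Template entries: 1 = foreground required, 0 = background required,
        -1 = don't care."""
        h, w = img.shape
        out = np.zeros_like(img)
        for y in range(1, h - 1):
            for x in range(1, w - 1):
                patch = img[y - 1:y + 2, x - 1:x + 2]
                care = template != -1
                out[y, x] = np.all(patch[care] == template[care])
        return out

    # Illustrative line-stop template: a stroke enters from above and ends here.
    linestop = np.array([[0, 1, 0],
                         [0, 1, 0],
                         [0, 0, 0]])
    img = np.zeros((16, 16), dtype=int)   # same resolution as the test chip
    img[4:9, 8] = 1                       # a short vertical stroke
    hits = match_template(img, linestop)  # marks the lower end of the stroke

    Presumably each such matching pass corresponds to one operation selected by the programmable 30-bit word; the software loop above is only a functional analogue of the chip's analog current-mode matching.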

    SCNN: A General Distribution based Statistical Convolutional Neural Network with Application to Video Object Detection

    Various convolutional neural networks (CNNs) developed recently achieve accuracy comparable to that of human beings in computer vision tasks such as image recognition, object detection and tracking. Most of these networks, however, process a single image frame at a time and may not fully exploit the temporal and contextual correlation typically present across multiple channels of the same image or adjacent frames of a video, thus limiting the achievable throughput. This limitation stems from the fact that existing CNNs operate on deterministic numbers. In this paper, we propose a novel statistical convolutional neural network (SCNN), which extends existing CNN architectures but operates directly on correlated distributions rather than deterministic numbers. By introducing a parameterized canonical model to represent correlated data and defining the corresponding operations required for CNN training and inference, we show that SCNN can process multiple frames of correlated images effectively, achieving significant speedup over existing CNN models. We use CNN-based video object detection as an example to illustrate the usefulness of the proposed SCNN as a general network model. Experimental results show that even a non-optimized implementation of SCNN achieves a 178% speedup over existing CNNs with slight accuracy degradation.
    Comment: AAAI1
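    As a rough illustration of operating on distributions instead of deterministic pixels, the sketch below pushes a per-pixel mean and variance through a plain 2D convolution. It assumes independent Gaussian pixels for simplicity; the paper's parameterized canonical model additionally captures correlation across channels and frames, so this is only a toy analogue and every name in it is invented for the example.

    import numpy as np
    from scipy.signal import convolve2d

    def conv_distribution(mean, var, kernel):
        """Propagate mean and variance through a linear 2D convolution."""
        out_mean = convolve2d(mean, kernel, mode='valid')
        out_var = convolve2d(var, kernel ** 2, mode='valid')  # Var[aX] = a^2 Var[X]
        return out_mean, out_var

    frames = np.random.rand(8, 32, 32)           # 8 frames of one short clip
    mean = frames.mean(axis=0)                   # summarize the clip as a distribution
    var = frames.var(axis=0)
    kernel = np.random.randn(3, 3) / 9.0
    m, v = conv_distribution(mean, var, kernel)  # one pass instead of eight

    Convolving the summary once, rather than each of the eight frames separately, hints at where the reported speedup could come from.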

    CSGNet: Neural Shape Parser for Constructive Solid Geometry

    We present a neural architecture that takes as input a 2D or 3D shape and outputs a program that generates the shape. The instructions in our program are based on constructive solid geometry principles, i.e., a set of Boolean operations on shape primitives defined recursively. Bottom-up techniques for this shape parsing task rely on primitive detection and are inherently slow, since the search space over possible primitive combinations is large. In contrast, our model uses a recurrent neural network that parses the input shape in a top-down manner, which is significantly faster and yields a compact, easy-to-interpret sequence of modeling instructions. Our model is also more effective as a shape detector than existing state-of-the-art detection techniques. Finally, we demonstrate that our network can be trained on novel datasets without ground-truth program annotations through policy gradient techniques.
    Comment: Accepted at CVPR-201
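    To make the idea of a shape program concrete, here is a tiny hypothetical executor for CSG-style instruction sequences of the kind such a parser emits: primitives push boolean occupancy masks onto a stack, and operators pop two masks and combine them. The instruction encoding, primitives, and canvas size are made up for this sketch and are not CSGNet's actual grammar.

    import numpy as np

    def circle(cx, cy, r, size=64):
        yy, xx = np.mgrid[0:size, 0:size]
        return (xx - cx) ** 2 + (yy - cy) ** 2 <= r ** 2

    def square(cx, cy, s, size=64):
        yy, xx = np.mgrid[0:size, 0:size]
        return (np.abs(xx - cx) <= s) & (np.abs(yy - cy) <= s)

    OPS = {'union': np.logical_or,
           'intersect': np.logical_and,
           'subtract': lambda a, b: a & ~b}

    def execute(program):
        """Evaluate a postfix CSG program into a boolean occupancy mask."""
        stack = []
        for instr in program:
            if instr[0] in OPS:                    # boolean operator
                b, a = stack.pop(), stack.pop()
                stack.append(OPS[instr[0]](a, b))
            else:                                  # primitive: (shape_fn, *params)
                stack.append(instr[0](*instr[1:]))
        return stack.pop()

    # A square plate with a round hole: square minus circle.
    shape = execute([(square, 32, 32, 20), (circle, 32, 32, 12), ('subtract',)])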

    Spiking-YOLO: Spiking Neural Network for Energy-Efficient Object Detection

    Over the past decade, deep neural networks (DNNs) have demonstrated remarkable performance in a variety of applications. As we try to solve more advanced problems, increasing demands for computing and power resources have become inevitable. Spiking neural networks (SNNs) have attracted widespread interest as the third generation of neural networks because of their event-driven and low-power nature. SNNs, however, are difficult to train, mainly owing to the complex dynamics of their neurons and the non-differentiable spike operations. Furthermore, their applications have been limited to relatively simple tasks such as image classification. In this study, we investigate the performance degradation of SNNs on a more challenging regression problem, namely object detection. Through an in-depth analysis, we introduce two novel methods: channel-wise normalization and a signed neuron with an imbalanced threshold, both of which provide fast and accurate information transmission for deep SNNs. Consequently, we present the first spike-based object detection model, called Spiking-YOLO. Our experiments show that Spiking-YOLO achieves remarkable results, comparable (up to 98%) to those of Tiny YOLO, on the non-trivial PASCAL VOC and MS COCO datasets. Furthermore, Spiking-YOLO on a neuromorphic chip consumes approximately 280 times less energy than Tiny YOLO and converges 2.3 to 4 times faster than previous SNN conversion methods.
    Comment: Accepted to AAAI 202
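    The abstract names its two methods without giving details, so the sketch below only gestures at plausible forms of each: per-channel (rather than per-layer) scaling of activations, and an integrate-and-fire neuron that can also emit negative spikes but with a deliberately harder (imbalanced) negative threshold. The parameter names and the 3x imbalance factor are assumptions for illustration, not values from the paper.

    import numpy as np

    def channelwise_normalize(acts):
        """acts: (C, H, W) activations; scale each channel by its own maximum."""
        scale = np.maximum(acts.reshape(acts.shape[0], -1).max(axis=1), 1e-8)
        return acts / scale[:, None, None]

    class SignedIFNeuron:
        """Integrate-and-fire neuron emitting +1/-1 spikes with asymmetric thresholds."""
        def __init__(self, v_th_pos=1.0, imbalance=3.0):
            self.v_th_pos = v_th_pos
            self.v_th_neg = -imbalance * v_th_pos  # negative spikes are harder to fire
            self.v = 0.0

        def step(self, input_current):
            self.v += input_current
            if self.v >= self.v_th_pos:
                self.v -= self.v_th_pos
                return 1                           # positive spike
            if self.v <= self.v_th_neg:
                self.v -= self.v_th_neg            # i.e. v rises by imbalance * v_th_pos
                return -1                          # negative spike
            return 0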