10 research outputs found

    Active crosstalk reduction system for multiview autostereoscopic displays

    Multiview autostereoscopic displays are considered the future of 3DTV. However, these displays suffer from a high level of crosstalk, which negatively impacts the quality of experience (QoE). In this paper, we propose a system to improve 3D QoE on multiview autostereoscopic displays. First, the display is characterized in terms of its luminance distribution. Then, the luminance profiles are modeled using a limited set of parameters. A Kinect sensor is used to determine the viewer's position in front of the display. Finally, the proposed system performs an intelligent on-the-fly allocation of the output views to minimize the perceived crosstalk. User preference among the 2D mode, the standard 3D mode, and the proposed system is evaluated. Results show that picture quality is significantly improved compared to the standard 3D mode, with similar depth perception and visual comfort.
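
    As a rough illustration of the on-the-fly view allocation described above, the sketch below assumes a simplified Gaussian luminance profile per view and hypothetical names (view_luminance, allocate_views); it is not the paper's measured characterization or actual algorithm. Given the tracked eye positions, it selects the pair of output views with the least estimated leakage into the opposite eye.

        import numpy as np

        # Assumed Gaussian luminance profile of a single view vs. viewing angle
        # (sigma and the peak spacing below are illustrative, not measured values).
        def view_luminance(angle_deg, peak_deg, sigma_deg=1.5):
            return np.exp(-0.5 * ((angle_deg - peak_deg) / sigma_deg) ** 2)

        def allocate_views(left_eye_deg, right_eye_deg, view_peaks_deg):
            """Pick the (left_view, right_view) pair with the lowest estimated crosstalk."""
            best_pair, best_xtalk = None, float("inf")
            for i, peak_l in enumerate(view_peaks_deg):
                for j, peak_r in enumerate(view_peaks_deg):
                    if i == j:
                        continue
                    # Crosstalk at each eye: the unintended view's luminance relative
                    # to the intended view's luminance at that eye's viewing angle.
                    xt_left = view_luminance(left_eye_deg, peak_r) / view_luminance(left_eye_deg, peak_l)
                    xt_right = view_luminance(right_eye_deg, peak_l) / view_luminance(right_eye_deg, peak_r)
                    if xt_left + xt_right < best_xtalk:
                        best_pair, best_xtalk = (i, j), xt_left + xt_right
            return best_pair, best_xtalk

        # Hypothetical 8-view display with luminance peaks 2 degrees apart; in the
        # paper, the eye positions come from Kinect-based viewer tracking.
        peaks = np.arange(8) * 2.0
        print(allocate_views(3.1, 5.0, peaks))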

    Non-volatile memory as hardware synapse in neuromorphic computing: A first look at reliability issues

    A large-scale artificial neural network, a three-layer perceptron, is implemented using two phase-change memory (PCM) devices to encode the weight of each of its 164,885 synapses. The PCM conductances are programmed using a crossbar-compatible pulse scheme, and the network is trained to recognize a 5000-example subset of the MNIST handwritten-digit database, achieving 82.2% accuracy during training and 82.9% generalization accuracy on unseen test examples. A simulation of the network's performance is developed that incorporates a statistical model of the PCM response, allowing quantitative estimation of the network's tolerance to device variation, defects, and the conductance response.
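
    A minimal sketch of the two-devices-per-synapse weight encoding mentioned above, under assumed normalized conductances, a fixed conductance increment per pulse, and hypothetical names (PCMSynapse, apply_update): because crossbar-compatible partial-SET pulses only increase a conductance, positive weight updates are applied to the G+ device and negative updates to the G- device.

        # Normalized conductance range and per-pulse increment (assumed values).
        G_MIN, G_MAX = 0.0, 1.0
        DELTA_G = 0.01

        class PCMSynapse:
            """Hypothetical two-device synapse: weight = G_plus - G_minus."""

            def __init__(self):
                self.g_plus = G_MIN
                self.g_minus = G_MIN

            @property
            def weight(self):
                return self.g_plus - self.g_minus

            def apply_update(self, delta_w):
                # Partial-SET pulses only increase a conductance, so positive
                # updates go to G_plus and negative updates to G_minus.
                n_pulses = int(round(abs(delta_w) / DELTA_G))
                if delta_w > 0:
                    self.g_plus = min(G_MAX, self.g_plus + n_pulses * DELTA_G)
                else:
                    self.g_minus = min(G_MAX, self.g_minus + n_pulses * DELTA_G)

        s = PCMSynapse()
        s.apply_update(+0.05)
        s.apply_update(-0.02)
        print(round(s.weight, 3))  # ~0.03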

    Large-scale neural networks implemented with nonvolatile memory as the synaptic weight element: comparative performance analysis (accuracy, speed, and power)

    We review our work toward achieving competitive performance (classification accuracies) for on-chip machine learning (ML) of large-scale artificial neural networks (ANNs) using Non-Volatile Memory (NVM)-based synapses, despite the inherent random and deterministic imperfections of such devices. We then show that such systems could potentially offer faster (up to 25x) and lower-power (60x to 2000x) ML training than GPU-based hardware.

    PCM for Neuromorphic Applications: Impact of Device Characteristics on Neural Network Performance

    The impact of Phase-Change Memory (PCM) and other Non-Volatile Memory (NVM) device characteristics on the quantitative classification performance of artificial neural networks is studied. Our results show that any NVM-based neural network (not just those based on PCM) can be expected to be highly resilient to random effects (device variability, yield, and stochasticity), but highly sensitive to "gradient" effects that act to steer all synaptic weights. Asymmetry, such as that found with PCM, can be mitigated by an occasional RESET strategy, which can be both infrequent and inaccurate. Algorithms that can finesse some of the imperfections of NVM devices are proposed.
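
    The occasional-RESET strategy can be sketched as follows; the saturation threshold, the noise level, and the function name are assumptions for illustration rather than the paper's procedure. Once either conductance of a two-device synapse nears saturation, both devices are RESET and only the net weight is re-programmed onto one of them, with Gaussian noise standing in for the permitted inaccuracy.

        import random

        G_MIN, G_MAX = 0.0, 1.0  # normalized conductance range (assumption)

        def occasional_reset(g_plus, g_minus, threshold=0.9, noise=0.05):
            """Refresh a two-device synapse once either conductance nears saturation."""
            if max(g_plus, g_minus) < threshold:
                return g_plus, g_minus  # no RESET needed yet (it can be infrequent)
            # Re-program only the net weight onto one device; Gaussian noise stands
            # in for the inaccuracy the abstract says can be tolerated.
            w = (g_plus - g_minus) + random.gauss(0.0, noise)
            if w >= 0:
                return min(G_MAX, w), G_MIN
            return G_MIN, min(G_MAX, -w)

        print(occasional_reset(0.95, 0.30))  # e.g. roughly (0.65, 0.0)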

    Experimental Demonstration and Tolerancing of a Large-Scale Neural Network (165 000 Synapses) Using Phase-Change Memory as the Synaptic Weight Element

    Using two phase-change memory (PCM) devices per synapse, a three-layer perceptron network with 164,885 synapses is trained on a subset (5000 examples) of the MNIST database of handwritten digits, using a backpropagation variant suitable for nonvolatile memory (NVM) + selector crossbar arrays, and obtains a training (generalization) accuracy of 82.2% (82.9%). Using a neural network simulator matched to the experimental demonstrator, extensive tolerancing is performed with respect to NVM variability, yield, and the stochasticity, linearity, and asymmetry of the NVM conductance response. We show that a bidirectional NVM with a symmetric, linear conductance response of high dynamic range can deliver the same high classification accuracies on this problem as a conventional, software-based implementation of the same network.
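
    The tolerancing described above relies on a parameterized model of the NVM conductance response. The sketch below is one such model with assumed step sizes, bounds, and noise level (not the authors' matched simulator): potentiation and depression steps shrink near the conductance bounds (nonlinearity), may differ in magnitude (asymmetry), and carry a stochastic component.

        import numpy as np

        def pulse(g, direction, g_max=1.0, dg_up=0.02, dg_down=0.02,
                  nonlinearity=1.0, noise=0.1, rng=np.random.default_rng(0)):
            """Apply one potentiation (+1) or depression (-1) pulse to conductance g."""
            if direction > 0:
                step = dg_up * (1.0 - g / g_max) ** nonlinearity   # saturating increase
            else:
                step = -dg_down * (g / g_max) ** nonlinearity      # saturating decrease
            step *= 1.0 + noise * rng.standard_normal()            # stochastic step size
            return float(np.clip(g + step, 0.0, g_max))

        # Ten potentiation pulses from mid-range; sweeping dg_up/dg_down (asymmetry),
        # nonlinearity, and noise while retraining the network is the tolerancing idea.
        g = 0.5
        for _ in range(10):
            g = pulse(g, +1)
        print(round(g, 3))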

    A Low Power, Fully Event-Based Gesture Recognition System

    We present the first gesture recognition system implemented end-to-end on event-based hardware, using a TrueNorth neurosynaptic processor to recognize hand gestures in real time, at low power, from events streamed live by a Dynamic Vision Sensor (DVS). The biologically inspired DVS transmits data only when a pixel detects a change, unlike traditional frame-based cameras, which sample every pixel at a fixed frame rate. This sparse, asynchronous data representation lets event-based cameras operate at much lower power than frame-based cameras. However, much of the energy efficiency is lost if, as in previous work, the event stream is interpreted by conventional synchronous processors. Here, for the first time, we process a live DVS event stream using TrueNorth, a natively event-based processor with 1 million spiking neurons. Configured here as a convolutional neural network (CNN), the TrueNorth chip identifies the onset of a gesture with a latency of 105 ms while consuming less than 200 mW. The CNN achieves 96.5% out-of-sample accuracy on a newly collected DVS dataset (DvsGesture) comprising 11 hand gesture categories performed by 29 subjects under 3 illumination conditions.
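
    As a rough illustration of the event-based front end, the sketch below bins asynchronous DVS events into fixed-length time windows, producing sparse two-channel frames that a CNN-style classifier could consume. The 128x128 resolution matches a DVS128-class sensor, but the window length, function name, and frame layout are assumptions, and the actual TrueNorth configuration is outside this sketch.

        import numpy as np

        def events_to_frames(events, width=128, height=128, window_us=32_000):
            """events: iterable of (x, y, polarity, t_us); yields (2, H, W) count frames."""
            frame = np.zeros((2, height, width), dtype=np.float32)
            window_end = None
            for x, y, polarity, t in events:
                if window_end is None:
                    window_end = t + window_us
                while t >= window_end:           # close out finished windows
                    yield frame
                    frame = np.zeros_like(frame)
                    window_end += window_us
                frame[1 if polarity > 0 else 0, y, x] += 1.0
            if window_end is not None:
                yield frame                      # flush the last partial window

        # A few synthetic events: (x, y, polarity, timestamp in microseconds).
        evts = [(10, 20, 1, 0), (11, 20, 0, 5_000), (40, 64, 1, 40_000)]
        for f in events_to_frames(evts):
            print(int(f.sum()))                  # 2, then 1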