
    EIE: Efficient Inference Engine on Compressed Deep Neural Network

    State-of-the-art deep neural networks (DNNs) have hundreds of millions of connections and are both computationally and memory intensive, making them difficult to deploy on embedded systems with limited hardware resources and power budgets. While custom hardware helps the computation, fetching weights from DRAM is two orders of magnitude more expensive than ALU operations and dominates the required power. The previously proposed 'Deep Compression' makes it possible to fit large DNNs (AlexNet and VGGNet) fully in on-chip SRAM. This compression is achieved by pruning the redundant connections and having multiple connections share the same weight. We propose an energy-efficient inference engine (EIE) that performs inference on this compressed network model and accelerates the resulting sparse matrix-vector multiplication with weight sharing. Going from DRAM to SRAM gives EIE a 120x energy saving; exploiting sparsity saves 10x; weight sharing gives 8x; skipping zero activations from ReLU saves another 3x. Evaluated on nine DNN benchmarks, EIE is 189x and 13x faster than CPU and GPU implementations of the same DNN without compression. EIE has a processing power of 102 GOPS/s working directly on a compressed network, corresponding to 3 TOPS/s on an uncompressed network, and processes the FC layers of AlexNet at 1.88x10^4 frames/sec with a power dissipation of only 600 mW. It is 24,000x and 3,400x more energy efficient than a CPU and GPU, respectively. Compared with DaDianNao, EIE has 2.9x, 19x and 3x better throughput, energy efficiency and area efficiency.
    Comment: External Links: TheNextPlatform: http://goo.gl/f7qX0L ; O'Reilly: https://goo.gl/Id1HNT ; Hacker News: https://goo.gl/KM72SV ; Embedded-vision: http://goo.gl/joQNg8 ; Talk at NVIDIA GTC'16: http://goo.gl/6wJYvn ; Talk at Embedded Vision Summit: https://goo.gl/7abFNe ; Talk at Stanford University: https://goo.gl/6lwuer. Published as a conference paper in ISCA 2016
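
    The kernel EIE targets is a sparse matrix-vector product in which each stored weight is only a small index into a shared codebook and zero activations are never multiplied. A minimal sketch of that computation in plain Python (the function name, storage layout and toy data are illustrative assumptions, not the paper's hardware datapath):

    import numpy as np

    def shared_weight_spmv(n_rows, col_ptr, row_idx, code_idx, codebook, x):
        """Compute y = W @ x for a pruned, weight-shared matrix W stored per
        column: row_idx holds the row positions of the nonzeros and code_idx
        holds small indices into the shared-weight codebook."""
        y = np.zeros(n_rows)
        for j, xj in enumerate(x):
            if xj == 0.0:                      # skip zero activations from ReLU
                continue
            for k in range(col_ptr[j], col_ptr[j + 1]):
                y[row_idx[k]] += codebook[code_idx[k]] * xj
        return y

    # Toy usage: a 4x4 matrix with 5 nonzeros drawn from a 4-entry codebook.
    codebook = np.array([-0.5, 0.1, 0.3, 0.7])
    col_ptr  = [0, 2, 3, 3, 5]                 # start offset of each column
    row_idx  = [0, 2, 1, 0, 3]                 # row of each stored nonzero
    code_idx = [1, 3, 0, 2, 2]                 # codebook index of each nonzero
    x = np.array([1.0, 0.0, 2.0, 0.5])         # activations; x[1] is skipped
    print(shared_weight_spmv(4, col_ptr, row_idx, code_idx, codebook, x))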

    Modeling of Spatial Uncertainties in the Magnetic Reluctivity

    In this paper, a computationally efficient approach is suggested for the stochastic modeling of an inhomogeneous reluctivity of magnetic materials. These materials can be part of electrical machines, such as a single-phase transformer (a benchmark example considered in this paper). The approach is based on the Karhunen-Loève expansion. The stochastic model is further used to study the statistics of the self-inductance of the primary coil as a quantity of interest.
    Comment: submitted to COMPE
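
    In a Karhunen-Loève expansion, the random reluctivity field is written as its mean plus a truncated sum of covariance eigenmodes weighted by uncorrelated random variables; sampling those variables yields material realizations whose effect on the self-inductance can then be studied. A minimal sketch on a 1-D grid (the exponential covariance kernel, correlation length and numbers are illustrative assumptions, not the paper's transformer model):

    import numpy as np

    def kl_samples(grid, mean, std, corr_len, n_terms, n_samples, rng):
        # Discretized covariance C_ij = std^2 * exp(-|x_i - x_j| / corr_len).
        dist = np.abs(grid[:, None] - grid[None, :])
        cov = std ** 2 * np.exp(-dist / corr_len)
        # Keep the n_terms largest eigenpairs of the covariance operator.
        eigval, eigvec = np.linalg.eigh(cov)
        order = np.argsort(eigval)[::-1][:n_terms]
        lam, phi = eigval[order], eigvec[:, order]
        # Truncated KL expansion: nu(x) = mean + sum_k sqrt(lam_k) phi_k(x) xi_k.
        xi = rng.standard_normal((n_samples, n_terms))
        return mean + xi @ (np.sqrt(lam)[:, None] * phi.T)

    rng = np.random.default_rng(0)
    grid = np.linspace(0.0, 1.0, 200)              # normalized position in the core
    fields = kl_samples(grid, mean=5e3, std=5e2, corr_len=0.2,
                        n_terms=10, n_samples=1000, rng=rng)
    print(fields.shape)                            # 1000 sampled reluctivity profiles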

    Grammar-Based Representations of Large Sparse Binary Matrices

    The representation of large sparse matrices is a fundamental problem in big data processing and analysis. In some applications dealing with large sparse matrices, the I/O of these matrices is the bottleneck of the whole system. To reduce the memory-bandwidth requirement in this scenario, it is important to develop alternative compact representations of large sparse matrices that, where possible, still facilitate matrix operations. In this thesis, we propose two grammar-based methods to compactly represent a sparse binary matrix while supporting random access to individual elements. The first approach combines dimension coding (proposed by Yang [12]) with either a raster scan or a Hilbert scan, using the so-called directionless grammar. With the help of scanning, dimension coding's ability to represent 1-D sparse signals is extended to 2-D sparse matrices, and this approach inherits the random accessibility of dimension coding. In the second approach, we introduce a new concept called the Context-free Bipartite Grammar (CFBG) and present a framework in which large sparse binary matrices can be represented by CFBGs. As with the traditional Context-free Grammar (CFG), a CFBG consists of a set of production rules. Unlike in a CFG, however, the right-hand side of each production rule in a CFBG is a labeled bipartite graph whose edges are labeled as either variable or terminal symbols. As the right-hand side of a production rule is an ordered edge set, the CFBG is also directionless. Two bipartite grammar transforms, a Sequential D-Neighborhood Pairing Transform (SNPT) and an Iterative Pairing Transform (IPT), are further presented to convert any binary matrix into a CFBG representing it. Experiments show that, compared with popular sparse matrix storage methods such as compressed row storage and quadtrees, grammar-based sparse binary matrix representations can reduce the storage requirement of sparse matrices significantly (by a factor of as much as 70).
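
    For context, the compressed-row-storage baseline mentioned above already combines compact storage with random access: only the column positions of the 1s are kept, and an element lookup is a binary search within its row. A minimal sketch of that baseline (the grammar-based CFBG construction itself is not reproduced here):

    import bisect
    import numpy as np

    def to_csr_binary(dense):
        """Store only the column positions of the 1s, row by row."""
        row_ptr, col_idx = [0], []
        for row in dense:
            col_idx.extend(np.flatnonzero(row).tolist())
            row_ptr.append(len(col_idx))
        return row_ptr, col_idx

    def get(row_ptr, col_idx, i, j):
        """Random access to element (i, j): binary-search row i's column list."""
        lo, hi = row_ptr[i], row_ptr[i + 1]
        k = bisect.bisect_left(col_idx, j, lo, hi)
        return int(k < hi and col_idx[k] == j)

    m = np.array([[0, 1, 0, 0],
                  [0, 0, 0, 1],
                  [1, 0, 1, 0]])
    row_ptr, col_idx = to_csr_binary(m)
    print(get(row_ptr, col_idx, 2, 2), get(row_ptr, col_idx, 0, 3))   # 1 0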

    Combined Industry, Space and Earth Science Data Compression Workshop

    The sixth annual Space and Earth Science Data Compression Workshop and the third annual Data Compression Industry Workshop were held as a single combined workshop. The workshop was held April 4, 1996 in Snowbird, Utah in conjunction with the 1996 IEEE Data Compression Conference, which was held at the same location March 31 - April 3, 1996. The Space and Earth Science Data Compression sessions seek to explore opportunities for data compression to enhance the collection, analysis, and retrieval of space and earth science data. Of particular interest is data compression research that is integrated into, or has the potential to be integrated into, a particular space or earth science data information system. Preference is given to data compression research that takes into account the scientist's data requirements, and the constraints imposed by the data collection, transmission, distribution and archival systems.

    Sweep encoding: Serializing space subdivision schemes for optimal slicing

    Slicing a model (computing thin slices of a geometric or volumetric model with a sweeping plane) is necessary for several applications ranging from 3D printing to medical imaging. This paper introduces a technique designed to compute these slices efficiently, even for huge and complex models. We voxelize the volume of the model at the required resolution and show how to encode this voxelization in an out-of-core octree using a novel Sweep Encoding linearization. This approach allows for efficient slicing with bounded cost per slice. We discuss specific applications, including 3D printing, and compare these octrees' performance against the standard representations in the literature.
    This work has been partially funded by the Spanish Ministry of Science and Innovation (MCIN / AEI / 10.13039/501100011033) and FEDER ("A way to make Europe") under grant TIN2017-88515-C2-1-R. Peer Reviewed. Postprint (published version).
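
    The key property is that a slice only has to visit the octree nodes whose z-range intersects the sweeping plane. A minimal sketch of that idea on a small in-memory octree (the node layout and names are illustrative assumptions; the paper's out-of-core Sweep Encoding linearization is not reproduced here):

    import numpy as np

    # A node is either a bool (a completely full or empty cube) or a list of
    # 8 children ordered by (x, y, z) octant.
    def slice_octree(node, origin, size, z, out):
        """Rasterize the voxels of `node` lying on slice plane index `z`
        into the 2-D boolean image `out`."""
        x0, y0, z0 = origin
        if not (z0 <= z < z0 + size):
            return                                 # subtree misses this slice
        if isinstance(node, bool):
            if node:                               # solid cube: fill its footprint
                out[y0:y0 + size, x0:x0 + size] = True
            return
        half = size // 2
        for i, child in enumerate(node):
            dx = (i & 1) * half
            dy = ((i >> 1) & 1) * half
            dz = ((i >> 2) & 1) * half
            slice_octree(child, (x0 + dx, y0 + dy, z0 + dz), half, z, out)

    # Toy usage: a 2x2x2 voxelization with one solid octant, sliced at z = 0.
    tree = [True, False, False, False, False, False, False, False]
    out = np.zeros((2, 2), dtype=bool)
    slice_octree(tree, (0, 0, 0), 2, 0, out)
    print(out)                                     # only the voxel at (x=0, y=0) is set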

    The 1993 Space and Earth Science Data Compression Workshop

    The Earth Observing System Data and Information System (EOSDIS) is described in terms of its data volume, data rate, and data distribution requirements. Opportunities for data compression in EOSDIS are discussed.