c-trie++: A Dynamic Trie Tailored for Fast Prefix Searches
Given a dynamic set K of k strings of total length n whose characters are
drawn from an alphabet of size σ, a keyword dictionary is a data structure
built on K that provides locate, prefix search, and update operations on K.
Under the assumption that α = w / lg σ characters fit into a single machine
word of w bits, we propose a keyword dictionary that represents K in
n lg σ + Θ(k lg n) bits of space, supporting all operations in O(m/α + lg α)
expected time on an input string of length m in the word RAM model. This data
structure is complemented by an exhaustive practical evaluation, highlighting
its practical usefulness, especially for prefix searches, one of the most
elementary keyword dictionary operations.
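To make the interface concrete, here is a minimal sketch of the three
operations (locate, prefix search, update) on a plain pointer-based trie. It
illustrates only the operations' semantics; c-trie++ itself uses a compact,
word-packed trie representation to reach the stated space and time bounds, and
all names below are illustrative.

```cpp
// Minimal sketch of the keyword-dictionary interface: update (insert),
// locate (exact lookup), and prefix search on a pointer-based trie.
#include <iostream>
#include <map>
#include <memory>
#include <string>

struct TrieNode {
    std::map<char, std::unique_ptr<TrieNode>> child;
    bool is_key = false;  // true if a dictionary string ends here
};

class KeywordDictionary {
    TrieNode root_;
public:
    void insert(const std::string& s) {            // update: add a string
        TrieNode* n = &root_;
        for (char c : s) {
            auto& slot = n->child[c];
            if (!slot) slot = std::make_unique<TrieNode>();
            n = slot.get();
        }
        n->is_key = true;
    }
    bool locate(const std::string& s) const {      // exact-match lookup
        const TrieNode* n = walk(s);
        return n && n->is_key;
    }
    bool prefix_search(const std::string& p) const {  // any key with prefix p?
        return walk(p) != nullptr;
    }
private:
    const TrieNode* walk(const std::string& s) const {
        const TrieNode* n = &root_;
        for (char c : s) {
            auto it = n->child.find(c);
            if (it == n->child.end()) return nullptr;
            n = it->second.get();
        }
        return n;
    }
};

int main() {
    KeywordDictionary d;
    d.insert("trie");
    d.insert("tree");
    std::cout << d.locate("trie") << d.prefix_search("tr")
              << d.prefix_search("x") << '\n';  // prints 110
}
```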
An open and parallel multiresolution framework using block-based adaptive grids
A numerical approach for solving evolutionary partial differential equations
in two and three space dimensions on block-based adaptive grids is presented.
The numerical discretization is based on high-order, central finite-differences
and explicit time integration. Grid refinement and coarsening are triggered by
multiresolution analysis, i.e. thresholding of wavelet coefficients, which
allows controlling the precision of the adaptive approximation of the solution
with respect to uniform grid computations. The implementation of the scheme is
fully parallel using MPI with a hybrid data structure. Load balancing relies on
space-filling-curve techniques. Validation tests for 2D advection equations
assess the precision and performance of the developed code.
Computations of the compressible Navier-Stokes equations for a temporally
developing 2D mixing layer illustrate the properties of the code for nonlinear
multi-scale problems. The code is open source.
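As a rough illustration of the refinement criterion, the sketch below computes
1D detail coefficients as the mismatch between a sample and its linear
prediction from the next-coarser level, then thresholds them to decide whether
a block should be refined, kept, or coarsened. Block storage, MPI parallelism,
and space-filling-curve load balancing are omitted, and the thresholds
eps_refine/eps_coarsen are illustrative assumptions, not the framework's
actual parameters.

```cpp
// 1D sketch of multiresolution thresholding: detail (wavelet) coefficients
// measure how poorly the coarse grid predicts the fine samples; their
// magnitude drives grid refinement and coarsening.
#include <cmath>
#include <cstdio>
#include <vector>

enum class Action { Coarsen, Keep, Refine };

// Largest detail coefficient of a block: odd samples compared against
// linear interpolation from the even (coarse-level) samples.
double max_detail(const std::vector<double>& u) {
    double d = 0.0;
    for (size_t i = 1; i + 1 < u.size(); i += 2)
        d = std::max(d, std::fabs(u[i] - 0.5 * (u[i - 1] + u[i + 1])));
    return d;
}

Action classify(const std::vector<double>& u,
                double eps_refine, double eps_coarsen) {
    double d = max_detail(u);
    if (d > eps_refine)  return Action::Refine;   // under-resolved block
    if (d < eps_coarsen) return Action::Coarsen;  // over-resolved block
    return Action::Keep;
}

int main() {
    std::vector<double> smooth(17), steep(17);
    for (int i = 0; i <= 16; ++i) {
        double x = i / 16.0;
        smooth[i] = x;                          // linear: zero detail
        steep[i]  = std::tanh(40 * (x - 0.5));  // sharp layer: large detail
    }
    std::printf("%d %d\n", (int)classify(smooth, 1e-3, 1e-4),
                           (int)classify(steep, 1e-3, 1e-4));
}
```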
EIE: Efficient Inference Engine on Compressed Deep Neural Network
State-of-the-art deep neural networks (DNNs) have hundreds of millions of
connections and are both computationally and memory intensive, making them
difficult to deploy on embedded systems with limited hardware resources and
power budgets. While custom hardware can accelerate the computation, fetching weights
from DRAM is two orders of magnitude more expensive than ALU operations, and
dominates the required power.
Previously proposed 'Deep Compression' makes it possible to fit large DNNs
(AlexNet and VGGNet) fully in on-chip SRAM. This compression is achieved by
pruning the redundant connections and having multiple connections share the
same weight. We propose an energy efficient inference engine (EIE) that
performs inference on this compressed network model and accelerates the
resulting sparse matrix-vector multiplication with weight sharing. Going from
DRAM to SRAM gives EIE a 120x energy saving; exploiting sparsity saves 10x;
weight sharing gives 8x; and skipping zero activations from ReLU saves another 3x.
Evaluated on nine DNN benchmarks, EIE is 189x and 13x faster when compared to
CPU and GPU implementations of the same DNN without compression. EIE has a
processing power of 102 GOPS working directly on a compressed network,
corresponding to 3 TOPS on an uncompressed network, and processes FC layers of
AlexNet at 1.88x10^4 frames/sec with a power dissipation of only 600 mW. It is
24,000x and 3,400x more energy efficient than a CPU and GPU respectively.
Compared with DaDianNao, EIE has 2.9x, 19x, and 3x better throughput, energy
efficiency, and area efficiency.
Comment: External links: TheNextPlatform: http://goo.gl/f7qX0L ; O'Reilly:
https://goo.gl/Id1HNT ; Hacker News: https://goo.gl/KM72SV ; Embedded-vision:
http://goo.gl/joQNg8 ; Talk at NVIDIA GTC'16: http://goo.gl/6wJYvn ; Talk at
Embedded Vision Summit: https://goo.gl/7abFNe ; Talk at Stanford University:
https://goo.gl/6lwuer. Published as a conference paper at ISCA 2016.
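The core computation EIE accelerates can be sketched in scalar software as a
sparse matrix-vector product over a CSC-like compressed layout: each stored
entry is a small codebook index (weight sharing), and columns whose input
activation is zero are skipped entirely (ReLU sparsity). The array names and
layout below are illustrative assumptions, not EIE's exact on-chip format.

```cpp
// Sparse matrix-vector product with weight sharing and activation skipping:
// nonzeros store a 4-bit index into a 16-entry codebook of shared weights,
// and zero input activations cause their whole column to be skipped.
#include <cstdint>
#include <cstdio>
#include <vector>

struct CompressedMatrix {
    int rows;
    std::vector<int>     col_ptr;   // CSC: start of each column's entries
    std::vector<int>     row_idx;   // row of each nonzero
    std::vector<uint8_t> w_idx;     // codebook index per nonzero
    std::vector<float>   codebook;  // shared weight values
};

std::vector<float> spmv(const CompressedMatrix& m,
                        const std::vector<float>& x) {
    std::vector<float> y(m.rows, 0.0f);
    for (size_t j = 0; j < x.size(); ++j) {
        if (x[j] == 0.0f) continue;        // skip zero activations
        for (int k = m.col_ptr[j]; k < m.col_ptr[j + 1]; ++k)
            y[m.row_idx[k]] += m.codebook[m.w_idx[k]] * x[j];
    }
    return y;
}

int main() {
    // 2x3 matrix [[w0, 0, w1], [0, w1, 0]] with codebook {0.5, -1.0}.
    CompressedMatrix m{2, {0, 1, 2, 3}, {0, 1, 0}, {0, 1, 1},
                       {0.5f, -1.0f}};
    std::vector<float> y = spmv(m, {1.0f, 0.0f, 2.0f});
    std::printf("%g %g\n", y[0], y[1]);    // -1.5 0 (column 1 skipped)
}
```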