Method for classifying images in databases through deep convolutional networks
Since 2006, deep structured learning, more commonly called deep learning or hierarchical learning, has become a new area of research in machine learning. In recent years, techniques developed from deep learning research have influenced a wide range of information-processing studies, and image-processing studies in particular, within both traditional and new fields, including key aspects of machine learning and artificial intelligence. This paper proposes an alternative scheme for training-data management in CNNs, consisting of selective-adaptive data sampling. By means of experiments with the CIFAR10 database for image classification
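The abstract does not spell out the selective-adaptive sampling rule, but one common way such schemes work is to draw training examples with probability proportional to their recent loss, so harder examples are revisited more often. The sketch below illustrates that idea only; the function name and the loss-proportional rule are assumptions, not the paper's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

def select_batch(losses, batch_size):
    """Hypothetical selective sampling: pick example indices with
    probability proportional to each example's most recent loss.
    (Illustrative only -- not the paper's documented scheme.)"""
    probs = losses / losses.sum()
    return rng.choice(len(losses), size=batch_size, replace=False, p=probs)

# Per-example losses from the previous epoch; indices 2 and 4 are "hard".
losses = np.array([0.1, 0.1, 2.0, 0.1, 3.0, 0.1])
batch = select_batch(losses, batch_size=2)
# Hard examples dominate the selection on average across epochs.
```

In practice such a sampler would be re-weighted every epoch as the per-example losses change, which is what makes the sampling "adaptive".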
TensorDash: Exploiting Sparsity to Accelerate Deep Neural Network Training and Inference
TensorDash is a hardware-level technique for enabling data-parallel MAC units
to take advantage of sparsity in their input operand streams. When used to
compose a hardware accelerator for deep learning, TensorDash can speedup the
training process while also increasing energy efficiency. TensorDash combines a
low-cost, sparse input operand interconnect comprising an 8-input multiplexer
per multiplier input, with an area-efficient hardware scheduler. While the
interconnect allows a very limited set of movements per operand, the scheduler
can effectively extract sparsity when it is present in the activations, weights
or gradients of neural networks. Over a wide set of models covering various
applications, TensorDash accelerates the training process while also being
more energy efficient, both at the compute units and when on-chip and
off-chip memory accesses are taken into account. While
TensorDash works with any datatype, we demonstrate it with both
single-precision floating-point units and bfloat16.
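The benefit of extracting sparsity can be illustrated with a simple cycle-count model: a dense MAC array spends a cycle on every operand, zero or not, while a sparsity-aware scheduler packs only nonzero operands into the multiplier lanes. The model below is an idealized sketch (any nonzero may be packed into any lane), whereas TensorDash's 8-input multiplexer restricts each lane to a small movement window; the function names are assumptions for illustration.

```python
import math

def dense_cycles(values, lanes):
    """Cycles for a dense MAC array: every operand, including zeros,
    occupies a multiplier lane for one cycle."""
    return math.ceil(len(values) / lanes)

def sparse_cycles(values, lanes):
    """Idealized sparsity-skipping upper bound: only nonzero operands
    are scheduled onto lanes. TensorDash approaches this bound subject
    to its limited per-lane operand movement."""
    nonzero = sum(1 for v in values if v != 0)
    return max(1, math.ceil(nonzero / lanes))

# An operand stream (e.g. ReLU activations) with 5 nonzeros out of 16.
stream = [0, 3, 0, 0, 5, 0, 1, 0, 0, 0, 2, 0, 0, 0, 0, 4]
speedup = dense_cycles(stream, 4) / sparse_cycles(stream, 4)
# 4 dense cycles shrink to 2 sparse cycles: a 2x speedup on this stream.
```

Because the same skipping logic applies to activations, weights, or gradients, the scheduler can exploit whichever operand stream happens to be sparse during training.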