
    Sparse deep neural networks for embedded intelligence

    Deep learning is becoming more widespread due to its power in solving complex classification problems. However, deep learning models often have large memory and energy requirements, which may prevent them from being deployed effectively on embedded platforms, limiting their applications. This work addresses the memory problem by proposing a sparsity-inducing regularization approach that compresses the memory footprint of the models. It is shown that the resulting regularized problem can be solved effectively with an enhanced stochastic variance reduced gradient optimization approach. Experimental evaluation shows that the approach can reduce the memory requirements of both the convolutional and the fully connected layers by up to 300× without affecting overall test accuracy.
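    The combination the abstract describes, a sparsity-inducing regularizer optimized with a variance-reduced stochastic gradient method, can be illustrated with a minimal proximal SVRG sketch on an L1-regularized least-squares problem. This is not the paper's method (no full text is available); the function name `prox_svrg`, the least-squares objective, and all hyperparameters below are illustrative assumptions. The key ingredients it does show are the SVRG control variate (stochastic gradient corrected by a full-gradient snapshot) and the soft-thresholding proximal step that drives weights to exact zeros.

    ```python
    import numpy as np

    def soft_threshold(w, t):
        # Proximal operator of t * ||w||_1: shrinks each weight toward zero,
        # producing exact zeros (the source of the memory compression).
        return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

    def prox_svrg(X, y, lam=0.05, lr=0.01, epochs=30, seed=0):
        """Illustrative proximal SVRG for min_w (1/2n)||Xw - y||^2 + lam*||w||_1."""
        rng = np.random.default_rng(seed)
        n, d = X.shape
        w = np.zeros(d)
        for _ in range(epochs):
            # Snapshot: full gradient at the current reference point.
            w_ref = w.copy()
            full_grad = X.T @ (X @ w_ref - y) / n
            for _ in range(n):
                i = rng.integers(n)
                # Variance-reduced gradient: stochastic gradient at w,
                # corrected by the snapshot's stochastic and full gradients.
                g = (X[i] * (X[i] @ w - y[i])
                     - X[i] * (X[i] @ w_ref - y[i])
                     + full_grad)
                # Gradient step followed by the L1 proximal step.
                w = soft_threshold(w - lr * g, lr * lam)
        return w

    # Toy usage (hypothetical data): recover a 3-sparse weight vector.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 20))
    w_true = np.zeros(20)
    w_true[:3] = [2.0, -3.0, 1.5]
    y = X @ w_true
    w = prox_svrg(X, y)
    ```

    The reduced-variance gradients let the method use a constant step size, and the proximal step keeps irrelevant weights pinned at zero rather than oscillating around it, which is what makes the learned model compressible.
    
    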