Accelerating Training of Deep Neural Networks via Sparse Edge Processing
We propose a reconfigurable hardware architecture for deep neural networks
(DNNs) capable of online training and inference, which uses algorithmically
pre-determined, structured sparsity to significantly lower memory and
computational requirements. This novel architecture introduces the notion of
edge-processing to provide flexibility and combines junction pipelining and
operational parallelization to speed up training. The overall effect is to
reduce network complexity by factors up to 30x and training time by up to 35x
relative to GPUs, while maintaining high fidelity of inference results. This
has the potential to enable extensive parameter searches and development of the
largely unexplored theoretical foundation of DNNs. The architecture
automatically adapts itself to different network sizes given available hardware
resources. As proof of concept, we show results obtained for different bit
widths.
Comment: Presented at the 26th International Conference on Artificial Neural Networks (ICANN) 2017 in Alghero, Italy.
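The pre-determined, structured sparsity idea can be sketched in a few lines of Python. This is an illustrative sketch only, not the paper's hardware mapping: the strided connectivity pattern, the `fan_in` parameter, and the function names are assumptions chosen to show how a fixed connection pattern, known before training, shrinks both weight storage and multiply count.

```python
import numpy as np

def fixed_sparse_mask(n_in: int, n_out: int, fan_in: int) -> np.ndarray:
    """Binary mask with exactly `fan_in` pre-determined connections per output neuron."""
    mask = np.zeros((n_out, n_in), dtype=np.float32)
    stride = n_in // fan_in
    for j in range(n_out):
        # Deterministic, structured pattern: strided input indices per neuron,
        # fixed algorithmically before training ever starts.
        mask[j, (j + np.arange(fan_in) * stride) % n_in] = 1.0
    return mask

rng = np.random.default_rng(0)
n_in, n_out, fan_in = 512, 128, 16            # fan_in << n_in gives the sparsity
mask = fixed_sparse_mask(n_in, n_out, fan_in)

# Only the masked edges ever carry weights, so storage and the number of
# multiplies per layer shrink by roughly n_in / fan_in (32x here).
weights = rng.standard_normal((n_out, n_in)).astype(np.float32) * mask
x = rng.standard_normal(n_in).astype(np.float32)
y = weights @ x

print(f"edge density: {mask.mean():.4f}")     # 16/512 = 0.0312
```

Because the pattern is fixed in advance, a hardware implementation only needs to store and process the surviving edges, which is what enables the complexity reductions the abstract reports.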
Concatenated Classic and Neural (CCN) Codes: ConcatenatedAE
Small neural networks (NNs) used for error correction were shown to improve
on classic channel codes and to address channel model changes. We extend the
code dimension of any such structure by using the same NN under one-hot
encoding multiple times and then serially concatenating it with an outer
classic code.
We design NNs with the same network parameters, where each Reed-Solomon
codeword symbol is an input to a different NN. We illustrate significant
improvements in block error probability over an additive Gaussian noise
channel compared to the small neural code, as well as robustness to channel
model changes.
Comment: 6 pages, IEEE WCNC 202
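The data flow of this concatenated construction can be sketched in Python. This is a minimal sketch under stated assumptions: the `reedsolo` package stands in for the Reed-Solomon outer code, an untrained random linear map stands in for the small trained NN encoder, and `N_CHAN`, `nn_encode`, and the parity count are illustrative choices, not the paper's parameters.

```python
import numpy as np
from reedsolo import RSCodec   # third-party stand-in for the RS outer code

rng = np.random.default_rng(0)
Q = 256                        # symbol alphabet: one RS byte = 2^8 values
N_CHAN = 16                    # channel uses produced by the NN per symbol

# Stand-in for a trained NN encoder f: one-hot(Q) -> R^N_CHAN.
# The key point is that the SAME parameters W are reused for every symbol.
W = rng.standard_normal((Q, N_CHAN)).astype(np.float32)

def nn_encode(symbol: int) -> np.ndarray:
    """Apply the shared neural encoder to one one-hot-encoded RS symbol."""
    one_hot = np.zeros(Q, dtype=np.float32)
    one_hot[symbol] = 1.0
    return one_hot @ W

rs = RSCodec(10)                             # 10 parity symbols (illustrative)
outer_codeword = rs.encode(b"hello world")   # message bytes + RS parity bytes

# Serial concatenation: each outer codeword symbol feeds the same small NN,
# and the per-symbol outputs are concatenated into one long channel input.
channel_input = np.concatenate([nn_encode(s) for s in outer_codeword])
print(channel_input.shape)                   # (len(outer_codeword) * N_CHAN,)
```

Reusing one small NN across all outer symbols is what extends the code dimension without growing the neural code itself, while the RS outer code supplies the algebraic structure for correcting symbol errors.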
Single-Frequency Network Terrestrial Broadcasting with 5GNR Numerology
The abstract is in the attachment.