The problem of low-complexity, close-to-optimal channel decoding of linear
codes with short to moderate block lengths is considered. It is shown that deep
learning methods can be used to improve a standard belief propagation decoder,
despite the large example space. Similar improvements are obtained for the
min-sum algorithm. It is also shown that tying the parameters of the decoders
across iterations, so as to form a recurrent neural network architecture,
achieves comparable results while requiring significantly fewer parameters. We
also introduce a recurrent neural decoder
architecture based on the method of successive relaxation. Improvements over
standard belief propagation are also observed on sparser Tanner graph
representations of the codes. Furthermore, we demonstrate that the neural
belief propagation decoder can be used to improve the performance, or
alternatively reduce the computational complexity, of a close-to-optimal
decoder of short BCH codes.
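
For concreteness, a minimal sketch of the two update rules behind these decoders (the parameterization shown here is illustrative and not fixed by the abstract): in the weighted belief propagation decoder, the messages entering each variable node are scaled by trainable weights, and in the relaxation-based recurrent decoder, the new message is damped by a relaxation factor:

\[
x_{t,(v,c)} = \tanh\!\Big(\tfrac{1}{2}\Big( w_{t,v}\,\ell_v + \sum_{c' \in N(v)\setminus\{c\}} w_{t,(c',v)}\, x_{t-1,(c',v)} \Big)\Big),
\qquad
m_t = (1-\gamma)\, m_{t-1} + \gamma\, \tilde{m}_t,
\]

where \(\ell_v\) is the channel log-likelihood ratio at variable node \(v\), \(N(v)\setminus\{c\}\) is the set of check nodes adjacent to \(v\) other than \(c\), the weights \(w\) are the learned parameters (tied across iterations \(t\) in the recurrent variant), and \(\gamma \in (0,1]\) is the relaxation factor applied to the candidate message \(\tilde{m}_t\).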