In classification problems with large output spaces (up to millions of
labels), the last layer can require an enormous amount of memory. Using sparse
connectivity would drastically reduce the memory requirements, but as we show
below, it can substantially diminish the model's predictive performance.
Fortunately, we found that this can be mitigated by introducing a penultimate
layer of intermediate size. We further demonstrate that one can constrain the
connectivity of the sparse layer to be uniform, in the sense that each output
neuron has exactly the same number of incoming connections. This allows for
efficient implementations of sparse matrix multiplication and connection
redistribution on GPU hardware. Via a custom CUDA implementation, we show that
the proposed approach can scale to datasets with 670,000 labels on a single
commodity GPU with only 4 GB of memory.
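
As an illustrative sketch (not the kernel described above), a uniform fan-in layer can be stored as a dense weight matrix of shape (labels, fan_in) together with an index matrix of the same shape, so the forward pass becomes a gather followed by a row-wise dot product. The kernel name, memory layout, and toy sizes below are assumptions made for illustration:

// Minimal sketch of a uniform fan-in sparse output layer (forward pass).
// Every output neuron reads exactly fan_in activations from the
// penultimate layer, so weights and indices form dense rectangular arrays.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void uniform_sparse_forward(
    const float* __restrict__ h,    // (batch, hidden)  penultimate activations
    const float* __restrict__ w,    // (labels, fan_in) nonzero weights
    const int*   __restrict__ idx,  // (labels, fan_in) source unit per weight
    float* __restrict__ out,        // (batch, labels)  output logits
    int batch, int hidden, int labels, int fan_in)
{
    int l = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per label
    int b = blockIdx.y;                             // one grid row per sample
    if (l >= labels || b >= batch) return;
    float acc = 0.0f;
    for (int k = 0; k < fan_in; ++k)
        acc += w[l * fan_in + k] * h[b * hidden + idx[l * fan_in + k]];
    out[b * labels + l] = acc;
}

int main() {
    // Toy sizes for illustration; the setting above scales labels to 670,000.
    const int batch = 2, hidden = 8, labels = 4, fan_in = 3;
    float h[batch * hidden], w[labels * fan_in];
    int idx[labels * fan_in];
    for (int i = 0; i < batch * hidden; ++i) h[i] = 0.1f * i;
    for (int i = 0; i < labels * fan_in; ++i) { w[i] = 1.0f; idx[i] = i % hidden; }

    float *dh, *dw, *dout; int *didx;
    cudaMalloc(&dh, sizeof(h)); cudaMalloc(&dw, sizeof(w));
    cudaMalloc(&didx, sizeof(idx)); cudaMalloc(&dout, batch * labels * sizeof(float));
    cudaMemcpy(dh, h, sizeof(h), cudaMemcpyHostToDevice);
    cudaMemcpy(dw, w, sizeof(w), cudaMemcpyHostToDevice);
    cudaMemcpy(didx, idx, sizeof(idx), cudaMemcpyHostToDevice);

    dim3 grid((labels + 255) / 256, batch);
    uniform_sparse_forward<<<grid, 256>>>(dh, dw, didx, dout, batch, hidden, labels, fan_in);

    float out[batch * labels];
    cudaMemcpy(out, dout, sizeof(out), cudaMemcpyDeviceToHost);
    for (int b = 0; b < batch; ++b)
        for (int l = 0; l < labels; ++l)
            printf("out[%d][%d] = %f\n", b, l, out[b * labels + l]);
    cudaFree(dh); cudaFree(dw); cudaFree(didx); cudaFree(dout);
    return 0;
}

Because every row has the same number of nonzeros, the inner loop bound is identical across threads and the arrays stay dense and rectangular, which is what makes the access pattern regular enough for efficient GPU execution and for redistributing connections in place.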