High-dimensional data has become ubiquitous across the sciences but poses
computational and statistical challenges. A common approach to
these challenges is sparsity. In this paper, we introduce a new concept of
sparsity, called cardinality sparsity. Broadly speaking, we call a tensor
sparse if it contains only a small number of unique values. We show that
cardinality sparsity can improve deep learning and tensor regression both
statistically and computationally. Along the way, we generalize recent
statistical theories in these fields.
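As an illustration only (in our own notation, which need not match the paper's definitions), the notion of cardinality sparsity sketched above can be formalized as a bound on the number of distinct entry values of a tensor:
\[
  T \in \mathbb{R}^{d_1 \times \cdots \times d_k}
  \quad\text{is $s$-cardinality sparse if}\quad
  \bigl|\{\, T_{i_1 \dots i_k} : 1 \le i_j \le d_j \,\}\bigr| \le s .
\]
For instance, under this reading a large matrix whose entries all lie in $\{-1, 0, 1\}$ has cardinality at most $3$, even if few of its entries are exactly zero.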