Sparse representation has attracted great attention because it can greatly
reduce storage requirements and uncover representative features of data in a
low-dimensional space. Consequently, it is widely applied in engineering
domains including feature extraction, compressed sensing, signal denoising,
image clustering, and dictionary learning, to name a few. In this paper,
we propose a spiking sampling network. Built from spiking neurons, the
network dynamically decides, according to the input, which pixels should be
retained and which should be masked. Our experiments
demonstrate that this approach yields a better sparse representation of the
original image and facilitates image reconstruction compared with random
sampling. We therefore apply this approach to compressing the massive data
streams produced by dynamic vision sensors, which greatly reduces the storage
requirements for event data.
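To illustrate the core idea of input-dependent sampling with spiking neurons, the following is a minimal sketch: one leaky integrate-and-fire (LIF) neuron per pixel integrates the pixel's intensity over a few time steps and emits a spike once its membrane potential crosses a threshold; spiking pixels are retained and silent ones are masked. The function name, the LIF formulation, and all parameters here are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

def lif_sampling_mask(image, steps=8, tau=0.9, threshold=1.0):
    """Return a binary mask over pixels: 1 = retain, 0 = mask.

    Illustrative LIF dynamics (assumed, not from the paper):
    each pixel drives one neuron; the membrane potential leaks
    by factor `tau` and accumulates the pixel intensity each step.
    """
    v = np.zeros_like(image, dtype=float)      # membrane potentials
    spiked = np.zeros_like(image, dtype=bool)  # which neurons have fired
    for _ in range(steps):
        v = tau * v + image                    # leaky integration of input
        fire = v >= threshold                  # threshold crossing -> spike
        spiked |= fire
        v[fire] = 0.0                          # reset fired neurons
    return spiked.astype(np.uint8)

rng = np.random.default_rng(0)
img = rng.random((4, 4))
mask = lif_sampling_mask(img)
sparse_img = img * mask                        # retained pixels only
```

Because the mask depends on the input through the neuron dynamics, salient (high-intensity) pixels fire early and are kept, while low-activity pixels never reach threshold and are dropped, in contrast to a fixed random sampling pattern.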