Saliency computation models aim to imitate the attention mechanism in the
human visual system. The application of deep neural networks for saliency
prediction has led to dramatic improvements over the last few years. However, deep models have a large number of parameters, which makes them less suitable for real-time applications. Here we propose a compact and fast model for real-time saliency prediction. Our proposed model consists of a modified U-Net
architecture, a novel fully connected layer, and central difference
convolutional layers. The modified U-Net architecture promotes compactness and
efficiency. The novel fully connected layer implicitly captures location-dependent information. Applying central difference convolutional layers at different scales enables capturing more robust, biologically motivated features (see the sketch below). We compare our model with state-of-the-art saliency models using traditional saliency metrics as well as our newly devised evaluation scheme. Experimental results on four challenging saliency benchmark datasets
demonstrate the effectiveness of our approach in striking a balance between
accuracy and speed. Our model runs in real time, which makes it appealing for edge devices and video processing.
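
For concreteness, the minimal PyTorch sketch below illustrates one common formulation of a central difference convolution: a vanilla convolution from which a theta-weighted response of the centre pixel, convolved with the summed kernel weights, is subtracted. The class name, the default theta = 0.7, and the layer hyperparameters are illustrative assumptions rather than the exact configuration used in the proposed model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class CentralDifferenceConv2d(nn.Module):
    """Illustrative central difference convolution (CDC) layer.

    Computes  y = conv(x) - theta * conv_1x1(x)  where the 1x1 kernel is the
    spatial sum of each convolution kernel, i.e. a vanilla convolution minus a
    theta-weighted contribution of the centre pixel.
    """

    def __init__(self, in_channels, out_channels, kernel_size=3,
                 stride=1, padding=1, theta=0.7):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size,
                              stride=stride, padding=padding, bias=False)
        self.theta = theta

    def forward(self, x):
        out = self.conv(x)  # vanilla convolution term
        if self.theta == 0:
            return out
        # Sum each kernel over its spatial extent to obtain an equivalent
        # 1x1 kernel that captures the centre pixel's contribution.
        kernel_sum = self.conv.weight.sum(dim=(2, 3), keepdim=True)
        out_center = F.conv2d(x, kernel_sum, stride=self.conv.stride, padding=0)
        return out - self.theta * out_center


if __name__ == "__main__":
    layer = CentralDifferenceConv2d(3, 16)               # e.g. an RGB input
    features = layer(torch.randn(1, 3, 224, 224))
    print(features.shape)                                # torch.Size([1, 16, 224, 224])
```

Setting theta = 0 recovers an ordinary convolution, while larger values emphasise gradient-like, texture-sensitive responses.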