The work described in this paper is inspired by SpikeNET, a system
developed to test the feasibility of using rank-order codes in modelling large-scale
networks of asynchronously spiking neurons. The rank-order code theory
proposed by Thorpe concerns the encoding of information by a population of
spiking neurons in the primate visual system. It proposes using the order
of firing across a network of asynchronously firing spiking neurons as a neural
code for information transmission. In this paper we aim to measure the perceptual
similarity between the image input to a model retina, based on that originally
designed and developed by VanRullen and Thorpe, and an image reconstructed
from the rank-order encoding of the input image. We use an objective metric
originally proposed by Petrovic to estimate perceptual edge preservation in image
fusion which, after minor modifications, is well suited to our purpose. The
results show that typically 75% of the edge information of the input stimulus is
retained in the reconstructed image, and we demonstrate how the available information
increases with successive spikes in the rank-order code.