We presented the concept of a software retina, capable
of significant visual data reduction in combination with
scale and rotation invariance, for applications in egocentric
and robot vision at the first EPIC workshop in Amsterdam
[9]. Our method is based on the mammalian retino-cortical transform: a mapping between a pseudo-randomly tessellated retina model (used to sample an input image) and a CNN; a minimal sketch of the sampling stage is given at the end of this section. The aim of this first pilot study was to demonstrate a functional retina-integrated CNN implementation, which
produced the following results: a network using the full
retino-cortical transform yielded an F1 score of 0.80 on a
test set during a 4-way classification task, while an identical
network not using the proposed method yielded an F1
score of 0.86 on the same task. On a 40K-node retina, the method reduced the visual data by a factor of 7, the input data to the CNN by 40%, and the number of CNN training epochs by
36%. These results demonstrate the viability of our method
and hint at the potential of exploiting functional traits of
natural vision systems in CNNs. In addition to the above study, we present recent developments: a port of the retina to the Apple iPhone, an implementation in CUDA C for NVIDIA GPU platforms, and extensions of the retina model we have adopted.
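
To make the sampling stage of the retino-cortical transform concrete, the following is a minimal Python sketch. It assumes an illustrative log-polar-style pseudo-random node layout and simple point sampling; the function names, the density scheme, and the use of single-pixel reads are assumptions made for illustration, not the tessellation or receptive-field model used in our implementation.

    import numpy as np

    def make_retina_nodes(n_nodes=40000, fovea_radius=0.05, seed=0):
        # Pseudo-random node positions in the unit disc, with density
        # falling off with eccentricity (dense fovea, sparse periphery).
        # This log-polar-style layout is illustrative only.
        rng = np.random.default_rng(seed)
        u = rng.uniform(0.0, np.log(1.0 / fovea_radius), n_nodes)
        r = fovea_radius * np.exp(u)             # radii in [fovea_radius, 1]
        theta = rng.uniform(0.0, 2.0 * np.pi, n_nodes)
        return np.stack([r * np.cos(theta), r * np.sin(theta)], axis=1)

    def sample_image(image, nodes, centre):
        # Back-project the retina nodes onto the image around a fixation
        # point and read one pixel per node, yielding a vector of samples
        # (one per node) that is far smaller than the full image.
        h, w = image.shape[:2]
        scale = min(h, w) / 2.0
        xs = np.clip((centre[0] + nodes[:, 0] * scale).astype(int), 0, w - 1)
        ys = np.clip((centre[1] + nodes[:, 1] * scale).astype(int), 0, h - 1)
        return image[ys, xs]

In a full pipeline the sampled vector would then be mapped to a cortical image and fed to the CNN; that mapping, and the smoothed receptive-field sampling used in practice, are omitted here for brevity.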