Models of visual perception are based on image representations in
cortical area V1 and higher areas, which contain many cell layers for feature
extraction. Basic simple, complex and end-stopped cells provide input for line,
edge and keypoint detection. In this paper we present an improved method for
multi-scale line/edge detection based on simple and complex cells. We illustrate
the line/edge representation for object reconstruction, and we present models for
multi-scale face (object) segregation and recognition that can be embedded into
feedforward ventral and dorsal data streams (the “what” and “where” subsystems),
with feedback streams from higher areas for obtaining translation, rotation
and scale invariance.
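The simple- and complex-cell stage underlying the line/edge detection can be sketched with the standard quadrature-Gabor energy model: two simple cells are modelled as an even and an odd Gabor filter at the same orientation and scale, and a complex cell as the modulus of their responses. This is a minimal illustration of that general model, not the paper's exact method; the function names, kernel size and filter parameters are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import convolve

def gabor_pair(size=15, wavelength=6.0, sigma=3.0, theta=0.0):
    """Even (cosine) and odd (sine) Gabor kernels: a quadrature pair
    modelling two simple cells 90 degrees out of phase."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate coordinates so the pair is tuned to orientation theta.
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + yr**2) / (2.0 * sigma**2))
    even = envelope * np.cos(2.0 * np.pi * xr / wavelength)
    odd = envelope * np.sin(2.0 * np.pi * xr / wavelength)
    return even - even.mean(), odd  # zero-mean even kernel (no DC response)

def complex_cell_energy(image, **kwargs):
    """Complex-cell model: modulus of the quadrature simple-cell responses.
    Local maxima of this energy across position mark line/edge events."""
    even, odd = gabor_pair(**kwargs)
    re = convolve(image, even, mode="nearest")
    im = convolve(image, odd, mode="nearest")
    return np.hypot(re, im)

# Synthetic vertical step edge between columns 15 and 16; the complex-cell
# energy along any row should peak at the edge position.
img = np.zeros((32, 32))
img[:, 16:] = 1.0
energy = complex_cell_energy(img, theta=0.0)
print(int(np.argmax(energy[16])))
```

Repeating this at several filter wavelengths yields the multi-scale line/edge representation discussed above; classifying the even/odd response phase at each energy maximum distinguishes lines from edges and their polarity.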