Physiologically-Based Vision Modeling Applications and Gradient Descent-Based Parameter Adaptation of Pulse Coupled Neural Networks

Abstract

In this research, pulse coupled neural networks (PCNNs) are analyzed and evaluated for use in primate vision modeling. An adaptive PCNN is developed that automatically sets near-optimal parameter values to achieve a desired output. For vision modeling, a physiologically motivated vision model is developed from current theoretical and experimental biological data. The biological vision processing principles used in this model, such as spatial frequency filtering, competitive feature selection, multiple processing paths, and state-dependent modulation, are analyzed and implemented to create a PCNN-based feature extraction network. This network extracts luminance, orientation, pitch, wavelength, and motion, and can be cascaded to extract texture, acceleration, and other higher-order visual features. Theorized and experimentally confirmed cortical information-linking schemes, such as state-dependent modulation and temporal synchronization, are used to develop a PCNN-based visual information fusion network. This network fuses the results of several object detection systems to improve object detection accuracy. On actual mammograms and FLIR images, the fusion network achieves accuracy superior to any of the individual object detection systems it fuses. Finally, this research develops the first fully adaptive PCNN: given only an input and a desired output, the adaptive PCNN finds all parameter values necessary to approximate that desired output.
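
To make the parameter-adaptation idea concrete, the sketch below assumes a standard Eckhorn-style PCNN (feeding, linking, internal activity, and dynamic threshold) whose hard firing rule is replaced by a smoothed sigmoid, so that the decay, gain, and linking parameters can be adjusted by simple finite-difference gradient descent toward a desired output image. This is an illustrative approximation under those assumptions, not the dissertation's actual network or gradient derivation; all parameter names and values are placeholders.

import numpy as np
from scipy.ndimage import convolve

def pcnn_step(S, F, L, Y, theta, p, kernel):
    """One iteration of an Eckhorn-style PCNN layer over stimulus image S."""
    F = np.exp(-p["aF"]) * F + p["VF"] * convolve(Y, kernel) + S   # feeding field
    L = np.exp(-p["aL"]) * L + p["VL"] * convolve(Y, kernel)       # linking field
    U = F * (1.0 + p["beta"] * L)                                  # internal activity
    # Smoothed (sigmoid) firing rule in place of the usual hard step,
    # so the output varies smoothly with the parameters.
    Y = 1.0 / (1.0 + np.exp(-np.clip((U - theta) / 0.05, -60.0, 60.0)))
    theta = np.exp(-p["aT"]) * theta + p["VT"] * Y                 # dynamic threshold
    return F, L, Y, theta

def run_pcnn(S, p, n_iter=10):
    """Run the PCNN for n_iter pulses and return the final pulse image."""
    kernel = np.array([[0.5, 1.0, 0.5],
                       [1.0, 0.0, 1.0],
                       [0.5, 1.0, 0.5]])
    F = np.zeros_like(S); L = np.zeros_like(S)
    Y = np.zeros_like(S); theta = np.ones_like(S)
    for _ in range(n_iter):
        F, L, Y, theta = pcnn_step(S, F, L, Y, theta, p, kernel)
    return Y

def fit_parameters(S, target, p, lr=0.05, epochs=100, eps=1e-3):
    """Adapt every PCNN parameter by gradient descent on the squared error
    between the network's final pulse image and the desired output,
    using finite differences as a stand-in for analytic gradients."""
    for _ in range(epochs):
        base = np.mean((run_pcnn(S, p) - target) ** 2)
        grads = {}
        for k in p:
            q = dict(p); q[k] += eps
            grads[k] = (np.mean((run_pcnn(S, q) - target) ** 2) - base) / eps
        for k in p:
            p[k] -= lr * grads[k]
    return p

# Example usage with a synthetic image and a desired binary segmentation:
S = np.random.rand(32, 32)
target = (S > 0.5).astype(float)
p0 = {"aF": 0.3, "aL": 1.0, "aT": 0.2, "beta": 0.2,
      "VF": 0.01, "VL": 1.0, "VT": 20.0}
fitted = fit_parameters(S, target, p0, epochs=50)

The sigmoid is the key assumption here: it replaces the hard step firing rule so that small parameter changes produce smooth changes in the output, which is what makes a gradient-based parameter search well defined.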
