424 research outputs found

    Novel image enhancement technique using shunting inhibitory cellular neural networks

    This paper describes a method for improving image quality in a color CMOS image sensor. The technique simultaneously compresses the dynamic range, reorganizes the signal to improve visibility, suppresses noise, identifies local features, and achieves color constancy and lightness rendition. An efficient hardware architecture and a rigorous analysis of the different modules are presented to achieve a high-quality CMOS digital camera.

    Training Methods for Shunting Inhibitory Artificial Neural Networks

    This project investigates a new class of high-order neural networks called shunting inhibitory artificial neural networks (SIANNs) and their training methods. SIANNs are biologically inspired neural networks whose dynamics are governed by a set of coupled nonlinear differential equations. The interactions among neurons are mediated via a nonlinear mechanism called shunting inhibition, which allows the neurons to operate as adaptive nonlinear filters. The project's main objective is to devise training methods, based on error-backpropagation type algorithms, that allow SIANNs to be trained to perform feature extraction for classification and nonlinear regression tasks. The training algorithms developed will simplify the task of designing complex, powerful neural networks for applications in pattern recognition, image processing, signal processing, machine vision and control. The five training methods adapted in this project for SIANNs are error backpropagation based on gradient descent (GD), gradient descent with variable learning rate (GDV), gradient descent with momentum (GDM), gradient descent with direct solution step (GDD) and the APOLEX algorithm. SIANNs and these training methods are implemented in MATLAB. Testing on several benchmarks, including the parity problems, classification of 2-D patterns, and function approximation, shows that SIANNs trained using these methods yield performance comparable to or better than multilayer perceptrons (MLPs).
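As an illustrative aside (not the project's MATLAB code), the shunting neuron and the GDM variant can be sketched in a few lines. The softplus inhibitory activation, the single inhibitory weight `c`, and the finite-difference gradient are all assumptions made for the sake of a runnable toy:

```python
import numpy as np

def forward(x, c, a=1.0):
    """Steady-state shunting neuron y = x / (a + f(c*x)): inhibition divides
    (shunts) the excitatory drive rather than subtracting from it."""
    f = lambda s: np.log1p(np.exp(s))      # softplus keeps the denominator > a
    return x / (a + f(c * x))

def train_gdm(xs, ys, c=0.0, lr=0.1, mu=0.8, epochs=1000, eps=1e-5):
    """Gradient descent with momentum (GDM), one of the five methods listed
    above, using a numeric gradient instead of analytic backpropagation."""
    mse = lambda cc: np.mean((forward(xs, cc) - ys) ** 2)
    v = 0.0
    for _ in range(epochs):
        g = (mse(c + eps) - mse(c - eps)) / (2 * eps)  # finite-difference gradient
        v = mu * v - lr * g                            # momentum update
        c += v
    return c

# recover an assumed "true" inhibitory weight c* = 2.0 from sampled data
xs = np.linspace(0.1, 2.0, 20)
ys = forward(xs, 2.0)
c_hat = train_gdm(xs, ys)
```

The momentum term smooths the parameter trajectory, which is the usual motivation for preferring GDM over plain GD on ravine-shaped error surfaces.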

    A wide dynamic range CMOS imager with extended shunting inhibition image processing capabilities

    A CMOS imager based on a novel mixed-mode VLSI implementation of biologically inspired shunting inhibition vision models is presented. It can achieve a wide range of image processing tasks, such as image enhancement or edge detection, via a programmable shunting inhibition processor. Its most important feature is a gain control mechanism allowing local and global adaptation to the mean input light intensity. This feature is shown to be very suitable for wide dynamic range imagers.

    SICNN optimisation, two dimensional implementation and comparison

    The study investigates the process of optimisation, implementation and comparison of a Shunting Inhibitory Cellular Neural Network (SICNN) for edge detection. Shunting inhibition is lateral inhibition where the inhibition function is nonlinear. Cellular Neural Networks are locally interconnected, nonlinear, parallel networks which can exist as either discrete-time or continuous networks. Cellular Neural Networks that use shunting inhibition as their nonlinear cell interaction are called Shunting Inhibitory Cellular Neural Networks. This project report examines some existing edge detectors and thresholding techniques. It then describes the optimisation of the connection weight matrix for the SICNN with complementary output processing and the SICNN with division output processing. The parameter values from this optimisation, as well as the thresholding methods studied, are used in a software implementation of the SICNN. This two-dimensional SICNN edge detector is then compared to some other common edge detectors, namely the Sobel and Canny detectors. It was found that the SICNN with complementary output processing performed as well as or better than the two other detectors. The SICNN was also very flexible, in that it could easily be modified to deal with different image conditions.
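To make the mechanism concrete, here is a hedged sketch of steady-state SICNN-style divisive edge detection. The 3x3 mean inhibitory field and the crude deviation threshold are assumptions for illustration, not the report's optimised weight matrix or its complementary/division output processing:

```python
import numpy as np

def sicnn_edges(img, a=0.1, thresh=0.2):
    """Illustrative steady-state SICNN-style edge detector: each pixel is
    divided by a shunting term driven by its 3x3 neighbourhood mean, so
    uniform regions give a near-constant ratio while intensity
    discontinuities deviate; a global threshold then marks them as edges."""
    h, w = img.shape
    pad = np.pad(img, 1, mode='edge')
    neigh = sum(pad[i:i + h, j:j + w]          # 3x3 box mean (assumed
                for i in range(3) for j in range(3)) / 9.0  # inhibitory field)
    x = img / (a + neigh)                      # divisive (shunting) normalisation
    return np.abs(x - x.mean()) > thresh       # boolean edge map

# vertical step edge between columns 2 and 3
img = np.full((4, 6), 0.2)
img[:, 3:] = 0.8
edges = sicnn_edges(img)
```

Because the normalisation is divisive rather than subtractive, the response to a step edge is largely independent of the absolute illumination level, which is what makes shunting networks attractive for varying image conditions.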

    Biologically Plausible Neural Circuits for Realization of Maximum Operations

    Object recognition in the visual cortex is based on a hierarchical architecture, in which specialized brain regions along the ventral pathway extract object features of increasing levels of complexity, accompanied by greater invariance in stimulus size, position, and orientation. Recent theoretical studies postulate that a non-linear pooling function, such as the maximum (MAX) operation, could be fundamental in achieving such invariance. In this paper, we are concerned with neurally plausible mechanisms that may be involved in realizing the MAX operation. Four canonical circuits are proposed, each based on neural mechanisms that have been previously discussed in the context of cortical processing. Through simulations and mathematical analysis, we examine the relative performance and robustness of these mechanisms. We derive experimentally verifiable predictions for each circuit and discuss their respective physiological considerations.
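One generic, textbook-style candidate for such a pooling function (not one of the paper's four specific circuits) is softmax-weighted pooling, which interpolates between the mean and the MAX as a gain parameter grows:

```python
import numpy as np

def soft_max_pool(x, k=10.0):
    """Softmax-weighted pooling y = sum_i x_i exp(k x_i) / sum_j exp(k x_j).
    At k = 0 this reduces to the plain mean; as the gain k grows it
    approaches max(x). (A generic approximation of the MAX operation,
    not a reproduction of the paper's circuits.)"""
    w = np.exp(k * (x - np.max(x)))   # shift exponents for numerical stability
    return float(np.sum(x * w) / np.sum(w))

responses = np.array([0.1, 0.5, 0.9])
pooled = soft_max_pool(responses, k=50.0)   # close to max(responses)
```

The gain k plays the role of a contrast parameter: a physiologically testable prediction of any such soft-MAX mechanism is that pooling sharpens from averaging toward winner-take-all as effective gain increases.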

    A generalised feedforward neural network architecture and its applications to classification and regression

    Shunting inhibition is a powerful computational mechanism that plays an important role in sensory neural information processing systems. It has been extensively used to model some important visual and cognitive functions. It equips neurons with a gain control mechanism that allows them to operate as adaptive non-linear filters. Shunting Inhibitory Artificial Neural Networks (SIANNs) are biologically inspired networks in which the basic synaptic computations are based on shunting inhibition. SIANNs were designed to solve difficult machine learning problems by exploiting the inherent non-linearity mediated by shunting inhibition. The aim was to develop powerful, trainable networks, with non-linear decision surfaces, for classification and non-linear regression tasks. This work enhances and extends the original SIANN architecture to a more general form called the Generalised Feedforward Neural Network (GFNN) architecture, which contains as subsets both the SIANN and the conventional Multilayer Perceptron (MLP) architectures. In the original SIANN structure, the number of shunting neurons in the hidden layers equals the number of inputs, because the neuron model used has a single direct excitatory input. This was found to be too restrictive, often resulting in inadequately small or inordinately large network structures.
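A minimal sketch of the generalised shunting neuron idea (functional form, activation choices, and names are assumed for illustration; the thesis's exact GFNN equations are not reproduced here): each hidden unit combines a weighted excitatory sum with divisive inhibition, so the layer width is decoupled from the input dimension:

```python
import numpy as np

def gfnn_layer(x, W, C, b, a):
    """Assumed GFNN-style layer: y_j = (b_j + tanh((W x)_j)) / (a_j + g((C x)_j)),
    with g a softplus so the denominator stays above a_j > 0. Unlike the
    original SIANN neuron, which had a single direct excitatory input, the
    excitatory drive here is a full weighted sum, so the number of hidden
    neurons need not equal the number of inputs."""
    g = lambda s: np.log1p(np.exp(s))  # softplus inhibitory activation
    return (b + np.tanh(W @ x)) / (a + g(C @ x))

# three hidden shunting neurons fed by only two inputs
x = np.array([0.5, -0.2])
y = gfnn_layer(x, W=np.ones((3, 2)), C=np.zeros((3, 2)),
               b=np.zeros(3), a=np.ones(3))
```

Setting all inhibitory weights C to zero collapses each unit toward a scaled perceptron-like response, which is one way to see how the MLP arises as a subset of the generalised architecture.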

    A neuromorphic model of olfactory processing and sparse coding in the Drosophila larva brain

    Animal nervous systems are highly efficient in processing sensory input. The neuromorphic computing paradigm aims at the hardware implementation of neural network computations to support novel solutions for building brain-inspired computing systems. Here, we take inspiration from sensory processing in the nervous system of the fruit fly larva. With its strongly limited computational resources of <200 neurons and <1,000 synapses, the larval olfactory pathway employs fundamental computations to transform broadly tuned receptor input at the periphery into an energy-efficient sparse code in the central brain. We show how this approach allows us to achieve sparse coding and increased separability of stimulus patterns in a spiking neural network, validated with both software simulation and hardware emulation on mixed-signal real-time neuromorphic hardware. We verify that feedback inhibition is the central motif to support sparseness in the spatial domain, across the neuron population, while the combination of spike frequency adaptation and feedback inhibition determines sparseness in the temporal domain. Our experiments demonstrate that such small, biologically realistic neural networks, efficiently implemented on neuromorphic hardware, can achieve parallel processing and efficient encoding of sensory input at full temporal resolution.
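The feedback-inhibition motif can be caricatured in a few lines of rate-based (not spiking or neuromorphic) code, with all parameter values invented for illustration:

```python
import numpy as np

def sparsen(drive, w_inh=5.0, steps=200, dt=0.1):
    """Toy rate-model analogue of the feedback-inhibition motif: each unit
    is excited by its input drive and inhibited by the summed population
    activity fed back through w_inh. Strong global feedback lets only the
    most strongly driven units stay active, sparsening the population code.
    (A rate-based caricature, not the paper's spiking implementation.)"""
    r = np.zeros_like(drive)
    for _ in range(steps):
        inh = w_inh * r.sum()                          # global feedback inhibition
        r += dt * (-r + np.maximum(drive - inh, 0.0))  # rectified rate dynamics
    return r

# broadly tuned input: every channel is driven, but only a few stay active
rates = sparsen(np.array([1.0, 0.9, 0.8, 0.2]))
```

Even this caricature shows the spatial-sparseness effect described above: the broadly tuned input activates every channel, but after the feedback loop settles only the most strongly driven units carry appreciable activity.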

    Efficient Training Algorithms for a Class of Shunting Inhibitory Convolutional Neural Networks
