1,888 research outputs found

    Correlative Information Maximization Based Biologically Plausible Neural Networks for Correlated Source Separation

    Full text link
    The brain effortlessly extracts latent causes of stimuli, but how it does this at the network level remains unknown. Most prior attempts at this problem proposed neural networks that implement independent component analysis, which works under the limitation that latent causes are mutually independent. Here, we relax this limitation and propose a biologically plausible neural network that extracts correlated latent sources by exploiting information about their domains. To derive this network, we choose maximum correlative information transfer from inputs to outputs as the separation objective, under the constraint that the outputs are restricted to their presumed sets. The online formulation of this optimization problem naturally leads to neural networks with local learning rules. Our framework incorporates infinitely many source domain choices and flexibly models complex latent structures. Choices of simplex or polytopic source domains result in networks with piecewise-linear activation functions. We provide numerical examples to demonstrate the superior correlated source separation capability for both synthetic and natural sources. Comment: Preprint, 32 pages
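
    The abstract notes that a simplex source domain yields a piecewise-linear activation. Below is a minimal sketch, not the paper's network: Euclidean projection of an output vector onto the probability simplex, which is exactly such a piecewise-linear operation. The function name and constants are illustrative assumptions.

```python
# Sketch: projection onto the probability simplex (a piecewise-linear map),
# illustrating the kind of activation a simplex source-domain constraint induces.
import numpy as np

def project_to_simplex(v):
    """Euclidean projection of v onto {y : y >= 0, sum(y) = 1}."""
    u = np.sort(v)[::-1]                                   # sort descending
    css = np.cumsum(u)                                     # cumulative sums
    j = np.arange(1, len(v) + 1)
    rho = np.nonzero(u + (1.0 - css) / j > 0)[0][-1]       # last index satisfying the condition
    theta = (1.0 - css[rho]) / (rho + 1)                   # shift enforcing the constraints
    return np.maximum(v + theta, 0.0)                      # piecewise-linear in v

y = project_to_simplex(np.array([0.8, 0.3, -0.2]))
print(y, y.sum())                                          # non-negative entries, sums to 1
```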

    Perceptually motivated blind source separation of convolutive audio mixtures

    Get PDF

    A biologically plausible system for detecting saliency in video

    Get PDF
    Neuroscientists and cognitive scientists credit the dorsal and ventral pathways with the capability of detecting both still-salient and motion-salient objects. In this work, a framework is developed to explore potential models of still and motion saliency; it is an extension of the original VENUS system. The early visual pathway is modeled by using Independent Component Analysis to learn a set of Gabor-like receptive fields similar to those found in the mammalian visual pathway. These spatial receptive fields form a set of 2D basis feature matrices, which are used to decompose complex visual stimuli into their spatial components. A still saliency map is formed by combining the outputs of convolving the learned spatial receptive fields with the input stimuli. The dorsal pathway is primarily focused on motion-based information. In this framework, the model uses simple motion segmentation and tracking algorithms to create a statistical model of the motion- and color-related information in video streams. A key feature of the human visual system is the ability to detect novelty. This framework uses a set of Gaussian distributions to model color and motion. When a unique event is detected, Gaussian distributions are created and the event is declared novel. The next time a similar event is detected, the framework is able to determine that the event is not novel based on the previously created distributions. A forgetting term is also included that allows events that have not been detected for a long period of time to be forgotten.
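
    A minimal sketch, not the VENUS framework itself: novelty detection with a bank of Gaussian models over a scalar feature (standing in for a colour or motion statistic), plus a forgetting term that decays unused models. The threshold, decay constant, and update rate are illustrative assumptions.

```python
# Sketch: Gaussian-bank novelty detection with forgetting.
import numpy as np

class NoveltyModel:
    def __init__(self, mahalanobis_thresh=3.0, decay=0.999):
        self.models = []                      # each model: dict(mean, var, weight)
        self.thresh = mahalanobis_thresh
        self.decay = decay                    # forgetting: weights decay every frame

    def observe(self, x):
        for m in self.models:                 # decay all models, drop forgotten ones
            m["weight"] *= self.decay
        self.models = [m for m in self.models if m["weight"] > 1e-3]

        for m in self.models:
            d = abs(x - m["mean"]) / np.sqrt(m["var"])
            if d < self.thresh:               # explained by an existing Gaussian
                m["weight"] = 1.0
                m["mean"] += 0.05 * (x - m["mean"])
                return False                  # not novel
        # no model explains the event: create one and declare it novel
        self.models.append({"mean": float(x), "var": 1.0, "weight": 1.0})
        return True

detector = NoveltyModel()
print([detector.observe(v) for v in [0.1, 0.15, 5.0, 5.1]])   # [True, False, True, False]
```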

    Spiking Neural Networks for Inference and Learning: A Memristor-based Design Perspective

    Get PDF
    On metrics of density and power efficiency, neuromorphic technologies have the potential to surpass mainstream computing technologies in tasks where real-time functionality, adaptability, and autonomy are essential. While algorithmic advances in neuromorphic computing are proceeding successfully, the potential of memristors to improve neuromorphic computing has not yet borne fruit, primarily because they are often used as a drop-in replacement for conventional memory. However, interdisciplinary approaches anchored in machine learning theory suggest that multifactor plasticity rules matching neural and synaptic dynamics to the device capabilities can take better advantage of memristor dynamics and their stochasticity. Furthermore, such plasticity rules generally achieve much higher performance than classical Spike-Timing-Dependent Plasticity (STDP) rules. This chapter reviews recent developments in learning with spiking neural network models and their possible implementation with memristor-based hardware.
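
    For reference, a minimal sketch of the classical pair-based STDP rule mentioned above, not a memristor model: the weight change depends only on the spike-time difference between a pre- and postsynaptic spike. Amplitudes and time constants are illustrative; multifactor rules would add further modulating terms (e.g. a reward or error signal) to this update.

```python
# Sketch: pair-based STDP weight update as a function of spike-time difference.
import numpy as np

def stdp_dw(delta_t, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Weight change for one pre/post spike pair; delta_t = t_post - t_pre in ms."""
    if delta_t > 0:                           # pre before post: potentiation
        return a_plus * np.exp(-delta_t / tau_plus)
    else:                                     # post before pre: depression
        return -a_minus * np.exp(delta_t / tau_minus)

for dt in (-40, -10, 10, 40):
    print(dt, round(stdp_dw(dt), 5))
```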

    Disentangling causal webs in the brain using functional Magnetic Resonance Imaging: A review of current approaches

    Get PDF
    In the past two decades, functional Magnetic Resonance Imaging has been used to relate neuronal network activity to cognitive processing and behaviour. Recently, this approach has been augmented by algorithms that allow us to infer causal links between component populations of neuronal networks. Multiple inference procedures have been proposed to approach this research question, but so far each method has limitations when it comes to establishing whole-brain connectivity patterns. In this work, we discuss eight ways to infer causality in fMRI research: Bayesian Nets, Dynamical Causal Modelling, Granger Causality, Likelihood Ratios, LiNGAM, Patel's Tau, Structural Equation Modelling, and Transfer Entropy. We finish by formulating some recommendations for future directions in this area.
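
    A minimal toy sketch of one of the eight approaches listed above, pairwise Granger causality: y "Granger-causes" x if y's past improves the prediction of x beyond x's own past. Real analyses use lag selection and an F-test; this illustrative version just compares residual sums of squares and is not drawn from the review itself.

```python
# Sketch: pairwise Granger causality as the relative drop in residual variance.
import numpy as np

def granger_improvement(x, y, lag=2):
    """Relative drop in the residual sum of squares of x when y's past is added."""
    n = len(x)
    target = x[lag:]
    own = np.array([x[t - lag:t] for t in range(lag, n)])                # x's own past
    both = np.hstack([own, np.array([y[t - lag:t] for t in range(lag, n)])])  # plus y's past

    def rss(design):
        design = np.column_stack([design, np.ones(len(design))])        # intercept term
        beta, *_ = np.linalg.lstsq(design, target, rcond=None)
        resid = target - design @ beta
        return float(resid @ resid)

    rss_own, rss_both = rss(own), rss(both)
    return (rss_own - rss_both) / rss_own                                # > 0: y's past helped

rng = np.random.default_rng(0)
y = rng.standard_normal(500)
x = np.zeros(500)
for t in range(1, 500):
    x[t] = 0.8 * y[t - 1] + 0.1 * rng.standard_normal()                  # y drives x with lag 1

print(granger_improvement(x, y))    # large: y helps predict x
print(granger_improvement(y, x))    # near zero: x adds little for y
```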

    Biologically inspired feature extraction for rotation and scale tolerant pattern analysis

    Get PDF
    Biologically motivated information processing has been an important area of scientific research for decades. The central topic addressed in this dissertation is the utilization of lateral inhibition and, more generally, linear networks with recurrent connectivity, along with complex-log conformal mapping, in machine-based implementations of information encoding, feature extraction, and pattern recognition. The reasoning behind and method for a spatially uniform implementation of the inhibitory/excitatory network model in the framework of the non-uniform log-polar transform is presented. For the space-invariant connectivity model characterized by a Toeplitz-Block-Toeplitz matrix, the overall network response is obtained without matrix inverse operations, provided the connection-matrix generating function is bounded by unity. It is shown that for a network with an inter-neuron connection function expandable as a Fourier series in polar angle, the overall network response is steerable. The decorrelating/whitening characteristics of networks with lateral inhibition are used to develop space-invariant pre-whitening kernels specialized for specific categories of input signals. These filters have an extremely small memory footprint and are successfully utilized to improve the performance of adaptive neural whitening algorithms. Finally, a method for feature extraction based on a localized Independent Component Analysis (ICA) transform in the log-polar domain, aided by the previously developed pre-whitening filters, is implemented. Since the output codes produced by ICA are very sparse, a small number of non-zero coefficients is sufficient to encode the input data and obtain reliable pattern recognition performance.
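
    A minimal sketch of the complex-log (log-polar) mapping discussed above, not the dissertation's pipeline: under this conformal map, scaling and rotation of the input become translations of the transformed image, which is what gives rotation and scale tolerance. Grid sizes and nearest-neighbour sampling are illustrative choices.

```python
# Sketch: resampling an image onto a log-polar (log r, theta) grid.
import numpy as np

def log_polar_sample(img, n_rings=32, n_wedges=64):
    """Nearest-neighbour resampling of a square image onto a log-polar grid."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r_max = min(cy, cx)
    rho = np.exp(np.linspace(0.0, np.log(r_max), n_rings))      # log-spaced radii
    theta = np.linspace(0.0, 2.0 * np.pi, n_wedges, endpoint=False)
    rr, tt = np.meshgrid(rho, theta, indexing="ij")
    ys = np.clip(np.round(cy + rr * np.sin(tt)).astype(int), 0, h - 1)
    xs = np.clip(np.round(cx + rr * np.cos(tt)).astype(int), 0, w - 1)
    return img[ys, xs]                                           # shape (n_rings, n_wedges)

img = np.random.default_rng(0).random((128, 128))
print(log_polar_sample(img).shape)                               # (32, 64)
```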

    The neural marketplace

    Get PDF
    The `retroaxonal hypothesis' (Harris, 2008) posits a role for slow retrograde signalling in learning. It is based on the intuition that cells with strong output synapses tend to be those that encode useful information, and that cells which encode useful information should not modify their input synapses too readily. The hypothesis has two parts: first, that the stronger a cell's output synapses, the less likely it is to change its input synapses; and second, that a cell is more likely to revert changes to its input synapses when the changes are followed by weakening of its output synapses. It is motivated in part by an analogy between a neural network and a market economy, viewing neurons as `entrepreneurs' who `sell' spike trains to each other. In this view, the slow retrograde signals which tell a neuron that it has strong output synapses are `money' and imply that what it produces is useful. This thesis constructs a mathematical model of learning which validates the intuition of the retroaxonal hypothesis. In this model, we show that neurons can estimate their usefulness, or `worth', from the magnitude of their output weights. We also show that by making each cell's input synapses more or less plastic according to its worth, the performance of a network can be improved.
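
    A minimal sketch of the intuition described above, not the thesis's actual model: each hidden unit estimates its `worth' from the magnitude of its output weights and scales the plasticity of its input weights accordingly, so high-worth cells change their inputs less readily. The squashing function, constants, and variable names are illustrative assumptions.

```python
# Sketch: worth-modulated plasticity of input weights.
import numpy as np

rng = np.random.default_rng(0)
W_in = rng.standard_normal((4, 8)) * 0.1      # input weights of 4 hidden cells
W_out = rng.standard_normal((1, 4)) * 0.1     # output weights of those cells (one readout)

def worth(W_out):
    """Per-cell worth: a squashed function of output-weight magnitude."""
    return np.tanh(np.linalg.norm(W_out, axis=0))          # values in [0, 1)

def update_inputs(W_in, grad_in, base_lr=0.1):
    """Scale each cell's input-weight update by (1 - worth): useful cells
    keep their input synapses more stable."""
    lr = base_lr * (1.0 - worth(W_out))[:, None]
    return W_in - lr * grad_in

grad = rng.standard_normal(W_in.shape)         # stand-in for a task gradient
print(np.abs(update_inputs(W_in, grad) - W_in).mean(axis=1))   # per-cell change magnitude
```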

    Vegetal diamine oxidase alleviates histamine-induced contraction of colonic muscles

    Get PDF
    An excess of histamine in the gut lumen generates pronounced gastrointestinal discomfort, which may include diarrhea and peristalsis dysfunctions. The deleterious effects of histamine can be alleviated with antihistamine drugs targeting histamine receptors. However, many antihistamine agents come with various undesirable side effects. Vegetal diamine oxidase (vDAO) might be a relevant alternative owing to its histaminase activity. Mammalian intestinal mucosa contains an endogenous DAO, yet it possesses lower activity compared to that of the vDAO preparation. Moreover, in several pathological conditions such as inflammatory bowel disease and irritable bowel syndrome, this endogenous DAO enzyme can be lost or inactivated. Here, we tested the therapeutic potential of vDAO by focusing on the well-known effect of histamine on gut motility. Using ex vivo and in vitro assays, we found that vDAO is more potent than commercial antihistamine drugs at inhibiting histamine-induced contraction of murine distal colon muscles. We also identified pyridoxal 5′-phosphate (the biologically active form of vitamin B6) as an effective enhancer of vDAO antispasmodic activity. Furthermore, we discovered that rectally administered vDAO can be retained on the gut mucosa and remain active. These observations make administration of vDAO in the gut lumen a valid alternative treatment for histamine-induced intestinal dysfunctions.