
    Stable Irregular Dynamics in Complex Neural Networks

    For infinitely large sparse networks of spiking neurons, mean field theory shows that a balanced state of highly irregular activity arises under various conditions. Here we analytically investigate the microscopic irregular dynamics in finite networks of arbitrary connectivity, keeping track of all individual spike times. For delayed, purely inhibitory interactions we demonstrate that the irregular dynamics is not chaotic but rather stable and convergent towards periodic orbits. Moreover, every generic periodic orbit of these dynamical systems is stable. These results highlight that chaotic and stable dynamics are equally capable of generating irregular activity.
    Comment: 10 pages, 2 figures

    Optimising the topology of complex neural networks

    In this paper, we study instances of complex neural networks, i.e. neural networks with complex topologies. We use Self-Organizing Map neural networks, whose neighbourhood relationships are defined by a complex network, to classify handwritten digits. We show that topology has a small impact on performance and robustness to neuron failures, at least at long learning times. Performance may however be increased (by almost 10%) by artificial evolution of the network topology. In our experimental conditions, the evolved networks are more random than their parents, but display a more heterogeneous degree distribution.
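    The abstract's central idea, a Self-Organizing Map whose neighbourhood is given by an arbitrary graph rather than a regular grid, can be sketched as below. This is an illustrative reconstruction, not the authors' code; the function name `som_update`, the shortest-path distance matrix `adj_dist`, and the Gaussian neighbourhood kernel are all assumptions.

```python
import numpy as np

def som_update(weights, adj_dist, x, lr, sigma):
    """One SOM step with a graph-defined neighbourhood.

    weights:  (n_neurons, dim) weight vectors.
    adj_dist: (n_neurons, n_neurons) graph distances between neurons
              (e.g. shortest-path lengths on a complex network),
              replacing the usual 2-D grid distance.
    """
    # Best-matching unit: the neuron whose weight vector is closest to x.
    bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
    # Gaussian neighbourhood over graph distance from the BMU.
    h = np.exp(-adj_dist[bmu] ** 2 / (2 * sigma ** 2))
    # Pull each neuron's weights toward x, scaled by its neighbourhood value.
    weights += lr * h[:, None] * (x - weights)
    return weights, bmu
```

    Swapping `adj_dist` for a grid-distance matrix recovers the classical SOM, which is what makes the topology itself an evolvable parameter.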

    Complex Neural Networks for Audio

    Audio is represented in two mathematically equivalent ways: the real-valued time domain (i.e., waveform) and the complex-valued frequency domain (i.e., spectrum). There are advantages to the frequency-domain representation; e.g., the human auditory system is known to process sound in the frequency domain. Furthermore, linear time-invariant systems are convolved with sources in the time domain, whereas they may be factorized in the frequency domain. Neural networks have become rather useful when applied to audio tasks such as machine listening and audio synthesis, which are related by their dependence on high-quality acoustic models. They ideally encapsulate fine-scale temporal structure, such as that encoded in the phase of frequency-domain audio, yet there are no authoritative deep learning methods for complex audio. This manuscript is dedicated to addressing this shortcoming. Chapter 2 motivates complex networks by their affinity with complex-domain audio, while Chapter 3 contributes methods for building and optimizing complex networks. We show that the naive implementation of Adam optimization is incorrect for complex random variables and that the selection of input and output representation has a significant impact on the performance of a complex network. Experimental results with novel complex neural architectures are provided in the second half of this manuscript. Chapter 4 introduces a complex model for binaural audio source localization. We show that, like humans, the complex model can generalize to different anatomical filters, which is important in the context of machine listening. The complex model's performance is better than that of the real-valued models, as well as real- and complex-valued baselines. Chapter 5 proposes a two-stage method for speech enhancement. In the first stage, a complex-valued stochastic autoencoder projects complex vectors to a discrete space. In the second stage, long-term temporal dependencies are modeled in the discrete space. The autoencoder raises the performance ceiling for state-of-the-art speech enhancement, but the dynamic enhancement model does not outperform other baselines. We discuss areas for improvement and note that the complex Adam optimizer improves training convergence over the naive implementation.
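    The claim that naive Adam is incorrect for complex random variables can be illustrated with a minimal sketch (an assumption-laden reconstruction, not the thesis's implementation): for a complex gradient g, the second-moment estimate must use the real quantity |g|² = g·conj(g), whereas a direct port of real-valued Adam would compute g², which is complex and makes the adaptive step size ill-defined.

```python
import numpy as np

def complex_adam_step(param, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam step for a complex parameter (illustrative sketch)."""
    # First moment: a complex exponential moving average of the gradient.
    m = b1 * m + (1 - b1) * grad
    # Second moment: uses |g|^2 = g * conj(g), which is real and non-negative.
    # A naive port of real-valued Adam would use grad ** 2 here, which is
    # complex-valued and yields an invalid "variance" estimate.
    v = b2 * v + (1 - b2) * (grad * np.conj(grad)).real
    # Standard bias correction, as in real-valued Adam.
    m_hat = m / (1 - b1 ** t)
    v_hat = v / (1 - b2 ** t)
    param = param - lr * m_hat / (np.sqrt(v_hat) + eps)
    return param, m, v
```

    With this variant, the denominator stays a real magnitude, so the update direction is the (bias-corrected) complex gradient average, scaled elementwise.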

    Information transmission in complex neural networks

    Bachelor's thesis (Treballs Finals de Grau de Física), Facultat de Física, Universitat de Barcelona, 2018. Advisor: Maria Ángeles Serrano Moral.
    Computational neural networks are inspired by the structure of the brain and designed to mimic its intelligence artificially. In particular, they can be used as a tool to explore information transport and processing among interconnected units forming layers. In this article we study how the structure of the network (number of layers) and the state of the connections (edge weights) affect the information flow. A computational model that allows us to simulate neurons' collective behaviour has been applied to different neural network configurations, analysing the variation of some structural features. Results show a strong dependence between the number of connections and the response of the network. We have also found a relationship between the edge weight distribution and the propagation of information from the input node to the output layer.

    MARGIN: Uncovering Deep Neural Networks using Graph Signal Analysis

    Interpretability has emerged as a crucial aspect of machine learning, aimed at providing insights into the working of complex neural networks. However, existing solutions vary vastly based on the nature of the interpretability task, with each use case requiring substantial time and effort. This paper introduces MARGIN, a simple yet general approach to address a large set of interpretability tasks, ranging from identifying prototypes to explaining image predictions. MARGIN exploits ideas rooted in graph signal analysis to determine influential nodes in a graph, which are defined as those nodes that maximally describe a function defined on the graph. By carefully defining task-specific graphs and functions, we demonstrate that MARGIN outperforms existing approaches in a number of disparate interpretability challenges.
    Comment: Technical Report
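    One common way in graph signal analysis to score "nodes that maximally describe a function defined on the graph" is high-pass filtering with the graph Laplacian: nodes where the function varies sharply relative to their neighbours score highly. The sketch below illustrates that general idea only; it is not MARGIN itself, and the function name `influence_scores` is an assumption.

```python
import numpy as np

def influence_scores(adj, f):
    """Score each node by how sharply the graph signal f varies there.

    adj: (n, n) symmetric adjacency matrix.
    f:   (n,) real-valued function (graph signal) on the nodes.
    """
    # Combinatorial graph Laplacian L = D - A.
    deg = np.diag(adj.sum(axis=1))
    lap = deg - adj
    # |L f| acts as a high-pass filter: it is large at nodes whose value
    # differs strongly from the (degree-weighted) average of their neighbours.
    return np.abs(lap @ f)
```

    On a path graph with signal f = (0, 1, 0), the middle node, where the signal peaks, receives the largest score, matching the intuition of an "influential" node for that function.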