
    Evaluating performance of neural codes in model neural communication networks

    Information needs to be appropriately encoded to be reliably transmitted over physical media. Similarly, neurons have their own codes to convey information in the brain. Even though it is well known that neurons exchange information using a pool of several protocols of spatio-temporal encoding, the suitability of each code and its performance as a function of network parameters and external stimuli remain one of the great mysteries in neuroscience. This paper sheds light on this by modeling small networks of chemically and electrically coupled Hindmarsh-Rose spiking neurons. We focus on a class of temporal and firing-rate codes derived from the neurons' membrane potentials and phases, and quantify their performance numerically by estimating the Mutual Information Rate, i.e. the rate of information exchange. Our results suggest that the firing-rate and interspike-interval codes are more robust to additive Gaussian white noise. In a network of four interconnected neurons and in the absence of such noise, the pairs of neurons with the largest rate of information exchange under the interspike-interval and firing-rate codes are not adjacent in the network, whereas the spike-timing and phase (temporal) codes promote a large rate of information exchange for adjacent neurons. If this result could be extended to larger neural networks, it would suggest that small microcircuits preferably exchange information using temporal codes (spike-timing and phase codes), whereas on the macroscopic scale, where pairs of neurons are typically not directly connected due to the brain's sparsity, the firing-rate and interspike-interval codes would be the most efficient.
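    To fix ideas about the model behind these codes, the following is a minimal sketch (not the authors' code) of a single Hindmarsh-Rose neuron integrated with an explicit Euler scheme; the membrane-potential variable x is what the temporal and firing-rate codes above would be read from. The parameter values are the standard textbook choices and the injected current I is an illustrative assumption.

    import numpy as np

    def hindmarsh_rose(T=2000.0, dt=0.01, I=3.25, a=1.0, b=3.0, c=1.0,
                       d=5.0, r=0.005, s=4.0, x_rest=-1.6):
        """Membrane-potential trace x(t) of one Hindmarsh-Rose neuron (Euler)."""
        n = int(T / dt)
        x, y, z = np.empty(n), np.empty(n), np.empty(n)
        x[0], y[0], z[0] = -1.0, 0.0, 2.0        # arbitrary initial condition
        for k in range(n - 1):
            dx = y[k] - a * x[k]**3 + b * x[k]**2 - z[k] + I
            dy = c - d * x[k]**2 - y[k]
            dz = r * (s * (x[k] - x_rest) - z[k])
            x[k + 1] = x[k] + dt * dx
            y[k + 1] = y[k] + dt * dy
            z[k + 1] = z[k] + dt * dz
        return x

    v = hindmarsh_rose()                          # bursting trace for I = 3.25

    In a network, electrical (gap-junction, diffusive) and chemical coupling terms would be added to dx for each neuron; they are omitted here for brevity.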

    Evaluating performance of neural codes in model neural communication networks

    Information needs to be appropriately encoded to be reliably transmitted over physical media. Similarly, neurons have their own codes to convey information in the brain. Even though it is well known that neurons exchange information using a pool of several protocols of spatio-temporal encoding, the suitability of each code and its performance as a function of network parameters and external stimuli remain one of the great mysteries in neuroscience. This paper sheds light on this problem by considering small networks of chemically and electrically coupled Hindmarsh-Rose spiking neurons. We focus on the fundamental mathematical aspects of a class of temporal and firing-rate codes that result from the neurons' action potentials and phases, and quantify their performance by measuring the Mutual Information Rate, i.e. the rate of information exchange. A particularly interesting result concerns the performance of the codes with respect to the way neurons are connected. We show that the pairs of neurons with the largest rate of information exchange under the interspike-interval and firing-rate codes are not adjacent in the network, whereas the spike-timing and phase codes promote a large rate of information exchange between adjacent neurons. If this result could be extended to larger neural networks, it would suggest that small microcircuits of fully connected neurons, also known as cliques, preferably exchange information using temporal codes (spike-timing and phase codes), whereas on the macroscopic scale, where there will typically be pairs of neurons that are not directly connected due to the brain's sparsity, the most efficient codes would be the firing-rate and interspike-interval codes, with the latter being closely related to the firing-rate code.
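    As a companion sketch (again an illustration, not the authors' estimator), the interspike-interval and firing-rate codes can be read off a membrane-potential trace such as the one above, and a naive plug-in estimate of the mutual information between two equal-length code sequences can be computed as follows; the spike threshold, window length and bin count are assumed values.

    import numpy as np

    def spike_times(v, dt=0.01, threshold=1.0):
        """Times of rising threshold crossings of the membrane potential."""
        above = v > threshold
        return np.flatnonzero(~above[:-1] & above[1:]) * dt

    def interspike_intervals(spikes):
        """Interspike-interval code: gaps between consecutive spike times."""
        return np.diff(spikes)

    def firing_rate_code(spikes, t_max, window=50.0):
        """Firing-rate code: spike counts in consecutive windows of fixed length."""
        edges = np.arange(0.0, t_max + window, window)
        counts, _ = np.histogram(spikes, bins=edges)
        return counts

    def plug_in_mutual_information(a, b, bins=8):
        """Naive binned mutual information (bits) between two equal-length codes."""
        pab, _, _ = np.histogram2d(a, b, bins=bins)
        pab /= pab.sum()
        pa = pab.sum(axis=1, keepdims=True)
        pb = pab.sum(axis=0, keepdims=True)
        nz = pab > 0
        return float(np.sum(pab[nz] * np.log2(pab[nz] / (pa @ pb)[nz])))

    Dividing such an estimate by the symbol (window) duration gives a crude rate; the Mutual Information Rate used in the paper is estimated with more careful methods, so this is only meant to make the quantities concrete.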

    A Very Brief Introduction to Machine Learning With Applications to Communication Systems

    Given the unprecedented availability of data and computing resources, there is widespread renewed interest in applying data-driven machine learning methods to problems for which the development of conventional engineering solutions is challenged by modelling or algorithmic deficiencies. This tutorial-style paper starts by addressing the questions of why and when such techniques can be useful. It then provides a high-level introduction to the basics of supervised and unsupervised learning. For both supervised and unsupervised learning, exemplifying applications to communication networks are discussed, distinguishing tasks carried out at the edge and in the cloud segments of the network, at different layers of the protocol stack.
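    To make the supervised/unsupervised distinction concrete in a communications setting, here is a minimal sketch (not taken from the tutorial) that detects noisy BPSK symbols when labels are available (supervised) and recovers the two constellation clusters without labels (unsupervised); the noise level, sample size and scikit-learn models are illustrative assumptions.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)

    # Supervised: the transmitted bits are known at training time.
    bits = rng.integers(0, 2, 1000)
    rx = (2 * bits - 1) + 0.5 * rng.standard_normal(1000)   # BPSK + AWGN
    clf = LogisticRegression().fit(rx.reshape(-1, 1), bits)
    print("supervised accuracy:", clf.score(rx.reshape(-1, 1), bits))

    # Unsupervised: only the received samples are available; recover the
    # two constellation clusters without any labels.
    km = KMeans(n_clusters=2, n_init=10).fit(rx.reshape(-1, 1))
    print("cluster centers:", km.cluster_centers_.ravel())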

    Scaling Deep Learning on GPU and Knights Landing clusters

    The speed of training deep neural networks has become a major bottleneck in deep learning research and development. For example, training GoogLeNet on the ImageNet dataset with one Nvidia K20 GPU takes 21 days. To speed up training, current deep learning systems rely heavily on hardware accelerators. However, these accelerators have limited on-chip memory compared with CPUs, so to handle large datasets they need to fetch data from either CPU memory or remote processors. We use both self-hosted Intel Knights Landing (KNL) clusters and multi-GPU clusters as our target platforms. From an algorithmic perspective, current distributed machine learning systems are mainly designed for cloud systems; these methods are asynchronous because of the slow network and high fault-tolerance requirements of cloud systems. We focus on Elastic Averaging SGD (EASGD) to design algorithms for HPC clusters. The original EASGD uses a round-robin method for communication and updating, with communication ordered by machine rank ID, which is inefficient on HPC clusters. First, we redesign four efficient algorithms for HPC systems to improve EASGD's poor scaling on clusters. Async EASGD, Async MEASGD, and Hogwild EASGD are faster than their existing counterparts (Async SGD, Async MSGD, and Hogwild SGD, respectively) in all comparisons. Finally, we design Sync EASGD, which ties for the best performance among all the methods while being deterministic. In addition to the algorithmic improvements, we use system-algorithm co-design techniques to scale up the algorithms. By reducing the percentage of communication from 87% to 14%, our Sync EASGD achieves a 5.3x speedup over the original EASGD on the same platform. We get 91.5% weak scaling efficiency on 4253 KNL cores, which is higher than the state-of-the-art implementation.
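    The elastic-averaging update at the core of EASGD is simple to state. Below is a minimal single-process sketch (an illustration, not the paper's MPI/HPC implementation) of the synchronous variant on a toy quadratic objective: each worker takes a noisy gradient step and is pulled toward a shared center variable, which in turn moves toward the workers; the step size, elasticity, and problem size are assumed values.

    import numpy as np

    rng = np.random.default_rng(0)
    n_workers, dim = 4, 10
    eta, rho, steps = 0.05, 0.1, 200
    target = rng.standard_normal(dim)            # minimizer of the toy objective

    workers = [rng.standard_normal(dim) for _ in range(n_workers)]
    center = np.zeros(dim)

    def grad(x, noise=0.1):
        # Gradient of 0.5 * ||x - target||^2 plus noise mimicking minibatch sampling.
        return (x - target) + noise * rng.standard_normal(dim)

    for _ in range(steps):
        alpha = eta * rho
        # Workers: gradient step plus an elastic pull toward the shared center.
        new_workers = [x - eta * grad(x) - alpha * (x - center) for x in workers]
        # Center: moves toward the workers (synchronous, uses the old worker states).
        center = center + alpha * sum(x - center for x in workers)
        workers = new_workers

    print("distance of center from optimum:", np.linalg.norm(center - target))

    The paper's contribution lies in how this update is scheduled and communicated on KNL and GPU clusters (replacing the round-robin ordering), not in the update rule itself.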