
    Finite-State Channel Models for Signal Transduction in Neural Systems

    Information theory provides powerful tools for understanding communication systems. This analysis can be applied to intercellular signal transduction, which is a means of chemical communication among cells and microbes. We discuss how to apply information-theoretic analysis to ligand-receptor systems, which form the signal carrier and receiver in intercellular signal transduction channels. We also discuss the applications of these results to neuroscience.
    Comment: Accepted for publication in 2016 IEEE International Conference on Acoustics, Speech, and Signal Processing, Shanghai, China
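    The information-theoretic framing in this abstract can be made concrete with a toy example: treat the ligand-receptor pair as a discrete channel whose input is ligand presence and whose output is receptor state. The probabilities below are illustrative, not taken from the paper.

```python
from math import log2

def mutual_information(p_x, p_y_given_x):
    """I(X;Y) in bits for a discrete channel, given the input distribution
    p_x and conditional probabilities p_y_given_x[x][y]."""
    p_y = [sum(p_x[x] * p_y_given_x[x][y] for x in range(len(p_x)))
           for y in range(len(p_y_given_x[0]))]
    mi = 0.0
    for x, px in enumerate(p_x):
        for y, pyx in enumerate(p_y_given_x[x]):
            if px > 0 and pyx > 0:
                mi += px * pyx * log2(pyx / p_y[y])
    return mi

# Hypothetical receptor: ligand absent/present (X) vs. receptor unbound/bound (Y)
channel = [[0.9, 0.1],   # ligand absent: receptor mostly unbound
           [0.2, 0.8]]   # ligand present: receptor mostly bound
print(mutual_information([0.5, 0.5], channel))  # bits conveyed per use
```

    A completely noisy channel (both rows identical) would yield zero mutual information, which is one way to sanity-check such a model.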

    Capacity of a Simple Intercellular Signal Transduction Channel

    We model the ligand-receptor molecular communication channel with a discrete-time Markov model, and show how to obtain the capacity of this channel. We show that the capacity-achieving input distribution is iid; further, unusually for a channel with memory, we show that feedback does not increase the capacity of this channel.
    Comment: 5 pages, 1 figure. To appear in the 2013 IEEE International Symposium on Information Theory
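    For intuition about computing channel capacity numerically, here is a minimal Blahut-Arimoto sketch for a discrete memoryless channel. The paper's channel has memory, so this is only a simplified analogue, and the transition matrix used below is hypothetical.

```python
import numpy as np

def blahut_arimoto(W, iters=200):
    """Capacity (bits) of a discrete memoryless channel with transition
    matrix W[x, y] = P(Y=y | X=x), via Blahut-Arimoto iterations."""
    def kl_rows(W, q):
        # per-row KL divergence D(W[x, :] || q) in bits, with 0*log 0 := 0
        ratio = np.where(W > 0, W / q, 1.0)
        return np.sum(W * np.log2(ratio), axis=1)

    p = np.full(W.shape[0], 1.0 / W.shape[0])  # start from a uniform input
    for _ in range(iters):
        q = p @ W                              # induced output distribution
        p = p * np.exp2(kl_rows(W, q))         # reweight toward informative inputs
        p /= p.sum()
    return float(np.sum(p * kl_rows(W, p @ W)))

# Binary symmetric channel, crossover 0.1: capacity = 1 - H(0.1) ~ 0.531 bits
W = np.array([[0.9, 0.1], [0.1, 0.9]])
print(blahut_arimoto(W))
```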

    Fold-Hopf Bursting in a Model for Calcium Signal Transduction

    We study a recent model for calcium signal transduction. This model displays spiking, bursting and chaotic oscillations in accordance with experimental results. We calculate bifurcation diagrams and study the bursting behaviour in detail. This behaviour is classified according to the dynamics of separated slow and fast subsystems. It is shown to be of the Fold-Hopf type, a type which was previously only described in the context of neuronal systems, but not in the context of signal transduction in the cell.
    Comment: 13 pages, 5 figures
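    The slow/fast decomposition used to classify the bursting can be illustrated with a generic two-timescale system. The sketch below Euler-integrates a van der Pol-style relaxation oscillator, not the calcium model itself; the variables and the timescale separation eps are purely illustrative.

```python
def simulate(steps=20000, dt=1e-3, eps=0.05):
    """Euler-integrate a toy fast-slow system: x is the fast variable,
    y the slow one, eps the timescale separation (hypothetical values)."""
    x, y = 0.5, 0.0
    traj = []
    for _ in range(steps):
        dx = (x - x ** 3 / 3 - y) / eps  # fast subsystem: cubic nullcline
        dy = x                           # slow subsystem: drifts y along it
        x, y = x + dt * dx, y + dt * dy
        traj.append((x, y))
    return traj

xs = [p[0] for p in simulate()]
print(min(xs), max(xs))  # relaxation oscillation between the two branches
```

    In a bursting classification, one freezes the slow variable, analyzes the bifurcations of the fast subsystem as that variable is swept, and reads off the burst type from which bifurcations start and end the active phase.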

    Sequence Transduction with Recurrent Neural Networks

    Many machine learning tasks can be expressed as the transformation---or \emph{transduction}---of input sequences into output sequences: speech recognition, machine translation, protein secondary structure prediction and text-to-speech to name but a few. One of the key challenges in sequence transduction is learning to represent both the input and output sequences in a way that is invariant to sequential distortions such as shrinking, stretching and translating. Recurrent neural networks (RNNs) are a powerful sequence learning architecture that has proven capable of learning such representations. However RNNs traditionally require a pre-defined alignment between the input and output sequences to perform transduction. This is a severe limitation since \emph{finding} the alignment is the most difficult aspect of many sequence transduction problems. Indeed, even determining the length of the output sequence is often challenging. This paper introduces an end-to-end, probabilistic sequence transduction system, based entirely on RNNs, that is in principle able to transform any input sequence into any finite, discrete output sequence. Experimental results for phoneme recognition are provided on the TIMIT speech corpus.
    Comment: First published in the International Conference on Machine Learning (ICML) 2012 Workshop on Representation Learning
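    The alignment marginalization at the heart of such a transducer can be sketched with a small forward recursion over the output lattice: at each node (t, u) the model either emits a blank (advancing in time) or the next label (advancing in the output), and the total probability sums over all monotonic paths. The probabilities below are toy values, not network outputs.

```python
import numpy as np

def transducer_forward(blank, emit):
    """Total probability of all monotonic alignments in a transducer lattice.
    blank[t, u]: prob of emitting blank at node (t, u), advancing in time;
    emit[t, u]:  prob of emitting the next label, advancing in the output."""
    T, U = blank.shape               # U = len(labels) + 1 lattice columns
    alpha = np.zeros((T, U))
    alpha[0, 0] = 1.0
    for t in range(T):
        for u in range(U):
            if t > 0:
                alpha[t, u] += alpha[t - 1, u] * blank[t - 1, u]
            if u > 0:
                alpha[t, u] += alpha[t, u - 1] * emit[t, u - 1]
    return alpha[T - 1, U - 1] * blank[T - 1, U - 1]  # final blank ends a path

# toy lattice: 2 time steps, 1 output label, constant probabilities
b = np.full((2, 2), 0.6)
e = np.full((2, 2), 0.4)
print(transducer_forward(b, e))  # two alignments: 2 * (0.4 * 0.6) * 0.6 = 0.288
```

    Training maximizes this marginal probability, so no pre-defined alignment is ever needed; in practice the recursion is carried out in log space for numerical stability.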

    Deep Learning for Audio Signal Processing

    Given the recent surge in developments of deep learning, this article provides a review of the state-of-the-art deep learning techniques for audio signal processing. Speech, music, and environmental sound processing are considered side-by-side, in order to point out similarities and differences between the domains, highlighting general methods, problems, key references, and potential for cross-fertilization between areas. The dominant feature representations (in particular, log-mel spectra and raw waveform) and deep learning models are reviewed, including convolutional neural networks, variants of the long short-term memory architecture, as well as more audio-specific neural network models. Subsequently, prominent deep learning application areas are covered, i.e. audio recognition (automatic speech recognition, music information retrieval, environmental sound detection, localization and tracking) and synthesis and transformation (source separation, audio enhancement, generative models for speech, sound, and music synthesis). Finally, key issues and future questions regarding deep learning applied to audio signal processing are identified.
    Comment: 15 pages, 2 pdf figures
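    As an illustration of the log-mel features the review highlights, here is a minimal numpy-only extractor (framing, FFT magnitude, triangular mel filterbank). The parameter choices are common defaults, not values prescribed by the article.

```python
import numpy as np

def log_mel_spectrogram(signal, sr, n_fft=512, hop=256, n_mels=40):
    """Minimal log-mel feature extractor: windowed magnitude spectrogram
    projected onto a triangular mel filterbank, then log-compressed."""
    frames = [signal[i:i + n_fft] * np.hanning(n_fft)
              for i in range(0, len(signal) - n_fft + 1, hop)]
    mag = np.abs(np.fft.rfft(frames, axis=1))          # (frames, n_fft//2 + 1)
    # mel scale conversions and filterbank edge frequencies
    hz_to_mel = lambda f: 2595 * np.log10(1 + f / 700)
    mel_to_hz = lambda m: 700 * (10 ** (m / 2595) - 1)
    mel_pts = mel_to_hz(np.linspace(0, hz_to_mel(sr / 2), n_mels + 2))
    bins = np.floor((n_fft + 1) * mel_pts / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        l, c, r = bins[m - 1], bins[m], bins[m + 1]
        fb[m - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)  # rising edge
        fb[m - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)  # falling edge
    return np.log(mag @ fb.T + 1e-10)                  # (frames, n_mels)

sig = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)  # 1 s, 440 Hz tone
feats = log_mel_spectrogram(sig, 16000)
print(feats.shape)
```

    Production systems typically use a tuned library implementation instead, but the pipeline above is the same shape: frame, transform, pool into mel bands, compress.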

    Somatosensory neurons integrate the geometry of skin deformation and mechanotransduction channels to shape touch sensing.

    Touch sensation hinges on force transfer across the skin and activation of mechanosensitive ion channels along the somatosensory neurons that invade the skin. This skin-nerve sensory system demands a quantitative model that spans the application of mechanical loads to channel activation. Unlike prior models of the dynamic responses of touch receptor neurons (TRNs) in Caenorhabditis elegans (Eastwood et al., 2015), which substituted a single effective channel for the ensemble along the TRNs, this study integrates body mechanics and the spatial recruitment of the various channels. We demonstrate that this model captures mechanical properties of the worm's body and accurately reproduces neural responses to simple stimuli. It also captures responses to complex stimuli featuring non-trivial spatial patterns, like extended or multiple contacts that could not be addressed otherwise. We illustrate the importance of these effects with new experiments revealing that skin-neuron composites respond to pre-indentation with increased currents rather than adapting to persistent stimulation.
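    The spatial-recruitment idea can be caricatured with a Boltzmann gating model: each channel's open probability depends on the local force it experiences, and the measured current sums over the ensemble. All parameter values below are hypothetical, not fitted to the worm data.

```python
from math import exp

def open_probability(force, f_half=2.0, sensitivity=0.5):
    """Boltzmann model of mechanosensitive channel gating: probability a
    channel opens given the local force (pN). f_half is the force at
    half-activation; sensitivity sets the steepness (both hypothetical)."""
    return 1.0 / (1.0 + exp(-(force - f_half) / sensitivity))

def total_current(forces, unitary_current=1.6):
    """Summed current (pA) over a spatial ensemble of channels, each seeing
    a different local force from the skin deformation."""
    return unitary_current * sum(open_probability(f) for f in forces)

# channels near the contact point feel more force than distant ones
forces = [4.0, 3.0, 2.0, 1.0, 0.5]
print(total_current(forces))
```

    This is the key departure from a single-effective-channel model: an extended or pre-indented stimulus changes the force profile across the ensemble, and hence the current, even when the peak force is unchanged.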