    Staged decline of neuronal function in vivo in an animal model of Alzheimer's disease

    The accumulation of amyloid-β in the brain is an essential feature of Alzheimer's disease. However, the impact of amyloid-β accumulation on neuronal dysfunction at the single-cell level in vivo is poorly understood. Here we investigate the progression of amyloid-β load in relation to neuronal dysfunction in the visual system of the APP23×PS45 mouse model of Alzheimer's disease. Using in vivo two-photon calcium imaging in the visual cortex, we demonstrate that a progressive deterioration of neuronal tuning for the orientation of visual stimuli occurs in parallel with the age-dependent increase in amyloid-β load. Importantly, we find this deterioration only in neurons that are hyperactive during spontaneous activity. This impairment of visual cortical circuit function also correlates with pronounced deficits in visual-pattern discrimination. Together, our results identify distinct stages of decline in sensory cortical performance in vivo as a function of the increasing amyloid-β load.
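
    The deterioration of orientation tuning described above is typically quantified per neuron with a selectivity index computed from trial-averaged responses. The sketch below is not from the paper; it assumes hypothetical dF/F response values and uses the standard orientation selectivity index (R_pref - R_orth) / (R_pref + R_orth) purely to illustrate the kind of single-cell measure involved.

        # Minimal sketch (not from the paper): quantifying orientation tuning of a
        # single neuron from trial-averaged calcium responses. Response values are
        # hypothetical placeholders.
        import numpy as np

        def orientation_selectivity_index(responses, orientations_deg):
            """responses: mean response per stimulus orientation (e.g. dF/F).
            orientations_deg: stimulus orientations in degrees, same length."""
            responses = np.asarray(responses, dtype=float)
            orientations = np.asarray(orientations_deg, dtype=float) % 180.0
            i_pref = int(np.argmax(responses))                    # preferred orientation
            orth = (orientations[i_pref] + 90.0) % 180.0          # orthogonal orientation
            i_orth = int(np.argmin(np.abs(orientations - orth)))  # closest sampled orientation
            r_pref, r_orth = responses[i_pref], responses[i_orth]
            return (r_pref - r_orth) / (r_pref + r_orth + 1e-12)

        # Hypothetical tuning curves: a sharply tuned neuron vs. a weakly tuned one
        oris = [0, 45, 90, 135]
        print(orientation_selectivity_index([0.9, 0.3, 0.1, 0.3], oris))    # 0.8, sharply tuned
        print(orientation_selectivity_index([0.5, 0.45, 0.4, 0.45], oris))  # ~0.1, weakly tuned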

    Causal connectivity of evolved neural networks during behavior

    To show how causal interactions in neural dynamics are modulated by behavior, it is valuable to analyze these interactions without perturbing or lesioning the neural mechanism. This paper proposes a method, based on a graph-theoretic extension of vector autoregressive modeling and 'Granger causality', for characterizing causal interactions generated within intact neural mechanisms. This method, called 'causal connectivity analysis', is illustrated via model neural networks optimized for controlling target fixation in a simulated head-eye system, in which the structure of the environment can be experimentally varied. Causal connectivity analysis of this model yields novel insights into neural mechanisms underlying sensorimotor coordination. In contrast to networks supporting comparatively simple behavior, networks supporting rich adaptive behavior show a higher density of causal interactions, as well as a stronger causal flow from sensory inputs to motor outputs. They also show different arrangements of 'causal sources' and 'causal sinks': nodes that differentially affect, or are affected by, the remainder of the network. Finally, analysis of causal connectivity can predict the functional consequences of network lesions. These results suggest that causal connectivity analysis may have useful applications in the analysis of neural dynamics.
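
    As an illustration of the underlying machinery (not the authors' code), the sketch below builds a directed 'causal connectivity' graph from simulated time series using pairwise Granger causality: an F-test comparing a restricted autoregressive model against one augmented with the candidate driver's past. Node count, lag order, coupling strengths and the significance threshold are illustrative assumptions. Causal sources and sinks are then ranked by out-degree minus in-degree, in the spirit of the abstract.

        # Minimal sketch: pairwise Granger causality on simulated "neural" time
        # series, assembled into a directed connectivity graph; nodes are ranked
        # as causal sources (+) or sinks (-) by out-degree minus in-degree.
        import numpy as np
        from scipy import stats

        def granger_pvalue(x, y, lag=2):
            """P-value for 'y Granger-causes x': compare x ~ past(x) against
            x ~ past(x) + past(y) with an F-test on the residual sums of squares."""
            n = len(x)
            rows = n - lag
            past_x = np.column_stack([x[lag - k - 1:n - k - 1] for k in range(lag)])
            past_y = np.column_stack([y[lag - k - 1:n - k - 1] for k in range(lag)])
            target = x[lag:]
            ones = np.ones((rows, 1))
            X_r = np.hstack([ones, past_x])            # restricted model
            X_f = np.hstack([ones, past_x, past_y])    # full model
            rss_r = np.sum((target - X_r @ np.linalg.lstsq(X_r, target, rcond=None)[0]) ** 2)
            rss_f = np.sum((target - X_f @ np.linalg.lstsq(X_f, target, rcond=None)[0]) ** 2)
            df1, df2 = lag, rows - X_f.shape[1]
            F = ((rss_r - rss_f) / df1) / (rss_f / df2)
            return stats.f.sf(F, df1, df2)

        rng = np.random.default_rng(0)
        T, n_nodes = 2000, 4
        data = rng.standard_normal((T, n_nodes))
        data[2:, 1] += 0.6 * data[1:-1, 0]             # node 0 drives node 1 (lag 1)
        data[2:, 2] += 0.5 * data[:-2, 1]              # node 1 drives node 2 (lag 2)

        adj = np.zeros((n_nodes, n_nodes), dtype=bool)
        for i in range(n_nodes):
            for j in range(n_nodes):
                if i != j:
                    adj[i, j] = granger_pvalue(data[:, j], data[:, i]) < 0.01  # edge i -> j
        net_flow = adj.sum(axis=1) - adj.sum(axis=0)   # out-degree minus in-degree
        print("causal edges (i -> j):\n", adj.astype(int))
        print("source (+) / sink (-) score per node:", net_flow)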

    Texture Synthesis Through Convolutional Neural Networks and Spectrum Constraints

    This paper presents a significant improvement to the synthesis of texture images using convolutional neural networks (CNNs), making use of constraints on the Fourier spectrum of the results. More precisely, texture synthesis is regarded as a constrained optimization problem, with constraints conditioning both the Fourier spectrum and the statistical features learned by CNNs. In contrast with existing methods, the presented method inherits from previous CNN approaches the ability to depict local structures and fine-scale details, and at the same time yields coherent large-scale structures, even in the case of quasi-periodic images. This is done at no extra computational cost. Synthesis experiments on various images show a clear improvement compared to a recent state-of-the-art method relying on CNN constraints only.
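
    A minimal sketch of the spectrum-constraint ingredient is given below; it is a simplified illustration, not the paper's exact projection. It imposes the exemplar texture's Fourier modulus on the current synthesis while keeping the iterate's phase; in the full method such a spectrum term is combined with a Gatys-style CNN Gram-matrix loss inside the constrained optimization, and the grayscale handling here is an assumption made for brevity.

        # Simplified illustration of a Fourier-spectrum constraint: project the
        # current synthesis onto images whose Fourier modulus matches the exemplar,
        # keeping the current phase. Grayscale only, for brevity.
        import numpy as np

        def project_spectrum(current, exemplar):
            """Replace the Fourier modulus of `current` (H x W) with that of
            `exemplar`, keeping the phase of `current`."""
            F_cur = np.fft.fft2(current)
            F_ref = np.fft.fft2(exemplar)
            phase = F_cur / (np.abs(F_cur) + 1e-12)    # unit-modulus phase of the iterate
            projected = np.abs(F_ref) * phase          # impose the exemplar's modulus
            return np.real(np.fft.ifft2(projected))

        # Toy usage with random arrays standing in for images; in practice this
        # step would alternate with gradient steps on the CNN feature loss.
        rng = np.random.default_rng(0)
        exemplar = rng.random((64, 64))
        synth = rng.random((64, 64))
        synth = project_spectrum(synth, exemplar)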

    On Using Backpropagation for Speech Texture Generation and Voice Conversion

    Inspired by recent work on neural network image generation that relies on backpropagation towards the network inputs, we present a proof-of-concept system for speech texture synthesis and voice conversion based on two mechanisms: approximate inversion of the representation learned by a speech recognition neural network, and matching statistics of neuron activations between different source and target utterances. Similar to image texture synthesis and neural style transfer, the system works by optimizing a cost function with respect to the input waveform samples. To this end we use a differentiable mel-filterbank feature extraction pipeline and train a convolutional CTC speech recognition network. Our system is able to extract speaker characteristics from very limited amounts of target speaker data, as little as a few seconds, and can be used to generate realistic speech babble or reconstruct an utterance in a different voice. Comment: Accepted to ICASSP 201
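
    The sketch below illustrates the core mechanism under stated assumptions: the paper's trained convolutional CTC recognizer is replaced by an untrained Conv1d stack, and only per-channel activation means and variances are matched, but the gradient still flows through a differentiable mel-filterbank back to the raw waveform samples, which are the optimization variable.

        # Minimal sketch, not the paper's system: backpropagation towards the input
        # waveform through a differentiable mel-filterbank and a stand-in network,
        # matching simple activation statistics of a (placeholder) target utterance.
        import torch
        import torchaudio

        mel = torchaudio.transforms.MelSpectrogram(sample_rate=16000, n_fft=400,
                                                   hop_length=160, n_mels=40)
        net = torch.nn.Sequential(                        # stand-in for the recognizer
            torch.nn.Conv1d(40, 64, kernel_size=5, padding=2), torch.nn.ReLU(),
            torch.nn.Conv1d(64, 64, kernel_size=5, padding=2), torch.nn.ReLU(),
        )

        def activation_stats(wave):
            feats = torch.log(mel(wave) + 1e-6)           # (1, n_mels, frames)
            acts = net(feats)                             # (1, channels, frames)
            return acts.mean(dim=2), acts.var(dim=2)      # per-channel statistics

        target = torch.randn(1, 16000)                    # placeholder for a target utterance
        with torch.no_grad():
            t_mean, t_var = activation_stats(target)

        wave = torch.randn(1, 16000, requires_grad=True)  # waveform samples are the variable
        opt = torch.optim.Adam([wave], lr=1e-3)
        for step in range(200):
            opt.zero_grad()
            m, v = activation_stats(wave)
            loss = ((m - t_mean) ** 2).mean() + ((v - t_var) ** 2).mean()
            loss.backward()                               # gradients reach the raw samples
            opt.step()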