
    Novel application of stochastic modeling techniques to long-term, high-resolution time-lapse microscopy of cortical axons

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Brain and Cognitive Sciences, 2010. Cataloged from the PDF version of the thesis. Includes bibliographical references (p. 64-70). Axons exhibit a rich variety of behaviors, such as elongation, turning, branching, and fasciculation, all in service of the complex goal of wiring up the brain. To quantify these behaviors, I have developed a system for in vitro imaging of axon growth cones with time-lapse fluorescence microscopy. Image tiles are automatically captured and assembled into a mosaic image of a square-millimeter region. GFP-expressing mouse cortical neurons can be imaged once every few minutes for up to weeks if phototoxicity is minimized. In the resulting data, the trajectories of axon growth cones appear to alternate between long, straight segments and sudden turns. I first rigorously test the idea that the straight segments are generated by a biased random walk by analyzing the correlation between growth cone steps in the time and frequency domains. To formalize and test the intuition that sharp turns join straight segments, I fit a hidden Markov model to time series of growth cone velocity vectors. The hidden state variable represents the bias direction of a biased random walk and specifies the mean and variance of a Gaussian distribution from which velocities are drawn. Rotational symmetry is used to constrain the transition probabilities of the hidden variable, as well as the Gaussian distributions for the hidden states. Maximum likelihood estimation of the model parameters shows that the most probable behavior is to remain in the same hidden state. The second most probable behavior is to turn by about 40 degrees. Smaller-angle turns are highly improbable, consistent with the idea that the axon makes sudden turns. When the same hidden Markov model was applied to artificially generated meandering trajectories, the transition probabilities were significant only for small-angle turns.
This novel application of stochastic models to growth cone trajectories provides a quantitative framework for testing interventions (e.g. pharmacological, activity-related) that can affect axonal growth cone movement and turning. For example, manipulations that inhibit actin polymerization increase the frequency and angle of turns made by the growth cone. More generally, axon behaviors may be useful in deducing computational principles for wiring up circuits. By Neville Espi Sanjana, Ph.D.
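As a rough illustration of the picture described above (a sketch with assumed parameters, not the thesis's fitted model), a growth-cone-like trajectory can be simulated as a biased random walk whose hidden heading state usually persists and occasionally jumps by about 40 degrees, with observed velocities drawn from a Gaussian around the current heading:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_trajectory(n_steps=500, turn_prob=0.02, turn_angle=np.deg2rad(40),
                        speed=1.0, noise=0.3):
    """Biased random walk whose hidden state is a heading direction.

    At each step the heading either persists (the most probable behavior)
    or jumps by ~40 degrees (a sudden turn); the observed velocity is the
    heading vector plus Gaussian noise, mirroring the hidden Markov model
    with Gaussian emissions described in the abstract.
    """
    heading = 0.0
    positions = np.zeros((n_steps + 1, 2))
    for t in range(n_steps):
        if rng.random() < turn_prob:                 # rare sudden turn
            heading += turn_angle * rng.choice([-1.0, 1.0])
        mean_v = speed * np.array([np.cos(heading), np.sin(heading)])
        v = mean_v + noise * rng.standard_normal(2)  # Gaussian emission
        positions[t + 1] = positions[t] + v
    return positions

traj = simulate_trajectory()
```

Fitting an HMM to the velocity time series of such a trajectory would recover a dominant self-transition probability and a secondary mode near the turn angle, which is the signature the thesis tests for.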

    Learning Combinations of Activation Functions

    In the last decade, an active area of research has been devoted to designing novel activation functions that help deep neural networks converge and obtain better performance. The training procedure of these architectures usually involves optimizing only the weights of their layers, while non-linearities are generally pre-specified and their (possible) parameters are usually treated as hyper-parameters to be tuned manually. In this paper, we introduce two approaches to automatically learn different combinations of base activation functions (such as the identity function, ReLU, and tanh) during the training phase. We present a thorough comparison of our novel approaches with well-known architectures (such as LeNet-5, AlexNet, and ResNet-56) on three standard datasets (Fashion-MNIST, CIFAR-10, and ILSVRC-2012), showing substantial improvements in overall performance, such as an increase in top-1 accuracy for AlexNet on ILSVRC-2012 of 3.01 percentage points. Comment: 6 pages, 3 figures. Published as a conference paper at ICPR 2018. Code: https://bitbucket.org/francux/learning_combinations_of_activation_function
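One of the simplest ways to realize this idea, sketched here in plain NumPy with assumed function names rather than the paper's actual architectures, is a convex combination of base activations whose mixture logits would be trained jointly with the network weights:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax, keeping mixture weights positive
    and summing to one."""
    e = np.exp(z - z.max())
    return e / e.sum()

def combined_activation(x, logits):
    """Convex combination of base activations (identity, ReLU, tanh).

    In a full implementation, `logits` would be a trainable parameter
    of each layer, updated by backpropagation alongside the weights.
    """
    w = softmax(logits)
    bases = np.stack([x, np.maximum(x, 0.0), np.tanh(x)])
    return np.tensordot(w, bases, axes=1)

x = np.array([-2.0, 0.0, 2.0])
y = combined_activation(x, logits=np.zeros(3))  # start from equal weights
```

With zero logits the mixture starts as a uniform average of the three bases; training can then shift weight toward whichever non-linearity suits the layer.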

    Information recovery from rank-order encoded images

    The time to detection of a visual stimulus by the primate eye is recorded at 100–150 ms. This near-instantaneous recognition occurs despite the considerable processing required by the several stages of the visual pathway to recognise and react to a visual scene. How this is achieved is still a matter of speculation. Rank-order codes have been proposed as a means of encoding by the primate eye in the rapid transmission of the initial burst of information from the sensory neurons to the brain. We study the efficiency of rank-order codes in encoding perceptually-important information in an image. VanRullen and Thorpe built a model of the ganglion cell layers of the retina to simulate and study the viability of rank-order as a means of encoding by retinal neurons. We validate their model and quantify the information retrieved from rank-order encoded images in terms of the visually-important information recovered. Towards this goal, we apply the ‘perceptual information preservation algorithm’ proposed by Petrovic and Xydeas, after slight modification. We observe a low information recovery due to losses suffered during the rank-order encoding and decoding processes. We propose to minimise these losses to recover maximum information in minimum time from rank-order encoded images. We first maximise information recovery by using the pseudo-inverse of the filter-bank matrix to minimise losses during rank-order decoding. We then apply the biological principle of lateral inhibition to minimise losses during rank-order encoding. In doing so, we propose the Filter-overlap Correction algorithm. To test the performance of rank-order codes in a biologically realistic model, we design and simulate a model of the foveal-pit ganglion cells of the retina, keeping close to biological parameters. We use this as a rank-order encoder and analyse its performance relative to VanRullen and Thorpe’s retinal model.
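A minimal sketch of the scheme (illustrative only; the function names and the fixed decreasing amplitude schedule are assumptions, not VanRullen and Thorpe's exact model) transmits only the rank order and signs of the filter responses, then decodes with the pseudo-inverse of the filter bank:

```python
import numpy as np

def rank_order_encode(signal, filters):
    """Keep only the order of filter responses, largest magnitude first,
    plus their signs; the analog amplitudes themselves are discarded."""
    responses = filters @ signal
    order = np.argsort(-np.abs(responses))
    return order, np.sign(responses)

def rank_order_decode(order, signs, filters):
    """Reconstruct from rank alone: amplitudes are replaced by a fixed
    decreasing schedule, and the pseudo-inverse of the filter bank
    minimises the decoding loss when filters overlap."""
    schedule = np.linspace(1.0, 0.0, len(order))
    amplitudes = np.zeros(len(order))
    amplitudes[order] = signs[order] * schedule
    return np.linalg.pinv(filters) @ amplitudes

filters = np.eye(4)                      # trivially orthonormal toy "bank"
signal = np.array([3.0, -1.0, 2.0, 0.5])
order, signs = rank_order_encode(signal, filters)
recon = rank_order_decode(order, signs, filters)
```

With an orthonormal bank the reconstruction preserves the rank order of coefficient magnitudes; overlapping, correlated filters are what introduce the decoding losses that the pseudo-inverse and the Filter-overlap Correction are meant to reduce.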

    Learning as a Nonlinear Line of Attraction for Pattern Association, Classification and Recognition

    Development of a mathematical model for learning a nonlinear line of attraction is presented in this dissertation, in contrast to the conventional recurrent neural network model in which memory is stored in attractive fixed points at discrete locations in state space. A nonlinear line of attraction is the encapsulation of attractive fixed points scattered in state space as an attractive nonlinear line, describing patterns with similar characteristics as a family of patterns. It is usually imperative to guarantee the convergence of the dynamics of the recurrent network for associative learning and recall. We propose to alter this picture: if the brain remembers by converging to the state representing familiar patterns, it should also diverge from such states when presented with an unknown encoded representation of a visual image. The design of the dynamics of the nonlinear line attractor network to operate between stable and unstable states is the second contribution of this dissertation research. These criteria can be used to circumvent the plasticity-stability dilemma by using the unstable state as an indicator to create a new line for an unfamiliar pattern. This novel learning strategy utilizes the stability (convergence) and instability (divergence) criteria of the designed dynamics to induce self-organizing behavior. The self-organizing behavior of the nonlinear line attractor model can manifest complex dynamics in an unsupervised manner. The third contribution of this dissertation is the introduction of the concept of a manifold of color perception. The fourth contribution is the development of a nonlinear dimensionality reduction technique that embeds a set of related observations into a low-dimensional space utilizing the learned memory matrices of the nonlinear line attractor network. Development of a system for affective state computation is also presented in this dissertation.
This system is capable of extracting the user's mental state in real time using a low-cost computer. It is successfully interfaced with an advanced learning environment for human-computer interaction.
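The convergence idea can be caricatured in two dimensions (a toy sketch, not the dissertation's memory-matrix formulation): states relax toward a stored nonlinear line, and the residual distance after relaxation acts as the familiarity indicator that would trigger creation of a new line:

```python
import numpy as np

def attractor_line(x):
    """The stored family of patterns, encapsulated as a nonlinear line."""
    return np.tanh(x)

def recall(state, steps=50, rate=0.5):
    """Relax a state toward the line of attraction; the residual distance
    after relaxation serves as a familiarity indicator (small = familiar,
    large = an unstable/unfamiliar state, candidate for a new line)."""
    x, y = state
    for _ in range(steps):
        y += rate * (attractor_line(x) - y)
    return (x, y), abs(y - attractor_line(x))

(_, y_final), residual = recall((1.0, 3.0))  # converges onto y = tanh(x)
```

In the full model the line is learned from data via memory matrices rather than given in closed form, and divergence (rather than mere large residual) flags unfamiliar inputs.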

    Understanding Physiological and Degenerative Natural Vision Mechanisms to Define Contrast and Contour Operators

    BACKGROUND: Dynamical systems like neural networks based on lateral inhibition have a large field of applications in image processing, robotics, and morphogenesis modeling. In this paper, we propose some examples of dynamical flows used in image contrasting and contouring. METHODOLOGY: First we present the physiological basis of retinal function by showing the role of lateral inhibition in the generation of optical illusions and pathologic processes. Then, based on these biological considerations about real vision mechanisms, we study an enhancement method for contrasting medical images, using either a discrete neural network approach or its continuous version, i.e. a non-isotropic diffusion-reaction partial differential system. Following this, we introduce other continuous operators based on similar biomimetic approaches: a chemotactic contrasting method, a viability contouring algorithm, and an attentional focus operator. Then, we introduce the new notion of mixed potential Hamiltonian flows; we compare it with the watershed method and we use it for contouring. CONCLUSIONS: We conclude by showing the utility of these biomimetic methods with some examples of application in medical imaging and computer-assisted surgery.
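A one-dimensional discrete sketch of lateral inhibition (with illustrative weights, not the paper's operators) shows the contrast-enhancing effect at a luminance step, the same mechanism behind Mach-band illusions:

```python
import numpy as np

def lateral_inhibition(signal, w_center=1.0, w_surround=0.4):
    """Each unit excites itself and inhibits its two neighbours: a 1-D
    discrete analogue of the retinal center-surround organisation.
    The net effect sharpens contrast at edges, producing overshoot on
    the bright side and undershoot on the dark side of a step."""
    kernel = np.array([-w_surround, w_center, -w_surround])
    return np.convolve(signal, kernel, mode="same")

step = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])  # a luminance edge
enhanced = lateral_inhibition(step)
```

In the flat interior the center and surround nearly cancel, while at the edge the asymmetric surround produces the characteristic over/undershoot; the continuous diffusion-reaction systems in the paper generalize this local competition.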

    Biologically inspired feature extraction for rotation and scale tolerant pattern analysis

    Biologically motivated information processing has been an important area of scientific research for decades. The central topic addressed in this dissertation is the utilization of lateral inhibition and, more generally, linear networks with recurrent connectivity, along with complex-log conformal mapping, in machine-based implementations of information encoding, feature extraction, and pattern recognition. The reasoning behind and method for a spatially uniform implementation of an inhibitory/excitatory network model in the framework of the non-uniform log-polar transform is presented. For the space-invariant connectivity model characterized by a Toeplitz-Block-Toeplitz matrix, the overall network response is obtained without matrix inverse operations, provided the connection-matrix generating function is bounded by unity. It is shown that for a network with an inter-neuron connection function expandable in a Fourier series in polar angle, the overall network response is steerable. The decorrelating/whitening characteristics of networks with lateral inhibition are used to develop space-invariant pre-whitening kernels specialized for specific categories of input signals. These filters have an extremely small memory footprint and are successfully utilized to improve the performance of adaptive neural whitening algorithms. Finally, a method for feature extraction based on a localized Independent Component Analysis (ICA) transform in the log-polar domain, aided by the previously developed pre-whitening filters, is implemented. Since the output codes produced by ICA are very sparse, a small number of non-zero coefficients is sufficient to encode input data and obtain reliable pattern recognition performance.
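The log-polar (complex-log) sampling underlying the rotation and scale tolerance can be sketched as follows (a minimal illustration with assumed grid sizes, not the dissertation's implementation): rings are spaced exponentially in radius, so a rotation or rescaling of the input image becomes a shift along the wedge or ring index, respectively.

```python
import numpy as np

def log_polar_grid(shape, n_rings=32, n_wedges=64):
    """Map log-polar samples (ring, wedge) back to Cartesian pixel
    coordinates around the image center.  Sampling density falls off
    with eccentricity, mimicking the retina, and input rotation/scaling
    appear as translations in the (wedge, ring) indices."""
    h, w = shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r_max = min(cy, cx)
    rho = np.exp(np.linspace(0.0, np.log(r_max), n_rings))   # log-spaced radii
    theta = np.linspace(0.0, 2 * np.pi, n_wedges, endpoint=False)
    ys = cy + rho[:, None] * np.sin(theta)[None, :]
    xs = cx + rho[:, None] * np.cos(theta)[None, :]
    return ys, xs

ys, xs = log_polar_grid((65, 65))
```

An image would be resampled at these (ys, xs) coordinates (e.g. by interpolation) to obtain the log-polar representation on which the space-invariant network operates.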

    Herding as a Learning System with Edge-of-Chaos Dynamics

    Herding defines a deterministic dynamical system at the edge of chaos. It generates a sequence of model states and parameters by alternating parameter perturbations with state maximizations, where the sequence of states can be interpreted as "samples" from an associated MRF model. Herding differs from maximum likelihood estimation in that the sequence of parameters does not converge to a fixed point, and differs from an MCMC posterior sampling approach in that the sequence of states is generated deterministically. Herding may be interpreted as a "perturb and map" method where the parameter perturbations are generated using a deterministic nonlinear dynamical system rather than randomly from a Gumbel distribution. This chapter studies the distinct statistical characteristics of the herding algorithm and shows that the fast convergence rate of the controlled moments may be attributed to edge-of-chaos dynamics. The herding algorithm can also be generalized to models with latent variables and to a discriminative learning setting. The perceptron cycling theorem ensures that the fast moment-matching property is preserved in the more general framework.
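For a single binary variable the alternation of state maximization and parameter perturbation can be written in a few lines (a textbook-style sketch of the basic update, not the chapter's general latent-variable algorithm):

```python
def herd_binary(mu, n_steps=1000):
    """Herding for one binary variable s in {-1, +1} with target moment mu.

    Each step deterministically picks the state maximizing w * s, then
    perturbs the parameter by the moment-matching residual w += mu - s.
    Because w stays bounded, the empirical average of the samples matches
    mu with error O(1/T), faster than the O(1/sqrt(T)) of i.i.d. sampling.
    """
    w = mu
    samples = []
    for _ in range(n_steps):
        s = 1.0 if w >= 0 else -1.0   # state maximization
        w += mu - s                   # parameter perturbation
        samples.append(s)
    return samples

samples = herd_binary(mu=0.3)
```

The sequence of states is fully deterministic, yet it never settles into a fixed point; the weight w traces the non-converging parameter trajectory the abstract describes.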

    Neuronal Growth Cone Dynamics are Regulated by a Nitric Oxide-Initiated Second Messenger Pathway.

    During development, neurons must find their way to and make connections with their appropriate targets. Growth cones are dynamic, motile structures that are integral to the establishment of appropriate connectivity during this wiring process. As growth cones migrate through their environment, they encounter guidance cues that direct their migration to their appropriate synaptic targets. The gaseous messenger nitric oxide (NO), which diffuses across the plasma membrane to act on intracellular targets, is a signaling molecule that affects growth cone motility. However, most studies have examined the effects of NO on growth cone morphology when applied in large concentrations and to entire cells. In addition, the intracellular second messenger cascade activated by NO to bring about these changes in growth cone morphology is not well understood. Therefore, this dissertation addresses the effects that a spatially- and temporally-restricted application of physiological amounts of NO can have on individual growth cone morphology, on the second messenger pathway that is activated by this application of NO, and on the calcium cascades that result and ultimately affect growth cone morphology. Helisoma trivolvis, a pond snail, is an excellent model system for this type of research because it has a well-defined nervous system and cultured neurons form large growth cones. In the present study, local application of NO to Helisoma trivolvis B5 neurons results in an increase in filopodial length, a decrease in filopodial number, and an increase in the intracellular calcium concentration ([Ca2+]i). In B5 neurons, the effects of NO on growth cone behavior and [Ca2+]i are mediated via sGC, protein kinase G, cyclic adenosine diphosphate ribose, and ryanodine receptor-mediated intracellular calcium release. This study demonstrates that neuronal growth cone pathfinding in vitro is affected by a single spatially- and temporally-restricted exposure to NO. 
Furthermore, NO acts via a second messenger cascade, resulting in a calcium increase that leads to cytoskeletal changes. These results suggest that NO may be a signal that promotes appropriate pathfinding and/or target recognition within the developing nervous system. Taken together, these data indicate that NO may be an important messenger during the development of the nervous system in vivo.