6 research outputs found

    A Mean-Field Method for Generic Conductance-Based Integrate-and-Fire Neurons with Finite Timescales

    The construction of transfer functions in theoretical neuroscience plays an important role in determining the spiking-rate behavior of neurons in networks. These functions can be obtained through various fitting methods, but the biological relevance of the fitted parameters is not always clear. For stationary inputs, however, such functions can be obtained without adjusting free parameters by using mean-field methods. In this work, we extend current Fokker-Planck approaches to account for the concurrent influence of colored and multiplicative noise terms on generic conductance-based integrate-and-fire neurons. Applying the diffusion approximation, we reduce the resulting stochastic system to a one-dimensional Langevin equation. An effective Fokker-Planck equation is then constructed using Fox theory and solved numerically to obtain the transfer function. The solution reproduces the transfer-function behavior of simulated neurons across a wide range of parameters. The method can easily be extended to account for different sources of noise with various multiplicative terms, and in principle it can be applied to other types of problems. Comment: 11 pages, 6 figures, research article
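The stationary firing rate that such a transfer function describes can also be estimated directly by simulating the underlying stochastic dynamics. Below is a minimal Python sketch of Euler-Maruyama integration of a conductance-based integrate-and-fire neuron driven by an Ornstein-Uhlenbeck (colored, multiplicative) conductance. All parameter values are illustrative assumptions, not taken from the paper, and the sketch does not implement the paper's effective Fokker-Planck construction.

```python
import numpy as np

# Illustrative parameters (NOT from the paper) for a conductance-based
# leaky integrate-and-fire neuron with an Ornstein-Uhlenbeck conductance.
V_REST, V_THRESH, V_RESET = -70.0, -50.0, -65.0   # mV
E_SYN = 0.0                                        # synaptic reversal potential (mV)
TAU_M, TAU_S = 20.0, 5.0                           # membrane / noise correlation times (ms)
G_MEAN, G_STD = 0.5, 0.1                           # mean and std of the dimensionless conductance

def firing_rate(t_total=5_000.0, dt=0.1, seed=0):
    """Estimate the stationary firing rate (spikes/ms) by Euler-Maruyama
    integration of the coupled membrane / OU-conductance equations."""
    rng = np.random.default_rng(seed)
    v, g = V_REST, G_MEAN
    spikes = 0
    for _ in range(int(t_total / dt)):
        # OU process: colored noise with correlation time TAU_S
        g += (-(g - G_MEAN) / TAU_S) * dt \
             + G_STD * np.sqrt(2 * dt / TAU_S) * rng.standard_normal()
        # Conductance enters multiplicatively through the (E_SYN - v) driving force
        v += ((V_REST - v) + max(g, 0.0) * (E_SYN - v)) * dt / TAU_M
        if v >= V_THRESH:          # threshold crossing: spike and reset
            spikes += 1
            v = V_RESET
    return spikes / t_total

rate = firing_rate()
```

Sweeping `G_MEAN` (or an input-rate parameter that sets it) and recording `rate` traces out one transfer-function curve that a mean-field solution could be checked against.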

    Simulations of viscous shape relaxation in shuffled foam clusters

    We simulate the shape relaxation of foam clusters and compare it with the exponential time dependence expected for a Newtonian fluid. Using two-dimensional Potts model simulations, we artificially create holes in a foam cluster and shuffle it by applying shear-strain cycles. We reproduce the experimentally observed exponential relaxation of cavity shapes in the foam as a function of the number of strain steps. The rounding-up of cavities results from local rearrangements of bubbles, due to the conjunction of a large applied strain and local bubble-wall fluctuations.
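The Potts model dynamics referred to above can be illustrated with a bare-bones Monte Carlo sweep. The sketch below uses Metropolis acceptance on a boundary-energy change only; the area constraints and shear-strain cycles used in the actual foam simulations are omitted, and all values are illustrative.

```python
import numpy as np

NEIGH = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # 4-neighbourhood

def local_energy(labels, i, j):
    """Boundary energy at site (i, j): number of unlike 4-neighbours
    (periodic boundaries)."""
    n, m = labels.shape
    return sum(labels[i, j] != labels[(i + di) % n, (j + dj) % m]
               for di, dj in NEIGH)

def potts_sweep(labels, rng, beta=2.0):
    """One Monte Carlo sweep of a minimal 2D Potts model: each attempted
    move copies a random neighbour's label and is accepted with Metropolis
    probability exp(-beta * dE). A simplified sketch -- no bubble-area
    constraint and no applied strain."""
    n, m = labels.shape
    for _ in range(n * m):
        i, j = rng.integers(n), rng.integers(m)
        di, dj = NEIGH[rng.integers(4)]
        new = labels[(i + di) % n, (j + dj) % m]
        if new == labels[i, j]:
            continue
        old = labels[i, j]
        e_old = local_energy(labels, i, j)
        labels[i, j] = new
        d_e = local_energy(labels, i, j) - e_old  # change in boundary energy
        if d_e > 0 and rng.random() >= np.exp(-beta * d_e):
            labels[i, j] = old  # reject the move
    return labels

rng = np.random.default_rng(1)
foam = rng.integers(0, 4, size=(16, 16))           # 4 "bubbles", random initial labels
e_before = sum(local_energy(foam, i, j) for i in range(16) for j in range(16))
for _ in range(20):
    potts_sweep(foam, rng)
e_after = sum(local_energy(foam, i, j) for i in range(16) for j in range(16))
```

Minimizing boundary energy is what drives the rounding-up of cavities; the shear-strain cycles in the paper additionally bias which moves are attempted.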

    How the brain represents language and answers questions? Using an AI system to understand the underlying neurobiological mechanisms

    To understand the computations that underlie high-level cognitive processes, we propose a framework of mechanisms that could in principle implement START, an AI program that answers questions using natural language. START organizes a sentence into a series of triplets, each containing three elements (subject, verb, object). We propose that the brain similarly defines triplets and then chunks the three elements into a spatial pattern. A complete sentence can be represented using up to 7 triplets in a working-memory buffer organized by theta and gamma oscillations. This buffer can transfer information into long-term memory networks, where a second chunking operation converts the serial triplets into a single spatial pattern in a network, with each triplet (and its corresponding elements) represented in specialized subregions. The triplets that define a sentence become synaptically linked, thereby encoding the sentence in synaptic weights. When a question is posed, there is a search for the closest stored memory (the one having the greatest number of shared triplets). We have devised a search process that does not require the question and the stored memory to have the same number of triplets or to have triplets in the same order. Once the most similar memory is recalled and undergoes two-level dechunking, the sought-for information can be obtained by element-by-element comparison of the key triplet in the question to the corresponding triplet in the retrieved memory. This search may require a reordering to align corresponding triplets, the use of pointers that link different triplets, or the use of semantic memory. Our framework uses 12 network processes; existing models can implement many of these, but in other cases we can only suggest neural implementations. Overall, our scheme provides the first view of how language-based question answering could be implemented by the brain.
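The "greatest number of shared triplets" search can be sketched as a simple set-overlap maximization. The toy code below assumes hypothetical sentences stored as lists of (subject, verb, object) triplets; it illustrates only the order-independent matching idea, not the proposed neural alignment or pointer mechanisms.

```python
def triplet_similarity(question, memory):
    """Count shared (subject, verb, object) triplets, independent of
    triplet order and of how many triplets each sentence contains."""
    return len(set(question) & set(memory))

def closest_memory(question, memories):
    """Return the stored sentence whose triplet set overlaps most with
    the question's triplets."""
    return max(memories, key=lambda mem: triplet_similarity(question, mem))

# Hypothetical stored sentences, each a list of (subject, verb, object) triplets.
memories = [
    [("john", "gave", "book"), ("book", "to", "mary")],
    [("cat", "chased", "mouse")],
    [("mary", "read", "book"), ("john", "gave", "book"), ("mary", "liked", "book")],
]
question = [("john", "gave", "book"), ("mary", "read", "book")]
best = closest_memory(question, memories)
```

Once `best` is retrieved, the element-by-element comparison of the key triplet described above would extract the answer; that dechunking step is not modeled here.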

    Get out but don’t fall down: verb-particle constructions in child language

    Much has been discussed about the challenges posed by Multiword Expressions (MWEs), given their idiosyncratic, flexible and heterogeneous nature. Nonetheless, children successfully learn to use them and eventually acquire a number of Multiword Expressions comparable to that of simplex words. In this paper we report a wide-coverage investigation of a particular type of MWE: verb-particle constructions (VPCs) in English and their usage in child-produced and child-directed sentences. Given their potentially higher complexity relative to simplex verbs, we examine whether they appear less prominently in child-produced than in child-directed speech, and whether the VPCs that children produce are more conservative than those of adults, displaying a proportionally reduced lexical repertoire of VPCs or of the verbs in these combinations. The results indicate that, regardless of any additional complexity, VPCs feature widely in child data, closely following adult usage. Studies like these can inform the development of computational models of language acquisition.
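A corpus study of this kind boils down to counting VPC types and tokens in child-produced versus child-directed utterances. The sketch below uses a naive adjacency heuristic with hand-picked illustrative verb and particle sets; real analyses would work over parsed corpora (e.g. CHILDES) and handle non-adjacent particles such as "pick the toy up".

```python
from collections import Counter

# Illustrative word lists only -- real studies derive these from the corpus.
VERBS = {"get", "pick", "fall", "put", "take"}
PARTICLES = {"out", "up", "down", "off", "on", "in"}

def vpc_repertoire(utterances):
    """Count verb-particle construction tokens per (verb, particle) type,
    using a naive heuristic: a known verb immediately followed by a
    particle. Each utterance is a list of lowercase tokens."""
    counts = Counter()
    for tokens in utterances:
        for verb, particle in zip(tokens, tokens[1:]):
            if verb in VERBS and particle in PARTICLES:
                counts[(verb, particle)] += 1
    return counts

# Toy child-produced and child-directed utterances.
child = [["get", "out"], ["pick", "up", "ball"], ["get", "out", "now"]]
adult = [["get", "out"], ["put", "on", "your", "shoes"], ["take", "off", "that"]]
child_types = set(vpc_repertoire(child))
adult_types = set(vpc_repertoire(adult))
```

Comparing the sizes and overlap of `child_types` and `adult_types` (normalized by corpus size) is the kind of type/token analysis the study describes.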