1,213 research outputs found

    How specific is synchronous neuronal firing? : Poster presentation

    Get PDF
    Background: Synchronous neuronal firing has been discussed as a potential neuronal code. A large set of tools has been developed to test, first, whether synchronous firing exists; second, whether it is modulated by behaviour; and third, whether it occurs beyond chance. However, to test whether synchronous neuronal firing is really involved in information processing, one needs a direct comparison of the amount of synchronous firing across factors such as experimental or behavioural conditions. To this end we present an extended version of the previously published method NeuroXidence [1], which tests, based on a bi- and multivariate test design, whether the amount of synchronous firing above chance level differs between factors.

    Detection of task-related synchronous firing patterns

    Get PDF
    Poster presentation: Background: To test the importance of synchronous neuronal firing for information processing in the brain, one has to investigate whether the strength of synchronous firing is correlated with the experimental conditions. This requires a tool that can compare the strength of synchronous firing across conditions while correcting for other features of neuronal firing, such as spike-rate modulations or the auto-structure of the spike trains, that might co-occur with synchronous firing. Here we present the bi- and multivariate extension of the previously developed method NeuroXidence [1,2], which allows the amount of synchronous firing to be compared between different conditions. ...
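The abstract does not give implementation details, but the core idea, comparing observed coincidence counts against a chance level derived from surrogate data, can be illustrated with a minimal sketch. This is not the NeuroXidence algorithm itself; the jitter surrogate, bin width, and function names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def coincidence_count(train_a, train_b, bin_width=0.005):
    """Count spikes in train_a that have a partner in train_b
    closer than bin_width (seconds)."""
    return sum(np.any(np.abs(train_b - t) < bin_width) for t in train_a)

def jitter_surrogate(train, jitter=0.02, rng=rng):
    """Displace each spike by a small random offset: destroys
    fine-timescale synchrony while roughly preserving the rate."""
    return np.sort(train + rng.uniform(-jitter, jitter, size=train.shape))

def excess_synchrony(train_a, train_b, n_surr=200, rng=rng):
    """Observed coincidence count minus the mean count over
    jitter surrogates, i.e. synchrony above chance level."""
    observed = coincidence_count(train_a, train_b)
    chance = np.mean([coincidence_count(jitter_surrogate(train_a, rng=rng),
                                        train_b)
                      for _ in range(n_surr)])
    return observed - chance
```

Computing this excess per trial and per condition yields samples that can then be compared between conditions with a standard two-sample test, which is the bi-/multivariate comparison the abstract describes.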

    Auto-structure of spike trains matters for testing on synchronous activity

    Get PDF
    Poster presentation: Coordinated neuronal activity across many neurons, i.e. synchronous or spatiotemporal patterns, has long been believed to be a major component of neuronal activity. However, the discussion of whether coordinated activity really exists has remained heated and controversial. A major source of uncertainty is that many analysis approaches either ignore the auto-structure of the spiking activity, assume a very simplified model (Poissonian firing), or change the auto-structure by spike jittering. We studied whether a statistical inference that tests whether coordinated activity occurs beyond chance can be made false if one ignores or changes the real auto-structure of recorded data. To this end, we investigated the distribution of coincident spikes in mutually independent spike trains modeled as renewal processes. We considered Gamma processes with different shape parameters as well as renewal processes with a log-normal ISI distribution. For Gamma processes of integer order, we calculated the mean number of coincident spikes, as well as the Fano factor of the coincidences, analytically. We determined how these measures depend on the bin width, on the firing rate, and on the rate difference between the neurons. We used Monte-Carlo simulations to estimate the whole coincidence distribution for these parameters and for other values of the shape parameter. Moreover, we considered the effect of dithering for both types of processes and found that while dithering does not change the average number of coincidences, it does change the shape of the coincidence distribution. Our major findings are: 1) the width of the coincidence-count distribution depends critically, and in a non-trivial way, on the detailed properties of the inter-spike-interval (ISI) distribution; 2) the dependence of the Fano factor on the coefficient of variation (CV) of the ISI distribution is complex and mostly non-monotonic. Moreover, the Fano factor depends on very detailed properties of the individual point processes and cannot be predicted from the CV alone. Hence, given a recorded data set, the estimated CV of the ISI distribution is not sufficient to predict the Fano factor of the coincidence-count distribution; and 3) spike jittering, even if it is as small as a fraction of the expected ISI, can falsify the inference on coordinated firing. In most of the tested cases, and especially for complex synchronous and spatiotemporal patterns across many neurons, spike jittering strongly increased the likelihood of false-positive findings. Last, we discuss a procedure [1] that takes the complete auto-structure of each individual spike train into account when testing whether synchronous firing occurs beyond chance and thereby avoids the danger of an increased level of false positives.
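The Monte-Carlo part of this study can be sketched in a few lines: generate pairs of independent Gamma renewal trains, bin them, and collect the distribution of coincidence counts. This is a simplified illustration, not the authors' code; the rates, bin width, and trial counts below are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def gamma_train(rate, shape, duration, rng):
    """Renewal process with Gamma-distributed ISIs; the scale is set so
    the mean ISI is shape * scale = 1/rate, i.e. mean firing rate = rate."""
    n = int(2 * rate * duration) + 100           # oversample, then truncate
    isi = rng.gamma(shape, 1.0 / (rate * shape), size=n)
    t = np.cumsum(isi)
    return t[t < duration]

def coincidences(a, b, duration, bin_width):
    """Number of time bins occupied by at least one spike in both trains."""
    edges = np.arange(0.0, duration + bin_width, bin_width)
    return int(np.sum((np.histogram(a, edges)[0] > 0)
                      & (np.histogram(b, edges)[0] > 0)))

def coincidence_fano(rate, shape, duration=10.0, bin_width=0.005,
                     n_trials=400, rng=rng):
    """Mean and Fano factor of the coincidence count between two
    independent Gamma trains, estimated by Monte-Carlo simulation."""
    counts = np.array([coincidences(gamma_train(rate, shape, duration, rng),
                                    gamma_train(rate, shape, duration, rng),
                                    duration, bin_width)
                       for _ in range(n_trials)], dtype=float)
    return counts.mean(), counts.var() / counts.mean()
```

Comparing, say, `shape=1` (Poisson-like) with `shape=10` (regular firing) at the same rate gives similar mean counts but different coincidence-count distributions, which is the sense in which the ISI auto-structure, and not the rate alone, shapes the chance level.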

    Cortical Spike Synchrony as a Measure of Input Familiarity

    Get PDF
    J.G.O. was supported by the Ministerio de Economía y Competitividad and FEDER (Spain, project FIS2015-66503-C3-1-P) and the ICREA Academia programme. E.U. acknowledges support from the Scottish Universities Life Sciences Alliance (SULSA) and HPC-Europa2. Peer reviewed. Postprint.

    Using transfer entropy to measure the patterns of information flow through cortex : application to MEG recordings from a visual Simon task

    Get PDF
    Poster presentation: Functional connectivity of the brain describes the network of correlated activities of different brain areas. However, correlation does not imply causality, and most synchronization measures do not distinguish causal from non-causal interactions among remote brain areas, i.e. they do not determine the effective connectivity [1]. Identification of causal interactions in brain networks is fundamental to understanding the processing of information. Attempts at unveiling signs of functional or effective connectivity from non-invasive magneto-/electroencephalographic (M/EEG) recordings at the sensor level are hampered by volume conduction, which leads to correlated sensor signals even in the absence of effective connectivity. Here, we make use of the transfer entropy (TE) concept to establish effective connectivity. The formalism of TE has been proposed as a rigorous quantification of the information flow among interacting systems and is a natural generalization of mutual information [2]. In contrast to Granger causality, TE is a non-linear measure and is not influenced by volume conduction. ...
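For discrete signals with history length one, transfer entropy from x to y is TE(x→y) = Σ p(y_{t+1}, y_t, x_t) log₂ [p(y_{t+1}|y_t, x_t) / p(y_{t+1}|y_t)]. A minimal plug-in estimator makes the "generalization of mutual information" concrete; this is a didactic sketch, not the estimator used in the poster (M/EEG analyses need continuous-valued, bias-corrected estimators with longer histories).

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y):
    """Plug-in transfer entropy TE(x -> y) in bits for two discrete
    time series, with history length 1."""
    x, y = np.asarray(x), np.asarray(y)
    n = len(y) - 1
    triples = Counter(zip(y[1:], y[:-1], x[:-1]))   # (y_next, y_now, x_now)
    pairs_yy = Counter(zip(y[1:], y[:-1]))          # (y_next, y_now)
    pairs_yx = Counter(zip(y[:-1], x[:-1]))         # (y_now, x_now)
    singles_y = Counter(y[:-1])                     # y_now
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_cond_full = c / pairs_yx[(y0, x0)]                  # p(y1 | y0, x0)
        p_cond_self = pairs_yy[(y1, y0)] / singles_y[y0]      # p(y1 | y0)
        te += (c / n) * np.log2(p_cond_full / p_cond_self)
    return te
```

The measure is directional: if y simply copies x with a one-step lag, TE(x→y) approaches the source entropy while TE(y→x) stays near zero, which is exactly the asymmetry that correlation-based connectivity measures miss.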

    Using Global Goals to Drive Progress in the Mountain West

    Full text link
    As part of the Brookings Scholar Lecture Series, Brookings Mountain West invites you to a lecture titled "Using Global Goals to Drive Progress in the Mountain West" by Anthony Pipa, Senior Fellow in Global Economy and Development. This lecture will examine the state of inclusive economic progress and environmental sustainability in the Mountain West, as measured against the globally adopted Sustainable Development Goals (SDGs). What cities and states are making progress? Who is being left behind? How can outcome-based goals help address the region's challenges and spur economic progress?

    SORN: A Self-Organizing Recurrent Neural Network

    Get PDF
    Understanding the dynamics of recurrent neural networks is crucial for explaining how the brain processes information. In the neocortex, a range of different plasticity mechanisms shape recurrent networks into effective information-processing circuits that learn appropriate representations for time-varying sensory stimuli. However, it has been difficult to mimic these abilities in artificial neural network models. Here we introduce SORN, a self-organizing recurrent network. It combines three distinct forms of local plasticity to learn spatio-temporal patterns in its input while maintaining its dynamics in a healthy regime suitable for learning. The SORN learns to encode information in the form of trajectories through its high-dimensional state space, reminiscent of recent biological findings on cortical coding. All three forms of plasticity are shown to be essential for the network's success.

    Weathering the Storm: The Role of Local Nonprofits in the Hurricane Katrina Relief Effort

    Get PDF
    In the weeks and months following Hurricane Katrina, national attention was focused on the monumental task of providing relief to the hundreds of thousands of people injured and displaced by the disaster. The media provided extensive coverage, both positive and negative, of the relief work of FEMA and the American Red Cross. Largely overlooked, however, was the important contribution of the many local nonprofit organizations and religious congregations that were at the heart of the disaster response effort. To correct this omission, the Nonprofit Sector and Philanthropy Program of the Aspen Institute commissioned Tony Pipa, a former foundation executive already working on the ground in the affected area, to interview key stakeholders and analyze the overall nonprofit and philanthropic response to Hurricane Katrina. The resulting report finds that small and medium-sized nonprofits and faith-based groups are vital to our nation's disaster response infrastructure. They know the people who need help and are often the only organizations capable of reaching them. But just as national attention was focused elsewhere and largely overlooked the work of local nonprofits, so did the major national relief agencies fail to appreciate the contribution of community-based organizations. Both FEMA and the American Red Cross offered limited support and coordination to small, local nonprofit agencies. To remedy this situation, Tony Pipa offers recommendations designed to increase the coordination of community organizations and help funnel more funds to the local level. Some of the needed changes, like increased communication between FEMA and the Red Cross, are already being implemented. Others, like requiring the Red Cross to contribute five percent of its donations to local agencies, will be controversial. Our goal is to initiate a conversation among relevant stakeholders about how best to integrate local community organizations into our country's disaster response system.

    Beyond relationalism in quantum theory

    Full text link
    An influential tradition in the foundations and philosophy of quantum theory (QT) claims that if we reject supplementing QT with hidden variables and consider that unitary QT is correct and universal, we should adopt a relationalist approach to QT. This tradition involves a series of approaches that relativize measurement outcomes to, for example, worlds, systems, agents, or reference frames. It includes Everett's Relative-State formulation of QT, the Many-Worlds Interpretation, Relational Quantum Mechanics, QBism, Healey's Pragmatism, and Dieks's perspectival modal interpretation. These approaches have potential costs that may make them unattractive. By presenting a plausible alternative approach called Endeterminacy-based quantum theory (EBQT), I argue that adopting relationalism is unnecessary in order to have a non-hidden-variable, unitary, universal quantum theory. EBQT circumvents relationalism by constructing an account of determinate and indeterminate properties that is neither relational nor perspectival. Moreover, it shows that relationalist approaches potentially add unnecessary complications and that a less costly alternative may exist. In this account, the first systems with determinate value properties arose at the early stages of the universe through certain interactions, and they further gave rise to other systems with determinate value properties. Determinate value properties persist over time because of certain structured interactions between systems. In situations where a relationalist is pressed to assume that measurement outcomes are relativized, for EBQT there are no determinate outcomes; rather, there are systems with absolutely indeterminate properties. Comment: Minor edits for clarificatory purposes, mainly on pages 15-2