
    An EU Sky Trust: Distributional Analysis for Hungary

    We analyze the effects of EU adoption of a Sky Trust (Barnes and Breslow 2003) on the income distribution of Hungary, a lower-middle income EU member. We use plausible parameters for an EU carbon charge and revenue recycling system, input-output data to track the effect of a carbon charge on commodity prices, and household consumption survey data to examine the effect on expenditure by decile. We find that carbon-charge revenue collection is nearly flat with respect to income. Combined with Sky Trust revenue recycling, the net effect on the income distribution is moderately progressive. For a Sky Trust structure that would significantly increase the likelihood of the EU meeting Stern Review and IPCC greenhouse gas reduction targets, households in the top decile of the Hungarian income distribution would see incomes fall by 859 USD, or 4.4 percent, while households in the lowest decile would see household budgets rise by 498 USD, or 11.4 percent. At the median household income, the effect is small but positive.
    Keywords: Sky Trust, carbon charge, pollution charge, climate change, greenhouse gas, global warming, incentive-based environmental regulation, green tax, revenue recycling, common-pool resource, energy policy, Hungary, European Union, tradable emission permits, in
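
    As a hedged illustration of the accounting such a distributional analysis rests on, the sketch below computes the net effect of a carbon charge plus equal per-household revenue recycling, decile by decile. All numbers (expenditure, embodied carbon, charge level) are hypothetical placeholders, not the article's parameters or survey data:

        import numpy as np

        # Hypothetical inputs: mean annual household expenditure per income
        # decile (USD) and the carbon embodied in that expenditure (tCO2),
        # as one might derive from input-output tables. Illustrative only.
        expenditure = np.array([ 4400.0,  5600.0,  6700.0,  7800.0,  9000.0,
                                10400.0, 12100.0, 14300.0, 17400.0, 19500.0])
        carbon_t    = np.array([  3.0,  3.7,  4.3,  4.9,  5.6,
                                  6.3,  7.2,  8.3,  9.8, 11.0])

        charge_per_t = 80.0                    # hypothetical charge, USD/tCO2
        burden = charge_per_t * carbon_t       # charge passed through prices
        dividend = burden.mean()               # equal per-household recycling

        net = dividend - burden                # Sky Trust net effect by decile
        for d, (b, n) in enumerate(zip(burden, net), start=1):
            print(f"decile {d:2d}: charge {b:6.1f} USD, "
                  f"net {n:+7.1f} USD ({100 * n / expenditure[d - 1]:+5.1f}%)")

    Because expenditure, and hence embodied carbon, rises less than proportionally with income, the charge alone is roughly flat while the equal dividend makes the combined scheme progressive, which is the qualitative pattern the abstract reports.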

    TRENTOOL: an open source toolbox to estimate neural directed interactions with transfer entropy

    To investigate directed interactions in neural networks we often use Norbert Wiener's famous definition of observational causality. Wiener's definition states that an improvement in the prediction of the future of a time series X from its own past, gained by incorporating information from the past of a second time series Y, is seen as an indication of a causal interaction from Y to X. Early implementations of Wiener's principle, such as Granger causality, modelled interacting systems by linear autoregressive processes, and the interactions themselves were also assumed to be linear. However, in complex systems such as the brain, nonlinear behaviour of its parts and nonlinear interactions between them have to be expected. In fact, nonlinear power-to-power or phase-to-power interactions between frequencies are reported frequently. To cover all types of nonlinear interactions in the brain, and thereby to fully chart the neural networks of interest, it is useful to implement Wiener's principle in a way that is free of a model of the interaction [1]. Indeed, it is possible to reformulate Wiener's principle based on information-theoretic quantities to obtain the desired model-freeness. The resulting measure was originally formulated by Schreiber [2] and termed transfer entropy (TE). Shortly after its publication, transfer entropy found applications to neurophysiological data. With the introduction of new, data-efficient estimators (e.g. [3]), TE has experienced a rapid surge of interest (e.g. [4]). Applications of TE in neuroscience range from recordings in cultured neuronal populations to functional magnetic resonance imaging (fMRI) signals. Despite widespread interest in TE, no publicly available toolbox exists that guides the user through the difficulties of this powerful technique. TRENTOOL (the TRansfer ENtropy TOOLbox) fills this gap for the neurosciences by bundling data-efficient estimation algorithms with the necessary parameter estimation routines and nonparametric statistical testing procedures for comparison to surrogate data or between experimental conditions. TRENTOOL is an open source MATLAB toolbox based on the FieldTrip data format.
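
    TRENTOOL's own estimators are data-efficient nearest-neighbour methods implemented in MATLAB; purely to illustrate the quantity being estimated, here is a minimal plug-in estimator of Schreiber's transfer entropy TE(Y -> X) for discrete sequences with history length one (the function name is ours, not TRENTOOL's API):

        import numpy as np
        from collections import Counter

        def transfer_entropy(x, y, base=2):
            """Plug-in TE(Y -> X) for discrete sequences, history length 1:
            TE = sum p(x+, x, y) * log[ p(x+ | x, y) / p(x+ | x) ]."""
            x, y = np.asarray(x), np.asarray(y)
            triples  = Counter(zip(x[1:], x[:-1], y[:-1]))  # (x_{t+1}, x_t, y_t)
            pairs_xy = Counter(zip(x[:-1], y[:-1]))         # (x_t, y_t)
            pairs_xx = Counter(zip(x[1:], x[:-1]))          # (x_{t+1}, x_t)
            singles  = Counter(x[:-1].tolist())             # x_t
            n = len(x) - 1
            te = 0.0
            for (xf, xp, yp), c in triples.items():
                p_joint   = c / n
                p_cond_xy = c / pairs_xy[(xp, yp)]
                p_cond_x  = pairs_xx[(xf, xp)] / singles[xp]
                te += p_joint * np.log(p_cond_xy / p_cond_x)
            return te / np.log(base)

        rng = np.random.default_rng(0)
        y = rng.integers(0, 2, 100_000)
        x = np.roll(y, 1)                       # x copies y with a lag of 1
        print(transfer_entropy(x, y))           # ~1 bit: Y drives X
        print(transfer_entropy(y, x))           # ~0 bits: no reverse coupling

    In practice such plug-in estimates are biased for small samples, which is exactly why the toolbox bundles data-efficient estimators with surrogate-based significance testing instead.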

    Learning more by sampling less: subsampling effects are model specific

    When studying real-world complex networks, one rarely has full access to all their components. As an example, the human central nervous system consists of about 10^11 neurons, each connected to thousands of other neurons. Of these 100 billion neurons, at most a few hundred can be recorded in parallel. Thus observations are hampered by immense subsampling. While subsampling does not affect the observables of single-neuron activity, it can heavily distort observables which characterize interactions between pairs or groups of neurons. Without a precise understanding of how subsampling affects these observables, inference on neural network dynamics from subsampled neural data remains limited. We systematically studied subsampling effects in three self-organized critical (SOC) models, since this class of models can reproduce the spatio-temporal structure of spontaneous activity observed in vivo. The models differed in their topology and in their precise interaction rules. The first model consisted of locally connected integrate-and-fire units, thereby resembling cortical activity propagation mechanisms. The second model had the same interaction rules but random connectivity. The third model had local connectivity but different activity propagation rules. As a measure of network dynamics, we characterized the spatio-temporal waves of activity, called avalanches. Avalanches are characteristic of SOC models and neural tissue. Avalanche measures A (e.g. size, duration, shape) were calculated for the fully sampled and the subsampled models. To mimic subsampling in the models, we considered the activity of a subset of units only, discarding the activity of all the other units. Under subsampling, the avalanche measures A depended on three main factors. First, A depended on the interaction rules of the model and its topology; thus each model showed its own characteristic subsampling effects on A. Second, A depended on the number of sampled sites n. With small and intermediate n, the true A could not be recovered in any of the models. Third, A depended on the distance d between sampled sites. With small d, A was overestimated, while with large d, A was underestimated. Since under subsampling the observables depended on the model's topology and interaction mechanisms, we propose that systematic subsampling can be exploited to compare models with neural data: when changing the number of and distance between electrodes in neural tissue and sampled units in a model analogously, the observables in a correct model should behave the same as in the neural tissue. Thereby, incorrect models can easily be discarded. Thus, systematic subsampling offers a promising and unique approach to model selection, even when brain activity is far from being fully sampled.
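
    As a hedged toy illustration of such subsampling effects (a generic branching model, not one of the three models studied here), the sketch below measures avalanche sizes both fully sampled and when only a random fraction of units is observed:

        import numpy as np

        rng = np.random.default_rng(1)
        N, m, n_trials = 10_000, 0.95, 5_000  # units, branching parameter, runs

        def avalanche(sampled):
            """One avalanche of a toy branching model; returns its true size
            and the size seen when only the units in `sampled` are observed."""
            active = {int(rng.integers(N))}
            true_size = obs_size = 0
            while active:
                true_size += len(active)
                obs_size += len(active & sampled)
                nxt = set()
                for _ in active:
                    # each active unit triggers Poisson(m) offspring at random
                    for tgt in rng.integers(N, size=rng.poisson(m)):
                        nxt.add(int(tgt))
                active = nxt
            return true_size, obs_size

        for frac in (1.0, 0.1, 0.01):
            sampled = {int(u) for u in rng.choice(N, int(frac * N), replace=False)}
            sizes = np.array([avalanche(sampled) for _ in range(n_trials)])
            print(f"sampled {frac:4.0%}: mean true size {sizes[:, 0].mean():5.1f}, "
                  f"mean observed size {sizes[:, 1].mean():5.1f}")

    Observed sizes shrink drastically under subsampling, and how exactly they are distorted depends on the model's connectivity and propagation rules, which is what makes systematic subsampling usable for model comparison.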

    Short-time asymptotics of the regularizing effect for semigroups generated by quadratic operators

    We study accretive quadratic operators with zero singular spaces. These degenerate non-selfadjoint differential operators are known to be hypoelliptic and to generate contraction semigroups which are smoothing in the Schwartz space for any positive time. In this work, we study the short-time asymptotics of the regularizing effect induced by these semigroups. We show that these short-time asymptotics depend on the directions of the phase space, and that this dependence can be nicely understood through the structure of the singular space. As a byproduct of these results, we derive sharp subelliptic estimates for accretive quadratic operators with zero singular spaces, showing that the loss of derivatives with respect to the elliptic case also depends on the phase space directions according to the structure of the singular space. Some applications of these results are then given to the study of degenerate hypoelliptic Ornstein-Uhlenbeck operators and degenerate hypoelliptic Fokker-Planck operators.
    Comment: 46 pages. arXiv admin note: text overlap with arXiv:1411.622
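
    For orientation, a minimal sketch of the notion the abstract relies on, following the definition commonly attributed to Hitrik and Pravda-Starov (conventions may differ slightly from the paper's):

        \[
        q \colon \mathbb{R}^n_x \times \mathbb{R}^n_\xi \to \mathbb{C}, \qquad
        q(X, Y) = \sigma(X, FY),
        \]
        where $\sigma$ is the canonical symplectic form on $\mathbb{R}^{2n}$, $q(\cdot,\cdot)$ denotes the polarized form of the quadratic symbol, and $F$ is its Hamilton map. The singular space is then
        \[
        S = \Bigl( \bigcap_{j=0}^{2n-1} \operatorname{Ker}\bigl[\operatorname{Re} F \,(\operatorname{Im} F)^{j}\bigr] \Bigr) \cap \mathbb{R}^{2n},
        \]
        and "zero singular space" means $S = \{0\}$, the case in which the semigroup is smoothing in every phase space direction.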

    Bits from Biology for Computational Intelligence

    Computational intelligence is broadly defined as biologically inspired computing. Usually, inspiration is drawn from neural systems. This article shows how to analyze neural systems using information theory to obtain constraints that help identify the algorithms run by such systems and the information they represent. Algorithms and representations identified information-theoretically may then guide the design of biologically inspired computing systems (BICS). The material covered includes the necessary introduction to information theory and the estimation of information-theoretic quantities from neural data. We then show how to analyze the information encoded in a system about its environment, and also discuss recent methodological developments on the question of how much information each agent carries about the environment either uniquely, redundantly, or synergistically together with others. Last, we introduce the framework of local information dynamics, where information processing is decomposed into component processes of information storage, transfer, and modification, locally in space and time. We close by discussing example applications of these measures to neural data and other complex systems.
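
    As one concrete example of the local information dynamics framework mentioned above, here is a hedged minimal sketch (our own naming, simple plug-in probability estimates) of local active information storage, the storage component, for a discrete time series:

        import numpy as np
        from collections import Counter

        def local_active_info_storage(x, k=1):
            """Local active information storage, in bits per symbol:
            a(t) = log2[ p(x_t | x_{t-k..t-1}) / p(x_t) ],
            estimated with plug-in (relative-frequency) probabilities."""
            x = np.asarray(x)
            pasts = [tuple(x[t - k:t]) for t in range(k, len(x))]
            nexts = x[k:].tolist()
            n = len(nexts)
            p_next, p_past = Counter(nexts), Counter(pasts)
            p_joint = Counter(zip(nexts, pasts))
            return np.array([
                np.log2((p_joint[(xt, past)] / p_past[past]) / (p_next[xt] / n))
                for xt, past in zip(nexts, pasts)
            ])

        x = np.tile([0, 1], 5_000)        # a perfectly predictable alternation
        a = local_active_info_storage(x)  # averaging gives the series' AIS
        print(a.mean())                   # ~1 bit: the past fully predicts x_t

    Local transfer entropy is obtained the same way, from the pointwise log-ratio inside Schreiber's transfer entropy, so that every point in space and time receives its own storage or transfer attribution.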

    Neuronal avalanches differ from wakefulness to deep sleep - evidence from intracranial depth recordings in humans

    Neuronal activity differs between wakefulness and sleep states. In contrast, an attractor state, called self-organized criticality (SOC), was proposed to govern brain dynamics because it allows for optimal information coding. But is the human brain SOC in each vigilance state despite the variations in neuronal dynamics? We characterized neuronal avalanches, spatiotemporal waves of enhanced activity, from dense intracranial depth recordings in humans. We showed that avalanche distributions closely follow a power law, the hallmark feature of SOC, for each vigilance state. However, avalanches clearly differ across vigilance states: slow wave sleep (SWS) shows large avalanches, wakefulness intermediate ones, and rapid eye movement (REM) sleep small ones. Our SOC model, together with the data, suggested first that the differences are mediated by global but tiny changes in synaptic strength, and second that the changes with vigilance states reflect small deviations from criticality into the subcritical regime, implying that the human brain does not operate at criticality proper but close to it. Independent of criticality, the analysis confirms that SWS shows increased correlations between cortical areas, and reveals that REM sleep shows more fragmented cortical dynamics.
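
    For readers who want the mechanics, here is a hedged sketch of the standard analysis pipeline implied here: cut binned population activity into avalanches and fit the size distribution's power-law exponent by maximum likelihood (Clauset-style; function names are ours):

        import numpy as np

        def avalanche_sizes(counts):
            """Split binned population event counts into avalanches: maximal
            runs of non-empty bins separated by empty bins; size = events."""
            sizes, current = [], 0
            for c in counts:
                if c > 0:
                    current += c
                elif current:
                    sizes.append(current)
                    current = 0
            if current:
                sizes.append(current)
            return np.array(sizes)

        def powerlaw_alpha(sizes, s_min=1):
            """MLE of the exponent alpha for p(s) ~ s^-alpha, s >= s_min
            (discrete approximation of Clauset, Shalizi & Newman 2009)."""
            s = sizes[sizes >= s_min]
            return 1.0 + len(s) / np.sum(np.log(s / (s_min - 0.5)))

        # Synthetic sizes with the classic critical exponent alpha = 3/2:
        rng = np.random.default_rng(2)
        u = rng.uniform(0.0, 1.0, 20_000)
        s = np.round(0.5 * np.maximum(1 - u, 1e-8) ** -2.0).astype(np.int64)
        print(powerlaw_alpha(s[s >= 1]))   # should be close to 1.5

    Comparing such fitted exponents, and the quality of the power-law fit, across vigilance states is what allows statements like "SWS shows large avalanches, REM small ones" to be made quantitatively.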

    Neuronal avalanches change from wakefulness to deep sleep - a study of intracranial depth recordings in humans

    Neuronal dynamics differs between wakefulness and sleep stages, as does the cognitive state. In contrast, a single attractor state, called self-organized criticality (SOC), has been proposed to govern human brain dynamics because of its optimal information coding and processing capabilities. Here we address two open questions. First, does the human brain always operate in this computationally optimal state, even during deep sleep? Second, previous evidence for SOC was based on activity within single brain areas; the interaction between brain areas, however, may be organized differently. Here we asked whether the interaction between brain areas is SOC.

    Dynamic Adaptive Computation: Tuning network states to task requirements

    Neural circuits are able to perform computations under very diverse conditions and requirements. The required computations impose clear constraints on their fine-tuning: a rapid and maximally informative response to stimuli in general requires decorrelated baseline neural activity. Such network dynamics is known as asynchronous-irregular. In contrast, spatio-temporal integration of information requires maintenance and transfer of stimulus information over extended time periods. This can be realized at criticality, a phase transition where correlations, sensitivity and integration time diverge. Being able to flexibly switch between, or even combine, the above properties in a task-dependent manner would present a clear functional advantage. We propose that cortex operates in a "reverberating regime" because it is particularly favorable for ready adaptation of computational properties to context and task. This reverberating regime enables cortical networks to interpolate between the asynchronous-irregular and the critical state by small changes in effective synaptic strength or excitation-inhibition ratio. These changes directly adapt computational properties, including sensitivity, amplification, integration time and correlation length within the local network. We review recent converging evidence that cortex in vivo operates in the reverberating regime, and that various cortical areas have adapted their integration times to processing requirements. In addition, we propose that neuromodulation enables a fine-tuning of the network, so that local circuits can either decorrelate or integrate, and quench or maintain their input depending on task. We argue that this task-dependent tuning, which we call "dynamic adaptive computation", presents a central organizing principle of cortical networks, and discuss first experimental evidence.
    Comment: 6 pages + references, 2 figures
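
    A hedged sketch of the quantitative core of this picture, using a driven branching process as a standard minimal model (parameter values are illustrative): the branching parameter m plays the role of the effective synaptic strength, and the network's integration time grows as tau = -1/ln(m), diverging as m approaches the critical value 1.

        import numpy as np

        def simulate_branching(m, h=10.0, T=200_000, seed=0):
            """Driven branching process A_{t+1} ~ Poisson(m * A_t + h): each
            active unit triggers on average m units, plus external drive h."""
            rng = np.random.default_rng(seed)
            a = np.empty(T)
            a[0] = h / (1.0 - m)              # start near the stationary mean
            for t in range(T - 1):
                a[t + 1] = rng.poisson(m * a[t] + h)
            return a

        def integration_time(a, max_lag=30):
            """Fit an exponential to the autocorrelation function; for a
            branching process r(k) = m**k, so tau = -1/ln(m) time steps."""
            a = a - a.mean()
            lags = np.arange(1, max_lag)
            ac = np.array([np.dot(a[:-k], a[k:]) / np.dot(a, a) for k in lags])
            slope = np.polyfit(lags[ac > 0], np.log(ac[ac > 0]), 1)[0]
            return -1.0 / slope

        for m in (0.90, 0.95, 0.99):          # toward criticality (m -> 1)
            tau = integration_time(simulate_branching(m))
            print(f"m = {m}: tau ~ {tau:6.1f} steps (theory {-1 / np.log(m):6.1f})")

    Small changes in m, for example via the excitation-inhibition ratio or neuromodulation, thus smoothly trade decorrelated, quickly quenching dynamics against long integration times, which is the interpolation the reverberating regime exploits.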

    Inferring change points in the COVID-19 spreading reveals the effectiveness of interventions

    As COVID-19 is rapidly spreading across the globe, short-term modeling forecasts provide time-critical information for decisions on containment and mitigation strategies. A main challenge for short-term forecasts is the assessment of key epidemiological parameters and how they change when first interventions show an effect. By combining an established epidemiological model with Bayesian inference, we analyze the time dependence of the effective growth rate of new infections. Focusing on the COVID-19 spread in Germany, we detect change points in the effective growth rate that correlate well with the times of publicly announced interventions. Thereby, we can quantify the effect of interventions, and we can incorporate the corresponding change points into forecasts of future scenarios and case numbers. Our code is freely available and can be readily adapted to any country or region.
    Comment: 23 pages, 11 figures. Our code is freely available and can be readily adapted to any country or region (https://github.com/Priesemann-Group/covid19_inference_forecast/).
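
    The linked repository implements the full Bayesian machinery (MCMC over SIR-type models with change-point priors). Purely as orientation, here is a hedged, deterministic toy version of the model class: an SIR model whose spreading rate lambda is piecewise constant around a change point. Parameter values are illustrative, not the paper's posterior estimates:

        import numpy as np

        def sir_new_cases(lam_of_t, mu=0.13, S0=83e6, I0=1000, T=60):
            """Discrete-time SIR with time-dependent spreading rate lambda(t).
            Early on, new cases grow roughly at the rate lambda - mu."""
            S, I, cases = S0, I0, []
            for t in range(T):
                new = lam_of_t(t) * S * I / S0    # new infections on day t
                S -= new
                I += new - mu * I                 # recoveries leave I at rate mu
                cases.append(new)
            return np.array(cases)

        # Hypothetical change point: contact restrictions announced on day 20.
        lam = lambda t: 0.40 if t < 20 else 0.10
        cases = sir_new_cases(lam)
        print("growth before:", cases[19] / cases[18])   # > 1: exponential rise
        print("growth after :", cases[-1] / cases[-2])   # < 1: decline

    In the paper's actual analysis the change-point times, the spreading rates before and after, and the reporting delay are all random variables whose posteriors are sampled with MCMC, which is what lets the authors attach uncertainties to each intervention's effect.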