
    Transmission of Information in Active Networks

    Shannon's capacity theorem is the central concept of the theory of communication. It states that if the amount of information contained in a signal is smaller than the channel capacity of a physical medium of communication, the signal can be transmitted with an arbitrarily small probability of error. The theorem applies to ideal channels of communication, in which the information being transmitted does not alter the passive characteristics of the channel, which essentially tries to reproduce the source of information. For an {\it active channel}, a network formed by elements that are dynamical systems (such as neurons, or chaotic or periodic oscillators), it is unclear whether the theorem applies, since an active channel can adapt to the input signal, altering its capacity. To shed light on this matter, we show, among other results, how to calculate the information capacity of an active channel of communication. We then show that the {\it channel capacity} depends on whether the active channel is self-excitable or not, and that, contrary to current belief, desynchronization can provide an environment in which large amounts of information can be transmitted through a self-excitable channel. An interesting case of a self-excitable active channel is a network of electrically connected Hindmarsh-Rose chaotic neurons.
    Comment: 15 pages, 5 figures. Submitted for publication; to appear in Phys. Rev.
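    For contrast with the active channels studied in the paper, a minimal sketch of the classical, passive-channel notion of capacity that the abstract starts from: the capacity of a binary symmetric channel with crossover probability p is 1 - H2(p), attained by a uniform input. This is only the textbook baseline, not the paper's method for active channels; the crossover probability and the grid search below are illustrative choices.

import numpy as np

def binary_entropy(q):
    # H2(q) in bits, clipped to avoid log(0)
    q = np.clip(q, 1e-12, 1 - 1e-12)
    return -q * np.log2(q) - (1 - q) * np.log2(1 - q)

def bsc_mutual_information(px1, p):
    """I(X;Y) for a binary symmetric channel with input P(X=1)=px1 and flip probability p."""
    py1 = px1 * (1 - p) + (1 - px1) * p              # output marginal P(Y=1)
    return binary_entropy(py1) - binary_entropy(p)   # H(Y) - H(Y|X)

p = 0.1                                              # assumed crossover probability
grid = np.linspace(0.0, 1.0, 1001)
capacity = max(bsc_mutual_information(q, p) for q in grid)
print(f"numerical capacity  ~ {capacity:.4f} bits/use")
print(f"closed form 1-H2(p) = {1 - binary_entropy(p):.4f} bits/use")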

    Optimal network topologies for information transmission in active networks

    This work clarifies the relation between network circuitry (topology) and behavior (information transmission and synchronization) in active networks, e.g. neural networks. As an application, we show how to determine a network topology that is optimal for information transmission. By optimal, we mean that the network is able to transmit a large amount of information, possesses a large number of communication channels, and is robust under large variations of the network coupling configuration. This theoretical approach is general and does not depend on the particular dynamics of the elements forming the network, since the network topology can be determined by finding a Laplacian matrix (the matrix that describes the connections and the coupling strengths among the elements) whose eigenvalues satisfy some special conditions. To illustrate our ideas and theoretical approaches, we use neural networks of electrically connected chaotic Hindmarsh-Rose neurons.
    Comment: 20 pages, 12 figures
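    A minimal sketch of the kind of spectral quantity involved, assuming an undirected network with uniform coupling: build the graph Laplacian L = D - A and inspect its eigenvalues. The paper's specific eigenvalue conditions are not reproduced here; the eigenratio lambda_N / lambda_2, a standard synchronizability indicator from master-stability analysis, and the example topology are shown only for illustration.

import numpy as np

def laplacian(adjacency):
    # Graph Laplacian L = D - A for a symmetric adjacency matrix
    degree = np.diag(adjacency.sum(axis=1))
    return degree - adjacency

# Hypothetical topology: a ring of 6 nodes plus one long-range shortcut.
A = np.zeros((6, 6))
for i in range(6):
    A[i, (i + 1) % 6] = A[(i + 1) % 6, i] = 1.0
A[0, 3] = A[3, 0] = 1.0

eigvals = np.sort(np.linalg.eigvalsh(laplacian(A)))
lambda_2, lambda_N = eigvals[1], eigvals[-1]
print("Laplacian spectrum:", np.round(eigvals, 3))
print("eigenratio lambda_N/lambda_2 =", round(lambda_N / lambda_2, 3))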

    Chaotic Observer-based Synchronization Under Information Constraints

    The limiting capabilities of observer-based synchronization systems under information constraints (limited information capacity of the coupling channel) are evaluated. We give a theoretical analysis for multi-dimensional drive-response systems represented in the Lurie form (a linear part plus a nonlinearity depending only on measurable outputs). It is shown that the upper bound of the limit synchronization error (LSE) is proportional to the upper bound of the transmission error. As a consequence, the upper and lower bounds of the LSE are proportional to the maximum rate of the coupling signal and inversely proportional to the information transmission rate (channel capacity). Optimality of binary coding for coders with one-step memory is established. The results are applied to the synchronization of two chaotic Chua systems coupled via a channel with limited capacity.
    Comment: 7 pages, 6 figures, 27 references
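    A minimal sketch, not the paper's coder: a one-bit binary coder with one-step memory, in the spirit of transmitting a coupling signal over a channel limited to 1 bit per step. The receiver mirrors the same recursion, so only the sign bit crosses the channel. The step size `delta` and the sinusoidal test signal are arbitrary stand-ins for the chaotic coupling signal.

import numpy as np

def binary_code(signal, delta):
    """Encode each sample as +/-1 relative to the previous reconstructed estimate."""
    estimate = 0.0
    bits, estimates = [], []
    for x in signal:
        bit = 1.0 if x >= estimate else -1.0   # one bit transmitted per time step
        estimate = estimate + delta * bit      # reconstruction, mirrored at the receiver
        bits.append(bit)
        estimates.append(estimate)
    return np.array(bits), np.array(estimates)

t = np.linspace(0, 10, 500)
x = np.sin(t)                                  # stand-in for the coupling signal
bits, x_hat = binary_code(x, delta=0.05)
print("bits per step:", 1, "| max reconstruction error:", np.max(np.abs(x - x_hat)))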

    Dynamical principles in neuroscience

    Dynamical modeling of neural systems and brain functions has a history of success over the last half century. This includes, for example, the explanation and prediction of some features of neural rhythmic behaviors. Many interesting dynamical models of learning and memory based on physiological experiments have been suggested over the last two decades. Dynamical models even of consciousness now exist. Usually these models and results are based on traditional approaches and paradigms of nonlinear dynamics, including dynamical chaos. Neural systems are, however, an unusual subject for nonlinear dynamics for several reasons: (i) Even the simplest neural network, with only a few neurons and synaptic connections, has an enormous number of variables and control parameters. These make neural systems adaptive and flexible, and are critical to their biological function. (ii) In contrast to traditional physical systems described by well-known basic principles, the first principles governing the dynamics of neural systems are unknown. (iii) Many different neural systems exhibit similar dynamics despite having different architectures and different levels of complexity. (iv) The network architecture and connection strengths are usually not known in detail, and therefore the dynamical analysis must, in some sense, be probabilistic. (v) Since nervous systems are able to organize behavior based on sensory inputs, the dynamical modeling of these systems has to explain the transformation of temporal information into combinatorial or combinatorial-temporal codes, and vice versa, for memory and recognition. In this review these problems are discussed in the context of addressing the stimulating questions: What can neuroscience learn from nonlinear dynamics, and what can nonlinear dynamics learn from neuroscience?
    This work was supported by NSF Grant No. NSF/EIA-0130708 and Grant No. PHY 0414174; NIH Grant No. 1 R01 NS50945 and Grant No. NS40110; MEC BFI2003-07276; and Fundación BBVA.

    Local information transfer as a spatiotemporal filter for complex systems

    We present a measure of local information transfer, derived from an existing averaged information-theoretic measure, namely transfer entropy. Local transfer entropy is used to produce profiles of the information transfer into each spatiotemporal point in a complex system. These spatiotemporal profiles are useful not only as an analytical tool but also for explicit investigation of different parameter settings and forms of the transfer entropy measure itself. As an example, local transfer entropy is applied to cellular automata, where it is demonstrated to be a novel method of filtering for coherent structure. More importantly, local transfer entropy provides the first quantitative evidence for the long-held conjecture that the emergent traveling coherent structures known as particles (both gliders and domain walls, which have analogues in many physical processes) are the dominant information transfer agents in cellular automata.
    Comment: 12 pages
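    A minimal sketch, assuming binary states and a destination history of length k=1: the local transfer entropy from a source cell to a destination cell is the pointwise log-ratio log2 p(x_d(n+1) | x_d(n), x_s(n)) / p(x_d(n+1) | x_d(n)), here estimated with plug-in counts over the whole time series. The short history and the plug-in estimator are simplifications for illustration, not the paper's full setup.

import numpy as np
from collections import Counter

def local_transfer_entropy(source, dest):
    src, d_past, d_next = source[:-1], dest[:-1], dest[1:]
    joint = Counter(zip(d_next, d_past, src))
    cond = Counter(zip(d_past, src))
    marg_joint = Counter(zip(d_next, d_past))
    marg_cond = Counter(d_past)
    locals_ = []
    for xn, xp, s in zip(d_next, d_past, src):
        p_full = joint[(xn, xp, s)] / cond[(xp, s)]      # p(x_d(n+1) | x_d(n), x_s(n))
        p_marg = marg_joint[(xn, xp)] / marg_cond[xp]    # p(x_d(n+1) | x_d(n))
        locals_.append(np.log2(p_full / p_marg))
    return np.array(locals_)

# Toy example: the destination copies the source with a one-step delay,
# so the source is fully informative about the destination's next state.
rng = np.random.default_rng(0)
s = rng.integers(0, 2, 2000)
d = np.roll(s, 1)
lte = local_transfer_entropy(s, d)
print("average transfer entropy ~", round(lte.mean(), 3), "bits")  # close to 1 bit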

    Personal area technologies for internetworked services


    Information processing in biological complex systems: a view to bacterial and neural complexity

    This thesis is a study of information processing in biological complex systems seen from the perspective of dynamical complexity (the degree of statistical independence of a system as a whole with respect to its components, due to its causal structure). In particular, we investigate the influence of signaling functions in cell-to-cell communication in bacterial and neural systems. For each case, we determine the spatial and causal dependencies in the system dynamics from an information-theoretic point of view and relate them to the system's physiological capabilities. The main research content is presented in three main chapters.
    First, we study a previous theoretical work on synchronization, multi-stability, and clustering of a population of coupled synthetic genetic oscillators via quorum sensing. We provide an extensive numerical analysis of the spatio-temporal interactions and determine conditions in which the causal structure of the system leads to high dynamical complexity in terms of the associated metrics. Our results indicate that this complexity is maximally receptive at transitions between dynamical regimes and is maximized for transient multi-cluster oscillations associated with chaotic behaviour.
    Next, we introduce a model of a neuron-astrocyte network with bidirectional coupling using glutamate-induced calcium signaling. This study focuses on the impact of astrocyte-mediated potentiation on synaptic transmission. Our findings suggest that the information generated by the joint activity of the population of neurons is irreducible to the neurons' independent contributions, owing to the role of astrocytes. We relate these results to the shared information modulated by the spike synchronization imposed by the bidirectional feedback between neurons and astrocytes. It is shown that dynamical complexity is maximized when there is a balance between spike correlation and spontaneous spiking activity.
    Finally, the previous observations on neuron-glial signaling are extended to a large-scale system with community structure. Here we use a multi-scale approach to account for the spatiotemporal features of astrocytic signaling coupled with clusters of neurons. We investigate the interplay of astrocytes and spike-timing-dependent plasticity at local and global scales in the emergence of complexity and neuronal synchronization. We demonstrate the utility of astrocytes and learning in improving the encoding of external stimuli, as well as their ability to favour the integration of information at synaptic timescales, yielding a high intrinsic causal structure at the system level. Our approach and observations point to potential effects of astrocytes in sustaining more complex information processing in the neural circuitry.
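    A minimal sketch of one standard quantity behind the notion of statistical dependence of a system's parts used above: multi-information (integration) under a Gaussian assumption, I(X) = sum_i H(X_i) - H(X) = 0.5 * (sum_i log var(X_i) - log det Cov(X)). This is a generic illustration, not the specific complexity metrics used in the thesis; the toy data below are hypothetical.

import numpy as np

def gaussian_integration(data):
    """data: (n_samples, n_variables) array of roughly Gaussian time series; returns nats."""
    cov = np.cov(data, rowvar=False)
    variances = np.diag(cov)
    _, logdet = np.linalg.slogdet(cov)
    return 0.5 * (np.sum(np.log(variances)) - logdet)

# Toy example: three variables sharing a common drive are more "integrated"
# than three independent ones.
rng = np.random.default_rng(1)
common = rng.normal(size=(5000, 1))
coupled = common + 0.5 * rng.normal(size=(5000, 3))
independent = rng.normal(size=(5000, 3))
print("integration (coupled)    :", round(gaussian_integration(coupled), 3))
print("integration (independent):", round(gaussian_integration(independent), 3))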

    Mammalian Brain As a Network of Networks

    Acknowledgements: AZ, SG and AL acknowledge support from the Russian Science Foundation (16-12-00077). The authors thank T. Kuznetsova for Fig. 6.