
    Opinion dynamics with emergent collective memory: the impact of a long and heterogeneous news history

    In modern society, people are exposed to a large amount of information, some of it repeated frequently and some more disruptive than the rest. In this paper we use a model of opinion dynamics to study how such news impacts society. In particular, our study aims to explain how exposing a society to certain events deeply changes people's perception of the present and the future. The opinion evolution we consider is influenced both by external information and by the pressure of society; the latter includes imitation, differentiation, homophily and its opposite, xenophobia. The combination of these ingredients gives rise to a collective memory effect, which is triggered by external information. In this paper we focus our attention on how this memory arises when the order of appearance of external news is random. We will show which characteristics a piece of news needs in order to be embedded in the society's memory. We will also provide an analytical way to measure how much information a society can remember when an extensive number of news items is presented. Finally, we will show that, when a certain piece of news is present in the society's history, even a distorted version of it is sufficient to trigger the memory of the originally stored information. Comment: 30 pages, 6 figures.
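    The abstract does not give the model's update equations; the sketch below is a minimal, assumed reading of such a model, in which agents hold continuous opinions that are pulled toward randomly ordered external news items and adjusted by pairwise social pressure (imitation within a homophily threshold, differentiation outside it). All parameter names and values are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

n_agents, n_steps = 200, 5000
opinions = rng.uniform(-1.0, 1.0, n_agents)   # continuous opinions in [-1, 1]
news_items = rng.uniform(-1.0, 1.0, 10)       # pool of external news "positions"

eps_news, eps_social = 0.02, 0.05             # coupling strengths (illustrative)
homophily_threshold = 0.5                     # imitate only agents with nearby opinions

for _ in range(n_steps):
    news = rng.choice(news_items)             # news items appear in random order
    opinions += eps_news * (news - opinions)  # pull toward the external information

    # Social pressure on one randomly chosen agent from one randomly chosen peer.
    i, j = rng.integers(n_agents, size=2)
    d = opinions[j] - opinions[i]
    if abs(d) < homophily_threshold:
        opinions[i] += eps_social * d         # imitation / homophily
    else:
        opinions[i] -= eps_social * d         # differentiation / xenophobia
    opinions = np.clip(opinions, -1.0, 1.0)

# Crude "memory" probe: distance of the population mean from a few stored news values.
print({float(round(x, 2)): float(round(abs(opinions.mean() - x), 3)) for x in news_items[:3]})
```

    In this toy version, a news value that recurs often pulls the population toward it, a rough analogue of the collective memory effect described in the abstract.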

    The Non-Random Brain: Efficiency, Economy, and Complex Dynamics

    Modern anatomical tracing and imaging techniques are beginning to reveal the structural anatomy of neural circuits at small and large scales in unprecedented detail. When examined with analytic tools from graph theory and network science, neural connectivity exhibits highly non-random features, including high clustering and short path length, as well as modules and highly central hub nodes. These characteristic topological features of neural connections shape non-random dynamic interactions that occur during spontaneous activity or in response to external stimulation. Disturbances of connectivity, and thus of neural dynamics, are thought to underlie a number of disease states of the brain, and some evidence suggests that degraded functional performance of brain networks may be the outcome of a process of randomization affecting their nodes and edges. This article provides a survey of the non-random structure of neural connectivity, primarily at the large scale of regions and pathways in the mammalian cerebral cortex. In addition, we discuss how non-random connections can give rise to differentiated and complex patterns of dynamics and information flow. Finally, we explore the idea that at least some disorders of the nervous system are associated with increased randomness of neural connections.
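    The graph measures named above (clustering, path length, hubs) can be computed with standard network tools. The snippet below is an illustrative comparison, not part of the article, using networkx to contrast a density-matched random graph with a small-world graph whose clustering stays high while path lengths remain short.

```python
import networkx as nx

# Two toy graphs with the same size and similar density: a random graph and a
# small-world graph whose wiring is clustered yet still has short paths.
n, k, p = 100, 6, 0.1
graphs = {
    "random": nx.gnm_random_graph(n, n * k // 2, seed=1),
    "small-world": nx.watts_strogatz_graph(n, k, p, seed=1),
}

for name, g in graphs.items():
    clustering = nx.average_clustering(g)
    # Use the largest connected component so average path length is always defined.
    giant = g.subgraph(max(nx.connected_components(g), key=len))
    path_length = nx.average_shortest_path_length(giant)
    hubs = sorted(dict(g.degree).items(), key=lambda kv: kv[1], reverse=True)[:3]
    print(f"{name:12s} clustering={clustering:.3f} "
          f"path_length={path_length:.2f} top_degree_nodes={hubs}")
```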

    An information theoretic learning framework based on Renyi’s α entropy for brain effective connectivity estimation

    The interactions among neural populations distributed across different brain regions are at the core of cognitive and perceptual processing. Therefore, the ability to study the flow of information within networks of connected neural assemblies is of fundamental importance for understanding such processes. In that regard, brain connectivity measures constitute a valuable tool in neuroscience: they allow assessing functional interactions among brain regions through directed or non-directed statistical dependencies estimated from neural time series. Transfer entropy (TE) is one such measure. It is an effective connectivity estimation approach based on information theory concepts and statistical causality premises. It has gained increasing attention in the literature because it can capture purely nonlinear directed interactions and is model-free; that is, it does not require an initial hypothesis about the interactions present in the data. These properties make it an especially convenient tool for exploratory analyses. However, like any information-theoretic quantity, TE is defined in terms of probability distributions that in practice must be estimated from data, a challenging task whose outcome can significantly affect the resulting TE values. TE also lacks a standard spectral representation, so it cannot reveal the frequency-band characteristics of the interactions it detects.
    Contents: 1. Preliminaries (motivation; problem statement: probability distribution estimation as an intermediate step in TE computation, the lack of a spectral representation for TE; theoretical background: transfer entropy, Granger causality, information theoretic learning from kernel matrices; literature review on transfer entropy estimation, including transfer entropy in the frequency domain; general and specific aims; outline and contributions; EEG databases: motor imagery, working memory; thesis structure). 2. Kernel-based Renyi's transfer entropy (experiments with a VAR model, a modified linear Kus model and EEG data; parameter selection; results, discussion and limitations). 3. Kernel-based Renyi's phase transfer entropy (phase-based effective connectivity estimation approaches; experiments with neural mass models and EEG data; parameter selection; results, discussion and limitations). 4. Kernel-based Renyi's phase transfer entropy for the estimation of directed phase-amplitude interactions (cross-frequency directionality; simulated phase-amplitude interactions and EEG data; parameter selection; results, discussion and limitations). 5. Final Remarks (conclusions; future work; academic products). Appendices: A. Kernel methods and Renyi's entropy estimation; B. Surface Laplacian; C. Permutation testing; D. Kernel-based relevance analysis; E. Cao's criterion; F. Neural mass model equations. References.
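    For orientation, transfer entropy in its standard Shannon form (Schreiber's definition, which the outline above indicates the thesis generalizes using Rényi's α entropy estimated from kernel matrices; the exact estimator is not reproduced here) measures how much the past of a source X improves prediction of a target Y beyond Y's own past:

```latex
% Standard (Shannon) transfer entropy from X to Y; the thesis's kernel-based
% Renyi estimator generalizes this quantity.
TE_{X \to Y} \;=\; \sum_{y_{t+1},\, \mathbf{y}_t^{(k)},\, \mathbf{x}_t^{(l)}}
  p\!\left(y_{t+1}, \mathbf{y}_t^{(k)}, \mathbf{x}_t^{(l)}\right)
  \log \frac{p\!\left(y_{t+1} \mid \mathbf{y}_t^{(k)}, \mathbf{x}_t^{(l)}\right)}
            {p\!\left(y_{t+1} \mid \mathbf{y}_t^{(k)}\right)}
```

    Here \(\mathbf{y}_t^{(k)}\) and \(\mathbf{x}_t^{(l)}\) are the k and l most recent past samples of Y and X. TE vanishes when X adds no predictive information about Y beyond Y's own past, and estimating the probability terms from finite data is precisely the difficulty the abstract points to.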

    Music as complex emergent behaviour : an approach to interactive music systems

    This thesis suggests a new model of human-machine interaction in the domain of non-idiomatic musical improvisation. Musical results are viewed as emergent phenomena issuing from complex internal system behaviour in relation to input from a single human performer. We investigate the prospect of rewarding interaction whereby a system modifies itself in coherent though non-trivial ways as a result of exposure to a human interactor, and we explore whether such interactions can be sustained over extended time spans. These objectives translate into four criteria for evaluation: maximisation of human influence; blending of human and machine influence in the creation of machine responses; the maintenance of independent machine motivations in order to support machine autonomy; and, finally, a combination of global emergent behaviour and variable long-run behaviour. Our implementation is heavily inspired by ideas and engineering approaches from the discipline of Artificial Life. However, we also address a collection of representative existing systems from the field of interactive composing, some of which are implemented using techniques of conventional Artificial Intelligence. All of these systems serve as contextual background and as a comparative framework for assessing the work reported here. The thesis advocates a networked model incorporating functionality for listening, playing and the synthesis of machine motivations. The latter encode dynamic relationships instructing the machine either to integrate with a musical context suggested by the human performer or, in contrast, to perform as an individual musical character irrespective of context. Techniques of evolutionary computing are used to optimise system components over time. Evolution proceeds based on an implicit fitness measure: the melodic distance between consecutive musical statements made by human and machine, in relation to the currently prevailing machine motivation. A substantial number of systematic experiments reveal complex emergent behaviour inside and between the various system modules, and music scores document how global system behaviour is rendered into actual musical output. The concluding chapter offers evidence of how the research criteria were met and proposes recommendations for future research.
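    The fitness measure is described only qualitatively above; the following sketch is a hypothetical illustration of how a melodic-distance fitness, switched by the prevailing machine motivation (integrate with the human's context versus perform as an independent character), might be scored. The distance function and phrase encoding are assumptions, not the thesis's actual implementation.

```python
def melodic_distance(phrase_a, phrase_b):
    """Mean absolute pitch difference over overlapping notes plus a length
    penalty. A deliberately crude, hypothetical stand-in: the thesis's actual
    melodic distance is not specified in the abstract."""
    if not phrase_a or not phrase_b:
        return float(len(phrase_a) + len(phrase_b))
    overlap = min(len(phrase_a), len(phrase_b))
    pitch_term = sum(abs(a - b) for a, b in zip(phrase_a, phrase_b)) / overlap
    return pitch_term + abs(len(phrase_a) - len(phrase_b))

def fitness(human_phrase, machine_phrase, motivation):
    """Reward integration (small distance) or individuality (large distance),
    depending on the currently prevailing machine motivation."""
    d = melodic_distance(human_phrase, machine_phrase)
    return -d if motivation == "integrate" else d

# Toy usage: score two candidate machine responses (MIDI pitches) to one human phrase.
human = [60, 62, 64, 65, 67]                    # C D E F G
close_imitation = [60, 62, 64, 64, 67]
contrasting_line = [48, 50, 55, 53, 43]
for candidate in (close_imitation, contrasting_line):
    print(fitness(human, candidate, "integrate"),
          fitness(human, candidate, "differentiate"))
```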

    Leadership in Complex Organizations

    This paper asks how complexity theory informs the role of leadership in organizations. Complexity theory is a science of complexly interacting systems; it explores the nature of interaction and adaptation in such systems and how they influence such things as emergence, innovation, and fitness. We argue that complexity theory focuses leadership efforts on behaviors that enable organizational effectiveness, as opposed to determining or guiding effectiveness. Complexity science broadens conceptualizations of leadership from perspectives that are heavily invested in psychology and social psychology (e.g., human relations models) to include processes for managing dynamic systems and interconnectivity. We develop a definition of organizational complexity and apply it to leadership science, discuss strategies for enabling complexity and effectiveness, and delve into the relationship between complexity theory and other currently important leadership theories. The paper concludes with a discussion of possible implications for research strategies in the social sciences.

    French Roadmap for Complex Systems 2008-2009

    This second issue of the French Complex Systems Roadmap is the outcome of the Entretiens de Cargese 2008, an interdisciplinary brainstorming session organized over one week, jointly by RNSC, ISC-PIF and IXXI. It builds on the first roadmap and gathers contributions from more than 70 scientists at major French institutions. The aim of this roadmap is to foster the coordination of the complex systems community on focused topics and questions, as well as to present the contributions and challenges of complex systems science to the public, political and industrial spheres.

    Exact computations in the statistical mechanics of disordered systems

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Physics, 1994. Vita. Includes bibliographical references (leaves 108-113). By Lawrence K. Saul.