    Information content of note transitions in the music of J. S. Bach

    Music has a complex structure that expresses emotion and conveys information. Humans process that information through imperfect cognitive instruments that produce a gestalt, smeared version of reality. How can we quantify the information contained in a piece of music? Further, what is the information inferred by a human, and how does that relate to (and differ from) the true structure of a piece? To tackle these questions quantitatively, we present a framework to study the information conveyed in a musical piece by constructing and analyzing networks formed by notes (nodes) and their transitions (edges). Using this framework, we analyze music composed by J. S. Bach through the lens of network science and information theory. Regarded as one of the greatest composers in the Western music tradition, Bach's work is highly mathematically structured and spans a wide range of compositional forms, such as fugues and choral pieces. Conceptualizing each composition as a network of note transitions, we quantify the information contained in each piece and find that different kinds of compositions can be grouped together according to their information content and network structure. Moreover, we find that the music networks communicate large amounts of information while maintaining small deviations of the inferred network from the true network, suggesting that they are structured for efficient communication of information. We probe the network structures that enable this rapid and efficient communication of information--namely, high heterogeneity and strong clustering. Taken together, our findings shed new light on the information and network properties of Bach's compositions. More generally, our framework serves as a stepping stone for exploring musical complexities, creativity and the structure of information in a range of complex systems.
    Comment: 22 pages, 13 figures; discussion in sections IV and VII expanded, references added, results unchanged
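The note-transition construction described above can be sketched in a few lines: consecutive notes define directed edges, and an entropy rate scores how much information the transitions carry. The toy pitch sequence and the frequency weighting below are illustrative assumptions, not the paper's corpus or exact estimator.

```python
import numpy as np
from collections import Counter

# A short hypothetical note sequence stands in for a parsed Bach score
# (integers as MIDI-like pitches); the paper's pipeline parses real scores.
notes = [60, 62, 64, 62, 60, 64, 65, 64, 62, 60, 62, 64]

# Nodes are notes; directed edges are note-to-note transitions.
transitions = Counter(zip(notes, notes[1:]))
out_totals = Counter(notes[:-1])  # times each note appears as a source

# Entropy rate of the transition network: expected surprisal of the next
# note, weighting each source note by its empirical frequency.
n = sum(out_totals.values())
entropy = 0.0
for (a, b), c in transitions.items():
    p_ab = c / out_totals[a]
    entropy -= (out_totals[a] / n) * p_ab * np.log2(p_ab)
```

A piece whose transitions are more predictable yields a lower entropy rate, which is the intuition behind grouping compositions by information content.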

    Intrinsically motivated graph exploration using network theories of human curiosity

    Intrinsically motivated exploration has proven useful for reinforcement learning, even without additional extrinsic rewards. When the environment is naturally represented as a graph, how to guide exploration best remains an open question. In this work, we propose a novel approach for exploring graph-structured data motivated by two theories of human curiosity: the information gap theory and the compression progress theory. The theories view curiosity as an intrinsic motivation to optimize for topological features of subgraphs induced by the visited nodes in the environment. We use these proposed features as rewards for graph neural-network-based reinforcement learning. On multiple classes of synthetically generated graphs, we find that trained agents generalize to larger environments and to longer exploratory walks than are seen during training. Our method computes more efficiently than the greedy evaluation of the relevant topological properties. The proposed intrinsic motivations bear particular relevance for recommender systems. We demonstrate that curiosity-based recommendations are more predictive of human behavior than PageRank centrality for several real-world graph datasets, including MovieLens, Amazon Books, and Wikispeedia.
    Comment: 14 pages, 5 figures in main text, and 15 pages, 8 figures in supplement
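The induced-subgraph reward pattern described above can be illustrated with a deliberately simple stand-in feature. The paper's rewards derive from information-gap and compression-progress theories; here the count of absent edges ("gaps") among visited nodes merely shows the mechanics of computing a topological reward on the visited subgraph.

```python
import numpy as np

def intrinsic_reward(A, visited):
    """Curiosity-style reward from the subgraph induced by visited
    nodes: here, the number of absent edges ('gaps') among them. A
    simple stand-in for the paper's theory-derived features."""
    sub = A[np.ix_(visited, visited)]
    k = len(visited)
    possible = k * (k - 1) // 2
    present = int(sub.sum()) // 2  # each undirected edge counted twice
    return possible - present

# 4-node toy graph: visiting nodes 0, 1, 2 leaves one gap (edge 1-2).
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]])
reward = intrinsic_reward(A, [0, 1, 2])
```

An exploring agent would receive this reward after each node visit, steering it toward subgraphs with the desired topological profile.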

    Human Learning of Hierarchical Graphs

    Humans are constantly exposed to sequences of events in the environment. Those sequences frequently evince statistical regularities, such as the probabilities with which one event transitions to another. Collectively, inter-event transition probabilities can be modeled as a graph or network. Many real-world networks are organized hierarchically and understanding how humans learn these networks is an ongoing aim of current investigations. While much is known about how humans learn basic transition graph topology, whether and to what degree humans can learn hierarchical structures in such graphs remains unknown. We investigate how humans learn hierarchical graphs of the Sierpiński family using computer simulations and behavioral laboratory experiments. We probe the mental estimates of transition probabilities via the surprisal effect: a phenomenon in which humans react more slowly to less expected transitions, such as those between communities or modules in the network. Using mean-field predictions and numerical simulations, we show that surprisal effects are stronger for finer-level than coarser-level hierarchical transitions. Surprisal effects at coarser levels of the hierarchy are difficult to detect for limited learning times or in small samples. Using a serial response experiment with human participants (n=100), we replicate our predictions by detecting a surprisal effect at the finer level of the hierarchy but not at the coarser level of the hierarchy. To further explain our findings, we evaluate the presence of a trade-off in learning, whereby humans who learned the finer level of the hierarchy better tended to learn the coarser level worse, and vice versa. Our study elucidates the processes by which humans learn hierarchical sequential events. Our work charts a road map for future investigation of the neural underpinnings and behavioral manifestations of graph learning.
    Comment: 22 pages, 10 figures, 1 table
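The surprisal effect lends itself to a small simulation. The sketch below uses a two-module toy graph rather than a Sierpiński hierarchy, and a count-based learner with a uniform prior rather than the paper's model of human learning; both are simplifying assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy modular graph: two 5-node cliques joined by one bridge edge,
# a stand-in for a single level of a hierarchical transition graph.
n = 10
A = np.zeros((n, n))
for block in (range(0, 5), range(5, 10)):
    for i in block:
        for j in block:
            if i != j:
                A[i, j] = 1
A[4, 5] = A[5, 4] = 1  # bridge between the two modules

# Random walk; a count-based learner estimates transition probabilities,
# and surprisal is -log of the learned estimate for each step taken.
counts = np.ones((n, n))  # uniform prior keeps surprisal finite
node = 0
within, between = [], []
for _ in range(20000):
    nbrs = np.flatnonzero(A[node])
    nxt = int(rng.choice(nbrs))
    p_hat = counts[node, nxt] / counts[node].sum()
    s = -np.log(p_hat)
    (between if (node < 5) != (nxt < 5) else within).append(s)
    counts[node, nxt] += 1
    node = nxt
```

Cross-module transitions are rarer, so their learned probability is lower and their surprisal higher, which is the signature probed by reaction times in the serial response experiment.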

    Drug-resistant focal epilepsy in children is associated with increased modal controllability of the whole brain and epileptogenic regions

    Network control theory provides a framework by which neurophysiological dynamics of the brain can be modelled as a function of the structural connectome constructed from diffusion MRI. Average controllability describes the ability of a region to drive the brain to easy-to-reach neurophysiological states whilst modal controllability describes the ability of a region to drive the brain to difficult-to-reach states. In this study, we identify increases in mean average and modal controllability in children with drug-resistant epilepsy compared to healthy controls. Using simulations, we posit that these changes may be a result of increased thalamocortical connectivity. At the node level, we demonstrate decreased modal controllability in the thalamus and posterior cingulate regions. In those undergoing resective surgery, we also demonstrate increased modal controllability of the resected parcels, a finding specific to patients who were rendered seizure free following surgery. Changes in controllability are a manifestation of brain network dysfunction in epilepsy and may be a useful construct to understand the pathophysiology of this archetypical network disease. Understanding the mechanisms underlying these controllability changes may also facilitate the design of network-focussed interventions that seek to normalise network structure and function.
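For readers unfamiliar with these metrics, the standard closed forms from the network control literature can be computed directly from a connectome matrix. The normalization and infinite-horizon Gramian below follow common practice in that literature and are not necessarily this study's exact pipeline.

```python
import numpy as np

def modal_controllability(A):
    """Modal controllability per node for a symmetric connectome,
    using the common normalization A / (1 + sigma_max). Higher values
    indicate ability to push the system into hard-to-reach modes."""
    A = A / (1 + np.linalg.svd(A, compute_uv=False)[0])
    eigvals, eigvecs = np.linalg.eigh(A)
    # phi_i = sum_j (1 - lambda_j^2) * v_ij^2
    return (eigvecs ** 2) @ (1 - eigvals ** 2)

def average_controllability(A):
    """Trace of the infinite-horizon controllability Gramian with input
    at each node; for symmetric, stable (normalized) A this reduces to
    diag((I - A^2)^{-1})."""
    A = A / (1 + np.linalg.svd(A, compute_uv=False)[0])
    return np.diag(np.linalg.inv(np.eye(len(A)) - A @ A))

# Tiny symmetric toy connectome (not real diffusion-MRI data)
rng = np.random.default_rng(1)
W = rng.random((6, 6))
W = (W + W.T) / 2
np.fill_diagonal(W, 0)

phi = modal_controllability(W)
ac = average_controllability(W)
```

Group comparisons like those in the study then reduce to comparing these per-node values between patients and controls.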

    Disorganization of language and working memory systems in frontal versus temporal lobe epilepsy

    Cognitive impairment is a common comorbidity of epilepsy, and adversely impacts people with both frontal lobe epilepsy (FLE) and temporal lobe epilepsy (TLE). While its neural substrates have been extensively investigated in TLE, functional imaging studies in FLE are scarce. In this study, we profiled the neural processes underlying cognitive impairment in FLE, and directly compared FLE and TLE to establish commonalities and differences. We investigated 172 adult participants (56 with FLE, 64 with TLE, and 52 controls) using neuropsychological tests and four functional MRI tasks probing expressive language (verbal fluency, verb generation) and working memory (verbal and visuo-spatial). Patient groups were comparable in disease duration and anti-seizure medication load. We devised a multiscale approach to map brain activation and deactivation during cognition, and tracked reorganization in FLE and TLE. Voxel-based analyses were complemented with profiling of task effects across established motifs of functional brain organization: (i) canonical resting-state functional systems, and (ii) the principal functional connectivity gradient, which encodes a continuous transition of regional connectivity profiles, anchoring lower-level sensory and transmodal brain areas at the opposite ends of a spectrum. We show that cognitive impairment in FLE is associated with reduced activation across attentional and executive systems, and reduced deactivation of the default mode system, indicative of a large-scale disorganization of task-related recruitment. The imaging signatures of dysfunction in FLE were broadly similar to those in TLE, but some patterns were syndrome-specific: altered default-mode deactivation was more prominent in FLE, while impaired recruitment of posterior language areas during a task with semantic demands was more marked in TLE. Functional abnormalities in FLE and TLE appeared overall modulated by disease load.
    On balance, our study elucidates neural processes underlying language and working memory impairment in FLE, identifies shared and syndrome-specific alterations in the two most common focal epilepsies, and sheds light on system behavior that may be amenable to future remediation strategies.

    Uncovering the Biological Basis of Control Energy: Structural and Metabolic Correlates of Energy Inefficiency in Temporal Lobe Epilepsy

    Network control theory is increasingly used to profile the brain's energy landscape via simulations of neural dynamics. This approach estimates the control energy required to simulate the activation of brain circuits based on the structural connectome measured using diffusion magnetic resonance imaging, thereby quantifying those circuits' energetic efficiency. The biological basis of control energy, however, remains unknown, hampering its further application. To fill this gap, investigating temporal lobe epilepsy as a lesion model, we show that patients require higher control energy to activate the limbic network than healthy volunteers, especially ipsilateral to the seizure focus. The energetic imbalance between ipsilateral and contralateral temporolimbic regions is tracked by asymmetric patterns of glucose metabolism measured using positron emission tomography, which, in turn, may be selectively explained by asymmetric gray matter loss as evidenced in the hippocampus. Our investigation provides the first theoretical framework unifying gray matter integrity, metabolism, and energetic generation of neural dynamics.
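A generic version of the control-energy computation can be sketched with a discrete-time controllability Gramian; the study's continuous-time formulation and brain-specific state definitions differ in detail, so the names and forms below are assumptions for illustration.

```python
import numpy as np

def min_control_energy(A, B, x_target, T):
    """Minimal input energy to drive x(0)=0 to x(T)=x_target for the
    discrete-time linear system x(t+1) = A x(t) + B u(t), computed via
    the T-step controllability Gramian W = sum_t A^t B B^T (A^T)^t."""
    n = A.shape[0]
    W = np.zeros((n, n))
    Ak = np.eye(n)
    for _ in range(T):
        W += Ak @ B @ B.T @ Ak.T
        Ak = A @ Ak
    # E = x_target^T W^{-1} x_target
    return float(x_target @ np.linalg.solve(W, x_target))

# A longer control horizon can only lower the required energy,
# since the Gramian grows in the positive-semidefinite order.
A = np.array([[0.5, 0.2], [0.2, 0.3]])
B = np.eye(2)
x_f = np.ones(2)
E_short = min_control_energy(A, B, x_f, T=5)
E_long = min_control_energy(A, B, x_f, T=20)
```

In the study's framing, a higher such energy for activating a circuit (e.g. the limbic network ipsilateral to the seizure focus) marks that circuit as energetically inefficient.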

    Interpretability with full complexity by constraining feature information

    Interpretability is a pressing issue for machine learning. Common approaches to interpretable machine learning constrain interactions between features of the input, rendering the effects of those features on a model's output comprehensible but at the expense of model complexity. We approach interpretability from a new angle: constrain the information about the features without restricting the complexity of the model. Borrowing from information theory, we use the Distributed Information Bottleneck to find optimal compressions of each feature that maximally preserve information about the output. The learned information allocation, by feature and by feature value, provides rich opportunities for interpretation, particularly in problems with many features and complex feature interactions. The central object of analysis is not a single trained model, but rather a spectrum of models serving as approximations that leverage variable amounts of information about the inputs. Information is allocated to features by their relevance to the output, thereby solving the problem of feature selection by constructing a learned continuum of feature inclusion-to-exclusion. The optimal compression of each feature -- at every stage of approximation -- allows fine-grained inspection of the distinctions among feature values that are most impactful for prediction. We develop a framework for extracting insight from the spectrum of approximate models and demonstrate its utility on a range of tabular datasets.
    Comment: project page: https://distributed-information-bottleneck.github.io
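The objective described above can be sketched for Gaussian per-feature encoders, where a KL term upper-bounds the information each compressed feature carries about its input; the function name and encoder form are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def dib_loss(mu, logvar, pred_nll, beta):
    """Sketch of a Distributed Information Bottleneck objective with
    Gaussian per-feature encoders. mu, logvar: (batch, F, d) encoder
    outputs for F features; pred_nll: (batch,) negative log-likelihood
    of the label given all compressed features; beta trades information
    against prediction accuracy."""
    # KL(q(u_i|x_i) || N(0, I)) upper-bounds I(U_i; X_i) per feature.
    kl = 0.5 * (mu**2 + np.exp(logvar) - logvar - 1).sum(axis=2)
    info_per_feature = kl.mean(axis=0)  # the learned information allocation
    return pred_nll.mean() + beta * info_per_feature.sum(), info_per_feature

# At zero compression cost (encoders equal to the prior), the loss
# reduces to the prediction term alone.
mu = np.zeros((4, 3, 2))
logvar = np.zeros((4, 3, 2))
loss, alloc = dib_loss(mu, logvar, np.full(4, 2.0), beta=1e-3)
```

Sweeping `beta` traces out the spectrum of approximate models the abstract refers to, and `info_per_feature` is the per-feature allocation that supports interpretation.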

    Synchronization of coupled Kuramoto oscillators competing for resources

    Populations of oscillators are present throughout nature. Very often synchronization is observed in such populations if they are allowed to interact. A paradigmatic model for the study of such phenomena has been the Kuramoto model. However, considering that real oscillators are rarely isochronous as a function of energy, it is natural to extend the model by allowing the natural frequencies to vary as a function of some dynamical resource supply. Beyond just accounting for a dynamical supply of resources, however, competition over a \emph{shared} resource supply is important in a variety of biological systems. In neuronal systems, for example, resource competition enables the study of neural activity via fMRI. It is reasonable to expect that this dynamical resource allocation should have consequences for the synchronization behavior of the brain. This paper presents a modified Kuramoto dynamics which includes additional dynamical terms that provide a relatively simple model of resource competition among populations of Kuramoto oscillators. We design a multilayer system which highlights the impact of the competition dynamics, and we show that in this designed system, correlations can arise between the synchronization states of two populations of oscillators which share no phase-coupling edges. These correlations are interesting in light of the often observed variance between functional and structural connectivity measures in real systems. The model presented here then suggests that some of the observed discrepancy may be explained by the way in which the brain dynamically allocates resources to different regions according to demand. If true, models such as this one provide a theoretical framework for analyzing the differences between structural and functional measures, and possibly implicate dynamical resource allocation as an integral part of the neural computation process.
    Comment: 12 pages, 2 figures
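One minimal way to couple Kuramoto phases to a shared resource supply is sketched below; the specific resource-redistribution rule is an assumed form for illustration, not the paper's equations.

```python
import numpy as np

rng = np.random.default_rng(2)
N, K, dt, steps = 20, 2.0, 0.01, 5000
omega = rng.normal(0.0, 0.5, N)       # baseline natural frequencies
theta = rng.uniform(0.0, 2 * np.pi, N)
R = np.ones(N)                        # per-oscillator resource level
S_total = float(N)                    # fixed shared resource supply

for _ in range(steps):
    # Effective frequency scales with the local resource (assumed form),
    # on top of standard all-to-all Kuramoto phase coupling.
    coupling = (K / N) * np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
    dtheta = omega * R + coupling
    # Demand grows with activity; the shared supply is redistributed in
    # proportion to demand, so oscillators compete for it.
    demand = 1.0 + np.abs(dtheta)
    R += dt * (S_total * demand / demand.sum() - R)
    theta = (theta + dt * dtheta) % (2 * np.pi)

# Kuramoto order parameter: r near 1 indicates synchrony.
r = float(np.abs(np.exp(1j * theta).mean()))
```

With several such populations drawing on one supply, the resource channel can correlate their synchronization states even without phase-coupling edges between them, which is the effect the paper designs its multilayer system to expose.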

    Learning Dynamic Graphs, Too Slow

    The structure of knowledge is commonly described as a network of key concepts and semantic relations between them. A learner of a particular domain can discover this network by navigating the nodes and edges presented by instructional material, such as a textbook, workbook, or other text. While over a long temporal period such exploration processes are certain to discover the whole connected network, little is known about how learning is affected by the dual pressures of finite study time and human mental errors. Here we model the learning of linear algebra textbooks with finite length random walks over the corresponding semantic networks. We show that if a learner does not keep up with the pace of material presentation, the learning can be an order of magnitude worse than it is in the asymptotic limit. Further, we find that this loss is compounded by three types of mental errors: forgetting, shuffling, and reinforcement. Broadly, our study informs the design of teaching materials from both structural and temporal perspectives.
    Comment: 29 RevTeX pages, 13 figures
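The finite-length random-walk model of learning can be illustrated directly: coverage of a semantic network's edges grows with study time, and a short walk recovers only a fraction of the relations. The random graph and coverage measure below are stand-ins for the textbook networks analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy "textbook": a connected random graph whose nodes are concepts
# and whose edges are semantic relations (an illustrative stand-in).
n = 50
A = (rng.random((n, n)) < 0.1).astype(int)
A = np.maximum(A, A.T)
np.fill_diagonal(A, 0)
idx = np.arange(n - 1)
A[idx, idx + 1] = A[idx + 1, idx] = 1   # path backbone keeps it connected

def coverage(walk_length):
    """Fraction of edges (relations) a finite random walk traverses."""
    seen, node = set(), 0
    for _ in range(walk_length):
        nbrs = np.flatnonzero(A[node])
        nxt = int(rng.choice(nbrs))
        seen.add((min(node, nxt), max(node, nxt)))
        node = nxt
    return len(seen) / (A.sum() // 2)

# A learner with limited study time discovers far fewer relations.
short_walk = coverage(100)
long_walk = coverage(10000)
```

Mental errors such as forgetting or shuffling would further shrink the effectively learned edge set, compounding the finite-time loss the abstract describes.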