
    Graph analysis of functional brain networks: practical issues in translational neuroscience

    The brain can be regarded as a network: a connected system where nodes, or units, represent different specialized regions and links, or connections, represent communication pathways. From a functional perspective, communication is coded by the temporal dependence between the activities of different brain areas. In the last decade, the abstract representation of the brain as a graph has made it possible to visualize functional brain networks and to describe their non-trivial topological properties in a compact and objective way. Nowadays, the use of graph analysis in translational neuroscience has become essential to quantify brain dysfunctions in terms of aberrant reconfiguration of functional brain networks. Despite its evident impact, graph analysis of functional brain networks is not a simple toolbox that can be blindly applied to brain signals. On the one hand, it requires know-how of all the methodological steps of the processing pipeline that manipulates the input brain signals and extracts the functional network properties. On the other hand, knowledge of the neural phenomenon under study is required to perform physiologically relevant analysis. The aim of this review is to provide practical indications to make sense of brain network analysis and to discourage counterproductive attitudes.
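
    As a concrete illustration of the kind of pipeline the review discusses (regional signals, a connectivity estimate, thresholding, graph metrics), here is a minimal sketch in Python with networkx; the synthetic signals, the correlation estimator and the 0.1 threshold are arbitrary choices for illustration, not recommendations from the review.

```python
# Minimal sketch (not the review's pipeline): build a functional network from
# synthetic regional time series and compute a few common graph metrics.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
signals = rng.standard_normal((90, 200))       # 90 regions x 200 time points (synthetic)

fc = np.corrcoef(signals)                      # functional connectivity: pairwise correlation
np.fill_diagonal(fc, 0.0)

threshold = 0.1                                # arbitrary illustrative threshold
adjacency = (np.abs(fc) > threshold).astype(int)

G = nx.from_numpy_array(adjacency)
print("density:", nx.density(G))
print("mean clustering:", nx.average_clustering(G))
if nx.is_connected(G):
    print("characteristic path length:", nx.average_shortest_path_length(G))
```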

    Time-Varying Graphs and Dynamic Networks

    The past few years have seen intensive research efforts carried out in some apparently unrelated areas of dynamic systems -- delay-tolerant networks, opportunistic-mobility networks, social networks -- obtaining closely related insights. Indeed, the concepts discovered in these investigations can be viewed as parts of the same conceptual universe, and the formal models proposed so far to express some specific concepts are components of a larger formal description of this universe. The main contribution of this paper is to integrate the vast collection of concepts, formalisms, and results found in the literature into a unified framework, which we call TVG (for time-varying graphs). Using this framework, it is possible to express directly in the same formalism not only the concepts common to all those different areas, but also those specific to each. Based on this definitional work, and employing both existing results and original observations, we present a hierarchical classification of TVGs; each class corresponds to a significant property examined in the distributed computing literature. We then examine how TVGs can be used to study the evolution of network properties, and propose different techniques, depending on whether the indicators for these properties are a-temporal (as in the majority of existing studies) or temporal. Finally, we briefly discuss the introduction of randomness in TVGs. Comment: A short version appeared in ADHOC-NOW'11. This version is to be published in the International Journal of Parallel, Emergent and Distributed Systems.
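
    As a rough illustration of the kind of object a TVG formalizes, the sketch below stores edge presence intervals and extracts the snapshot at a given time; the class name and the interval representation are illustrative assumptions, not the paper's formalism.

```python
# Minimal sketch of a time-varying graph (TVG): edges carry presence intervals,
# and a snapshot at time t keeps only the edges present at t.
from dataclasses import dataclass, field

@dataclass
class TVG:
    nodes: set = field(default_factory=set)
    # edge -> list of (start, end) presence intervals, end exclusive
    presence: dict = field(default_factory=dict)

    def add_contact(self, u, v, start, end):
        self.nodes |= {u, v}
        self.presence.setdefault((u, v), []).append((start, end))

    def snapshot(self, t):
        """Edges present at time t (the footprint at t)."""
        return [e for e, ivs in self.presence.items()
                if any(s <= t < f for s, f in ivs)]

g = TVG()
g.add_contact("a", "b", 0, 5)
g.add_contact("b", "c", 3, 8)
print(g.snapshot(4))   # [('a', 'b'), ('b', 'c')]
print(g.snapshot(6))   # [('b', 'c')]
```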

    Metrics for Graph Comparison: A Practitioner's Guide

    Comparison of graph structure is a ubiquitous task in data analysis and machine learning, with diverse applications in fields such as neuroscience, cyber security, social network analysis, and bioinformatics, among others. Discovery and comparison of structures such as modular communities, rich clubs, hubs, and trees in data in these fields yields insight into the generative mechanisms and functional properties of the graph. Often, two graphs are compared via a pairwise distance measure, with a small distance indicating structural similarity and vice versa. Common choices include spectral distances (also known as $\lambda$ distances) and distances based on node affinities. However, there has as yet been no comparative study of the efficacy of these distance measures in discerning between common graph topologies and different structural scales. In this work, we compare commonly used graph metrics and distance measures, and demonstrate their ability to discern between common topological features found in both random graph models and empirical datasets. We put forward a multi-scale picture of graph structure, in which the effect of global and local structure upon the distance measures is considered. We make recommendations on the applicability of different distance measures to empirical graph data problems based on this multi-scale view. Finally, we introduce the Python library NetComp, which implements the graph distances used in this work.
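
    As an illustration of the spectral ($\lambda$) distance idea, the sketch below compares the leading adjacency eigenvalues of two graphs; it is a hand-rolled toy, not the NetComp API.

```python
# Hand-rolled sketch of a spectral ("lambda") distance between two graphs:
# compare the k largest adjacency eigenvalues in Euclidean norm.
import numpy as np
import networkx as nx

def lambda_distance(G1, G2, k=10):
    def top_eigs(G):
        A = nx.to_numpy_array(G)
        eigs = np.sort(np.linalg.eigvalsh(A))[::-1]   # descending; A is symmetric
        out = np.zeros(k)
        out[:min(k, len(eigs))] = eigs[:k]
        return out
    return np.linalg.norm(top_eigs(G1) - top_eigs(G2))

er = nx.erdos_renyi_graph(100, 0.05, seed=1)
ba = nx.barabasi_albert_graph(100, 3, seed=1)
print(lambda_distance(er, ba))
```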

    Perspective: network-guided pattern formation of neural dynamics

    The understanding of neural activity patterns is fundamentally linked to an understanding of how the brain's network architecture shapes dynamical processes. Established approaches rely mostly on deviations of a given network from certain classes of random graphs. Hypotheses about the supposed role of prominent topological features (for instance, the roles of modularity, network motifs, or hierarchical network organization) are derived from these deviations. An alternative strategy could be to study deviations of network architectures from regular graphs (rings, lattices) and consider the implications of such deviations for self-organized dynamic patterns on the network. Following this strategy, we draw on the theory of spatiotemporal pattern formation and propose a novel perspective for analyzing dynamics on networks, by evaluating how the self-organized dynamics are confined by network architecture to a small set of permissible collective states. In particular, we discuss the role of prominent topological features of brain connectivity, such as hubs, modules and hierarchy, in shaping activity patterns. We illustrate the notion of network-guided pattern formation with numerical simulations and outline how it can facilitate the understanding of neural dynamics.
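
    A minimal numerical illustration of dynamics being confined by network architecture, assuming simple linear diffusive dynamics on a modular graph (not one of the models discussed in the paper):

```python
# Minimal sketch: linear diffusive dynamics x' = -L x on a modular graph.
# Fast within-module modes decay quickly, so the surviving activity pattern is
# shaped by the module structure of the network.
import numpy as np
import networkx as nx

# Two dense modules weakly coupled by a few bridges (illustrative parameters).
G = nx.random_partition_graph([30, 30], 0.3, 0.01, seed=0)
L = nx.laplacian_matrix(G).toarray().astype(float)

x = np.random.default_rng(0).standard_normal(L.shape[0])   # random initial activity
dt = 0.01
for _ in range(500):                                        # explicit Euler integration
    x = x - dt * (L @ x)

# Activity is now nearly uniform within each module, while the slow
# inter-module mode still separates the two modules.
print("module 1 mean:", x[:30].mean(), " module 2 mean:", x[30:].mean())
```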

    A blind deconvolution approach to recover effective connectivity brain networks from resting state fMRI data

    Effective connectivity analysis, in which the flow of information between even remote brain regions is inferred from the parameters of a predictive dynamical model, can greatly improve the insight into brain function that we obtain from fMRI data. As opposed to biologically inspired models, some techniques, such as Granger causality (GC), are purely data-driven and rely on statistical prediction and temporal precedence. While powerful and widely applicable, this approach can suffer from two main limitations when applied to BOLD fMRI data: the confounding effect of the hemodynamic response function (HRF), and conditioning on a large number of variables in the presence of short time series. For task-related fMRI, neural population dynamics can be captured by modeling signal dynamics with explicit exogenous inputs; for resting-state fMRI, on the other hand, the absence of explicit inputs makes this task more difficult, unless one relies on some specific prior physiological hypothesis. In order to overcome these issues and to allow a more general approach, here we present a simple and novel blind-deconvolution technique for the BOLD-fMRI signal. Coming to the second limitation, fully multivariate conditioning with short and noisy data leads to computational problems due to overfitting. Furthermore, conceptual issues arise in the presence of redundancy. We therefore apply partial conditioning to a limited subset of variables in the framework of information theory, as recently proposed. Combining these two improvements, we compare the differences between BOLD and deconvolved-BOLD effective networks and draw some conclusions.
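
    To make the Granger-causality ingredient concrete, the following toy sketch computes a bivariate GC index from simulated signals; it illustrates prediction and temporal precedence only, not the blind deconvolution or the partial conditioning introduced in the paper.

```python
# Toy illustration of the Granger-causality idea: x "Granger-causes" y if the
# past of x improves the prediction of y beyond the past of y alone.
import numpy as np

rng = np.random.default_rng(0)
n, lag = 2000, 1
x = rng.standard_normal(n)
y = np.zeros(n)
for t in range(1, n):                          # y driven by past x with a 1-step delay
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.standard_normal()

def residual_var(target, predictors):
    """Variance of least-squares residuals of target regressed on predictors."""
    beta, *_ = np.linalg.lstsq(predictors, target, rcond=None)
    return np.var(target - predictors @ beta)

Y, Ylag, Xlag = y[lag:], y[:-lag, None], x[:-lag, None]
restricted = residual_var(Y, Ylag)                       # past of y only
full = residual_var(Y, np.hstack([Ylag, Xlag]))          # past of y and x
print("GC index x->y:", np.log(restricted / full))       # > 0 suggests x -> y
```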

    On Directed Feedback Vertex Set parameterized by treewidth

    We study the Directed Feedback Vertex Set problem parameterized by the treewidth of the input graph. We prove that unless the Exponential Time Hypothesis fails, the problem cannot be solved in time $2^{o(t\log t)}\cdot n^{\mathcal{O}(1)}$ on general directed graphs, where $t$ is the treewidth of the underlying undirected graph. This is matched by a dynamic programming algorithm with running time $2^{\mathcal{O}(t\log t)}\cdot n^{\mathcal{O}(1)}$. On the other hand, we show that if the input digraph is planar, then the running time can be improved to $2^{\mathcal{O}(t)}\cdot n^{\mathcal{O}(1)}$.
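
    For reference, Directed Feedback Vertex Set asks for a smallest vertex set whose removal leaves the digraph acyclic. The sketch below is a brute-force illustration of that definition, exponential in the number of vertices, and not the treewidth-based dynamic programming from the paper.

```python
# Brute-force illustration of Directed Feedback Vertex Set: find a smallest
# vertex set whose deletion leaves the digraph acyclic. Exponential in n;
# NOT the 2^{O(t log t)} treewidth DP discussed in the paper.
from itertools import combinations
import networkx as nx

def min_dfvs(G):
    nodes = list(G.nodes)
    for k in range(len(nodes) + 1):
        for S in combinations(nodes, k):
            H = G.copy()
            H.remove_nodes_from(S)
            if nx.is_directed_acyclic_graph(H):
                return set(S)

G = nx.DiGraph([(1, 2), (2, 3), (3, 1), (3, 4), (4, 2)])
print(min_dfvs(G))   # {2}: removing vertex 2 breaks both directed cycles
```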

    Transport Processes on Homogeneous Planar Graphs with Scale-Free Loops

    We consider the role of network geometry in two types of diffusion processes: transport of constant-density information packets with queuing on nodes, and constant voltage-driven tunneling of electrons. The underlying network is a homogeneous graph with a scale-free distribution of loops, which is constrained to a planar geometry and fixed node connectivity $k=3$. We determine properties of noise, flow and return-time statistics for both processes on this graph and relate the observed differences to the microscopic process details. Our main findings are: (i) Through the local interaction between packets queuing at the same node, long-range correlations build up in traffic streams, which are practically absent in the case of electron transport; (ii) Noise fluctuations in the number of packets and in the number of tunnelings recorded at each node appear to obey scaling laws in two distinct universality classes; (iii) The topological inhomogeneity of betweenness plays the key role in the occurrence of broad distributions of return times and in the dynamic flow. The maximum-flow spanning trees are characteristic for each process type.
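
    As a loose illustration of packet transport with queuing on nodes, the sketch below runs a toy routing process on a random 3-regular graph; the graph, the shortest-path routing rule and the packet-creation rate are arbitrary assumptions and do not reproduce the planar scale-free-loop topology studied in the paper.

```python
# Toy sketch of packet transport with node queues on a random 3-regular graph:
# one packet is created per step with a random destination, each node forwards
# at most one queued packet per step along a shortest path.
import random
from collections import deque
import networkx as nx

random.seed(0)
G = nx.random_regular_graph(3, 100, seed=0)      # fixed connectivity k = 3
queues = {v: deque() for v in G}

for step in range(200):
    src, dst = random.sample(list(G), 2)          # create one new packet per step
    queues[src].append(dst)

    moves = []
    for v in G:                                   # each node serves one packet per step
        if queues[v]:
            dst = queues[v].popleft()
            if dst != v:                          # not yet delivered
                nxt = nx.shortest_path(G, v, dst)[1]
                moves.append((nxt, dst))
    for nxt, dst in moves:
        queues[nxt].append(dst)

print("total packets still queued:", sum(len(q) for q in queues.values()))
```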