12,106 research outputs found

    A new method to measure complexity in binary or weighted networks and applications to functional connectivity in the human brain

    BACKGROUND: Networks or graphs play an important role in the biological sciences. Protein interaction networks and metabolic networks support the understanding of basic cellular mechanisms. In the human brain, networks of functional or structural connectivity model the information flow between cortex regions. In this context, measures of network properties are needed. We propose a new measure, Ndim, estimating the complexity of arbitrary networks. This measure is based on a fractal dimension, which is similar to recently introduced box-covering dimensions. However, box-covering dimensions are only applicable to fractal networks. The construction of these network dimensions relies on concepts proposed to measure the fractality or complexity of irregular sets in ℝ^n. RESULTS: The network measure Ndim grows with increasing network connectivity and is essentially determined by the cardinality of a maximum k-clique, where k is the characteristic path length of the network. Numerical applications to lattice graphs and to fractal and non-fractal graph models, together with formal proofs, show that Ndim estimates a dimension of complexity for arbitrary graphs. Box-covering dimensions for fractal graphs rely on a linear log-log plot of the minimum number of covering subgraph boxes versus the box sizes. We demonstrate the affinity between Ndim and the fractal box-covering dimensions, but also that Ndim extends the concept of a fractal dimension to networks with non-linear log-log plots. Comparisons of Ndim with topological measures of complexity (cost and efficiency) show that Ndim has greater informative power. Three different methods to apply Ndim to weighted networks are finally presented and exemplified by comparisons of functional brain connectivity of healthy and depressed subjects. CONCLUSION: We introduce a new measure of complexity for networks.
    We show that Ndim has the properties of a dimension and overcomes several limitations of presently used topological and fractal complexity measures. It allows the comparison of the complexity of networks of different types, e.g., between fractal graphs characterized by hub repulsion and small-world graphs with strong hub attraction. Its large informative power and convenient computational CPU time for moderately sized networks may make Ndim a valuable tool for the analysis of biological networks.
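    The abstract above builds on box-covering dimensions. As a hedged illustration of that underlying idea (not of Ndim itself, whose k-clique-based definition is given in the paper), the sketch below greedily covers a graph with balls of growing radius and reads a dimension off the log-log slope; the helper names and the toy path graph are assumptions of this sketch.

    ```python
    from collections import deque
    import math

    def bfs_distances(adj, src):
        """Hop distances from src in an unweighted graph (adjacency dict)."""
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        return dist

    def box_count(adj, radius):
        """Greedy number of balls of the given radius needed to cover all nodes."""
        uncovered = set(adj)
        boxes = 0
        while uncovered:
            seed = next(iter(uncovered))
            ball = {v for v, d in bfs_distances(adj, seed).items() if d <= radius}
            uncovered -= ball
            boxes += 1
        return boxes

    # toy example: a path graph of 64 nodes, whose covering dimension is near 1
    n = 64
    adj = {i: [j for j in (i - 1, i + 1) if 0 <= j < n] for i in range(n)}
    points = [(math.log(2 * r + 1), math.log(box_count(adj, r))) for r in (1, 2, 4, 8)]
    # least-squares slope of log N_B versus log(box diameter); dimension = -slope
    xs, ys = zip(*points)
    xm, ym = sum(xs) / 4, sum(ys) / 4
    slope = sum((x - xm) * (y - ym) for x, y in points) / sum((x - xm) ** 2 for x in xs)
    dim = -slope
    ```

    Greedy covering is only an approximation to the minimum box count, which is why estimates on small graphs land near, rather than exactly at, the expected dimension.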

    Lyapunov spectral analysis of a nonequilibrium Ising-like transition

    By simulating a nonequilibrium coupled map lattice that undergoes an Ising-like phase transition, we show that the Lyapunov spectrum and related dynamical quantities such as the dimension correlation length ξ_δ are insensitive to the onset of long-range ferromagnetic order. As a function of the lattice coupling constant g and for certain lattice maps, the Lyapunov dimension density and other dynamical order parameters go through a minimum. The occurrence of this minimum as a function of g depends on the number of nearest neighbors of a lattice point but not on the lattice symmetry, on the lattice dimensionality or on the position of the Ising-like transition. In one space dimension, the spatial correlation length associated with magnitude fluctuations and the length ξ_δ are approximately equal, with both varying linearly with the radius of the lattice coupling. Comment: 29 pages of text plus 15 figures, uses REVTeX macros. Submitted to Phys. Rev. E
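    As a hedged sketch of the machinery this abstract relies on, the code below computes the Lyapunov spectrum of a generic diffusively coupled logistic-map lattice by the standard QR (Benettin) method and derives a Kaplan-Yorke dimension from it. The map f(x) = 1 - ax^2 and all parameter values are illustrative assumptions, not the specific lattice maps of the paper.

    ```python
    import numpy as np

    def cml_lyapunov_spectrum(n=16, g=0.3, a=1.9, steps=2000, seed=0):
        """Lyapunov spectrum of a diffusively coupled logistic-map lattice
        x_i(t+1) = (1-g) f(x_i) + (g/2)(f(x_{i-1}) + f(x_{i+1})),
        with f(x) = 1 - a x^2, via the standard QR (Benettin) method."""
        rng = np.random.default_rng(seed)
        x = rng.uniform(-1.0, 1.0, n)
        C = (1 - g) * np.eye(n)                 # coupling matrix, periodic boundaries
        for i in range(n):
            C[i, (i - 1) % n] += g / 2
            C[i, (i + 1) % n] += g / 2
        Q = np.eye(n)
        logs = np.zeros(n)
        for _ in range(steps):
            J = C * (-2 * a * x)                # Jacobian: C @ diag(f'(x)), f'(x) = -2ax
            x = C @ (1 - a * x * x)
            Q, R = np.linalg.qr(J @ Q)
            logs += np.log(np.abs(np.diag(R)) + 1e-300)
        return np.sort(logs / steps)[::-1]

    spec = cml_lyapunov_spectrum()
    # Kaplan-Yorke (Lyapunov) dimension from the ordered spectrum
    cum = np.cumsum(spec)
    pos = np.flatnonzero(cum >= 0)
    if pos.size == 0:
        d_ky = 0.0
    elif pos[-1] == spec.size - 1:
        d_ky = float(spec.size)
    else:
        j = pos[-1]
        d_ky = j + 1 + cum[j] / abs(spec[j + 1])
    ```

    The dimension density discussed in the abstract would be d_ky divided by the lattice size, computed across a range of coupling constants g.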

    Estimation of intrinsic dimension via clustering

    The problem of estimating the intrinsic dimension of a set of points in high dimensional space is a critical issue for a wide range of disciplines, including genomics, finance, and networking. Current estimation techniques are dependent on either the ambient or intrinsic dimension in terms of computational complexity, which may cause these methods to become intractable for large data sets. In this paper, we present a clustering-based methodology that exploits the inherent self-similarity of data to efficiently estimate the intrinsic dimension of a set of points. When the data satisfies a specified general clustering condition, we prove that the estimated dimension approaches the true Hausdorff dimension. Experiments show that the clustering-based approach allows for more efficient and accurate intrinsic dimension estimation compared with all prior techniques, even when the data does not exhibit obvious self-similar structure. Finally, we present empirical results which show that the clustering-based estimation allows for a natural partitioning of the data points that lie on separate manifolds of varying intrinsic dimension.
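    The paper's clustering algorithm is not reproduced here. As a hedged illustration of the covering idea behind intrinsic-dimension estimation, the sketch below counts occupied grid cells at several scales and fits the box-counting slope; the test manifold (a circle embedded in 3D) and all names are assumptions of this sketch.

    ```python
    import numpy as np

    def box_counting_dimension(X, scales):
        """Covering-based intrinsic-dimension estimate: count occupied grid
        cells N(eps) at each scale eps and fit the slope of log N vs log(1/eps)."""
        counts = []
        for eps in scales:
            cells = np.unique(np.floor(X / eps), axis=0)
            counts.append(len(cells))
        slope, _ = np.polyfit(np.log(1.0 / np.asarray(scales)), np.log(counts), 1)
        return slope

    rng = np.random.default_rng(0)
    theta = rng.uniform(0, 2 * np.pi, 5000)
    # a 1-dimensional manifold (a circle) embedded in 3-dimensional ambient space
    X = np.column_stack([np.cos(theta), np.sin(theta), np.zeros_like(theta)])
    d_hat = box_counting_dimension(X, [0.2, 0.1, 0.05, 0.025])
    ```

    The estimate recovers a dimension near 1 despite the ambient dimension being 3, which is the basic effect the abstract's estimator exploits at scale.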

    Beyond Blobs in Percolation Cluster Structure: The Distribution of 3-Blocks at the Percolation Threshold

    The incipient infinite cluster appearing at the bond percolation threshold can be decomposed into singly-connected "links" and multiply-connected "blobs." Here we decompose blobs into objects known in graph theory as 3-blocks. A 3-block is a graph that cannot be separated into disconnected subgraphs by cutting the graph at 2 or fewer vertices. Clusters, blobs, and 3-blocks are special cases of k-blocks with k = 1, 2, and 3, respectively. We study bond percolation clusters at the percolation threshold on 2-dimensional square lattices and 3-dimensional cubic lattices and, using Monte-Carlo simulations, determine the distribution of the sizes of the 3-blocks into which the blobs are decomposed. We find that the 3-blocks have fractal dimension d_3 = 1.2 ± 0.1 in 2D and 1.15 ± 0.1 in 3D. These fractal dimensions are significantly smaller than the fractal dimensions of the blobs, making possible more efficient calculation of percolation properties. Additionally, the closeness of the estimated values for d_3 in 2D and 3D is consistent with the possibility that d_3 is dimension independent. Generalizing the concept of the backbone, we introduce the concept of a "k-bone", which is the set of all points in a percolation system connected to k disjoint terminal points (or sets of disjoint terminal points) by k disjoint paths. We argue that the fractal dimension of a k-bone is equal to the fractal dimension of k-blocks, allowing us to discuss the relation between the fractal dimension of k-blocks and recent work on path crossing probabilities. Comment: All but first 2 figs. are low resolution and are best viewed when printed
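    Generating the bond percolation clusters studied here can be sketched with a union-find pass over random bonds; the 3-block decomposition itself needs additional graph machinery not shown. A minimal sketch, assuming an L x L square lattice at the square-lattice bond threshold p_c = 1/2 (the only detail below taken from standard percolation theory rather than the abstract):

    ```python
    import random

    class UnionFind:
        """Disjoint-set forest with path halving and union by size."""
        def __init__(self, n):
            self.parent = list(range(n))
            self.size = [1] * n
        def find(self, x):
            while self.parent[x] != x:
                self.parent[x] = self.parent[self.parent[x]]
                x = self.parent[x]
            return x
        def union(self, a, b):
            ra, rb = self.find(a), self.find(b)
            if ra == rb:
                return
            if self.size[ra] < self.size[rb]:
                ra, rb = rb, ra
            self.parent[rb] = ra
            self.size[ra] += self.size[rb]

    def largest_cluster(L, p, rng):
        """Open each bond of an L x L square lattice with probability p and
        return the size of the largest connected cluster."""
        uf = UnionFind(L * L)
        for x in range(L):
            for y in range(L):
                i = x * L + y
                if x + 1 < L and rng.random() < p:   # bond to the right
                    uf.union(i, (x + 1) * L + y)
                if y + 1 < L and rng.random() < p:   # bond downward
                    uf.union(i, x * L + y + 1)
        return max(uf.size[uf.find(i)] for i in range(L * L))

    rng = random.Random(42)
    big = largest_cluster(64, 0.5, rng)   # largest cluster at the threshold
    ```

    Repeating this over many realizations and box-counting the resulting clusters (or their blobs) is how fractal dimensions like d_3 are estimated in practice.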

    Karhunen-Loève Decomposition of Extensive Chaos

    We show that the number of KLD (Karhunen-Loève decomposition) modes D_KLD(f) needed to capture a fraction f of the total variance of an extensively chaotic state scales extensively with subsystem volume V. This allows a correlation length xi_KLD(f) to be defined that is easily calculated from spatially localized data. We show that xi_KLD(f) has a parametric dependence similar to that of the dimension correlation length and demonstrate that this length can be used to characterize high-dimensional inhomogeneous spatiotemporal chaos. Comment: 12 pages including 4 figures, uses REVTeX macros. To appear in Phys. Rev. Lett.
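    Counting KLD modes up to a variance fraction f is a plain PCA/SVD computation. A minimal sketch, under the assumption (mine, not the paper's) that rows of the snapshot matrix are time samples and columns are spatial points:

    ```python
    import numpy as np

    def kld_modes(data, f):
        """Number of Karhunen-Loève (principal-component) modes needed to
        capture a fraction f of the total variance of `data`."""
        centered = data - data.mean(axis=0)
        s = np.linalg.svd(centered, compute_uv=False)
        var = s ** 2
        frac = np.cumsum(var) / var.sum()
        return int(np.searchsorted(frac, f) + 1)   # first index reaching f

    rng = np.random.default_rng(0)
    # synthetic "extensive" data: independent noise at 32 spatial points
    data = rng.normal(size=(500, 32))
    n_modes = kld_modes(data, 0.5)
    ```

    For spatially uncorrelated data the variance is spread over many modes, whereas low-rank (coherent) data needs only a few; the extensive scaling in the abstract is the statement that D_KLD(f) grows in proportion to subsystem volume.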

    Two-dimensional matrix algorithm using detrended fluctuation analysis to distinguish Burkitt and diffuse large B-cell lymphoma

    Copyright © 2012 Rong-Guan Yeh et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. A detrended fluctuation analysis (DFA) method is applied to image analysis. A 2-dimensional (2D) DFA algorithm is proposed for characterizing images of lymph sections. Burkitt lymphoma (BL) and diffuse large B-cell lymphoma (DLBCL) have significantly different 5-year survival rates after multiagent chemotherapy, so distinguishing BL from DLBCL is very important. In this study, eighteen BL images with one to five cytogenetic changes were classified as group A, and ten BL images with more than five cytogenetic changes were classified as group B. Both groups of BLs are aggressive lymphomas, which grow very fast and require more intensive chemotherapy. Finally, ten DLBCL images were classified as group C. The short-term correlation exponents α1 of the DFA for groups A, B, and C were 0.370 ± 0.033, 0.382 ± 0.022, and 0.435 ± 0.053, respectively. The α1 values of the BL images were significantly lower (P < 0.05) than those of the DLBCL images, whereas there was no difference between groups A and B. Hence, the DFA-based α1 value can clearly distinguish BL from DLBCL images. Funding: National Science Council (NSC) of Taiwan; the Center for Dynamical Biomarkers and Translational Medicine, National Central University, Taiwan (also sponsored by the National Science Council).
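    The paper applies a two-dimensional matrix DFA. As a hedged, simpler illustration of the same detrend-and-scale idea, the sketch below implements standard one-dimensional DFA and recovers the known exponents alpha ≈ 0.5 for white noise and ≈ 1.5 for its running sum; the 2D extension partitions an image into s x s windows instead of length-s segments.

    ```python
    import numpy as np

    def dfa_alpha(x, scales):
        """Minimal 1D DFA: integrate the series into a profile, detrend each
        length-s window with a least-squares line, and return the slope of
        log F(s) versus log s (the scaling exponent alpha)."""
        y = np.cumsum(x - np.mean(x))          # the "profile"
        Fs = []
        for s in scales:
            t = np.arange(s)
            resid2 = []
            for w in range(len(y) // s):
                seg = y[w * s:(w + 1) * s]
                coef = np.polyfit(t, seg, 1)   # linear trend in this window
                resid2.append(np.mean((seg - np.polyval(coef, t)) ** 2))
            Fs.append(np.sqrt(np.mean(resid2)))
        slope, _ = np.polyfit(np.log(scales), np.log(Fs), 1)
        return slope

    rng = np.random.default_rng(0)
    noise = rng.normal(size=8192)
    alpha_white = dfa_alpha(noise, [4, 8, 16, 32, 64])             # expect ~0.5
    alpha_brown = dfa_alpha(np.cumsum(noise), [4, 8, 16, 32, 64])  # expect ~1.5
    ```

    The short-term exponent α1 in the abstract is exactly this slope restricted to the smallest scales.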

    Entropy-scaling search of massive biological data

    Many datasets exhibit a well-defined structure that can be exploited to design faster search tools, but it is not always clear when such acceleration is possible. Here, we introduce a framework for similarity search based on characterizing a dataset's entropy and fractal dimension. We prove that searching scales in time with metric entropy (number of covering hyperspheres), if the fractal dimension of the dataset is low, and scales in space with the sum of metric entropy and information-theoretic entropy (randomness of the data). Using these ideas, we present accelerated versions of standard tools, with no loss in specificity and little loss in sensitivity, for use in three domains---high-throughput drug screening (Ammolite, 150x speedup), metagenomics (MICA, 3.5x speedup of DIAMOND [3,700x BLASTX]), and protein structure search (esFragBag, 10x speedup of FragBag). Our framework can be used to achieve "compressive omics," and the general theory can be readily applied to data science problems outside of biology. Comment: Including supplement: 41 pages, 6 figures, 4 tables, 1 box
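    The Ammolite/MICA/esFragBag implementations are not reproduced here. As a hedged toy sketch of the metric-entropy idea (cover the data with hyperspheres, then use the triangle inequality to skip whole balls during a range query), with all names and parameters my own:

    ```python
    import numpy as np

    def build_cover(X, radius):
        """Greedy metric cover: repeatedly take an uncovered point as a center
        and assign every point within `radius` of it to that ball."""
        unassigned = np.arange(len(X))
        centers, members = [], []
        while unassigned.size:
            c = unassigned[0]
            d = np.linalg.norm(X[unassigned] - X[c], axis=1)
            centers.append(c)
            members.append(unassigned[d <= radius])
            unassigned = unassigned[d > radius]
        return centers, members

    def range_search(X, centers, members, radius, q, r):
        """Exact range query: by the triangle inequality a ball can hold a hit
        only if dist(q, center) <= r + radius, so distant balls are skipped."""
        hits = []
        for c, ball in zip(centers, members):
            if np.linalg.norm(q - X[c]) <= r + radius:
                d = np.linalg.norm(X[ball] - q, axis=1)
                hits.extend(ball[d <= r].tolist())
        return sorted(hits)

    rng = np.random.default_rng(0)
    X = rng.normal(size=(2000, 5))
    centers, members = build_cover(X, 1.5)
    q = rng.normal(size=5)
    found = range_search(X, centers, members, 1.5, q, 1.0)
    ```

    When the data has low fractal dimension, the number of covering balls (the metric entropy) grows slowly, so most balls are pruned and the query touches only a small fraction of the points, while the result stays exactly equal to a brute-force scan.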

    Quantification of miRNAs and Their Networks in the light of Integral Value Transformations

    MicroRNAs (miRNAs), which are on average only 21-25 nucleotides long, are key post-transcriptional regulators of gene expression in metazoans and plants. A proper quantitative understanding of miRNAs is required to comprehend their structures, functions, evolution, etc. In this paper, the nucleotide strings of the miRNAs of three organisms, namely Homo sapiens (hsa), Macaca mulatta (mml) and Pan troglodytes (ptr), have been quantified and classified based on some characterizing features. A network has been built up among the miRNAs of these three organisms through a class of discrete transformations, namely Integral Value Transformations (IVTs), proposed by Sk. S. Hassan et al [1, 2]. Through this study we have been able to reject or confirm a given nucleotide string as a miRNA. This study will help us to recognize a given nucleotide string as a probable miRNA without requiring any conventional biological experiment. The method can be amalgamated with existing analysis pipelines for small RNA sequencing data (designed for finding novel miRNAs); it would provide more confidence and make the current pipelines more efficient in predicting probable miRNA candidates for biological validation while filtering out improbable candidates.
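    The Integral Value Transformations of Hassan et al. are not reproduced here. As a purely illustrative toy quantification of a nucleotide string (base-4 digits read as one integer, an assumption of this sketch, not the authors' IVT definition):

    ```python
    def quantify(seq, alphabet="ACGU"):
        """Toy quantification of an RNA string: map each nucleotide to a
        base-4 digit (A=0, C=1, G=2, U=3) and read the string as one integer.
        Illustrative only; not the IVTs of Hassan et al."""
        digits = {c: i for i, c in enumerate(alphabet)}
        value = 0
        for c in seq:
            value = value * 4 + digits[c]
        return value

    # an 8-nucleotide illustrative fragment (not taken from any database entry)
    v = quantify("UGAGGUAG")   # -> 58034
    ```

    Any such injective encoding lets string-level features be compared numerically, which is the kind of quantification the abstract builds its miRNA network on.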