
    Emergence of bimodality in controlling complex networks


    Anisotropic model of kinetic roughening: the strong-coupling regime

    We study the strong coupling (SC) limit of the anisotropic Kardar-Parisi-Zhang (KPZ) model. A systematic mapping of the continuum model to its lattice equivalent shows that in the SC limit, anisotropic perturbations destroy all spatial correlations but retain a temporal scaling which shows a remarkable crossover along one of the two spatial directions, the choice of direction depending on the relative strength of the anisotropy. The results agree with exact numerics and are expected to settle the long-standing SC problem of the KPZ model in the infinite-range limit. © 2007 The American Physical Society
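
    For context, the anisotropic KPZ equation studied in this line of work is conventionally written in the following standard form (taken from the general literature, not quoted from the paper above):

        \partial_t h = \nu_x \partial_x^2 h + \nu_y \partial_y^2 h
                       + \frac{\lambda_x}{2} (\partial_x h)^2
                       + \frac{\lambda_y}{2} (\partial_y h)^2 + \eta(x, y, t)

    where h(x, y, t) is the interface height and \eta is Gaussian white noise; the anisotropy enters through \nu_x \neq \nu_y and through the relative sign and magnitude of the nonlinear couplings \lambda_x and \lambda_y.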

    On the Inability of Markov Models to Capture Criticality in Human Mobility

    We examine the non-Markovian nature of human mobility by exposing the inability of Markov models to capture criticality in human mobility. In particular, the assumed Markovian nature of mobility was used to establish a theoretical upper bound on the predictability of human mobility (expressed as a minimum error probability limit), based on temporally correlated entropy. Since its inception, this bound has been widely used and empirically validated using Markov chains. We show that recurrent-neural architectures can achieve significantly higher predictability, surpassing this widely used upper bound. To explain this anomaly, we shed light on several underlying assumptions in previous research works that have resulted in this bias. By evaluating mobility predictability on real-world datasets, we show that human mobility exhibits scale-invariant long-range correlations, resembling a power-law decay. This contrasts with the initial assumption that human mobility follows an exponential decay. This assumption of exponential decay, coupled with Lempel-Ziv compression in computing Fano's inequality, has led to an inaccurate estimation of the predictability upper bound: it inflates the entropy and consequently lowers the upper bound on human mobility predictability. Finally, we highlight that this approach tends to overlook long-range correlations in human mobility, which explains why recurrent-neural architectures designed to handle long-range structural correlations surpass the previously computed upper bound on mobility predictability.
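
    As a concrete reference point, the bound the authors challenge is usually computed in two steps: a Lempel-Ziv-style estimate of the entropy rate of a user's location sequence, then an inversion of Fano's inequality. A minimal Python sketch of that standard pipeline (function names and the bisection settings are illustrative, not from the paper; a real implementation would use suffix structures instead of this quadratic scan):

        import math

        def lz_entropy_rate(seq):
            # Kontoyiannis-style estimator: for each position i, find the length
            # of the shortest substring starting at i that never appears in
            # seq[:i]; the entropy rate is ~ n * log2(n) / (sum of those lengths).
            n = len(seq)
            total = 0
            for i in range(n):
                k = 1
                while i + k <= n and any(
                    tuple(seq[j:j + k]) == tuple(seq[i:i + k])
                    for j in range(i - k + 1)
                ):
                    k += 1
                total += k
            return n * math.log2(n) / total

        def max_predictability(S, N):
            # Invert Fano's inequality S = H(p) + (1 - p) * log2(N - 1) for the
            # maximum prediction accuracy p by bisection; N is the number of
            # distinct locations (assumes N >= 2, so the RHS decreases in p).
            def rhs(p):
                h = -p * math.log2(p) - (1 - p) * math.log2(1 - p)
                return h + ((1 - p) * math.log2(N - 1) if N > 2 else 0.0)
            lo, hi = 1.0 / N, 1.0 - 1e-12
            for _ in range(200):
                mid = (lo + hi) / 2
                if rhs(mid) > S:
                    lo = mid
                else:
                    hi = mid
            return 0.5 * (lo + hi)

        seq = [1, 2, 1, 3, 1, 2, 1, 2, 3, 1]          # toy location sequence
        S = lz_entropy_rate(seq)
        print(max_predictability(S, N=len(set(seq))))

    The paper's argument is that the entropy estimate S is inflated for long-range-correlated sequences, which drags the resulting bound below what a recurrent model can actually achieve.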

    Fast Computing Betweenness Centrality with Virtual Nodes on Large Sparse Networks

    Betweenness centrality is an essential index for the analysis of complex networks. However, the calculation of betweenness centrality is quite time-consuming, and the fastest known algorithm uses O(nm + n^2 log n) time and O(n + m) space for weighted networks, where n and m are the number of nodes and edges in the network, respectively. By inserting virtual nodes into the weighted edges and transforming the shortest path problem into a breadth-first search (BFS) problem, we propose an algorithm that can compute the betweenness centrality in O(wnm) time for integer-weighted networks, where w is the average weight of edges and k is the average degree in the network. Considerable time can be saved with the proposed algorithm when wk < k + 2 log n, indicating that it is suitable for lightly weighted large sparse networks. A similar concept of virtual node transformation can be used to calculate other shortest-path-based indices such as closeness centrality, graph centrality, stress centrality, and so on. Numerical simulations on various randomly generated networks reveal that it is feasible to use the proposed algorithm in large network analysis.
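
    A minimal sketch of the virtual-node transformation described above, assuming networkx and integer edge weights (identifiers are illustrative): each edge of weight w is replaced by a path of w unit edges through w - 1 virtual nodes, so plain BFS on the expanded graph reproduces weighted shortest-path distances between original nodes.

        import networkx as nx

        def expand_with_virtual_nodes(G):
            # G: undirected graph with integer 'weight' attributes on its edges.
            H = nx.Graph()
            H.add_nodes_from(G.nodes)
            for u, v, w in G.edges(data="weight", default=1):
                prev = u
                for i in range(int(w) - 1):
                    vn = ("virtual", u, v, i)   # unique, hashable virtual-node id
                    H.add_edge(prev, vn)
                    prev = vn
                H.add_edge(prev, v)
            return H

        G = nx.Graph()
        G.add_edge("a", "b", weight=3)
        G.add_edge("b", "c", weight=2)
        H = expand_with_virtual_nodes(G)
        # BFS hop count in H between original nodes equals the weighted distance in G
        assert nx.shortest_path_length(H, "a", "c") == 5

    Betweenness then follows from a BFS-based Brandes pass over H with accumulation restricted to the original nodes; the abstract does not spell out those details, so this sketch covers only the transformation.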

    Predictive diagnostics and personalized medicine for the prevention of chronic degenerative diseases

    The progressive increase of mean age and life expectancy in both industrialized and emerging societies parallels an increase in chronic degenerative diseases (CDD) such as cancer and cardiovascular, autoimmune, or neurodegenerative diseases among the elderly. CDD are complex to diagnose, difficult to treat, and absorb an increasing proportion of health care budgets worldwide. However, recent developments in modern medicine, especially in genetics, proteomics, and informatics, are leading to the discovery of biomarkers associated with different CDD that can be used as indicators of disease risk in healthy subjects. Predictive medicine is therefore emerging: medical doctors may for the first time anticipate the deleterious effects of CDD and use markers to identify persons at high risk of developing a given CDD before the clinical manifestation of the disease. This innovative approach may offer substantial advantages, since the promise of personalized medicine is to preserve the health of high-risk individuals by starting treatment or prevention protocols early. The pathway is now open; however, the road to effective personalized medicine is still long: several predictive (diagnostic) instruments for different CDD are under development, and some ethical issues remain to be solved. Operative proposals for health care systems are now needed to verify the potential benefits of predictive medicine in clinical practice. Indeed, predictive diagnostics, personalized medicine, and personalized therapy have the potential to change the classical approaches of modern medicine to CDD.

    Understanding the implementation of evidence-based care: A structural network approach

    Background: Recent study of complex networks has yielded many new insights into phenomena such as social networks, the internet, and sexually transmitted infections. The purpose of this analysis is to examine the properties of a network created by the 'co-care' of patients within one region of the Veterans Health Affairs.
    Methods: Data were obtained for all outpatient visits from 1 October 2006 to 30 September 2008 within one large Veterans Integrated Service Network. Types of physician within each clinic were nodes connected by shared patients, with a weighted link representing the number of shared patients between each connected pair. Network metrics calculated included edge weights, node degree, node strength, node coreness, and node betweenness. Log-log plots were used to examine the distribution of these metrics. Sizes of k-core networks were also computed under multiple conditions of node removal.
    Results: There were 4,310,465 encounters by 266,710 shared patients between 722 provider types (nodes) across 41 stations or clinics, resulting in 34,390 edges. The number of other nodes to which primary care provider nodes connect (172.7) is 42% greater than that of general surgeons and two and a half times that of cardiology. The log-log plot of the edge weight distribution appears linear, suggesting a 'scale-free' characteristic of the network, while the distributions of node degree and node strength are less so. The analysis of k-core network sizes under increasing removal of primary care nodes shows that roughly the ten most connected primary care nodes play a critical role in keeping the k-core networks connected, because their removal disintegrates the highest k-core network.
    Conclusions: Delivery of healthcare in a large healthcare system such as that of the US Department of Veterans Affairs (VA) can be represented as a complex network. This network consists of highly connected provider nodes that serve as 'hubs' within the network, and it demonstrates some 'scale-free' properties. By using currently available tools to explore its topology, we can explore how the underlying connectivity of such a system affects the behavior of providers, and perhaps leverage that understanding to improve quality and outcomes of care.
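
    The metrics named above are all standard and straightforward to reproduce; a small sketch with networkx on a toy stand-in for the provider-type network (node names and weights are invented for illustration, not VA data):

        import networkx as nx

        G = nx.Graph()
        toy_edges = [("primary care", "cardiology", 120),
                     ("primary care", "general surgery", 85),
                     ("cardiology", "general surgery", 30)]
        for a, b, shared_patients in toy_edges:
            G.add_edge(a, b, weight=shared_patients)   # edge weight = shared patients

        degree      = dict(G.degree())                 # node degree
        strength    = dict(G.degree(weight="weight"))  # node strength (summed weights)
        coreness    = nx.core_number(G)                # node coreness
        betweenness = nx.betweenness_centrality(G)     # node betweenness (unweighted;
                                                       # note networkx treats weights
                                                       # as distances, not tie strength)

        # k-core size under node removal: drop the strongest hub and watch the
        # maximum coreness of what remains shrink.
        hub = max(strength, key=strength.get)
        G.remove_nodes_from([hub])
        print(max(nx.core_number(G).values()) if len(G) else 0)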

    Colored Motifs Reveal Computational Building Blocks in the C. elegans Brain

    Background: Complex networks can often be decomposed into less complex sub-networks whose structures can give hints about the functional organization of the network as a whole. However, these structural motifs can only tell one part of the functional story because in this analysis each node and edge is treated on an equal footing. In real networks, two motifs that are topologically identical but whose nodes perform very different functions will play very different roles in the network. Methodology/Principal Findings: Here, we combine structural information derived from the topology of the neuronal network of the nematode C. elegans with information about the biological function of these nodes, thus coloring nodes by function. We discover that particular colorations of motifs are significantly more abundant in the worm brain than expected by chance, and have particular computational functions that emphasize the feed-forward structure of information processing in the network while avoiding feedback loops. Interneurons are strongly over-represented among the common motifs, supporting the notion that these motifs process and transduce information from the sensory neurons towards the muscles. Some of the most common motifs identified in the search for significant colored motifs play a crucial role in the system of neurons controlling the worm's locomotion. Conclusions/Significance: The analysis of complex networks in terms of colored motifs combines two independent data sets to generate insight about these networks that cannot be obtained with either data set alone. The method is general and should allow a decomposition of any complex network into its functional (rather than topological) motifs as long as both wiring and functional information are available.
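
    A rough sketch of the counting step, assuming a directed wiring diagram and a node-to-function-class map. Here 3-node subgraphs are keyed only by their sorted color multiset and edge count, a crude stand-in for the full isomorphism classes the paper uses, and significance comes from a color-shuffling null model that keeps the topology fixed:

        import itertools
        import random
        from collections import Counter
        import networkx as nx

        def colored_triad_counts(G, color):
            # count connected 3-node subgraphs, keyed by (sorted colors, #edges)
            counts = Counter()
            for tri in itertools.combinations(G.nodes, 3):
                sub = G.subgraph(tri)
                if nx.is_weakly_connected(sub):
                    key = (tuple(sorted(color[n] for n in tri)),
                           sub.number_of_edges())
                    counts[key] += 1
            return counts

        def null_counts(G, color, trials=100, seed=0):
            # shuffle colors over nodes while keeping the wiring intact
            rng = random.Random(seed)
            nodes, colors = list(color), list(color.values())
            for _ in range(trials):
                rng.shuffle(colors)
                yield colored_triad_counts(G, dict(zip(nodes, colors)))

        # usage: G = nx.DiGraph(wiring); color = {n: "sensory"/"inter"/"motor"};
        # compare observed counts per key against the null distribution (z-scores).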

    Mining Diversity on Social Media Networks

    The fast development of multimedia technology and the increasing availability of network bandwidth have given rise to an abundance of network data from the ever-booming social media and social websites of recent years, e.g., Flickr, Youtube, MySpace, Facebook, etc. Social network analysis has therefore become a critical problem attracting enthusiasm from both academia and industry. However, an important measure, diversity, which captures how diversely a given node connects with its peers, has been largely neglected in previous studies. In this paper, we give a comprehensive study of this concept. We first lay out two criteria that capture the semantic meaning of diversity, and then propose a compliant definition that is simple enough to embed the idea. Based on this approach, we can measure not only a user's sociality and interest diversity but also a social media site's user diversity. An efficient top-k diversity ranking algorithm is developed for computation on dynamic networks. Experiments on both synthetic and real social media datasets give interesting results, where the individual nodes identified as having high diversity are intuitively reasonable.
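
    The abstract does not state the formal definition, so the following is only one plausible toy reading to make the idea concrete (an illustrative entropy-based score, not the paper's definition): a node's diversity is the normalized entropy of the categories its neighbors fall into, and top-k ranking is a selection over those scores.

        import heapq
        import math
        from collections import Counter

        def neighbor_diversity(adj, category):
            # adj: {node: iterable of neighbors}; category: {node: label}
            scores = {}
            for n, nbrs in adj.items():
                counts = Counter(category[x] for x in nbrs)
                total = sum(counts.values())
                if total == 0 or len(counts) < 2:
                    scores[n] = 0.0          # no neighbors, or all of one kind
                    continue
                h = -sum((c / total) * math.log(c / total)
                         for c in counts.values())
                scores[n] = h / math.log(len(counts))   # normalize to [0, 1]
            return scores

        def top_k_diverse(adj, category, k=10):
            scores = neighbor_diversity(adj, category)
            return heapq.nlargest(k, scores.items(), key=lambda kv: kv[1])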

    Science Models as Value-Added Services for Scholarly Information Systems

    The paper introduces scholarly Information Retrieval (IR) as a further dimension that should be considered in the science modeling debate. The IR use case is seen as a validation model of the adequacy of science models in representing and predicting structure and dynamics in science. Particular conceptualizations of scholarly activity and structures in science are used as value-added search services to improve retrieval quality: a co-word model depicting the cognitive structure of a field (used for query expansion), the Bradford law of information concentration, and a model of co-authorship networks (both used for re-ranking search results). An evaluation of retrieval quality with the science-model-driven services in place showed that the proposed models do provide beneficial effects on retrieval quality. From an IR perspective, the models studied are therefore verified as expressive conceptualizations of central phenomena in science. Thus, the IR perspective can significantly contribute to a better understanding of scholarly structures and activities. (26 pages; to appear in Scientometrics)
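
    Of the three services, co-word query expansion is the easiest to sketch. A minimal version, assuming each document is represented as a set of keywords (the scoring is illustrative, not the paper's exact model):

        import itertools
        from collections import Counter, defaultdict

        def build_coword_counts(docs):
            # docs: iterable of keyword sets; counts pairwise co-occurrence
            co = defaultdict(Counter)
            for terms in docs:
                for a, b in itertools.permutations(set(terms), 2):
                    co[a][b] += 1
            return co

        def expand_query(query_terms, co, k=3):
            # add the k terms that co-occur most often with the query terms
            scores = Counter()
            for t in query_terms:
                scores.update(co.get(t, Counter()))
            for t in query_terms:
                scores.pop(t, None)
            return list(query_terms) + [t for t, _ in scores.most_common(k)]

        docs = [{"kpz", "scaling", "roughening"},
                {"kpz", "anisotropy"},
                {"scaling", "networks"}]
        print(expand_query(["kpz"], build_coword_counts(docs)))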

    Re-sampling strategy to improve the estimation of number of null hypotheses in FDR control under strong correlation structures

    Background: When conducting multiple hypothesis tests, it is important to control the number of false positives, or the False Discovery Rate (FDR). However, there is a tradeoff between controlling FDR and maximizing power. Several methods, such as the q-value method, have been proposed to estimate the proportion of true null hypotheses among the tested hypotheses and to use this estimate in the control of FDR. These methods usually depend on the assumption that the test statistics are independent (or only weakly correlated). However, many types of data, for example microarray data, often contain large-scale correlation structures. Our objective was to develop methods to control the FDR while maintaining a greater level of power in highly correlated datasets by improving the estimation of the proportion of null hypotheses.
    Results: We showed that when strong correlation exists among the data, which is common in microarray datasets, the estimation of the proportion of null hypotheses can be highly variable, resulting in a high level of variation in the FDR. We therefore developed a re-sampling strategy to reduce this variation by breaking the correlations between gene expression values, and then used a conservative strategy of selecting the upper quartile of the re-sampling estimations to obtain a strong control of FDR.
    Conclusion: In simulation studies and perturbations of actual microarray datasets, our method, compared to competing methods such as the q-value method, generated slightly biased estimates of the proportion of null hypotheses but with lower mean square errors. When selecting genes while controlling the same FDR level, our method has on average a significantly lower false discovery rate in exchange for a minor reduction in power.
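
    A loose sketch of the flavor of such an estimator, not the authors' exact procedure: estimate the proportion of true nulls with Storey's lambda method, then stabilize it by re-sampling the genes' p-values (which breaks the dependence between them) and keeping the conservative upper quartile of the re-sampled estimates. The lambda value and bootstrap count are illustrative.

        import numpy as np

        def pi0_storey(pvals, lam=0.5):
            # Storey's estimator: fraction of p-values above lam, rescaled
            return min(1.0, float(np.mean(pvals > lam)) / (1.0 - lam))

        def pi0_upper_quartile(pvals, n_boot=200, seed=0):
            # re-sample genes with replacement to damp the variance that
            # strong correlation induces, then take the 75th percentile
            rng = np.random.default_rng(seed)
            boots = [pi0_storey(rng.choice(pvals, size=pvals.size, replace=True))
                     for _ in range(n_boot)]
            return float(np.percentile(boots, 75))

        pvals = np.random.default_rng(1).uniform(size=5000)   # toy null p-values
        print(pi0_upper_quartile(pvals))                      # close to 1 under the null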