
    Multilayer Complex Network Descriptors for Color-Texture Characterization

    A new method based on complex networks is proposed for color-texture analysis. The proposal consists of modeling the image as a multilayer complex network, where each color channel is a layer and each pixel (in each color channel) is represented as a network vertex. The network's dynamic evolution is assessed using a set of modeling parameters (radii and thresholds), and new characterization techniques are introduced to capture information on spatial interaction within and between color channels. An automatic and adaptive approach for threshold selection is also proposed. We conduct classification experiments on 5 well-known datasets: Vistex, Usptex, Outex13, CUReT and MBT. Results are compared with various literature methods, including deep convolutional neural networks with pre-trained architectures. The proposed method achieved the highest overall performance over the 5 datasets, with 97.7% mean accuracy against the 97.0% achieved by the ResNet convolutional neural network with 50 layers. Comment: 20 pages, 7 figures and 4 tables
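As an illustration only (not the authors' code), the modeling step in the abstract can be sketched as follows: each pixel in each color channel becomes a vertex, and two vertices are linked when the pixels lie within a radius and their intensity difference is below a threshold. The linking rule and the function name are assumptions for this sketch, not details from the paper.

```python
def multilayer_edges(img, radius=1, threshold=0.1):
    """Edges of a multilayer pixel network built from an image.

    img: nested list img[y][x][c] of intensities in [0, 1]; each color
    channel c is a layer and each pixel in each layer is a vertex.
    Two vertices are linked when the pixels lie within `radius`
    (Euclidean distance) and their intensity difference is below
    `threshold` -- a simplified reading of the model in the abstract.
    """
    H, W, C = len(img), len(img[0]), len(img[0][0])
    edges = []
    for c1 in range(C):          # within- and between-layer pairs
        for c2 in range(C):
            for y in range(H):
                for x in range(W):
                    for ny in range(max(0, y - radius), min(H, y + radius + 1)):
                        for nx in range(max(0, x - radius), min(W, x + radius + 1)):
                            u, v = (c1, y, x), (c2, ny, nx)
                            if u >= v:  # count each undirected edge once
                                continue
                            if (y - ny) ** 2 + (x - nx) ** 2 > radius ** 2:
                                continue
                            if abs(img[y][x][c1] - img[ny][nx][c2]) < threshold:
                                edges.append((u, v))
    return edges
```

Sweeping `radius` and `threshold` over a set of values, as the abstract describes, would yield a family of networks whose evolving topology characterizes the texture.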

    Weighted and unweighted network of amino acids within protein

    The information regarding the structure of a single protein is encoded in the network of interacting amino acids. Considering each protein as a weighted and unweighted network of amino acids, we have analyzed a total of forty-nine protein structures that cover the three branches of life on Earth. Our results show that the probability degree distribution of network connectivity follows a Poisson distribution, whereas the probability strength distribution does not follow any known distribution. However, the average strength of an amino acid node depends on its degree (k). For some of the proteins, the strength of a node increases linearly with k. For a set of other proteins, although the strength increases linearly with k for smaller values of k, we have not obtained any clear functional relationship between strength and degree at higher values of k. The results also show that the weight of edges belonging to the highly connected nodes tends to have a higher value. The result that the average clustering coefficient of the weighted network is less than that of the unweighted network implies that the topological clustering is generated by edges with low weights. The ratio of the average clustering coefficient of a protein network to that of the corresponding classical random network varies linearly with the number (N) of amino acids of a protein, whereas the ratio of characteristic path lengths varies logarithmically with N. The power-law behaviour of the clustering coefficients of the weighted and unweighted networks as a function of degree k indicates that the network has the signature of a hierarchical network. It has also been observed that the network is of assortative type.
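The two quantities this abstract compares per node, degree k (number of neighbours) and strength s (sum of incident edge weights), can be computed with a minimal sketch; the edge-list representation and function name below are illustrative assumptions, not the authors' code.

```python
from collections import defaultdict

def degree_and_strength(weighted_edges):
    """Per-node degree k_i and strength s_i of an undirected weighted
    network given as (u, v, weight) triples -- the two quantities whose
    relationship the abstract analyses.
    """
    degree = defaultdict(int)      # k_i: number of incident edges
    strength = defaultdict(float)  # s_i: sum of incident edge weights
    for u, v, w in weighted_edges:
        degree[u] += 1
        degree[v] += 1
        strength[u] += w
        strength[v] += w
    return dict(degree), dict(strength)
```

Plotting the average strength against k over all nodes would reveal the linear (or partially linear) dependence reported in the abstract.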

    The Architecture of a Novel Weighted Network: Knowledge Network

    Networked structures have emerged from a wide range of fields such as biological systems, the World Wide Web and technological infrastructure, and deep insight into the topological complexity of these networks has been gained. Some works have started to pay attention to weighted networks, such as the world-wide airport network and the collaboration network, where links are not binary but have intensities. Here, we construct a novel knowledge network, through which we take the first step towards uncovering the topological structure of the knowledge system. Furthermore, the network is extended to a weighted one by assigning weights to the edges. Thus, we also investigate the relationship between the intensity of edges and the topological structure. These results provide a novel description for understanding the hierarchies and organizational principles of the knowledge system and the interaction between edge intensity and topological structure. This system also provides a good paradigm for studying weighted networks. Comment: 5 figures, 11 pages

    Greedy Strategy Works for k-Center Clustering with Outliers and Coreset Construction

    We study the problem of k-center clustering with outliers in arbitrary metrics and Euclidean space. Though a number of methods have been developed in the past decades, it is still quite challenging to design a quality-guaranteed algorithm with low complexity for this problem. Our idea is inspired by the greedy method, Gonzalez's algorithm, for solving the problem of ordinary k-center clustering. Based on some novel observations, we show that this greedy strategy can in fact handle k-center clustering with outliers efficiently, in terms of both clustering quality and time complexity. We further show that the greedy approach yields a small coreset for the problem in doubling metrics, so as to reduce the time complexity significantly. Our algorithms are easy to implement in practice. We test our method on both synthetic and real datasets. The experimental results suggest that our algorithms can achieve near-optimal solutions and yield lower running times compared with existing methods.
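For context, the greedy strategy the abstract builds on, Gonzalez's algorithm for ordinary k-center (without outliers), can be sketched in a few lines: repeatedly pick the point farthest from the current set of centers. This is the classic 2-approximation, not the paper's outlier-handling extension.

```python
def gonzalez_k_center(points, k, dist):
    """Gonzalez's greedy 2-approximation for ordinary k-center:
    start from an arbitrary point, then repeatedly add the point
    farthest from its nearest current center.
    """
    centers = [points[0]]  # arbitrary first center
    while len(centers) < k:
        farthest = max(points,
                       key=lambda p: min(dist(p, c) for c in centers))
        centers.append(farthest)
    return centers
```

Each iteration costs O(nk) distance evaluations; the paper's contribution is showing how such a greedy pass can be adapted to discard outliers while keeping quality guarantees.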

    Scalable Online Betweenness Centrality in Evolving Graphs

    Betweenness centrality is a classic measure that quantifies the importance of a graph element (vertex or edge) according to the fraction of shortest paths passing through it. This measure is notoriously expensive to compute, and the best known algorithm runs in O(nm) time. The problems of efficiency and scalability are exacerbated in a dynamic setting, where the input is an evolving graph seen edge by edge, and the goal is to keep the betweenness centrality up to date. In this paper we propose the first truly scalable algorithm for online computation of betweenness centrality of both vertices and edges in an evolving graph where new edges are added and existing edges are removed. Our algorithm is carefully engineered with out-of-core techniques and tailored for modern parallel stream processing engines that run on clusters of shared-nothing commodity hardware. Hence, it is amenable to real-world deployment. We experiment on graphs that are two orders of magnitude larger than those of previous studies. Our method is able to keep the betweenness centrality measures up to date online, i.e., the time to update the measures is smaller than the inter-arrival time between two consecutive updates. Comment: 15 pages, 9 figures, accepted for publication in IEEE Transactions on Knowledge and Data Engineering
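The O(nm) static baseline the abstract refers to is Brandes' algorithm; a minimal sketch for unweighted, undirected graphs is below. This is the classic batch computation, not the paper's online variant, and the adjacency-dict representation is an assumption of this sketch.

```python
from collections import deque

def betweenness(adj):
    """Brandes' algorithm for vertex betweenness on an unweighted,
    undirected graph given as {node: [neighbours]}: one BFS per source,
    then dependency accumulation in reverse BFS order.
    """
    bc = {v: 0.0 for v in adj}
    for s in adj:
        sigma = {v: 0 for v in adj}; sigma[s] = 1   # shortest-path counts
        dist = {v: -1 for v in adj}; dist[s] = 0
        preds = {v: [] for v in adj}
        order = []
        q = deque([s])
        while q:
            v = q.popleft()
            order.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    q.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        delta = {v: 0.0 for v in adj}
        for w in reversed(order):  # accumulate dependencies
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    # undirected graph: every pair is counted from both endpoints
    return {v: b / 2 for v, b in bc.items()}
```

One BFS plus accumulation costs O(m) per source, hence O(nm) overall; the paper's algorithm avoids rerunning this from scratch on every edge insertion or deletion.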