
    Potential gain as a centrality measure

    Navigability is a distinctive feature of graphs associated with artificial or natural systems whose primary goal is the transportation of information or goods. We say that a graph G is navigable when an agent can efficiently reach any target node in G by means of local routing decisions. In a social network, navigability translates to the ability to reach an individual through personal contacts. Graph navigability is well studied, but a fundamental question is still open: why are some individuals more likely than others to be reached via short, friend-of-a-friend communication chains? In this article we answer this question by proposing a novel centrality metric called the potential gain, which, informally, quantifies how easily a target node can be reached. We define two variants of the potential gain, called the geometric and the exponential potential gain, and present fast algorithms to compute them. The geometric and the exponential potential gain are the first instances of a novel class of composite centrality metrics, i.e., centrality metrics which combine the popularity of a node in G with its similarity to all other nodes. As shown in previous studies, popularity and similarity are two main criteria which regulate the way humans seek information in large networks such as Wikipedia. We give a formal proof that the potential gain of a node is always equivalent to the product of its degree centrality (which captures popularity) and its Katz centrality (which captures similarity).
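    The proven factorization suggests a direct way to compute the metric with off-the-shelf tools. Below is a minimal Python sketch using networkx that takes the degree-times-Katz product as the definition; the paper's geometric and exponential variants and its fast algorithms are not reproduced here, and the attenuation factor alpha is an illustrative choice.

    # Minimal sketch of the stated equivalence: potential gain as the
    # product of degree centrality and Katz centrality (networkx).
    import networkx as nx

    def potential_gain(G, alpha=0.005):
        # alpha is the Katz attenuation factor; it must stay below the
        # reciprocal of the largest eigenvalue of G's adjacency matrix.
        degree = nx.degree_centrality(G)
        katz = nx.katz_centrality_numpy(G, alpha=alpha)
        return {v: degree[v] * katz[v] for v in G}

    G = nx.karate_club_graph()
    scores = potential_gain(G)
    print(max(scores, key=scores.get))  # the most easily reached node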

    Finding the most navigable path in road networks

    The input to the Most Navigable Path (MNP) problem consists of: (a) a road network represented as a directed graph, where each edge is associated with numeric cost and “navigability score” attributes; (b) a source and a destination; and (c) a budget value denoting the maximum permissible cost of the solution. Given the input, MNP aims to determine a path between the source and the destination which maximizes the navigability score while constraining its cost to be within the given budget. The problem can be modeled as the arc orienteering problem, which is known to be NP-hard. The current state of the art for this problem may generate paths containing loops, and its adaptation for MNP that yields simple paths was found to be inefficient. In this paper, we propose five novel algorithms for the MNP problem. Our algorithms first compute a seed path from the source to the destination and then modify the seed path to improve its navigability. We explore two approaches to computing the seed path, and several dynamic-programming-based approaches to modifying it. We also propose an indexing structure for the MNP problem which helps reduce the running time of some of our algorithms. Our experimental results indicate that the proposed algorithms yield comparable or better solutions while being orders of magnitude faster than the current state of the art on large real road networks.
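    As a rough illustration of the seed-and-improve scheme (not the paper's five algorithms, which rely on dynamic programming and an index), the following Python sketch computes a cheapest seed path with networkx and then greedily inserts single-node detours that raise the navigability score while respecting the budget. A directed graph with edge attributes 'cost' and 'nav' is assumed.

    import networkx as nx

    def improve_path(G, source, target, budget):
        # Seed: cheapest path by cost (Dijkstra; nonnegative costs assumed).
        path = nx.shortest_path(G, source, target, weight="cost")
        def cost(p):
            return sum(G[u][v]["cost"] for u, v in zip(p, p[1:]))
        def score(p):
            return sum(G[u][v]["nav"] for u, v in zip(p, p[1:]))
        improved = True
        while improved:
            improved = False
            for i in range(len(path) - 1):
                u, v = path[i], path[i + 1]
                # Try replacing edge (u, v) with a two-edge detour (u, w, v).
                for w in set(G.successors(u)) & set(G.predecessors(v)):
                    if w in path:
                        continue  # keep the path simple (loop-free)
                    cand = path[:i + 1] + [w] + path[i + 1:]
                    if cost(cand) <= budget and score(cand) > score(path):
                        path, improved = cand, True
                        break
                if improved:
                    break
        return path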

    Comparing the hierarchy of keywords in on-line news portals

    The tagging of on-line content with informative keywords is a widespread phenomenon, from scientific article repositories through blogs to on-line news portals. In most cases, the tags on a given item are free words chosen by the authors independently. Therefore, the relations among keywords in a collection of news items are unknown. However, in most cases the topics and concepts described by these keywords form a latent hierarchy, with the more general topics and categories at the top and more specialised ones at the bottom. Here we apply a recent, co-occurrence-based tag hierarchy extraction method to sets of keywords obtained from four different on-line news portals. The resulting hierarchies show substantial differences not just in which topics are rendered important (placed at the top of the hierarchy) or of less interest (categorised low in the hierarchy), but also in the underlying network structure. This reveals discrepancies between the plausible keyword association frameworks of the studied news portals.
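    One common co-occurrence heuristic for this kind of hierarchy extraction, shown below as a hedged Python sketch (the paper's exact method may differ), assigns each tag a parent: the strictly more frequent tag it co-occurs with most often; tags with no more frequent partner become roots.

    from collections import Counter
    from itertools import combinations

    def tag_hierarchy(tagged_items):
        freq, cooc = Counter(), Counter()
        for tags in tagged_items:
            freq.update(set(tags))
            cooc.update(frozenset(p) for p in combinations(sorted(set(tags)), 2))
        parent = {}
        for t in freq:
            candidates = [(cooc[frozenset((t, u))], u)
                          for u in freq if freq[u] > freq[t]]
            if candidates:
                parent[t] = max(candidates)[1]  # strongest co-occurring, more frequent tag
        return parent

    items = [["politics", "election", "vote"],
             ["politics", "economy"],
             ["election", "vote"]]
    print(tag_hierarchy(items))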

    Latent Geometry Inspired Graph Dissimilarities Enhance Affinity Propagation Community Detection in Complex Networks

    Affinity propagation is one of the most effective unsupervised pattern recognition algorithms for data clustering in high-dimensional feature spaces. However, numerous attempts to test its performance for community detection in complex networks have produced results far from those of state-of-the-art methods such as Infomap and Louvain. Yet all these studies agree that the crucial problem is to convert the unweighted network topology into a 'smart-enough' node dissimilarity matrix that can properly drive the message-passing procedure behind affinity propagation clustering. Here we introduce a conceptual innovation and discuss how to leverage notions of network latent geometry in order to design dissimilarity matrices for affinity propagation community detection. Our results demonstrate that the latent-geometry-inspired dissimilarity measures we design bring affinity propagation to equal or outperform current state-of-the-art methods for community detection. These findings are solidly supported on both synthetic 'realistic' networks (with known ground-truth communities) and real networks (with community metadata), even when the data structure is corrupted by noise artificially induced through missing or spurious connectivity.
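    The overall pipeline can be illustrated in a few lines of Python with scikit-learn and networkx. Plain shortest-path distance stands in below for the latent-geometry-inspired dissimilarities designed in the paper (which are not specified in this abstract); affinity propagation receives the negated dissimilarities as similarities.

    import networkx as nx
    import numpy as np
    from sklearn.cluster import AffinityPropagation

    G = nx.karate_club_graph()
    nodes = list(G)
    # Build a node dissimilarity matrix from shortest-path lengths.
    dist = dict(nx.all_pairs_shortest_path_length(G))
    D = np.array([[dist[u][v] for v in nodes] for u in nodes], dtype=float)

    # Affinity propagation expects similarities, so negate the dissimilarities.
    ap = AffinityPropagation(affinity="precomputed", random_state=0)
    labels = ap.fit_predict(-D)
    print(len(set(labels)), "communities")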

    Indoor Localization Accuracy Estimation from Fingerprint Data

    The demand for indoor localization services has led to the development of techniques that create a Fingerprint Map (FM) of sensor signals (e.g., magnetic, Wi-Fi, Bluetooth) at designated positions in an indoor space and then use the FM as a reference for subsequent localization tasks. With such an approach, it is crucial to assess the quality of the FM before deployment, in a manner independent of data origin and applicable at any location of interest, so as to provide deployment staff with information on the quality of localization. Even though FM-based localization algorithms usually provide accuracy estimates during system operation (e.g., visualized as an uncertainty circle or ellipse around the user location), they do not provide any information about the expected accuracy before the actual deployment of the localization service. In this paper, we develop a novel framework for quality assessment of arbitrary FMs, coined ACCES. Our framework comprises a generic interpolation method using Gaussian Processes (GP), upon which a navigability score at any location is derived using the Cramér-Rao Lower Bound (CRLB). Our approach does not rely on the underlying physical model of the fingerprint data. Our extensive experimental study with magnetic FMs, comparing empirical localization accuracy against the derived bounds, demonstrates that the navigability score closely matches the accuracy variations users experience.
    A. Nikitin, C. Laoudias, G. Chatzimilioudis, P. Karras and D. Zeinalipour-Yazti, "Indoor Localization Accuracy Estimation from Fingerprint Data," 2017 18th IEEE International Conference on Mobile Data Management (MDM), Daejeon, 2017, pp. 196-205. doi: 10.1109/MDM.2017.3.
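    Under simplifying assumptions (one GP per fingerprint dimension, additive Gaussian noise of known variance sigma), the GP-plus-CRLB idea can be sketched as follows in Python with scikit-learn; the helper crlb_score and its numerical differentiation are illustrative, not the ACCES implementation.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor

    def crlb_score(X, F, q, sigma=1.0, h=1e-3):
        # X: (n, 2) survey locations, F: (n, d) fingerprints, q: (2,) query point.
        gps = [GaussianProcessRegressor().fit(X, F[:, k])
               for k in range(F.shape[1])]
        J = np.empty((F.shape[1], 2))
        for k, gp in enumerate(gps):
            for a in range(2):  # central differences along x and y
                dq = np.zeros(2); dq[a] = h
                J[k, a] = (gp.predict([q + dq])[0] - gp.predict([q - dq])[0]) / (2 * h)
        # CRLB for additive Gaussian noise; assumes the fingerprint gradients
        # span both coordinates so the Fisher information matrix is invertible.
        fim = J.T @ J / sigma ** 2
        return np.trace(np.linalg.inv(fim))  # lower bound => more navigable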

    Quality modelling and metrics of Web-based information systems

    In recent years, the World Wide Web has become a major platform for software applications. Web-based information systems are involved in many areas of everyday life, such as education, entertainment, business, manufacturing and communication. As web-based systems are usually distributed, multimedia, interactive and cooperative, and their production processes usually follow ad-hoc approaches, the quality of web-based systems has become a major concern. Existing quality models and metrics do not fully satisfy the needs of quality management of Web-based systems. This study has applied and adapted software quality engineering methods and principles to address two issues: a quality modeling method for deriving quality models of Web-based information systems, and the development, implementation and validation of quality metrics for key quality attributes of Web-based information systems, namely navigability and timeliness. The quality modeling method proposed in this study has the following strengths. It is more objective and rigorous than existing approaches. Quality analysis can be conducted on the design in the early stages of the system life cycle. It is easy to use and can provide insight into improving the design of systems. Results of case studies demonstrate that the quality modeling method is applicable and practical; practitioners can use it to develop their own quality models. This study is among the first comprehensive attempts to develop quality measurement for Web-based information systems. First, it identified the relationship between website structural complexity and navigability; quality metrics of navigability were defined, investigated and implemented, and empirical studies were conducted to evaluate them. Second, the study investigated website timeliness and attempted to find direct and indirect measures for this quality attribute; empirical studies validating such metrics were also conducted. The study also suggests four areas of future research that may be fruitful.
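    As a generic illustration of tying structural complexity to navigability (the thesis defines its own metrics, which are not detailed in this abstract), one could model a site as a directed link graph and score navigability by reachability and average click depth from the home page, as in the Python sketch below.

    import networkx as nx

    def navigability(site: nx.DiGraph, home) -> float:
        # Shortest click-paths from the home page to every reachable page.
        lengths = nx.single_source_shortest_path_length(site, home)
        reachable = len(lengths) - 1  # exclude the home page itself
        if reachable == 0:
            return 0.0
        coverage = len(lengths) / site.number_of_nodes()
        avg_clicks = sum(lengths.values()) / reachable
        return coverage / avg_clicks  # higher means easier to navigate

    site = nx.DiGraph([("home", "about"), ("home", "products"),
                       ("products", "item1"), ("item1", "checkout")])
    print(round(navigability(site, "home"), 3))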