
    Grammar-Based Geodesics in Semantic Networks

    A geodesic is the shortest path between two vertices in a connected network. The geodesic is the kernel of various network metrics including radius, diameter, eccentricity, closeness, and betweenness. These metrics are the foundation of much network research and thus have been studied extensively in the domain of single-relational networks (in both their directed and undirected forms). However, geodesics for single-relational networks do not translate directly to multi-relational, or semantic, networks, where vertices are connected to one another by any number of edge labels. Here, a more sophisticated method for calculating a geodesic is necessary. This article presents a technique for calculating geodesics in semantic networks, with a focus on semantic networks represented according to the Resource Description Framework (RDF). In this framework, a discrete "walker" utilizes an abstract path description called a grammar to determine which paths to include in its geodesic calculation. The grammar-based model forms a general framework for studying geodesic metrics in semantic networks.
    Comment: First draft written in 200
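    As a minimal illustration of the walker idea, the Python sketch below computes a geodesic while honouring a "grammar" reduced to a plain set of permitted RDF predicates. The data and identifiers are invented for the example, and the paper's abstract path descriptions are more expressive than a simple label set; this is a sketch under those assumptions, not the authors' implementation.

        from collections import deque

        def grammar_geodesic(graph, source, target, allowed_labels):
            # graph: node -> list of (edge_label, neighbour) pairs, i.e. an
            # adjacency-list view of a set of RDF triples.
            # Assumption for illustration: the "grammar" is just a set of
            # permitted predicates; the paper's grammars are more expressive.
            queue = deque([(source, 0)])
            seen = {source}
            while queue:
                node, dist = queue.popleft()
                if node == target:
                    return dist
                for label, neighbour in graph.get(node, []):
                    if label in allowed_labels and neighbour not in seen:
                        seen.add(neighbour)
                        queue.append((neighbour, dist + 1))
            return float("inf")  # no grammar-conformant path exists

        # Toy RDF-style triples (subject, predicate, object)
        triples = [("a", "knows", "b"), ("b", "worksWith", "c"), ("a", "cites", "c")]
        graph = {}
        for s, p, o in triples:
            graph.setdefault(s, []).append((p, o))
        print(grammar_geodesic(graph, "a", "c", {"knows", "worksWith"}))  # prints 2

    Note that with the grammar {"knows", "worksWith"} the walker must take the two-hop path through "b", even though a one-hop "cites" edge exists: the grammar, not raw adjacency, decides which paths count.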

    A Content-based Centrality Metric for Collaborative Caching in Information-Centric Fogs

    Information-Centric Fog Computing enables a multitude of nodes near the end-users to provide storage, communication, and computing, rather than relying on the cloud. In a fog network, nodes connect with each other directly to get content locally whenever possible. As the topology of the network directly influences the nodes' connectivity, there has been some work to compute the graph centrality of each node within that network topology. The centrality is then used to distinguish nodes in the fog network, or to prioritize some nodes over others to participate in the caching fog. We argue that, for an Information-Centric Fog Computing approach, graph centrality is not an appropriate metric. Indeed, a node with low connectivity that caches a lot of content may play a very valuable role in the network. To capture this, we introduce a content-based centrality (CBC) metric which takes into account how well a node is connected to the content the network is delivering, rather than to the other nodes in the network. To illustrate the validity of considering content-based centrality, we use this new metric for a collaborative caching algorithm. We compare the performance of the proposed collaborative caching with typical centrality-based, non-centrality-based, and non-collaborative caching mechanisms. Our simulation implements CBC on three instances of a large-scale realistic network topology comprising 2,896 nodes with three content replication levels. Results show that CBC outperforms the benchmark caching schemes and yields a roughly 3x improvement in the average cache hit rate.
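    The abstract does not give the CBC formula, but one plausible reading is sketched below in Python: each node is scored by its proximity to the nearest replica of every content item. The 1/(1+d) weighting, the replica map, and all names are assumptions made for illustration, not the paper's definition.

        import heapq

        def dijkstra(adj, source):
            # adj: node -> list of (neighbour, edge_weight) pairs
            dist = {source: 0.0}
            heap = [(0.0, source)]
            while heap:
                d, u = heapq.heappop(heap)
                if d > dist.get(u, float("inf")):
                    continue
                for v, w in adj.get(u, []):
                    if d + w < dist.get(v, float("inf")):
                        dist[v] = d + w
                        heapq.heappush(heap, (d + w, v))
            return dist

        def content_based_centrality(adj, replicas):
            # replicas: content item -> list of nodes currently caching it.
            # Score each node by distance to the nearest replica of every item.
            # The 1/(1+d) weighting is an assumed, illustrative choice,
            # not the formula from the paper.
            cbc = {node: 0.0 for node in adj}
            for item, holders in replicas.items():
                nearest = {}
                for h in holders:
                    for node, d in dijkstra(adj, h).items():
                        nearest[node] = min(d, nearest.get(node, float("inf")))
                for node in adj:
                    cbc[node] += 1.0 / (1.0 + nearest.get(node, float("inf")))
            return cbc

        # Toy fog: "n3" has low graph connectivity but holds both items,
        # so it scores highest, matching the paper's motivating argument.
        adj = {
            "n1": [("n2", 1.0)],
            "n2": [("n1", 1.0), ("n3", 1.0)],
            "n3": [("n2", 1.0)],
        }
        replicas = {"video": ["n3"], "map": ["n1", "n3"]}
        print(content_based_centrality(adj, replicas))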

    The Network Analysis of Urban Streets: A Primal Approach

    The network metaphor in the analysis of urban and territorial cases has a long tradition, especially in transportation/land-use planning and economic geography. More recently, urban design has brought its contribution by means of the "space syntax" methodology. All these approaches, though under different terms like accessibility, proximity, integration, connectivity, cost or effort, focus on the idea that some places (or streets) are more important than others because they are more central. The study of centrality in complex systems, however, originated in other scientific areas, namely in structural sociology, well before its use in urban studies; moreover, as a structural property of the system, centrality has never been extensively investigated metrically in geographic networks as it has been topologically in a wide range of other relational networks like social, biological or technological ones. After two previous works on some structural properties of the dual and primal graph representations of urban street networks (Porta et al. cond-mat/0411241; Crucitti et al. physics/0504163), in this paper we provide an in-depth investigation of centrality in the primal approach as compared to the dual one, with a special focus on potentials for urban design.
    Comment: 19 pages, 4 figures. Paper related to "The Network Analysis of Urban Streets: A Dual Approach", cond-mat/0411241
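    As a concrete reading of metric (rather than topological) centrality in the primal representation, the sketch below computes closeness over a toy street graph whose nodes are intersections and whose edges are segments weighted by length in metres. The graph, weights, and function names are invented for illustration, not taken from the paper.

        import heapq

        def metric_closeness(adj, node):
            # adj: intersection -> list of (neighbour, segment_length_m) pairs.
            # Closeness = (number of reachable nodes) / (sum of metric distances).
            dist = {node: 0.0}
            heap = [(0.0, node)]
            while heap:
                d, u = heapq.heappop(heap)
                if d > dist.get(u, float("inf")):
                    continue
                for v, w in adj.get(u, []):
                    if d + w < dist.get(v, float("inf")):
                        dist[v] = d + w
                        heapq.heappush(heap, (d + w, v))
            others = [d for n, d in dist.items() if n != node]
            return len(others) / sum(others) if others else 0.0

        # Four intersections joined by street segments of given length (metres);
        # purely illustrative data, not the paper's street networks.
        streets = {
            "A": [("B", 120.0), ("C", 80.0)],
            "B": [("A", 120.0), ("D", 60.0)],
            "C": [("A", 80.0), ("D", 150.0)],
            "D": [("B", 60.0), ("C", 150.0)],
        }
        print(metric_closeness(streets, "A"))

    Using metre-valued edge weights instead of hop counts is exactly what distinguishes the metric, primal analysis from the topological measures common in other relational networks.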

    Network depth: identifying median and contours in complex networks

    Centrality descriptors are widely used to rank nodes according to specific concepts of importance. Despite the large number of centrality measures available nowadays, it is still poorly understood how to identify the node which can be considered the 'centre' of a complex network. In fact, this problem corresponds to finding the median of a complex network. The median is a non-parametric and robust estimator of the location parameter of a probability distribution. In this work, we present the most natural generalisation of the concept of median to the realm of complex networks, discussing its advantages for defining the centre of the system and percentiles around that centre. To this aim, we introduce a new statistical data depth and apply it to networks embedded in a geometric space induced by different metrics. The application of our framework to empirical networks allows us to identify median nodes which are socially or biologically relevant.
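    The classical graph median that this data-depth framework builds on can be sketched directly: the node minimising the total shortest-path distance to all others. The Python below shows only that baseline notion on toy data; the paper's depth machinery itself is not reproduced here.

        from collections import deque

        def bfs_distances(adj, source):
            # Single-source shortest-path distances in an unweighted graph.
            dist = {source: 0}
            queue = deque([source])
            while queue:
                u = queue.popleft()
                for v in adj[u]:
                    if v not in dist:
                        dist[v] = dist[u] + 1
                        queue.append(v)
            return dist

        def graph_median(adj):
            # The node minimising total shortest-path distance to all others:
            # the classical median the paper's data-depth notion generalises.
            return min(adj, key=lambda v: sum(bfs_distances(adj, v).values()))

        # On a path graph a-b-c-d-e the median is the middle node "c"
        adj = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c", "e"], "e": ["d"]}
        print(graph_median(adj))  # prints c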

    Fast Shortest Path Distance Estimation in Large Networks

    We study the problem of preprocessing a large graph so that point-to-point shortest-path queries can be answered very fast. Computing shortest paths is a well-studied problem, but exact algorithms do not scale to the huge graphs encountered on the web, in social networks, and in other applications. In this paper we focus on approximate methods for distance estimation, in particular using landmark-based distance indexing. This approach involves selecting a subset of nodes as landmarks and computing (offline) the distances from each node in the graph to those landmarks. At runtime, when the distance between a pair of nodes is needed, we can estimate it quickly by combining the precomputed distances of the two nodes to the landmarks. We prove that selecting the optimal set of landmarks is an NP-hard problem, and thus heuristic solutions need to be employed. Given a budget of memory for the index, which translates directly into a budget of landmarks, different landmark selection strategies can yield dramatically different results in terms of accuracy. A number of simple methods that scale well to large graphs are therefore developed and experimentally compared. The simplest methods choose central nodes of the graph, while the more elaborate ones select central nodes that are also far away from one another. The efficiency of the suggested techniques is tested experimentally using five different real-world graphs with millions of edges; for a given accuracy, they require up to 250 times less space than the current approach in the literature, which selects landmarks at random. Finally, we study applications of our method in two problems arising naturally in large-scale networks, namely social search and community detection.
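    A minimal sketch of the landmark technique in Python: offline, run one BFS per landmark; online, combine the stored distances via the triangle inequality into an upper-bound estimate. The landmark choice and toy graph below are arbitrary and do not reflect the paper's selection strategies.

        from collections import deque

        def bfs_distances(adj, source):
            # Exact single-source distances in an unweighted graph (offline step).
            dist = {source: 0}
            queue = deque([source])
            while queue:
                u = queue.popleft()
                for v in adj[u]:
                    if v not in dist:
                        dist[v] = dist[u] + 1
                        queue.append(v)
            return dist

        def build_index(adj, landmarks):
            # One BFS per landmark; the index stores d(l, v) for every node v.
            return {l: bfs_distances(adj, l) for l in landmarks}

        def estimate_distance(index, u, v):
            # Online step: by the triangle inequality,
            # d(u, v) <= d(u, l) + d(l, v) for every landmark l,
            # so the tightest such sum is an upper-bound estimate.
            return min(index[l][u] + index[l][v] for l in index)

        # Toy graph; landmarks chosen by hand, not by the paper's heuristics.
        adj = {1: [2], 2: [1, 3, 4], 3: [2, 5], 4: [2, 5], 5: [3, 4]}
        index = build_index(adj, landmarks=[2, 5])
        print(estimate_distance(index, 1, 5))  # prints 3, here the exact distance

    The index costs one stored distance per node per landmark, which is why the memory budget translates directly into a landmark budget, and why landmark placement drives estimation accuracy.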