
    Uncovering missing links with cold ends

    To evaluate the performance of prediction of missing links, the known data are randomly divided into two parts, the training set and the probe set. We argue that this straightforward and standard method may lead to a terrible bias, since in real biological and information networks, missing links are more likely to be links connecting low-degree nodes. We therefore study how to uncover missing links with low-degree nodes, namely, links in the probe set have lower degree products than those in a random sampling. Experimental analysis on ten local similarity indices and four disparate real networks reveals a surprising result that the Leicht-Holme-Newman index [E. A. Leicht, P. Holme, and M. E. J. Newman, Phys. Rev. E 73, 026120 (2006)] performs the best, although it was known to be one of the worst indices if the probe set is a random sampling of all links. We further propose a parameter-dependent index, which considerably improves the prediction accuracy. Finally, we show the relevance of the proposed index on three real sampling methods. Comment: 16 pages, 5 figures, 6 tables
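
    For context, the Leicht-Holme-Newman (LHN) index mentioned above scores a node pair by its number of common neighbours divided by the product of the two node degrees. The Python sketch below only illustrates that formula; the toy graph and the function name are assumptions, not material from the paper.

    # Minimal sketch of the LHN similarity index: |N(x) & N(y)| / (k_x * k_y).
    # The adjacency data below is a made-up example.
    def lhn_index(adj, x, y):
        common = len(adj[x] & adj[y])
        kx, ky = len(adj[x]), len(adj[y])
        return common / (kx * ky) if kx and ky else 0.0

    adj = {
        "a": {"b", "c"},
        "b": {"a", "c", "d"},
        "c": {"a", "b"},
        "d": {"b"},
    }
    print(lhn_index(adj, "a", "d"))  # one common neighbour "b" -> 1 / (2 * 1) = 0.5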

    Distributed Opportunistic Scheduling for MIMO Ad-Hoc Networks

    Distributed opportunistic scheduling (DOS) protocols are proposed for multiple-input multiple-output (MIMO) ad-hoc networks with contention-based medium access. The proposed scheduling protocols distinguish themselves from other existing works by their explicit design for system throughput improvement through exploiting spatial multiplexing and diversity in a distributed manner. As a result, multiple links can be scheduled to simultaneously transmit over the spatial channels formed by transmit/receive antennas. Taking into account the tradeoff between feedback requirements and system throughput, we propose and compare protocols with different levels of feedback information. Furthermore, in contrast to conventional random access protocols that ignore the physical channel conditions of contending links, the proposed protocols implement a pure threshold policy derived from optimal stopping theory, i.e., only links with threshold-exceeding channel conditions are allowed for data transmission. Simulation results confirm that the proposed protocols can achieve impressive throughput performance by exploiting spatial multiplexing and diversity. Comment: Proceedings of the 2008 IEEE International Conference on Communications, Beijing, May 19-23, 2008
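
    As a toy illustration of the pure threshold policy described above, the sketch below simulates links that keep probing the channel and transmit only when the observed rate exceeds a threshold. The exponential rate model, the threshold values and the probing/transmission durations are illustrative assumptions, not parameters from the paper.

    import random

    def simulate_threshold_policy(threshold, rounds=100_000, probe_time=1.0, tx_time=10.0):
        # Accumulate delivered bits and elapsed time over many contention rounds.
        total_bits, total_time = 0.0, 0.0
        for _ in range(rounds):
            rate = random.expovariate(1.0)   # observed channel rate (assumed model)
            total_time += probe_time         # cost of one probing/contention slot
            if rate >= threshold:            # stop and transmit only above the threshold
                total_bits += rate * tx_time
                total_time += tx_time
        return total_bits / total_time       # long-run throughput

    random.seed(0)
    for th in (0.5, 1.0, 2.0):
        print(f"threshold={th:.1f}  throughput={simulate_threshold_policy(th):.3f}")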

    Sensor Networks with Random Links: Topology Design for Distributed Consensus

    In a sensor network, in practice, the communication among sensors is subject to: (1) errors or failures at random times; (2) costs; and (3) constraints, since sensors and networks operate under scarce resources, such as power, data rate, or communication. The signal-to-noise ratio (SNR) is usually a main factor in determining the probability of error (or of communication failure) in a link. These probabilities are then a proxy for the SNR under which the links operate. The paper studies the problem of designing the topology, i.e., assigning the probabilities of reliable communication among sensors (or of link failures) to maximize the rate of convergence of average consensus, when the link communication costs are taken into account, and there is an overall communication budget constraint. To consider this problem, we address a number of preliminary issues: (1) model the network as a random topology; (2) establish necessary and sufficient conditions for mean square sense (mss) and almost sure (a.s.) convergence of average consensus when network links fail; and, in particular, (3) show that a necessary and sufficient condition for both mss and a.s. convergence is for the algebraic connectivity of the mean graph describing the network topology to be strictly positive. With these results, we formulate topology design, subject to random link failures and to a communication cost constraint, as a constrained convex optimization problem to which we apply semidefinite programming techniques. We show by an extensive numerical study that the optimal design improves significantly the convergence speed of the consensus algorithm and can achieve the asymptotic performance of a non-random network at a fraction of the communication cost. Comment: Submitted to IEEE Transaction
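
    The convergence condition stated above is easy to check numerically: build the mean graph whose edge weights are the link formation probabilities and test whether its second-smallest Laplacian eigenvalue (the algebraic connectivity) is strictly positive. The sketch below assumes a made-up 4-sensor probability matrix, not data from the paper.

    import numpy as np

    def algebraic_connectivity(p):
        # Weighted Laplacian L = D - P of the mean graph, then its Fiedler value.
        lap = np.diag(p.sum(axis=1)) - p
        return np.sort(np.linalg.eigvalsh(lap))[1]

    # Hypothetical probabilities of reliable communication between 4 sensors.
    P = np.array([[0.0, 0.8, 0.0, 0.3],
                  [0.8, 0.0, 0.5, 0.0],
                  [0.0, 0.5, 0.0, 0.9],
                  [0.3, 0.0, 0.9, 0.0]])

    lam2 = algebraic_connectivity(P)
    print(f"lambda_2 = {lam2:.3f} -> average consensus converges: {lam2 > 0}")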

    Simulation based Study of TCP Variants in Hybrid Network

    © ASEE 2011. Transmission Control Protocol (TCP) was originally designed for fixed networks to provide reliable data delivery. TCP performance has also been improved for different types of networks through the introduction of new TCP variants. However, there are still many factors that affect the performance of TCP. Mobility is one of the major factors affecting TCP performance in wireless networks and MANETs (Mobile Ad Hoc Networks). To determine the best TCP variant from a mobility point of view, we simulate several TCP variants in a real-life scenario. This paper addresses the performance of TCP variants such as TCP-Tahoe, TCP-Reno, TCP-New Reno, TCP-Vegas, TCP-SACK and TCP-Westwood from a mobility point of view. The scenarios presented in this paper are supported by the Zone Routing Protocol (ZRP) with the integration of the random waypoint mobility model in a MANET area. The scenario covers speeds ranging from a walking person to a vehicle and is particularly suited to mountainous and deserted areas. On the basis of the simulation, we analyze round-trip time (RTT) fairness, end-to-end delay, control overhead, and the number of broken links during the delivery of data. Finally, the analyzed parameters help to identify the best TCP variant.
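
    As a small illustration of the random waypoint mobility model used in the scenarios above, the sketch below generates a single node trace: the node repeatedly picks a random destination and speed, moves there in a straight line, then pauses. The area size, the speed range (roughly a walking person up to a slow vehicle) and the pause time are assumptions, not the paper's simulation settings.

    import random

    def random_waypoint(steps, area=(1000.0, 1000.0), speed=(1.0, 15.0), pause=2.0, dt=1.0):
        x, y = random.uniform(0, area[0]), random.uniform(0, area[1])
        trace = []
        while len(trace) < steps:
            dest = (random.uniform(0, area[0]), random.uniform(0, area[1]))
            v = random.uniform(*speed)
            dist = ((dest[0] - x) ** 2 + (dest[1] - y) ** 2) ** 0.5
            n = max(1, int(dist / (v * dt)))          # number of movement steps
            for i in range(1, n + 1):                 # move toward the waypoint
                trace.append((x + (dest[0] - x) * i / n, y + (dest[1] - y) * i / n))
            x, y = dest
            trace.extend([(x, y)] * int(pause / dt))  # pause at the waypoint
        return trace[:steps]

    random.seed(1)
    print(random_waypoint(5))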

    Cache timeout strategies for on-demand routing in MANETs

    Varying the route caching scheme can significantly change network performance for on-demand routing protocols in mobile ad hoc networks (MANETs). Initial route caching schemes retain paths or links until they are shown to be broken. However, stale routing information can degrade network performance through latency and extra routing overhead. Therefore, more recent caching schemes delete links at some fixed time after they enter the cache. This paper proposes using either the expected path duration or the link residual time as the link cache timeout. These mobility metrics are theoretically calculated for an appropriate random mobility model. Simulation results in NS2 show that both of the proposed link caching schemes can improve network performance in the dynamic source routing protocol (DSR) by reducing dropped data packets, latency and routing overhead, with the link residual time scheme outperforming the path duration scheme.
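
    The sketch below illustrates the general idea of a timeout-based link cache in the spirit of the schemes above: each cached link expires after an estimated residual lifetime instead of persisting until a route break is observed. The class name and the fixed residual-time value are placeholders; the paper derives its timeouts from a random mobility model.

    import time

    class LinkCache:
        def __init__(self):
            self._expiry = {}                     # (u, v) -> absolute expiry time

        def add(self, u, v, residual_time):
            self._expiry[(u, v)] = time.time() + residual_time

        def is_valid(self, u, v):
            exp = self._expiry.get((u, v))
            if exp is None or exp < time.time():  # unknown or timed out -> stale
                self._expiry.pop((u, v), None)
                return False
            return True

    cache = LinkCache()
    cache.add("n1", "n2", residual_time=5.0)      # placeholder estimate in seconds
    print(cache.is_valid("n1", "n2"))             # True while the timeout has not elapsed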

    Efficient link cuts in online social networks

    Due to the huge popularity of online social networks, many researchers focus on adding links, e.g., link prediction to help friend recommendation. So far, no research has been performed on link cuts. However, the spread of malware and misinformation can cause havoc, and hence it is interesting to see how to cut links such that malware and misinformation will not run rampant. In fact, many online social networks can be modelled as undirected graphs, with nodes representing users and edges standing for relationships between users. In this paper, we investigate different strategies to cut links among users in undirected graphs so that the spread of viruses and misinformation is slowed down as much as possible or even cut off. Our algorithm is very flexible and can be applied to other networks. For example, it can be applied to email networks to stop the spread of viruses and spam emails; it can also be used in neural networks to stop the diffusion of worms and diseases. Two measures are chosen to evaluate the performance of these strategies: Average Inverse of Shortest Path Length (AIPL) and Rumor Saturation Rate (RSR). AIPL measures the communication efficiency of the whole graph, while RSR checks the percentage of users receiving information within a certain time interval. Compared to AIPL, RSR is an even better measure as it concentrates on the spread of specific rumors in online networks. Our experiments are performed on both synthetic data and Facebook data. According to the evaluation on the two measures, our algorithm performs better than random cuts, and different strategies perform better in the situations they suit.
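
    The AIPL measure mentioned above has a simple definition: the average of 1/d(u, v) over all ordered node pairs, with 1/infinity taken as 0 for disconnected pairs. The sketch below computes it with breadth-first search; the toy path graph and the single edge cut are made-up examples, not the Facebook data from the paper.

    from collections import deque

    def aipl(adj):
        nodes = list(adj)
        total, pairs = 0.0, 0
        for s in nodes:
            dist = {s: 0}
            q = deque([s])
            while q:                              # BFS shortest paths from s
                u = q.popleft()
                for v in adj[u]:
                    if v not in dist:
                        dist[v] = dist[u] + 1
                        q.append(v)
            for t in nodes:
                if t != s:
                    pairs += 1
                    total += 1.0 / dist[t] if t in dist else 0.0
        return total / pairs

    adj = {"a": {"b"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"c"}}
    print(aipl(adj))                              # efficiency of the path a-b-c-d
    adj["b"].discard("c"); adj["c"].discard("b")  # cut one link
    print(aipl(adj))                              # efficiency drops after the cut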

    Googling the brain: discovering hierarchical and asymmetric network structures, with applications in neuroscience

    Hierarchical organisation is a common feature of many directed networks arising in nature and technology. For example, a well-defined message-passing framework based on managerial status typically exists in a business organisation. However, in many real-world networks such patterns of hierarchy are unlikely to be quite so transparent. Due to the way in which empirical data are collated, the nodes will often be ordered so as to obscure any underlying structure. In addition, the possibility of even a small number of links violating any overall “chain of command” makes the determination of such structures extremely challenging. Here we address the issue of how to reorder a directed network in order to reveal this type of hierarchy. In doing so we also look at the task of quantifying the level of hierarchy, given a particular node ordering. We look at a variety of approaches. Using ideas from the graph Laplacian literature, we show that a relevant discrete optimization problem leads to a natural hierarchical node ranking. We also show that this ranking arises via a maximum likelihood problem associated with a new range-dependent hierarchical random graph model. This random graph insight allows us to compute a likelihood ratio that quantifies the overall tendency for a given network to be hierarchical. We also develop a generalization of this node ordering algorithm based on the combinatorics of directed walks. In passing, we note that Google’s PageRank algorithm tackles a closely related problem, and may also be motivated from a combinatoric, walk-counting viewpoint. We illustrate the performance of the resulting algorithms on synthetic network data, and on a real-world network from neuroscience where results may be validated biologically.
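
    As a toy version of the sub-task mentioned above of quantifying hierarchy for a given node ordering, the sketch below scores an ordering by the fraction of directed edges that point down it. This crude proxy and the toy chain-of-command graph are assumptions; the paper itself uses a Laplacian-based ranking and a likelihood-ratio statistic.

    def hierarchy_score(edges, ordering):
        # Fraction of directed edges (u, v) with u ranked above v in the ordering.
        rank = {node: i for i, node in enumerate(ordering)}
        down = sum(1 for u, v in edges if rank[u] < rank[v])
        return down / len(edges)

    # Hypothetical chain of command with one violating back-edge.
    edges = [("ceo", "mgr1"), ("ceo", "mgr2"), ("mgr1", "emp1"),
             ("mgr2", "emp2"), ("emp2", "mgr1")]
    print(hierarchy_score(edges, ["ceo", "mgr1", "mgr2", "emp1", "emp2"]))  # 4/5 = 0.8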