
    QoS Simulation and analysis of HTTP over LEO satellite constellation

    In this paper, we present end-to-end QoS simulation studies of internetworking between remote LANs and long-range communications over the LEO Iridium satellite constellation, taking the SuperJARING network in Malaysia as an example. A macro-level network simulation scenario based on the actual network topology in Malaysia is implemented as a DiffServ network model using the network simulator ns-2. Web traffic (HTTP) is used as the Internet traffic model in the simulation analysis. All simulations are carried out in error-free and link-loss environments. In the error-free simulations, the cumulative network traffic load is varied among 20%, 50%, and 80%, while in the link-loss simulations only a 20% traffic load is used, with the bit error rate (BER) varied among 1x10^-5, 1x10^-4, and 2x10^-4. We compare the empirical TCP throughput traces with an analytical model for validation. The results show clearly that QoS can be achieved with IP DiffServ over a satellite constellation such as Iridium.
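    The abstract validates empirical TCP throughput traces against an analytical model without naming it. As a minimal sketch of how the quoted BER values translate into rough throughput estimates, the widely used Mathis et al. steady-state formula is shown below; the MSS, the assumed end-to-end RTT, and the BER-to-packet-loss mapping are illustrative assumptions, not parameters taken from the paper.

```python
import math

def packet_loss_from_ber(ber: float, packet_bits: int) -> float:
    """Probability that at least one bit in a packet of the given size is corrupted."""
    return 1.0 - (1.0 - ber) ** packet_bits

def mathis_throughput(mss_bytes: int, rtt_s: float, loss_rate: float) -> float:
    """Steady-state TCP throughput (bytes/s) from the Mathis et al. model:
    throughput ~= MSS / (RTT * sqrt(2p/3))."""
    return mss_bytes / (rtt_s * math.sqrt(2.0 * loss_rate / 3.0))

MSS = 1460                      # bytes; a typical Ethernet-derived value (assumption)
RTT = 0.120                     # seconds; assumed end-to-end RTT over the LEO path
for ber in (1e-5, 1e-4, 2e-4):  # the BERs used in the link-loss simulations
    p = packet_loss_from_ber(ber, MSS * 8)
    print(f"BER {ber:.0e} -> packet loss {p:.3f}, "
          f"Mathis throughput ~ {mathis_throughput(MSS, RTT, p) / 1e3:.1f} kB/s")
```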

    Agents, Bookmarks and Clicks: A topical model of Web traffic

    Analysis of aggregate and individual Web traffic has shown that PageRank is a poor model of how people navigate the Web. Using the empirical traffic patterns generated by a thousand users, we characterize several properties of Web traffic that cannot be reproduced by Markovian models. We examine both aggregate statistics capturing collective behavior, such as page and link traffic, and individual statistics, such as entropy and session size. No model currently explains all of these empirical observations simultaneously. We show that all of these traffic patterns can be explained by an agent-based model that takes into account several realistic browsing behaviors. First, agents maintain individual lists of bookmarks (a non-Markovian memory mechanism) that are used as teleportation targets. Second, agents can retreat along visited links, a branching mechanism that also allows us to reproduce behaviors such as the use of a back button and tabbed browsing. Finally, agents are sustained by visiting novel pages of topical interest, with adjacent pages being more topically related to each other than distant ones. This modulates the probability that an agent continues to browse or starts a new session, allowing us to recreate heterogeneous session lengths. The resulting model is capable of reproducing the collective and individual behaviors we observe in the empirical data, reconciling the narrowly focused browsing patterns of individual users with the extreme heterogeneity of aggregate traffic measurements. This result allows us to identify a few salient features that are necessary and sufficient to interpret the browsing patterns observed in our data. In addition to the descriptive and explanatory power of such a model, our results may lead the way to more sophisticated, realistic, and effective ranking and crawling algorithms. Comment: 10 pages, 16 figures, 1 table. Long version of a paper to appear in Proceedings of the 21st ACM Conference on Hypertext and Hypermedia.
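    A minimal sketch of the three mechanisms described above: bookmark teleportation, a back-button stack, and novelty-driven session termination as a crude stand-in for topical interest, run on a toy three-page graph. The probabilities, the graph, and the stopping rule are illustrative assumptions; the paper's calibrated agent-based model is richer than this.

```python
import random

def browse_session(graph, bookmarks, p_back=0.3, p_stop_base=0.05, max_steps=200):
    """One browsing session of a single agent on a directed toy graph.

    graph: dict mapping page -> list of outgoing links.
    bookmarks: shared, growing list of pages used as teleportation targets.
    """
    page = random.choice(bookmarks)          # sessions start from a bookmark
    history, trail = [page], []              # visited pages, back-button stack
    for _ in range(max_steps):
        # Novel pages sustain interest; revisits make stopping more likely.
        p_stop = p_stop_base if page not in history[:-1] else 3 * p_stop_base
        if random.random() < p_stop:
            break
        if trail and random.random() < p_back:
            page = trail.pop()               # press the back button
        else:
            links = graph.get(page, [])
            if not links:
                break
            trail.append(page)
            page = random.choice(links)      # follow an outgoing link
        history.append(page)
    bookmarks.append(page)                   # remember where this session ended
    return history

toy_graph = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A"]}
bookmarks = ["A"]                            # shared bookmark list across sessions
sessions = [browse_session(toy_graph, bookmarks) for _ in range(1000)]
print("mean session size:", sum(len(s) for s in sessions) / len(sessions))
```

    Even this toy version produces a heavy spread of session sizes, because sessions that keep hitting novel pages continue for much longer than sessions that quickly revisit known ones.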

    Fluctuation-driven capacity distribution in complex networks

    Maximizing robustness and minimizing cost are common objectives in the design of infrastructure networks. However, most infrastructure networks evolve and operate in a highly decentralized fashion, which may significantly impact the allocation of resources across the system. Here, we investigate this trade-off by focusing on the relation between capacity and load in different types of real-world communication and transportation networks. We find strong empirical evidence that the actual capacity of the network elements tends to be similar to the maximum available capacity if the cost is not strongly constraining. As more weight is given to the cost, however, the capacity approaches the load nonlinearly. In particular, all systems analyzed show larger unoccupied portions of the capacities on network elements subjected to smaller loads, which is in sharp contrast with the assumptions involved in the (linear) models proposed in previous theoretical studies. We describe the observed behavior of the capacity-load relation as a function of the relative importance of the cost by using a model that optimizes capacities to cope with network traffic fluctuations. These results suggest that infrastructure systems have evolved under pressure to minimize local failures, but not necessarily the global failures that can be caused by the spread of local damage through cascading processes.
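    As an illustration of a fluctuation-driven allocation rule, the sketch below gives each element headroom proportional to the assumed size of its load fluctuations (taken here to scale as the square root of the mean load), scaled down as cost becomes more important. The functional form and constants are assumptions made only for illustration, not the paper's optimization, but they reproduce the qualitative observation that lightly loaded elements keep proportionally more unoccupied capacity.

```python
import numpy as np

def allocate_capacity(mean_load, cost_weight, k=3.0):
    """Toy fluctuation-driven capacity rule: C_i = L_i + (k / cost_weight) * sqrt(L_i).

    The sqrt term stands in for the typical size of traffic fluctuations
    (Poisson-like assumption); larger cost_weight shrinks the safety margin.
    """
    mean_load = np.asarray(mean_load, dtype=float)
    return mean_load + (k / cost_weight) * np.sqrt(mean_load)

loads = np.array([10.0, 100.0, 1000.0, 10000.0])
for w in (1.0, 10.0):
    caps = allocate_capacity(loads, cost_weight=w)
    spare = (caps - loads) / loads            # relative unoccupied capacity
    print(f"cost weight {w}: relative spare capacity {np.round(spare, 3)}")
```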

    Robust Anomaly Detection in Dynamic Networks

    We propose two robust methods for anomaly detection in dynamic networks in which the properties of normal traffic are time-varying. We formulate the robust anomaly detection problem as a binary composite hypothesis testing problem and propose two methods, a model-free and a model-based one, leveraging techniques from the theory of large deviations. Both methods require a family of Probability Laws (PLs) that represent the normal properties of traffic, and we devise a two-step procedure to estimate this family of PLs. We compare the performance of our robust methods and their vanilla counterparts, which assume that normal traffic is stationary, on a network with a diurnal normal pattern and a common anomaly related to data exfiltration. Simulation results show that our robust methods perform better than their vanilla counterparts in dynamic networks. Comment: 6 pages. MED conference.
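    A minimal sketch of the model-free idea: quantize a traffic window into a histogram, compare its empirical law against every member of a family of normal Probability Laws using the KL divergence (the large-deviations rate function for i.i.d. samples), and raise an alarm only if even the best-matching PL is far away. The binning, the example PLs, and the threshold below are assumptions for illustration, not the paper's estimated family or test statistic.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL divergence D(p || q) between two discrete distributions."""
    p, q = np.asarray(p, float) + eps, np.asarray(q, float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def robust_anomaly_score(window_counts, pl_family):
    """Model-free robust score: distance from the empirical law of a traffic
    window to the *closest* member of the family of normal PLs."""
    empirical = window_counts / window_counts.sum()
    return min(kl_divergence(empirical, pl) for pl in pl_family)

# Hypothetical example: 4-bin histograms of flow sizes, two normal PLs
# (daytime and nighttime), and one window to score.
pl_family = [np.array([0.5, 0.3, 0.15, 0.05]),   # daytime normal law
             np.array([0.7, 0.2, 0.08, 0.02])]   # nighttime normal law
window = np.array([30, 10, 5, 55], dtype=float)  # suspicious: heavy last bin
score = robust_anomaly_score(window, pl_family)
print("anomaly score:", round(score, 3), "-> anomalous" if score > 0.5 else "-> normal")
```

    Taking the minimum over the family is what makes the test robust to the diurnal pattern: a window only triggers an alarm if it looks unlike every normal regime, not just the current one.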

    Traffic measurement and analysis

    Measurement and analysis of real traffic are important for gaining knowledge about the characteristics of that traffic; without measurement, it is impossible to build realistic traffic models. Data traffic has only recently been found to have self-similar properties. In this thesis, traffic captured on the network at SICS and on the Supernet is shown to have this fractal-like behaviour. The traffic is also examined with respect to which protocols and packet sizes are present, and in what proportions. In the SICS trace most packets are small, TCP is shown to be the predominant transport protocol, and NNTP the most common application. In contrast, large UDP packets sent between ports that are not well known dominate the Supernet traffic. Finally, the characteristics of the client side of WWW traffic are examined more closely. In order to extract useful information from the packet trace, web browsers' use of TCP and HTTP is investigated, including new features in HTTP/1.1 such as persistent connections and pipelining. Empirical probability distributions are derived describing session lengths, the time between user clicks, and the amount of data transferred due to a single user click. These probability distributions make up a simple model of WWW sessions.
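    Self-similarity of captured traffic is commonly checked by estimating the Hurst exponent. The aggregated-variance method sketched below is one standard diagnostic (not necessarily the estimator used in the thesis); the synthetic Poisson input is a placeholder for real packets-per-interval counts taken from a trace, and for genuinely self-similar traffic the estimate would come out well above 0.5.

```python
import numpy as np

def hurst_aggregated_variance(series, block_sizes=(1, 2, 4, 8, 16, 32, 64)):
    """Rough Hurst-exponent estimate via the aggregated-variance method:
    for self-similar traffic Var(X^(m)) ~ m^(2H-2), so the slope of
    log Var(X^(m)) versus log m gives 2H - 2. A quick diagnostic only
    (naive block sizes, no confidence intervals)."""
    series = np.asarray(series, dtype=float)
    log_m, log_var = [], []
    for m in block_sizes:
        n_blocks = len(series) // m
        if n_blocks < 2:
            continue
        blocks = series[: n_blocks * m].reshape(n_blocks, m).mean(axis=1)
        log_m.append(np.log(m))
        log_var.append(np.log(blocks.var()))
    slope, _ = np.polyfit(log_m, log_var, 1)
    return 1.0 + slope / 2.0

# Synthetic i.i.d. counts as a sanity check (H should come out near 0.5);
# real per-interval packet counts from the SICS or Supernet trace would go here.
rng = np.random.default_rng(0)
print("H ~", round(hurst_aggregated_variance(rng.poisson(100, 65536)), 2))
```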

    Network anomaly detection: a survey and comparative analysis of stochastic and deterministic methods

    We present five methods for the problem of network anomaly detection. These methods cover most of the common techniques in the anomaly detection field, including Statistical Hypothesis Tests (SHT), Support Vector Machines (SVM), and clustering analysis. We evaluate all methods on a simulated network that consists of nominal data, three flow-level anomalies, and one packet-level attack. By analyzing the results, we point out the advantages and disadvantages of each method and conclude that combining the results of the individual methods can yield improved anomaly detection results. Comment: 7 pages; one more figure than the final CDC 2013 version.
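    The conclusion that combining the individual detectors improves results can be realized in many ways; a simple majority vote over the binary alarms of several methods is sketched below as one illustrative combiner. The alarm sequences shown are made up, and the paper does not specify that majority voting is the combination rule it uses.

```python
import numpy as np

def combine_detectors(decisions, min_votes=None):
    """Combine binary alarms from several detectors (rows = detectors,
    columns = time windows) by majority vote."""
    decisions = np.asarray(decisions, dtype=int)
    if min_votes is None:
        min_votes = decisions.shape[0] // 2 + 1   # strict majority
    return decisions.sum(axis=0) >= min_votes

# Hypothetical alarms from SHT, SVM, and clustering detectors over 6 windows.
alarms = [[0, 1, 1, 0, 0, 1],   # statistical hypothesis test
          [0, 0, 1, 0, 1, 1],   # one-class SVM
          [0, 1, 1, 0, 0, 0]]   # clustering-based detector
print(combine_detectors(alarms))  # -> [False  True  True False False  True]
```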

    Monitoring Challenges and Approaches for P2P File-Sharing Systems

    Since the release of Napster in 1999, P2P file-sharing has enjoyed a dramatic rise in popularity. A 2000 study by Plonka on the University of Wisconsin campus network found that file-sharing accounted for a volume of traffic comparable to HTTP, while a 2002 study by Saroiu et al. on the University of Washington campus network found that file-sharing accounted for more than treble the volume of Web traffic observed, thus affirming the significance of P2P in the context of Internet traffic. Empirical studies of P2P traffic are essential for supporting the design of next-generation P2P systems, informing the provisioning of network infrastructure, and underpinning the policing of P2P systems. The latter is of particular significance, as P2P file-sharing systems have been implicated in supporting criminal behaviour including copyright infringement and the distribution of illegal pornography.