17 research outputs found

    DiPerF: an automated DIstributed PERformance testing Framework

    We present DiPerF, a distributed performance-testing framework aimed at simplifying and automating service performance evaluation. DiPerF coordinates a pool of machines that test a target service, collects and aggregates performance metrics, and generates performance statistics. The aggregated data provide information on service throughput, on service "fairness" when serving multiple clients concurrently, and on the impact of network latency on service performance. Furthermore, this data can be used to build predictive models that estimate service performance as a function of service load. We have tested DiPerF on 100+ machines on two testbeds, Grid3 and PlanetLab, and explored the performance of the job submission services (pre-WS GRAM and WS GRAM) included with Globus Toolkit 3.2.
    Comment: 8 pages, 8 figures, will appear in IEEE/ACM Grid2004, November 200
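The aggregation step described in the abstract can be sketched as follows. This is a minimal illustration, not DiPerF's actual API: the sample format (client id, request start, request end) and the statistic names are assumptions chosen for clarity.

```python
import statistics

def aggregate_metrics(samples):
    """Aggregate per-client (client_id, start, end) response samples into
    service-level statistics, in the spirit of DiPerF's aggregation step.
    Times are in seconds; each sample is one completed request."""
    durations = [end - start for _, start, end in samples]
    span = max(end for _, _, end in samples) - min(start for _, start, _ in samples)
    return {
        "throughput_rps": len(samples) / span,          # completed requests per second
        "mean_response_s": statistics.mean(durations),  # average service time
        "clients": len({cid for cid, _, _ in samples}), # distinct clients observed
    }

# Two clients, four completed requests over a 4-second window:
samples = [("a", 0.0, 1.0), ("a", 1.0, 2.0), ("b", 0.0, 2.0), ("b", 2.0, 4.0)]
stats = aggregate_metrics(samples)
```

Repeating this aggregation at increasing client-pool sizes yields the throughput-versus-load curve from which a predictive model of service capacity can be fitted.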

    Network Performance Evaluation within the Web Browser Sandbox

    With the rising popularity of Web-based applications, the Web browser is becoming the dominant environment in which users interact with Internet content. We investigate methods of discovering network performance characteristics through the Web browser, requiring only minimal user participation (navigating to a Web page). We focus on the analysis of explicit and implicit network operations performed by the browser (JavaScript XMLHttpRequest calls and HTML DOM object loading) as well as by the Flash plug-in to evaluate the network performance characteristics of a connecting client. We analyze the results of a performance study, focusing on the relative differences and similarities between download, upload, and round-trip-time results obtained in different browsers. We evaluate the accuracy of browser events indicating incoming data, comparing their timing to information obtained from the network layer. We also discuss alternative applications of the developed techniques, including measuring packet-reception variability in a simulated streaming protocol. Our results confirm that browser-based measurements closely correspond to those obtained using standard tools in most scenarios. Our analysis of implicit communication mechanisms suggests that existing “speedtest” services could be enhanced to reliably determine download throughput and round-trip time to arbitrary Internet hosts. We conclude that browser-based measurement using the techniques developed in this work can be an important component of network performance studies.
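The core technique, timestamping a request's issue and completion to derive throughput, is the same whether done with XMLHttpRequest in a browser or outside it. A hedged Python analogue (the fetch callable here is a stand-in, not the paper's implementation):

```python
import time

def measure_download(fetch):
    """Time a single fetch and derive throughput, mirroring the browser-side
    technique of timestamping request issue and completion. `fetch` is any
    callable returning the response body as bytes."""
    t0 = time.perf_counter()
    body = fetch()                       # analogous to an XMLHttpRequest completing
    elapsed = time.perf_counter() - t0
    return {"bytes": len(body), "seconds": elapsed,
            "throughput_Bps": len(body) / elapsed}

# Stand-in for a real HTTP fetch: a 1 MB payload delivered after a short delay.
def fake_fetch(delay=0.05, size=1_000_000):
    time.sleep(delay)
    return b"x" * size

result = measure_download(fake_fetch)
```

A real deployment would replace `fake_fetch` with an actual HTTP request; the paper's contribution is showing that such browser-level timestamps track network-layer timings closely in most scenarios.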

    HostView: Annotating end-host performance measurements with user feedback

    Network disruptions can adversely impact a user's web browsing, cause video or audio interruptions, or render web sites and services unreachable. Such problems are frustrating to Internet users, who are oblivious to the underlying causes but fully exposed to the resulting service degradations. Ideally, users' end systems would have diagnostic tools that can automatically detect, diagnose, and possibly repair performance degradations, without requiring user intervention. Clearly, the first step for any such (end-host) diagnostic tool is a methodology to automatically detect performance degradations in the network that can affect a user's perception of application performance.
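As a toy illustration of the detection step the abstract identifies as prerequisite, one could flag RTT samples that deviate far from a running baseline. This is a generic sketch, not HostView's actual methodology; the EWMA smoothing and the 2x threshold are invented parameters.

```python
def detect_degradations(rtts_ms, alpha=0.2, factor=2.0):
    """Flag sample indices whose RTT exceeds `factor` times a running EWMA
    baseline -- a minimal stand-in for automatic degradation detection."""
    baseline = rtts_ms[0]
    flagged = []
    for i, rtt in enumerate(rtts_ms):
        if rtt > factor * baseline:
            flagged.append(i)            # degradation: RTT far above recent norm
        else:
            # update the baseline only on normal samples, so a long
            # degradation does not get absorbed into the norm
            baseline = (1 - alpha) * baseline + alpha * rtt
    return flagged

rtts = [20, 22, 21, 23, 90, 95, 22, 21]   # ms; two-sample disruption mid-trace
flagged = detect_degradations(rtts)
```

HostView's actual contribution is pairing such signals with user feedback, so detected anomalies can be validated against what the user actually perceived.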

    Characterizing end-host application performance across multiple networking environments

    Users today connect to the Internet everywhere: from home, work, airports, friends' homes, and more. This paper characterizes how the performance of networked applications varies across networking environments. Using data from a few dozen end-hosts, we compare the distributions of RTTs and download rates across pairs of environments. We show that for most users the performance difference is statistically significant. We contrast the influence of the application mix and of environmental factors on these performance differences.
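Comparing RTT distributions across two environments can be done with a two-sample statistic such as Kolmogorov-Smirnov. The paper does not specify its exact test here, so this is one standard choice, sketched in pure Python:

```python
from bisect import bisect_right

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the largest gap between the
    empirical CDFs of two measurement sets (e.g. RTTs from two environments).
    Values near 0 mean similar distributions; near 1, very different ones."""
    a, b = sorted(a), sorted(b)
    def ecdf(xs, v):                     # fraction of samples <= v
        return bisect_right(xs, v) / len(xs)
    return max(abs(ecdf(a, p) - ecdf(b, p)) for p in a + b)

home = [20, 21, 22, 23, 24]   # RTTs in ms, illustrative numbers only
work = [40, 41, 42, 43, 44]
d = ks_statistic(home, work)  # fully disjoint samples give the maximal gap
```

With a significance threshold for `d` (given the sample sizes), one can decide per user whether the home/work difference is statistically significant, as the paper does per environment pair.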

    Network Analysis with Stochastic Grammars

    Digital forensics requires significant manual effort to identify items of evidentiary interest in the ever-increasing volume of data in modern computing systems. One task digital forensic examiners perform is mentally extracting and constructing insights from unstructured sequences of events. This research assists examiners with the association and individualization analysis processes that make up this task by developing a stochastic context-free grammar (SCFG) knowledge representation for digital-forensic analysis of computer network traffic. The SCFG is leveraged to provide context to the low-level data collected as evidence and to build behavior profiles. Upon discovering patterns, the analyst can begin the association or individualization process to answer criminal investigative questions. Three contributions resulted from this research. First, domain characteristics suitable for SCFG representation were identified, and a step-by-step approach to adapting SCFGs to novel domains was developed. Second, a novel iterative graph-based method of identifying similarities between context-free grammars was developed to compare behavior patterns represented as grammars. Finally, the SCFG capabilities were demonstrated by performing association and individualization to reduce the suspect pool and the volume of evidence to examine in a computer network traffic analysis use case.
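To make the SCFG idea concrete: a stochastic context-free grammar attaches probabilities to productions, so a derivation of an observed event sequence gets a probability that can serve as a behavior-profile score. The toy grammar below (session = handshake plus data transfer) and its probabilities are invented for illustration; they are not from the dissertation.

```python
# Toy SCFG over network events: each nonterminal maps to weighted productions.
GRAMMAR = {
    "SESSION":   [(("HANDSHAKE", "TRANSFER"), 0.9), (("SCAN",), 0.1)],
    "HANDSHAKE": [(("syn", "synack", "ack"), 1.0)],
    "TRANSFER":  [(("data",), 0.6), (("data", "TRANSFER"), 0.4)],
    "SCAN":      [(("syn",), 1.0)],
}

def derive(symbol, choices):
    """Expand `symbol` using the rule indices in `choices` (consumed left to
    right); return the terminal sequence and the derivation's probability,
    i.e. the product of the probabilities of the rules used."""
    if symbol not in GRAMMAR:            # terminal event
        return [symbol], 1.0
    rhs, p = GRAMMAR[symbol][choices.pop(0)]
    out, prob = [], p
    for s in rhs:
        terms, q = derive(s, choices)
        out += terms
        prob *= q
    return out, prob

# SESSION -> HANDSHAKE TRANSFER; TRANSFER -> data TRANSFER -> data data
terms, prob = derive("SESSION", [0, 0, 1, 0])
```

In the forensic setting, low-probability derivations (or the need for rare rules) flag event sequences that deviate from a profiled behavior, which is where association and individualization analysis begins.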

    Making broadband access networks transparent to researchers, developers, and users

    Broadband networks are used by hundreds of millions of users to connect to the Internet today. However, most ISPs are hesitant to reveal details about their network deployments, and as a result the characteristics of broadband networks are often not known to users, developers, and researchers. In this thesis, we make progress towards mitigating this lack of transparency in broadband access networks in two ways. First, using novel measurement tools, we performed the first large-scale study of the characteristics of broadband networks. We found that broadband networks have very different characteristics than academic networks. We also developed Glasnost, a system that enables users to test their Internet access links for traffic differentiation. Glasnost has been used by more than 350,000 users worldwide and allowed us to study ISPs' traffic management practices. We found that ISPs increasingly throttle or even block traffic from popular applications such as BitTorrent. Second, we developed two new approaches to enable realistic evaluation of networked systems in broadband networks. We developed Monarch, a tool that enables researchers to study and compare the performance of new and existing transport protocols at large scale in broadband environments. Furthermore, we designed SatelliteLab, a novel testbed that can easily add arbitrary end nodes, including broadband nodes and even smartphones, to existing testbeds like PlanetLab.
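The decision logic behind a Glasnost-style differentiation test can be sketched as comparing the throughput of an application-shaped flow against a control flow of random bytes sent over the same path. This is a simplification of the published design; the median comparison and the 0.5 threshold are illustrative assumptions.

```python
import statistics

def differentiated(app_tputs, control_tputs, threshold=0.5):
    """Glasnost-style check (simplified): if the median throughput of the
    application-shaped flow (e.g. BitTorrent-like payload) falls far below
    that of a random-byte control flow on the same path, the ISP likely
    treats the application's traffic differently. Inputs are repeated
    throughput measurements, e.g. in Mbps."""
    app = statistics.median(app_tputs)
    ctrl = statistics.median(control_tputs)
    return app < threshold * ctrl

# BitTorrent-shaped flow throttled to ~1 Mbps while the control gets ~8 Mbps:
throttled = differentiated([1.0, 1.1, 0.9], [8.0, 7.9, 8.2])
# Both flows perform alike -> no evidence of differentiation:
neutral = differentiated([7.8, 8.1, 8.0], [8.0, 7.9, 8.2])
```

Repeating both flows back-to-back, as Glasnost does, controls for transient congestion that would otherwise masquerade as differentiation.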

    How's My Network - Incentives and Impediments of Home Network Measurements

    Gathering meaningful information from Home Networking (HN) environments has presented researchers with measurement-strategy challenges. A measurement platform is typically designed around gathering data from a range of devices, or usage statistics, in a network behind the HN firewall. HN studies require a fine balance between incentives and impediments to promote usage and minimize the effort of user participation, with the focus on gathering robust datasets and results. In this dissertation we explore how to gather data from the HN ecosystem (e.g., devices, apps, permissions, configurations) and feedback from HN users across a multitude of HN infrastructures, leveraging low-impediment and low/high-incentive methods to entice user participation. We seek to understand the trade-offs of different approaches (e.g., Java applet, mobile app, survey) for data collection, user preferences, and how HN users react and make changes to the HN environment when presented with privacy/security concerns, norms of comparison (e.g., comparisons to the local environment and to other HNs), and other HN results. We view the HN ecosystem as more than just "the network," since it also includes the devices and apps within the HN.
    This dissertation is organized around three pillars of work to understand the incentives and impediments of user participation and data collection: 1) preliminary work, as part of the How's My Network (HMN) measurement platform, on a deployed signed Java applet that provided a user-centered network measurement platform minimizing user impediments to data collection; 2) a HN user survey on preference, comfort, and usability of HNs to understand incentives; and 3) the creation and deployment of a multi-faceted How's My Network mobile app to gather and compare attributes and feedback with high incentives for user participation; alongside these we also cover related approaches and background work. The HMN Java applet work demonstrated the viability of using a Web browser to obtain network performance data from HNs via a user-centric measurement platform that minimizes impediments to user participation. The HN survey found that users prefer a mobile app for HN data collection and can be incentivized to participate in a HN study by being shown attributes and characteristics of the HN ecosystem. The HMN mobile app was found to provide high incentives, with minimal impediments, for participation, with a focus on user privacy and security concerns. The mobile app work found that 84% of users reported a change in perception of privacy and security, 32% of users uninstalled apps, and 24% revoked permissions in their HN. As a by-product of this work, we found it was possible to gather sensitive information such as previously attached networks, installed apps, and devices on the network. The exposure of this information to any installed app with minimal or no granted permissions is a potential privacy concern.

    Transport Control Protocol (TCP) over Optical Burst Switched Networks

    Transport Control Protocol (TCP) is the dominant protocol in modern communication networks, in which the issues of reliability, flow control, and congestion control must be handled efficiently. This thesis studies the impact of next-generation bufferless optical burst-switched (OBS) networks on the performance of TCP congestion-control implementations (i.e., dropping-based, explicit-notification-based, and delay-based). The burst-contention phenomenon caused by the bufferless nature of OBS occurs randomly and has a negative impact on dropping-based TCP, since it creates a false indication of network congestion and leads to an improper reaction to burst-drop events. In this thesis we study the impact of these random burst losses on dropping-based TCP throughput. We introduce a novel congestion-control scheme for TCP over OBS networks, called Statistical Additive Increase Multiplicative Decrease (SAIMD). SAIMD maintains and analyzes a number of previous round-trip times (RTTs) at the TCP sender in order to estimate the confidence with which a packet-loss event is due to network congestion. The confidence is derived by positioning the short-term RTT in the spectrum of long-term historical RTTs, and is then taken into account by the policy developed for TCP congestion-window adjustment. For explicit-notification TCP, we propose a new TCP implementation over OBS networks, called TCP with Explicit Burst Loss Contention Notification (TCP-BCL). We examine the throughput performance of a number of representative TCP implementations over OBS networks, and analyze the TCP performance degradation caused by the misinterpretation of timeout and packet-loss events. We also demonstrate that the proposed TCP-BCL scheme can counter the negative effect of OBS burst losses and is superior to conventional TCP architectures in OBS networks.
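The SAIMD idea described above, positioning the short-term RTT within the long-term RTT history to decide how hard to back off on loss, can be sketched as follows. The empirical-percentile statistic and the window-scaling formula here are illustrative simplifications; the thesis's exact policy may differ.

```python
def congestion_confidence(rtt_history, recent_rtt):
    """Position the recent RTT within the distribution of historical RTTs.
    A loss seen at a high RTT percentile suggests genuine congestion; a loss
    at a low percentile is more likely random OBS burst contention."""
    below = sum(1 for r in rtt_history if r <= recent_rtt)
    return below / len(rtt_history)      # empirical percentile in [0, 1]

def on_loss(cwnd, confidence, beta=0.5):
    """Scale the multiplicative decrease by the congestion confidence:
    full confidence halves the window as usual; zero confidence (loss
    attributed to burst contention) leaves it untouched."""
    return max(1.0, cwnd * (1.0 - beta * confidence))

history = [10, 11, 10, 12, 11, 10, 13, 12, 11, 10]   # ms, steady path
cwnd_congested = on_loss(100, congestion_confidence(history, 13))  # high percentile
cwnd_contention = on_loss(100, congestion_confidence(history, 9))  # low percentile
```

This captures why SAIMD avoids the throughput collapse of standard dropping-based TCP over OBS: random burst drops no longer trigger an unconditional window halving.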
    For delay-based TCP, we observe that this type of implementation cannot detect network congestion when deployed over typical OBS networks, since RTT fluctuations there are minor. Delay-based TCP can also falsely detect network congestion when the underlying OBS network performs burst retransmission and/or deflection. Because burst retransmission and deflection introduce additional delay for the bursts that are retransmitted or deflected, TCP cannot determine whether a sudden delay is due to network congestion or simply to burst recovery at the OBS layer. In this thesis we study the behaviour of delay-based TCP Vegas over OBS networks and propose a threshold-based version of TCP Vegas suited to the characteristics of OBS networks. Threshold-based TCP Vegas is able to distinguish increases in packet delay due to network congestion from those due to burst contention at low traffic loads. The evolution of OBS technology is tightly coupled with its ability to support upper-layer applications: without fully understanding the burst transmission behaviour and its impact on the TCP congestion-control mechanism, it will be difficult to fully exploit the advantages of OBS networks.
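A threshold-based Vegas variant of the kind described can be sketched by adding one rule to the standard Vegas decision: an RTT that jumps abruptly far above the base RTT is attributed to OBS burst retransmission or deflection and the window is held rather than decreased. The `burst_jump` factor and the alpha/beta thresholds below are illustrative, not the thesis's tuned values.

```python
def vegas_decision(cwnd, base_rtt, rtt, alpha=1.0, beta=3.0, burst_jump=2.0):
    """Threshold-based TCP Vegas sketch. cwnd in packets, RTTs in seconds.
    Standard Vegas estimates the packets queued in the network and grows or
    shrinks the window to keep that estimate between alpha and beta; the
    extra burst_jump rule treats a sudden large delay as OBS burst recovery
    rather than congestion."""
    if rtt > burst_jump * base_rtt:
        return "hold"                        # sudden delay: likely burst recovery
    diff = cwnd * (1.0 - base_rtt / rtt)     # estimated packets queued in network
    if diff < alpha:
        return "increase"
    if diff > beta:
        return "decrease"
    return "hold"
```

Usage: with a 100 ms base RTT and a 10-packet window, a mild RTT rise to 125 ms holds the window, a rise to 150 ms signals congestion, and a jump to 250 ms is classified as burst recovery and ignored.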

    Analysis of Passive End-to-End Network Performance Measurements

    NETI@home, a distributed network measurement infrastructure that collects passive end-to-end network measurements from Internet end-hosts, was developed and discussed. The data collected by this infrastructure, as well as other datasets, were used to study the behavior of the network and of network users, as well as the security issues affecting the Internet. A flow-based comparison of honeynet traffic, representing malicious traffic, and NETI@home traffic, representing typical end-user traffic, was conducted. This comparison showed that a large portion of flows in both datasets were failed and potentially malicious connection attempts. We additionally found that worm activity can linger for more than a year after the initial release date. Malicious traffic was also found to originate from across the allocated IP address space. Other security-related observations include the suspicious use of ICMP packets and attacks on our own NETI@home server. Using observed TTL values, we also studied the length of Internet routes and the frequency with which they vary. The frequency and use of network address translation and of the private IP address space were also discussed. Various protocol options and flags were analyzed to determine their adoption and use by the Internet community. Network-independent empirical models of end-user network traffic were derived for use in simulation. Two such models were created: the first models traffic for a specific TCP or UDP port, and the second models all TCP or UDP traffic for an end-user. These models were implemented and used in GTNetS. Further anonymization of the dataset and the public release of the anonymized data and associated analysis tools were also discussed.
    Ph.D. Committee Chair: Riley, George; Committee Members: Copeland, John; Fujimoto, Richard; Juang, Biing Hwang; Owen, Henr
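The TTL-based route-distance technique mentioned in the abstract rests on a simple observation: operating systems use a handful of well-known initial TTLs, so the hop count can be inferred from a passively observed TTL. A minimal sketch (the set of initial TTLs is the commonly cited one, assumed rather than taken from the thesis):

```python
def hop_distance(observed_ttl, initial_ttls=(32, 64, 128, 255)):
    """Estimate path length from a passively observed IP TTL: assume the
    sender used the smallest common initial TTL not below the observed
    value, so hops = initial - observed. Breaks down for paths longer
    than the gap between adjacent initial-TTL values."""
    initial = min(t for t in initial_ttls if t >= observed_ttl)
    return initial - observed_ttl

# A packet arriving with TTL 52 most plausibly started at 64, i.e. 12 hops away.
```

Applied across a passive trace, this yields per-source hop-count estimates, and changes in the estimate over time reveal route variation, which is the frequency question the thesis examines.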