
    Transport Architectures for an Evolving Internet

    In the Internet architecture, transport protocols are the glue between an application’s needs and the network’s abilities. But as the Internet has evolved over the last 30 years, the implicit assumptions of these protocols have held less and less well. This can cause poor performance on newer networks (cellular networks, datacenters) and makes it challenging to roll out networking technologies that break markedly with the past. Working with collaborators at MIT, I have built two systems that explore an objective-driven, computer-generated approach to protocol design. My thesis is that making protocols a function of stated assumptions and objectives can improve application performance and free network technologies to evolve. Sprout, a transport protocol designed for videoconferencing over cellular networks, uses probabilistic inference to forecast network congestion in advance. On commercial cellular networks, Sprout gives 2-to-4 times the throughput and 7-to-9 times less delay than Skype, Apple Facetime, and Google Hangouts. This work led to Remy, a tool that programmatically generates protocols for an uncertain multi-agent network. Remy’s computer-generated algorithms can achieve higher performance and greater fairness than some sophisticated human-designed schemes, including ones that put intelligence inside the network. The Remy tool can then be used to probe the difficulty of the congestion control problem itself: how easy is it to “learn” a network protocol to achieve desired goals, given a necessarily imperfect model of the networks where it will ultimately be deployed? We found weak evidence of a tradeoff between the breadth of the operating range of a computer-generated protocol and its performance, but also that a single computer-generated protocol was able to outperform existing schemes over a thousand-fold range of link rates.
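    To make the forecast-then-send idea above concrete, the sketch below is a minimal, hedged illustration in Python: it estimates a pessimistic delivery rate from recent packet inter-arrival times and caps the send window so queued packets should drain within a delay target. The function names (conservative_rate, send_window) and parameters (quantile, delay_target_s) are assumptions for illustration; this is not Sprout's actual inference model.

```python
# Hypothetical sketch of forecast-then-send congestion control: estimate a
# pessimistic delivery rate from recent inter-arrival gaps and size the send
# window so queued packets drain within a delay target.
# This is NOT Sprout's actual probabilistic inference model.

def conservative_rate(interarrival_s, quantile=0.95):
    """Pessimistic packet rate (pkts/s) taken from a high quantile of gaps."""
    gaps = sorted(interarrival_s)
    idx = min(int(quantile * len(gaps)), len(gaps) - 1)
    return 1.0 / gaps[idx]

def send_window(interarrival_s, delay_target_s=0.1, in_flight=0):
    """Packets we may send now so the queue should drain within the target."""
    budget = int(conservative_rate(interarrival_s) * delay_target_s)
    return max(budget - in_flight, 0)

# Example: recent gaps (seconds) between delivered packets on a jittery link.
gaps = [0.004, 0.006, 0.012, 0.005, 0.030, 0.007, 0.009]
print(send_window(gaps, delay_target_s=0.1, in_flight=2))  # prints 1
```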

    Load shedding in network monitoring applications

    Monitoring and mining real-time network data streams are crucial operations for managing and operating data networks. The information that network operators wish to extract from the network traffic differs in size, granularity and accuracy depending on the measurement task (e.g., the relevant data for capacity planning and for intrusion detection are very different). To satisfy these different demands, a new class of monitoring systems is emerging to handle multiple and arbitrary monitoring applications. Such systems must inevitably cope with the effects of continuous overload situations due to the large volumes, high data rates and bursty nature of network traffic. These overload situations can severely compromise the accuracy and effectiveness of monitoring systems, precisely when their results are most valuable to network operators. In this thesis, we propose a technique called load shedding as an effective and low-cost alternative to over-provisioning in network monitoring systems. It allows these systems to handle overload situations efficiently in the presence of multiple, arbitrary and competing monitoring applications. We present the design and evaluation of a predictive load shedding scheme that can shed excess load in the face of extreme traffic conditions and maintain the accuracy of the monitoring applications within bounds defined by end users, while assuring a fair allocation of computing resources to non-cooperative applications. The main novelty of our scheme is that it treats monitoring applications as black boxes, with arbitrary (and highly variable) input traffic and processing cost. Without any explicit knowledge of the application internals, the proposed scheme extracts a set of features from the traffic streams to build an on-line prediction model of the resource requirements of each monitoring application, which is used to anticipate overload situations and control the overall resource usage by sampling the input packet streams. This way, the monitoring system preserves a high degree of flexibility, increasing the range of applications and network scenarios where it can be used. Since not all monitoring applications are robust against sampling, we then extend our load shedding scheme to support custom load shedding methods defined by end users, in order to provide a generic solution for arbitrary monitoring applications. Our scheme allows the monitoring system to safely delegate the task of shedding excess load to the applications and still guarantee fairness of service with non-cooperative users. We implemented our load shedding scheme in an existing network monitoring system and deployed it in a research ISP network. We present experimental evidence of the performance and robustness of our system with several concurrent monitoring applications during long-lived executions and using real-world traffic traces.
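    The sketch below illustrates the predictive load-shedding idea under stated assumptions: a least-squares model maps traffic features to predicted per-application CPU cost, and a packet sampling rate is chosen so the predicted total stays within a CPU budget. The class CostPredictor, the feature set, and the numbers are hypothetical, not the implementation described in the thesis.

```python
# Hypothetical sketch: learn a per-application cost model from traffic
# features, then pick a sampling rate that keeps the predicted total cost
# within the CPU budget for the next batch.
import numpy as np

class CostPredictor:
    """Least-squares model: CPU cost ~ traffic features (packets, bytes, flows)."""
    def __init__(self, n_features):
        self.X, self.y = [], []
        self.w = np.zeros(n_features)

    def update(self, features, observed_cost):
        self.X.append(features)
        self.y.append(observed_cost)
        self.w, *_ = np.linalg.lstsq(np.array(self.X, dtype=float),
                                     np.array(self.y, dtype=float), rcond=None)

    def predict(self, features):
        return float(np.dot(self.w, features))

def sampling_rate(predicted_costs, cpu_budget):
    """Fraction of packets to keep so the total predicted cost fits the budget."""
    total = sum(predicted_costs)
    return 1.0 if total <= cpu_budget else cpu_budget / total

# Example: features per 100 ms batch = [packets, bytes, new flows].
pred = CostPredictor(n_features=3)
pred.update([1000, 1.2e6, 50], observed_cost=8.0e6)    # measured CPU cycles
pred.update([4000, 5.0e6, 300], observed_cost=3.2e7)
next_cost = pred.predict([6000, 7.5e6, 500])
print(sampling_rate([next_cost], cpu_budget=2.5e7))
```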

    Virtual Reality Games for Motor Rehabilitation

    This paper presents a fuzzy logic based method to track user satisfaction without the need for devices that monitor users’ physiological conditions. User satisfaction is the key to any product’s acceptance; computer applications and video games offer a unique opportunity to provide a tailored environment for each user that better suits their needs. We have implemented a non-adaptive fuzzy logic model of emotion, based on the emotional component of the Fuzzy Logic Adaptive Model of Emotion (FLAME) proposed by El-Nasr, to estimate player emotion in UnrealTournament 2004. In this paper we describe the implementation of this system and present the results of one of several play tests. Our research contradicts the current literature that suggests physiological measurements are needed. We show that it is possible to use a software-only method to estimate user emotion.
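    As a hedged illustration only, the sketch below shows what a small software-only fuzzy estimator of player emotion might look like: triangular membership functions fuzzify in-game statistics, a few Mamdani-style rules combine them, and the result is defuzzified into a single frustration score. The membership ranges, rules, and function names are invented for the example and are not the FLAME-based model used in the paper.

```python
# Illustrative only: a tiny fuzzy-rule estimator of player frustration from
# in-game statistics. Ranges and rules are invented, not the FLAME model.

def tri(x, a, b, c):
    """Triangular membership function peaking at b over the interval [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def estimate_frustration(deaths_per_min, hit_ratio):
    # Fuzzify the inputs.
    dying_often = tri(deaths_per_min, 1.0, 3.0, 6.0)
    missing_shots = tri(1.0 - hit_ratio, 0.3, 0.7, 1.0)
    doing_well = tri(hit_ratio, 0.5, 0.8, 1.1)
    # Mamdani-style rules: AND = min, OR = max.
    frustrated = max(min(dying_often, missing_shots), 0.5 * dying_often)
    calm = doing_well
    # Defuzzify as the normalized weight of the "frustrated" rule activations.
    return frustrated / (frustrated + calm + 1e-9)

print(estimate_frustration(deaths_per_min=4.0, hit_ratio=0.35))
```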

    Machine Learning and Big Data Methodologies for Network Traffic Monitoring

    Over the past 20 years, the Internet has seen exponential growth in traffic, users, services and applications. Currently, it is estimated that the Internet is used every day by more than 3.6 billion users, who generate 20 TB of traffic per second. Such a huge amount of data challenges network managers and analysts to understand how the network is performing, how users are accessing resources, how to properly control and manage the infrastructure, and how to detect possible threats. Alongside mathematical, statistical, and set theory methodologies, machine learning and big data approaches have emerged to build systems that aim at automatically extracting information from the raw data that network monitoring infrastructures offer. In this thesis I will address different network monitoring solutions, evaluating several methodologies and scenarios. I will show how, following a common workflow, it is possible to exploit mathematical, statistical, set theory, and machine learning methodologies to extract meaningful information from the raw data. Particular attention will be given to machine learning and big data methodologies such as DBSCAN and the Apache Spark big data framework. The results show that, while mathematical, statistical, and set theory tools are able to characterize a problem, machine learning methodologies are very useful for discovering hidden information in the raw data. Using the DBSCAN clustering algorithm, I will show how to use YouLighter, an unsupervised methodology that groups caches serving YouTube traffic into edge-nodes, and later, by using the notion of Pattern Dissimilarity, how to identify changes in their usage over time. By applying YouLighter to 10-month-long traces, I will pinpoint sudden changes in YouTube edge-node usage, changes that also impair the end users’ Quality of Experience. I will also apply DBSCAN in the deployment of SeLINA, a self-tuning tool implemented in the Apache Spark big data framework to autonomously extract knowledge from network traffic measurements. Using SeLINA, I will show how to automatically detect the changes in the YouTube CDN previously highlighted by YouLighter. Along with these machine learning studies, I will show how to use mathematical and set theory methodologies to investigate the browsing habits of Internet users. Using a two-week dataset, I will show how, over this period, users keep discovering new websites, and that it is hard to build a reliable profiler using only DNS information. By exploiting mathematical and statistical tools, I will then characterize Anycast-enabled CDNs (A-CDNs). I will show that A-CDNs are widely used for both stateless and stateful services, that they are quite popular, as more than 50% of web users contact an A-CDN every day, and that stateful services can benefit from A-CDNs, since their paths are very stable over time, as demonstrated by the presence of only a few anomalies in their Round Trip Time. Finally, I will conclude by showing how I used BGPStream, an open-source software framework for the analysis of both historical and real-time Border Gateway Protocol (BGP) measurement data. Using BGPStream in real-time mode, I will show how I detected a Multiple Origin AS (MOAS) event and how I studied the propagation of the black-holing community, showing the effect of this community in the network. Then, by using BGPStream in historical mode together with the Apache Spark big data framework over 16 years of data, I will show results such as the continuous growth of IPv4 prefixes and the growth of MOAS events over time. All these studies aim to show that monitoring is a fundamental task in many scenarios, highlighting in particular the importance of machine learning and big data methodologies.
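    As a hedged illustration of the clustering step, the sketch below runs DBSCAN over a handful of made-up per-server measurements to group YouTube caches into candidate edge-nodes, with isolated caches labelled as noise. The features and the eps/min_samples parameters are assumptions, not those used by YouLighter or SeLINA.

```python
# Hypothetical sketch of the clustering step: group YouTube cache servers into
# candidate edge-nodes with DBSCAN. Features and parameters are illustrative.
import numpy as np
from sklearn.cluster import DBSCAN

# One row per cache server: [median RTT (ms), share of total traffic (%)].
servers = np.array([
    [12.1, 4.0], [12.8, 3.5], [11.9, 4.2],   # likely one nearby edge-node
    [48.0, 0.9], [47.2, 1.1],                # a farther edge-node
    [180.5, 0.1],                            # isolated cache, labelled as noise
])

labels = DBSCAN(eps=5.0, min_samples=2).fit_predict(servers)
print(labels)  # e.g. [0 0 0 1 1 -1]; -1 marks noise points
```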

    Energy efficiency in LEO satellite and terrestrial wired environments

    To meet an ever-growing demand for advanced multimedia services and to support electronic connectivity anywhere on the planet, the development of ubiquitous broadband multimedia systems is gaining huge interest at both academic and industry levels. Satellite networks in general, and LEO satellite constellations in particular, will play an essential role in the deployment of such systems. As LEO satellite constellations like Iridium or Iridium-NEXT are extremely expensive to deploy and maintain, extending their service lifetimes is of crucial importance. In the main part of this thesis, we propose different techniques for extending satellite service life in LEO satellite constellations. Satellites in such constellations can spend over 30% of their time under the earth’s umbra, time during which they are powered by batteries. While the batteries are recharged by solar energy, the Depth of Discharge (DoD) they reach during eclipse significantly affects their lifetime and, by extension, the service life of the satellites themselves. For batteries of the type that power Iridium and Iridium-NEXT satellites, a 15% increase in DoD can practically cut their service lives in half. We first focus on routing and propose two new routing metrics, LASER and SLIM, that strike a balance between performance and battery DoD in LEO satellite constellations. Our basic approach is to leverage the deterministic movement of satellites to favor routing traffic over satellites exposed to the sun rather than over eclipsed satellites, thereby decreasing the average battery DoD, all without taking a significant penalty in performance. Then, we deal with resource consolidation, a new paradigm for reducing power consumption. It consists in putting a carefully selected subset of network links into a sleep state and using the rest to transport the required amount of traffic. This is possible without causing major disruptions to network activities, since communication networks are dimensioned for peak traffic periods, with redundancy and over-provisioning in mind. Exploiting this over-provisioning, we propose two different methods to perform resource consolidation in LEO networks. First, we propose a traffic-aware metric for quantifying the quality of a frugal topology, the Maximum Link Utilization (MLU). Since minimizing power consumption subject to a given MLU threshold is NP-hard, we introduce two heuristics, BASIC and SNAP, which represent different tradeoffs between performance and simplicity. Second, we propose a new lightweight traffic-agnostic metric for quantifying the quality of a frugal topology, the Adequacy Index (ADI). After showing that the problem of minimizing the power consumption of a LEO network subject to a given ADI threshold is also NP-hard, we propose a heuristic named AvOId to solve it. We evaluate both forms of resource consolidation using realistic LEO topologies and traffic requests. The results show that the simple schemes we develop almost double the lifetime of the satellite batteries. Following this green networking work in LEO systems, the second part of this thesis extends the resource consolidation schemes to current wired networks, whose energy consumption has traditionally been overlooked. Several studies show that the traffic load of routers has only a small influence on their energy consumption; hence, the power consumption of a network is strongly related to the number of active network elements. In this context, we extend the traffic-agnostic metric, ADI, to wired networks. We show that the problem of minimizing power consumption subject to an ADI threshold remains NP-hard and propose two polynomial-time heuristics, ABStAIn and CuTBAck. Although ABStAIn and CuTBAck are traffic-unaware, we assess their behavior under real traffic loads from three networks, demonstrating that their performance is comparable to that of the more complex traffic-aware solutions proposed in the literature.
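    The sketch below illustrates, under assumed weights, the idea behind routing traffic preferentially over sunlit satellites: links that touch eclipsed satellites have their cost inflated by a penalty factor, so a slightly longer sunlit path can win over a shorter eclipsed one. The penalty value and topology are hypothetical; this is not the LASER or SLIM metric as defined in the thesis.

```python
# Illustrative sketch only: bias shortest-path routing toward sunlit
# satellites by inflating the cost of links that touch eclipsed nodes.
import networkx as nx

ECLIPSE_PENALTY = 3.0  # hypothetical factor trading delay against battery DoD

def link_cost(base_delay_ms, u_sunlit, v_sunlit):
    """Inflate the cost of a hop for each eclipsed endpoint it touches."""
    penalty = 1.0
    if not u_sunlit:
        penalty *= ECLIPSE_PENALTY
    if not v_sunlit:
        penalty *= ECLIPSE_PENALTY
    return base_delay_ms * penalty

sunlit = {"S1": True, "S2": False, "S3": True, "S4": True}
G = nx.Graph()
for u, v, delay_ms in [("S1", "S2", 10), ("S2", "S4", 10),
                       ("S1", "S3", 14), ("S3", "S4", 14)]:
    G.add_edge(u, v, weight=link_cost(delay_ms, sunlit[u], sunlit[v]))

# The slightly longer sunlit path S1-S3-S4 beats the shorter path through the
# eclipsed satellite S2 (cost 28 vs. 60 once the penalty is applied).
print(nx.shortest_path(G, "S1", "S4", weight="weight"))
```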

    Effects of Local Latency on Games

    Video games are a major type of entertainment for millions of people, and feature a wide variety of genres. Many genres of video games require quick reactions, and in these games it is critical for player performance and player experience that the game is responsive. One of the major contributing factors that can make games less responsive is local latency, the total delay between an input and the resulting change on the screen. Local latency is produced by a combination of delays from input devices, software processing, and displays. Because of latency, game companies spend considerable time and money play-testing their games to ensure that the game is responsive and that the in-game difficulty is reasonable. Past studies have made it clear that local latency negatively affects both player performance and experience, but there is still little knowledge about local latency’s exact effects on games. In this thesis, we address this problem by providing game designers with more knowledge about local latency’s effects. First, we performed a study to examine latency’s effects on performance and experience for popular pointing input devices used with games. Our results show significant differences between devices depending on the task and the amount of latency. We then provide design guidelines based on our findings. Second, we performed a study to understand latency’s effects on ‘atoms’ of interaction in games. The study varied both latency and game speed, and found that game speed affects a task’s sensitivity to latency. Third, we used our findings to build a model that helps designers quickly identify latency-sensitive game atoms, thus saving time during play-testing. We built and validated a model that predicts error rates in a game atom based on latency and game speed. Our work helps game designers by providing new insight into latency’s varied effects and by modelling and predicting those effects.
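    The sketch below gives a hedged illustration of what a latency/game-speed error model might look like: a logistic curve whose input grows with the product of local latency and game speed, so faster game atoms are more sensitive to the same latency. The functional form and coefficients are assumptions for illustration, not the model built and validated in the thesis.

```python
# Illustrative only: assumed functional form and coefficients, not the fitted
# model from the thesis. Predicted error rate grows with latency, and grows
# faster for faster game atoms.
import math

def predicted_error_rate(latency_ms, game_speed, a=-4.0, b=0.02):
    """Logistic curve in latency * speed; a and b are made-up coefficients."""
    x = a + b * latency_ms * game_speed
    return 1.0 / (1.0 + math.exp(-x))

for latency in (50, 100, 200):          # local latency in milliseconds
    for speed in (1.0, 2.0):            # relative speed of the game atom
        print(latency, speed, round(predicted_error_rate(latency, speed), 2))
```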