15 research outputs found
Web Workload Generation According to the UniLoG Approach
Generating synthetic loads which are sufficiently close to reality is an important and challenging task in performance and quality-of-service (QoS) evaluations of computer networks and distributed systems. Here, the load to be generated represents sequences of requests at a well-defined service interface within a network node. The paper presents a tool (UniLoG.HTTP) which can be used in a flexible manner to generate realistic and representative server and network loads, both as access requests to Web servers and as typical Web traffic within a communication network. The paper describes the architecture of this load generator and the critical design decisions and solution approaches which allowed us to obtain the desired flexibility.
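The core idea, an abstract load model emitting a sequence of requests at a service interface, can be illustrated with a minimal sketch. The Poisson arrival model and the URL set below are assumptions for illustration only, not UniLoG's actual user-behaviour models:

```python
import random

def generate_request_trace(n_requests, mean_interarrival_s, urls, seed=0):
    """Generate a synthetic sequence of (timestamp, url) requests.

    Inter-arrival times are drawn from an exponential distribution,
    i.e. a Poisson request process -- a common baseline for Web
    workloads (illustrative; UniLoG supports richer load models).
    """
    rng = random.Random(seed)
    t = 0.0
    trace = []
    for _ in range(n_requests):
        t += rng.expovariate(1.0 / mean_interarrival_s)  # next arrival
        trace.append((t, rng.choice(urls)))
    return trace

trace = generate_request_trace(5, 0.5, ["/index.html", "/img/logo.png"])
```

A driver replaying such a trace against a real server (or emitting it onto a link) would then produce the server- or network-level load the paper describes.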
FROM WIDE- TO SHORT-RANGE COMMUNICATIONS: USING HUMAN INTERACTIONS TO DESIGN NEW MOBILE SYSTEMS AND SERVICES
The widespread diffusion of mobile devices has radically changed the way people interact with each other and with the objects of their daily lives. In particular, modern mobile devices are equipped with multiple radio interfaces, allowing users to interact at different spatial granularities according to the radio technology they use. The research community is progressively moving to heterogeneous network solutions in which many different wireless technologies are seamlessly integrated to address a wide variety of use cases and requirements. In the fifth generation (5G) of mobile networks we find multiple network typologies, such as device-to-device (D2D), vehicular networks, and machine-to-machine (M2M), integrated into existing mobile-broadband technologies such as LTE and its future evolutions. In this complex and rich scenario, many issues and challenges are still open from technological, architectural, and mobile service and application points of view. In this work we provide network solutions, mobile services, and applications consistent with the 5G mobile network vision, using user interactions as a common starting point. We focus on three spatial granularities: long, medium/short, and micro, mediated by cellular, Wi-Fi, and NFC radio technologies, respectively. We address different issues and challenges according to the spatial granularity under consideration. We start with a user-centric approach based on the analysis of the characteristics and peculiarities of each kind of interaction. Following this path, we provide contributions to support the design of new network architectures and the development of novel mobile services and applications.
Network delay control through adaptive queue management
Timeliness in delivering packets for delay-sensitive applications is an important QoS (Quality of Service) measure in many systems, notably those that need to provide real-time performance. In such systems, if delay-sensitive traffic is delivered to the destination beyond its deadline, the packets are rendered useless and dropped after being received at the destination. Bandwidth that is already scarce and shared between network nodes is wasted in relaying these expired packets. This thesis proposes that a deterministic per-hop delay can be achieved by using a dynamic queue threshold concept to bound the delay at each node. A deterministic per-hop delay is a key component in guaranteeing a deterministic end-to-end delay. The research aims to develop a generic approach that can constrain the network delay of delay-sensitive traffic in a dynamic network. Two adaptive queue management schemes, namely DTH (Dynamic THreshold) and ADTH (Adaptive DTH), are proposed to realise this claim. Both DTH and ADTH use the dynamic threshold concept to constrain queuing delay, so that a bounded average queuing delay can be achieved by the former and a bounded maximum nodal delay by the latter. DTH is an analytical approach, which uses queuing theory with a superposition of N MMBP-2 (Markov Modulated Bernoulli Process) arrival processes to obtain a mapping between average queuing delay and an appropriate queue threshold for queue management. ADTH, in contrast, is a measurement-based algorithmic approach that can respond to the time-varying link quality and network dynamics in wireless ad hoc networks to constrain network delay. It manages a queue based on system performance measurements and feedback of the error measured against a target delay requirement. Numerical analysis and Matlab simulation have been carried out for DTH for the purposes of validation and performance analysis. ADTH has been evaluated in NS-2 simulation and implemented in a multi-hop wireless ad hoc network testbed for performance analysis. Results show that DTH and ADTH can constrain network delay according to the specified delay requirements, with higher packet loss as a trade-off.
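The ADTH idea, adapting a queue threshold from feedback of measured delay against a target, can be sketched as a single adaptation step. The proportional adjustment rule, gain, and clamping bounds below are hypothetical illustrations, not the thesis's exact algorithm:

```python
def adapt_threshold(threshold, measured_delay, target_delay,
                    gain=0.5, min_th=1, max_th=1000):
    """One feedback step of an ADTH-style controller (illustrative).

    Shrinks the queue threshold (in packets) when measured nodal delay
    exceeds the target, and grows it when there is slack.  Packets
    arriving at a queue longer than the threshold are dropped, trading
    loss for a bounded delay.
    """
    error = target_delay - measured_delay       # positive => slack
    threshold = threshold + gain * threshold * (error / target_delay)
    return max(min_th, min(max_th, int(round(threshold))))
```

Running this step once per measurement interval yields the closed loop the abstract describes: the threshold tracks network dynamics so that the delay bound, rather than the queue length, is what stays fixed.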
Online learning on the programmable dataplane
This thesis makes the case for managing computer networks with data-driven methods (automated statistical inference and control based on measurement data and runtime observations) and argues for their tight integration with programmable dataplane hardware to make management decisions faster and from more precise data. Optimisation, defence, and measurement of networked infrastructure are each challenging tasks in their own right, which are currently dominated by the use of hand-crafted heuristic methods. These become harder to reason about and deploy as networks scale in rates and number of forwarding elements, and their design requires expert knowledge and care around unexpected protocol interactions. This makes tailored, per-deployment or per-workload solutions infeasible to develop. Recent advances in machine learning offer capable function approximation and closed-loop control which suit many of these tasks. New, programmable dataplane hardware enables more agility in the network: runtime reprogrammability, precise traffic measurement, and low-latency on-path processing. The synthesis of these two developments allows complex decisions to be made on previously unusable state, and made quicker by offloading inference to the network.
To justify this argument, I advance the state of the art in data-driven defence of networks, in novel dataplane-friendly online reinforcement learning algorithms, and in in-network data reduction to allow classification of switch-scale data. Each requires co-design aware of the network, and of the failure modes of systems and carried traffic. To make online learning possible in the dataplane, I use fixed-point arithmetic and modify classical (non-neural) approaches to take advantage of the SmartNIC compute model and make use of rich device-local state. I show that data-driven solutions still require great care to design correctly, but with the right domain expertise they can improve on pathological cases in DDoS defence, such as protecting legitimate UDP traffic. In-network aggregation into histograms is shown to enable accurate classification from fine temporal effects, and allows hosts to scale such classification to far larger flow counts and traffic volumes. Moving reinforcement learning to the dataplane is shown to offer substantial benefits in state-action latency and online learning throughput versus host machines, allowing policies to react faster to fine-grained network events. The dataplane environment is key in making reactive online learning feasible; to port further algorithms and learnt functions, I collate and analyse the strengths of current and future hardware designs, as well as individual algorithms.
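Why fixed-point arithmetic matters here can be shown with a small sketch: switch and SmartNIC pipelines typically offer integer adds, multiplies, and shifts but no floating point, so a tabular Q-learning update must be expressed over scaled integers. The Q8.8-style scaling and the plain Q-learning rule below are illustrative assumptions, not the thesis's modified algorithms:

```python
FRAC_BITS = 8          # Q8.8-style fixed point; dataplanes lack floats
SCALE = 1 << FRAC_BITS

def to_fp(x):
    """Encode a real number as a scaled integer."""
    return int(round(x * SCALE))

def fp_mul(a, b):
    """Fixed-point multiply: rescale the double-width product."""
    return (a * b) >> FRAC_BITS

def q_update_fp(q_sa, reward, q_next_max, alpha_fp, gamma_fp):
    """One tabular Q-learning update, entirely in fixed point:
    Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)).

    All arguments are integers scaled by 2**FRAC_BITS -- arithmetic a
    SmartNIC or switch pipeline can execute natively on register state.
    """
    td_error = reward + fp_mul(gamma_fp, q_next_max) - q_sa
    return q_sa + fp_mul(alpha_fp, td_error)
```

Because every operation is an integer add, multiply, or shift, the update can live on-path and run per packet or per flow event, which is what makes the state-action latency gains over host machines possible.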
Profiling Large-scale Live Video Streaming and Distributed Applications
Today, distributed applications run at data centre and Internet scales, from
intensive data analysis, such as MapReduce, to the dynamic demands of a
worldwide audience, such as YouTube. The network is essential to these
applications at both scales. To provide adequate support, we must understand
the full requirements of the applications, which are revealed by their
workloads. In this thesis, we study distributed system applications at
different scales to enrich this understanding.
Large-scale Internet applications have been studied for years, such as social
networking services (SNS), video on demand (VoD), and content delivery
networks (CDN). An emerging type of video broadcasting on the Internet,
featuring crowdsourced live video streaming, has garnered attention, allowing
platforms such as Twitch to attract over 1 million concurrent users globally.
To better understand Twitch, we collected real-time popularity data combined
with metadata about the content, and found that the broadcasters, rather than
the content, drive its popularity. Unlike YouTube and Netflix, where content
can be cached, video streaming on Twitch is generated instantly and must be
delivered to users immediately to enable real-time interaction. We therefore
performed a large-scale measurement of Twitch's content location, revealing
the global footprint of its infrastructure and discovering the dynamic stream
hosting and client redirection strategies that help Twitch serve millions of
users at scale.
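One small ingredient of such a footprint measurement is mapping the edge hostnames that clients are redirected to onto points of presence. The naming scheme below (a PoP code as the second hostname label) is an assumption based on publicly observed Twitch edge names, not the thesis's actual methodology, and the hostname is a made-up example:

```python
def pop_from_edge_hostname(hostname):
    """Extract the point-of-presence label from a Twitch-style edge
    hostname, e.g. 'video-edge-ab12cd.fra02.hls.ttvnw.net' -> 'fra02'.

    Assumes (hypothetically) that the PoP code is the second label;
    returns None for hostnames too short to carry one.
    """
    parts = hostname.split(".")
    return parts[1] if len(parts) > 2 else None
```

Aggregating such labels over many vantage points and many redirections is what turns per-client observations into a global infrastructure map.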
We next consider applications that run inside the data centre. Distributed
computing applications rely heavily on the network due to their data
transmission needs and the scheduling of resources and tasks. One successful
application, Hadoop, has been widely deployed for Big Data processing.
However, little work has been devoted to understanding its network traffic.
We found that Hadoop's behaviour is limited by the hardware resources and the
processing jobs presented. Thus, after characterising the Hadoop traffic on
our testbed with a set of benchmark jobs, we built a simulator to reproduce
Hadoop's job traffic. With the simulator, users can investigate the
connections between Hadoop traffic and network performance without additional
hardware cost. Different network components can be added to investigate their
effect on performance, such as network topologies, queue policies, and
transport layer protocols.
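The kind of traffic such a simulator must reproduce can be sketched for one job phase: a MapReduce shuffle generates an all-to-all flow matrix between mappers and reducers. The exponential flow-size model and its mean are assumed parameters here; in the thesis they would come from the testbed characterisation:

```python
import itertools
import random

def shuffle_flows(n_mappers, n_reducers, mean_flow_bytes, seed=0):
    """Generate the shuffle-phase traffic matrix of one simulated
    MapReduce job as a list of (mapper, reducer, size_bytes) flows.

    Every mapper sends one flow to every reducer; flow sizes are drawn
    from an exponential distribution (an illustrative model -- a real
    simulator would fit this to measured job traffic).
    """
    rng = random.Random(seed)
    return [
        (m, r, max(1, int(rng.expovariate(1.0 / mean_flow_bytes))))
        for m, r in itertools.product(range(n_mappers), range(n_reducers))
    ]

flows = shuffle_flows(4, 2, 10_000_000)
```

Feeding such flow lists through interchangeable topology, queue-policy, and transport models is what lets users study network performance without extra hardware.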
In this thesis, we extended the knowledge of networking by investigating two
widely used applications in the data centre and at Internet scale. We (i)
studied the most popular live video streaming platform, Twitch, as a new type
of Internet-scale distributed application, revealing that broadcaster factors
drive the popularity of such platforms; (ii) discovered the footprint of
Twitch's streaming infrastructure and its dynamic stream hosting and client
redirection strategies, providing an in-depth example of video streaming
delivery at Internet scale; (iii) investigated the traffic generated by a
distributed application by characterising the traffic of Hadoop under various
parameters; and (iv) with this knowledge, built a simulation tool so that
users can efficiently investigate the performance of different network
components under distributed applications.
Queen Mary University of London