Managing Network Delay for Browser Multiplayer Games
Latency is one of the key performance factors affecting the quality of experience (QoE) in computer games. In the context of games, latency can be defined as the time between a user input and its result appearing on the screen. For the QoE to be satisfactory, the game must react quickly enough to player input. In networked multiplayer games, latency is composed of network delay and local delays. Major sources of network delay include queuing delay and head-of-line (HOL) blocking delay; network delay in the Internet can even be on the order of seconds.
In this thesis we discuss what feasible networking solutions exist for browser multiplayer games. We conduct a literature study to analyze the Differentiated Services architecture, several salient Active Queue Management (AQM) algorithms (RED, PIE, CoDel and FQ-CoDel), the Explicit Congestion Notification (ECN) concept, and network protocols available to web browsers (WebSocket, QUIC and WebRTC).
RED, PIE and CoDel as single-queue implementations would be sub-optimal for providing low latency to game traffic. FQ-CoDel is a multi-queue AQM and provides flow separation that is able to prevent queue-building bulk transfers from notably hampering latency-sensitive flows.
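To make the flow-separation idea concrete, below is a minimal Python sketch of FQ-CoDel's queueing structure: packets are hashed by their 5-tuple into per-flow queues served by deficit round-robin, so a queue-building bulk transfer can only delay its own packets, not those of a game flow. This is an illustration under simplifying assumptions, not the Linux implementation; the per-queue CoDel dropping logic (5 ms target, 100 ms interval) and the new-flow priority list are omitted.

import hashlib
from collections import deque

NUM_QUEUES = 1024   # fq_codel's default flow-table size
QUANTUM = 1514      # bytes a flow may send per round (about one MTU)

def flow_hash(src, dst, sport, dport, proto):
    """Hash the 5-tuple into one of NUM_QUEUES buckets, as FQ-CoDel does."""
    key = f"{src}|{dst}|{sport}|{dport}|{proto}".encode()
    return int.from_bytes(hashlib.sha1(key).digest()[:4], "big") % NUM_QUEUES

class FlowQueues:
    """Toy deficit-round-robin scheduler over per-flow queues."""
    def __init__(self):
        self.queues = [deque() for _ in range(NUM_QUEUES)]
        self.deficit = [0] * NUM_QUEUES
        self.active = deque()                 # round-robin order of busy flows

    def enqueue(self, pkt: bytes, five_tuple):
        idx = flow_hash(*five_tuple)
        if not self.queues[idx]:              # flow becomes active
            self.active.append(idx)
            self.deficit[idx] = QUANTUM
        self.queues[idx].append(pkt)

    def dequeue(self):
        while self.active:
            idx = self.active[0]
            q = self.queues[idx]
            if self.deficit[idx] < len(q[0]):
                self.active.rotate(-1)        # deficit spent: go to the back
                self.deficit[idx] += QUANTUM
                continue
            self.deficit[idx] -= len(q[0])
            pkt = q.popleft()
            if not q:                         # flow went idle
                self.active.popleft()
            return pkt
        return None

Because each flow gets roughly one MTU of service per round, a latency-sensitive flow sending small packets is dequeued almost immediately regardless of how deep the bulk transfer's queue grows.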
The WebRTC DataChannel seems promising for games since it can carry arbitrary application data and can avoid HOL blocking. None of the protocols, however, fully satisfies the transport needs of multiplayer games: WebRTC is not designed for client-server connections, QUIC is not designed for the traffic patterns typical of multiplayer games, and WebSocket would require parallel connections to mitigate the effects of HOL blocking.
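As a brief sketch of how a data channel sidesteps HOL blocking: configuring it as unordered and unreliable means a lost datagram cannot stall the delivery of later ones, which suits per-tick game state. The example below uses the Python aiortc library as a stand-in for the browser API (an assumption; the thesis targets browsers), with signalling elided.

import asyncio
from aiortc import RTCPeerConnection

async def main():
    pc = RTCPeerConnection()

    # ordered=False + maxRetransmits=0 gives unordered, unreliable delivery
    # over SCTP, so one lost packet never blocks those behind it.
    channel = pc.createDataChannel(
        "game-state", ordered=False, maxRetransmits=0
    )

    @channel.on("open")
    def on_open():
        channel.send(b"tick:42;x=10.5;y=3.2")   # arbitrary binary payload

    offer = await pc.createOffer()
    await pc.setLocalDescription(offer)
    # ... exchange offer/answer with the peer via a signalling channel ...

asyncio.run(main())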
Peer-to-peer network architecture for massive online gaming
A dissertation submitted to the Faculty of Science, University of the Witwatersrand, Johannesburg, in fulfilment of the requirements for the degree of Master of Science. Johannesburg, 2014.
Virtual worlds and massive multiplayer online games are amongst the most popular applications on the Internet. Hosting these applications requires a reliable architecture: it must handle high user loads, maintain a complex game state, respond promptly to game interactions, and prevent cheating, amongst other properties. Many of today's Massive Multiplayer Online Games (MMOG) use client-server architectures to provide multiplayer service. Clients (players) send their actions to a server, which calculates the game state and publishes the information to the clients. Although the client-server architecture has been widely adopted for MMOG, it suffers from many limitations. First, applications based on a client-server architecture are difficult to support and maintain given the dynamic user base of online games; such architectures do not scale easily or handle heavy loads well. Also, the server constitutes a single point of failure. We argue that peer-to-peer architectures can provide better support for MMOG: they allow the user base to scale to large numbers, and they limit the disruptions players experience when other nodes fail.
This research designs and implements a peer-to-peer architecture for MMOG that aims to reduce message latency both on the network and at the application layer. We refine the communication between nodes to reduce network latency by using SPDY, a protocol designed to reduce web page load time. At the application layer, an event-driven paradigm is used to process messages (sketched below). Through user-load simulation, we show that our peer-to-peer design can process and reliably deliver messages in a timely manner. Furthermore, by distributing the work conducted by a game server, our research shows that a peer-to-peer architecture responds to requests more quickly than client-server models.
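The following is a minimal sketch of the event-driven message-processing paradigm the dissertation describes: handlers are registered per message type and a single loop dispatches messages as they arrive, rather than blocking per request. The message names, wire format, and port are hypothetical, not taken from the dissertation.

import asyncio
import json

HANDLERS = {}

def on(msg_type):
    """Register a coroutine as the handler for one message type."""
    def register(fn):
        HANDLERS[msg_type] = fn
        return fn
    return register

@on("player_move")
async def handle_move(peer, payload):
    # update local game state, then gossip to interested peers
    print(f"{peer} moved to {payload['pos']}")

@on("state_sync")
async def handle_sync(peer, payload):
    print(f"sync from {peer}: tick {payload['tick']}")

async def serve(reader, writer):
    peer = writer.get_extra_info("peername")
    async for line in reader:            # one JSON message per line
        msg = json.loads(line)
        handler = HANDLERS.get(msg["type"])
        if handler:
            await handler(peer, msg["payload"])

async def main():
    server = await asyncio.start_server(serve, "0.0.0.0", 9000)
    async with server:
        await server.serve_forever()

asyncio.run(main())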
A Comprehensive Survey of the Tactile Internet: State of the Art and Research Directions
The Internet has made several giant leaps over the years, from a fixed to a mobile Internet, then to the Internet of Things, and now to a Tactile Internet. The Tactile Internet goes far beyond data, audio and video delivery over fixed and mobile networks, and even beyond allowing communication and collaboration among things. It is expected to enable haptic communication and allow skill set delivery over networks. Some examples of potential applications are tele-surgery, vehicle fleets, augmented reality and industrial process automation. Several papers already cover many of the Tactile Internet-related concepts and technologies, such as haptic codecs, applications, and supporting technologies. However, none of them offers a comprehensive survey of the Tactile Internet, including its architectures and algorithms. Furthermore, none of them provides a systematic and critical review of the existing solutions. To address these lacunae, we provide a comprehensive survey of the architectures and algorithms proposed to date for the Tactile Internet. In addition, we critically review them using a well-defined set of requirements and discuss some of the lessons learned as well as the most promising research directions.
Detection of Network Attacks Based on NetFlow Data
The long-term, nearly worldwide trend of rising cybercrime continues. This thesis addresses the growing problem of network traffic security, specifically the detection of attacks. A program is designed to detect network anomalies based on NetFlow data, with the aim of better protecting ordinary users, by analyzing flow records using data-mining techniques. It implements the TCM-KNN method, which exploits the fact that attacks deviate statistically from normal traffic, so even new, previously unseen types of attack can be detected.
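A minimal sketch of the TCM-KNN idea follows: a transductive confidence machine assigns each point a "strangeness" (here, the sum of distances to its k nearest neighbours in a baseline of benign flows) and a p-value measuring how unusual a new flow record is relative to that baseline. Feature extraction from NetFlow records is elided, and the threshold and k are invented for illustration.

import numpy as np

K = 3

def strangeness(x, baseline, k=K):
    """Sum of distances from x to its k nearest baseline points."""
    d = np.linalg.norm(baseline - x, axis=1)
    return np.sort(d)[:k].sum()

def p_value(x, baseline):
    """Fraction of baseline points at least as strange as x."""
    alpha_x = strangeness(x, baseline)
    alphas = np.array([
        strangeness(b, np.delete(baseline, i, axis=0))
        for i, b in enumerate(baseline)
    ])
    return (np.sum(alphas >= alpha_x) + 1) / (len(alphas) + 1)

# baseline: standardised feature vectors from known-benign NetFlow records
# (e.g. packet count, byte count, duration, flows per source).
baseline = np.random.default_rng(0).normal(size=(200, 4))
suspect = np.array([6.0, 5.5, 7.0, 6.5])        # far from the baseline

if p_value(suspect, baseline) < 0.05:            # hypothetical threshold
    print("flag as anomalous")

Because the test only asks how strange a point is relative to benign traffic, attacks never seen during training can still be flagged, which is the property the thesis relies on.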
Improving latency for interactive, thin-stream applications over reliable transport
A large number of network services use IP and reliable transport protocols. For applications with constant pressure of data, loss is handled satisfactorily, even if the application is latency-sensitive. For applications with data streams consisting of intermittently sent small packets, users experience extreme latencies more frequently. Because such thin-stream applications are commonly interactive and time-dependent, increased delay may severely reduce the experienced quality of the application. When TCP is used for thin-stream applications, events of highly increased latency are common, caused by the way retransmissions are handled. Other transport protocols deployed in the Internet, like SCTP, model their congestion control and reliability on TCP, as do many frameworks that provide reliability on top of unreliable transport. We have tested several application- and transport-layer solutions, and based on our findings, we propose sender-side enhancements that reduce the application-layer latency in a manner that is compatible with unmodified receivers. We have implemented the mechanisms as modifications to the Linux kernel, both for TCP and SCTP. The mechanisms are dynamically triggered so that they are only active when the kernel identifies the stream as thin. To evaluate the performance of our modifications, we have conducted a wide range of experiments using replayed thin-stream traces captured from real applications as well as artificially generated thin-stream data patterns. From the experiments, effects on latency, redundancy and fairness were evaluated. The analysis of the performed experiments shows great improvements in latency for thin streams when applying the modifications. Surveys in which users evaluated their experience of several applications' quality using the modified transport mechanisms confirmed the improvements seen in the statistical analysis. The positive effects of our modifications were shown to be possible without notable effects on fairness for competing streams. We therefore conclude that it is advisable to handle thin streams separately, using our modifications, when transmitting over reliable protocols to reduce retransmission latency.
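A minimal sketch of the thin-stream heuristics described above, modelled on the Linux kernel mechanisms that grew out of this line of work (linear retransmission timeouts and modified dupACK handling for thin streams). The thin-stream test follows the kernel's definition (fewer than four packets in flight, outside initial slow start); the surrounding connection state machine is elided.

THIN_LIMIT = 4   # a stream is "thin" with fewer packets than this in flight

def stream_is_thin(packets_in_flight: int, in_initial_slow_start: bool) -> bool:
    return packets_in_flight < THIN_LIMIT and not in_initial_slow_start

def next_rto(base_rto: float, retrans_count: int, thin: bool) -> float:
    """Exponential backoff normally; linear timeouts while the stream is
    thin, so a loss on a sparse stream is recovered much sooner."""
    if thin and retrans_count < 6:    # linear for a few tries, then back off
        return base_rto
    return base_rto * (2 ** retrans_count)

def should_fast_retransmit(dup_acks: int, thin: bool) -> bool:
    """Classic fast retransmit waits for 3 dupACKs; a thin stream rarely has
    enough packets in flight to generate them, so retransmit on the first."""
    return dup_acks >= (1 if thin else 3)

Both mechanisms only change sender behaviour, which is why they remain compatible with unmodified receivers, as the abstract notes.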
Profiling Large-scale Live Video Streaming and Distributed Applications
Today, distributed applications run at data-centre and Internet scales, from intensive data analysis, such as MapReduce, to serving the dynamic demands of a worldwide audience, such as YouTube. The network is essential to these applications at both scales. To provide adequate support, we must understand the full requirements of the applications, which are revealed by their workloads. In this thesis, we study distributed applications at different scales to enrich this understanding.
Large-scale Internet applications such as social networking services (SNS), video on demand (VoD), and content delivery networks (CDN) have been studied for years. An emerging type of video broadcasting on the Internet, crowdsourced live video streaming, has garnered attention, allowing platforms such as Twitch to attract over 1 million concurrent users globally. To better understand Twitch, we collected real-time popularity data combined with metadata about the content, and found that the broadcasters, rather than the content, drive its popularity (a hypothetical data-collection sketch follows below). Unlike YouTube and Netflix, where content can be cached, video streaming on Twitch is generated instantly and must be delivered to users immediately to enable real-time interaction. We therefore performed a large-scale measurement of Twitch's content location, revealing the global footprint of its infrastructure as well as the dynamic stream-hosting and client-redirection strategies that help Twitch serve millions of users at scale.
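A hypothetical poller in the spirit of the popularity measurement described above, written against today's Twitch Helix API; the thesis predates Helix and used a different interface, and the credentials and polling interval here are placeholders.

import time
import requests

CLIENT_ID = "your-client-id"        # placeholder
TOKEN = "your-oauth-token"          # placeholder

def top_streams(limit=100):
    """Fetch the currently most-viewed live streams with their metadata."""
    r = requests.get(
        "https://api.twitch.tv/helix/streams",
        headers={"Client-Id": CLIENT_ID, "Authorization": f"Bearer {TOKEN}"},
        params={"first": limit},
        timeout=10,
    )
    r.raise_for_status()
    return r.json()["data"]          # user_name, game_id, viewer_count, ...

while True:                           # sample popularity over time
    for s in top_streams():
        print(s["user_name"], s["game_id"], s["viewer_count"])
    time.sleep(300)                   # 5-minute polling interval (arbitrary)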
We next consider applications that run inside the data centre. Distributed computing applications rely heavily on the network for data transmission and the scheduling of resources and tasks. One successful application, Hadoop, has been widely deployed for Big Data processing, yet little work has been devoted to understanding its network behaviour. We found that Hadoop's behaviour is limited by the hardware resources and the processing jobs presented to it. After characterising Hadoop traffic on our testbed with a set of benchmark jobs, we built a simulator to reproduce Hadoop's job traffic. With the simulator, users can investigate the connections between Hadoop traffic and network performance without additional hardware cost; different network components, such as network topologies, queue policies, and transport-layer protocols, can be added to investigate their effect on performance.
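As a toy, flow-level illustration of the kind of simulation described above: mappers shuffle intermediate data to reducers over a shared bottleneck, and we ask how long the shuffle takes under fair sharing. All sizes and the topology are invented; the thesis's simulator is far more detailed.

import random

LINK_GBPS = 10.0             # shared bottleneck capacity
MAPPERS, REDUCERS = 8, 4

random.seed(1)
# each mapper sends one partition to every reducer (all-to-all shuffle)
flows = [random.uniform(0.5, 2.0)             # remaining size in gigabits
         for _ in range(MAPPERS * REDUCERS)]

t = 0.0
while flows:
    rate = LINK_GBPS / len(flows)             # fair share per active flow
    dt = min(flows) / rate                    # time until a flow finishes
    t += dt
    flows = [f - rate * dt for f in flows]
    flows = [f for f in flows if f > 1e-9]    # drop completed flows

print(f"shuffle completes after {t:.2f} s on a {LINK_GBPS} Gb/s link")

Swapping the fair-share rate computation for a different queueing or transport model is the sort of experiment such a simulator enables without extra hardware.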
In this thesis, we extended the knowledge of networking by investigating two widely used applications, one in the data centre and one at Internet scale. We (i) studied the most popular live video streaming platform, Twitch, as a new type of Internet-scale distributed application, revealing that broadcaster factors drive the popularity of such platforms; (ii) discovered the footprint of Twitch's streaming infrastructure and its dynamic stream-hosting and client-redirection strategies, providing an in-depth example of video streaming delivery at Internet scale; (iii) investigated the traffic generated by a distributed application by characterising Hadoop traffic under various parameters; and (iv) with this knowledge, built a simulation tool so users can efficiently investigate the performance of different network components under a distributed application.
Queen Mary University of London
Building the Future Internet through FIRE
The Internet as we know it today is the result of continuous work on improving network communications, end-user services, computational processes and information technology infrastructures. The Internet has become a critical infrastructure for humanity, offering complex networking services and end-user applications that together have transformed many aspects of our lives, chiefly economic ones. Recently, with the advent of new paradigms, progress in wireless technology, sensor networks and information systems, and the inexorable shift towards the everything-connected paradigm, first known as the Internet of Things and lately envisioned as the Internet of Everything, a data-driven society has been created. In a data-driven society, productivity, knowledge, and experience depend on increasingly open, dynamic, interdependent and complex Internet services. The challenge in designing the Future Internet is to build robust enabling technologies and to implement and deploy adaptive systems that create business opportunities despite increasing uncertainty and emergent systemic behaviours, in which humans and machines seamlessly cooperate.