Performance-Engineered Network Overlays for High Quality Interaction in Virtual Worlds
Overlay hosting systems such as PlanetLab, and cloud computing environments such as Amazon's EC2, provide shared infrastructures within which new applications can be developed and deployed on a global scale. This paper explores how systems of this sort can be used to enable advanced network services and sophisticated applications that use those services to enhance performance and provide a high quality user experience. Specifically, we investigate how advanced overlay hosting environments can be used to provide network services that enable scalable virtual world applications and other large-scale distributed applications requiring consistent, real-time performance. We propose a novel network architecture called Forest built around per-session tree-structured communication channels that we call comtrees. Comtrees are provisioned and support both unicast and multicast packet delivery. The multicast mechanism is designed to be highly scalable and lightweight enough to support the rapid changes to multicast subscriptions needed for efficient support of state updates within virtual worlds. We evaluate performance using a combination of analysis and experimental measurement of a partial system prototype that supports fully functional distributed game sessions. Our results provide the data needed to enable accurate projections of performance for a variety of session and system configurations.
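The comtree idea lends itself to a compact illustration. Below is a minimal sketch, in Python, of the per-group subscription state a comtree router might keep so that subscription changes stay cheap; the names (ComtreeRouter, subscribe, forward) and the set-based representation are assumptions for illustration, not details of the Forest prototype.

    from collections import defaultdict

    class ComtreeRouter:
        """One router's view of a single comtree (hypothetical sketch)."""

        def __init__(self, links):
            self.links = set(links)         # tree links incident to this router
            self.subs = defaultdict(set)    # multicast group -> links with subscribers

        def subscribe(self, group, link):
            # A subscription report arrived on `link`; a cheap set insertion
            # keeps rapid subscription churn inexpensive.
            self.subs[group].add(link)

        def unsubscribe(self, group, link):
            self.subs[group].discard(link)
            if not self.subs[group]:
                del self.subs[group]        # prune empty per-group state

        def forward(self, group, in_link):
            # Multicast: copy the packet only onto tree links that lead to
            # subscribers, never back onto the link it arrived on.
            return [l for l in self.subs.get(group, ()) if l != in_link]

    r = ComtreeRouter(links=["parent", "childA", "childB"])
    r.subscribe(group=42, link="childA")
    print(r.forward(group=42, in_link="parent"))   # ['childA']

Because per-group state is just a set of tree links, joining or leaving a group touches one entry, which is the property the abstract attributes to the lightweight multicast mechanism.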
Architectures and algorithms for dynamic overlay networks
Most of today's Internet of Things (IoT) applications assume that data will be moved off devices into centralized cloud platforms. While existing IoT systems leverage cloud-based analytics for meaningful data reasoning, the assumption that data should always be moved off the devices is problematic. The amount of data to be moved from devices over Internet gateways to cloud platforms is huge, which can make the approach cost-inefficient. In other scenarios, privacy concerns of customers or organizational rules complicate the process of transferring data to third-party data centers.

This dissertation proposes architectures and dynamic overlay network algorithms for in-network and edge processing of the data offered by globally available IoT devices, and provides a global platform for meaningful and responsive data analysis and decision making. The proposed techniques shift IoT analytics from a "collect data now and analyze it later" scenario to directly providing meaningful information from in-network processing of device data at or near the devices. The techniques serve future IoT use cases including distributed context awareness, on-demand data analysis, and in-network decision making. The dissertation comprises three main components.

The first component is a device management protocol for cloning devices' data in proximate Edge Computing platforms. Unlike existing application-layer IoT management protocols, the proposed protocol uses the LTE/LTE-A radio frame structure, device-to-device communication, and IoT data properties to avoid the excessive network access latency of existing technologies.

The second component realizes distributed IoT analytics as overlay networks of device clones. By means of virtual network embedding, it selects and interconnects devices' clones to efficiently realize applications' virtual topologies and achieve goals such as minimum latency, minimum infrastructure cost, or maximum infrastructure utilization.

Finally, the dissertation presents a communication middleware that allows autonomous discovery, self-deployment, and online migration of devices' clones across heterogeneous Edge Computing platforms. The middleware ensures that communication latency between clones is kept to a minimum despite the uncontrolled variability of network and hosting platform conditions.

We evaluate the proposed architectures and algorithms through simulations and prototype implementations of the various components in controlled testbed environments, which we evaluate using real user applications. We explore the feasibility of the proposed techniques from both theoretical and practical perspectives.

Keywords: Cloud Computing, Internet of Things, Algorithmic Game Theory, Compressive Sensing
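To make the second component concrete, here is a minimal sketch of virtual network embedding as clone placement: virtual nodes are mapped onto edge sites so that the summed latency over the application's virtual links is minimized. The brute-force search, the one-clone-per-site constraint, and all names and latency figures are illustrative assumptions, not the dissertation's actual algorithm.

    import itertools

    # Assumed measured latencies (ms) between candidate edge sites.
    latency = {
        ("siteA", "siteB"): 5, ("siteA", "siteC"): 20, ("siteB", "siteC"): 12,
    }

    def lat(a, b):
        # Symmetric lookup; zero latency within a single site.
        return 0 if a == b else latency.get((a, b)) or latency.get((b, a))

    def embed(virtual_links, sites):
        """Brute-force embedding of virtual nodes onto edge sites, minimizing
        the summed latency across the application's virtual links."""
        nodes = sorted({n for link in virtual_links for n in link})
        best, best_cost = None, float("inf")
        for placement in itertools.product(sites, repeat=len(nodes)):
            if len(set(placement)) < len(placement):
                continue  # assumed capacity: at most one clone per site
            where = dict(zip(nodes, placement))
            cost = sum(lat(where[u], where[v]) for u, v in virtual_links)
            if cost < best_cost:
                best, best_cost = where, cost
        return best, best_cost

    # Virtual topology: sensor clone -> aggregator clone -> decision clone.
    mapping, cost = embed([("sensor", "agg"), ("agg", "decide")],
                          ["siteA", "siteB", "siteC"])
    print(mapping, cost)   # places the aggregator at the low-latency hub, cost 17

Real embedders replace the exhaustive search with heuristics or optimization solvers, since the placement space grows exponentially with the number of virtual nodes.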
Global evaluation of CDNs performance using PlanetLab
Since they were introduced in the market, Content Distribution Networks (CDNs) have grown in importance due to the "instantaneity" that today's web users expect when loading pages.

Thanks to increased access speeds, especially in the last mile with technologies such as xDSL, HFC, and FTTH, page loading times have been reduced. However, the "instantaneity" users want could not be achieved without techniques such as caching and content distribution via CDNs. These techniques aim to avoid fetching web objects from the origin web server, especially "heavy" objects such as multimedia files.

CDNs provide not only an effective way of distributing content globally, but also a defense against problems such as "flash crowd events". Such events can cause large monetary losses because they hit the scalability bottleneck of clustered origin servers.

The leading CDN provider is Akamai, and one of the most important decisions a CDN must make is which of the available servers a user should fetch a specific web object from. This best-server selection uses a DNS-based technique that maps the request to the IP address of the server with the best latency at that moment.

This project presents a global performance evaluation of Akamai's server selection technique using tools such as PlanetLab and Httperf. Different tests were run to compare results from globally distributed vantage points and identify the areas where Akamai performs well. To determine this, the results obtained with Akamai were also compared against a web page served without a CDN. Finally, a linear correlation between the measured latencies and the number of hops was identified.
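As an illustration of that final analysis step, the sketch below computes the Pearson correlation coefficient between measured latencies and hop counts. The sample values are made up, standing in for measurements a PlanetLab vantage point would collect (e.g., with Httperf and traceroute).

    import statistics

    latencies_ms = [34.0, 51.0, 88.0, 120.0, 61.0, 95.0]   # assumed measurements
    hop_counts   = [8,    11,   16,   21,    12,   17]

    def pearson(xs, ys):
        # Pearson's r: covariance normalized by the two standard deviations.
        mx, my = statistics.fmean(xs), statistics.fmean(ys)
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sum((x - mx) ** 2 for x in xs) ** 0.5
        sy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (sx * sy)

    print(f"r = {pearson(latencies_ms, hop_counts):.3f}")  # r near 1 => strong linear relation

An r close to 1 across many vantage points would support the project's observed linear relationship between latency and hop count.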