
    Quality-centric design for Peer-to-Peer live video streaming

    The use of Peer-to-Peer (P2P) networks is a scalable way to offer video services over the Internet. This thesis focuses on the definition, development and evaluation of a P2P architecture for distributing live video. The overall design of the network is driven by Quality of Experience (QoE), whose main component in this case is the video quality perceived by the end users, instead of the traditional Quality of Service (QoS) based design of most systems. To measure the perceived video quality automatically and in real time, we extended the recently proposed Pseudo-Subjective Quality Assessment (PSQA) methodology. Two major lines of research are developed. First, we propose a multiple-source video distribution technique that can be optimized to maximize perceived quality in highly faulty contexts and that requires very little signaling (unlike existing systems). We develop a PSQA-based methodology that gives us fine-grained control over how the video signal is split into parts and how much redundancy is added, as a function of the dynamics of the network's users. In this way, the robustness of the system can be improved as much as desired, within the capacity limits of the communication. Second, we present a structured mechanism to control the topology of the network. Selecting which users will serve which others is important for the robustness of the network, especially when users are heterogeneous in their capacities and connection times. Our design maximizes the expected global quality (evaluated using PSQA) by selecting a topology that improves the robustness of the system. We also study how to extend the network with two complementary services: Video on Demand (VoD) and the MyTV service. The challenge in these services is how to perform efficient searches over the video library, given the highly dynamic content. We present a caching strategy for searches in these services that maximizes the total number of correct answers to queries, considering a particular content dynamic and bandwidth constraints. Our overall design considers real scenarios, where the test cases and configuration parameters come from real data of a reference service in production. Our prototype is fully functional, free to use, and based on well-proven open-source technologies.
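    A loose illustration of the multiple-source idea (a minimal sketch, assuming a round-robin split into k substreams plus a single XOR parity substream; in the thesis the split and the amount of redundancy are tuned via PSQA as a function of peer dynamics):

```python
# Illustrative sketch only: round-robin split of a chunked live stream into
# k substreams plus one XOR parity substream, so a receiver can rebuild the
# signal when any single substream (e.g., a departed peer) is lost.
# k and the parity scheme are assumptions, not the thesis's actual scheme.

from typing import List, Optional

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split_with_parity(chunks: List[bytes], k: int) -> List[List[bytes]]:
    """Distribute equal-size chunks over k substreams; substream k is parity."""
    streams: List[List[bytes]] = [[] for _ in range(k + 1)]
    usable = len(chunks) - len(chunks) % k        # drop a trailing partial group
    for i in range(0, usable, k):
        group = chunks[i:i + k]
        parity = group[0]
        for c in group[1:]:
            parity = xor_bytes(parity, c)
        for j, c in enumerate(group):
            streams[j].append(c)
        streams[k].append(parity)
    return streams

def recover_group(group: List[Optional[bytes]], parity: bytes) -> List[bytes]:
    """Rebuild at most one missing chunk of a group from the parity chunk."""
    missing = [i for i, c in enumerate(group) if c is None]
    if len(missing) > 1:
        raise ValueError("XOR parity repairs at most one lost substream")
    if missing:
        acc = parity
        for c in group:
            if c is not None:
                acc = xor_bytes(acc, c)
        group[missing[0]] = acc
    return group  # type: ignore[return-value]

# e.g. streams = split_with_parity([b'AA', b'BB', b'CC', b'DD'], k=2)
#      recover_group([b'AA', None], streams[2][0]) == [b'AA', b'BB']
```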

    Hardware-Aware Algorithm Designs for Efficient Parallel and Distributed Processing

    The introduction and widespread adoption of the Internet of Things, together with emerging new industrial applications, bring new requirements in data processing. Specifically, the need for timely processing of data that arrives at high rates creates a challenge for the traditional cloud computing paradigm, where data collected at various sources is sent to the cloud for processing. As an approach to this challenge, processing algorithms and infrastructure are distributed from the cloud to multiple tiers of computing, closer to the sources of data. This creates a wide range of devices for algorithms to be deployed on and software designs to adapt to.
    In this thesis, we investigate how hardware-aware algorithm designs on a variety of platforms lead to algorithm implementations that efficiently utilize the underlying resources. We design, implement and evaluate new techniques for representative applications that involve the whole spectrum of devices, from resource-constrained sensors in the field to highly parallel servers. At each tier of processing capability, we identify key architectural features that are relevant for applications and propose designs that make use of these features to achieve high-rate, timely and energy-efficient processing.
    In the first part of the thesis, we focus on high-end servers and utilize two main approaches to achieve high-throughput processing: vectorization and thread parallelism. We employ vectorization for pattern matching algorithms used in security applications. We show that re-thinking the design of algorithms to better utilize the resources available in the platforms they are deployed on, such as vector processing units, can bring significant speedups in processing throughput. We then show how thread-aware data distribution and proper inter-thread synchronization allow scalability, especially for the problem of high-rate network traffic monitoring. We design a parallelization scheme for sketch-based algorithms that summarize traffic information, which allows them to handle incoming data at high rates and answer queries on that data efficiently, without overheads.
    In the second part of the thesis, we target the intermediate tier of computing devices and focus on typical examples of the hardware found there. We show how single-board computers with embedded accelerators can be used to handle the computationally heavy part of applications, and showcase this specifically for pattern matching in security-related processing. We further identify key hardware features that affect the performance of pattern matching algorithms on such devices, present a co-evaluation framework to compare algorithms, and design a new algorithm that efficiently utilizes the hardware features.
    In the last part of the thesis, we shift the focus to the low-power, resource-constrained tier of processing devices. We target wireless sensor networks and study distributed data processing algorithms where the processing happens on the same devices that generate the data. Specifically, we focus on a continuous monitoring algorithm (geometric monitoring) that aims to minimize communication between nodes. By deploying that algorithm in action, under realistic environments, we demonstrate that the interplay between the network protocol and the application plays an important role in this layer of devices. Based on that observation, we co-design a continuous monitoring application with a modern network stack and augment it further with an in-network aggregation technique. In this way, we show that awareness of the underlying network stack is important to realize the full potential of the continuous monitoring algorithm.
    The techniques and solutions presented in this thesis contribute to better utilization of hardware characteristics across a wide spectrum of platforms. We employ these techniques on problems that are representative examples of current and upcoming applications, and contribute an outlook of emerging possibilities that can build on the results of the thesis.
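    A minimal sketch of why sketch-based summaries suit thread parallelism (assumed parameters and structure, not the thesis's implementation): a Count-Min sketch is mergeable, so each thread can update a private instance without locks, and a reader can sum the instances at query time.

```python
# Minimal, assumed sketch: per-thread Count-Min instances built with the same
# hash salts can be summed cell-wise, keeping the hot update path lock-free.

import random

class CountMin:
    """Count-Min sketch: a fixed-size summary of item frequencies."""

    def __init__(self, width: int = 1024, depth: int = 4, seed: int = 42):
        rng = random.Random(seed)   # same seed -> same hashes -> mergeable
        self.width, self.depth = width, depth
        self.salts = [rng.getrandbits(32) for _ in range(depth)]
        self.table = [[0] * width for _ in range(depth)]

    def add(self, key: str, count: int = 1) -> None:
        for row, salt in enumerate(self.salts):
            self.table[row][hash((salt, key)) % self.width] += count

    def estimate(self, key: str) -> int:
        # true count <= estimate; overestimation shrinks with width and depth
        return min(self.table[row][hash((salt, key)) % self.width]
                   for row, salt in enumerate(self.salts))

    def merge(self, other: "CountMin") -> None:
        # cell-wise sum: combines per-thread sketches into one global view
        for row in range(self.depth):
            for col in range(self.width):
                self.table[row][col] += other.table[row][col]

# Usage: each monitoring thread owns one CountMin(seed=42) and feeds it its
# share of packets; a query merges the per-thread tables before estimate().
```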

    The DUNE Far Detector Interim Design Report Volume 1: Physics, Technology & Strategies (Deep Underground Neutrino Experiment, DUNE)

    The Deep Underground Neutrino Experiment (DUNE) will be a world-class neutrino observatory and nucleon decay detector designed to answer fundamental questions about the nature of elementary particles and their role in the universe. The international DUNE experiment, hosted by the U.S. Department of Energy’s Fermilab, will consist of a far detector to be located about 1.5 km underground at the Sanford Underground Research Facility (SURF) in South Dakota, USA, at a distance of 1300 km from Fermilab, and a near detector to be located at Fermilab in Illinois. The far detector will be a very large, modular liquid argon time-projection chamber (LArTPC) with a 40 kt (40 Gg) fiducial mass. This LAr technology will make it possible to reconstruct neutrino interactions with image-like precision and unprecedented resolution.

    Improving Network Performance Through Endpoint Diagnosis And Multipath Communications

    Components of networks, and by extension the Internet, can fail. It is therefore important to find the points of failure and resolve existing issues as quickly as possible. Resolution, however, takes time, and it is important to maintain a high quality of service (QoS) for existing clients while it is in progress. In this work, our goal is to provide clients with means of avoiding failures if/when possible, to maintain high QoS while enabling them to assist in the diagnosis process and speed up the time to recovery. Fixing failures relies on first detecting that there is one and then identifying where it occurred so as to be able to remedy it. We take a two-step approach in our solution. First, we identify the entity (Client, Server, Network) responsible for the failure. Next, if a failure is identified as network-related, additional algorithms are triggered to detect the device responsible. To achieve the first step, we revisit the question: how much can you infer about a failure using TCP statistics collected at one of the endpoints in a connection? Using an agent that captures TCP statistics at one of the endpoints, we devise a classification algorithm that identifies the root cause of failures. Using insights derived from this classification algorithm, we identify dominant TCP metrics that indicate where and why problems occur. If/when a failure is identified as a network-related problem, the second step is triggered, where the algorithm uses additional information collected from "failed" connections to identify the device which caused the failure. Failures are also disruptive to users' performance, and resolution may take time. Therefore, it is important to shield clients from their effects as much as possible. One option for avoiding problems resulting from failures is to rely on multiple paths (they are unlikely to go bad at the same time). The use of multiple paths involves both selecting paths (routing) and using them effectively. The second part of this thesis explores the efficacy of multipath communication in such situations. Multipath communication is expected to have monetary implications for ISPs and content providers. Our solution, therefore, aims to minimize such costs to the content providers while significantly improving user performance.
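    As a purely hypothetical illustration of the first step, the decision logic over endpoint TCP statistics might look like the sketch below; the metric names and thresholds are invented here, whereas the thesis derives its classifier from collected connection data.

```python
# Hypothetical sketch: classify a failure from TCP statistics captured at one
# endpoint. All fields and thresholds are assumptions made for illustration.

from dataclasses import dataclass

@dataclass
class TcpStats:
    retrans_ratio: float      # retransmitted / total segments sent
    rtt_ms: float             # smoothed round-trip time
    zero_window_events: int   # times the receiver advertised a zero window
    syn_retries: int          # unanswered connection attempts

def classify_failure(s: TcpStats) -> str:
    if s.syn_retries >= 3:
        return "server"       # endpoint never answered: likely server-side
    if s.zero_window_events > 0:
        return "client"       # receiver could not drain its buffer
    if s.retrans_ratio > 0.05 or s.rtt_ms > 500:
        return "network"      # loss or latency points into the path
    return "none"

# e.g. classify_failure(TcpStats(0.12, 80.0, 0, 0)) -> "network"
```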

    Informing protocol design through crowdsourcing measurements

    Middleboxes, such as proxies, firewalls and NATs, play an important role in the modern Internet ecosystem. On one hand, they perform advanced functions, e.g. traffic shaping, security or enhancing application performance. On the other hand, they turn the Internet into a hostile ecosystem for innovation, as they limit deviation from deployed protocols. It is therefore essential, when designing a new protocol, to first understand its interaction with the elements of the path. The emerging area of crowdsourcing solutions can help shed light on this issue. Such an approach allows us to reach large and diverse sets of users, as well as different types of devices and networks, to perform Internet measurements. In this thesis, we show how to make informed protocol design choices by expanding the traditional crowdsourcing focus from the human element to crowdsourcing large-scale measurement platforms. We consider specific use cases, namely pervasive encryption in the modern Internet, TCP Fast Open and ECN++. We consider these use cases to advance the global understanding of whether wide adoption of encryption is possible in today’s Internet and whether the adoption of encryption is necessary to guarantee the proper functioning of HTTP/2. We target ECN, and particularly ECN++, given its succession of deployment problems, and we measured ECN deployment over mobile as well as fixed networks. In the process, we discovered some bad news for the base ECN protocol: more than half the mobile carriers we tested wipe the ECN field at the first upstream hop. This thesis also reports the good news that, wherever ECN gets through, we found no deployment problems for the ECN++ enhancement. The thesis includes the results of other more in-depth tests to check whether servers that claim to support ECN actually respond correctly to explicit congestion feedback, including some surprising congestion behaviour unrelated to ECN.
    This thesis also explores the possible causes that ossify the modern Internet and hinder innovation. Network Address Translators (NATs) are commonplace in the Internet nowadays; it is fair to say that most residential and mobile users are connected to the Internet through one or more NATs. As with any other technology, NAT presents upsides and downsides. Probably the most acknowledged downside of NAT is that it introduces additional difficulties for some applications, such as peer-to-peer applications and gaming, to function properly. This is partially due to the nature of the NAT technology itself, but also due to the diversity of behaviors across the different NAT implementations deployed in the Internet. Understanding the properties of the currently deployed NAT base provides useful input for application and protocol developers regarding what to expect when deploying new applications in the Internet. We develop NATwatcher, a tool to test NAT boxes using a crowdsourcing-based measurement methodology. We also perform large-scale active measurement campaigns to detect CGNs in fixed broadband networks using NAT Revelio, a tool we have developed and validated. Revelio enables us to actively determine, from within residential networks, the type of upstream network address translation, namely NAT at the home gateway (customer-grade NAT) or NAT in the ISP (Carrier-Grade NAT). We deploy Revelio in the FCC Measuring Broadband America testbed operated by SamKnows and also in the RIPE Atlas testbed.
    A part of this thesis focuses on characterizing CGNs in Mobile Network Operators (MNOs). We develop a measurement tool, called CGNWatcher, that executes a number of active tests to fully characterize CGN deployments in MNOs. The CGNWatcher tool systematically tests more than 30 behavioural requirements of NATs defined by the Internet Engineering Task Force (IETF), as well as multiple CGN behavioural metrics. We deploy CGNWatcher in MONROE and perform large measurement campaigns to characterize the real CGN deployments of the MNOs serving the MONROE nodes. We perform a large measurement campaign using the tools described above, recruiting over 6,000 users from 65 different countries and over 280 ISPs. We validate our results with the ISPs at the IP level and against the ground truth we collected. To the best of our knowledge, this represents the largest active measurement study of (confirmed) NAT or CGN deployments at the IP level in fixed and mobile networks to date. As part of the thesis, we also characterize roaming across Europe. The goal of this experiment was to understand whether an MNO changes CGN while roaming; to this end, we ran a series of measurements that enable us to identify the roaming setup, infer the network configuration for the 16 MNOs that we measure, and quantify the end-user performance for the roaming configurations that we detect. We built a unique roaming measurement platform deployed in six countries across Europe. Using this platform, we measure different aspects of international roaming in 3G and 4G networks, including mobile network configuration, performance characteristics, and content discrimination. We find that operators adopt common approaches to implementing roaming, resulting in additional latency penalties of 60 ms or more, depending on geographical distance. Considering content accessibility, roaming poses additional constraints that lead to only minimal deviations when accessing content in the original country. However, geographical restrictions in the visited country make the picture more complicated and less intuitive. The results included in this thesis provide useful input for application and protocol designers, ISPs and researchers that aim to make their applications and protocols work across the modern Internet.
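    To give a flavor of a Revelio-style inference (a simplified sketch under assumptions, not the tool's actual test suite): if the home gateway's WAN-side address differs from the address a remote server observes, and that WAN address is private (RFC 1918) or in the shared space reserved for carrier-grade NAT (RFC 6598, 100.64.0.0/10), translation likely happens upstream, inside the ISP.

```python
# Simplified sketch of one CGN inference idea; not Revelio's actual logic.

import ipaddress

SHARED = ipaddress.ip_network("100.64.0.0/10")   # reserved for CGN (RFC 6598)

def likely_cgn(gateway_wan_ip: str, public_ip: str) -> bool:
    wan = ipaddress.ip_address(gateway_wan_ip)   # address on the gateway's WAN side
    pub = ipaddress.ip_address(public_ip)        # address a remote server sees
    if wan == pub:
        return False                             # gateway holds the public address
    return wan.is_private or wan in SHARED       # translation happens upstream

# e.g. likely_cgn("100.70.3.12", "203.0.113.7") -> True
```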

    Identifying and diagnosing video streaming performance issues

    On-line video streaming is an ever-evolving ecosystem of services and technologies, where content providers are in a constant race to satisfy the users' demand for richer content, higher-bitrate streams, updated feature sets and cross-platform compatibility. At the same time, network operators are required to ensure that the requested video streams are delivered through the network with a satisfactory quality, in accordance with the existing Service Level Agreements (SLAs). However, tracking and maintaining satisfactory video Quality of Experience (QoE) has become a greater challenge for operators than ever before. With the growing popularity of content consumption on handheld devices and over wireless connections, new points of failure have been added to the list of failures that can affect video quality. Moreover, the adoption of end-to-end encryption by major streaming services has rendered previously used QoE diagnosis methods obsolete.
    In this thesis, we identify the current challenges in identifying and diagnosing video streaming issues, and we propose novel approaches to address them. More specifically, the thesis initially presents methods and tools to identify a wide array of QoE problems and the severity with which they affect the users' experience. The next part of the thesis investigates methods to locate under-performing parts of the network that degrade the delivered quality of a service. In this context, we propose a data-driven methodology for detecting under-performing areas of a cellular network with sub-optimal Quality of Service (QoS) and video QoE. Moreover, we develop and evaluate a multi-vantage-point framework that is capable of diagnosing the underlying faults that cause the disruption of the user's experience. The last part of this work further explores the detection of network performance anomalies and introduces a novel method for detecting such issues using contextual information. This approach provides higher accuracy when detecting network faults in the presence of high variation, and can help providers detect anomalies early, before they result in QoE issues.
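    A minimal sketch of contextual anomaly detection (an assumed formulation, not the thesis's exact method): score each sample against the statistics of its own context, here the hour of day, so that normal daily variation is not flagged as a fault.

```python
# Assumed formulation for illustration: flag a QoS sample as anomalous when
# it deviates strongly from the history of its own context (hour of day).

import statistics
from collections import defaultdict
from typing import Dict, List, Tuple

def contextual_anomalies(samples: List[Tuple[int, float]],
                         threshold: float = 3.0) -> List[Tuple[int, float]]:
    """samples: (hour_of_day, metric_value) pairs; returns flagged samples."""
    by_ctx: Dict[int, List[float]] = defaultdict(list)
    for hour, value in samples:
        by_ctx[hour].append(value)          # build a baseline per context
    flagged = []
    for hour, value in samples:
        ctx = by_ctx[hour]
        if len(ctx) < 2:
            continue                        # not enough history for this context
        mu, sigma = statistics.mean(ctx), statistics.stdev(ctx)
        if sigma > 0 and abs(value - mu) / sigma > threshold:
            flagged.append((hour, value))   # far outside its context's norm
    return flagged
```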

    The DUNE Far Detector Interim Design Report Volume 1: Physics, Technology and Strategies

    The DUNE IDR describes the proposed physics program and technical designs of the DUNE Far Detector modules in preparation for the full TDR to be published in 2019. It is intended as an intermediate milestone on the path to a full TDR, justifying the technical choices that flow down from the high-level physics goals through requirements at all levels of the Project. These design choices will enable the DUNE experiment to make the ground-breaking discoveries that will help to answer fundamental physics questions. Volume 1 contains an executive summary that describes the general aims of this document. The remainder of this first volume provides a more detailed description of the DUNE physics program that drives the choice of detector technologies. It also includes concise outlines of two overarching systems that have not yet evolved to consortium structures: computing and calibration. Volumes 2 and 3 of this IDR describe, for the single-phase and dual-phase technologies, respectively, each detector module's subsystems, the technical coordination required for its design, construction, installation, and integration, and its organizational structure.
