
    Provider-Controlled Bandwidth Management for HTTP-based Video Delivery

    Over the past few years, a revolution in video delivery technology has taken place as mobile viewers and over-the-top (OTT) distribution paradigms have significantly changed the landscape of video delivery services. For decades, high-quality video was only available in the home via linear television or physical media. Though Web-based services brought video to desktop and laptop computers, the dominance of proprietary delivery protocols and codecs inhibited research efforts. The recent emergence of HTTP adaptive streaming protocols has prompted a re-evaluation of legacy video delivery paradigms and introduced new questions as to the scalability and manageability of OTT video delivery. This dissertation addresses the question of how to give content and network service providers the ability to monitor and manage large numbers of HTTP adaptive streaming clients in an OTT environment. Our early work focused on demonstrating the viability of server-side pacing schemes to produce an HTTP-based streaming server. We also investigated the ability of client-side pacing schemes to work with both commodity HTTP servers and our HTTP streaming server. Continuing our client-side pacing research, we developed our own client-side data proxy architecture, which was implemented on a variety of mobile devices and operating systems. We used this portable client architecture as a platform for investigating different rate adaptation schemes and algorithms. We then concentrated on evaluating the network impact of multiple adaptive bitrate clients competing for limited network resources, and on developing schemes for enforcing fair access to network resources. The main contribution of this dissertation is the definition of segment-level client and network techniques for enforcing class of service (CoS) differentiation between OTT HTTP adaptive streaming clients. We developed a segment-level network proxy architecture which works transparently with adaptive bitrate clients through the use of segment replacement. We also defined a segment-level rate adaptation algorithm which uses download aborts to enforce CoS differentiation across distributed independent clients. The segment-level abstraction more accurately models application-network interactions and highlights the difference between segment-level and packet-level time scales. Our segment-level CoS enforcement techniques provide a foundation for creating scalable managed OTT video delivery services.
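    The following is an illustrative sketch, not the dissertation's actual algorithm, of how a segment-level client might combine CoS weighting with download aborts: a class-of-service weight scales the throughput estimate used for bitrate selection, and an in-progress segment download is aborted when it is projected to miss the playback deadline. All names, the bitrate ladder, weights, and thresholds are hypothetical.

```python
# Hypothetical segment-level rate adaptation with CoS weighting and download aborts.
# This is a sketch under assumed parameters, not the dissertation's algorithm.

BITRATES_KBPS = [500, 1000, 2000, 4000]   # hypothetical encoded bitrate ladder
SEGMENT_SECONDS = 4.0                      # hypothetical segment duration


def select_bitrate(throughput_kbps: float, cos_weight: float) -> int:
    """Pick the highest bitrate sustainable at the CoS-weighted throughput estimate."""
    budget = throughput_kbps * cos_weight
    feasible = [b for b in BITRATES_KBPS if b <= budget]
    return feasible[-1] if feasible else BITRATES_KBPS[0]


def should_abort(bytes_done: int, seg_bytes: int, elapsed_s: float) -> bool:
    """Abort the segment download if its projected total time exceeds the segment duration."""
    if elapsed_s <= 0 or bytes_done == 0:
        return False
    projected_total = elapsed_s * seg_bytes / bytes_done
    return projected_total > SEGMENT_SECONDS


# Example: a lower-priority client (weight 0.5) backs off sooner than a
# higher-priority one (weight 1.0), yielding CoS differentiation between
# independent clients without any packet-level coordination.
print(select_bitrate(3000, cos_weight=1.0))   # -> 2000
print(select_bitrate(3000, cos_weight=0.5))   # -> 1000
print(should_abort(bytes_done=250_000, seg_bytes=2_000_000, elapsed_s=1.0))  # -> True
```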

    Reducing short flows' latency in the internet

    Short flows are highly valuable in the modern Internet and are widely used by applications in the form of web requests and user interactions. These kinds of applications are extremely sensitive to latency: a small additional delay, such as one or two round-trip times (RTTs), can easily frustrate users and degrade the usability of services. Ideally, we want to finish such flows in a single network RTT, and we want the network's RTT to be as close as possible to the speed-of-light propagation delay. Unfortunately, in the current Internet, many unnecessary delays are caused by different kinds of policies, in particular transmission-protocol and routing policies, that keep us far from this goal. This thesis aims to answer two questions: how can we optimize the transmission protocol to bring short flows' latency as close as possible to one RTT, and why are network RTTs still significantly larger than the speed-of-light latency? To reduce transmission latency, we focus on the two main components of short flows: connection establishment and data transmission. ASAP, a new naming and transport protocol, is introduced to reduce the time spent on initial TCP connections. It merges DNS and TCP connection-establishment functionality by piggybacking the connection-establishment procedure atop the DNS lookup process. With ASAP, a host can save up to two-thirds of the time spent on the initial connection without exposing significant DoS vulnerabilities. For data transmission, we designed a new rate-control mechanism, Halfback, which achieves low latency with limited bandwidth overhead and requires only sender-side changes. Halfback combines an aggressive startup phase, which finishes transmission for most short flows in one RTT, with a Reverse-Ordering Proactive Retransmission phase that helps the sender recover quickly from packet loss caused by the aggressive startup. Halfback achieves flow completion times that are 56% smaller on average and three times smaller at the 99th percentile. The RTT between two hosts can be more than 6 times the speed-of-light latency over directed optical fiber. To understand the composition of this RTT inflation, we break the inflation of the end-to-end path down into its contributing factors. Based on our results, 7.2% is caused by network topology, 18.8% by inter-domain routing policies, 54.9% by peering policies, and 25.6% by intra-domain routing policies. This shows that the main component of path inflation comes from peering policies, which may deserve more attention in future research. We also analyze how the inflation contributed by each factor changed over five years; according to our analysis, the total inflation has been reduced by around 6% each year since 2010.
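    Below is a minimal sketch of a Halfback-style sending schedule as I read the abstract: an aggressive startup phase transmits every packet of a short flow in the first RTT, and a reverse-ordering proactive retransmission phase then resends packets from the tail backwards, so losses caused by the initial burst can be repaired without waiting for timeouts. The function name, parameters, and redundancy policy are hypothetical.

```python
# Hypothetical Halfback-like send schedule: aggressive startup followed by
# reverse-order proactive retransmission of the tail of the flow.

def halfback_schedule(num_packets: int, redundancy: int) -> list[int]:
    """Return sequence numbers in the order a Halfback-like sender would emit them."""
    # Phase 1: aggressive startup -- send the whole short flow back to back.
    schedule = list(range(num_packets))
    # Phase 2: proactively retransmit `redundancy` packets in reverse order,
    # starting from the tail, the region most likely dropped by the burst.
    tail_first = list(range(num_packets - 1, max(-1, num_packets - 1 - redundancy), -1))
    return schedule + tail_first


# Example: a 10-packet flow with 4 proactive retransmissions.
print(halfback_schedule(10, redundancy=4))
# -> [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 9, 8, 7, 6]
```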

    Analyzing Data-center Application Performance Via Constraint-based Models

    Hyperscale Data Centers (HDCs) are the largest distributed computing machines ever constructed. They serve as the backbone for many popular applications, such as YouTube, Netflix, Meta, and Airbnb, which involve millions of users and generate billions in revenue. As the networking infrastructure plays a pivotal role in determining the performance of HDC applications, understanding and optimizing their networking performance is critical. This thesis proposes and evaluates a constraint-based approach to characterize the networking performance of HDC applications. Through extensive evaluations conducted in both controlled settings and real-world case studies within a production HDC, I demonstrate the effectiveness of the constraint-based approach in handling the immense volume of performance data in HDCs, achieving substantial dimension reduction, and providing useful interpretability.

    An infrastructure for neural network construction

    After many years of research, the field of Artificial Intelligence is still searching for ways to construct a truly intelligent system. One criticism is that current models are not 'rich' or complex enough to operate in many and varied real-world situations. One way to address this criticism is to look at intelligent systems that already exist in nature and examine them to determine what complexities exist in those systems but not in current AI models. The research begins by presenting an overview of the current knowledge of Biological Neural Networks, as examples of intelligent systems existing in nature, and how they function. Artificial Neural Networks are then discussed, and the thesis examines their similarities and dissimilarities with their biological counterparts. The research suggests ways that Artificial Neural Networks may be improved by borrowing ideas from Biological Neural Networks. By introducing new concepts drawn from the biological realm, the construction of Artificial Neural Networks becomes more difficult. To address this difficulty, the thesis introduces the area of Evolutionary Algorithms as a way of constructing Artificial Neural Networks. An intellectual infrastructure is developed that incorporates concepts from Biological Neural Networks into current models of Artificial Neural Networks, and two models are developed to explore the concept that increased complexity can indeed add value to current models of Artificial Neural Networks. The outcome of the thesis shows that increased complexity can have benefits in terms of the learning speed of an Artificial Neural Network and in terms of robustness to damage.
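    As a generic, self-contained illustration of the idea of using an Evolutionary Algorithm to construct an Artificial Neural Network (this is not the thesis's own infrastructure or models), the sketch below evolves the weights of a fixed 2-2-1 feedforward network to solve XOR instead of training them by gradient descent. The population size, mutation rate, topology, and fitness definition are arbitrary choices for the example.

```python
# Illustrative neuroevolution sketch: evolve weights of a tiny feedforward
# network on XOR. All hyperparameters are arbitrary; not the thesis's models.
import math
import random

XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
N_WEIGHTS = 9  # 4 hidden weights + 2 hidden biases + 2 output weights + 1 output bias


def forward(w, x):
    h0 = math.tanh(w[0] * x[0] + w[1] * x[1] + w[2])
    h1 = math.tanh(w[3] * x[0] + w[4] * x[1] + w[5])
    return math.tanh(w[6] * h0 + w[7] * h1 + w[8])


def fitness(w):
    # Negative squared error over the XOR table: higher is better.
    return -sum((forward(w, x) - y) ** 2 for x, y in XOR)


def evolve(pop_size=60, generations=300, sigma=0.4):
    pop = [[random.uniform(-1, 1) for _ in range(N_WEIGHTS)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 4]                 # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            p = random.choice(parents)
            children.append([g + random.gauss(0, sigma) for g in p])  # Gaussian mutation
        pop = parents + children
    return max(pop, key=fitness)


best = evolve()
for x, y in XOR:
    print(x, y, round(forward(best, x), 2))
```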

    Scalable event-driven modelling architectures for neuromimetic hardware

    Neural networks present a fundamentally different model of computation from the conventional sequential digital model. Dedicated hardware may thus be more suitable for executing them. Given that there is no clear consensus on the model of computation in the brain, model flexibility is at least as important a characteristic of neural hardware as is performance acceleration. The SpiNNaker chip is an example of the emerging 'neuromimetic' architecture, a universal platform that specialises the hardware for neural networks but allows flexibility in model choice. It integrates four key attributes: native parallelism, event-driven processing, incoherent memory and incremental reconfiguration, in a system combining an array of general-purpose processors with a configurable asynchronous interconnect. Making such a device usable in practice requires an environment for instantiating neural models on the chip that allows the user to focus on model characteristics rather than on hardware details. The central part of this system is a library of predesigned, 'drop-in' event-driven neural components that specify their specific implementation on SpiNNaker. Three exemplar models: two spiking networks and a multilayer perceptron network, illustrate techniques that provide a basis for the library and demonstrate a reference methodology that can be extended to support third-party library components not only on SpiNNaker but on any configurable neuromimetic platform. Experiments demonstrate the capability of the library model to implement efficient on-chip neural networks, but also reveal important hardware limitations, particularly with respect to communications, that require careful design. The ultimate goal is the creation of a library-based development system that allows neural modellers to work in the high-level environment of their choice, using an automated tool chain to create the appropriate SpiNNaker instantiation. Such a system would enable the use of the hardware to explore abstractions of biological neurodynamics that underpin a functional model of neural computation.
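    To convey the event-driven processing style described above in a language-agnostic way (SpiNNaker's actual library components target its ARM cores, so this Python sketch is only an illustration), the example below models a leaky integrate-and-fire neuron whose state is updated lazily: rather than ticking every timestep, the membrane decay accumulated since the previous event is applied only when a new input spike arrives. All constants are illustrative.

```python
# Illustrative event-driven leaky integrate-and-fire update: the neuron state
# is advanced only when a spike event arrives. Constants are hypothetical.
import math
from dataclasses import dataclass


@dataclass
class LIFNeuron:
    tau_ms: float = 20.0        # membrane time constant
    v_thresh: float = 1.0       # firing threshold
    v: float = 0.0              # membrane potential
    last_event_ms: float = 0.0  # time of the last processed event

    def on_spike(self, t_ms: float, weight: float) -> bool:
        """Process one incoming spike event; return True if the neuron fires."""
        # Lazy decay: collapse the silent interval into a single exponential step.
        self.v *= math.exp(-(t_ms - self.last_event_ms) / self.tau_ms)
        self.last_event_ms = t_ms
        self.v += weight
        if self.v >= self.v_thresh:
            self.v = 0.0        # reset after firing
            return True
        return False


# Example: three closely spaced spikes eventually push the neuron over
# threshold, while a widely spaced one does not, because earlier input decays.
n = LIFNeuron()
for t, w in [(0.0, 0.4), (2.0, 0.4), (4.0, 0.4), (100.0, 0.4)]:
    print(t, n.on_spike(t, w))   # False, False, True, False
```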

    Intelligent Biosignal Processing in Wearable and Implantable Sensors

    This reprint provides a collection of papers illustrating the state-of-the-art of smart processing of data coming from wearable, implantable or portable sensors. Each paper presents the design, databases used, methodological background, obtained results, and their interpretation for biomedical applications. Illustrative examples include brain–machine interfaces for medical rehabilitation, the evaluation of sympathetic nerve activity, a novel automated diagnostic tool based on ECG data to diagnose COVID-19, machine learning-based hypertension risk assessment by means of photoplethysmography and electrocardiography signals, Parkinsonian gait assessment using machine learning tools, thorough analysis of compressive sensing of ECG signals, development of a nanotechnology application for decoding vagus-nerve activity, detection of liver dysfunction using a wearable electronic nose system, prosthetic hand control using surface electromyography, epileptic seizure detection using a CNN, and premature ventricular contraction detection using deep metric learning. Thus, this reprint presents significant clinical applications as well as valuable new research issues, providing current illustrations of this new field of research by addressing the promises, challenges, and hurdles associated with the synergy of biosignal processing and AI through 16 different pertinent studies. Covering a wide range of research and application areas, this book is an excellent resource for researchers, physicians, academics, and PhD or master students working on (bio)signal and image processing, AI, biomaterials, biomechanics, and biotechnology with applications in medicine.

    Human Brain/Cloud Interface

    The Internet comprises a decentralized global system that serves humanity’s collective effort to generate, process, and store data, most of which is handled by the rapidly expanding cloud. A stable, secure, real-time system may allow for interfacing the cloud with the human brain. One promising strategy for enabling such a system, denoted here as a “human brain/cloud interface” (“B/CI”), would be based on technologies referred to here as “neuralnanorobotics.” Future neuralnanorobotics technologies are anticipated to facilitate accurate diagnoses and eventual cures for the ∼400 conditions that affect the human brain. Neuralnanorobotics may also enable a B/CI with controlled connectivity between neural activity and external data storage and processing, via the direct monitoring of the brain’s ∼86 × 10⁹ neurons and ∼2 × 10¹⁴ synapses. Subsequent to navigating the human vasculature, three species of neuralnanorobots (endoneurobots, gliabots, and synaptobots) could traverse the blood–brain barrier (BBB), enter the brain parenchyma, ingress into individual human brain cells, and autoposition themselves at the axon initial segments of neurons (endoneurobots), within glial cells (gliabots), and in intimate proximity to synapses (synaptobots). They would then wirelessly transmit up to ∼6 × 10¹⁶ bits per second of synaptically processed and encoded human-brain electrical information via auxiliary nanorobotic fiber optics (30 cm³) with the capacity to handle up to 10¹⁸ bits/sec and provide rapid data transfer to a cloud-based supercomputer for real-time brain-state monitoring and data extraction. A neuralnanorobotically enabled human B/CI might serve as a personalized conduit, allowing persons to obtain direct, instantaneous access to virtually any facet of cumulative human knowledge. Other anticipated applications include myriad opportunities to improve education, intelligence, entertainment, traveling, and other interactive experiences. A specialized application might be the capacity to engage in fully immersive experiential/sensory experiences, including what is referred to here as “transparent shadowing” (TS). Through TS, individuals might experience episodic segments of the lives of other willing participants (locally or remote) to, hopefully, encourage and inspire improved understanding and tolerance among all members of the human family.