
    Provider-Controlled Bandwidth Management for HTTP-based Video Delivery

    Over the past few years, a revolution in video delivery technology has taken place as mobile viewers and over-the-top (OTT) distribution paradigms have significantly changed the landscape of video delivery services. For decades, high quality video was only available in the home via linear television or physical media. Though Web-based services brought video to desktop and laptop computers, the dominance of proprietary delivery protocols and codecs inhibited research efforts. The recent emergence of HTTP adaptive streaming protocols has prompted a re-evaluation of legacy video delivery paradigms and introduced new questions as to the scalability and manageability of OTT video delivery. This dissertation addresses the question of how to give content and network service providers the ability to monitor and manage large numbers of HTTP adaptive streaming clients in an OTT environment. Our early work focused on demonstrating the viability of server-side pacing schemes to produce an HTTP-based streaming server. We also investigated the ability of client-side pacing schemes to work with both commodity HTTP servers and our HTTP streaming server. Continuing our client-side pacing research, we developed our own client-side data proxy architecture, which was implemented on a variety of mobile devices and operating systems. We used the portable client architecture as a platform for investigating different rate adaptation schemes and algorithms. We then concentrated on evaluating the network impact of multiple adaptive bitrate clients competing for limited network resources, and on developing schemes for enforcing fair access to network resources. The main contribution of this dissertation is the definition of segment-level client and network techniques for enforcing class of service (CoS) differentiation between OTT HTTP adaptive streaming clients. We developed a segment-level network proxy architecture which works transparently with adaptive bitrate clients through the use of segment replacement. We also defined a segment-level rate adaptation algorithm which uses download aborts to enforce CoS differentiation across distributed independent clients. The segment-level abstraction more accurately models application-network interactions and highlights the difference between segment-level and packet-level time scales. Our segment-level CoS enforcement techniques provide a foundation for creating scalable managed OTT video delivery services
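
    The abstract's download-abort mechanism can be made concrete with a small sketch. The Python fragment below is only an illustration of the general idea, assuming a fixed bitrate ladder, a per-class weight, and a buffer safety margin; none of these names or thresholds come from the dissertation itself.

        def choose_bitrate(throughput_bps, cos_weight, ladder):
            """Pick the highest bitrate whose CoS-weighted share of the measured
            throughput can sustain real-time download; fall back to the lowest rung."""
            budget = throughput_bps * cos_weight
            feasible = [b for b in ladder if b <= budget]
            return max(feasible) if feasible else min(ladder)

        def should_abort(bytes_left, throughput_bps, buffer_s, safety_s=2.0):
            """Abort the in-flight segment if finishing the download would drain
            the playout buffer below a safety margin, forcing a downshift."""
            eta_s = bytes_left * 8 / max(throughput_bps, 1.0)   # seconds left to finish
            return eta_s > buffer_s - safety_s

    In this reading, a lower-priority client would simply be configured with a smaller cos_weight, so under congestion it aborts and downshifts earlier than a premium client, which is the kind of class-of-service differentiation the abstract targets.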

    Peer-to-Peer Video Content Delivery Optimization Service in a Distributed Network

    Dynamic Adaptive Streaming over HTTP (DASH) has yielded several improvements in video playback Quality of Experience (QoE) for end users in pre-fifth-generation (5G) networks. However, the cloud applications that 5G network architectures enable, combined with cloud computing infrastructures deployed at the edge of the network and in close vicinity to the end users, can significantly improve both the offered Quality of Service (QoS) and the QoE, thanks to the ability to cache video content at the network edge. Furthermore, in addition to edge caching and edge video streaming to the end users, the new video infrastructures can offer Device-to-Device (D2D) video content exchange and delivery. Taking advantage of these technologies, innovative video streaming services can be developed which not only improve the video playback QoE for the end users but also reduce video delivery costs and generated network traffic, which in turn means reduced end-to-end latency and reduced load on video content providers’ Content Delivery Networks (CDNs). In this thesis we study the impact that different combinations of video caching techniques, video segment request and streaming algorithms, and video resolution selection logics have on the QoS and the QoE of end users at the network edge, and how these can be used in developing an innovative Peer-to-Peer (P2P) video content delivery optimization service in a distributed network
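
    One plausible way to combine the D2D, edge-cache, and CDN tiers discussed above is a simple per-segment source-selection policy. The sketch below is an assumption-laden illustration; the tier ordering and the URI-like labels are invented, not taken from the thesis.

        def pick_source(segment_id, d2d_peers, edge_cache):
            """Prefer a nearby D2D peer that already holds the segment, then the
            edge cache, and only then fall back to the origin CDN."""
            for peer, cached_segments in d2d_peers.items():
                if segment_id in cached_segments:
                    return "d2d://" + peer
            if segment_id in edge_cache:
                return "edge://cache"
            return "cdn://origin"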

    Adaptive Resource Management Schemes for Web Services

    Web cluster systems provide cost-effective solutions when scalable and reliable web services are required. However, as the number of servers in a web cluster grows, the system incurs long and unpredictable delays to manage those servers. This study presents efficient management schemes for web cluster systems. First, we propose an efficient request distribution scheme for web cluster systems. Distributor-based systems forward user requests to a balanced set of waiting servers in complete transparency to the users. The policy employed in forwarding requests from the front-end distributor to the back-end servers plays an important role in the overall system performance. In this study, we present a proactive request distribution scheme (ProRD) to provide intelligent distribution at the distributor. Second, we propose heuristic memory management schemes built around a web prefetching scheme. For this, we design a Double Prediction-by-Partial-Match Scheme (DPS) that can be adapted to modern web frameworks. In addition, we present an Adaptive Rate Controller (ARC) that dynamically determines the prefetch rate depending on the memory status. To evaluate the prefetch gain in a server node, we implement an Apache module. Lastly, we design an adaptive web streaming system for wireless networks. The rapid growth of new wireless and mobile devices accessing the internet has contributed to a whole new level of heterogeneity in web streaming systems. In particular, in-home networks have also become more heterogeneous, with devices such as laptops, cell phones, and PDAs. In our study, a set-top box (STB) is the access point between the internet and a home network. We design an ActiveSTB which is capable of buffering and quality adaptation based on an estimate of the available bandwidth in the wireless LAN
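
    The Adaptive Rate Controller (ARC) idea, adjusting the prefetch rate to the current memory status, can be sketched as a simple watermark controller. The watermarks and step size below are illustrative assumptions, not the values or policy used in the study.

        def adapt_prefetch_rate(rate, free_mem_ratio,
                                low_water=0.2, high_water=0.5,
                                step=0.1, max_rate=1.0):
            """Raise the prefetch rate while free memory is plentiful and throttle
            it back as the free-memory ratio approaches the low-water mark."""
            if free_mem_ratio > high_water:
                return min(max_rate, rate + step)
            if free_mem_ratio < low_water:
                return max(0.0, rate - step)
            return rate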

    Chapter Measuring Energy

    Data centres are part of today's critical information and communication infrastructure, and the majority of business transactions as well as much of our digital life now depend on them. At the same time, data centres are large primary energy consumers, with energy consumed by IT and server room air conditioning equipment and also by general building facilities. In many data centres, IT equipment energy and cooling energy requirements are not always coordinated, so energy consumption is not optimised. Most data centres lack an integrated energy management system that jointly optimises and controls all of their energy-consuming equipment in order to reduce energy consumption and increase the usage of local renewable energy sources. In this chapter, the authors discuss the challenges of coordinated energy management in data centres and present a novel scalable, integrated energy management system architecture for data-centre-wide optimisation. A prototype of the system has been implemented, including joint workload and thermal management algorithms. The control algorithms are evaluated in an accurate simulation-based model of a real data centre. Results show significant energy savings potential, in some cases up to 40%, by integrating workload and thermal management
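
    To make the notion of jointly optimising workload placement and cooling concrete, the toy heuristic below assigns each job to the feasible server with the lowest estimated IT-plus-cooling energy. The field names, the 22 °C reference inlet temperature, and the cost model are invented for illustration and do not reproduce the chapter's algorithms.

        def place_job(job_load, servers):
            """Pick the feasible server with the lowest combined IT-plus-cooling
            energy estimate; return None if no server has enough headroom."""
            def energy_cost(s):
                it_power = s["idle_w"] + job_load * s["per_load_w"]
                # cooling penalty grows with the server's inlet temperature
                cooling = it_power * 0.05 * max(0.0, s["inlet_c"] - 22.0)
                return it_power + cooling
            feasible = [s for s in servers if s["headroom"] >= job_load]
            return min(feasible, key=energy_cost) if feasible else None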

    Advancing Operating Systems via Aspect-Oriented Programming

    Operating system kernels are among the most complex pieces of software in existence today. Maintaining the kernel code and developing new functionality is increasingly complicated, since the number of required features has risen significantly, leading to side effects that can be introduced inadvertently by changing a piece of code that belongs to a completely different context. Software developers try to modularize their code base into separate functional units. Some of the functionality or “concerns” required in a kernel, however, does not fit into the given modularization structure; this code may then be spread over the code base and its implementation tangled with code implementing different concerns. These so-called “crosscutting concerns” are especially difficult to handle since a change in a crosscutting concern implies that all relevant locations spread throughout the code base have to be modified. Aspect-Oriented Software Development (AOSD) is an approach to handle crosscutting concerns by factoring them out into separate modules. The “advice” code contained in these modules is woven into the original code base according to a pointcut description, a set of interaction points (joinpoints) with the code base. To be used in operating systems, AOSD requires tool support for the prevalent procedural programming style as well as support for weaving aspects. Many interactions in kernel code are dynamic, so in order to implement non-static behavior and improve performance, a dynamic weaver that deploys and undeploys aspects at system runtime is required. This thesis presents an extension of the “C” programming language to support AOSD. Based on this, two dynamic weaving toolkits – TOSKANA and TOSKANA-VM – are presented to permit dynamic aspect weaving in the monolithic NetBSD kernel as well as in a virtual-machine- and microkernel-based Linux kernel running on top of L4. Based on TOSKANA, applications for this dynamic aspect technology are discussed and evaluated. The thesis closes with a view on an aspect-oriented kernel structure that maintains coherency and handles crosscutting concerns using dynamic aspects while enhancing development methods through the use of domain-specific programming languages
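
    TOSKANA weaves advice into C kernel code; purely as a language-neutral analogy, the Python sketch below mimics the advice/joinpoint idea by wrapping a named function at runtime and returning an "undeploy" closure, mirroring the deploy/undeploy cycle a dynamic weaver needs. It is not the toolkit's actual mechanism.

        import functools

        def weave(target, joinpoint_name, before=None, after=None):
            """Dynamically deploy advice around a named function (the joinpoint)
            and return a closure that undeploys it again."""
            original = getattr(target, joinpoint_name)

            @functools.wraps(original)
            def woven(*args, **kwargs):
                if before:
                    before(args, kwargs)        # run "before" advice
                result = original(*args, **kwargs)
                if after:
                    after(result)               # run "after" advice
                return result

            setattr(target, joinpoint_name, woven)
            return lambda: setattr(target, joinpoint_name, original)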

    Proceedings of the 8th Python in Science conference

    The SciPy conference provides a unique opportunity to learn and affect what is happening in the realm of scientific computing with Python. Attendees have the opportunity to review the available tools and how they apply to specific problems. By providing a forum for developers to share their Python expertise with the wider commercial, academic, and research communities, this conference fosters collaboration and facilitates the sharing of software components, techniques and a vision for high level language use in scientific computing

    IO-Lite: a unified I/O buffering and caching system

    This article presents the design, implementation, and evaluation of IO-Lite, a unified I/O buffering and caching system for general-purpose operating systems. IO-Lite unifies all buffering and caching in the system, to the extent permitted by the hardware. In particular, it allows applications, the interprocess communication system, the file system, the file cache, and the network subsystem to safely and concurrently share a single physical copy of the data. Protection and security are maintained through a combination of access control and read-only sharing. IO-Lite eliminates all copying and multiple buffering of I/O data, and enables various cross-subsystem optimizations. Experiments with a Web server show performance improvements between 40% and 80% on real workloads as a result of IO-Lite
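
    IO-Lite itself is an operating-system facility, but the core idea of a single physical copy shared read-only across subsystems can be illustrated at user level. The Python sketch below is only an analogy: it hands the same immutable buffer to a "file cache" reference and to "network" chunks as zero-copy views.

        def share_without_copying(data: bytes, chunk_size=4096):
            """Expose one buffer to several consumers as read-only, zero-copy
            views instead of giving each subsystem its own copy."""
            view = memoryview(data)                 # no copy; read-only over bytes
            file_cache_ref = view                   # the "file cache" keeps a reference
            network_chunks = [view[i:i + chunk_size]
                              for i in range(0, len(view), chunk_size)]
            return file_cache_ref, network_chunks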

    Design And Implementation Of A Hardware Level Content Networking Front End Device

    The bandwidth and speed of network connections are continually increasing. The speed increase in network technology is set to soon outpace the speed increase in CMOS technology. This asymmetric growth is beginning to cause software applications that once worked with then-current levels of network traffic to flounder under the new, higher data rates. Processes that were once executed in software now have to be executed, partially if not wholly, in hardware. One such application that could benefit from hardware implementation is higher-layer routing. By allowing a network device to peer into higher layers of the OSI model, the device can scan for viruses, provide higher quality-of-service (QoS), and efficiently route packets. This thesis proposes an architecture for a device that will utilize hardware-level string matching to distribute incoming requests across a server farm. The proposed architecture is implemented in VHDL, synthesized, and laid out on an Altera FPGA
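
    The thesis does its string matching in VHDL on an FPGA; the Python sketch below only illustrates, in software, the layer-7 dispatch behaviour such a front-end device performs: scan the request for known byte patterns and route it to a matching server pool. The patterns and pool names are made up for the example.

        ROUTING_RULES = [
            (b"GET /video/", "video-pool"),
            (b"GET /images/", "static-pool"),
            (b"POST /api/", "app-pool"),
        ]

        def dispatch(request_bytes):
            """Choose a backend pool by scanning the request for known patterns."""
            for pattern, pool in ROUTING_RULES:
                if pattern in request_bytes:
                    return pool
            return "default-pool"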