58 research outputs found

    Evaluating Application-Layer Traffic Optimization Cost Metrics for P2P Multimedia Streaming


    Dynamic content delivery infrastructure deployment using network cloud resources

    Millions of people value the Internet for the content and applications it makes available. To cope with the increasing end-user demand for popular and often high-volume content, e.g., high-definition video or online social networks, massively distributed Content Delivery Infrastructures (CDIs) have been deployed. However, a highly competitive market requires CDIs to constantly investigate new ways to reduce operational costs and improve delivery performance. Today, CDIs mainly suffer from limited agility in server deployment and are largely unaware of network conditions and precise end-user locations, information that would improve the efficiency and performance of content delivery. While newly emerging architectures try to address these challenges, none has so far considered collaboration, although ISPs have this information readily at hand. In this thesis, we assess the impact of collaboration on content delivery. We first evaluate the design and operating space of today's content delivery landscape and quantify the possible benefits of collaboration by analyzing operational traces from a European Tier-1 ISP. We find that collaboration when assigning end-users to servers highly localizes CDI traffic and improves end-user performance. Moreover, we find significant path diversity, which enables new mechanisms for traffic management. We propose two key enablers, namely in-network server allocation and informed user-server assignment, to facilitate CDI-ISP collaboration, and present our system design, called NetPaaS (Network Platform as a Service), that realizes them. In-network server allocation offers agile server allocation close to the ISP's end-users by leveraging virtualization technology and cloud-style resources in the network. Informed user-server assignment enables ISPs to take network bottlenecks and precise end-user locations into account and to recommend the best possible candidate server for individual end-users to CDIs. NetPaaS therefore provides an additional degree of freedom to scale up or shrink the CDI footprint on demand. To quantify the potential of collaboration with NetPaaS, we perform a first-of-its-kind evaluation based on operational traces from the largest commercial CDI and a European Tier-1 ISP. Our findings reveal that dynamic server allocation based on accurate end-user locations and network conditions enables the CDI to better cope with increasing and highly volatile demand for content and improves end-user performance. Moreover, recommendations from NetPaaS result in better utilization of the existing server infrastructure and enable the ISP to better manage traffic flows inside its network. We conclude that NetPaaS improves the performance and efficiency of content delivery architectures while potentially reducing the required capital investment and operational costs. Moreover, NetPaaS enables the ISP to achieve traffic engineering goals and therefore offers a true win-win situation to both CDIs and ISPs.
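
    The informed user-server assignment enabler can be illustrated with a minimal sketch: an ISP-side service ranks a CDI's candidate servers for a given end-user using network state the CDI cannot see, such as path cost and bottleneck utilization. The class names, scoring formula, and weights below are hypothetical illustrations, not the NetPaaS interface described in the thesis.

```python
# Minimal sketch of informed user-server assignment (illustrative only):
# an ISP-side service ranks a CDI's candidate servers for one end-user
# using network information the CDI cannot observe on its own.
from dataclasses import dataclass


@dataclass
class Candidate:
    server_id: str
    path_cost: float        # e.g. IGP metric or hop count from user to server
    bottleneck_util: float  # utilization (0..1) of the most loaded link on the path


def rank_candidates(candidates: list[Candidate],
                    util_weight: float = 10.0) -> list[Candidate]:
    """Return candidates ordered from best to worst.

    Lower path cost is better; heavily utilized paths are penalized so that
    the assignment both localizes traffic and steers around bottlenecks.
    """
    def score(c: Candidate) -> float:
        return c.path_cost + util_weight * c.bottleneck_util

    return sorted(candidates, key=score)


if __name__ == "__main__":
    candidates = [
        Candidate("edge-pop-1", path_cost=2, bottleneck_util=0.9),
        Candidate("edge-pop-2", path_cost=5, bottleneck_util=0.3),
        Candidate("central-dc", path_cost=12, bottleneck_util=0.1),
    ]
    best = rank_candidates(candidates)[0]
    print(f"recommended server for this user: {best.server_id}")
```

    In this toy example the nearby but congested edge server loses to a slightly more distant, lightly loaded one, which is the kind of trade-off only the ISP's view of the network makes visible.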

    Scalable Streaming Multimedia Delivery using Peer-to-Peer Communication


    Actas da 10ª Conferência sobre Redes de Computadores (Proceedings of the 10th Conference on Computer Networks)

    Universidade do Minho, CCTC, Centro Algoritmi, Cisco Systems, IEEE Portugal Section

    Sustainable Smart Cities and Smart Villages Research

    There is ever more research on smart cities and new interdisciplinary approaches proposed for the study of smart cities. At the same time, problems pertinent to communities inhabiting rural areas are being addressed as part of discussions in contiguous fields of research, be it environmental studies, sociology, or agriculture. Even though rural areas and countryside communities have previously been a subject of concern for robust policy frameworks, such as the European Union’s Cohesion Policy and Common Agricultural Policy, the concept of ‘the village’ has arguably been largely absent from the debate. As a result, when advances in sophisticated information and communication technology (ICT) led to the emergence of a rich body of research on smart cities, the application and usability of ICT in the context of a village remained under-discussed in the literature. Against this backdrop, this volume delivers on four objectives. It delineates the conceptual boundaries of the concept of ‘smart village’. It highlights in which ways ‘smart village’ is distinct from ‘smart city’. It examines in which ways smart cities research can enrich smart villages research. It sheds light on the smart village research agenda as it unfolds in European and global contexts.

    Full Issue


    QoS-aware Resource-utilisation Self-adaptive (QRS) Framework for Distributed Data Stream Management Systems

    The last decade has witnessed a vast number of Big Data applications in science and industry alike. Such applications generate large amounts of streaming data and real-time event-based information that must be analysed under specific quality-of-service (QoS) constraints and within extremely low latencies. Many distributed data stream processing approaches follow a best-effort QoS principle and lack the capability to adapt dynamically to fluctuations in data input rates. Most proposed solutions either drop some of the input data (load shedding) or degrade the level of QoS the system provides. Another approach is to limit the data ingestion rate using techniques such as backpressure heartbeats, which can stall worker nodes and delay output. Such approaches are not suitable for mission-critical applications such as critical infrastructure surveillance, monitoring and signalling, vital health-care monitoring, and military command-and-control streaming applications. This research presents a novel QoS-aware, Resource-utilisation Self-adaptive (QRS) Framework for managing data stream processing systems. The framework proposes a comprehensive usage model that combines proactive operations with simultaneous prompt actions. The prompt actions instantly collect and analyse performance and QoS metrics alongside the running data streams, ensuring that data does not lose its current value, while the proactive operations construct prediction models that anticipate QoS violations and performance degradation in the system. The model triggers the decision processes needed to dynamically tune resources or adopt a new scheduling strategy. A proof-of-concept model was built that accurately represents the working conditions of a distributed data stream management ecosystem, and the proposed framework was validated and verified. Several of the framework's components were fully implemented on the prevalent distributed stream processing system Apache Storm. The framework predicts the system's capacity to handle the data load and input rate with up to 81% accuracy, rising to 100% when anomaly detection techniques are incorporated. Moreover, the framework compares favourably with Storm's default round-robin and resource-aware schedulers: it handles high data rates better by re-balancing the topology and re-scheduling resources based on the prediction models, well ahead of any congestion or QoS degradation. A minimal sketch of the proactive side of such a controller is given below.
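
    The sketch assumes a hypothetical controller that forecasts the near-future input rate from recent samples and requests a re-schedule before the predicted load exceeds a capacity headroom. The threshold, window size, and rebalance hook are illustrative assumptions, not the QRS framework's or Apache Storm's actual interfaces.

```python
# Illustrative sketch of a proactive, QoS-aware scaling trigger:
# predict the near-future input rate from recent samples and ask for a
# re-schedule before the predicted load exceeds a capacity headroom.
from collections import deque
from statistics import mean


class AdaptiveController:
    def __init__(self, capacity_tps: float, window: int = 10,
                 headroom: float = 0.8):
        self.capacity_tps = capacity_tps      # tuples/s the current allocation sustains
        self.headroom = headroom              # act once predicted load exceeds 80% of capacity
        self.samples = deque(maxlen=window)   # recent input-rate samples

    def observe(self, input_rate_tps: float) -> None:
        self.samples.append(input_rate_tps)

    def predicted_rate(self) -> float:
        """Naive one-step forecast: recent mean plus the latest trend."""
        if len(self.samples) < 2:
            return self.samples[-1] if self.samples else 0.0
        trend = self.samples[-1] - self.samples[-2]
        return mean(self.samples) + trend

    def should_rebalance(self) -> bool:
        return self.predicted_rate() > self.headroom * self.capacity_tps


controller = AdaptiveController(capacity_tps=50_000)
for rate in (20_000, 28_000, 37_000, 47_000):   # ramping input rate
    controller.observe(rate)
    if controller.should_rebalance():
        # In a real deployment this would invoke the stream processor's
        # rebalance/re-scheduling mechanism rather than print a message.
        print("predicted overload: trigger topology rebalance / add executors")
        break
```

    Acting on the forecast rather than on the current measurement is what lets such a controller re-schedule ahead of congestion instead of shedding load or throttling ingestion after QoS has already degraded.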

    Managing Distributed Cloud Applications and Infrastructure

    The emergence of the Internet of Things (IoT), combined with greater heterogeneity not only in cloud computing architectures but across the cloud-to-edge continuum, is introducing new challenges for managing applications and infrastructure across this continuum. The scale and complexity are such that it is no longer realistic for IT teams to manually foresee potential issues and manage the dynamism and dependencies across an increasingly inter-dependent chain of service provision. This Open Access Pivot explores these challenges and offers a solution for the intelligent and reliable management of physical infrastructure and the optimal placement of applications for the provision of services on distributed clouds. The book provides a conceptual reference model for reliable capacity provisioning for distributed clouds and discusses how data analytics and machine learning, application and infrastructure optimization, and simulation can deliver quality-of-service requirements cost-efficiently in this complex feature space. These are illustrated through a series of case studies in cloud computing, telecommunications, big data analytics, and smart cities.