11 research outputs found
On the Modelling of CDNaaS Deployment
With the increasing demand for over-the-top media content, understanding user perception and estimating Quality of Experience (QoE) have become a major business necessity for service providers. Online video broadcasting is a multifaceted procedure, and calculating the performance of the components that make up a streaming platform requires an overall understanding of the Content Delivery Network as a Service (CDNaaS) concept. Therefore, evaluating delivery quality and predicting user perception while accounting for Network Function Virtualization (NFV) and limited cloud resources requires a model relating these concepts. In this paper, a generalized mathematical model for calculating the success rate of the different tiers of an online video delivery system is presented. Furthermore, an algorithm that indicates the correct moment to switch between CDNs is provided to improve throughput efficiency while maintaining QoE and keeping cloud hosting costs as low as possible.
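As a rough illustration of such a switching rule (a sketch only: the QoE floor, sampling window, and per-CDN cost figures below are assumptions, not the paper's model), a controller can track each CDN's recent delivery success rate and switch away from the active CDN only when it drops below the floor, preferring the cheapest alternative that still meets it:

    from collections import deque

    class CdnSwitcher:
        """Toy QoE-aware CDN switching rule (illustrative only)."""

        def __init__(self, cdn_costs, qoe_floor=0.95, window=100):
            self.qoe_floor = qoe_floor  # minimum acceptable success rate
            self.costs = dict(cdn_costs)  # name -> hosting cost (assumed unit)
            self.samples = {c: deque(maxlen=window) for c in self.costs}
            self.active = min(self.costs, key=self.costs.get)  # start cheap

        def record(self, cdn, delivered_ok):
            self.samples[cdn].append(1.0 if delivered_ok else 0.0)

        def success_rate(self, cdn):
            s = self.samples[cdn]
            return sum(s) / len(s) if s else 1.0  # optimistic prior

        def maybe_switch(self):
            # Keep the active CDN while it meets the QoE floor; otherwise
            # move to the cheapest alternative that currently meets it.
            if self.success_rate(self.active) < self.qoe_floor:
                ok = [c for c in self.costs
                      if self.success_rate(c) >= self.qoe_floor]
                if ok:
                    self.active = min(ok, key=self.costs.get)
            return self.active

The paper's model additionally ties the success rate to the tiers of the delivery system and to the available virtualized resources; this sketch only shows the shape of the switch decision.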
Efficiency gains due to network function sharing in CDN-as-a-Service slicing scenarios
Proceedings of: IEEE 7th International Conference on Network Softwarization (NetSoft), 28 June-2 July 2021, Tokyo, Japan.
The consumption of video content currently dominates the traffic observed in ISP networks. That content is usually distributed via CDN caches, which store and deliver multimedia. The advent of virtualization is bringing attention to the CDN as a use case for virtualizing the cache function. In parallel, there is a trend towards sharing network infrastructure as a way for ISPs to reduce deployment costs. An interesting scenario therefore emerges when considering the possibility of sharing virtualized cache functions among ISPs that share a common physical infrastructure, particularly since those ISPs usually offer similar content catalogues to end users. This paper investigates through simulation the potential efficiency gains of sharing a virtual cache function compared with the classical approach of independent virtual caches operated per ISP.
This work has been partly funded by the project 5GROWTH (Grant Agreement no. 856709).
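A minimal sketch of the kind of comparison involved (the Zipf popularity model, catalogue size, and LRU policy here are assumptions for illustration, not the paper's simulation setup): serve two ISPs' request streams either through separate LRU caches or through one shared cache of the same total capacity, and compare hit ratios:

    import random
    from collections import OrderedDict

    def lru_hit_ratio(requests, capacity):
        """Replay a request stream against an LRU cache; return hit ratio."""
        cache, hits = OrderedDict(), 0
        for item in requests:
            if item in cache:
                hits += 1
                cache.move_to_end(item)
            else:
                cache[item] = True
                if len(cache) > capacity:
                    cache.popitem(last=False)  # evict least recently used
        return hits / len(requests)

    def zipf_stream(n_items, n_reqs, s=0.8, seed=0):
        """Zipf-like popularity over a shared content catalogue (assumed)."""
        rng = random.Random(seed)
        weights = [1.0 / (i + 1) ** s for i in range(n_items)]
        return rng.choices(range(n_items), weights=weights, k=n_reqs)

    # Two ISPs drawing from the same catalogue, as in the shared-catalogue case.
    isp_a = zipf_stream(10_000, 50_000, seed=1)
    isp_b = zipf_stream(10_000, 50_000, seed=2)

    per_isp = (lru_hit_ratio(isp_a, 500) + lru_hit_ratio(isp_b, 500)) / 2
    merged = [r for pair in zip(isp_a, isp_b) for r in pair]  # interleave
    shared = lru_hit_ratio(merged, 1000)  # same total capacity, one cache
    print(f"per-ISP caches: {per_isp:.3f}  shared cache: {shared:.3f}")

With overlapping catalogues the shared cache typically wins, since popular items are stored once rather than once per ISP; that is the efficiency the paper quantifies.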
A novel cost-based replica server placement for optimal service quality in cloud-based content delivery network
Replica server placement is one of the crucial concerns among the placement problems that arise from geographic diversity in content delivery networks (CDNs). A review of the existing literature shows that most studies address the placement problem in conventional CDNs rather than in cloud-based CDN architectures, and the few studies that consider replica selection in the cloud setting are still at an early stage of development. Moreover, such models are rarely benchmarked or practically assessed to prove their effectiveness. Hence, the proposed study introduces a novel computational framework for cloud-based CDNs that facilitates cost-effective replica server management for enhanced service delivery. Implemented using an analytical research methodology, the simulation results show that the proposed scheme offers reduced cost, reduced resource dependencies, lower latency, and faster processing times in contrast to existing models of replica server placement.
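The paper's framework itself is not reproduced here, but the general shape of a cost-based placement heuristic can be sketched (the site costs, latency matrix, and coverage target below are invented for illustration):

    def place_replicas(sites, latency, cost, clients, max_latency, budget):
        """Greedily add replica sites until every client is within
        max_latency of some replica, preferring sites that cover the
        most uncovered clients per unit cost (illustrative only)."""
        chosen, uncovered, spent = [], set(clients), 0.0
        while uncovered:
            def gain(s):
                covered = {c for c in uncovered if latency[s][c] <= max_latency}
                return len(covered) / cost[s] if covered else 0.0
            best = max((s for s in sites if s not in chosen),
                       key=gain, default=None)
            if best is None or gain(best) == 0.0 or spent + cost[best] > budget:
                break  # no affordable site improves coverage
            chosen.append(best)
            spent += cost[best]
            uncovered -= {c for c in uncovered
                          if latency[best][c] <= max_latency}
        return chosen, uncovered

    sites = ["fra", "lon"]
    latency = {"fra": {"c1": 20, "c2": 80}, "lon": {"c1": 90, "c2": 30}}
    cost = {"fra": 4.0, "lon": 3.0}
    print(place_replicas(sites, latency, cost, ["c1", "c2"],
                         max_latency=50, budget=10.0))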
Towards simulation and optimization of cache placement on large virtual Content Distribution Networks
IP video traffic is forecast to be 82% of all IP traffic by 2022. Traditionally, Content Distribution Networks (CDNs) were used extensively to meet the quality-of-service levels required for IP video services. To handle the dramatic growth in video traffic, CDN operators are migrating their infrastructure to the cloud and fog in order to leverage their greater availability and flexibility. For hyper-scale deployments, energy consumption, cache placement, and resource availability can be analyzed using simulation in order to improve resource utilization and performance. Recently, a discrete-time simulator for modelling hierarchical virtual CDNs (vCDNs) was proposed, with reduced memory requirements and increased performance on multi-core systems, to cater to the scale and complexity of these networks.
The first iteration of this discrete-time simulator had a number of limitations affecting its accuracy and applicability: it supported only tree-based topologies, results were computed per level, and requests for the same content differed only in time duration. In this paper, we present an improved simulation framework that (a) supports graph-based network topologies, (b) reconstitutes requests so that their requirements can be differentiated, and (c) computes statistics per site and network metrics per link, improving granularity and parallel performance. Moreover, we also propose a two-phase optimization scheme that uses simulation outputs to guide the search for optimal cache placements. To evaluate our proposal, we simulate a vCDN network based on real traces obtained from the BT vCDN infrastructure and analyze its performance and scalability.
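In outline, a two-phase simulate-then-optimise scheme alternates a scoring pass with a placement update. The sketch below is a generic local-search variant under assumed interfaces; the framework's real simulator and search strategy are richer than the toy 'simulate' callable used here:

    import random

    def optimise_placement(sites, contents, slots, simulate, iters=200, seed=0):
        """Phase 1: score a placement via simulation. Phase 2: propose a
        single content swap at one site and keep it only if the simulated
        cost does not worsen. 'simulate': placement -> cost (assumed)."""
        rng = random.Random(seed)
        placement = {s: set(rng.sample(contents, slots)) for s in sites}
        best_cost = simulate(placement)
        for _ in range(iters):
            site = rng.choice(sites)
            out = rng.choice(sorted(placement[site]))
            cand = rng.choice([c for c in contents if c not in placement[site]])
            placement[site].remove(out)
            placement[site].add(cand)
            cost = simulate(placement)
            if cost <= best_cost:
                best_cost = cost  # keep the improving swap
            else:
                placement[site].remove(cand)
                placement[site].add(out)  # revert
        return placement, best_cost

    # Toy stand-in for the simulator: cost = contents cached nowhere.
    contents, sites = list(range(50)), ["s1", "s2", "s3"]
    def toy_simulate(p):
        stored = set().union(*p.values())
        return sum(1 for c in contents if c not in stored)
    print(optimise_placement(sites, contents, 10, toy_simulate)[1])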
Managing Distributed Cloud Applications and Infrastructure
The emergence of the Internet of Things (IoT), combined with greater heterogeneity not only in cloud computing architectures but across the cloud-to-edge continuum, is introducing new challenges for managing applications and infrastructure across this continuum. The scale and complexity are now so great that it is no longer realistic for IT teams to manually foresee potential issues and manage the dynamism and dependencies across an increasingly interdependent chain of service provision. This Open Access Pivot explores these challenges and offers a solution for the intelligent and reliable management of physical infrastructure and the optimal placement of applications for the provision of services on distributed clouds. The book provides a conceptual reference model for reliable capacity provisioning for distributed clouds and discusses how data analytics and machine learning, application and infrastructure optimization, and simulation can deliver quality-of-service requirements cost-efficiently in this complex feature space. These are illustrated through a series of case studies in cloud computing, telecommunications, big data analytics, and smart cities.
The cost of quality of service: SLA-aware VNF placement and routing using column generation
In the Network Function Virtualization (NFV) paradigm, Internet Service Providers (ISPs) provide network services to customers by routing and processing traffic through an ordered sequence of Virtual Network Functions (VNFs). The Quality of Service (QoS) depends on the quantity and relative placement of the VNFs, and is quantified by a set of Key Performance Indicators (KPIs) in a Service Level Agreement (SLA): a contract reached between the ISP and the customer. In order to provide the service in line with the SLA, ISPs must consider the SLA constraints directly when placing VNFs and provisioning network services in the physical network infrastructure. In this paper, we present a VNF placement and routing algorithm based on the column generation method, which iterates between generating improving paths and optimising the placement of the VNFs given the generated paths. SLA constraints are modelled as soft constraints whose violation incurs a cost, the sum of which is minimised. Unlike prior approaches, we consider the throughput, latency, and availability SLA constraints together. We validate our approach against a greedy heuristic algorithm on a multi-tiered Radio Access Network (RAN) and show that the column generation method provides solutions with significantly lower SLA violation cost than the greedy approach, while still being able to solve problems of a practical size. We also highlight that satisfying QoS can significantly increase the number of host nodes required, so there is a trade-off between QoS and operational cost that should be explored further.
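The soft-constraint costing can be illustrated directly from the abstract's description: each KPI shortfall contributes a weighted penalty, and the optimiser minimises the total. The KPI names, units, and weights below are placeholders, not values from the paper:

    def sla_violation_cost(kpis, sla, weights):
        """Weighted sum of SLA violations: throughput and availability
        penalised when too low, latency when too high (illustrative)."""
        cost = 0.0
        cost += weights["throughput"] * max(0.0, sla["throughput"] - kpis["throughput"])
        cost += weights["latency"] * max(0.0, kpis["latency"] - sla["latency"])
        cost += weights["availability"] * max(0.0, sla["availability"] - kpis["availability"])
        return cost

    # Example: a placement meeting latency but missing throughput/availability.
    print(sla_violation_cost(
        kpis={"throughput": 9.5, "latency": 18.0, "availability": 0.995},
        sla={"throughput": 10.0, "latency": 20.0, "availability": 0.999},
        weights={"throughput": 5.0, "latency": 2.0, "availability": 1000.0}))

In the column generation loop itself, a restricted master problem optimises over the paths generated so far while a pricing step searches for a new path that can reduce this cost; the sketch above covers only the violation-cost objective.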
SLEPX: An Efficient Lightweight Cipher for Visual Protection of Scalable HEVC Extension
This paper proposes a lightweight cipher scheme aimed at the scalable extension of the High Efficiency Video Coding (HEVC) codec, referred to as the Scalable HEVC (SHVC) standard. This stream cipher, the Symmetric Cipher for Lightweight Encryption based on Permutation and EXclusive OR (SLEPX), applies Selective Encryption (SE) to suitable coding syntax elements in the SHVC layers. This is achieved with minimal computational complexity and delay. The algorithm also conserves most SHVC functionalities, i.e. preservation of bit length, decoder format compliance, and error resilience. For comparative analysis, results were taken and compared with other state-of-the-art ciphers, i.e. Exclusive-OR (XOR) and the Advanced Encryption Standard (AES). The performance of SLEPX is also compared with existing video SE solutions to confirm the efficiency of the adopted scheme. The experimental results demonstrate that SLEPX is as secure as AES in terms of visual protection, while its computational efficiency is comparable with that of a basic XOR cipher. Visual quality assessment, security analysis, and extensive cryptanalysis (based on the numerical values of selected bin-strings) also showed the effectiveness of SLEPX's visual protection scheme for SHVC compared with previously employed cryptographic techniques.
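As a generic illustration of length-preserving selective encryption built from a permutation plus XOR (this is not the published SLEPX construction: the key-stream derivation and permutation here are assumptions, and random.Random is not cryptographically secure):

    import hashlib
    import random

    def encrypt_binstring(bits, key, nonce):
        """Permute a selected bin-string, then XOR it with a key stream.
        Output length equals input length, so the surrounding bitstream
        layout (format compliance) is preserved. Illustrative only."""
        rng = random.Random(hashlib.sha256(key + nonce).digest())
        perm = list(range(len(bits)))
        rng.shuffle(perm)
        keystream = [rng.getrandbits(1) for _ in bits]
        return [bits[i] ^ k for i, k in zip(perm, keystream)]

    def decrypt_binstring(cipher, key, nonce):
        rng = random.Random(hashlib.sha256(key + nonce).digest())
        perm = list(range(len(cipher)))
        rng.shuffle(perm)
        keystream = [rng.getrandbits(1) for _ in cipher]
        plain = [0] * len(cipher)
        for j, dst in enumerate(perm):
            plain[dst] = cipher[j] ^ keystream[j]  # undo XOR, then permutation
        return plain

    bits = [1, 0, 1, 1, 0, 0, 1, 0]  # e.g. bins of selected syntax elements
    enc = encrypt_binstring(bits, b"key", b"nonce-frame-0")
    assert decrypt_binstring(enc, b"key", b"nonce-frame-0") == bits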
Automatic virtual network embedding: A deep reinforcement learning approach with graph convolutional networks
This is the author accepted manuscript. The final version is available from IEEE via the DOI in this record.
Virtual network embedding arranges virtual network services onto substrate network components. The performance of embedding algorithms determines the effectiveness and efficiency of a virtualized network, making it a critical part of network virtualization technology. To achieve better performance, the algorithm needs to automatically detect the network status, which is complicated and varies over time, and to dynamically provide solutions that best fit the current network status. However, most existing algorithms fail to provide automatic embedding solutions in an acceptable running time. In this paper, we combine deep reinforcement learning with a novel neural network structure based on graph convolutional networks, and propose a new and efficient algorithm for automatic virtual network embedding. In addition, a parallel reinforcement learning framework is used in training along with a newly designed multi-objective reward function, which has proven beneficial to the proposed algorithm for automatic embedding of virtual networks. Extensive simulation results under different scenarios show that our algorithm achieves the best performance on most metrics compared with existing state-of-the-art solutions, with up to 39.6% and 70.6% improvement in acceptance ratio and average revenue, respectively. Moreover, the results also demonstrate that the proposed solution possesses good robustness.
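The graph-convolutional building block named in the abstract can be sketched in a few lines; the substrate features, single layer, and randomly initialised weights below are assumptions for shape illustration, not the authors' trained architecture:

    import numpy as np

    def gcn_layer(adj, feats, weight):
        """One graph convolution: add self-loops, symmetrically normalise
        the adjacency, then apply a linear transform and ReLU."""
        a_hat = adj + np.eye(adj.shape[0])
        d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
        return np.maximum(0.0, d_inv_sqrt @ a_hat @ d_inv_sqrt @ feats @ weight)

    # Toy substrate: 4 nodes in a ring; features = [free CPU, free bandwidth].
    adj = np.array([[0, 1, 0, 1],
                    [1, 0, 1, 0],
                    [0, 1, 0, 1],
                    [1, 0, 1, 0]], dtype=float)
    feats = np.array([[0.9, 0.2], [0.4, 0.8], [0.7, 0.7], [0.1, 0.3]])

    rng = np.random.default_rng(0)
    h = gcn_layer(adj, feats, rng.normal(size=(2, 8)))  # node embeddings
    scores = h @ rng.normal(size=(8,))  # scoring head (assumed)
    policy = np.exp(scores) / np.exp(scores).sum()  # softmax over hosts
    print(policy)

In the paper, such a policy is trained with parallel deep reinforcement learning against a multi-objective reward (trading off metrics such as acceptance and revenue); here the weights are random purely to show the data flow.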
Investigations into the Risk Minimisation Technique Stealth Computing for Distributed Data-Processing Software Applications with User-Controllable Assurable Properties
The security and reliability of applications that process sensitive data can be significantly increased, and controlled by the user, through their protected relocation into the cloud using a combination of target-dependent data encoding, continuous multiple service selection, service-dependent optimised data distribution, and encoding-dependent algorithms. Combining these techniques into an application-integrated stealth protection layer is a necessary foundation for constructing secure applications with assurable security properties within a correspondingly adapted software development process. A simplified sketch of the encoding-and-distribution idea is given after the table of contents below.
Contents:
1 Problem Statement
1.1 Introduction
1.2 Fundamental Considerations
1.3 Problem Definition
1.4 Classification and Delimitation
2 Approach and Problem-Solving Methodology
2.1 Assumptions and Contributions
2.2 Scientific Methods
2.3 Structure of the Thesis
3 Stealth Encoding for Secured Data Use
3.1 Data Encoding
3.2 Data Distribution
3.3 Semantic Linking of Distributed Encoded Data
3.4 Processing of Distributed Encoded Data
3.5 Summary of Contributions
4 Stealth Concepts for Reliable Services and Applications
4.1 Overview of Platform Concepts and Services
4.2 Network Multiplexer Interface
4.3 File Storage Interface
4.4 Database Interface
4.5 Stream Storage Service Interface
4.6 Event Processing Interface
4.7 Service Integration
4.8 Application Development
4.9 Platform-Equivalent Cloud Integration of Secure Services and Applications
4.10 Summary of Contributions
5 Scenarios and Application Fields
5.1 Online File Storage with Search Function
5.2 Personal Data Analysis
5.3 Value-Added Services for the Internet of Things
6 Validation
6.1 Experiment Infrastructure
6.2 Experimental Validation of Data Encoding
6.3 Experimental Validation of Data Distribution
6.4 Experimental Validation of Data Processing
6.5 Functionality and Properties of the Storage Service Connection
6.6 Functionality and Properties of the Storage Service Integration
6.7 Functionality and Properties of Data Management
6.8 Functionality and Properties of Data Stream Processing
6.9 Integrated Scenario: Online File Storage
6.10 Integrated Scenario: Personal Data Analysis
6.11 Integrated Scenario: Mobile Applications for the Internet of Things
7 Summary
7.1 Summary of Contributions
7.2 Critical Discussion and Assessment
7.3 Outlook
Indexes
List of Tables
List of Figures
Listings
Bibliography
Symbols and Notation
Software Contributions for Cloud-Native Applications
Repositories with Experiment Data
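A minimal sketch of the dispersal idea behind the stealth protection layer (an XOR-based split of a record across independent storage services; the provider names are placeholders, and the dissertation's target-dependent encodings are considerably more general):

    import os

    def split_xor(data: bytes, n_shares: int):
        """Encode data into n shares such that all shares are required to
        reconstruct it; any proper subset reveals nothing (one-time-pad
        style). Illustrative only."""
        shares = [bytearray(os.urandom(len(data))) for _ in range(n_shares - 1)]
        last = bytearray(data)
        for share in shares:
            for i in range(len(data)):
                last[i] ^= share[i]
        return [bytes(s) for s in shares] + [bytes(last)]

    def join_xor(shares):
        out = bytearray(len(shares[0]))
        for share in shares:
            for i in range(len(out)):
                out[i] ^= share[i]
        return bytes(out)

    # One share per service, so no single provider sees the plaintext.
    providers = ["cloud_a", "cloud_b", "cloud_c"]  # hypothetical services
    stored = dict(zip(providers, split_xor(b"sensitive record", len(providers))))
    assert join_xor(list(stored.values())) == b"sensitive record"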