    Management Application Interactions in Software-Based Networks

    To support the next wave of networking technologies and services, which will likely involve heterogeneous resources and requirements, rich management functionality will need to be deployed. This raises questions regarding the interoperability of such functionality in an environment where potentially interacting applications operate in parallel. Interactions can cause configuration instabilities and, subsequently, network performance degradation, especially in the presence of conflicting objectives. Detecting and handling these interactions is therefore essential. In this article we present an overview of the interaction management problem, a critical issue in software-based networks. We review and compare existing solutions proposed in the literature and discuss key challenges toward the development of a generic framework for the automated and real-time management of these interactions.
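
    As a rough illustration (not drawn from the article itself), the sketch below flags one simple class of interaction: two management applications requesting different values for the same configuration parameter on the same resource. All application names and parameters are hypothetical, and real frameworks must also reason about indirect interactions.

```python
# Hypothetical illustration only: flag a conflict when two applications
# request different values for the same (resource, parameter) pair.
from collections import defaultdict

def detect_conflicts(requests):
    """requests: list of (app, resource, parameter, value) tuples.

    Returns the (resource, parameter) pairs targeted by more than one
    application with differing values.
    """
    seen = defaultdict(set)  # (resource, parameter) -> {(app, value)}
    for app, resource, parameter, value in requests:
        seen[(resource, parameter)].add((app, value))
    return {key: apps for key, apps in seen.items()
            if len({value for _, value in apps}) > 1}

requests = [
    ("load_balancer", "switch1", "port2.weight", 0.7),
    ("energy_saver",  "switch1", "port2.weight", 0.1),  # conflicting objective
    ("monitoring",    "switch1", "sampling_rate", 100),
]
print(detect_conflicts(requests))
# Only ('switch1', 'port2.weight') is reported, with both requesting apps.
```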

    Analysis of prefetching methods from a graph-theoretical perspective

    It is important to highlight the role Content Distribution Networks (CDNs) play in rapidly growing Internet topologies. They are responsible for serving the lion's share of Internet content to end users by replicating it from the origin server and placing it on a caching server closer to them. Probably the biggest issues CDNs have to deal with revolve around deciding which content gets prefetched, in which surrogate/caching server it is placed, and how to allocate storage to each server efficiently. We focus on the content selection/prefetching problem, extending the work of Sidiropoulos et al. (World Wide Web Journal, vol. 11, 2008, pp. 39-70). Specifically, we try to determine how their clustering algorithm performs in specific environments compared with an approach used to solve the surveillance game on graphs, as studied by Fomin et al. (Proc. 6th Int’l Conf. on FUN with Algorithms, 2012, pp. 166-176) and Giroire et al. (Journal of Theoretical Computer Science, vol. 584, 2015, pp. 131-143). Along the way, we provide another definition of cluster cohesion that accounts for edge cases. Finally, we define an original problem: partitioning a graph into a predefined number of disjoint clusters with optimal average cohesion.
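
    The work's exact cohesion definition is not reproduced in this abstract, so the sketch below uses intra-cluster edge density as a stand-in measure and computes the average cohesion of a set of disjoint clusters; it is purely illustrative.

```python
# Illustrative sketch only: intra-cluster edge density as a stand-in for
# the cohesion measure, averaged over a fixed partition into clusters.
from itertools import combinations

def cohesion(cluster, edges):
    """Fraction of possible intra-cluster edges that are present.

    Singleton clusters are given cohesion 1.0 so edge cases are well defined.
    """
    nodes = list(cluster)
    if len(nodes) < 2:
        return 1.0
    possible = len(nodes) * (len(nodes) - 1) / 2
    present = sum(1 for u, v in combinations(nodes, 2)
                  if (u, v) in edges or (v, u) in edges)
    return present / possible

def average_cohesion(clusters, edges):
    """Mean cohesion over a set of disjoint clusters."""
    return sum(cohesion(c, edges) for c in clusters) / len(clusters)

# Toy graph: two triangles joined by a single bridge edge.
edges = {(1, 2), (2, 3), (1, 3), (4, 5), (5, 6), (4, 6), (3, 4)}
clusters = [{1, 2, 3}, {4, 5, 6}]
print(average_cohesion(clusters, edges))  # 1.0 for this partition
```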

    Self-Adaptive Decentralized Monitoring in Software-Defined Networks

    The Software-Defined Networking (SDN) paradigm can allow network management solutions to automatically and frequently reconfigure network resources. When developing SDN-based management architectures, it is of paramount importance to design a monitoring system that can provide timely and consistent updates to heterogeneous management applications. To support such applications operating with low latency requirements, the monitoring system should scale with increasing network size and provide precise network views with minimum overhead on the available resources. In this paper we present a novel, self-adaptive, decentralized framework for resource monitoring in SDN. Our framework enables accurate statistics to be collected with limited burden on the network resources. This is realized through a self-tuning, adaptive monitoring mechanism that automatically adjusts its settings based on the traffic dynamics. We evaluate our proposal based on a realistic use case scenario, where a content distribution service and an on-demand gaming platform are deployed within an ISP network. The results show that reduced monitoring latencies are obtained with the proposed framework, thus enabling shorter reconfiguration control loops. In addition, the proposed adaptive monitoring method achieves significant gain in terms of monitoring overhead, while preserving the performance of the services considered.
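
    The abstract does not detail the self-tuning mechanism, so the following is a generic adaptive-polling heuristic rather than the paper's method: shorten the polling interval when the monitored statistic is volatile and back off when it is stable. The thresholds and bounds are hypothetical.

```python
# Hypothetical adaptive polling-interval heuristic: poll more often when the
# monitored statistic changes quickly, back off when it is stable.

def next_interval(current_interval, prev_value, new_value,
                  min_interval=1.0, max_interval=60.0, threshold=0.1):
    """Return the next polling interval in seconds."""
    if prev_value == 0:
        change = 1.0 if new_value != 0 else 0.0
    else:
        change = abs(new_value - prev_value) / abs(prev_value)
    if change > threshold:
        # Traffic is volatile: halve the interval to track it more closely.
        return max(min_interval, current_interval / 2)
    # Traffic is stable: back off to reduce monitoring overhead.
    return min(max_interval, current_interval * 1.5)

interval, last = 10.0, 100.0
for sample in [102.0, 180.0, 178.0, 179.0]:
    interval = next_interval(interval, last, sample)
    last = sample
    print(f"sampled {sample}, next poll in {interval:.1f}s")
```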

    Quality-driven management of video streaming services in segment-based cache networks

    Low complexity content replication through clustering in Content-Delivery Networks

    Contemporary Content Delivery Networks (CDNs) handle a vast number of content items. At such a scale, replication schemes require a significant amount of time to calculate and realize cache updates, and hence they are impractical in highly dynamic environments. This paper introduces cluster-based replication, whereby content items are organized in clusters according to a set of features given by the cache/network management entity. Each cluster is treated as a single item with certain attributes, e.g., size and popularity, and is then replicated as a whole in network caches so as to minimize overall network traffic. Clustering items reduces replication complexity; hence it enables faster and more frequent cache updates, and it facilitates more accurate tracking of content popularity. However, clustering introduces some performance loss because replication of clusters is more coarse-grained than replication of individual items. This tradeoff can be addressed through proper selection of the number and composition of clusters. Since the exact optimal number of clusters cannot be derived analytically, an efficient approximation method is proposed. Extensive numerical evaluations with time-varying content popularity scenarios show that the proposed approach reduces core network traffic while being robust to errors in popularity estimation.
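
    The following is an illustrative simplification rather than the paper's replication scheme: items are grouped into a fixed number of clusters by a popularity feature, and each cluster is then summarized by its aggregate size and popularity so it can be treated as a single replication unit.

```python
# Illustrative sketch, not the paper's algorithm: bucket items into a fixed
# number of clusters by popularity rank, then aggregate per-cluster attributes.
from dataclasses import dataclass

@dataclass
class Item:
    name: str
    size: int          # e.g., MB
    popularity: float  # share of requests

def cluster_items(items, num_clusters):
    """Group items into contiguous popularity-ranked chunks."""
    ranked = sorted(items, key=lambda i: i.popularity, reverse=True)
    chunk = -(-len(ranked) // num_clusters)  # ceiling division
    return [ranked[i:i + chunk] for i in range(0, len(ranked), chunk)]

def cluster_attributes(cluster):
    """Summarize a cluster as a single pseudo-item (size, popularity)."""
    return {
        "size": sum(i.size for i in cluster),
        "popularity": sum(i.popularity for i in cluster),
        "members": [i.name for i in cluster],
    }

catalogue = [Item("a", 700, 0.40), Item("b", 500, 0.25),
             Item("c", 300, 0.20), Item("d", 900, 0.10),
             Item("e", 200, 0.05)]
for c in cluster_items(catalogue, num_clusters=2):
    print(cluster_attributes(c))
```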

    Hybrid multi-tenant cache management for virtualized ISP networks

    In recent years, Internet Service Providers (ISPs) have started to deploy Telco Content Delivery Networks (Telco CDNs) to reduce the pressure on their network resources. The deployment of Telco CDNs results in reduced ISP bandwidth utilization and improved service quality by bringing the content closer to the end-users. Furthermore, virtualization of storage and networking resources can open up new business models by enabling the ISP to simultaneously lease its Telco CDN infrastructure to multiple third parties. Previous work has shown that multi-tenant proactive resource allocation and content placement can significantly reduce the load on the ISP network. However, the performance of this approach strongly depends on the prediction accuracy for future content requests. In this paper, a hybrid cache management approach is proposed where proactive content placement and traditional reactive caching strategies are combined. In this way, content placement and server selection can be optimized across tenants and users, based on predicted content popularity and the geographical distribution of requests, while simultaneously providing reactivity to unexpected changes in the request pattern. Based on a Video-on-Demand (VoD) production request trace, it is shown that the total hit ratio can be increased by 43% while using 5% less bandwidth compared to the traditional Least Recently Used (LRU) caching strategy. Furthermore, the proposed approach requires 39% less migration overhead compared to the proactive placement approach we previously proposed in Claeys et al. (2014b) and achieves a hit ratio increase of 19% and bandwidth usage reduction of 7% in the evaluated VoD scenarios and topology.
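
    As a minimal, hypothetical sketch of the hybrid idea (not the paper's optimization model), the cache below pins a proactively placed set of predicted-popular items and handles everything else with a reactive LRU portion.

```python
# Minimal sketch of the hybrid idea: part of the cache is pre-populated from
# predicted popularity, the rest is managed reactively with plain LRU.
from collections import OrderedDict

class HybridCache:
    def __init__(self, capacity, predicted_popular, proactive_share=0.5):
        proactive_slots = int(capacity * proactive_share)
        # Pin the top predicted items; they are never evicted reactively.
        self.pinned = set(predicted_popular[:proactive_slots])
        self.lru = OrderedDict()
        self.lru_capacity = capacity - proactive_slots

    def request(self, item):
        """Return True on a cache hit, False on a miss."""
        if item in self.pinned:
            return True
        if item in self.lru:
            self.lru.move_to_end(item)
            return True
        # Miss: fetch from origin and cache reactively.
        self.lru[item] = True
        if len(self.lru) > self.lru_capacity:
            self.lru.popitem(last=False)  # evict least recently used
        return False

cache = HybridCache(capacity=4, predicted_popular=["a", "b", "c"])
hits = sum(cache.request(x) for x in ["a", "x", "b", "x", "y", "x"])
print(f"{hits} hits out of 6 requests")  # 4 hits with this toy trace
```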

    Accurate and Resource-Efficient Monitoring for Future Networks

    Monitoring functionality is a key component of any network management system. It is essential for profiling network resource usage, detecting attacks, and capturing the performance of a multitude of services using the network. Traditional monitoring solutions operate on long timescales, producing periodic reports which are mostly used for manual and infrequent network management tasks. However, these practices have recently been questioned by the advent of Software Defined Networking (SDN). By empowering management applications with the right tools to perform automatic, frequent, and fine-grained network reconfigurations, SDN has made these applications more dependent than before on the accuracy and timeliness of monitoring reports. As a result, monitoring systems are required to collect considerable amounts of heterogeneous measurement data, process them in real-time, and expose the resulting knowledge in short timescales to network decision-making processes. Satisfying these requirements is extremely challenging given today’s larger network scales, massive and dynamic traffic volumes, and the stringent constraints on time availability and hardware resources. This PhD thesis tackles this important challenge by investigating how an accurate and resource-efficient monitoring function can be realised in the context of future, software-defined networks. Novel monitoring methodologies, designs, and frameworks are provided in this thesis, which scale with increasing network sizes and automatically adjust to changes in the operating conditions. These achieve the goal of efficient measurement collection and reporting, lightweight measurement-data processing, and timely monitoring knowledge delivery.