
    Connectivity Management for HetNets based on the Principles of Autonomicity and Context-Awareness

    In the Future Internet environment, the fifth generation (5G) of networks has already begun to be established. 5G networks exploit higher frequencies to provide greater bandwidth, while supporting extremely high densities of base stations and mobile devices, forming a heterogeneous network environment that aims to meet performance requirements in terms of the lowest possible overall delay and energy consumption. Efficient connectivity management in such a heterogeneous network environment remains an open problem: user mobility must be supported across networks of different technologies and tiers, addressing issues of complexity and interoperability, satisfying the requirements of running applications and user preferences, while simultaneously managing multiple network interfaces. The collection, modeling, inference and distribution of context information related to sensor data will play a critical role in this challenge. In light of the above, it is appropriate to exploit the principles of context-awareness and autonomicity, as they enable network entities to be aware of themselves and their environment, and to self-manage their functions in order to achieve specific goals. In addition, an accurate quantitative evaluation of the performance of connectivity-management solutions for heterogeneous networks, which employ different context-awareness strategies, is needed; this requires a methodology that is comprehensive and generally applicable so as to cover different approaches, since the existing methodologies in the literature are relatively limited. The study as a whole focuses on two thematic axes.
In the first thematic part of the thesis, the role of context-awareness and autonomicity in connectivity management is analyzed, and a taxonomy and classification framework is developed that extends the current literature. Based on this framework, solutions for mobility support in heterogeneous networks that can be considered to exhibit context-awareness and self-management characteristics were classified and evaluated. In addition, we studied whether the network-selection decisions made by each solution are effective, and we proposed ways to optimize the existing architectures, as well as directions for the further development of related future solutions. In the second thematic part of the thesis, a flexible analytical methodology was developed that includes all the factors that can contribute to the overall delay, taking into account signaling, processing overhead and congestion (queueing analysis), extending the current literature. The methodology is comprehensive, while at the same time offering closed-form solutions and the ability to adapt to different approaches. To demonstrate this, we applied the methodology to two solutions with different context-awareness strategies (one reactive and one proactive). For both approaches, the analytical results were verified by simulations, confirming the effectiveness and accuracy of the analytical methodology. Furthermore, the proactive approach was shown to achieve better performance in terms of overall delay, while requiring significantly fewer processing resources, with potential benefits for overall energy consumption and for operational and capital expenditures (OPEX and CAPEX).
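The delay decomposition described above (signaling plus processing plus queueing) can be sketched in a few lines. This is a minimal illustration, not the thesis's actual analytical model: it assumes an M/M/1 queueing term and purely illustrative parameter values, and models the proactive scheme simply as needing fewer signaling messages on the critical path.

```python
# Hedged sketch: total delay = signaling + processing + queueing (M/M/1).
# All parameter names and values below are illustrative assumptions.

def mm1_queueing_delay(arrival_rate, service_rate):
    """Mean sojourn time W = 1 / (mu - lambda) of an M/M/1 queue."""
    if arrival_rate >= service_rate:
        raise ValueError("queue is unstable: lambda must be < mu")
    return 1.0 / (service_rate - arrival_rate)

def total_delay(signaling_msgs, per_msg_delay, processing_delay,
                arrival_rate, service_rate):
    """Sum the three contributing factors named in the abstract."""
    return (signaling_msgs * per_msg_delay
            + processing_delay
            + mm1_queueing_delay(arrival_rate, service_rate))

# A proactive scheme that prepares the target network in advance typically
# places fewer signaling messages on the critical path than a reactive one:
reactive = total_delay(signaling_msgs=8, per_msg_delay=0.005,
                       processing_delay=0.010,
                       arrival_rate=50, service_rate=100)
proactive = total_delay(signaling_msgs=3, per_msg_delay=0.005,
                        processing_delay=0.010,
                        arrival_rate=50, service_rate=100)
```

Under these illustrative numbers the proactive variant yields a lower total delay, consistent with the qualitative finding reported in the abstract.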

    Deadline constrained prediction of job resource requirements to manage high-level SLAs for SaaS cloud providers

    For a non-IT expert using services in the Cloud, it is more natural to negotiate the QoS with the provider in terms of service-level metrics (e.g. job deadlines) instead of resource-level metrics (e.g. CPU MHz). However, current infrastructures only support resource-level metrics (e.g. CPU share and memory allocation), and there is no well-known mechanism to translate from service-level metrics to resource-level metrics. Moreover, the lack of precise information regarding the requirements of the services leads to inefficient resource allocation: providers usually allocate whole resources to prevent SLA violations. Accordingly, we propose a novel mechanism to overcome this translation problem using an online prediction system which includes a fast analytical predictor and an adaptive machine-learning-based predictor. We also show how a deadline scheduler could use these predictions to help providers make the most of their resources. Our evaluation shows: i) that the fast algorithms are able to make predictions with 11% and 17% relative error for CPU and memory respectively; ii) the potential of using accurate predictions in the scheduling compared to simple yet well-known schedulers. Preprint
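A sketch of how predicted resource requirements could feed a deadline scheduler. This is not the paper's actual scheduler: the job and host shapes, capacities and the earliest-deadline-first placement rule are illustrative assumptions, showing only the core idea of allocating predicted shares rather than whole machines.

```python
# Hedged sketch: EDF placement using predicted CPU/memory shares instead of
# reserving whole machines. All structures and values are illustrative.
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    deadline: float   # seconds from now
    pred_cpu: float   # predicted CPU share (fraction of one core)
    pred_mem: float   # predicted memory in GB

@dataclass
class Host:
    cpu_free: float = 1.0
    mem_free: float = 4.0

def schedule(jobs, hosts):
    """Place jobs in deadline order on the first host with enough predicted
    headroom; return the jobs that could not be placed anywhere."""
    unplaced = []
    for job in sorted(jobs, key=lambda j: j.deadline):
        for host in hosts:
            if host.cpu_free >= job.pred_cpu and host.mem_free >= job.pred_mem:
                host.cpu_free -= job.pred_cpu
                host.mem_free -= job.pred_mem
                break
        else:
            unplaced.append(job)
    return unplaced
```

Because only the predicted shares are reserved, several jobs can share a host that a whole-machine allocation would dedicate to one job; the returned list makes SLA-risky jobs explicit so the provider can react before the deadline passes.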

    InterCloud: Utility-Oriented Federation of Cloud Computing Environments for Scaling of Application Services

    Cloud computing providers have set up several data centers at different geographical locations over the Internet in order to optimally serve the needs of their customers around the world. However, existing systems do not support mechanisms and policies for dynamically coordinating load distribution among different Cloud-based data centers in order to determine the optimal location for hosting application services to achieve reasonable QoS levels. Further, Cloud computing providers are unable to predict the geographic distribution of users consuming their services, hence the load coordination must happen automatically, and the distribution of services must change in response to changes in the load. To counter this problem, we advocate the creation of a federated Cloud computing environment (InterCloud) that facilitates just-in-time, opportunistic, and scalable provisioning of application services, consistently achieving QoS targets under variable workload, resource and network conditions. The overall goal is to create a computing environment that supports dynamic expansion or contraction of capabilities (VMs, services, storage, and database) for handling sudden variations in service demands. This paper presents the vision, challenges, and architectural elements of InterCloud for utility-oriented federation of Cloud computing environments. The proposed InterCloud environment supports scaling of applications across multiple vendor clouds. We have validated our approach by conducting a set of rigorous performance evaluation studies using the CloudSim toolkit. The results demonstrate that the federated Cloud computing model has immense potential, as it offers significant performance gains with regard to response time and cost saving under dynamic workload scenarios. Comment: 20 pages, 4 figures, 3 tables, conference paper
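The load-coordination idea behind such a federation can be sketched as a broker that redirects each new request to the member data center with the lowest estimated response time under its current load. This is not InterCloud's actual brokering protocol: the load-sensitive latency model and the data-center names are illustrative assumptions.

```python
# Hedged sketch: pick the federation member with the lowest estimated
# response time. The latency model and all inputs are illustrative.

def estimated_response_time(base_ms, utilization):
    """Simple load-sensitive estimate: the base latency inflated as the
    data center approaches saturation (utilization -> 1)."""
    if not 0.0 <= utilization < 1.0:
        raise ValueError("utilization must be in [0, 1)")
    return base_ms / (1.0 - utilization)

def pick_datacenter(datacenters):
    """datacenters maps name -> (base latency in ms, current utilization);
    return the name minimizing the estimated response time."""
    return min(datacenters,
               key=lambda name: estimated_response_time(*datacenters[name]))
```

Because the estimate depends on current utilization, a heavily loaded but nearby data center can lose to a more distant, lightly loaded one, which is exactly the dynamic redistribution the abstract argues for.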

    Autonomic Management in a Distributed Storage System

    This thesis investigates the application of autonomic management to a distributed storage system. Effects on performance and resource consumption were measured in experiments, which were carried out in a local area test-bed. The experiments were conducted with components of one specific distributed storage system, but seek to be applicable to a wide range of such systems, in particular those exposed to varying conditions. The perceived characteristics of distributed storage systems depend on their configuration parameters and on various dynamic conditions. For a given set of conditions, one specific configuration may be better than another with respect to measures such as resource consumption and performance. Here, configuration parameter values were set dynamically and the results compared with a static configuration. It was hypothesised that under non-changing conditions this would allow the system to converge on a configuration that was more suitable than any that could be set a priori. Furthermore, the system could react to a change in conditions by adopting a more appropriate configuration. Autonomic management was applied to the peer-to-peer (P2P) and data retrieval components of ASA, a distributed storage system. The effects were measured experimentally for various workload and churn patterns. The management policies and mechanisms were implemented using a generic autonomic management framework developed during this work. The experimental evaluations of autonomic management show promising results, and suggest several future research topics. The findings of this thesis could be exploited in building other distributed storage systems that focus on harnessing storage on user workstations, since these are particularly likely to be exposed to varying, unpredictable conditions. Comment: PhD Thesis, University of St Andrews, 2009. Supervisor: Graham Kirby
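The dynamic-configuration idea can be sketched as a small control loop that observes a condition and nudges one configuration parameter accordingly. This is not ASA's actual autonomic framework: the parameter (a hypothetical P2P maintenance interval), the churn thresholds and the step sizes are all illustrative assumptions.

```python
# Hedged sketch of an autonomic policy: adapt a (hypothetical) P2P
# maintenance interval to the observed churn rate. Thresholds, bounds and
# step sizes are illustrative assumptions, not ASA's actual policy.

class MaintenancePolicy:
    def __init__(self, interval_s=60.0, lo=5.0, hi=600.0):
        self.interval_s = interval_s   # current configuration value
        self.lo, self.hi = lo, hi      # bounds keeping the value sane

    def observe(self, churn_events_per_min):
        """One pass of the control loop: high churn -> maintain more often;
        low churn -> back off to save bandwidth and CPU; otherwise hold."""
        if churn_events_per_min > 10:
            self.interval_s = max(self.lo, self.interval_s / 2)
        elif churn_events_per_min < 2:
            self.interval_s = min(self.hi, self.interval_s * 2)
        return self.interval_s
```

Under steady conditions the loop converges on and holds a value suited to the observed workload, while a change in churn moves the configuration to a more appropriate setting, mirroring the hypothesis stated in the abstract.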

    Cloud-assisted body area networks: state-of-the-art and future challenges

    Body area networks (BANs) are emerging as an enabling technology for many human-centered application domains such as health care, sport, fitness, wellness, ergonomics, emergency, safety, security, and sociality. A BAN, which basically consists of wireless wearable sensor nodes usually coordinated by a static or mobile device, is mainly exploited to monitor a single assisted living. Data generated by a BAN can be processed in real time by the BAN coordinator and/or transmitted to a server side for online/offline processing and long-term storage. A network of BANs worn by a community of people produces a large amount of contextual data that requires a scalable and efficient approach for elaboration and storage. Cloud computing can provide a flexible storage and processing infrastructure to perform both online and offline analysis of body sensor data streams. In this paper, we motivate the introduction of Cloud-assisted BANs along with the main challenges that need to be addressed for their development and management. The current state of the art is overviewed and framed according to the main requirements for effective Cloud-assisted BAN architectures. Finally, relevant open research issues in terms of efficiency, scalability, security, interoperability, prototyping, and dynamic deployment and management are discussed.

    Contributions to topology discovery, self-healing and VNF placement in software-defined and virtualized networks

    The evolution of information and communication technologies (e.g. cloud computing, the Internet of Things (IoT) and 5G, among others) has enabled a large market of applications and network services for a massive number of users connected to the Internet. Achieving high programmability while decreasing complexity and costs has become an essential aim of networking research due to the ever-increasing pressure generated by these applications and services. However, meeting these goals is an almost impossible task using traditional IP networks. Software-Defined Networking (SDN) is an emerging network architecture that could address the needs of service providers and network operators. This new technology consists of decoupling the control plane from the data plane, enabling the consolidation of control functions on a centralized or distributed platform. It also creates an abstraction between the network infrastructure and network applications, which allows for designing more flexible and programmable networks. Recent trends of increased user demands, the explosion of Internet traffic and diverse service requirements have further driven the interest in the potential capabilities of SDN to enable the introduction of new protocols and traffic management models. This doctoral research is focused on improving high-level policies and control strategies, which are becoming increasingly important given the limitations of current solutions for large-scale SDN environments. Specifically, the three largest challenges addressed in the development of this thesis are related to the processes of topology discovery, fault recovery and Virtual Network Function (VNF) placement in software-defined and virtualized networks.
These challenges led to the design of a set of effective techniques, ranging from network protocols to optimal and heuristic algorithms, intended to solve existing problems and contribute to the deployment and adoption of such programmable networks. For the first challenge, this work presents a novel protocol that, unlike existing approaches, enables a distributed layer 2 discovery without the need for previous IP configurations or controller knowledge of the network. By using this mechanism, the SDN controller can discover the network view without incurring scalability issues, while taking advantage of the shortest control paths toward each switch. Moreover, this novel approach achieves noticeable improvement with respect to state-of-the-art techniques. To address the resilience concern of SDN, we propose a self-healing mechanism that recovers the control plane connectivity in SDN-managed environments without overburdening the controller performance. The main idea underlying this proposal is to enable real-time recovery of control paths in the face of failures without the intervention of a controller. Obtained results show that the proposed approach recovers the control topology efficiently in terms of time and message load over a wide range of generated networks. The third contribution made in this thesis combines topology knowledge with bin packing techniques in order to efficiently place the required VNFs. A low-complexity online heuristic algorithm was developed as a suitable solution for dynamic infrastructures. Extensive simulations, using network topologies representative of different scales, validate the good performance of the proposed approaches regarding the number of required instances and the delay among deployed functions. Additionally, the proposed heuristic algorithm improves the execution times by five orders of magnitude compared to the optimal formulation of this problem. Postprint (published version)
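The bin-packing view of VNF placement can be sketched with an online first-fit rule: assign each arriving demand to the first running instance with spare capacity and open a new instance only when none fits. This is not the thesis's actual heuristic (which also uses topology knowledge); capacities and demand values are illustrative assumptions.

```python
# Hedged sketch: online first-fit bin packing for VNF placement. Each "bin"
# is a running VNF instance of fixed capacity; keeping the bin count low
# keeps the number of deployed instances low. Values are illustrative.

def first_fit_place(demands, capacity=1.0):
    """Return a list of instances, each a list of the demands placed on it."""
    instances = []   # demands assigned to each instance
    spare = []       # remaining capacity of each instance
    for d in demands:
        if d > capacity:
            raise ValueError("demand exceeds instance capacity")
        for i, s in enumerate(spare):
            if s >= d:           # first instance with enough headroom wins
                instances[i].append(d)
                spare[i] -= d
                break
        else:                    # no instance fits: open a new one
            instances.append([d])
            spare.append(capacity - d)
    return instances
```

First-fit is a classic online bin-packing heuristic with a known constant-factor bound on the number of bins versus the optimum, which is what makes it attractive for dynamic infrastructures where demands arrive one at a time.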

    Riding out of the storm: How to deal with the complexity of grid and cloud management

    Over the last decade, Grid computing paved the way for a new level of large scale distributed systems. This infrastructure made it possible to securely and reliably take advantage of widely separated computational resources that are part of several different organizations. Resources can be incorporated into the Grid, building a theoretical virtual supercomputer. In time, cloud computing emerged as a new type of large scale distributed system, inheriting and expanding the expertise and knowledge that have been obtained so far. Some of the main characteristics of Grids naturally evolved into clouds, others were modified and adapted and others were simply discarded or postponed. Regardless of these technical specifics, both Grids and clouds together can be considered as one of the most important advances in large scale distributed computing of the past ten years; however, this step in distributed computing has come along with a completely new level of complexity. Grid and cloud management mechanisms play a key role, and correct analysis and understanding of the system behavior are needed. Large scale distributed systems must be able to self-manage, incorporating autonomic features capable of controlling and optimizing all resources and services. Traditional distributed computing management mechanisms analyze each resource separately and adjust specific parameters of each one of them. When trying to adapt the same procedures to Grid and cloud computing, the vast complexity of these systems can make this task extremely complicated. But the complexity of large scale distributed systems could be merely a matter of perspective. It could be possible to understand the Grid or cloud behavior as a single entity, instead of a set of resources. This abstraction could provide a different understanding of the system, describing large scale behavior and global events that probably would not be detected analyzing each resource separately.
In this work we define a theoretical framework that combines both ideas, multiple resources and single entity, to develop large scale distributed systems management techniques aimed at system performance optimization, increased dependability and Quality of Service (QoS). The resulting synergy could be the key to addressing the most important difficulties of Grid and cloud management.

    Self-management for large-scale distributed systems

    Autonomic computing aims at making computing systems self-managing by using autonomic managers in order to reduce obstacles caused by management complexity. This thesis presents results of research on self-management for large-scale distributed systems. This research was motivated by the increasing complexity of computing systems and their management. In the first part, we present our platform, called Niche, for programming self-managing component-based distributed applications. In our work on Niche, we have faced and addressed the following four challenges in achieving self-management in a dynamic environment characterized by volatile resources and high churn: resource discovery, robust and efficient sensing and actuation, management bottleneck, and scale. We present results of our research on addressing the above challenges. Niche implements the autonomic computing architecture, proposed by IBM, in a fully decentralized way. Niche supports a network-transparent view of the system architecture, simplifying the design of distributed self-management. Niche provides a concise and expressive API for self-management. The implementation of the platform relies on the scalability and robustness of structured overlay networks. We proceed by presenting a methodology for designing the management part of a distributed self-managing application. We define design steps that include partitioning of management functions and orchestration of multiple autonomic managers. In the second part, we discuss robustness of management and data consistency, which are necessary in a distributed system. Dealing with the effect of churn on management increases the complexity of the management logic and thus makes its development time-consuming and error-prone. We propose the abstraction of Robust Management Elements, which are able to heal themselves under continuous churn.
Our approach is based on replicating a management element using finite state machine replication with a reconfigurable replica set. Our algorithm automates the reconfiguration (migration) of the replica set in order to tolerate continuous churn. For data consistency, we propose a majority-based distributed key-value store supporting multiple consistency levels that is based on a peer-to-peer network. The store enables the tradeoff between high availability and data consistency. Using majority allows avoiding potential drawbacks of a master-based consistency control, namely, a single point of failure and a potential performance bottleneck. In the third part, we investigate self-management for Cloud-based storage systems with the focus on elasticity control using elements of control theory and machine learning. We have conducted research on a number of different designs of an elasticity controller, including a state-space feedback controller and a controller that combines feedback and feedforward control. We describe our experience in designing an elasticity controller for a Cloud-based key-value store using a state-space model that makes it possible to trade off performance for cost. We describe the steps in designing an elasticity controller. We continue by presenting the design and evaluation of ElastMan, an elasticity controller for Cloud-based elastic key-value stores that combines feedforward and feedback control.
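The majority-based consistency idea can be sketched with quorums: with N replicas, choosing a write quorum W and a read quorum R such that W + R > N guarantees that every read quorum overlaps the latest write quorum, so a versioned read always sees the newest value without any master replica. This is not the thesis's actual store: the replica layout, the global version counter and the quorum choice below are illustrative assumptions.

```python
# Hedged sketch of majority-quorum reads and writes. With n replicas and
# W = R = n//2 + 1, any read quorum intersects the last write quorum, so the
# highest version seen is the latest committed value. Illustrative only.
import random

class MajorityStore:
    def __init__(self, n=5):
        self.replicas = [dict() for _ in range(n)]   # key -> (version, value)
        self.quorum = n // 2 + 1                     # majority: W = R
        self.clock = 0                               # simplistic version counter

    def put(self, key, value):
        self.clock += 1
        # acknowledge once a majority of replicas holds the new version
        for rep in random.sample(self.replicas, self.quorum):
            rep[key] = (self.clock, value)

    def get(self, key):
        # read a majority; quorum intersection guarantees at least one
        # sampled replica holds the newest version of the key
        hits = [rep[key] for rep in random.sample(self.replicas, self.quorum)
                if key in rep]
        return max(hits)[1] if hits else None
```

Because correctness rests on the quorum arithmetic rather than on any particular replica, there is no single point of failure and no master to become a bottleneck, which is the advantage over master-based control that the abstract highlights.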