8 research outputs found

    Leveraging Self-Adaptive Dynamic Software Architecture

    Software systems are growing more complex due to technological innovation and the integration of businesses. There is an ever-increasing need for changes in software systems; however, incorporating changes is time-consuming and costly. Self-adaptation is therefore a desirable feature of any software system, giving it the ability to adapt to changes without manual reengineering and software updates. In other words, a robust, self-adaptive dynamic software architecture is the need of the hour. Unfortunately, the existing solutions for self-adaptation require human intervention and have limitations. Architectures like Rainbow have achieved self-adaptation; however, they need to be improved in terms of quality-of-service analysis and in mining knowledge and reusing it to make well-informed decisions when choosing adaptation strategies. In this paper we propose and implement an Enhanced Self-Adaptive Dynamic Software Architecture (ESADSA), which provides automatic self-adaptation based on the runtime requirements of the system. It decouples self-adaptation from the target system through a loosely coupled approach while preserving the cohesion of the target system. We built a prototype application that runs in a distributed environment as a proof of concept. The empirical results reveal a significant leap forward in improving dynamic self-adaptive software architecture.
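    The decoupled adaptation idea can be sketched in a few lines; the class names, metric, and scaling rule below are hypothetical illustrations for this listing, not ESADSA's actual design:

```python
# Minimal sketch of a self-adaptation loop decoupled from the managed
# system. All names (TargetSystem, AdaptationEngine, the scale-out rule)
# are hypothetical illustrations, not the paper's API.

class TargetSystem:
    """The managed system: exposes metrics but contains no adaptation logic."""
    def __init__(self):
        self.response_time_ms = 120.0
        self.replicas = 1

    def metrics(self):
        return {"response_time_ms": self.response_time_ms,
                "replicas": self.replicas}

    def apply(self, action):
        if action == "scale_out":
            self.replicas += 1
            self.response_time_ms /= 2  # toy model of the scaling effect

class AdaptationEngine:
    """Loosely coupled controller: observes metrics, picks a strategy."""
    def __init__(self, sla_ms):
        self.sla_ms = sla_ms

    def decide(self, metrics):
        if metrics["response_time_ms"] > self.sla_ms:
            return "scale_out"
        return None

system = TargetSystem()
engine = AdaptationEngine(sla_ms=100)
for _ in range(3):  # adaptation loop iterations
    action = engine.decide(system.metrics())
    if action:
        system.apply(action)
print(system.metrics())  # response time now within the 100 ms SLA
```

    The controller holds no reference to the target's internals, so either side can evolve independently, which is the essence of the loosely coupled approach the abstract describes.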

    Comparing time series with machine learning-based prediction approaches for violation management in cloud SLAs

    In cloud computing, service level agreements (SLAs) are legal agreements between a service provider and a consumer that contain a list of obligations and commitments which need to be satisfied by both parties during the transaction. From a service provider's perspective, a violation of such a commitment leads to penalties in terms of money and reputation and thus has to be effectively managed. In the literature, this problem has been studied under the domain of cloud service management. One aspect of managing cloud services after the formation of SLAs is predicting the future Quality of Service (QoS) of cloud parameters to ascertain whether they will lead to violations. Various approaches in the literature perform this task using different prediction methods, but none of them study the accuracy of each. This matters because the results of each prediction approach vary according to the pattern of the input data, and an incorrect choice of prediction algorithm could lead to service violations and penalties. In this paper, we test and report the accuracy of time-series and machine-learning-based prediction approaches. In each category, we test many different techniques and rank them according to their accuracy in predicting future QoS. Our analysis helps a cloud service provider choose an appropriate prediction approach (whether time-series or machine-learning based) and, further, to apply the best method for the input data pattern, obtaining accurate predictions and better managing SLAs to avoid violation penalties.
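    The ranking idea can be illustrated with a minimal sketch; the two toy predictors and the synthetic QoS series below are assumptions for illustration, not the paper's experiment:

```python
# Rank two simple predictors -- a moving-average forecaster (time-series
# style) and a lag-1 linear fit (machine-learning style) -- by mean
# absolute error on a synthetic QoS availability series.
import statistics

qos = [0.90, 0.92, 0.91, 0.93, 0.95, 0.94, 0.96, 0.97, 0.96, 0.98]
train, test = qos[:7], qos[7:]

def moving_average_forecast(history, steps, window=3):
    h = list(history)
    out = []
    for _ in range(steps):
        pred = sum(h[-window:]) / window  # average of the last `window` points
        out.append(pred)
        h.append(pred)
    return out

def lag1_linear_forecast(history, steps):
    # Fit y[t] = a*y[t-1] + b by least squares on the training pairs.
    xs, ys = history[:-1], history[1:]
    mx, my = statistics.mean(xs), statistics.mean(ys)
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    out, last = [], history[-1]
    for _ in range(steps):
        last = a * last + b
        out.append(last)
    return out

def mae(pred, actual):
    return sum(abs(p - a) for p, a in zip(pred, actual)) / len(actual)

scores = {
    "moving_average": mae(moving_average_forecast(train, len(test)), test),
    "lag1_linear": mae(lag1_linear_forecast(train, len(test)), test),
}
ranking = sorted(scores, key=scores.get)  # most accurate first
print(ranking)
```

    The point the abstract makes is visible even in this toy: which method ranks first depends entirely on the shape of the input series, so the ranking must be recomputed per workload rather than fixed once.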

    An autonomic prediction suite for cloud resource provisioning

    One of the challenges of cloud computing is effective resource management due to its auto-scaling feature. Prediction techniques have been proposed for cloud computing to improve cloud resource management. This paper proposes an autonomic prediction suite to improve the prediction accuracy of the auto-scaling system in the cloud computing environment. To this end, the paper proposes that the prediction accuracy of predictive auto-scaling systems will increase if an appropriate time-series prediction algorithm is selected based on the incoming workload pattern. To test this proposition, a comprehensive theoretical investigation is provided on different risk minimization principles and their effects on the accuracy of time-series prediction techniques in the cloud environment. In addition, experiments are conducted to empirically validate the theoretical assessment of the hypothesis. Based on the theoretical and experimental results, this paper designs a self-adaptive prediction suite.
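    Pattern-based algorithm selection can be sketched roughly as follows; the thresholds and the pattern-to-algorithm table are assumptions made for the sketch, not the suite's actual rules:

```python
# Classify an incoming workload series by crude statistical indicators,
# then dispatch to a prediction family. Thresholds and the CHOICE table
# are illustrative assumptions.
import statistics

def classify_workload(series):
    n = len(series)
    mean = statistics.mean(series)
    # Least-squares slope against time, as a crude trend indicator.
    t_mean = (n - 1) / 2
    slope = sum((t - t_mean) * (y - mean) for t, y in enumerate(series)) / \
            sum((t - t_mean) ** 2 for t in range(n))
    # Lag-1 autocorrelation, as a crude regularity indicator.
    var = sum((y - mean) ** 2 for y in series)
    ac1 = sum((series[i] - mean) * (series[i + 1] - mean)
              for i in range(n - 1)) / var if var else 0.0
    if abs(slope) > 0.5:
        return "trending"
    return "correlated" if ac1 > 0.5 else "irregular"

# Hypothetical dispatch table: workload pattern -> prediction family.
CHOICE = {"trending": "double exponential smoothing",
          "correlated": "ARIMA",
          "irregular": "moving average"}

growing = [10, 12, 15, 17, 20, 22, 25, 27, 30, 33]
print(CHOICE[classify_workload(growing)])  # → double exponential smoothing
```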

    Autonomic Performance-Aware Resource Management in Dynamic IT Service Infrastructures

    Model-based techniques are a powerful approach to engineering autonomic and self-adaptive systems. This thesis presents a model-based approach for proactive and autonomic performance-aware resource management in dynamic IT infrastructures. The core of the approach is an architecture-level modeling language for describing performance- and resource-management-related aspects of such environments. With this approach, it is possible to autonomically find suitable system configurations at the model level.

    Service-Level-Driven Load Scheduling and Balancing in Multi-Tier Cloud Computing

    Cloud computing environments often deal with random-arrival computational workloads that vary in resource requirements and demand high Quality of Service (QoS) obligations. A Service Level Agreement (SLA) is employed to govern the QoS obligations of the cloud service provider to the client. The service provider's conundrum is the desire to maintain a balance between the limited resources available for computing and the high QoS requirements of varying, random computing demands. Any imbalance in managing these conflicting objectives may result in either dissatisfied clients, which can incur potentially significant commercial penalties, or an over-provisioned cloud computing environment, which can be significantly costly to acquire and operate. To respond optimally to such client demands, cloud service providers organize the cloud computing environment as a multi-tier architecture. Each tier executes its designated tasks and passes them to the next tier, in a fashion similar, but not identical, to traditional job-shop environments. Each tier consists of multiple computing resources, and an optimization process must take place to assign and schedule the appropriate tasks of the job on the resources of the tier so as to meet the job's QoS expectations. Thus, scheduling clients' workloads as they arrive at the multi-tier cloud environment to ensure their timely execution has been a central issue in cloud computing. Various approaches have been presented in the literature to address this problem: Join-Shortest-Queue (JSQ), Join-Idle-Queue (JIQ), enhanced Round Robin (RR) and Least Connection (LC), as well as enhanced MinMin and MaxMin, to name a few. This thesis presents a service-level-driven load scheduling and balancing framework for multi-tier cloud computing. A model is used to quantify the penalty a cloud service provider incurs as a function of the jobs' total waiting time and QoS violations.
    This model facilitates penalty mitigation in situations of high demand and resource shortage. The framework accounts for multi-tier job execution dependencies when capturing QoS violation penalties as client jobs progress through subsequent tiers, thus optimizing performance at the multi-tier level. Scheduling and balancing operations are employed to distribute client jobs on resources such that the total waiting time and, hence, SLA violations of client jobs are minimized. Optimal job allocation and scheduling is an NP-hard combinatorial problem. The dynamics of random job arrival make the optimality goal even harder to achieve and maintain as new jobs arrive at the environment. The thesis therefore proposes queue virtualization as an abstraction that allows jobs to migrate between resources within a given tier, as well as altering the sequence of job execution within a given resource, during the optimization process. Given the computational complexity of the job allocation and scheduling problem, a genetic algorithm is proposed to seek optimal solutions. Queue virtualization is proposed as the basis for defining the chromosome structure and operations. As computing jobs tend to vary with respect to delay tolerance, two SLA scenarios are tackled: equal cost of time delays and differentiated cost of time delays. Experimental work is conducted to investigate the performance of the proposed approach at both the tier and system level.
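    The chromosome-and-fitness idea can be sketched with a toy genetic algorithm; the job times, operators, and GA parameters below are illustrative assumptions, not the thesis's actual encoding:

```python
# Toy GA for single-tier job scheduling: a chromosome assigns each job
# to a resource, and fitness is the total waiting time to be minimized.
import random

random.seed(0)
JOB_TIMES = [4, 2, 7, 1, 3, 5, 6, 2]   # processing time per job
RESOURCES = 3                           # parallel resources in the tier

def total_waiting_time(chromosome):
    # Jobs on the same resource run in sequence; a job's waiting time is
    # the sum of the processing times already queued on its resource.
    wait = 0
    queues = {r: 0 for r in range(RESOURCES)}
    for job, res in enumerate(chromosome):
        wait += queues[res]
        queues[res] += JOB_TIMES[job]
    return wait

def crossover(a, b):
    cut = random.randrange(1, len(a))   # one-point crossover
    return a[:cut] + b[cut:]

def mutate(c):
    c = list(c)
    c[random.randrange(len(c))] = random.randrange(RESOURCES)
    return c

pop = [[random.randrange(RESOURCES) for _ in JOB_TIMES] for _ in range(20)]
for _ in range(50):                     # generations
    pop.sort(key=total_waiting_time)
    elite = pop[:5]                     # elitist selection
    pop = elite + [mutate(crossover(random.choice(elite),
                                    random.choice(elite)))
                   for _ in range(15)]
best = min(pop, key=total_waiting_time)
print(total_waiting_time(best))
```

    The thesis's queue virtualization additionally lets jobs migrate between resources and reorder within a resource during optimization; the flat job-to-resource vector here is only the simplest possible chromosome for conveying the fitness-driven search.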

    Integrating Predictive Analysis into Self-Adaptive Systems

    In this thesis we propose proactive self-adaptation by integrating predictive analysis into two phases of the software process. At design time, we propose a predictive modeling process comprising the following activities: define goals, collect data, select the model structure, prepare the data, build candidate predictive models, train, test and cross-validate the candidate models, and select the "best" models based on a measure of model goodness. At runtime, we consume the predictions of the selected predictive models using the running system's actual data. Depending on the input data and the time allowed for the learning algorithms, we argue that the software system can foresee possible future input variables and adapt proactively in order to accomplish medium- and long-term goals and requirements. In recent years there has been growing interest in software systems capable of coping with the dynamics of constantly changing environments. Self-adaptive systems are now required to adapt dynamically to new situations while maximizing performance and availability. Ubiquitous and pervasive systems operate in complex, heterogeneous environments and run on resource-constrained devices, where events can compromise system quality. Consequently, it is desirable to rely on mechanisms that adapt the system according to events occurring in its execution context. In particular, the Software Engineering for Self-Adaptive Systems (SEAMS) community strives to achieve a set of self-management properties in computing systems, namely the self-configuring, self-healing, self-optimizing and self-protecting properties.
    To achieve self-management, the software system implements an autonomic control loop known as the MAPE-K loop [78]. The MAPE-K loop is the reference paradigm for designing self-adaptive software in the context of autonomic computing. This model consists of sensors and effectors together with four key activities, Monitor, Analyze, Plan and Execute, complemented by a knowledge base called Knowledge, which passes information between the other activities [78]. A study of the recent literature on the subject [109, 71] shows that dynamic adaptation is generally performed reactively, in which case software systems are unable to anticipate recurring problematic situations. In some situations this can lead to unnecessary overhead or to temporary unavailability of system resources. A proactive approach, by contrast, does not simply react to events in the environment: its behavior is goal-directed, taking anticipatory initiatives to improve system performance or quality of service.
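    One MAPE-K iteration can be sketched as follows; the sensor value, threshold, and effector action are hypothetical illustrations, not part of the thesis:

```python
# Minimal sketch of one MAPE-K iteration: Monitor, Analyze, Plan,
# Execute, sharing a Knowledge base. All values are illustrative.
knowledge = {"cpu_threshold": 0.8, "history": []}

def monitor(sensor):
    reading = sensor()                      # Monitor: collect raw data
    knowledge["history"].append(reading)    # record in shared Knowledge
    return reading

def analyze(reading):
    # Analyze: detect a symptom against knowledge held in the loop.
    return reading > knowledge["cpu_threshold"]

def plan(symptom):
    # Plan: choose an adaptation strategy for the detected symptom.
    return "add_replica" if symptom else None

def execute(action, effector):
    if action:
        effector(action)                    # Execute: act via effectors

actions = []
sensor = lambda: 0.93                       # fake CPU-load sensor reading
execute(plan(analyze(monitor(sensor))), actions.append)
print(actions)  # → ['add_replica']
```

    A proactive variant, as the abstract argues, would replace the raw reading inside analyze with a prediction of future readings, triggering the plan before the threshold is actually crossed.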

    Architecture-Level Software Performance Models for Online Performance Prediction

    Proactive performance and resource management of modern IT infrastructures requires the ability to predict at run time how the performance of running services would be affected if the workload or the system changes. This thesis presents modeling and prediction facilities that enable online performance prediction during system operation. Analyses of the impact of reconfigurations and workload trends can be conducted at the model level, without executing expensive performance tests.