An adaptive trust based service quality monitoring mechanism for cloud computing
Cloud computing is the newest paradigm in distributed computing, delivering computing resources over the Internet as services. Because of its attractiveness, the market is currently flooded with service providers, which forces customers to identify the provider that best meets their requirements in terms of service quality. Existing monitoring of service quality in cloud computing has been limited to quantification alone. Continuous improvement and distribution of service-quality scores, on the other hand, have been implemented in other distributed computing paradigms but not specifically for cloud computing. This research investigates methods and proposes mechanisms for quantifying and ranking the service quality of service providers. The solution proposed in this thesis consists of three mechanisms: a service-quality modeling mechanism, an adaptive trust computing mechanism, and a trust distribution mechanism for cloud computing.
The Design Research Methodology (DRM) was modified by adding phases, means and methods, and probable outcomes, and this modified DRM is used throughout the study. The mechanisms were developed and tested incrementally until the expected outcome was achieved, and a comprehensive set of experiments was carried out in a simulated environment to validate their effectiveness. The evaluation compares their performance against the combined trust model and the QoS trust model for cloud computing, along with an adapted fuzzy-theory-based trust computing mechanism and a super-agent-based trust distribution mechanism developed for other distributed systems. The results show that the proposed mechanisms are faster and more stable than the existing solutions in reaching final trust scores on all three parameters tested. These results are significant in making cloud computing acceptable to users, by letting them verify the performance of service providers before making a selection
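The abstract does not give the update rule, but adaptive trust computing mechanisms of this kind are often built on an exponentially weighted update of the trust score. The sketch below is a hypothetical illustration, not the thesis's actual mechanism: the function names, the smoothing factor `alpha`, and the assumption that quality scores lie in [0, 1] are all mine.

```python
def update_trust(current, observed, alpha=0.3):
    # Exponentially weighted update: recent observations weigh more,
    # so the trust score adapts when a provider's quality changes.
    # alpha (assumed here) controls how quickly old evidence decays.
    return (1 - alpha) * current + alpha * observed

def converge(observations, initial=0.5, alpha=0.3):
    # Feed a stream of observed quality scores into the trust value,
    # starting from a neutral prior of 0.5.
    trust = initial
    for quality in observations:
        trust = update_trust(trust, quality, alpha)
    return trust
```

Under this rule, a provider that consistently delivers quality 0.9 pulls the trust score from the 0.5 prior toward 0.9, and the speed of that convergence is exactly the kind of "time to reach the final trust score" the evaluation above measures.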
Assessment – An Approach to the Evaluation of Self-Organizing Systems
In communications engineering, self-organizing systems are seen as a way to improve service quality while simultaneously reducing administration effort. The self-adapting algorithms they employ lead to entirely new system properties, which must be taken into account when testing and evaluating the performance of these systems. Starting from definitions of context-sensitive systems and considerations of the interfaces between the test environment and the system under test, a novel evaluation procedure called Assessment is presented. Finally, problems in generating test cases are discussed and new, upcoming research tasks are outlined. The approach is intended to contribute to the discussion of a systems theory for these novel systems and, in practical terms, to strengthen trust in the operation of such highly flexible systems and to promote their future use
A Bag-of-Tasks Scheduler Tolerant to Temporal Failures in Clouds
Cloud platforms have emerged as a prominent environment for executing high performance computing (HPC) applications, providing on-demand resources as well as scalability. They usually offer different classes of Virtual Machines (VMs) which ensure different guarantees in terms of availability and volatility, provisioning the same resource through multiple pricing models. For instance, in the Amazon EC2 cloud, the user pays per hour for on-demand VMs, while spot VMs are unused instances available at a lower price. Despite the monetary advantages, a spot VM can be terminated, stopped, or hibernated by EC2 at any moment.
Using both hibernation-prone spot VMs (for the sake of cost) and on-demand VMs, we propose in this paper a static scheduling for HPC applications composed of independent tasks (bag-of-tasks) with deadline constraints. If a spot VM hibernates and does not resume within a time that guarantees the application's deadline, a temporal failure takes place. Our scheduling thus aims at minimizing the monetary cost of bag-of-tasks applications in the EC2 cloud while respecting their deadlines and avoiding temporal failures. To this end, our algorithm statically creates two scheduling maps: (i) the first one contains, for each task, its starting time and the VM (i.e., an available spot or on-demand VM with the current lowest price) on which the task should execute; (ii) the second one contains, for each task allocated to a spot VM in the first map, its starting time and the on-demand VM on which it should be executed to meet the application deadline and avoid a temporal failure. The latter map is used whenever the hibernation period of a spot VM exceeds a time limit. Performance results from simulations with task execution traces, the configuration of Amazon EC2 VM classes, and VM market price history confirm the effectiveness of our scheduling and its tolerance to temporal failures
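The two-map construction can be sketched as follows. This is a deliberately simplified illustration, not the paper's algorithm: it assumes tasks run sequentially on a single spot VM, and all names, prices, and the runtime model are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    runtime: float  # estimated execution time in hours (assumed known)

def build_schedules(tasks, deadline, spot_price, ondemand_price):
    """Build (i) a primary map placing each task on a cheap spot VM and
    (ii) a backup map giving the latest on-demand start time that still
    meets the application deadline if the spot VM hibernates too long."""
    primary, backup = {}, {}
    start = 0.0
    for task in tasks:
        primary[task.name] = {"vm": "spot", "start": start,
                              "cost": task.runtime * spot_price}
        # Latest moment the task can migrate to an on-demand VM and
        # still finish before the deadline (the temporal-failure guard).
        backup[task.name] = {"vm": "on-demand",
                             "start": deadline - task.runtime,
                             "cost": task.runtime * ondemand_price}
        start += task.runtime
    return primary, backup
```

At run time, a scheduler following this idea would execute from the primary map and switch a task to its backup entry only when the spot VM's hibernation persists past the backup start time.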
An Artificial Intelligence Framework for Supporting Coarse-Grained Workload Classification in Complex Virtual Environments: Cloud-based machine learning tools for enhanced Big Data applications
The main idea is to predict the "next" workload occurring against the target Cloud infrastructure via an innovative ensemble-based approach that combines the effectiveness of different well-known classifiers in order to enhance the overall accuracy of the final classification, which is highly relevant at present in the specific context of Big Data. The so-called workload categorization problem plays a critical role in improving the efficiency and reliability of Cloud-based big data applications. Implementation-wise, our method deploys the Cloud entities that participate in the distributed classification approach on top of virtual machines, which represent classical "commodity" settings for Cloud-based big data applications. Given a number of known reference workloads and an unknown workload, in this paper we deal with the problem of finding the reference workload most similar to the unknown one. The depicted scenario turns out to be useful in a plethora of modern information system applications. We name this problem coarse-grained workload classification because, instead of characterizing the unknown workload in terms of finer behaviors, such as CPU-, memory-, disk-, or network-intensive patterns, we classify the whole unknown workload as one of the (possible) reference workloads. Reference workloads represent a category of workloads that are relevant in a given applicative environment. In particular, we focus our attention on the classification problem described above in the special case represented by virtualized environments. Today, Virtual Machines (VMs) have become very popular because they offer important advantages to modern computing environments such as cloud computing or server farms.
In virtualization frameworks, workload classification is very useful for accounting, security, or user profiling. Hence, our research is especially meaningful in such environments, and it turns out to be very useful in the emerging context of Cloud Computing. In this respect, our approach consists of running several machine learning-based classifiers over different workload models and then deriving the best classification via Dempster–Shafer fusion, in order to magnify the accuracy of the final result. Experimental assessment and analysis clearly confirm the benefits of our classification framework. The running programs that produce the unknown workloads to be classified are treated in a similar way. A fundamental aspect of this paper concerns the successful use of data fusion in workload classification: different types of metrics are fused together using the Dempster–Shafer theory of evidence combination, giving a classification accuracy of slightly less than … . The acquisition of data from the running process, the pre-processing algorithms, and the workload classification are described in detail. Various classical algorithms have been used to classify the workloads, and the results are compared
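Dempster's rule of combination, which underlies the fusion step described above, can be illustrated for the simple case of two basic-belief assignments over singleton hypotheses. This is a textbook sketch under assumed workload labels, not the paper's implementation:

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for two basic-belief assignments
    over the same frame of discernment (singleton hypotheses only).
    Agreeing evidence is multiplied; conflicting mass is discarded and
    the remainder renormalized."""
    combined = {}
    conflict = 0.0
    for h1, v1 in m1.items():
        for h2, v2 in m2.items():
            if h1 == h2:
                combined[h1] = combined.get(h1, 0.0) + v1 * v2
            else:
                conflict += v1 * v2
    if conflict >= 1.0:
        raise ValueError("total conflict: evidence cannot be combined")
    return {h: v / (1.0 - conflict) for h, v in combined.items()}
```

For instance, fusing one classifier's masses {"cpu": 0.7, "io": 0.3} with another's {"cpu": 0.6, "io": 0.4} yields a normalized belief of about 0.78 for "cpu", sharper than either classifier alone, which is the "magnifying" effect the abstract refers to.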
Towards auto-scaling in the cloud: online resource allocation techniques
Cloud computing provides easy access to computing resources: customers can acquire and release resources at any time. However, it is not trivial to determine when and how many resources to allocate. Many applications running in the cloud face workload changes that affect their resource demand. The first thought is to plan capacity either for the average load or for the peak load. In the first case less cost is incurred, but performance suffers whenever the peak load occurs. The second case wastes money, since the resources remain underutilized most of the time. There is therefore a need for more sophisticated resource provisioning techniques that can automatically scale an application's resources according to workload demand and performance constraints.
Large cloud providers such as Amazon, Microsoft, and RightScale offer auto-scaling services. However, without proper configuration and testing, such services can do more harm than good. In this work I investigate application-specific online resource allocation techniques that dynamically adapt to the incoming workload, minimize the cost of virtual resources, and meet user-specified performance objectives
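A minimal reactive policy illustrates the kind of online decision involved. The thresholds, instance limits, and cooldown-free logic below are assumptions for illustration, far simpler than the application-specific techniques this work investigates:

```python
def autoscale(current_vms, utilization, upper=0.8, lower=0.3,
              min_vms=1, max_vms=10):
    """Threshold-based scaling decision: add a VM when average
    utilization is high, remove one when it is low, otherwise hold.
    All thresholds and limits are illustrative assumptions."""
    if utilization > upper and current_vms < max_vms:
        return current_vms + 1   # scale out to protect performance
    if utilization < lower and current_vms > min_vms:
        return current_vms - 1   # scale in to cut cost
    return current_vms
```

Misconfigured thresholds in exactly this kind of rule (e.g., `upper` and `lower` set too close together) are one way a naive auto-scaling service "does more harm than good", oscillating between scale-out and scale-in.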
A Novel System Anomaly Prediction System Based on Belief Markov Model and Ensemble Classification
Computer systems are becoming extremely complex, and system anomalies dramatically affect their availability and usability. Online anomaly prediction is an important approach to managing imminent anomalies, and high accuracy relies on precise system monitoring data. However, precise monitoring data is not easily obtainable because of widespread noise. In this paper, we present a method that integrates an improved Evidential Markov model and ensemble classification to predict anomalies in systems with noise. Traditional Markov models use explicit state boundaries to build the Markov chain and then make predictions for different measurement metrics. A problem arises when the data comes with noise, because even slight oscillation around the true value can lead to very different predictions. The Evidential Markov chain method can deal with noisy data but is not suitable for complex data-stream scenarios. The Belief Markov chain that we propose extends the Evidential Markov chain and can cope with noisy data streams. This study further applies ensemble classification to identify system anomalies based on the predicted metrics. Extensive experiments on anomaly data collected from 66 metrics in PlanetLab confirm that our approach achieves high prediction accuracy and time efficiency
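The crisp-boundary baseline that the Belief Markov chain improves on can be sketched as a first-order Markov chain over discretized metric states. The binning into named states is a hypothetical choice of mine, and the paper's actual contribution (belief functions over states) is not shown here:

```python
from collections import defaultdict

def train_markov(states):
    """Estimate transition probabilities from a sequence of discrete
    states (e.g., a monitored metric binned into 'low'/'high')."""
    counts = defaultdict(lambda: defaultdict(int))
    for cur, nxt in zip(states, states[1:]):
        counts[cur][nxt] += 1
    return {s: {t: c / sum(outgoing.values()) for t, c in outgoing.items()}
            for s, outgoing in counts.items()}

def predict_next(model, state):
    """Most likely next state under the trained chain. With noisy data,
    a small oscillation across a bin boundary changes the observed state
    and hence the prediction entirely; this brittleness is what the
    belief-function approach targets."""
    return max(model[state], key=model[state].get)
```

In this crisp model each observation belongs to exactly one state; the evidential extension instead spreads an observation's mass across neighboring states, so boundary noise no longer flips the chain.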
- …