Monitoring and Optimization of ATLAS Tier 2 Center GoeGrid
The demand on computational and storage resources is growing along with the amount of information that needs to be processed and preserved. To ease the provisioning of digital services to a growing number of consumers, more and more distributed computing systems and platforms are being developed and deployed. The building blocks of such distributed computing infrastructures are individual computing centres, such as GoeGrid, a Tier-2 centre of the Worldwide LHC Computing Grid. The main motivation of this thesis was the optimization of GoeGrid performance through efficient monitoring. The goal was achieved by analysing the GoeGrid monitoring information. The data analysis approach was based on the adaptive-network-based fuzzy inference system (ANFIS) and on machine learning algorithms such as the linear Support Vector Machine (SVM).
The main object of the research was the digital service, since the availability, reliability and serviceability of a computing platform can be measured by the constant and stable provisioning of its services. Because large computing facilities widely adopt a service-oriented architecture (SOA), knowing the state of a service in advance, together with quick and accurate detection of its unavailability, makes proactive management of the computing facility possible. Proactive management is considered a core component of facility-management automation concepts such as Autonomic Computing. Thus, timely, anticipatory and accurate identification of the status of the provided services can be regarded as a contribution to the automation of computing facility management, which is directly related to the provisioning of stable and reliable computing resources.
Based on case studies performed with the GoeGrid monitoring data, the approaches can reasonably be regarded as generalized methods for the accurate and fast identification and prediction of service status. Their simplicity and low consumption of computing resources make them suitable candidates for an Autonomic Computing component.
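As an illustration only (this is not the thesis code), a linear classifier over hypothetical monitoring features can separate healthy from degraded service states; a simple perceptron stands in here for the linear SVM the abstract mentions, and the feature names and data are invented:

```python
def train_perceptron(X, y, epochs=100, lr=1.0):
    """Train a linear separator w.x + b on labels in {-1, +1}.
    Stands in for the linear SVM; converges on separable data."""
    d = len(X[0])
    w = [0.0] * d
    b = 0.0
    for _ in range(epochs):
        errors = 0
        for xi, yi in zip(X, y):
            # misclassified (or on the boundary): nudge the hyperplane
            if yi * (sum(wj * xj for wj, xj in zip(w, xi)) + b) <= 0:
                w = [wj + lr * yi * xj for wj, xj in zip(w, xi)]
                b += lr * yi
                errors += 1
        if errors == 0:  # all training points correctly classified
            break
    return w, b

def predict(w, b, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b > 0 else -1

# Hypothetical monitoring features: [cpu_load, failed_job_rate]
X = [[0.2, 0.01], [0.3, 0.02], [0.9, 0.4], [0.95, 0.5]]
y = [1, 1, -1, -1]  # +1 = service healthy, -1 = degraded
w, b = train_perceptron(X, y)
print(predict(w, b, [0.25, 0.015]))  # a low-load sample -> 1 (healthy)
```

In the thesis setting, the labels would come from the service-status records in the GoeGrid monitoring data rather than from a hand-built toy set.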
Challenges to describe QoS requirements for web services quality prediction to support web services interoperability in electronic commerce
Quality of service (QoS) is significant and necessary for quality assurance of web service applications. Web services quality has also contributed to the successful implementation of Electronic Commerce (EC) applications. However, QoS remains a major issue for web services research and one of the main research questions that still need to be explored. We believe that QoS should not only be measured but also predicted during the development and implementation stages. However, there are challenges and constraints in determining and choosing QoS requirements for high-quality web services. This paper therefore highlights the challenges of QoS requirements prediction, since such requirements are not easy to identify. Moreover, web services serve many different perspectives and purposes, and various prediction techniques exist to describe QoS requirements. Additionally, the paper introduces a metamodel as a concept of what makes a good web service.
A compositional method for reliability analysis of workflows affected by multiple failure modes
We focus on reliability analysis for systems designed as workflow-based compositions of components. Components are characterized by their failure profiles, which take into account possible multiple failure modes. A compositional calculus is provided to evaluate the failure profile of a composite system, given the failure profiles of its components. The calculus is described as a syntax-driven procedure that synthesizes a workflow's failure profile. The method is intended as a design-time aid that can help software engineers reason about system reliability in the early stages of development. A simple case study illustrates the proposed approach.
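The paper's calculus is syntax-driven over workflow constructs; as a minimal sketch of the sequential-composition case only, assuming failure modes are mutually exclusive within a component and a later component runs only if every earlier one succeeded:

```python
def sequential_failure_profile(profiles):
    """Combine failure profiles of components executed in sequence.
    Each profile maps failure mode -> probability of that mode."""
    composite = {}
    p_reach = 1.0  # probability that control reaches the current component
    for profile in profiles:
        for mode, p in profile.items():
            composite[mode] = composite.get(mode, 0.0) + p_reach * p
        p_reach *= 1.0 - sum(profile.values())  # only success continues
    return composite

# Hypothetical failure profiles of two components in sequence
c1 = {"timeout": 0.01, "crash": 0.02}
c2 = {"timeout": 0.05}
prof = sequential_failure_profile([c1, c2])
# "timeout": 0.01 + 0.97 * 0.05 = 0.0585 ; "crash": 0.02
```

The full calculus would add analogous rules for the other workflow constructs (choice, parallel composition, loops); the sketch shows only how mode probabilities propagate through a sequence.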
Big Data Analytics for QoS Prediction Through Probabilistic Model Checking
As competitiveness increases, being able to guarantee the QoS of delivered services is key for business success. The ability to continuously monitor the workflow providing a service and to recognize breaches in the agreed QoS level in a timely manner is thus of paramount importance. The ideal condition would be the possibility to anticipate, and thus predict, a breach and act to avoid it, or at least to mitigate its effects. In this paper we propose a model-checking-based approach to predict the QoS of a formally described process. Continuous model checking is enabled by the use of a parametrized model of the monitored system, in which the actual values of the parameters are continuously evaluated and updated by means of big data tools. The paper also describes a prototype implementation of the approach and shows its usage in a case study.
Comment: EDCC-2014, BIG4CIP-2014; keywords: Big Data Analytics, QoS Prediction, Model Checking, SLA compliance monitoring
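The paper relies on a probabilistic model checker over a parametrized model; as a hand-rolled illustration of the underlying idea only, a tiny hypothetical three-state chain (OK, DEGRADED, BREACH) shows how a degradation-rate parameter, re-estimated by the monitoring pipeline, feeds a breach-probability estimate:

```python
def breach_probability(p, steps):
    """Probability of reaching the absorbing BREACH state within
    `steps` transitions, given degradation-rate parameter p <= 0.5
    (a value that would be re-estimated online from monitoring data)."""
    # Transition matrix rows: OK, DEGRADED, BREACH (absorbing)
    P = [
        [1 - p, p,       0.0],
        [0.5,   0.5 - p, p  ],
        [0.0,   0.0,     1.0],
    ]
    dist = [1.0, 0.0, 0.0]  # start in OK
    for _ in range(steps):
        dist = [sum(dist[i] * P[i][j] for i in range(3)) for j in range(3)]
    return dist[2]

# Hypothetical parameter value from the big-data estimation step
print(breach_probability(0.1, 20))
```

A real deployment would delegate this transient analysis to a probabilistic model checker over the formal process model; the point of the sketch is only that the breach probability is a function of parameters that the monitoring data keeps up to date.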
A Taxonomy of Workflow Management Systems for Grid Computing
With the advent of Grid and application technologies, scientists and engineers are building ever more complex applications to manage and process large data sets and to execute scientific experiments on distributed resources. Such application scenarios require means for composing and executing complex workflows. Many efforts have therefore been made towards the development of workflow management systems for Grid computing. In this paper, we propose a taxonomy that characterizes and classifies various approaches to building and executing workflows on Grids. We also survey several representative Grid workflow systems developed by various projects worldwide to demonstrate the comprehensiveness of the taxonomy. The taxonomy not only highlights the design and engineering similarities and differences among state-of-the-art Grid workflow systems, but also identifies areas that need further research.
Comment: 29 pages, 15 figures