
    Extended Fault Taxonomy of SOA-Based Systems

    Service Oriented Architecture (SOA) is considered a standard for enterprise software development. The main characteristics of SOA are the dynamic discovery and composition of software services in a heterogeneous environment. These properties pose new challenges for fault management of SOA-based systems (SBS). A proper understanding of the different faults in an SBS is essential for effective fault handling. A comprehensive three-fold fault taxonomy is presented here that covers distributed, SOA-specific, and non-functional faults in a holistic manner. Such a taxonomy is a key starting point for developing techniques and methods for assessing the quality of a given system. In this paper, an attempt has been made to organize the various SBS faults into a well-structured taxonomy that may assist developers in planning suitable fault-repair strategies. Commonly emphasized fault recovery strategies are also discussed, along with challenges that may occur during fault handling of SBSs.
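    As a rough illustration of how such a three-fold taxonomy could be used in practice, the sketch below tags faults with a top-level category and maps each category to a coarse recovery action; the category names, example fault types, and recovery actions are illustrative placeholders, not the paper's actual taxonomy.

```python
from enum import Enum
from dataclasses import dataclass

class FaultCategory(Enum):
    """Top level of a three-fold fault taxonomy (illustrative names)."""
    DISTRIBUTED = "distributed"        # e.g. network partitions, timeouts
    SOA_SPECIFIC = "soa_specific"      # e.g. discovery, binding, composition faults
    NON_FUNCTIONAL = "non_functional"  # e.g. QoS violations such as latency or availability

@dataclass
class Fault:
    """A fault observation tagged with its taxonomy category."""
    service: str
    category: FaultCategory
    description: str

def recovery_strategy(fault: Fault) -> str:
    """Hypothetical mapping from fault category to a coarse recovery action."""
    if fault.category is FaultCategory.DISTRIBUTED:
        return "retry or reroute the request"
    if fault.category is FaultCategory.SOA_SPECIFIC:
        return "rediscover and rebind an equivalent service"
    return "renegotiate the SLA or substitute a better-performing service"

print(recovery_strategy(Fault("PaymentService", FaultCategory.SOA_SPECIFIC, "stale endpoint binding")))
```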

    End-to-End Trust Fulfillment of Big Data Workflow Provisioning over Competing Clouds

    Cloud Computing has emerged as a promising and powerful paradigm for delivering data-intensive, high-performance computation, applications, and services over the Internet. Cloud Computing has enabled the implementation and success of Big Data, a relatively recent phenomenon consisting of the generation and analysis of abundant data from various sources. Accordingly, to satisfy the growing demands of Big Data storage, processing, and analytics, a large market has emerged for Cloud Service Providers offering a myriad of resources, platforms, and infrastructures. The proliferation of these services often makes it difficult for consumers to select the most suitable and trustworthy provider to fulfill the requirements of building complex workflows and applications in a relatively short time. In this thesis, we first propose a quality specification model to support dual pre- and post-cloud workflow provisioning, consisting of service provider selection and workflow quality enforcement and adaptation. This model captures key quality properties at different stages of the Big Data value chain, enabling standardized quality specification, monitoring, and adaptation. Subsequently, we propose a two-dimensional trust-enabled framework to facilitate end-to-end Quality of Service (QoS) enforcement that: 1) automates cloud service provider selection for Big Data workflow processing, and 2) maintains the required QoS levels of Big Data workflows during runtime through dynamic orchestration using multi-model, architecture-driven workflow monitoring, prediction, and adaptation. The trust-based automatic service provider selection scheme we propose in this thesis is comprehensive and adaptive, as it relies on a dynamic trust model to evaluate the QoS of a cloud provider before taking any selection decision. It is a multi-dimensional trust model for Big Data workflows over competing clouds that assesses the trustworthiness of cloud providers based on three trust levels: (1) the presence of the most up-to-date verified cloud resource capabilities, (2) reputational evidence measured by neighboring users, and (3) a recorded personal history of experiences with the cloud provider. The trust-based workflow orchestration scheme we propose aims to avoid performance degradation or cloud service interruption. Our workflow orchestration approach is based not only on automatic adaptation and reconfiguration supported by monitoring, but also on predicting cloud resource shortages, thus preventing performance degradation. We formalize the cloud resource orchestration process using a state machine that efficiently captures the dynamic properties of the cloud execution environment. In addition, we use a model checker to validate our monitoring model in terms of reachability, liveness, and safety properties. We evaluate both our automated service provider selection scheme and our cloud workflow orchestration, monitoring, and adaptation schemes on a workflow-enabled Big Data application. A set of scenarios was carefully chosen to evaluate the performance of the service provider selection, workflow monitoring, and adaptation schemes we have implemented. The results demonstrate that our service selection outperforms other selection strategies and ensures trustworthy service provider selection. The results of evaluating the automated workflow orchestration further show that our model is self-adapting and self-configuring, reacts efficiently to changes, and adapts accordingly while enforcing the QoS of workflows.
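    A minimal sketch of the provider-selection idea, assuming a simple weighted combination of the three trust levels named above; the weights, scores, and provider names are hypothetical and not taken from the thesis.

```python
from dataclasses import dataclass

@dataclass
class ProviderEvidence:
    """Evidence for one cloud provider on the three trust levels named in the abstract."""
    verified_capabilities: float  # 0..1, from up-to-date verified resource capabilities
    reputation: float             # 0..1, aggregated from neighboring users
    personal_history: float       # 0..1, from the consumer's own recorded experiences

def trust_score(e: ProviderEvidence, weights=(0.4, 0.3, 0.3)) -> float:
    """Weighted combination of the three trust levels; weights are illustrative."""
    w_cap, w_rep, w_hist = weights
    return (w_cap * e.verified_capabilities
            + w_rep * e.reputation
            + w_hist * e.personal_history)

def select_provider(candidates: dict) -> str:
    """Pick the provider with the highest trust score prior to workflow provisioning."""
    return max(candidates, key=lambda name: trust_score(candidates[name]))

providers = {
    "cloud-a": ProviderEvidence(0.9, 0.7, 0.8),
    "cloud-b": ProviderEvidence(0.8, 0.9, 0.6),
}
print(select_provider(providers))
```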

    Understanding the Patterns of Microservice Intercommunication From A Developer Perspective

    Microservices Architecture is the modern paradigm for designing software. Based on the divide-and-conquer strategy, a microservices architecture organizes an application at a fine level of granularity. Each microservice has a well-defined responsibility, and multiple microservices communicate with each other toward a common goal. A pivotal decision in designing microservices applications is the choice between orchestration- and choreography-based modes as the underlying intercommunication pattern. Choreography entails that microservices work autonomously, while orchestration entails that a central coordinator directs the interaction between services. We weigh this decision from a developer's perspective by empirically evaluating the properties of a benchmark system mapped into both orchestrated and choreographed topologies. In this research, we document our experience from implementing and debugging this system. Our studies demonstrate that microservices composed using orchestration exhibit desirable inherent characteristics that make microservice code easier to implement, debug, and scale.
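    To make the two patterns concrete, the sketch below contrasts a central orchestrator that calls services in sequence with a choreography in which services react to published events; the services and events are hypothetical and not the benchmark system used in the study.

```python
# Minimal sketch contrasting the two intercommunication patterns; service names
# and workflow steps are hypothetical.

def reserve_stock(order):  return {**order, "stock": "reserved"}
def charge_payment(order): return {**order, "payment": "charged"}

# Orchestration: a central coordinator owns the workflow and calls each service.
def orchestrator(order):
    order = reserve_stock(order)
    order = charge_payment(order)
    return order

# Choreography: each service subscribes to events and reacts autonomously.
subscribers = {}
def subscribe(event, handler):
    subscribers.setdefault(event, []).append(handler)
def publish(event, payload):
    for handler in subscribers.get(event, []):
        handler(payload)

subscribe("order_placed",    lambda o: publish("stock_reserved", reserve_stock(o)))
subscribe("stock_reserved",  lambda o: publish("payment_charged", charge_payment(o)))
subscribe("payment_charged", lambda o: print("done:", o))

print("orchestrated:", orchestrator({"id": 1}))
publish("order_placed", {"id": 2})
```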

    Monitoring and Optimization of ATLAS Tier 2 Center GoeGrid

    The demand for computational and storage resources is growing along with the amount of information that needs to be processed and preserved. To ease the provisioning of digital services to a growing number of consumers, more and more distributed computing systems and platforms are being developed and deployed. The building blocks of such distributed computing infrastructures are individual computing centres, such as the Worldwide LHC Computing Grid Tier 2 centre GoeGrid. The main motivation of this thesis was the optimization of GoeGrid performance through efficient monitoring. This goal has been achieved by analysing the GoeGrid monitoring information. The data analysis approach was based on the adaptive-network-based fuzzy inference system (ANFIS) and a machine learning algorithm, the linear Support Vector Machine (SVM). The main object of the research was the digital service, since the availability, reliability, and serviceability of a computing platform can be measured by the constant and stable provisioning of its services. Because the service-oriented architecture (SOA) concept is widely used for large computing facilities, knowing the service state in advance, as well as quickly and accurately detecting its failure, enables proactive management of the computing facility. Proactive management is considered a core component of computing facility management automation concepts such as Autonomic Computing. Thus, timely, advance, and accurate identification of the provided service status can be considered a contribution to computing facility management automation, which is directly related to the provisioning of stable and reliable computing resources. Based on the case studies performed with the GoeGrid monitoring data, it is reasonable to consider the approaches as generalized methods for accurate and fast identification and prediction of service status. Their simplicity and low consumption of computing resources allow the methods to be considered within the scope of the Autonomic Computing concept.
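    As a rough illustration of the SVM part of the approach, the sketch below trains a linear SVM (scikit-learn) to classify a service as healthy or degraded from monitoring metrics; the feature names and synthetic data are invented for illustration, and the ANFIS component is not reproduced here.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Hypothetical monitoring features: [cpu_load, queue_length, failed_request_rate]
healthy  = rng.normal([0.3, 10, 0.01], [0.1, 5, 0.01], size=(200, 3))
degraded = rng.normal([0.9, 80, 0.20], [0.1, 20, 0.05], size=(200, 3))
X = np.vstack([healthy, degraded])
y = np.array([0] * 200 + [1] * 200)  # 0 = service OK, 1 = service degraded

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LinearSVC().fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```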

    Human-Centered Automation for Resilience in Acquiring Construction Field Information

    Resilient acquisition of timely, detailed job site information plays a pivotal role in maintaining the productivity and safety of construction projects that have busy schedules, dynamic workspaces, and unexpected events. In the field, construction information acquisition often involves three types of activities: sensor-based inspection, manual inspection, and communication. Human interventions play critical roles in all three. A resilient information acquisition system is needed for safer and more productive construction. The use of various automation technologies could help improve human performance by proactively providing the knowledge needed to use equipment, improving situation awareness in multi-person collaboration, and reducing the mental workload of operators and inspectors. Unfortunately, few studies consider human factors in automation techniques for construction field information acquisition. Full utilization of automation techniques requires a systematic synthesis of the interactions between humans, tasks, and the construction workspace to reduce the complexity of information acquisition tasks so that humans can complete them reliably. Overall, such a synthesis of human factors in field data collection and analysis paves the path towards “Human-Centered Automation” (HCA) in construction management. HCA could form a computational framework that supports resilient field data collection considering human factors and unexpected events on dynamic job sites. This dissertation presents an HCA framework for resilient construction field information acquisition and the results of examining three HCA approaches that support three use cases of construction field data collection and analysis. The first HCA approach is an automated data collection planning method that can assist the 3D laser scan planning of construction inspectors to achieve comprehensive and efficient data collection. The second HCA approach is a Bayesian model-based approach that automatically aggregates the common sense of people on the Internet to identify job site risks from a large number of job site pictures. The third HCA approach is an automatic communication protocol optimization approach that maximizes the team situation awareness of construction workers and leads to early detection of workflow delays and critical path changes. Data collection and simulation experiments extensively validate these three HCA approaches.
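    As a rough sketch of the idea behind the second approach, the snippet below aggregates independent crowd judgements about a job site picture with Bayes' rule, assuming a hypothetical prior and annotator accuracy; it is a generic illustration, not the dissertation's actual model.

```python
def posterior_hazard(votes, prior=0.1, accuracy=0.8):
    """P(hazard | votes), assuming independent annotators with the given accuracy."""
    p_yes, p_no = prior, 1.0 - prior
    for vote in votes:                      # vote == 1 means "hazard reported"
        p_yes *= accuracy if vote else (1 - accuracy)
        p_no  *= (1 - accuracy) if vote else accuracy
    return p_yes / (p_yes + p_no)

# Three of four annotators flag the picture as hazardous.
print(round(posterior_hazard([1, 1, 1, 0]), 3))
```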

    16th SC@RUG 2019 proceedings 2018-2019

    • …