
    Stochastic models for dependable services

    In this paper we investigate the use of stochastic models for analysing service-oriented systems. We propose an iterative hybrid approach that combines system measurements, testbed observations, and formal models to derive a quantitative model of service-based systems, allowing us to evaluate the effectiveness of the restart method in such systems. Even when one is fortunate enough to have access to a real system for measurements, the obtained data often lacks statistical significance, or knowledge of the system is insufficient to explain the data. A testbed may then be preferable, as it allows for long experiment series and provides full control over the system's configuration. To provide meaningful data, the testbed must be equipped with fault injection based on a suitable fault model and an appropriate load model. We fit phase-type distributions to the data obtained from the testbed in order to represent the observed behaviour in a model that can be used, e.g., as the service process in a queueing model of our service-oriented system. The queueing model may then be used to analyse different restart policies, buffer sizes, or service disciplines. Results from the model can be fed back into the testbed, providing it with better fault and load models and thus closing the modelling loop.
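
    As a hedged illustration of the restart method the model evaluates (a minimal Monte Carlo sketch with invented parameters, not the paper's actual model): when service times follow a heavy-tailed hyperexponential distribution, aborting a request after a timeout and reissuing it can substantially reduce the mean completion time.

        import random

        def hyperexp():
            # Illustrative two-branch hyperexponential service time:
            # mostly fast (rate 1), occasionally very slow (rate 0.02).
            if random.random() < 0.95:
                return random.expovariate(1.0)
            return random.expovariate(0.02)

        def completion_time(timeout, max_tries=50):
            # Restart policy: abort after 'timeout' and reissue the
            # request; each attempt is independent.
            elapsed = 0.0
            for _ in range(max_tries):
                s = hyperexp()
                if s <= timeout:
                    return elapsed + s
                elapsed += timeout
            return elapsed

        random.seed(42)
        n = 100_000
        plain = sum(hyperexp() for _ in range(n)) / n
        restart = sum(completion_time(timeout=3.0) for _ in range(n)) / n
        print(f"mean completion without restart: {plain:.2f}")
        print(f"mean completion with restart:    {restart:.2f}")

    In a full study, the empirical service-time data would instead be fitted with a phase-type distribution and embedded as the service process of a queueing model, as described above.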

    Application Heartbeats for Software Performance and Health

    Adaptive, or self-aware, computing has been proposed as one method to help application programmers confront the growing complexity of multicore software development. However, existing approaches to adaptive systems are largely ad hoc and often fail to incorporate the true performance goals of the applications they are designed to support. This paper presents an enabling technology for adaptive computing systems: Application Heartbeats. The Application Heartbeats framework provides a simple, standard programming interface that applications can use to indicate their performance and that system software (and hardware) can use to query an application's performance. Several experiments demonstrate the simplicity and efficacy of the Application Heartbeats approach. First, the PARSEC benchmark suite is instrumented with Application Heartbeats to show the broad applicability of the interface. Then, an adaptive H.264 encoder is developed to show how applications might use Application Heartbeats internally. Next, an external resource scheduler is developed which assigns cores to an application based on its performance as specified with Application Heartbeats. Finally, the adaptive H.264 encoder is used to illustrate how Application Heartbeats can aid fault tolerance.
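
    The framework's actual interface is specified in the paper; purely as a hedged sketch of the idea (all names below are invented, not the framework's API), an application could emit one heartbeat per unit of work while an external observer queries the achieved heart rate against a goal window:

        import time

        class Heartbeat:
            # Hypothetical heartbeat registry: the application signals
            # progress, system software queries performance against a goal.
            def __init__(self, min_rate, max_rate, window=20):
                self.min_rate, self.max_rate = min_rate, max_rate  # beats/s
                self.window = window
                self.stamps = []

            def beat(self):
                # Called by the application once per unit of work
                # (e.g., once per encoded frame).
                self.stamps = (self.stamps + [time.monotonic()])[-self.window:]

            def rate(self):
                # Observed heart rate over the sliding window.
                if len(self.stamps) < 2:
                    return 0.0
                return (len(self.stamps) - 1) / (self.stamps[-1] - self.stamps[0])

            def status(self):
                # A scheduler could add cores below goal, reclaim above it.
                r = self.rate()
                if r < self.min_rate:
                    return "below goal"
                if r > self.max_rate:
                    return "above goal"
                return "on target"

        hb = Heartbeat(min_rate=25, max_rate=35)
        for frame in range(60):
            time.sleep(0.03)  # stand-in for encoding one frame
            hb.beat()
        print(f"{hb.rate():.1f} beats/s -> {hb.status()}")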

    Concurrent query analytics on distributed graph systems

    Large-scale graph problems, such as shortest-path finding or social media graph evaluations, are an important area of computer science. In recent years, influential graph applications such as PowerGraph or PowerLyra have led to a paradigm shift in distributed graph processing systems towards processing multiple parallel queries rather than a single global graph algorithm. Queries usually exhibit locality in graphs, i.e. they involve only a subset of the graph's vertices. Suitable partitioning and query synchronization approaches can minimize communication overhead and query latency by exploiting this locality. Additionally, partitioning algorithms must be dynamic, as the number and locality of queries can change over time. Existing graph processing systems are not optimized to exploit query locality or to adapt graph partitioning at runtime. In this thesis we present Q-Graph, an open-source, multi-tenant graph analytics system with dynamic graph repartitioning. Q-Graph's query-aware partitioning algorithm Q-Cut performs adaptive graph partitioning at runtime. Compared to static partitioning strategies, Q-Cut can exploit runtime knowledge about query locality and workload to improve the partitioning dynamically. Furthermore, a case study with an implementation for the shortest-path problem and point search queries is presented. We present evaluations showing the performance of Q-Graph and the effectiveness of Q-Cut. Measurements show that Q-Cut improves query processing performance by up to 60% and automatically adapts the partitioning to changing query workload and locality, outperforming partitioning methods that use domain knowledge.
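
    Q-Cut's precise heuristic is not given here; as a hedged sketch of query-aware repartitioning in general (invented for this summary, not the published algorithm), a system can periodically migrate a vertex to the partition whose queries access it most, provided remote accesses clearly dominate local ones:

        from collections import Counter, defaultdict

        def repartition_step(partition, accesses, threshold=2.0):
            # partition: vertex -> worker; accesses: (vertex, worker)
            # pairs observed while processing queries at runtime.
            votes = defaultdict(Counter)
            for vertex, worker in accesses:
                votes[vertex][worker] += 1
            moves = {}
            for vertex, counter in votes.items():
                best_worker, best = counter.most_common(1)[0]
                local = counter[partition[vertex]]
                # Migrate only when remote accesses clearly dominate local
                # ones, keeping the partitioning stable under noisy loads.
                if best_worker != partition[vertex] and best >= threshold * max(local, 1):
                    moves[vertex] = best_worker
            partition.update(moves)
            return moves

        partition = {"a": 0, "b": 0, "c": 1}
        accesses = [("a", 1), ("a", 1), ("a", 1), ("b", 0), ("c", 1)]
        print(repartition_step(partition, accesses))  # {'a': 1}
        print(partition)                              # {'a': 1, 'b': 0, 'c': 1}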

    SEEC: A Framework for Self-aware Computing

    As the complexity of computing systems increases, application programmers must be experts in their application domain and have the systems knowledge required to address the problems that arise from parallelism, power, energy, and reliability concerns. One approach to relieving this burden is to make use of self-aware computing systems, which automatically adjust their behavior to help applications achieve their goals. This paper presents the SEEC framework, a unified computational model designed to enable self-aware computing in both applications and system software. In the SEEC model, applications specify goals, system software specifies possible actions, and the SEEC framework is responsible for deciding how to use the available actions to meet the application-specified goals. The SEEC framework is built around a general and extensible control system which provides predictable behavior and allows SEEC to make decisions that achieve goals while optimizing resource utilization. To demonstrate the applicability of the SEEC framework, this paper presents five different self-aware systems built using SEEC. Case studies demonstrate how these systems can control the performance of the PARSEC benchmarks, optimize performance per watt for a video encoder, and respond to unexpected changes in the underlying environment. In general, these studies demonstrate that systems built using the SEEC framework are goal-oriented, predictable, adaptive, and extensible.
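
    As a hedged sketch of the kind of feedback loop such a control system closes (the gain, bounds, and the performance model below are invented, not SEEC's published controller): the application states a goal, the system owns an action such as core allocation, and an integral-style controller drives observed performance towards the goal.

        def control_loop(goal, measure, apply_action, steps=25, gain=0.5):
            # Integral-style controller: adjust the core allocation in
            # proportion to the normalized tracking error.
            cores = 1.0
            for _ in range(steps):
                perf = measure(round(cores))
                error = (goal - perf) / goal
                cores = min(16.0, max(1.0, cores * (1.0 + gain * error)))
                apply_action(round(cores))
            return round(cores)

        # Hypothetical plant: performance scales sub-linearly with cores.
        measure = lambda cores: 10.0 * cores ** 0.8
        chosen = control_loop(goal=50.0, measure=measure,
                              apply_action=lambda c: None)
        print("cores:", chosen, "perf:", round(measure(chosen), 1))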

    A Multi-Agent Architecture for An Intelligent Web-Based Educational System

    An intelligent educational system must constitute an adaptive system built on a multi-agent system architecture. The multi-agent architecture component provides self-organization, self-direction, and other control functionalities that are crucially important for an educational system. On the other hand, the adaptiveness of the system is necessary to provide customization, diversification, and interactional functionalities. Therefore, an educational system architecture that integrates multi-agent functionality [50] with adaptiveness can offer the learner the required independent learning experience. An educational system architecture is a complex structure with an intricate hierarchical organization in which the functional components of the system undergo sophisticated and unpredictable internal interactions to perform their functions. Hence, the system architecture must constitute adaptive and autonomous agents differentiated according to their functions, called multi-agent systems (MASs). The research paper proposes an adaptive hierarchical multi-agent educational system (AHMAES) [51] as an alternative to the traditional education delivery method. The document explains the various architectural characteristics of an adaptive multi-agent educational system and critically analyzes the system's factors for software quality attributes.
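
    As a hedged, minimal sketch of the differentiated-agent structure such an architecture implies (the agent roles and messages below are invented for illustration, not the system's design): a hierarchical coordinator routes learner events to function-specific agents.

        class Agent:
            # Hypothetical base class: an agent reacts to events in its role.
            def __init__(self, name):
                self.name = name
            def handle(self, event):
                raise NotImplementedError

        class TutorAgent(Agent):
            def handle(self, event):
                # Adapts the lesson plan to the learner's reported progress.
                return f"{self.name}: adjust difficulty after '{event}'"

        class ProfilerAgent(Agent):
            def handle(self, event):
                # Maintains the learner model that drives customization.
                return f"{self.name}: update learner profile with '{event}'"

        class Coordinator:
            # Hierarchical control layer: dispatches events to all agents.
            def __init__(self, agents):
                self.agents = agents
            def dispatch(self, event):
                return [agent.handle(event) for agent in self.agents]

        mas = Coordinator([TutorAgent("tutor"), ProfilerAgent("profiler")])
        for reply in mas.dispatch("quiz score: 60%"):
            print(reply)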

    Performance Observability and Monitoring of High Performance Computing with Microservices

    Traditionally, High Performance Computing (HPC) software has been built and deployed as bulk-synchronous, parallel executables based on the message-passing interface (MPI) programming model. The rise of data-oriented computing paradigms and an explosion in the variety of applications that need to be supported on HPC platforms have forced a rethink of the appropriate programming and execution models needed to integrate this new functionality. In situ workflows mark a paradigm shift in HPC software development methodologies, enabling a range of new applications, from user-level data services to machine learning (ML) workflows that run alongside traditional scientific simulations. By tracing the evolution of HPC software development over the past 30 years, this dissertation identifies the key elements and trends responsible for the emergence of coupled, distributed, in situ workflows. This dissertation's focus is on coupled in situ workflows involving composable, high-performance microservices. After outlining the motivation to enable performance observability of these services and why existing HPC performance tools and techniques cannot be applied in this context, this dissertation proposes a solution wherein a set of techniques gathers, analyzes, and orients performance data from different sources to generate observability. By leveraging microservice components initially designed to build high-performance data services, this dissertation demonstrates their broader applicability for building and deploying performance monitoring and visualization as services within an in situ workflow. The results from this dissertation suggest that: (1) integration of performance data from different sources is vital to understanding the performance of service components, (2) in situ (online) analysis of this performance data is needed to enable the adaptivity of distributed components and to manage monitoring data volume, (3) statistical modeling combined with performance observations can help generate better service configurations, and (4) services are a promising architectural choice for deploying in situ performance monitoring and visualization functionality. This dissertation includes previously published and co-authored material as well as unpublished co-authored material.
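
    As a hedged sketch of monitoring deployed as a service inside a workflow (the component names and reduction policy are invented, not the dissertation's implementation): components push timing events, and the service reduces them online so that monitoring data volume stays bounded and adaptive decisions can use the summaries.

        import statistics
        import time
        from collections import defaultdict, deque

        class MonitoringService:
            # Hypothetical in situ monitoring service: aggregates
            # performance events online instead of persisting raw traces.
            def __init__(self, window=1000):
                self.samples = defaultdict(lambda: deque(maxlen=window))

            def record(self, component, duration_s):
                # The bounded window keeps monitoring data volume in check.
                self.samples[component].append(duration_s)

            def summary(self):
                return {name: (statistics.mean(vals), statistics.pstdev(vals))
                        for name, vals in self.samples.items()}

        mon = MonitoringService()
        for _ in range(100):
            t0 = time.perf_counter()
            sum(i * i for i in range(10_000))  # stand-in for simulation work
            mon.record("simulation", time.perf_counter() - t0)
        mean, dev = mon.summary()["simulation"]
        print(f"simulation: mean={mean * 1e3:.2f} ms, stdev={dev * 1e3:.2f} ms")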

    A Quality-Driven Approach to Enable Decision-Making in Self-Adaptive Software

    Self-adaptive software systems are increasingly in demand. The driving forces are changes in the software “self” and “context”, particularly in distributed and pervasive applications. These systems provide self-* properties in order to keep requirements satisfied in different situations. Engineering self-adaptive software normally involves building the adaptable software and the adaptation manager. This PhD thesis focuses on the latter, especially on the design and implementation of the deciding process in an adaptation manager. For this purpose, a Quality-driven Framework for Engineering an Adaptation Manager (QFeam) is proposed, in which quality requirements play a key role as adaptation goals. The two major phases of QFeam are building the runtime adaptation model and designing the adaptation mechanism. The modeling phase investigates eliciting and specifying the key entities of the adaptation problem space, including goals, attributes, and actions. Three composition patterns are discussed to link these entities into an adaptation model: the goal-centric, attribute-action-coupling, and hybrid patterns. In the second phase, the adaptation mechanism is designed according to the pattern adopted in the model. Accordingly, three categories of mechanisms are discussed, among which the novel goal-ensemble mechanism is introduced. A concrete model and mechanism, the Goal-Attribute-Action Model (GAAM), is proposed based on the goal-centric pattern and the goal-ensemble mechanism. GAAM is implemented on top of the StarMX framework for Java-based systems. Several considerations are taken into account in QFeam: (i) the separation of adaptation knowledge from application knowledge, (ii) highlighting the role of adaptation goals, and (iii) modularity and reusability. Among these, emphasizing goals is the central tenet of QFeam, especially in order to address the challenge of supporting several self-* properties in the adaptation manager. Furthermore, QFeam aims at embedding a model in the adaptation manager, particularly in the goal-centric and hybrid patterns. The proposed framework focuses on mission-critical systems, including enterprise and service-oriented applications. Several empirical studies were conducted to put QFeam into practice and to evaluate GAAM in comparison with other adaptation models and mechanisms. Three case studies were selected for this purpose: the TPC-W bookstore application, a news application, and the CC2 VoIP call controller. Several research questions were set for each case study, and the findings indicate that the goal-ensemble mechanism and GAAM can outperform or work as well as a common rule-based approach. A notable difference is that the effort of building an adaptation manager based on the goal-centric pattern is lower than that of building it using the attribute-action-coupling pattern. Moreover, representing goals explicitly leads to better scalability and understandability of the adaptation manager. Overall, the experience of working on these three systems shows that QFeam improves the design and development process of the adaptation manager, particularly by highlighting the role of adaptation goals.
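
    As a hedged sketch of a goal-ensemble style decision step (not the published GAAM mechanism; the goals, actions, and weights are invented): each goal scores every candidate action against the current attribute readings, and the ensemble picks the action with the best overall score.

        def goal_ensemble_decide(goals, actions, attributes):
            # goals: functions (attributes, action) -> score in [0, 1]
            # actions: candidate adaptation actions for the manager
            return max(actions, key=lambda a: sum(g(attributes, a) for g in goals))

        def latency_goal(attrs, action):
            # Fully satisfied when projected latency meets a 200 ms target.
            projected = attrs["latency_ms"] * action["latency_factor"]
            return 1.0 if projected <= 200 else 200 / projected

        def cost_goal(attrs, action):
            # Prefer fewer servers; weighted below the latency goal.
            return 0.5 / action["servers"]

        actions = [
            {"name": "scale_out", "servers": 4, "latency_factor": 0.5},
            {"name": "keep",      "servers": 2, "latency_factor": 1.0},
            {"name": "scale_in",  "servers": 1, "latency_factor": 2.0},
        ]
        attrs = {"latency_ms": 350}
        choice = goal_ensemble_decide([latency_goal, cost_goal], actions, attrs)
        print("chosen action:", choice["name"])  # scale_out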

    Managing Event-Driven Applications in Heterogeneous Fog Infrastructures

    The steady increase in digitalization propelled by the Internet of Things (IoT) has led to a deluge of generated data at an unprecedented pace. The promise of realizing data-driven decision-making is thereby a major innovation driver in a myriad of industries. Based on the widely used event processing paradigm, event-driven applications make it possible to analyze data in the form of event streams and to extract relevant information in a timely manner. Most recently, graphical flow-based approaches in no-code event processing systems have been introduced to significantly lower technological entry barriers. This empowers non-technical citizen technologists to create event-driven applications comprised of multiple interconnected event-driven processing services. Still, today’s event-driven applications are focused on centralized cloud deployments that come with inevitable drawbacks, especially in IoT scenarios that require fast results, are limited by the available bandwidth, or are bound by privacy and security regulations. Despite recent advances in the area of fog computing, which mitigate these shortcomings by extending the cloud and moving certain processing closer to the event source, such approaches are hardly established in existing systems. Inherent fog computing characteristics, especially the heterogeneity of resources, alongside novel application management demands, particularly geo-distribution and dynamic adaptation, pose challenges that are currently insufficiently addressed and hinder the transition to a next generation of no-code event processing systems. The contributions of this thesis enable citizen technologists to manage event-driven applications in heterogeneous fog infrastructures along the application life cycle. To this end, an approach for holistic application management is proposed that abstracts citizen technologists from the underlying technicalities. This allows present event processing systems to evolve and advances the democratization of event-driven application management in fog computing. The individual contributions of this thesis are summarized as follows:
    1. A model, manifested in a geo-distributed system architecture, to semantically describe characteristics specific to node resources, event-driven applications, and their management, blending the application-centric and infrastructure-centric realms.
    2. Concepts for the geo-distributed deployment and operation of event-driven applications, alongside strategies for flexible event stream management.
    3. A methodology to support the evolution of event-driven applications, including methods to dynamically reconfigure, migrate, and offload individual event-driven processing services at run-time.
    The contributions are introduced, applied, and evaluated along two scenarios from the manufacturing and logistics domains.
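
    As a hedged sketch of one such management decision, geo-distributed placement (the node capabilities, tags, and greedy strategy below are invented for this summary, not the thesis's method): each event-driven processing service is placed on the feasible fog node closest to its event source.

        def place(services, nodes):
            # Greedy placement: for each service, pick the lowest-latency
            # node that satisfies its resource demand and capability tag.
            placement = {}
            for svc in services:
                candidates = [n for n in nodes
                              if n["cpu_free"] >= svc["cpu"]
                              and svc["tag"] in n["tags"]]
                if not candidates:
                    raise RuntimeError(f"no feasible node for {svc['name']}")
                best = min(candidates, key=lambda n: n["latency_ms"])
                best["cpu_free"] -= svc["cpu"]   # reserve capacity
                placement[svc["name"]] = best["name"]
            return placement

        nodes = [
            {"name": "edge-gw",  "cpu_free": 2,  "tags": {"arm"},        "latency_ms": 2},
            {"name": "fog-rack", "cpu_free": 8,  "tags": {"x86", "arm"}, "latency_ms": 10},
            {"name": "cloud",    "cpu_free": 64, "tags": {"x86"},        "latency_ms": 80},
        ]
        services = [
            {"name": "filter", "cpu": 1, "tag": "arm"},  # lightweight, near the sensor
            {"name": "enrich", "cpu": 4, "tag": "x86"},
        ]
        print(place(services, nodes))  # {'filter': 'edge-gw', 'enrich': 'fog-rack'}

    A run-time manager in this spirit would re-run such a decision when workload or node availability changes, then migrate or offload the affected services accordingly.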