
    Dynamic Model-based Management of Service-Oriented Infrastructure.

    Models are an effective tool for systems and software design. They allow software architects to abstract away irrelevant details. Those qualities are also useful for the technical management of networks, systems, and software, such as those that compose service-oriented architectures. Models can provide a set of well-defined abstractions over the distributed, heterogeneous service infrastructure that enable its automated management. We propose to use the managed system as a source of dynamically generated runtime models and to decompose management processes into compositions of model transformations. We have created an autonomic service deployment and configuration architecture that obtains, analyzes, and transforms system models to apply the required actions while remaining oblivious to low-level details. An instrumentation layer automatically builds these models and translates the planned management actions into operations on the system. We illustrate these concepts with a distributed service update operation.
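
    As a rough illustration of the idea, the following Python sketch (all names are hypothetical, not taken from the paper) expresses a management process as a composition of model transformations: a runtime model of each service is analyzed and, where needed, transformed into abstract update actions that an instrumentation layer would map onto the real system.

        # A minimal sketch, assuming hypothetical model and action names.
        from dataclasses import dataclass, field

        @dataclass
        class ServiceModel:
            """Runtime model of a deployed service, built by the instrumentation layer."""
            name: str
            version: str
            dependencies: list = field(default_factory=list)

        def analyze(model: ServiceModel, target_version: str) -> bool:
            # Analysis transformation: decide whether an update action is required.
            return model.version != target_version

        def plan_update(model: ServiceModel, target_version: str) -> list:
            # Planning transformation: produce abstract actions; an instrumentation
            # layer would map these onto low-level operations.
            return [("stop", model.name),
                    ("deploy", model.name, target_version),
                    ("start", model.name)]

        def manage(models, target_version):
            actions = []
            for m in models:
                if analyze(m, target_version):
                    actions.extend(plan_update(m, target_version))
            return actions

        if __name__ == "__main__":
            fleet = [ServiceModel("billing", "1.2"), ServiceModel("catalog", "2.0")]
            print(manage(fleet, "2.0"))  # planned actions for the distributed update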

    Application of a new service-oriented architecture (SOA) paradigm on the design of a crisis management distributed system

    The complexity and intensity of crisis-related situations require the use of advanced distributed systems infrastructures. In order to develop such infrastructures, specific architectural approaches need to be applied, such as Component-Based Modelling and Object-Oriented, Aspect-Oriented, and Service-Oriented Design. This paper focuses on the use of Service-Oriented Design techniques for the development of the ATHENA Crisis Management Distributed System. The ATHENA Crisis Management Distributed System uses data generated by social media to evaluate the severity of a crisis and to coordinate the appropriate response measures. The paper presents a new definition for Service-Oriented Architecture (SOA) and specifies the benefits that this new definition brings to the development of the ATHENA system. Useful conclusions are also drawn about how the definition accommodates the different technical backgrounds of users.
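
    The following Python sketch is a hypothetical illustration, not the ATHENA implementation: a single well-defined service operation that consumes social-media reports and aggregates a crude severity score, in the spirit of the service-oriented decomposition described above.

        # A minimal sketch, assuming invented report fields and keyword weights.
        from dataclasses import dataclass

        @dataclass
        class CrisisReport:
            source: str      # e.g. a social-media platform
            text: str
            geotagged: bool

        KEYWORDS = {"flood": 3, "fire": 4, "explosion": 5}

        def evaluate_severity(reports):
            """Service operation: aggregate a crude severity score from reports."""
            score = 0
            for r in reports:
                score += max((w for k, w in KEYWORDS.items() if k in r.text.lower()),
                             default=0)
                score += 1 if r.geotagged else 0
            return score

        if __name__ == "__main__":
            reports = [CrisisReport("twitter", "Fire spreading near the station", True)]
            print(evaluate_severity(reports))  # 5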

    Collocation Games and Their Application to Distributed Resource Management

    We introduce Collocation Games as the basis of a general framework for modeling, analyzing, and facilitating the interactions between the various stakeholders in distributed systems in general, and in cloud computing environments in particular. Cloud computing enables fixed-capacity (processing, communication, and storage) resources to be offered by infrastructure providers as commodities for sale at a fixed cost in an open marketplace to independent, rational parties (players) interested in setting up their own applications over the Internet. Virtualization technologies enable the partitioning of such fixed-capacity resources so as to allow each player to dynamically acquire appropriate fractions of the resources for unencumbered use. In such a paradigm, the resource management problem reduces to that of partitioning the entire set of applications (players) into subsets, each of which is assigned to fixed-capacity cloud resources. If the infrastructure and the various applications are under a single administrative domain, this partitioning reduces to an optimization problem whose objective is to minimize the overall deployment cost. In a marketplace, in which the infrastructure provider is interested in maximizing its own profit, and in which each player is interested in minimizing its own cost, it should be evident that a global optimization is precisely the wrong framework. Rather, in this paper we use a game-theoretic framework in which the assignment of players to fixed-capacity resources is the outcome of a strategic "Collocation Game". Although we show that determining the existence of an equilibrium for collocation games in general is NP-hard, we present a number of simplified, practically-motivated variants of the collocation game for which we establish convergence to a Nash Equilibrium, and for which we derive convergence and price of anarchy bounds. In addition to these analytical results, we present an experimental evaluation of implementations of some of these variants for cloud infrastructures consisting of a collection of multidimensional resources of homogeneous or heterogeneous capacities. Experimental results using trace-driven simulations and synthetically generated datasets corroborate our analytical results and also illustrate how collocation games offer a feasible distributed resource management alternative for autonomic/self-organizing systems, in which the adoption of a global optimization approach (centralized or distributed) would be neither practical nor justifiable.
    NSF (CCF-0820138, CSR-0720604, EFRI-0735974, CNS-0524477, CNS-052016, CCR-0635102); Universidad Pontificia Bolivariana; COLCIENCIAS–Instituto Colombiano para el Desarrollo de la Ciencia y la Tecnología "Francisco José de Caldas".
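
    The sketch below is a deliberately simplified fair-share variant of our own choosing, not one of the paper's exact games. It shows best-response dynamics in a toy collocation game: each player repeatedly moves to the host whose fixed cost, split evenly among its occupants, minimizes the player's own share, and the loop stops when no player can improve, i.e. at a Nash equilibrium.

        # A minimal sketch, assuming even cost sharing among a host's occupants.
        def best_response_dynamics(num_players, host_costs, max_rounds=100):
            # assignment[i] = index of the host player i is collocated on;
            # everyone starts on host 0.
            assignment = [0] * num_players

            for _ in range(max_rounds):
                moved = False
                for i in range(num_players):
                    current = assignment[i]
                    # Player i's cost share if it stays put...
                    best_host = current
                    best_cost = host_costs[current] / assignment.count(current)
                    # ...versus its share if it joins each other host.
                    for h in range(len(host_costs)):
                        if h != current:
                            cost = host_costs[h] / (assignment.count(h) + 1)
                            if cost < best_cost:
                                best_host, best_cost = h, cost
                    if best_host != current:
                        assignment[i] = best_host
                        moved = True
                if not moved:
                    return assignment  # no player can improve: Nash equilibrium
            return assignment

        if __name__ == "__main__":
            print(best_response_dynamics(num_players=5, host_costs=[10.0, 4.0, 6.0]))

    In this example the dynamics terminate immediately with all five players sharing the cost-10 host: a Nash equilibrium (a lone deviator would pay the full cost of 4 on host 1, more than its current share of 2) whose total cost is 2.5 times the optimum of collocating everyone on the cheapest host. This is exactly the kind of gap that price-of-anarchy bounds quantify.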

    Prognostic Reasoner based adaptive power management system for a more electric aircraft

    This research work presents a novel approach to the design and development of an adaptive power management system, framed in the Prognostics and Health Monitoring (PHM) perspective of an Electrical Power Generation and Distribution System (EPGS). PHM algorithms were developed to detect the health status of EPGS components, accurately predict failures, calculate the Remaining Useful Life (RUL), and, in many cases, reconfigure around identified system and subsystem faults. By introducing this approach in the electrical power management system controller, we gain a few minutes of lead time before failures, with an accurate prediction horizon on critical system and subsystem components whose failure may cause catastrophic secondary damage, including loss of the aircraft. The warning time on critical components and the related system reconfiguration must, as the minimum criterion, permit a safe return to landing, and would enhance safety. A distributed architecture has been developed for dynamic power management of the electrical distribution system, by which all electrically supplied loads can be effectively controlled. A hybrid mathematical model based on the direct-quadrature (d-q) axis transformation of the generator has been formulated for studying various structural and parametric faults. The different failure modes were generated by injecting faults into the electrical power system using a fault injection mechanism. The data captured during these studies have been recorded to form a “Failure Database” for the electrical system. A hardware-in-the-loop experimental study was carried out to validate the power management algorithm with an FPGA-DSP controller. In order to meet the reliability requirements, a tri-redundant electrical power management system based on DSP and FPGA has been developed.
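
    For concreteness, here is a minimal Python sketch of the direct-quadrature (d-q) axis transformation (the Park transform) in its amplitude-invariant convention; the abstract does not specify which convention the generator model uses, so this is one common choice.

        # A minimal sketch of the Park (abc -> d-q) transform, amplitude-invariant form.
        import math

        def abc_to_dq(ia, ib, ic, theta):
            """Map three-phase quantities (a, b, c) onto the rotating d-q frame."""
            two_thirds = 2.0 / 3.0
            d = two_thirds * (ia * math.cos(theta)
                              + ib * math.cos(theta - 2 * math.pi / 3)
                              + ic * math.cos(theta + 2 * math.pi / 3))
            q = -two_thirds * (ia * math.sin(theta)
                               + ib * math.sin(theta - 2 * math.pi / 3)
                               + ic * math.sin(theta + 2 * math.pi / 3))
            return d, q

        if __name__ == "__main__":
            # A balanced three-phase set at theta = 0 maps onto a constant d
            # component (d = 1, q = 0); asymmetries caused by faults show up
            # as deviations in the d-q frame.
            print(abc_to_dq(1.0, -0.5, -0.5, 0.0))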

    A Hierarchical Filtering-Based Monitoring Architecture for Large-scale Distributed Systems

    On-line monitoring is essential for observing and improving the reliability and performance of large-scale distributed (LSD) systems. In an LSD environment, large numbers of events are generated by system components during their execution and interaction with external objects (e.g. users or processes). These events must be monitored to accurately determine the run-time behavior of an LSD system and to obtain status information that is required for debugging and steering applications. However, the manner in which events are generated in an LSD system is complex and presents a number of challenges for an on-line monitoring system. Correlated events are generated concurrently and can occur at multiple locations distributed throughout the environment. This makes monitoring an intricate task and complicates the management decision process. Furthermore, the large number of entities and the geographical distribution inherent in LSD systems increase the difficulty of addressing traditional issues, such as performance bottlenecks, scalability, and application perturbation. This dissertation proposes a scalable, high-performance, dynamic, flexible and non-intrusive monitoring architecture for LSD systems. The resulting architecture detects and classifies interesting primitive and composite events and performs either a corrective or steering action. When appropriate, information is disseminated to management applications, such as reactive control and debugging tools. The monitoring architecture employs a novel hierarchical event filtering approach that distributes the monitoring load and limits event propagation. This significantly improves scalability and performance while minimizing the monitoring intrusiveness. The architecture provides dynamic monitoring capabilities through: subscription policies that enable application developers to add, delete and modify monitoring demands on-the-fly, an adaptable configuration that accommodates environmental changes, and a programmable environment that facilitates development of self-directed monitoring tasks. Increased flexibility is achieved through a declarative and comprehensive monitoring language, a simple code instrumentation process, and automated monitoring administration. These elements substantially relieve the burden imposed by using on-line distributed monitoring systems. In addition, the monitoring system provides techniques to manage the trade-offs between various monitoring objectives. The proposed solution offers improvements over related works by presenting a comprehensive architecture that considers the requirements and implied objectives for monitoring large-scale distributed systems. This architecture is referred to as the HiFi monitoring system. To demonstrate effectiveness at debugging and steering LSD systems, the HiFi monitoring system has been implemented at Old Dominion University for monitoring the Interactive Remote Instruction (IRI) system. The results from this case study validate that the HiFi system achieves the objectives outlined in this thesis.
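
    The following Python sketch uses a hypothetical API, not HiFi's actual interface, to illustrate the hierarchical filtering idea: each level applies its own subscription predicate, so uninteresting events are discarded close to their source and only matching events propagate upward.

        # A minimal sketch, assuming invented node names and event fields.
        class FilterNode:
            def __init__(self, name, predicate, parent=None):
                self.name = name
                self.predicate = predicate  # subscription policy for this level
                self.parent = parent
                self.subscribers = []

            def subscribe(self, callback):
                # Subscription policies and consumers can be added on the fly.
                self.subscribers.append(callback)

            def publish(self, event):
                if not self.predicate(event):
                    return                      # filtered locally: never propagates
                for cb in self.subscribers:
                    cb(self.name, event)
                if self.parent:
                    self.parent.publish(event)  # escalate to the next level

        if __name__ == "__main__":
            root = FilterNode("manager", lambda e: e["severity"] >= 3)
            leaf = FilterNode("host-42", lambda e: e["type"] == "error", parent=root)
            root.subscribe(lambda src, e: print(f"[{src}] escalated: {e}"))
            leaf.publish({"type": "error", "severity": 4})  # reaches the manager
            leaf.publish({"type": "error", "severity": 1})  # dropped at the root
            leaf.publish({"type": "info", "severity": 5})   # dropped at the leaf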

    EdgeFaaS: A Function-based Framework for Edge Computing

    The rapid growth of data generated from Internet of Things (IoT) devices such as smartphones and smart home devices presents new challenges to cloud computing in transferring, storing, and processing the data. With increasingly powerful edge devices, edge computing, on the other hand, has the potential to improve responsiveness, privacy, and cost efficiency. However, resources across the cloud and edge are highly distributed and highly diverse. To address these challenges, this paper proposes EdgeFaaS, a Function-as-a-Service (FaaS) based computing framework that supports the flexible, convenient, and optimized use of distributed and heterogeneous resources across IoT, edge, and cloud systems. EdgeFaaS allows cluster resources and individual devices to be managed under the same framework and to provide computational and storage resources for functions. It provides virtual function and virtual storage interfaces for consistent function and storage management across heterogeneous compute and storage resources. It automatically optimizes the scheduling of functions and the placement of data according to their performance and privacy requirements. EdgeFaaS is evaluated on two edge workflows: a video analytics workflow and a federated learning workflow, both of which are representative edge applications and involve large amounts of input data generated from edge devices.
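
    As a hypothetical illustration of privacy- and performance-aware placement (the names and the policy below are ours, not the EdgeFaaS API), the following Python sketch pins privacy-sensitive functions to edge resources and otherwise picks the lowest-latency resource with spare capacity.

        # A minimal sketch, assuming a two-tier resource pool and invented names.
        from dataclasses import dataclass

        @dataclass
        class Resource:
            name: str
            tier: str          # "edge" or "cloud"
            latency_ms: float
            free_slots: int

        def place(function_name, privacy_sensitive, resources):
            candidates = [r for r in resources if r.free_slots > 0]
            if privacy_sensitive:
                # Privacy requirement: keep the function near its data source.
                candidates = [r for r in candidates if r.tier == "edge"]
            if not candidates:
                raise RuntimeError(f"no resource available for {function_name}")
            chosen = min(candidates, key=lambda r: r.latency_ms)
            chosen.free_slots -= 1
            return chosen.name

        if __name__ == "__main__":
            pool = [Resource("cam-gateway", "edge", 5.0, 1),
                    Resource("region-dc", "cloud", 40.0, 8)]
            print(place("face-blur", privacy_sensitive=True, resources=pool))    # edge
            print(place("report-gen", privacy_sensitive=False, resources=pool))  # cloud (edge full)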

    Database support of detector operation and data analysis in the DEAP-3600 Dark Matter experiment

    The DEAP-3600 detector searches for dark matter interactions on a 3.3 tonne liquid argon target. Over nearly a decade, from the start of detector construction through the end of the data analysis phase, well over 200 scientists will have contributed to the project. The DEAP-3600 detector will amass in excess of 900 TB of data representing more than 10^10 particle interactions, a few of which could be from dark matter. At the same time, metadata exceeding 80 GB will be generated. This metadata is crucial for organizing and interpreting the dark matter search data and contains both structured and unstructured information. The scale of the data collected, the important role of metadata in interpreting it, the number of people involved, and the long lifetime of the project necessitate an industrialized approach to metadata management. We describe how the CouchDB and PostgreSQL database systems were integrated into the DEAP detector operation and analysis workflows. This integration provides unified, distributed access to both structured (PostgreSQL) and unstructured (CouchDB) metadata at runtime of the data analysis software. It also supports operational and reporting requirements.
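
    A minimal Python sketch of what such unified runtime access might look like (the schema, database names, document IDs, and credentials below are hypothetical, not DEAP's actual configuration): structured run metadata is read from PostgreSQL with psycopg2, and an unstructured document is fetched from CouchDB over its standard HTTP/JSON API.

        # A minimal sketch, assuming an invented "runs" table and "shift_logs" database.
        import psycopg2   # PostgreSQL driver
        import requests   # CouchDB speaks plain HTTP/JSON

        def load_run_metadata(run_number):
            # Structured metadata: e.g. per-run detector conditions.
            conn = psycopg2.connect("dbname=deap_meta user=analysis host=localhost")
            try:
                with conn.cursor() as cur:
                    cur.execute(
                        "SELECT start_time, livetime_s FROM runs WHERE run_number = %s",
                        (run_number,))
                    structured = cur.fetchone()
            finally:
                conn.close()

            # Unstructured metadata: a free-form document keyed by run.
            resp = requests.get(f"http://localhost:5984/shift_logs/run-{run_number}")
            resp.raise_for_status()
            unstructured = resp.json()

            return structured, unstructured

        if __name__ == "__main__":
            row, doc = load_run_metadata(18700)
            print(row, doc.get("comments"))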