100 research outputs found

    Managing Dynamic Enterprise and Urgent Workloads on Clouds Using Layered Queuing and Historical Performance Models

    No full text
    The automatic allocation of enterprise workload to resources can be enhanced by the ability to make what-if response time predictions while different allocations are being considered. We experimentally investigate a historical and a layered queuing performance model and show how they can provide a good level of support for a dynamic, urgent cloud environment. Using these models, we define, implement and experimentally investigate the effectiveness of a prediction-based cloud workload and resource management algorithm. Based on these experimental analyses we: (i) comparatively evaluate the layered queuing and historical techniques; (ii) evaluate the effectiveness of the management algorithm in different operating scenarios; and (iii) provide guidance on using prediction-based workload and resource management.
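The what-if idea can be sketched in a few lines (our own illustration, not the paper's algorithm; the function names, the naive nearest-load historical model, and all numbers below are invented): a historical model predicts each server's response time under a candidate allocation, and the manager picks the allocation with the lowest prediction.

```python
# Hypothetical sketch: choosing an allocation by comparing what-if
# response-time predictions from a simple historical model.
# All names and numbers are illustrative, not from the paper.

def predict_response(history, proposed_load):
    """Predict mean response time from recorded (load, response) samples
    by averaging samples recorded at a similar load level."""
    similar = [r for load, r in history if abs(load - proposed_load) <= 1]
    return sum(similar) / len(similar) if similar else float("inf")

def best_server(servers, histories):
    """Pick the server whose predicted response time after accepting one
    more job is lowest (a what-if comparison across allocations)."""
    return min(servers,
               key=lambda s: predict_response(histories[s], servers[s] + 1))

servers = {"A": 3, "B": 5}            # current number of jobs per server
histories = {
    "A": [(3, 1.2), (4, 1.8), (5, 2.9)],
    "B": [(5, 2.0), (6, 3.5), (7, 5.1)],
}
print(best_server(servers, histories))  # → A
```

A layered queuing model would replace `predict_response` with a solved queueing network, but the surrounding decision loop stays the same.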

    Design and Validation of an Analytical Model to Evaluate Monitoring Frameworks Limits

    Get PDF
    It is essential that a monitoring system be designed with performance and scalability in mind. But due to the diversity and complexity of both the monitoring and the monitored systems, it is currently difficult to reason about performance and scalability using ad hoc techniques. Thus, simulation is required and analytical models based on well-established techniques such as queueing theory have to be developed. In this paper we provide an analytical model of the behaviour of the commonly used manager-agent monitoring frameworks in two scenarios: single manager-single agent and single manager-multiple agents. The two models enable automated estimation of the scalability limit of the two types of management monitoring schemes with respect to a performance metric such as the monitoring delay. We validate the models through simulation, based on parameter values obtained from performance measurements of a live JMX-based monitoring application in the two scenarios.
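The single manager-multiple agents case can be illustrated with a textbook M/M/1 approximation (an assumption made for this sketch, not necessarily the paper's model; all parameter values below are invented): the manager is a single queue fed by all agents, and the scalability limit is the largest agent count whose mean monitoring delay stays within a budget.

```python
# Illustrative M/M/1 sketch of a single-manager / multiple-agents
# monitoring scheme. Parameter values are invented for the example.

def monitoring_delay(num_agents, rate_per_agent, service_rate):
    """Mean time a monitoring report spends at the manager (queueing
    plus processing), assuming Poisson arrivals and exponential service."""
    arrival_rate = num_agents * rate_per_agent
    if arrival_rate >= service_rate:
        return float("inf")                      # manager saturated
    return 1.0 / (service_rate - arrival_rate)   # M/M/1 mean sojourn time

def scalability_limit(rate_per_agent, service_rate, max_delay):
    """Largest number of agents whose mean delay stays within max_delay."""
    n = 0
    while monitoring_delay(n + 1, rate_per_agent, service_rate) <= max_delay:
        n += 1
    return n

# e.g. 2 reports/s per agent, manager serves 100 reports/s, 0.5 s budget
print(scalability_limit(2.0, 100.0, 0.5))  # → 49
```

Solving 1/(μ − nλ) ≤ d for n gives the same limit in closed form; the loop above just makes the saturation behaviour explicit.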

    A control theory foundation for self-managing computing systems

    Full text link

    Existing and Required Modeling Capabilities for Evaluating ATM Systems and Concepts

    Get PDF
    ATM systems throughout the world are entering a period of major transition and change. The combination of important technological developments and the globalization of the air transportation industry has necessitated a reexamination of some of the fundamental premises of existing Air Traffic Management (ATM) concepts. New ATM concepts have to be examined, concepts that may place more emphasis on: strategic traffic management; planning and control; partial decentralization of decision-making; and added reliance on the aircraft to carry out strategic ATM plans, with ground controllers confined primarily to a monitoring and supervisory role. 'Free Flight' is a case in point. In order to study, evaluate and validate such new concepts, the ATM community will have to rely heavily on models and computer-based tools/utilities, covering a wide range of issues and metrics related to safety, capacity and efficiency. The state of the art in such modeling support is adequate in some respects, but clearly deficient in others. It is the objective of this study to assist in: (1) assessing the strengths and weaknesses of existing fast-time models and tools for the study of ATM systems and concepts and (2) identifying and prioritizing the requirements for the development of additional modeling capabilities in the near future. A three-stage process was followed for this purpose: 1. Through the analysis of two case studies involving future ATM system scenarios, as well as through expert assessment, modeling capabilities and supporting tools needed for testing and validating future ATM systems and concepts were identified and described. 2. Existing fast-time ATM models and support tools were reviewed and assessed with regard to the degree to which they offer the capabilities identified under Step 1. 3. The findings of Steps 1 and 2 were combined to draw conclusions about (1) the best capabilities currently existing, (2) the types of concept testing and validation that can be carried out reliably with such existing capabilities and (3) the currently unavailable modeling capabilities that should receive high priority for near-term research and development. It should be emphasized that the study is concerned only with the class of 'fast-time' analytical and simulation models. 'Real-time' models, which typically involve humans-in-the-loop, comprise another extensive class that is not addressed in this report. However, the relationship between some of the fast-time models reviewed and a few well-known real-time models is identified in several parts of this report, and the potential benefits from the combined use of these two classes of models, a very important subject, are discussed in chapters 4 and 7.

    A Control-Theoretic Methodology for Adaptive Structured Parallel Computations

    Get PDF
    Adaptivity for distributed parallel applications is an essential feature whose importance has been assessed in many research fields (e.g. scientific computations, large-scale real-time simulation systems and emergency management applications). Especially for high-performance computing, this feature is of special interest in order to properly and promptly respond to time-varying QoS requirements, to react to uncontrollable environmental effects influencing the underlying execution platform and to efficiently deal with highly irregular parallel problems. In this scenario the Structured Parallel Programming paradigm is a cornerstone for expressing adaptive parallel programs: the high degree of composability of parallelization schemes, and their QoS predictability formally expressed by performance models, are basic tools for introducing dynamic reconfiguration processes in adaptive applications. These reconfigurations are not limited to implementation aspects (e.g. parallelism degree modifications): parallel versions with different structures can also be expressed for the same computation, featuring different levels of performance, memory utilization, energy consumption, and exploitation of the memory hierarchies. Over the last decade several programming models and research frameworks have been developed aimed at defining tools and strategies for expressing adaptive parallel applications. Notwithstanding this notable research effort, properties like the optimality of the application execution and the stability of control decisions are not sufficiently studied in the existing work. For this reason this thesis presents pioneering research on formal theoretical tools founded on Control Theory and Game Theory techniques.
Based on these approaches, we introduce a formal model for controlling distributed parallel applications represented by computational graphs of structured parallelism schemes (also called skeleton-based parallelism). Starting from the performance predictability of structured parallelism schemes, in this thesis we provide a formalization of the concept of an adaptive parallel module performing structured parallel computations. The module behavior is described in terms of a Hybrid System abstraction, and reconfigurations are driven by a Predictive Control approach. Experimental results show the effectiveness of this work in terms of execution cost reduction as well as the stability degree of a system reconfiguration, i.e. how long a reconfiguration choice remains useful for targeting the required QoS levels. This thesis also addresses the issue of controlling large-scale distributed applications composed of several interacting adaptive components. After an overview of the existing control-theoretic approaches (e.g. based on decentralized, distributed or hierarchical structures of controllers), we introduce a methodology for distributed predictive control. For controlling computational graphs, the overall control problem consists of a set of coupled control sub-problems, one for each application module. The decomposition issue has a twofold nature: first, we need to model the coupling relationships between control sub-problems; second, we need to introduce proper notions of negotiation and convergence in the control decisions collectively taken by the parallel modules of the application graph. This thesis provides a formalization through basic concepts of Non-cooperative Games and Cooperative Optimization.
In the notable context of the distributed control of performance and resource utilization, we provide a formal description of the control problem, with results for equilibrium point existence and a comparison of control optimality under different adaptation strategies and interaction protocols. Discussion and a first validation of the proposed techniques are provided through experiments performed in a simulation environment.
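The negotiation-and-convergence idea can be sketched as an iterated best-response search for a Nash equilibrium (a toy construction of ours, not the thesis's formal model; the cost functions, the shared core budget, and all numbers are invented): two modules take turns choosing a parallelism degree that minimises their own cost given the other's choice, stopping at a fixed point.

```python
# Toy best-response negotiation between two adaptive modules sharing a
# core budget. A fixed point of the loop is a Nash equilibrium of the
# underlying non-cooperative game. All numbers are invented.

def best_response(my_cost, others_degree, budget):
    """Pick the parallelism degree in [1, budget - others_degree]
    that minimises this module's cost."""
    return min(range(1, budget - others_degree + 1), key=my_cost)

def negotiate(cost_a, cost_b, budget, rounds=20):
    """Alternate best responses until the joint choice stops changing."""
    a, b = 1, 1
    for _ in range(rounds):
        new_a = best_response(cost_a, b, budget)
        new_b = best_response(cost_b, new_a, budget)
        if (new_a, new_b) == (a, b):
            break                      # fixed point: equilibrium reached
        a, b = new_a, new_b
    return a, b

# cost = execution-time penalty for too little parallelism plus a
# per-core resource charge (coefficients invented for the example)
cost_a = lambda d: 100 / d + 4 * d
cost_b = lambda d: 60 / d + 4 * d
print(negotiate(cost_a, cost_b, budget=10))  # → (5, 4)
```

Convergence of such alternating schemes is not guaranteed in general, which is precisely why the thesis studies equilibrium existence and interaction protocols formally.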

    Simulation modelling of spatial problems

    Get PDF
    The thesis presents a simulation modelling strategy for spatial problems which uses a data structure based on spatial relationships. Using this network-based approach, two domain-specific data-driven models are developed in which the movement of people is modelled as a quasi-continuous process. The development of simulation modelling technology is examined to find reasons why there should be a reluctance to use the technique. With particular reference to problems which are spatially related, the established simulation modelling techniques, together with their diagrammatic representations, are evaluated for their helpfulness at the model-building stage. Using a specimen example, it is demonstrated that the commonly used approaches for digital discrete event simulation, which use a procedural paradigm, give little help with problems which involve the allocation of a resource and have spatial constraints. Two domain-specific generic models are demonstrated which adopt an object-oriented approach, for which the model description, including the logical constraints, is given in the data file. A method for modelling the movement of people at different levels of congestion as a quasi-continuous process is validated using results from reported surveys of people's movement rates and direct observations, and this is applied in both models. The first models the emergency evacuation of a building, using a graph structure to represent the spatial components. This is implemented using object-oriented code and test runs are compared with evacuation times from a building at the University of North London. The second provides an experimental tool for comparing the effect upon ward function of different layouts and was influenced by a published survey of a nurse activity analysis carried out in fourteen different wards.
The nurse activity model uses two graph structures and an object class to model the nurses, who move with reference to, and informed by, the spatial graph structure. The successful application of the method in the two problem domains confirms its potential usefulness for spatial problems.
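The quasi-continuous movement idea can be sketched as stepping a person along a graph edge at a density-dependent speed (an illustrative construction of ours; the linear speed/density curve and every constant below are invented, whereas the thesis calibrates movement rates from surveys and observations):

```python
# Hypothetical sketch of quasi-continuous movement: each small time
# step, a person advances by a distance that depends on local crowd
# density. The linear speed/density curve and constants are invented.

def walking_speed(density, free_speed=1.5, jam_density=5.0):
    """Speed (m/s) falls linearly with density (people per square
    metre), reaching zero at the jam density."""
    return max(0.0, free_speed * (1.0 - density / jam_density))

def traverse_time(edge_length, density, dt=0.1):
    """Step a person along one edge of the spatial graph until its
    full length has been covered; return the elapsed time (s)."""
    pos, t = 0.0, 0.0
    while pos < edge_length:
        pos += walking_speed(density) * dt
        t += dt
    return round(t, 1)

print(traverse_time(10.0, 0.0))   # free flow
print(traverse_time(10.0, 4.0))   # congested: roughly 5x slower
```

An evacuation or ward-layout model would evaluate `walking_speed` from the local occupancy of each edge at every step, which is what makes the process quasi-continuous rather than event-driven.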

    Implementing reusable solvers : an object-oriented framework for operations research algorithms

    Get PDF
    Thesis (Ph.D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 1998. Includes bibliographical references (p. 325-338) and indexes. By John Douglas Ruark, Ph.D.