
    Developing Real-Time Emergency Management Applications: Methodology for a Novel Programming Model Approach

    Recent years have been characterized by the emergence of highly distributed computing platforms composed of heterogeneous computing and communication resources, including centralized high-performance computing architectures (e.g. clusters or large shared-memory machines) as well as multi-/many-core components also integrated into mobile nodes and network facilities. The emergence of computational paradigms such as Grid and Cloud Computing provides potential solutions for integrating such platforms with data systems, natural phenomena simulations, knowledge discovery and decision support systems, responding to a dynamic demand for remote computing and communication resources and services. In this context time-critical applications, notably emergency management systems, are composed of complex sets of application components specialized for executing specific computations, which cooperate to achieve a global goal in a distributed manner. In recent years the scientific community has tackled the programming issues of distributed systems, aiming at the definition of applications featuring an increasing complexity in the number of distributed components, in the spatial distribution of and cooperation between interested parties, and in their degree of heterogeneity. Over the last decade the research trend in distributed computing has focused on a crucial objective: the wide-ranging composition of distributed platforms in terms of different classes of computing nodes and network technologies, together with the strong diffusion of applications that require real-time elaboration and online compute-intensive processing, as in the case of emergency management systems, leads to a pronounced tendency of systems towards properties like self-management, self-organization, self-control and, strictly speaking, adaptivity. Adaptivity implies the development, deployment, execution and management of applications that are, in general, dynamic in nature. Dynamicity concerns the number and the specific identification of cooperating components, and the deployment and composition of the most suitable versions of software components on processing and networking resources and services, i.e., both the quantity and the quality of the application components needed to achieve the required Quality of Service (QoS). In time-critical applications the QoS specification can vary dynamically during the execution, according to the user intentions and the information produced by sensors and services, as well as according to the monitored state and performance of networks and nodes. The general reference point for this kind of system is the Grid paradigm which, by definition, aims to enable the access, selection and aggregation of a variety of distributed and heterogeneous resources and services. However, though notable advancements have been achieved in recent years, current Grid technology is not yet able to supply software tools with the features of high adaptivity, ubiquity, proactivity, self-organization, scalability and performance, interoperability, fault tolerance and security demanded by the emerging applications.
For this reason, in this chapter we will study a methodology for designing high-performance computations able to exploit the heterogeneity and dynamicity of distributed environments by expressing adaptivity and QoS-awareness directly at the application level. An effective approach needs to address issues like the QoS predictability of different application configurations as well as the predictability of reconfiguration costs. Moreover, adaptation strategies need to be developed that assure properties like the stability degree of a reconfiguration decision and execution optimality (i.e. selecting reconfigurations according to proper trade-offs among different QoS objectives). In this chapter we will present the basic points of a novel approach that lays the foundations for future programming model environments for time-critical applications such as emergency management systems. The organization of this chapter is the following. In Section 2 we will compare the existing research works for developing adaptive systems in critical environments, highlighting their drawbacks and inefficiencies. In Section 3, in order to clarify the application scenarios that we are considering, we will present an emergency management system in which the run-time selection of proper application configuration parameters is of great importance for meeting the desired QoS constraints. In Section 4 we will describe the basic points of our approach in terms of how compute-intensive operations can be programmed, how they can be dynamically modified and how adaptation strategies can be expressed. In Section 5 our approach will be contextualized in the definition of an adaptive parallel module, which is a building block for composing complex and distributed adaptive computations. Finally, in Section 6 we will describe a set of experimental results that show the viability of our approach, and in Section 7 we will give the concluding remarks of this chapter.
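A minimal sketch of the application-level adaptivity idea described above: a task-farm module that re-selects its parallelism degree at run time from a performance model and a QoS target. The names and the simple model (service rate of roughly n / task_time with n workers) are illustrative assumptions, not the chapter's actual programming model or API.

```python
import math

def required_workers(task_time_s: float, target_rate: float) -> int:
    # Task-farm model: n workers serve ~ n / task_time_s tasks per second,
    # so the smallest adequate parallelism degree is the ceiling below.
    return max(1, math.ceil(target_rate * task_time_s))

class AdaptiveFarm:
    def __init__(self, workers: int = 1):
        self.workers = workers

    def control_step(self, measured_task_time: float, arrival_rate: float):
        # Periodic control step: compare the model's suggestion with the
        # current configuration and reconfigure only on a mismatch.
        n = required_workers(measured_task_time, arrival_rate)
        if n != self.workers:
            print(f"reconfigure: {self.workers} -> {n} workers")
            self.workers = n

farm = AdaptiveFarm()
farm.control_step(measured_task_time=0.05, arrival_rate=120.0)  # -> 6 workers
```

The point of the sketch is that the QoS predictability of the structured scheme (here, the farm's service-rate formula) is what makes the reconfiguration decision computable at run time rather than hand-tuned.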

    Parallel Patterns for Adaptive Data Stream Processing

    In recent years our ability to produce information has been growing steadily, driven by ever increasing computing power, communication rates, and the diffusion of hardware and software sensors. This data is often available in the form of continuous streams, and the ability to gather and analyze it to extract insights and detect patterns is a valuable opportunity for many businesses and scientific applications. The topic of Data Stream Processing (DaSP) is a recent and highly active research area dealing with the processing of this streaming data. The development of DaSP applications poses several challenges, from efficient algorithms for the computation to programming and runtime systems supporting their execution. In this thesis two main problems will be tackled:
    * need for high performance: high throughput and low latency are critical requirements for DaSP problems. Applications must take advantage of parallel hardware and distributed systems, such as multi-/manycores or clusters of multicores, in an effective way;
    * dynamicity: due to their long-running nature (24hr/7d), DaSP applications are affected by highly variable arrival rates and changes in their workload characteristics. Adaptivity is a fundamental feature in this context: applications must be able to autonomously scale the resources used to accommodate dynamic requirements and workloads while maintaining the desired Quality of Service (QoS) in a cost-effective manner.
    Current approaches to the development of DaSP applications still lack an efficient exploitation of intra-operator parallelism as well as adaptation strategies with well-known properties of stability, QoS assurance and cost awareness. These are the gaps that this research work tries to fill, resorting to well-known approaches such as Structured Parallel Programming and Control Theoretic models. The dissertation runs along these two directions. The first part deals with intra-operator parallelism. A DaSP application can be naturally expressed as a set of operators (i.e. intermediate computations) that cooperate to reach a common goal. If QoS requirements are not met by the current implementation, bottleneck operators must be internally parallelized. We will study recurrent computations in window-based stateful operators and propose patterns for their parallel implementation. Windowed operators are the most representative class of stateful data stream operators: here computations are applied to the most recently received data. Windows are dynamic data structures: they evolve over time in terms of content and, possibly, size. Therefore, with respect to traditional patterns, the DaSP domain requires proper specializations and enhanced features concerning data distribution and management policies for the different windowing methods. A structured approach to the problem reduces the effort and complexity of parallel programming. In addition, it simplifies reasoning about the performance properties of a parallel solution (e.g. throughput and latency). The proposed patterns exhibit different properties in terms of applicability and profitability that will be discussed and experimentally evaluated. The second part of the thesis is devoted to the proposal and study of predictive strategies and reconfiguration mechanisms for autonomic DaSP operators. Reconfiguration activities can be implemented transparently to the application programmer thanks to the exploitation of parallel paradigms with well-known structures.
Furthermore, adaptation strategies may take advantage of the QoS predictability of the used parallel solution. Autonomous operators will be driven by means of a Model Predictive Control approach, with the intent of giving QoS assurances in terms of throughput or latency in a resource-aware manner. An experimental section will show the effectiveness of the proposed approach in terms of execution cost reduction as well as the stability degree of a system reconfiguration. The experiments will target shared- and distributed-memory architectures.
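As a concrete illustration of the class of operators the thesis parallelizes, here is a minimal, sequential sketch of a key-partitioned, count-based sliding-window operator. The window size, slide and aggregation function are invented example values; the thesis's actual parallel patterns and distribution policies are not reproduced here.

```python
from collections import defaultdict, deque

class WindowedOperator:
    """Count-based sliding window, kept separately per key."""
    def __init__(self, size: int, slide: int, func):
        self.size, self.slide, self.func = size, slide, func
        self.windows = defaultdict(deque)   # per-key window buffers
        self.counts = defaultdict(int)      # per-key arrival counters

    def process(self, key, value):
        w = self.windows[key]
        w.append(value)
        if len(w) > self.size:
            w.popleft()                     # evict the oldest tuple
        self.counts[key] += 1
        # Fire once every 'slide' arrivals, once the window is full.
        if len(w) == self.size and self.counts[key] % self.slide == 0:
            return self.func(list(w))       # emit one result downstream
        return None

op = WindowedOperator(size=4, slide=2, func=sum)
for v in range(10):
    r = op.process("sensor-1", v)
    if r is not None:
        print(r)   # sums over the last 4 tuples, every 2 arrivals
```

The per-key state is what makes such operators "stateful" and what forces the parallel patterns to specialize their data distribution: tuples of the same key (or overlapping window segments) must reach the worker holding the corresponding window.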

    Improving stability in Adaptive Distributed Parallel applications: a cooperative predictive approach

    With this thesis we take a step further in improving reconfiguration decisions in adaptive distributed parallel computations. The concept of switching cost is introduced with the aim of reducing the number of reconfigurations and of improving reconfiguration stability in dynamic execution scenarios. The control of computation modules is based on the Model-Based Predictive Control (MPC) approach. We study the effectiveness of this approach in parallel distributed computations, where each module cooperates to find a globally optimal reconfiguration trajectory. Experimental results are obtained by means of experiments performed in a simulation environment.
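To make the role of the switching cost concrete, one plausible shape of an MPC objective with such a term is sketched below; the symbols (horizon h, QoS measure q, configuration u, weighting matrices Q and R, trade-off weight α) are generic illustrations, not the thesis's exact notation.

```latex
J_k = \sum_{i=1}^{h} \| q_{k+i} - q^{\mathrm{ref}} \|_Q^2
    \;+\; \alpha \sum_{i=1}^{h} \| u_{k+i} - u_{k+i-1} \|_R^2
```

The first sum penalizes deviation from the QoS reference over the prediction horizon; the second penalizes changing the configuration between steps, so frequent small reconfigurations become unattractive and the stability of decisions improves.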

    A Control-Theoretic Methodology for Adaptive Structured Parallel Computations

    Adaptivity for distributed parallel applications is an essential feature whose importance has been assessed in many research fields (e.g. scientific computations, large-scale real-time simulation systems and emergency management applications). Especially for high-performance computing, this feature is of special interest in order to properly and promptly respond to time-varying QoS requirements, to react to uncontrollable environmental effects influencing the underlying execution platform, and to efficiently deal with highly irregular parallel problems. In this scenario the Structured Parallel Programming paradigm is a cornerstone for expressing adaptive parallel programs: the high degree of composability of parallelization schemes and their QoS predictability, formally expressed by performance models, are basic tools for introducing dynamic reconfiguration processes in adaptive applications. These reconfigurations are not limited to implementation aspects (e.g. parallelism degree modifications): parallel versions with different structures can also be expressed for the same computation, featuring different levels of performance, memory utilization, energy consumption, and exploitation of the memory hierarchies. Over the last decade several programming models and research frameworks have been developed aimed at the definition of tools and strategies for expressing adaptive parallel applications. Notwithstanding this notable research effort, properties like the optimality of the application execution and the stability of control decisions are not sufficiently studied in the existing work. For this reason this thesis develops pioneering research on providing formal theoretical tools founded on Control Theory and Game Theory techniques. Based on these approaches, we introduce a formal model for controlling distributed parallel applications represented by computational graphs of structured parallelism schemes (also called skeleton-based parallelism). Starting out from the performance predictability of structured parallelism schemes, in this thesis we provide a formalization of the concept of an adaptive parallel module performing structured parallel computations. The module behavior is described in terms of a Hybrid System abstraction, and reconfigurations are driven by a Predictive Control approach. Experimental results show the effectiveness of this work in terms of execution cost reduction as well as the stability degree of a system reconfiguration, i.e. how long a reconfiguration choice remains useful for targeting the required QoS levels. This thesis also addresses the issue of controlling large-scale distributed applications composed of several interacting adaptive components. After a panoramic view of the existing control-theoretic approaches (e.g. based on decentralized, distributed or hierarchical structures of controllers), we introduce a methodology for distributed predictive control. For controlling computational graphs, the overall control problem consists of a set of coupled control sub-problems, one for each application module. The decomposition issue has a twofold nature: first of all we need to model the coupling relationships between control sub-problems; furthermore we need to introduce proper notions of negotiation and convergence in the control decisions collectively taken by the parallel modules of the application graph. This thesis provides a formalization through basic concepts of Non-cooperative Games and Cooperative Optimization.
In the notable context of the distributed control of performance and resource utilization, we exploit a formal description of the control problem, providing results for the existence of equilibrium points and a comparison of control optimality under different adaptation strategies and interaction protocols. Discussions and a first validation of the proposed techniques are provided through experiments performed in a simulation environment.
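The negotiation-and-convergence idea borrowed from non-cooperative games can be illustrated with a toy best-response iteration between two coupled controllers. The reaction functions below are invented quadratic-cost examples chosen only so the iteration visibly converges; they stand in for the modules' actual coupled control sub-problems.

```python
# Each controller repeatedly plays a best response to the other's latest
# decision; a fixed point of this exchange is a Nash equilibrium.
def best_response_a(b: float) -> float:
    return 0.5 * (10 - b)      # module A's optimal reaction (toy cost)

def best_response_b(a: float) -> float:
    return 0.5 * (8 - a)       # module B's optimal reaction (toy cost)

a, b = 0.0, 0.0
for step in range(50):
    a_new, b_new = best_response_a(b), best_response_b(a)
    if abs(a_new - a) + abs(b_new - b) < 1e-9:
        break                  # decisions stopped changing: equilibrium
    a, b = a_new, b_new
print(f"equilibrium after {step} rounds: a={a:.3f}, b={b:.3f}")  # a=4, b=2
```

In the cooperative variant studied in the thesis, the modules would instead exchange information to optimize a collective objective, but the same fixed-point notion of convergence applies.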

    Hierarchical and Frequency-Aware Model Predictive Control for Bare-Metal Cloud Applications

    Bare-metal cloud provides a dedicated set of physical machines (PMs) and enables both the PMs and the virtual machines (VMs) running on them to be scaled in/out dynamically. However, to increase resource efficiency and reduce violations of service level agreements (SLAs), resources need to be scaled quickly to adapt to workload changes, which results in high reconfiguration overhead, especially for the PMs. This paper proposes hierarchical and frequency-aware auto-scaling based on Model Predictive Control, which enables us to achieve an optimal balance between resource efficiency and overhead. Moreover, when performing high-frequency resource control, the proposed technique improves the timing of reconfigurations for the PMs without increasing their number, while it increases the reallocations of the VMs to adjust the redundant capacity among the applications; this process improves resource efficiency. Through trace-based numerical simulations, we demonstrate that when the control frequency is increased to 16 times per hour, the VM insufficiency causing SLA violations is reduced to a minimum of 0.1% per application without increasing the VM pool capacity.
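A minimal two-timescale sketch of the hierarchical idea: PMs are re-planned on a slow clock because their reconfiguration overhead is high, while VMs are reallocated on a fast clock within the currently provisioned PM capacity. All capacities and numbers are invented for illustration; the paper's MPC formulation is not reproduced.

```python
import math

VMS_PER_PM = 8   # assumed packing density

def plan_pms(forecast_peak_vms: int) -> int:
    # Slow loop: provision enough PMs for the forecast peak demand.
    return math.ceil(forecast_peak_vms / VMS_PER_PM)

def allocate_vms(demands: list[int], pm_count: int) -> list[int]:
    # Fast loop: satisfy each application's demand, trimming
    # proportionally if the fixed PM pool cannot cover the sum.
    cap = pm_count * VMS_PER_PM
    total = sum(demands)
    if total <= cap:
        return demands
    return [d * cap // total for d in demands]

pms = plan_pms(forecast_peak_vms=30)       # slow decision: 4 PMs
print(allocate_vms([10, 12, 14], pms))     # fast decision within 32 VMs
```

Raising only the fast loop's frequency, as the paper proposes, lets the system chase workload changes with cheap VM moves instead of expensive PM reconfigurations.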

    Model-Based Dynamic Resource Management for Service Oriented Clouds

    Cloud computing is a flexible platform for software as a service, and more and more applications are being deployed on the cloud. Major challenges in the cloud include how to characterize the workload of the applications and how to manage cloud resources efficiently by sharing them among many applications. The current state of the art considers a simplified model of the system, either ignoring the software components altogether or ignoring the relationships between individual software services. This thesis considers the following resource management problems for cloud-based service providers: (i) how to estimate the parameters of the current workload, (ii) how to meet Quality of Service (QoS) targets while minimizing infrastructure cost, and (iii) how to allocate resources considering the performance costs of virtual machine reconfigurations. To address these problems, we propose a model-based feedback loop approach. The cloud infrastructure, the services, and the applications are modelled using Layered Queuing Models (LQM). These models are then optimized. Mathematical techniques are used to reduce the complexity of the models and address the scalability issues. The main contributions of this thesis are: (i) Extended Kalman Filter (EKF) based techniques, improved by dynamic clustering, for scalable estimation of workload parameters, (ii) a combination of adaptive empirical models (tuned during runtime) and stepwise optimizations for improving the overall allocation performance, and (iii) dynamic service placement algorithms that consider the cost of virtual machine reconfiguration.
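To illustrate the estimation idea behind contribution (i), here is a toy predict/correct loop tracking a single workload parameter from noisy samples. Note the hedge: this is a plain linear Kalman filter on a scalar random-walk model, whereas the thesis couples an Extended Kalman Filter with layered queuing models.

```python
import random

x_est, p = 50.0, 10.0    # state estimate (e.g. arrival rate) and its variance
Q, R = 0.5, 4.0          # assumed process and measurement noise variances

for _ in range(100):
    true_rate = 60.0
    z = true_rate + random.gauss(0, R ** 0.5)   # noisy measurement
    p += Q                                      # predict: variance grows
    k = p / (p + R)                             # Kalman gain
    x_est += k * (z - x_est)                    # correct toward measurement
    p *= (1 - k)                                # variance shrinks

print(f"estimated rate ~ {x_est:.1f}")          # converges near 60
```

In the thesis's setting the "measurement" would be observed response times or throughputs, and the EKF linearizes the queuing model around the current estimate at each step.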

    A decentralized control and optimization framework for autonomic performance management of web-server systems

    Web-based services such as online banking and e-commerce are often hosted on distributed computing systems comprising heterogeneous and networked servers in a data-center setting. To operate such systems efficiently while satisfying stringent quality-of-service (QoS) requirements, multiple performance-related parameters must be dynamically tuned to track changing operating conditions. For example, the workload to be processed may be time-varying and hardware/software resources may fail during system operation. To cope with their growing scale and complexity, such computing systems must become largely autonomic, capable of being managed with minimal human intervention. This study develops a distributed cooperative-control framework using concepts from optimal control theory and hybrid dynamical systems to adaptively manage the performance of computer clusters operating in dynamic and uncertain environments. As case studies, we focus on power management and dynamic resource provisioning problems in such clusters. First, we apply the control framework to minimize the power consumed by a server cluster under a time-varying workload. The overall power-management problem is decomposed into smaller sub-problems and solved in a cooperative fashion by individual controllers on each server. This approach allows for the scalable control of large computing systems. The control framework also adapts to controller failures and allows for the dynamic addition and removal of controllers during system operation. We validate the proposed approach using a discrete-event simulator with real-world workload traces, and our results indicate that the controllers achieve a 55% reduction in power consumption when compared to an uncontrolled system in which each server operates at its maximum frequency at all times. We then develop a distributed resource provisioning framework to achieve differentiated QoS among multiple online services using concepts from hybrid control. We use a discrete hybrid automaton to model the operation of the computing cluster. The resource provisioning problem, combining both QoS control and power management, is then solved using a decentralized model predictive controller to maximize the operating profits generated by the cluster according to a specified service level agreement. Simulation results indicate that the controller generates 27% additional profit when compared to an uncontrolled system.
    Ph.D., Electrical Engineering -- Drexel University, 200
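The decomposition into per-server controllers can be illustrated with a toy power model: each controller independently picks the lowest CPU frequency that still serves its share of the incoming workload. The cubic power law and the frequency ladder below are standard simplifications assumed for this sketch, not the dissertation's exact formulation.

```python
FREQS = [1.0, 1.5, 2.0, 2.5, 3.0]    # GHz levels available to each server

def pick_frequency(load_share: float, work_per_ghz: float) -> float:
    # Local decision: the lowest adequate frequency wins.
    for f in FREQS:
        if f * work_per_ghz >= load_share:
            return f
    return FREQS[-1]                 # saturate at the maximum frequency

def power(f: float) -> float:
    return f ** 3                    # cubic dynamic-power approximation

shares = [80.0, 120.0, 200.0]        # requests/s routed to each server
freqs = [pick_frequency(s, work_per_ghz=70.0) for s in shares]
print(freqs, f"total power index: {sum(map(power, freqs)):.1f}")
```

Because each decision uses only local load information, controllers can be added or removed without re-deriving a global policy, which is the scalability and fault-adaptivity property the abstract emphasizes.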

    Untangling hotel industry’s inefficiency: An SFA approach applied to a renowned Portuguese hotel chain

    The present paper explores the technical efficiency of four hotels from the Teixeira Duarte Group, a renowned Portuguese hotel chain. An efficiency ranking of these four hotel units located in Portugal is established using Stochastic Frontier Analysis. This methodology makes it possible to discriminate between measurement error and systematic inefficiency in the estimation process, enabling investigation of the main causes of inefficiency. Several suggestions concerning efficiency improvement are offered for each hotel studied.
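For reference, the textbook stochastic frontier production model underlying this kind of analysis is shown below; the paper's exact specification (functional form, distribution assumed for the inefficiency term) may differ.

```latex
\ln y_i = \beta^{\top} x_i + v_i - u_i,
\qquad v_i \sim N(0, \sigma_v^2), \quad u_i \geq 0
```

Here v_i is symmetric random noise (measurement error) and u_i is a one-sided inefficiency term; separating the composed error into these two parts is precisely the decomposition that lets the method discriminate between measurement error and systematic inefficiency, as the abstract notes.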

    Competing goals draw attention to effort, which then enters cost-benefit computations as input

    In contrast to Kurzban et al., we conceptualize the experience of mental effort as the subjective cost of goal pursuit (i.e., the amount of invested resources relative to the amount of available resources). Rather than being an output of computations that compare the costs and benefits of the target and competing goals, effort enters these computations as an input.