
    Dynamic CPU management for real-time, middleware-based systems

    Many real-world distributed, real-time, embedded (DRE) systems, such as multi-agent military applications, are built using commercially available operating systems, middleware, and collections of pre-existing software. The complexity of these systems makes it difficult to ensure that they maintain high quality of service (QoS). At design time, the challenge is to introduce coordinated QoS controls into multiple software elements in a non-invasive manner. At run time, the system must adapt dynamically to maintain high QoS in the face of both expected events, such as application mode changes, and unexpected events, such as resource demands from other applications. In this paper we describe the design and implementation of a CPU Broker for these types of DRE systems. The CPU Broker mediates between multiple real-time tasks and the facilities of a real-time operating system: using feedback and other inputs, it adjusts allocations over time to ensure that high application-level QoS is maintained. The broker connects to its monitored tasks in a non-invasive manner, is based on and integrated with industry-standard middleware, and implements an open architecture for new CPU management policies. Moreover, these features allow the broker to be easily combined with other QoS mechanisms and policies, as part of an overall end-to-end QoS management system. We describe our experience in applying the CPU Broker to a simulated DRE military system. Our results show that the broker connects to the system transparently and allows it to function in the face of run-time CPU resource contention.
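
    The feedback loop at the heart of such a broker can be pictured with a short sketch. The code below is not the paper's actual API or policy; Task, CpuBroker, and the proportional rescaling step are illustrative assumptions about how reservations might be adjusted from observed usage.

```python
# Minimal sketch of a feedback-driven CPU allocation loop in the spirit of the
# CPU Broker described above. All names and the rescaling policy are
# hypothetical illustrations, not the paper's implementation.

from dataclasses import dataclass, field


@dataclass
class Task:
    name: str
    reservation: float                 # fraction of a CPU currently reserved
    recent_usage: list = field(default_factory=list)

    def report_usage(self, used: float) -> None:
        """Feedback input: CPU fraction observed in the last period."""
        self.recent_usage.append(used)


class CpuBroker:
    """Mediates between tasks and a (simulated) RTOS reservation facility."""

    def __init__(self, capacity: float = 1.0, margin: float = 0.1):
        self.capacity = capacity       # total CPU available for reservations
        self.margin = margin           # headroom added on top of observed demand
        self.tasks: list[Task] = []

    def register(self, task: Task) -> None:
        self.tasks.append(task)

    def rebalance(self) -> None:
        """Adjust reservations from feedback, scaling down if oversubscribed."""
        demands = []
        for t in self.tasks:
            observed = t.recent_usage[-1] if t.recent_usage else t.reservation
            demands.append((t, observed * (1.0 + self.margin)))
        total = sum(want for _, want in demands)
        scale = min(1.0, self.capacity / total) if total > 0 else 1.0
        for t, want in demands:
            t.reservation = want * scale   # a real broker would call the RTOS here


if __name__ == "__main__":
    broker = CpuBroker()
    sensor, planner = Task("sensor", 0.3), Task("planner", 0.5)
    broker.register(sensor)
    broker.register(planner)
    sensor.report_usage(0.45)   # the sensor task suddenly needs more CPU
    planner.report_usage(0.60)
    broker.rebalance()
    print(round(sensor.reservation, 3), round(planner.reservation, 3))
```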


    Conclusions from the European Roadmap on Control of Computing Systems

    The use of control-based methods for resource management in real-time computing and communication systems has gained substantial interest recently. Application areas include performance control of web servers, dynamic resource management in embedded systems, traffic control in communication networks, transaction management in database servers, error control in software systems, and autonomic computing. Within the European EU/IST FP6 Network of Excellence ARTIST2 on Embedded System Design, a roadmap on Control of Real-Time Computing Systems has recently been completed. The focus of the roadmap is how flexibility, adaptivity, performance and robustness can be achieved in a real-time computing or communication system through the use of control theory. The item that is controlled is in most cases the allocation of computing and communication resources, e.g., the distribution or scheduling of CPU time among different competing tasks, jobs, requests, or transactions, or the communication resources in a network. Due to this, control of computing systems also goes under the name of feedback scheduling. The roadmap is divided into six research areas: control of server systems, control of CPU resources, control of communication networks, error control of software systems, feedback scheduling of control systems, and control middleware. For each area an overview is given and challenges for future research are stated. The aim of this position paper is to summarize the conclusions concerning these research challenges. In this paper, we will only cover the first four of the areas above. A preliminary version of the roadmap can be found at http://www.control.lth.se/user/karlerik/roadmap1.pd
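
    As a concrete illustration of the feedback-scheduling idea the roadmap surveys, the sketch below uses a PI controller to keep a server's CPU utilization near a set-point by throttling the admitted request rate. The gains, the per-request cost, and the crude plant model are assumptions made for this example, not material from the roadmap.

```python
# Sketch of performance control of a server via feedback: a PI controller
# drives measured utilization toward a set-point by adjusting the admission
# rate. All numbers and the plant model are illustrative assumptions.

class PIController:
    def __init__(self, kp: float, ki: float, setpoint: float):
        self.kp, self.ki, self.setpoint = kp, ki, setpoint
        self.integral = 0.0

    def update(self, measured: float) -> float:
        """Return a correction based on the utilization error."""
        error = self.setpoint - measured
        self.integral += error
        return self.kp * error + self.ki * self.integral


def simulate(periods: int = 20) -> None:
    ctrl = PIController(kp=0.5, ki=0.1, setpoint=0.7)   # target 70% utilization
    admitted_rate = 50.0        # requests per second currently admitted
    per_request_cost = 0.012    # assumed CPU fraction consumed per request
    for k in range(periods):
        utilization = min(1.0, admitted_rate * per_request_cost)  # crude plant model
        correction = ctrl.update(utilization)
        admitted_rate = max(0.0, admitted_rate + correction / per_request_cost)
        print(f"period {k}: util={utilization:.2f} rate={admitted_rate:.1f}")


if __name__ == "__main__":
    simulate()
```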

    Context-aware adaptation in DySCAS

    DySCAS is a dynamically self-configuring middleware for automotive control systems. The addition of autonomic, context-aware dynamic configuration to automotive control systems brings a potential for a wide range of benefits in terms of robustness, flexibility, upgrading, etc. However, automotive systems represent a particularly challenging domain for the deployment of autonomics concepts, combining real-time performance constraints, severe resource limitations, safety-critical aspects and cost pressures. For these reasons current systems are statically configured. This paper describes the dynamic run-time configuration aspects of DySCAS and focuses on the extent to which context-aware adaptation has been achieved in DySCAS, and the ways in which the various design and implementation challenges are met.
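
    One way to picture the kind of context-aware adaptation such a middleware automates is a small rule table mapping observed context to reconfiguration actions. The sketch below is purely illustrative: its context fields, thresholds, and actions are assumptions, not DySCAS mechanisms.

```python
# Illustrative sketch only: context rules that trigger reconfiguration
# decisions. Fields, thresholds, and actions are assumptions.

from typing import Callable

# Each rule pairs a context predicate with a configuration action.
Rule = tuple[Callable[[dict], bool], str]

RULES: list[Rule] = [
    (lambda ctx: ctx["cpu_load"] > 0.9, "migrate non-critical task to spare ECU"),
    (lambda ctx: ctx["new_device_attached"], "load and wire in the new service"),
    (lambda ctx: ctx["mode"] == "limp-home", "shed comfort functions"),
]


def decide(ctx: dict) -> list[str]:
    """Return the configuration actions whose guards hold in this context."""
    return [action for guard, action in RULES if guard(ctx)]


if __name__ == "__main__":
    context = {"cpu_load": 0.95, "new_device_attached": False, "mode": "normal"}
    print(decide(context))   # -> ['migrate non-critical task to spare ECU']
```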

    Design and Implementation of a Distributed Middleware for Parallel Execution of Legacy Enterprise Applications

    A typical enterprise uses a local area network of computers to perform its business. During off-working hours, the computational capacity of these networked computers is underused or unused. To utilize this capacity, an application has to be recoded to exploit the concurrency inherent in the computation, which is clearly not possible for legacy applications without source code. This thesis presents the design and implementation of a distributed middleware that can automatically execute a legacy application on multiple networked computers by parallelizing it. The middleware runs multiple copies of the binary executable code in parallel on different hosts in the network. It wraps the binary executable code of the legacy application in order to capture kernel-level data-access system calls and perform them distributively over multiple computers in a safe and conflict-free manner. The middleware also incorporates a dynamic scheduling technique to execute the target application in minimum time by scavenging the available CPU cycles of the hosts in the network. This dynamic scheduling also accommodates changes in host CPU availability over time, rescheduling the replicas performing the computation to minimize the execution time. A prototype implementation of this middleware has been developed as a proof of concept of the design. The implementation has been evaluated with a few typical case studies, and the test results confirm that the middleware works as expected.
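
    The dynamic-scheduling step can be illustrated with a greedy placement sketch: replicas go to the hosts with the most spare CPU and are reassigned when availability changes. The host names, availability figures, and the one-core-per-replica cost are assumptions for illustration, not the thesis's implementation.

```python
# Sketch of dynamic placement of replicas onto the hosts with the most idle
# CPU, re-run whenever measured availability changes. Values are illustrative.

import heapq


def assign_replicas(availability: dict[str, float], replicas: int) -> dict[str, int]:
    """Greedily place replicas on hosts, favouring the most spare CPU."""
    # Max-heap keyed on spare CPU (negated, since heapq is a min-heap).
    heap = [(-spare, host) for host, spare in availability.items()]
    heapq.heapify(heap)
    placement: dict[str, int] = {host: 0 for host in availability}
    for _ in range(replicas):
        spare, host = heapq.heappop(heap)
        placement[host] += 1
        # Assume each replica consumes roughly one core's worth on its host.
        heapq.heappush(heap, (spare + 1.0, host))
    return placement


if __name__ == "__main__":
    # Spare CPU (in cores) measured during off-hours; numbers are made up.
    print(assign_replicas({"hostA": 3.5, "hostB": 1.0, "hostC": 2.0}, replicas=4))
    # Later, hostA becomes busy, so the same computation triggers a reschedule.
    print(assign_replicas({"hostA": 0.5, "hostB": 3.0, "hostC": 2.0}, replicas=4))
```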

    Fast dynamic deployment adaptation for mobile devices

    Mobile devices that are limited in terms of CPU power, memory or battery power are only capable of executing simple applications. To be able to run advanced applications, we introduce a framework to split up the application and execute parts of it on a remote server. In order to dynamically adapt the deployment at runtime, techniques are presented to keep the migration time as low as possible and to prevent performance loss while migrating. Methods are also presented and evaluated to cope with applications generating a variable load, which can otherwise lead to an unstable system.
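
    A minimal sketch of the offload-or-not decision such a framework faces is given below; the timing model, parameter names, and numbers are assumptions for illustration only, not the paper's algorithm.

```python
# Rough sketch of an offloading decision: run a component locally or on the
# remote server, accounting for the one-off migration cost. Assumed model.

def should_offload(local_ms: float, remote_ms: float, transfer_ms: float,
                   migration_ms: float, remaining_calls: int) -> bool:
    """Offload only if the per-call saving repays the one-off migration cost."""
    per_call_saving = local_ms - (remote_ms + transfer_ms)
    if per_call_saving <= 0:
        return False
    return per_call_saving * remaining_calls > migration_ms


if __name__ == "__main__":
    # A CPU-heavy component: 120 ms locally, 25 ms remotely plus 30 ms of
    # network transfer; migrating its state costs 400 ms and it will run
    # roughly 50 more times.
    print(should_offload(local_ms=120, remote_ms=25, transfer_ms=30,
                         migration_ms=400, remaining_calls=50))   # True
```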