6 research outputs found

    Energy-Efficient Concurrency Control for Dynamic-Priority Real-Time Tasks with Abortable Critical Sections

    In this paper, we are interested in energy-efficient concurrency control for real-time tasks on a non-ideal DVS processor. Based on well-known ceiling-based concurrency control protocols (such as the priority ceiling protocol (PCP) and the stack resource policy (SRP)), researchers have proposed energy-efficient approaches to manage concurrent accesses to shared resources so that energy consumption can be reduced. However, ceiling-based protocols suffer from ceiling blocking, which significantly degrades the performance of real-time systems. To achieve sufficient performance, we propose a new protocol, called the conditional abortable stack resource policy (CA-SRP), which resolves the ceiling blocking problem for dynamic-priority real-time tasks by incorporating a conditional abort rule into SRP. Based on the schedulability analysis of CA-SRP, we also propose a method, called dynamic speed assignment (DSA), that dynamically calculates and assigns proper processor speeds for task execution so that energy consumption can be reduced further. The capabilities of the proposed CA-SRP and DSA have been evaluated in a series of experiments, with encouraging results.
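    The abstract states only that a conditional abort rule is added to SRP and that processor speeds are derived from the schedulability analysis; the exact rule and analysis are not given. The following is therefore only a minimal sketch of the general shape of such a scheme, with every name (can_preempt, should_abort, assign_speed) and every threshold invented for illustration rather than taken from the paper.

        # Minimal sketch, assuming an SRP-like system ceiling and a hypothetical
        # conditional abort rule; none of these names come from the paper.

        class Task:
            def __init__(self, name, preemption_level, wcet, period):
                self.name = name
                self.preemption_level = preemption_level  # higher value = higher level
                self.wcet = wcet      # worst-case execution demand in cycles
                self.period = period  # also the relative deadline here

        def can_preempt(task, system_ceiling, running_level):
            """Classic SRP test: a task may start only if its preemption level
            exceeds both the running task's level and the current system ceiling."""
            return task.preemption_level > max(system_ceiling, running_level)

        def should_abort(holder_progress, critical_section_len, blocked_level, holder_level):
            """Hypothetical conditional abort rule: abort the resource holder only
            if a higher-level task is blocked and the holder has not yet passed a
            progress threshold, so that little work is wasted by re-execution."""
            return blocked_level > holder_level and holder_progress < 0.5 * critical_section_len

        def assign_speed(tasks, speeds):
            """Toy dynamic-speed pick: choose the slowest available speed that keeps
            the utilization-style bound sum(C_i / (s * T_i)) <= 1."""
            for s in sorted(speeds):
                if sum(t.wcet / (s * t.period) for t in tasks) <= 1.0:
                    return s
            return max(speeds)

        tasks = [Task("t1", 3, 2e6, 10e-3), Task("t2", 2, 3e6, 20e-3), Task("t3", 1, 4e6, 40e-3)]
        print(assign_speed(tasks, speeds=[2e8, 4e8, 6e8, 8e8]))  # speeds in cycles per second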

    The stack resource protocol based on real time transactions

    Current hard real-time (HRT) kernels have their timely behaviour guaranteed at the cost of a rather restrictive use of the available resources. This makes current HRT scheduling techniques inadequate for use in a multimedia environment, where one can profit from a better and more flexible use of the resources. It is shown that the flexibility and efficiency of real-time kernels can be improved, and a method is proposed for precise quality-of-service schedulability analysis of the stack resource protocol. This protocol is generalised by introducing real-time transactions, which makes its use straightforward and efficient. Transactions can be refined to nested critical sections if the smallest estimate of blocking is desired. The method can be used for hard real-time systems in general and for multimedia systems in particular.
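    The abstract does not reproduce the proposed quality-of-service analysis; as a point of reference only, the classical schedulability condition for EDF tasks sharing resources under the stack resource protocol (tasks indexed by non-decreasing period, worst-case execution time C_i, period equal to relative deadline T_i, blocking term B_k) is the standard textbook bound

        \forall k \in \{1, \dots, n\}: \qquad \sum_{i=1}^{k} \frac{C_i}{T_i} \;+\; \frac{B_k}{T_k} \;\le\; 1

    where B_k is the longest critical section of any task with a lower preemption level that can block task k. The analysis proposed in this work is finer-grained than this kind of bound, reasoning about real-time transactions and, where needed, their nested critical sections.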

    Replication of non-deterministic objects

    This thesis discusses replication of non-deterministic objects in distributed systems to achieve fault tolerance against crash failures. The objects replicated are the virtual nodes of a distributed application. Replication is viewed as an issue to be dealt with only during the configuration of a distributed application; it should not affect the development of the application. Hence, replication of virtual nodes should be transparent to the application. Like all measures to achieve fault tolerance, replication introduces redundancy into the system. Not surprisingly, the main difficulty is guaranteeing the consistency of all replicas such that they behave in the same way as if the object were not replicated (replication transparency). This is further complicated if active objects (like virtual nodes) are replicated, and these objects themselves can be clients of still further objects in the distributed application.
    The problems of replicating active non-deterministic objects are analyzed in the context of distributed Ada 95 applications. The ISO standard for Ada 95 defines a model for distributed execution based on remote procedure calls (RPC). Virtual nodes in Ada 95 use this as their sole communication paradigm, but they may contain tasks to execute activities concurrently, making the execution potentially non-deterministic due to implicit timing dependencies. Such non-determinism cannot be avoided by choosing deterministic tasking policies. I present two different approaches to maintaining replica consistency despite this non-determinism.
    In a first approach, I consider the run-time support of Ada 95 as a black box (except for the part handling remote communications). This corresponds to a non-deterministic computation model. I show that replication of non-deterministic virtual nodes requires that remote procedure calls be implemented as nested transactions. Unfortunately, the effects of failures are not local to the replicas of a virtual node: when a failure occurs, nested remote calls made to other virtual nodes must be undone. Also, using transactional semantics for RPCs necessitates a compromise regarding transparency: the application must identify global state, for it cannot be determined reliably in an automatic way. Further study reveals that this approach cannot be implemented transparently at all, because the consistency criterion of Ada 95 (linearizability) is much weaker than that of transactions (serializability). Executing remote procedure calls as transactions may thus lead to incompatibilities with the semantics of the programming language. If remotely called subprograms on a replicated virtual node perform partial operations, i.e., entry calls on global protected objects, deadlocks that cannot be broken can occur in certain cases. Such deadlocks do not occur when the virtual node is not replicated. The transactional semantics of RPCs must therefore be exposed to the application.
    A second approach is based on a piecewise deterministic computation model, i.e., the execution of a virtual node is seen as a sequence of deterministic state intervals. Whenever a non-deterministic event occurs, a new state interval is started. I study replica organization under this computation model (semi-active replication). In this model, all non-deterministic decisions are made on one distinguished replica (the leader), while all other replicas (the followers) are forced to follow the same sequence of non-deterministic events. I show that it suffices to synchronize the followers with the leader upon each observable event, i.e., when the leader sends a message to some other virtual node. It is not necessary to synchronize upon each and every non-deterministic event, which would incur a prohibitively high overhead. Non-deterministic events occurring on the leader between observable events are logged and sent to the followers just before the leader executes an observable event. Consequently, it is guaranteed that the followers will reach the same state as the leader, and thus the effects of failures remain mostly local to the replicas.
    A prototype implementation called RAPIDS (Replicated Ada Partitions In Distributed Systems) serves as a proof of concept for this second approach, demonstrating its feasibility. RAPIDS is an Ada 95 implementation of a replication manager for semi-active replication for the GNAT development system for Ada 95. It is entirely contained within the run-time support and hence largely transparent to the application.
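    As an illustration of the leader/follower synchronization described above: the RAPIDS manager itself is written in Ada 95 inside the GNAT run-time support, so the Python sketch below is only a hypothetical rendering of the event-logging idea, with all class and method names invented for illustration.

        # Minimal sketch of semi-active replication: the leader logs non-deterministic
        # events and flushes the log to the followers just before each observable
        # event (a message sent to another virtual node). Names are illustrative only.

        class FollowerReplica:
            def __init__(self):
                self.pending_events = []

            def deliver_log(self, events):
                """Replay the leader's decisions so the follower reaches the same
                state interval before the corresponding observable event."""
                self.pending_events.extend(events)
                # ... replay events deterministically against the local state ...

        class LeaderReplica:
            def __init__(self, followers, network):
                self.followers = followers   # channels to follower replicas
                self.network = network       # channel to other virtual nodes
                self.event_log = []          # non-deterministic decisions since last flush

            def record_nondeterministic_event(self, event):
                """Called whenever a scheduling or message-ordering decision is
                resolved on the leader; the decision is logged, not yet sent."""
                self.event_log.append(event)

            def send_observable(self, destination, message):
                """Observable event: synchronize followers first, then emit the message."""
                for follower in self.followers:
                    follower.deliver_log(list(self.event_log))  # flush logged decisions
                self.event_log.clear()
                self.network.send(destination, message)

        class StubNetwork:
            def send(self, destination, message):
                print(f"to {destination}: {message}")

        leader = LeaderReplica([FollowerReplica()], StubNetwork())
        leader.record_nondeterministic_event(("task_switch", "T2"))
        leader.send_observable("node_B", "rpc_request")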

    Interaction-aware analysis and optimization of real-time application and operating system

    Mechanical and electronic automation was a key component of the technological advances of the last two hundred years. With the use of special-purpose machines, manual labor was replaced by mechanical motion, leaving workers with the operation of these machines, before this task too was conquered by embedded control systems. With the advances of general-purpose computing, the development of these control systems shifted more and more from a problem-specific mindset to a one-size-fits-all mentality, as the trade-off between per-instance overheads and development costs favored flexible and reusable implementations. However, with a scaling factor of thousands, if not millions, of deployed devices, overheads and inefficiencies accumulate, calling for a higher degree of specialization.
    For real-time operating systems (RTOSs), which form the base layer of many of these computerized control systems, we deploy far more flexibility than the applications running on top of them actually require. Since only the solution, but not the problem, became less specific to the control problem at hand, we have the chance to cut away inefficiencies, improve system-analysis results, and optimize resource consumption. However, such tailoring will only be favorable if it can be performed with little developer interaction and in an automated fashion. Here, real-time systems are a good starting point, since a large degree of static knowledge is already required to guarantee their timeliness. Until now, this static nature has not been exploited to its full extent, and optimization potential is left unused.
    The requirements of a system with regard to the RTOS manifest in the interactions between the application and the kernel. Threads request resources from the RTOS, which in return determines and enforces a scheduling order that will ensure the timely completion of all necessary computations. Since the RTOS executes only in response to such requests from the application (or to events from the environment), its reaction to them is its defining feature. In this thesis, I capture these interactions, and thereby the required RTOS semantics, in a control-flow-sensitive fashion. Extracted automatically, this knowledge about the reciprocal influence allows me to fit the implementation of a system closer to its actual requirements. The result is a system that is special-purpose not only in its usage, but also in its implementation and in the guarantees it provides.
    In the development of my approach, it became clear that the focus on these interactions is highly fruitful not only for the optimization of a system, but also for its end-to-end analysis. Therefore, this thesis not only provides methods to reduce kernel-execution overhead and a system's memory consumption, but also includes methods to calculate tighter response-time bounds and to give guarantees about the correct behavior of the kernel. All these contributions are enabled by the proposed interaction-aware methodology, which takes the whole system, RTOS and application, into account. With this thesis, I show that a control-flow-sensitive whole-system view of the interactions is feasible and highly rewarding. It overcomes many inefficiencies that arise from analyses with an isolating focus on individual system components. Furthermore, the interaction-aware methods stay close to the actual implementation and are therefore able to consider the behavioral patterns of the real-time computing system as finally deployed.
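    The thesis works with compiler-assisted analyses of real kernels and applications; purely as a hypothetical illustration of what statically known application/kernel interactions can buy, the sketch below collects the system calls each task can reach and flags kernel features that a tailored build could drop. None of the names or the feature table come from the thesis.

        # Hypothetical illustration: derive, from statically known application/kernel
        # interactions, which kernel features a tailored RTOS build actually needs.
        # This is not the thesis's tooling; names and structure are invented.

        SYSCALLS_PER_TASK = {
            # task name -> system calls reachable from its control flow (assumed
            # to be extracted by a static control-flow analysis of the application)
            "sensor_read":  {"sem_wait", "sem_post"},
            "control_loop": {"sem_wait", "sem_post", "sleep_until"},
            "logger":       {"queue_send"},
        }

        KERNEL_FEATURES = {
            # kernel feature -> the system calls that require it
            "counting_semaphores": {"sem_wait", "sem_post"},
            "message_queues":      {"queue_send", "queue_receive"},
            "time_management":     {"sleep_until", "sleep_for"},
            "event_flags":         {"event_set", "event_wait"},
        }

        used_calls = set().union(*SYSCALLS_PER_TASK.values())
        needed = {f for f, calls in KERNEL_FEATURES.items() if calls & used_calls}
        removable = set(KERNEL_FEATURES) - needed

        print("features to keep:  ", sorted(needed))
        print("features to remove:", sorted(removable))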

    A framework for flexible scheduling in real-time middleware

    The traditional vehicle for the deployment of a real-time system has been a real-time operating system (RTOS). In recent years another programming approach has increasingly found its way into the real-time systems domain: the use of middleware. Examples are the so-called pervasive systems (embedded, interactive but not mobile) and ubiquitous systems (embedded, interactive and mobile), e.g. hand-held devices. These tend to be dynamic systems that often exhibit a need for flexible scheduling because of their operating requirements or their execution environment. Thus, today there is a true need in many real-time applications for more flexible scheduling than the current state of practice. By flexible scheduling we mean the ability of the program execution platform to provide a range of scheduling policies, all the way from hard real-time to soft real-time policies, from which an application can choose the one most suited to its needs. Furthermore, some applications may need to be scheduled by one policy while others may need a different policy, e.g. fixed priority or earliest deadline first (EDF) for hard real-time tasks, least slack time first (LST) or shortest remaining time for soft real-time tasks. It would be difficult for the middleware to expect this functionality from the RTOS: a fine balance would have to be struck in the RTOS between flexibility and usability, and many years will probably pass until such approaches become mainstream and usable. This thesis maintains that this flexibility can instead be introduced into the middleware. It presents a viable solution for introducing flexible scheduling into real-time program execution middleware in the form of a flexible scheduling framework. Such a framework allows the same program execution middleware to be used for a variety of applications: soft, firm and hard. In particular, the framework allows different scheduling policies to co-exist in the system and their tasks to share common resources. The thesis describes the framework's protocol, examines the different types of scheduling policies that can be supported, tests its correctness through the use of a model checker, and evaluates the proposed framework by measuring its execution cost overhead. The framework is deemed appropriate for the types of real-time applications that need the services of flexible scheduling.
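    The framework's actual protocol is only summarized in the abstract, so the sketch below merely illustrates the general idea of several scheduling policies co-existing behind one dispatch interface; only the policy names (fixed priority, EDF, least slack first) come from the text, while the banded ordering and all identifiers are assumptions made for this example.

        # Minimal sketch of co-existing scheduling policies behind one middleware
        # dispatcher. Only the policy names come from the abstract; the interface,
        # the band ordering, and the tie-breaking are illustrative assumptions.

        import heapq

        class Policy:
            """A policy owns its ready queue and says which of its tasks runs next."""
            def __init__(self):
                self.ready = []
            def add(self, task, key):
                heapq.heappush(self.ready, (key, task))
            def pick(self):
                return self.ready[0][1] if self.ready else None
            def remove_picked(self):
                heapq.heappop(self.ready)

        class FixedPriority(Policy):
            def release(self, task, priority):
                self.add(task, priority)            # smaller number = higher priority

        class EarliestDeadlineFirst(Policy):
            def release(self, task, absolute_deadline):
                self.add(task, absolute_deadline)   # earlier deadline first

        class LeastSlackFirst(Policy):
            def release(self, task, slack):
                self.add(task, slack)               # least slack first

        class Dispatcher:
            """Hard real-time policies are consulted before soft ones (a simple
            banded arrangement, assumed here for illustration)."""
            def __init__(self, policies_in_band_order):
                self.bands = policies_in_band_order
            def next_task(self):
                for policy in self.bands:
                    task = policy.pick()
                    if task is not None:
                        policy.remove_picked()
                        return task
                return None

        fp, edf, lst = FixedPriority(), EarliestDeadlineFirst(), LeastSlackFirst()
        fp.release("engine_control", priority=1)
        edf.release("video_frame", absolute_deadline=33)
        lst.release("status_update", slack=120)
        dispatcher = Dispatcher([fp, edf, lst])
        print(dispatcher.next_task())   # "engine_control" from the hard real-time band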