The capacity exchange protocol
This paper proposes a new strategy to integrate shared resources and precedence constraints among real-time tasks, assuming
no precise information on critical sections and computation times is available. The concept of bandwidth inheritance
is combined with a capacity sharing and stealing mechanism to efficiently exchange bandwidth among tasks to minimise the
degree of deviation from the ideal system’s behaviour caused by inter-application blocking.
The proposed Capacity Exchange Protocol (CXP) is simpler than other proposed solutions for sharing resources in open real-time systems, since it does not attempt to return the inherited capacity to blocked servers in exactly the same amount. This loss of optimality is worth the reduced complexity: the protocol’s behaviour nevertheless tends to be fair, and it outperforms previous solutions in highly dynamic scenarios, as demonstrated by extensive simulations.
A formal analysis of CXP is presented, and the conditions under which hard real-time tasks can be guaranteed are discussed.
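A minimal sketch of the exchange idea, under assumptions of ours (the server fields and the exchange policy below are illustrative, not the paper's formal CXP definition): when a server blocks on a shared resource, the lock holder consumes the blocked server's residual capacity first, so blocking is paid for by exchanged bandwidth rather than by capacity that must later be returned in exactly the same amount.

```python
# Illustrative sketch of capacity exchange between two budget servers.
# Names and policy are assumptions for this example, not the paper's API.

class Server:
    def __init__(self, name, capacity):
        self.name = name
        self.capacity = capacity  # remaining budget, in time units

def run_with_exchange(holder, blocked, demand):
    """Consume `demand` time units while `blocked` waits on `holder`:
    draw from the blocked server's residual capacity before the holder's
    own, exchanging (not lending-and-returning) the bandwidth."""
    take = min(demand, blocked.capacity)
    blocked.capacity -= take              # exchanged, never returned exactly
    holder.capacity -= (demand - take)    # holder pays only the remainder
    return take

a = Server("A", capacity=5)   # lock holder
b = Server("B", capacity=3)   # blocked behind A
exchanged = run_with_exchange(a, b, demand=4)
print(exchanged, a.capacity, b.capacity)  # 3 units exchanged; A pays 1
```

The point of the sketch is the asymmetry: B's capacity is simply consumed on A's behalf, which avoids the bookkeeping needed to restore inherited capacity exactly.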
Timing analysis of optimised code
Timing analysis is a crucial test for dependable hard real-time systems (DHRTS). The calculation of the worst-case execution time (WCET) is mandatory. As modern compilers are capable of producing small and efficient code, software development for DHRTS today is mostly done in high-level languages instead of assembly code. Execution path information available at the source code level (flow facts) therefore has to be transformed correctly, in accordance with the compiler's code optimisations, to allow safe and precise WCET analysis. In this paper, we present a framework based on abstract interpretation to perform this mandatory transformation of flow facts. Conventional WCET analysis approaches use this information to analyse the object code.
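A toy illustration of why flow facts must be transformed (the helper below is our own example, not the paper's abstract-interpretation framework): a source-level loop bound no longer matches the optimised code once the compiler unrolls the loop, so the bound must be rewritten before the object code is analysed.

```python
import math

# Toy flow-fact transformation: a source-level bound "loop iterates at most
# n times" becomes a bound on executions of the unrolled loop body. Using
# the untransformed bound n on the unrolled code would be pessimistic;
# other optimisations can make untransformed facts unsafe instead.

def transformed_loop_bound(n, unroll_factor):
    """Max executions of the unrolled body for a source bound of n."""
    return math.ceil(n / unroll_factor)

# A loop bounded by 10 source iterations, unrolled 4x, executes the
# unrolled body at most 3 times.
print(transformed_loop_bound(10, 4))  # 3
```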
Turning Futexes Inside-Out: Efficient and Deterministic User Space Synchronization Primitives for Real-Time Systems with IPCP
In Linux and other operating systems, futexes (fast user space mutexes) are the underlying synchronization primitives used to implement POSIX synchronization mechanisms, such as blocking mutexes, condition variables, and semaphores. Futexes allow one to implement mutexes with excellent performance by avoiding system calls in the fast path. However, futexes are fundamentally limited to synchronization mechanisms that are expressible as atomic operations on 32-bit variables. At operating system kernel level, futex implementations require complex mechanisms to look up internal wait queues, making them susceptible to determinism issues. In this paper, we present an alternative design for futexes by completely moving the complexity of wait queue management from the operating system kernel into user space, i.e., we turn futexes "inside out". The enabling mechanisms for "inside-out futexes" are an efficient implementation of the immediate priority ceiling protocol (IPCP) to achieve non-preemptive critical sections in user space, spinlocks for mutual exclusion, and interwoven services to suspend or wake up threads. The design allows us to implement common thread synchronization mechanisms in user space and to move determinism concerns out of the kernel while keeping the performance properties of futexes. The presented approach is suitable for multi-processor real-time systems with partitioned fixed-priority (P-FP) scheduling on each processor. We evaluate the approach with an implementation for mutexes and condition variables in a real-time operating system (RTOS). Experimental results on 32-bit ARM platforms show that the approach is feasible, and overheads are driven by low-level synchronization primitives.
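A minimal model of the futex fast-path property the abstract relies on (class and field names are our own; Python's condition variable stands in for a kernel wait queue, and a guarded state word stands in for the atomic 32-bit variable): uncontended lock and unlock operations never enter the slow path, which is what makes futex-style mutexes fast.

```python
import threading

# Minimal model of a futex-style mutex: the fast path touches only the
# user-space state word; only contention enters the "kernel" slow path
# (modelled here by a condition variable). This is a teaching sketch, not
# the paper's inside-out design.

class FastpathMutex:
    UNLOCKED, LOCKED = 0, 1

    def __init__(self):
        self._state = self.UNLOCKED
        self._waiters = 0
        self._guard = threading.Lock()            # models the atomic word
        self._wait = threading.Condition(self._guard)
        self.slow_calls = 0                       # counts "system calls"

    def lock(self):
        with self._guard:
            if self._state == self.UNLOCKED and self._waiters == 0:
                self._state = self.LOCKED         # fast path: no kernel entry
                return
            self.slow_calls += 1                  # slow path: park the thread
            self._waiters += 1
            while self._state != self.UNLOCKED:
                self._wait.wait()
            self._waiters -= 1
            self._state = self.LOCKED

    def unlock(self):
        with self._guard:
            self._state = self.UNLOCKED
            if self._waiters:
                self.slow_calls += 1              # wake one parked waiter
                self._wait.notify()

m = FastpathMutex()
m.lock(); m.unlock()
print(m.slow_calls)  # 0: the uncontended path never left user space
```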
A capacity sharing and stealing strategy for open real-time systems
This paper focuses on the scheduling of tasks with hard and soft real-time constraints in open and dynamic real-time systems. It starts by presenting a capacity sharing and stealing (CSS) strategy that supports the coexistence of guaranteed and non-guaranteed bandwidth servers to efficiently handle soft tasks’ overloads by making additional capacity available from two sources: (i) reclaiming unused reserved capacity when jobs complete in less than their budgeted execution time and (ii) stealing reserved capacity from inactive non-isolated servers used to schedule best-effort jobs.
CSS is then combined with the concept of bandwidth inheritance to efficiently exchange reserved bandwidth among sets of inter-dependent tasks which share resources and exhibit precedence constraints, assuming no previous information on critical sections and computation times is available. The proposed Capacity Exchange Protocol (CXP) achieves better performance and lower overhead than other available solutions and introduces a novel approach to integrating precedence constraints among tasks of open real-time systems.
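The two capacity sources in item (i) and (ii) can be sketched as follows (the server fields and the aggregation below are assumptions of this example, not the paper's formal model): an overloaded soft task may draw on the residual budget of servers whose jobs finished early, plus the budget of inactive non-isolated (best-effort) servers.

```python
# Illustrative sketch of CSS's two extra capacity sources: reclaiming and
# stealing. Field names and the flat aggregation are assumptions.

class BudgetServer:
    def __init__(self, name, budget, isolated=True, active=True):
        self.name = name
        self.budget = budget      # reserved capacity per period
        self.residual = budget    # unused part of the current reservation
        self.isolated = isolated  # isolated servers cannot be stolen from
        self.active = active

def extra_capacity(servers):
    """Capacity an overloaded soft task may use beyond its own budget:
    (i) residual capacity reclaimed from servers that finished early, plus
    (ii) capacity stolen from inactive non-isolated servers."""
    reclaimed = sum(s.residual for s in servers if s.active and s.residual > 0)
    stolen = sum(s.budget for s in servers if not s.active and not s.isolated)
    return reclaimed + stolen

s1 = BudgetServer("early-finisher", budget=4); s1.residual = 1  # used 3 of 4
s2 = BudgetServer("best-effort", budget=2, isolated=False, active=False)
s3 = BudgetServer("busy", budget=5); s3.residual = 0
print(extra_capacity([s1, s2, s3]))  # 1 reclaimed + 2 stolen = 3
```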
Evolving real-time systems using hierarchical scheduling and concurrency analysis
We have developed a new way to look at real-time and embedded software: as a collection of execution environments created by a hierarchy of schedulers. Common schedulers include those that run interrupts, bottom-half handlers, threads, and events. We have created algorithms for deriving response times, scheduling overheads, and blocking terms for tasks in systems containing multiple execution environments. We have also created task scheduler logic, a formalism that permits checking systems for race conditions and other errors. Concurrency analysis of low-level software is challenging because there are typically several kinds of locks, such as thread mutexes and disabled interrupts, and groups of cooperating tasks may need to acquire some, all, or none of the available types of locks to create correct software. Our high-level goal is to create systems that are evolvable: easier to modify in response to changing requirements than systems created using traditional techniques. We have applied our approach to two case studies in evolving software for networked sensor nodes.
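As a minimal stand-in for the response-time derivations mentioned above, the classic fixed-point recurrence for fixed-priority tasks can be sketched as below (this is the standard single-level analysis, not the paper's multi-environment generalisation; task parameters are made up):

```python
import math

# Classic response-time analysis: the response time R of a task with WCET C
# is the smallest fixed point of R = C + sum(ceil(R / T_i) * C_i) over all
# higher-priority tasks (C_i, T_i). The hierarchical version extends terms
# like these with per-environment overheads and blocking.

def response_time(wcet, higher_prio, deadline=10_000):
    """Smallest fixed point of the recurrence, or None if it exceeds
    the given deadline bound (i.e. the task is unschedulable)."""
    r = wcet
    while r <= deadline:
        r_next = wcet + sum(math.ceil(r / t) * c for c, t in higher_prio)
        if r_next == r:
            return r
        r = r_next
    return None

# Task with C=2 preempted by two higher-priority tasks (C=1,T=4), (C=1,T=5).
print(response_time(2, [(1, 4), (1, 5)]))  # 4
```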
Sustainability in static-priority restricted-migration scheduling
In this paper, we focus on the static-priority scheduling of periodic hard real-time tasks upon identical multiprocessor platforms. In order to bound the inter-processor migrations, we consider the restricted-migration scheduling policy, for which a task is allowed to migrate only at job boundaries. Several jobs of the same task can then be assigned to different processors, but a given job cannot migrate. It has been shown that this scheduling policy can suffer from scheduling anomalies. These anomalies occur when a decrease in the execution requirement of a job causes a deadline miss. We present a static-priority restricted-migration scheduling algorithm and prove that it does not suffer from these anomalies. We also review the scheduling anomalies with respect to the scheduling tests for this algorithm.
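The restricted-migration constraint can be sketched as a job-level binding decision (the least-loaded placement rule below is an assumption for illustration, not the paper's algorithm): each job is bound to one processor at release, so successive jobs of a task may land on different processors while no job ever moves mid-execution.

```python
# Sketch of restricted-migration assignment: migration happens only at job
# boundaries. The least-loaded placement heuristic is illustrative only.

def assign_job(proc_loads, wcet):
    """Bind a newly released job to the currently least-loaded processor;
    the job stays there until completion."""
    target = min(range(len(proc_loads)), key=lambda p: proc_loads[p])
    proc_loads[target] += wcet
    return target

loads = [0.0, 0.0]                                   # two processors
placements = [assign_job(loads, w) for w in [3, 2, 2]]  # 3 jobs of one task
print(placements)  # [0, 1, 1]: the task migrates, but only between jobs
```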
A few what-ifs on using statistical analysis of stochastic simulation runs to extract timeliness properties
Modern real-time systems, with a more flexible and adaptive nature, demand approaches for timeliness evaluation based on probabilistic measures of meeting deadlines. In this context, simulation can emerge as an adequate solution to understand and analyse the timing behaviour of actual systems. However, care must be taken with the obtained outputs; otherwise the results will lack credibility. It is particularly important to consider that we are more interested in values from the tail of a probability distribution (near worst-case probabilities) than in deriving confidence on mean values. We approach this subject by considering the random nature of simulation output data. We start by discussing well-known approaches for estimating distributions out of simulation output, and the confidence which can be applied to their mean values. This is the basis for a discussion on the applicability of such approaches to derive confidence on the tail of distributions, where the worst-case is expected to be.
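The mean-versus-tail concern can be made concrete with a small synthetic experiment (the data below is generated, not from the paper): a tail quantile of simulated response times sits far above the mean, and estimating it well requires many more samples, or extreme-value methods, than estimating the mean does.

```python
import random

# Synthetic illustration: confidence about the mean of simulation output
# says little about the tail, where deadline-miss probabilities live.

random.seed(1)
samples = sorted(random.expovariate(1.0) for _ in range(10_000))

mean = sum(samples) / len(samples)
q999 = samples[int(0.999 * len(samples))]  # empirical 99.9th percentile

print(f"mean={mean:.2f}  tail(99.9%)={q999:.2f}")
# Only ~10 of the 10,000 samples lie beyond the 99.9th percentile, so the
# tail estimate rests on far less data than the mean estimate does.
```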