Capacity sharing and stealing in server-based real-time systems
A dynamic scheduler that supports the coexistence of guaranteed and non-guaranteed bandwidth servers is proposed.
Overloads are handled by efficiently reclaiming residual capacities originated by early completions, and by allowing reserved capacity to be stolen from non-guaranteed bandwidth servers. The proposed dynamic budget accounting mechanism ensures that, at any given time, the currently executing server is consuming either a residual capacity, its own capacity, or stolen reserved capacity, eliminating the need for additional server states or unbounded queues. The server to be charged by the budget accounting is determined dynamically at the instant a capacity is needed. This paper describes and evaluates the proposed scheduling algorithm, showing that it can efficiently reduce the mean tardiness of periodic jobs. The achieved results become even more significant when tasks’ computation times have a large variance.
The capacity exchange protocol
This paper proposes a new strategy to integrate shared resources and precedence constraints among real-time tasks, assuming
no precise information on critical sections and computation times is available. The concept of bandwidth inheritance
is combined with a capacity sharing and stealing mechanism to efficiently exchange bandwidth among tasks to minimise the
degree of deviation from the ideal system’s behaviour caused by inter-application blocking.
The proposed Capacity Exchange Protocol (CXP) is simpler than other solutions proposed for sharing resources in open real-time systems, since it does not attempt to return the inherited capacity to blocked servers in the same exact amount. This loss of optimality is worth the reduced complexity: as demonstrated by extensive simulations, the protocol's behaviour nevertheless tends to be fair, and it outperforms previous solutions in highly dynamic scenarios.
A formal analysis of CXP is presented, and the conditions under which hard real-time tasks can be guaranteed are discussed.
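The core exchange idea, a lock holder executing on a blocked server's inherited capacity without exact repayment, can be illustrated roughly as follows. `CxpServer` and `run_critical_section` are invented names, and the policy shown is a simplification for illustration, not the actual CXP rules.

```python
class CxpServer:
    """Illustrative server state for a bandwidth-inheritance sketch."""
    def __init__(self, name: str, budget: float):
        self.name = name
        self.budget = budget

def run_critical_section(holder: CxpServer, blocked: CxpServer,
                         cost: float) -> float:
    """While `blocked` waits on a resource that `holder` owns, the
    holder inherits and consumes the blocked server's capacity
    (bandwidth inheritance).  In the spirit of CXP, the consumed
    amount is only recorded as an aggregate exchange rather than
    being repaid to the blocked server in the same exact amount."""
    used_from_blocked = min(cost, blocked.budget)
    blocked.budget -= used_from_blocked
    holder.budget -= cost - used_from_blocked   # holder pays any remainder
    return used_from_blocked                    # exchanged bandwidth
```

Dropping exact repayment removes the per-server bookkeeping that stricter protocols need, which is the complexity/optimality trade-off the abstract argues for.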
From simulation to statistical analysis: timeliness assessment of ethernet/IP-based distributed systems
Several characteristics are fuelling the interest in extending Ethernet to also
cover factory-floor distributed real-time applications: full-duplex links, non-blocking and
priority-based switching, and bandwidth availability, to mention just a few. But will
Ethernet technologies really manage to replace traditional Fieldbus networks? Ethernet
technology, by itself, does not include features above the lower layers of the OSI
communication model. In the past few years, a considerable amount of work has been
devoted to the timing analysis of Ethernet-based technologies. However, the majority of
those works are restricted to the analysis of subsets of the overall computing and
communication system, and thus do not address timeliness at a holistic level.
To this end, we are addressing a few inter-linked research topics with the purpose of
setting a framework for the development of tools suitable to extract temporal properties of
Commercial-Off-The-Shelf (COTS) Ethernet-based factory-floor distributed systems. This
framework is being applied to a specific COTS technology, Ethernet/IP. In this paper, we
reason about the modelling and simulation of Ethernet/IP-based systems, and on the use of
statistical analysis techniques to provide usable results. Discrete event simulation models of
a distributed system can be a powerful tool for the timeliness evaluation of the overall
system, but particular care must be taken with the results provided by traditional statistical
analysis techniques.
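One reason such care is needed is that successive samples from a discrete event simulation (e.g. consecutive frame response times) are autocorrelated, so a naive confidence interval computed over the raw samples is too narrow. A standard remedy, shown here as a generic sketch rather than the tool described in the paper, is the method of batch means:

```python
import math
import statistics

def batch_means_ci(samples, n_batches: int = 10, t_value: float = 2.262):
    """Confidence interval for the mean of autocorrelated simulation
    output via batch means: split the run into contiguous batches and
    treat the batch averages as (approximately) independent
    observations.  `t_value` is the Student-t quantile for
    n_batches - 1 degrees of freedom; 2.262 corresponds to 95%
    confidence with 10 batches."""
    batch_size = len(samples) // n_batches
    means = [
        statistics.fmean(samples[i * batch_size:(i + 1) * batch_size])
        for i in range(n_batches)
    ]
    grand_mean = statistics.fmean(means)
    half_width = t_value * statistics.stdev(means) / math.sqrt(n_batches)
    return grand_mean, half_width
```

Applying the same t-based interval directly to correlated raw samples would understate the variance of the mean, which is exactly the pitfall of "traditional statistical analysis techniques" the abstract warns about.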
A survey of techniques for reducing interference in real-time applications on multicore platforms
This survey reviews the scientific literature on techniques for reducing interference in real-time multicore systems, focusing on the approaches proposed between 2015 and 2020. It also presents proposals that use interference reduction techniques without considering the predictability issue. The survey highlights interference sources and categorizes proposals from the perspective of the shared resource. It covers techniques for reducing contention in main memory, cache memory and the memory bus, as well as the integration of interference effects into schedulability analysis. Every section contains an overview of each proposal and an assessment of its advantages and disadvantages.
This work was supported in part by the Comunidad de Madrid Government "Nuevas Técnicas de Desarrollo de Software de Tiempo Real Embarcado Para Plataformas MPSoC de Próxima Generación" under Grant IND2019/TIC-17261.
Challenges in real-time virtualization and predictable cloud computing
Cloud computing and virtualization technology have revolutionized general-purpose computing applications in the past decade. The cloud paradigm offers advantages through reduction of operation costs, server consolidation, flexible system configuration and elastic resource provisioning. However, despite the success of cloud computing for general-purpose computing, existing cloud computing and virtualization technology face tremendous challenges in supporting emerging soft real-time applications such as online video streaming, cloud-based gaming, and telecommunication management. These applications demand real-time performance in open, shared and virtualized computing environments. This paper identifies the technical challenges in supporting real-time applications in the cloud, surveys recent advances in real-time virtualization and cloud computing technology, and offers research directions to enable cloud-based real-time applications in the future.
The COMQUAD Component Container Architecture and Contract Negotiation
Component-based applications require runtime support to be able to guarantee non-functional properties. This report proposes an architecture for a real-time-capable, component-based runtime environment, which makes it possible to separate non-functional and functional concerns in component-based software development. The architecture is presented with particular focus on three key issues: the conceptual architecture, an approach (including implementation issues) for splitting the runtime environment into a real-time-capable and a real-time-incapable part, and details of contract negotiation. The latter includes selecting component implementations for instantiation based on their non-functional properties.
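The selection step of contract negotiation can be pictured as matching offered against required non-functional properties. The following is a hypothetical sketch; the function name, the dictionary encoding, and the "smaller is better" convention are assumptions of this illustration, not the COMQUAD API.

```python
def select_implementation(implementations, required):
    """Pick the first component implementation whose offered
    non-functional properties satisfy every requirement.  Properties
    are encoded as {name: value} with smaller values being better,
    e.g. an offer of {"latency_ms": 5} satisfies a requirement of
    {"latency_ms": 20}.  Returns None if negotiation fails."""
    for name, offered in implementations:
        if all(key in offered and offered[key] <= bound
               for key, bound in required.items()):
            return name
    return None
```

A real negotiation would also reserve resources for the selected contract; the sketch covers only the implementation-selection decision mentioned in the abstract.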
A Survey of Research into Mixed Criticality Systems
This survey covers research into mixed criticality systems that has been published since Vestal’s seminal paper in 2007, up until the end of 2016. The survey is organised along the lines of the major research areas within this topic. These include single processor analysis (including fixed priority and EDF scheduling, shared resources and static and synchronous scheduling), multiprocessor analysis, realistic models, and systems issues. The survey also explores the relationship between research into mixed criticality systems and other topics such as hard and soft time constraints, fault tolerant scheduling, hierarchical scheduling, cyber physical systems, probabilistic real-time systems, and industrial safety standards.
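The Vestal model underlying this research area gives each task one WCET estimate per criticality level. A minimal sketch of the dual-criticality task model and the commonly studied "drop LO tasks on overrun" mode switch follows; the names are illustrative, not any specific paper's API.

```python
from dataclasses import dataclass

@dataclass
class MCTask:
    """Dual-criticality task in the Vestal model (illustrative):
    c_lo is the optimistic WCET estimate used in normal operation,
    c_hi the pessimistic estimate relevant to HI-criticality tasks."""
    name: str
    crit: str          # "LO" or "HI"
    c_lo: float
    c_hi: float

def tasks_after_mode_switch(tasks, mode):
    """Typical system behaviour surveyed in the mixed-criticality
    literature: in LO mode every task is admitted with its c_lo
    budget; after a switch to HI mode (triggered when some job
    overruns its c_lo), LO-criticality tasks are dropped and HI
    tasks are budgeted at c_hi."""
    if mode == "LO":
        return [(t.name, t.c_lo) for t in tasks]
    return [(t.name, t.c_hi) for t in tasks if t.crit == "HI"]
```

Much of the surveyed analysis work (e.g. AMC-style response-time analysis) is about proving that HI tasks meet deadlines across exactly this kind of mode change.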
Scheduling Self-Suspending Tasks: New and Old Results
In computing systems, a job may suspend itself (before it finishes its execution) when it has to wait for certain results from other (usually external) activities. For real-time systems, such self-suspension behavior has been shown to induce performance degradation. Hence, researchers in the real-time systems community have devoted themselves to the design and analysis of scheduling algorithms that can alleviate the performance penalty due to self-suspension behavior. As self-suspension and delegation of parts of a job to non-bottleneck resources is quite natural in many applications, researchers in the operations research (OR) community have also explored scheduling algorithms for systems with such suspension behavior, called the master-slave problem in the OR community.
This paper first reviews the results for the master-slave problem in the OR literature and explains their impact on several long-standing problems for scheduling self-suspending real-time tasks. For frame-based periodic real-time tasks, in which the periods of all tasks are identical and all jobs related to one frame are released synchronously, we explore different approximation metrics with respect to resource augmentation factors under different scenarios for both uniprocessor and multiprocessor systems, and demonstrate that different approximation metrics can create different levels of difficulty for the approximation. Our experimental results show that such carefully designed schedules can significantly outperform the state of the art.
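A common baseline in this literature, for exactly the frame-based setting described above, is suspension-oblivious analysis: count suspension time as if it were computation. It is safe but pessimistic, which is what suspension-aware schedules improve upon. A sketch, where the `(C, S)` encoding of each task as (worst-case execution time, worst-case self-suspension time) is an assumption of this illustration:

```python
def frame_feasible_oblivious(tasks, frame_len: float) -> bool:
    """Suspension-oblivious schedulability test for frame-based tasks
    on one processor: every task is a (C, S) pair, all jobs are
    released together and share the common deadline `frame_len`.
    Treating suspension as extra computation is safe but pessimistic;
    a schedule that overlaps one task's suspension with another
    task's execution can meet deadlines this test rejects."""
    return sum(c + s for c, s in tasks) <= frame_len
```

For example, two tasks (C=2, S=3) and (C=1, S=0) fail this test for a frame of length 5, even though interleaving the second task's execution with the first task's suspension may still meet the deadline; closing that gap is the point of the suspension-aware results the paper reviews.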