Utilization-Based Scheduling of Flexible Mixed-Criticality Real-Time Tasks
Mixed-criticality models are an emerging paradigm for the design of real-time
systems because of their significantly improved resource efficiency. However,
formal mixed-criticality models have traditionally been characterized by two
impractical assumptions: once \textit{any} high-criticality task overruns,
\textit{all} low-criticality tasks are suspended and \textit{all other}
high-criticality tasks are assumed to exhibit high-criticality behaviors at the
same time. In this paper, we propose a more realistic mixed-criticality model,
called the flexible mixed-criticality (FMC) model, in which these two issues
are addressed in a combined manner. In this new model, only the overrun task
itself is assumed to exhibit high-criticality behavior, while other
high-criticality tasks remain in the same mode as before. The guaranteed
service levels of low-criticality tasks are gracefully degraded with the
overruns of high-criticality tasks. We derive a utilization-based technique to
analyze the schedulability of this new mixed-criticality model under EDF-VD
scheduling. During runtime, the proposed test condition serves as an important
criterion for dynamic service-level tuning, by means of which the maximum
available execution budget for low-criticality tasks can be directly determined
with minimal overhead while guaranteeing mixed-criticality schedulability.
Experiments demonstrate the effectiveness of the FMC scheme compared with
state-of-the-art techniques.Comment: This paper has been submitted to IEEE Transaction on Computers (TC)
on Sept-09th-201
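The abstract's utilization-based analysis builds on EDF-VD's virtual-deadline scaling. As a sketch of the background only, the following implements the classical EDF-VD sufficient test (not the paper's FMC-specific condition, which is not given here); the function name and structure are illustrative assumptions:

```python
def edf_vd_scaling_factor(u_lo_lo, u_hi_lo, u_hi_hi):
    """Classical EDF-VD sufficient schedulability test (sketch).

    u_lo_lo: total utilization of LO-criticality tasks (LO-mode WCETs)
    u_hi_lo: total utilization of HI-criticality tasks at LO-mode WCETs
    u_hi_hi: total utilization of HI-criticality tasks at HI-mode WCETs

    Returns the virtual-deadline scaling factor x if the task set passes
    the test, else None.
    """
    if u_lo_lo >= 1.0:
        return None  # no slack left to shrink HI tasks' virtual deadlines
    # LO-mode feasibility: u_lo_lo + u_hi_lo / x <= 1  =>  x >= u_hi_lo / (1 - u_lo_lo)
    x = u_hi_lo / (1.0 - u_lo_lo)
    # HI-mode feasibility with carried-over LO demand: x * u_lo_lo + u_hi_hi <= 1
    if x <= 1.0 and x * u_lo_lo + u_hi_hi <= 1.0:
        return x
    return None
```

For example, a task set with `u_lo_lo=0.3`, `u_hi_lo=0.2`, `u_hi_hi=0.5` passes with `x = 0.2/0.7`, while `u_lo_lo=0.6`, `u_hi_lo=0.3`, `u_hi_hi=0.9` fails the HI-mode condition.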
A Lazy Bailout Approach for Dual-Criticality Systems on Uniprocessor Platforms
© 2019 by the authors. Licensee MDPI, Basel, Switzerland.

A challenge in the design of cyber-physical systems is to integrate the scheduling of tasks of different criticality while still providing service guarantees for the more critical tasks in case of resource shortages caused by faults. While standard real-time scheduling is agnostic to the criticality of tasks, the scheduling of tasks with different criticalities is called mixed-criticality scheduling. In this paper we present the Lazy Bailout Protocol (LBP), a mixed-criticality scheduling method in which low-criticality jobs overrunning their time budget cannot threaten the timeliness of high-criticality jobs, while at the same time the method tries to complete as many low-criticality jobs as possible. The key principle of LBP is to put low-criticality jobs into a low-priority queue for later execution, instead of abandoning them immediately when a high-criticality job overruns its optimistic WCET estimate. To compare mixed-criticality scheduling methods, we introduce a formal quality criterion for mixed-criticality scheduling which, above all else, compares the schedulability of high-criticality jobs and only afterwards the schedulability of low-criticality jobs. Based on this criterion we prove that LBP behaves better than the original {\em Bailout Protocol} (BP). We show that LBP can be further improved by slack-time exploitation and by gain-time collection at runtime, resulting in LBPSG. We also show that these improvements of LBP perform better than the analogous improvements based on BP.
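The demotion idea at the heart of LBP can be sketched as a small queue structure. This is an illustrative reconstruction of the mechanism described in the abstract, not the authors' implementation; all names are hypothetical:

```python
from collections import deque

class LazyBailoutQueue:
    """Sketch of the lazy-bailout idea: when a HI-criticality job overruns
    its optimistic WCET, pending LO-criticality jobs are parked in a
    low-priority queue rather than abandoned, and run only once the
    normal queue is idle."""

    def __init__(self):
        self.normal = deque()   # jobs scheduled normally
        self.demoted = deque()  # LO jobs parked after a HI overrun

    def release(self, job):
        self.normal.append(job)

    def on_hi_overrun(self):
        # Lazy bailout: park pending LO jobs instead of dropping them.
        kept = deque()
        while self.normal:
            job = self.normal.popleft()
            (self.demoted if job["crit"] == "LO" else kept).append(job)
        self.normal = kept

    def next_job(self):
        if self.normal:
            return self.normal.popleft()
        if self.demoted:  # idle time: recover parked LO jobs
            return self.demoted.popleft()
        return None
```

After an overrun, HI jobs are served first and parked LO jobs only drain in otherwise-idle time, which is exactly the "complete as many low-criticality jobs as possible" behaviour the abstract describes.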
Mapping Self-Organized Criticality onto Criticality
We present a general conceptual framework for self-organized criticality
(SOC), based on the recognition that it is nothing but the expression,
``unfolded'' in a suitable parameter space, of an underlying {\em unstable}
dynamical critical point. More precisely, SOC is shown to result from the
tuning of the {\em order parameter} to a vanishingly small, but {\em positive}
value, thus ensuring that the corresponding control parameter lies exactly at
its critical value for the underlying transition. This clarifies the role and
nature of the {\em very slow driving rate} common to all systems exhibiting
SOC. This mechanism is shown to apply to models of sandpiles, earthquakes,
depinning, fractal growth and forest-fires, which have been proposed as
examples of SOC.
Comment: 17 pages total
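Of the models the abstract lists, the sandpile is the simplest to state concretely. A minimal Bak-Tang-Wiesenfeld sandpile sketch (parameters and function name are illustrative, not from the paper):

```python
import random

def btw_sandpile(n=20, grains=2000, seed=0):
    """Minimal Bak-Tang-Wiesenfeld sandpile on an n x n grid: drop grains
    one at a time at random sites; any site holding >= 4 grains topples,
    sending one grain to each of its four neighbours (grains leaving the
    edge are lost). Returns the avalanche size (number of topplings)
    triggered by each dropped grain."""
    rng = random.Random(seed)
    grid = [[0] * n for _ in range(n)]
    sizes = []
    for _ in range(grains):
        i, j = rng.randrange(n), rng.randrange(n)
        grid[i][j] += 1
        topples = 0
        unstable = [(i, j)] if grid[i][j] >= 4 else []
        while unstable:
            x, y = unstable.pop()
            if grid[x][y] < 4:
                continue
            grid[x][y] -= 4
            topples += 1
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nx, ny = x + dx, y + dy
                if 0 <= nx < n and 0 <= ny < n:
                    grid[nx][ny] += 1
                    if grid[nx][ny] >= 4:
                        unstable.append((nx, ny))
        sizes.append(topples)
    return sizes
```

The very slow driving rate the abstract emphasises corresponds here to adding one grain at a time and letting each avalanche finish before the next grain arrives.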
Adaptation to criticality through organizational invariance in embodied agents
Many biological and cognitive systems do not operate deep within one or other
regime of activity. Instead, they are poised at critical points located at
phase transitions in their parameter space. The pervasiveness of criticality
suggests that there may be general principles inducing this behaviour, yet
there is no well-founded theory for understanding how criticality is generated
at a wide span of levels and contexts. In order to explore how criticality
might emerge from general adaptive mechanisms, we propose a simple learning
rule that maintains an internal organizational structure from a specific family
of systems at criticality. We implement the mechanism in artificial embodied
agents controlled by a neural network maintaining a correlation structure
randomly sampled from an Ising model at critical temperature. Agents are
evaluated in two classical reinforcement learning scenarios: the Mountain Car
and the Acrobot double pendulum. In both cases the neural controller appears to
reach a point of criticality, which coincides with a transition point between
two regimes of the agent's behaviour. These results suggest that adaptation to
criticality could be used as a general adaptive mechanism in some
circumstances, providing an alternative explanation for the pervasive presence
of criticality in biological and cognitive systems.
Comment: arXiv admin note: substantial text overlap with arXiv:1704.0525
- …