Mapping AADL models to a repository of multiple schedulability analysis techniques
To fill the gap between the modeling of real-time systems and scheduling analysis, we propose a framework that seamlessly supports both aspects: 1) modeling a system using a methodology, in our case study the Architecture Analysis and Design Language (AADL), and 2) helping to easily check temporal requirements (schedulability analysis, worst-case response time, sensitivity analysis, etc.). We introduce an intermediate framework called MoSaRT, which supports rich semantics for temporal analysis. We show with a case study how the input model is transformed into a MoSaRT model, and how our framework is able to generate the proper models as inputs to several classic temporal analysis tools.
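For context, one of the classic temporal checks that such analysis tools automate is the fixed-priority worst-case response-time recurrence of Joseph and Pandya. The sketch below is a minimal illustration with a hypothetical three-task set, not MoSaRT's actual interface:

```python
# Classic worst-case response-time analysis for fixed-priority preemptive
# tasks: iterate R = C + sum(ceil(R/T_j) * C_j) over higher-priority tasks
# until a fixed point, or until the deadline is exceeded.
import math

def response_time(task, higher_prio):
    r = task["C"]
    while True:
        interference = sum(math.ceil(r / t["T"]) * t["C"] for t in higher_prio)
        r_next = task["C"] + interference
        if r_next == r:
            return r
        if r_next > task["D"]:
            return None  # unschedulable: response time exceeds deadline
        r = r_next

# Hypothetical task set (C = WCET, T = period, D = deadline), listed in
# decreasing priority order.
tasks = [{"C": 1, "T": 4, "D": 4}, {"C": 2, "T": 6, "D": 6}, {"C": 3, "T": 12, "D": 12}]
for i, t in enumerate(tasks):
    print(response_time(t, tasks[:i]))  # -> 1, 3, 10 (all within deadlines)
```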
A Lazy Bailout Approach for Dual-Criticality Systems on Uniprocessor Platforms
A challenge in the design of cyber-physical systems is to integrate the scheduling of tasks of different criticality while still providing service guarantees for the more critical tasks in case of resource shortages caused by faults. While standard real-time scheduling is agnostic to the criticality of tasks, the scheduling of tasks with different criticalities is called mixed-criticality scheduling. In this paper we present the Lazy Bailout Protocol (LBP), a mixed-criticality scheduling method in which low-criticality jobs overrunning their time budget cannot threaten the timeliness of high-criticality jobs, while at the same time the method tries to complete as many low-criticality jobs as possible. The key principle of LBP is, instead of immediately abandoning low-criticality jobs when a high-criticality job overruns its optimistic WCET estimate, to put them in a low-priority queue for later execution. To compare mixed-criticality scheduling methods we introduce a formal quality criterion which, above all else, compares the schedulability of high-criticality jobs and only afterwards the schedulability of low-criticality jobs. Based on this criterion we prove that LBP behaves better than the original Bailout Protocol (BP). We show that LBP can be further improved by slack-time exploitation and by gain-time collection at runtime, resulting in LBPSG. We also show that these improvements of LBP perform better than the analogous improvements based on BP.
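To illustrate the key principle (not the authors' implementation), a minimal sketch of the demotion step might look as follows; the job representation and queue discipline are assumptions:

```python
# Sketch of the LBP core idea: when a high-criticality job overruns its
# optimistic (LO) WCET budget, pending low-criticality jobs are demoted to
# a background queue instead of being abandoned outright; they run later
# if slack or gain time allows.
from collections import deque

class LazyBailoutQueue:
    def __init__(self):
        self.ready = []            # normal mixed-criticality ready queue
        self.background = deque()  # demoted LO jobs awaiting spare capacity

    def on_overrun(self, hi_job):
        """A HI job exceeded its LO budget: demote LO jobs, don't drop them."""
        lo_jobs = [j for j in self.ready if j["crit"] == "LO"]
        self.ready = [j for j in self.ready if j["crit"] == "HI"]
        self.background.extend(lo_jobs)

    def on_idle(self):
        """Spare capacity (slack/gain time): resume one demoted LO job."""
        return self.background.popleft() if self.background else None
```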
A static scheduling approach to enable safety-critical OpenMP applications
Parallel computation is fundamental to satisfying the performance requirements of advanced safety-critical systems. OpenMP is a good candidate to exploit the performance opportunities of parallel platforms. However, safety-critical systems are often based on static allocation strategies, whereas current OpenMP implementations rely on dynamic schedulers. This paper proposes two OpenMP-compliant static allocation approaches: an optimal but costly approach based on an ILP formulation, and a sub-optimal but tractable approach that computes a worst-case makespan bound close to the optimal one. This work is funded by the EU projects P-SOCRATES (FP7-ICT-2013-10) and HERCULES (H2020/ICT/2015/688860), and the Spanish Ministry of Science and Innovation under contract TIN2015-65316-P.
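As a rough illustration of static allocation, the sketch below greedily list-schedules a small task graph onto m cores and reports the resulting worst-case makespan bound. The paper's ILP and heuristic are considerably more refined, and the task graph here is invented:

```python
# Hedged sketch of static list scheduling of an OpenMP-style task graph
# onto m cores, computing a worst-case makespan bound.
import heapq

def static_makespan(tasks, deps, m):
    """tasks: {name: wcet}; deps: {name: set of predecessors}; m cores."""
    finish = {}
    cores = [(0.0, i) for i in range(m)]  # (available_at, core_id)
    heapq.heapify(cores)
    remaining = dict(deps)
    while remaining:
        # pick any ready task (all predecessors already finished)
        name = next(t for t, p in remaining.items() if p.issubset(finish))
        del remaining[name]
        avail, core = heapq.heappop(cores)
        start = max(avail, max((finish[p] for p in deps[name]), default=0.0))
        finish[name] = start + tasks[name]
        heapq.heappush(cores, (finish[name], core))
    return max(finish.values())

tasks = {"a": 2, "b": 3, "c": 2, "d": 1}
deps = {"a": set(), "b": {"a"}, "c": {"a"}, "d": {"b", "c"}}
print(static_makespan(tasks, deps, m=2))  # -> 6.0
```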
k2U: A General Framework from k-Point Effective Schedulability Analysis to Utilization-Based Tests
To deal with a large variety of workloads in different application domains of real-time embedded systems, a number of expressive task models have been developed. For each individual task model, researchers tend to develop different types of techniques for deriving schedulability tests with different computational complexity and performance. In this paper, we present a general schedulability analysis framework, namely the k2U framework, that can potentially be applied to analyze a large set of real-time task models under any fixed-priority scheduling algorithm, on both uniprocessor and multiprocessor platforms. The key to k2U is a k-point effective schedulability test, which can be viewed as a "blackbox" interface. For any task model, if a corresponding k-point effective schedulability test can be constructed, then a sufficient utilization-based test can be automatically derived. We show the generality of k2U by applying it to different task models, which results in new and improved tests compared to the state of the art.
Analogously, a similar concept of testing only k points with a different formulation has been studied by us in another framework, called k2Q, which provides quadratic bounds or utilization bounds based on a different formulation of the schedulability test. With their quadratic and hyperbolic forms, the k2Q and k2U frameworks can be used to quantify many features, such as total utilization bounds and speedup factors, not only for uniprocessor scheduling but also for multiprocessor scheduling. These frameworks can be viewed as a "blackbox" interface for schedulability tests and response-time analysis.
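A concrete instance of the hyperbolic-form tests that k2U generalizes is the well-known hyperbolic bound of Bini and Buttazzo for rate-monotonic scheduling, sketched below with illustrative utilizations:

```python
# Hyperbolic bound for rate-monotonic scheduling on a uniprocessor:
# a task set with utilizations U_i is schedulable if prod(U_i + 1) <= 2.
from math import prod

def hyperbolic_rm_test(utilizations):
    return prod(u + 1 for u in utilizations) <= 2.0

print(hyperbolic_rm_test([0.30, 0.30, 0.20]))  # 1.3*1.3*1.2 = 2.028 -> False
print(hyperbolic_rm_test([0.25, 0.25, 0.20]))  # 1.25*1.25*1.2 = 1.875 -> True
```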
Scheduling policies and system software architectures for mixed-criticality computing
The mixed-criticality model of computation is being increasingly adopted in timing-sensitive systems. The model not only ensures that the most critical tasks in a system never fail, but also aims for better system resource utilization under normal conditions. In this report, we describe the widely used mixed-criticality task model and fixed-priority scheduling algorithms for the model on uniprocessors. Because the mixed-criticality task model and its scheduling policies demand it, temporal and spatial isolation among tasks is one of the main requirements from the system design point of view. Different virtualization techniques have been used to design system software architectures with the goal of isolation. We discuss a few such system software architectures that are being used, or could be used, for the mixed-criticality model of computation.
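A minimal sketch of the widely used dual-criticality (Vestal-style) task model described here, with the mode-switch rule that motivates the isolation requirement; the field names are illustrative:

```python
# Dual-criticality task model: each task has an optimistic (LO) and a
# pessimistic (HI) WCET estimate; on an overrun the system switches to
# HI mode and drops (or degrades) LO-criticality tasks.
from dataclasses import dataclass

@dataclass
class MCTask:
    period: float
    crit: str        # "LO" or "HI"
    wcet_lo: float   # optimistic budget, used in LO mode
    wcet_hi: float   # pessimistic budget, only relevant for HI tasks

def tasks_in_mode(tasks, mode):
    """Task set the scheduler guarantees in each system mode."""
    if mode == "LO":
        return tasks                             # all tasks, LO budgets
    return [t for t in tasks if t.crit == "HI"]  # HI mode: HI tasks only
```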
Utilization-Based Scheduling of Flexible Mixed-Criticality Real-Time Tasks
Mixed-criticality models are an emerging paradigm for the design of real-time systems because of their significantly improved resource efficiency. However, formal mixed-criticality models have traditionally been characterized by two impractical assumptions: once any high-criticality task overruns, all low-criticality tasks are suspended and all other high-criticality tasks are assumed to exhibit high-criticality behaviors at the same time. In this paper, we propose a more realistic mixed-criticality model, called the flexible mixed-criticality (FMC) model, in which these two issues are addressed in a combined manner. In this new model, only the overrunning task itself is assumed to exhibit high-criticality behavior, while other high-criticality tasks remain in the same mode as before. The guaranteed service levels of low-criticality tasks are gracefully degraded with the overruns of high-criticality tasks. We derive a utilization-based technique to analyze the schedulability of this new mixed-criticality model under EDF-VD scheduling. During runtime, the proposed test condition serves as an important criterion for dynamic service-level tuning, by means of which the maximum available execution budget for low-criticality tasks can be directly determined with minimal overhead while guaranteeing mixed-criticality schedulability. Experiments demonstrate the effectiveness of the FMC scheme compared with state-of-the-art techniques.
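For reference, the classical sufficient EDF-VD test of Baruah et al., which FMC-style analyses build on, can be sketched as follows; the utilization values in the example are invented:

```python
# Classical EDF-VD sufficient test for dual-criticality implicit-deadline
# task sets: HI-task deadlines are scaled by x in LO mode, with
#   x = u_hi_lo / (1 - u_lo_lo),
# and the set is schedulable if x * u_lo_lo + u_hi_hi <= 1.
def edf_vd_test(u_lo_lo, u_hi_lo, u_hi_hi):
    """u_<crit>_<mode>: total utilization of <crit> tasks under <mode> budgets."""
    if u_lo_lo + u_hi_hi <= 1:
        return True              # schedulable even without deadline scaling
    x = u_hi_lo / (1 - u_lo_lo)  # virtual-deadline scaling factor
    return x * u_lo_lo + u_hi_hi <= 1

# 0.4 + 0.7 > 1, but x = 0.3/0.6 = 0.5 and 0.5*0.4 + 0.7 = 0.9 <= 1 -> True
print(edf_vd_test(u_lo_lo=0.4, u_hi_lo=0.3, u_hi_hi=0.7))
```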
MorphoSys: efficient colocation of QoS-constrained workloads in the cloud
In hosting environments such as IaaS clouds, desirable application performance is usually guaranteed through the use of Service Level Agreements (SLAs), which specify minimal fractions of resource capacities that must be allocated for unencumbered use for proper operation. Arbitrary colocation of applications with different SLAs on a single host may result in inefficient utilization of the host's resources. In this paper, we propose that periodic resource allocation and consumption models -- often used to characterize real-time workloads -- be used for a more granular expression of SLAs. Our proposed SLA model has the salient feature that it exposes flexibilities that enable the infrastructure provider to safely transform SLAs from one form to another for the purpose of achieving more efficient colocation. Towards that goal, we present MORPHOSYS: a framework for a service that allows the manipulation of SLAs to enable efficient colocation of arbitrary workloads in a dynamic setting. We present results from extensive trace-driven simulations of colocated Video-on-Demand servers in a cloud setting. These results show that potentially significant reductions in wasted resources (by as much as 60%) are possible using MORPHOSYS. National Science Foundation (0720604, 0735974, 0820138, 0952145, 1012798).
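A much-simplified sketch of the periodic SLA idea: each SLA (C, T) reserves C time units every T, colocation is feasible when the reservations fit the host's capacity, and a rate-preserving transformation such as (C, T) -> (kC, kT) trades allocation granularity for packing flexibility. This is an assumption-laden simplification, not the MORPHOSYS algorithm:

```python
# Periodic SLA model: an SLA (C, T) consumes utilization C/T of a host.
def fits(slas, capacity=1.0):
    """slas: list of (C, T) reservations; True if the host can colocate them."""
    return sum(c / t for c, t in slas) <= capacity

slas = [(1, 4), (2, 10), (3, 12)]  # utilizations 0.25 + 0.20 + 0.25 = 0.70
print(fits(slas))                  # -> True
print(fits(slas + [(2, 5)]))       # -> False (0.70 + 0.40 = 1.10 > 1.0)
```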
Human-Machine Collaborative Optimization via Apprenticeship Scheduling
Coordinating agents to complete a set of tasks with intercoupled temporal and resource constraints is computationally challenging, yet human domain experts can solve these difficult scheduling problems using paradigms learned through years of apprenticeship. A process for manually codifying this domain knowledge within a computational framework is necessary to scale beyond the "single-expert, single-trainee" apprenticeship model. However, human domain experts often have difficulty describing their decision-making processes, causing the codification of this knowledge to become laborious. We propose a new approach for capturing domain-expert heuristics through a pairwise ranking formulation. Our approach is model-free and does not require enumerating or iterating through a large state space. We empirically demonstrate that this approach accurately learns multifaceted heuristics on a synthetic data set incorporating job-shop scheduling and vehicle routing problems, as well as on two real-world data sets consisting of demonstrations of experts solving a weapon-to-target assignment problem and a hospital resource allocation problem. We also demonstrate that policies learned from human scheduling demonstrations via apprenticeship learning can substantially improve the efficiency of a branch-and-bound search for an optimal schedule. We employ this human-machine collaborative optimization technique on a variant of the weapon-to-target assignment problem. We demonstrate that this technique generates solutions substantially superior to those produced by human domain experts at a rate up to 9.5 times faster than an optimization approach and can be applied to optimally solve problems twice as complex as those solved by a human demonstrator.
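A hedged sketch of the pairwise-ranking formulation on synthetic data: the expert's heuristic is modeled as a hidden linear scoring function, and a classifier is trained on feature differences between the action the expert chose and each alternative. The features, model choice (scikit-learn logistic regression), and data are all illustrative:

```python
# Pairwise ranking from demonstrations: classify feature differences
# (chosen - alternative) as "preferred", recovering a scoring function
# without enumerating the state space.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0, 0.5])          # hidden expert heuristic

X, y = [], []
for _ in range(200):                         # 200 synthetic demonstrations
    cands = rng.normal(size=(5, 3))          # 5 candidate actions, 3 features
    chosen = cands[np.argmax(cands @ w_true)]
    for alt in cands:
        if not np.array_equal(alt, chosen):
            X.append(chosen - alt); y.append(1)  # chosen preferred over alt
            X.append(alt - chosen); y.append(0)

model = LogisticRegression().fit(np.array(X), np.array(y))
# Learned weights recover the expert's heuristic up to scale:
print(model.coef_ / np.linalg.norm(model.coef_))
```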