Predictable migration and communication in the Quest-V multikernel
Quest-V is a system we have been developing from the ground up, with objectives focusing on safety, predictability, and efficiency. It is designed for emerging multicore processors with hardware virtualization support. Quest-V is implemented as a "distributed system on a chip" and comprises multiple sandbox kernels. Sandbox kernels are isolated from one another in separate regions of physical memory, each having access to a subset of processing cores and I/O devices. This partitioning prevents system failures in one sandbox from affecting the operation of other sandboxes. Shared-memory channels managed by system monitors enable inter-sandbox communication.
The distributed nature of Quest-V means each sandbox has a separate physical clock, with all event timings managed by per-core local timers. Each sandbox is responsible for its own scheduling and I/O management, without requiring the intervention of a hypervisor. In this paper, we formulate bounds on inter-sandbox communication in the absence of a global scheduler or global system clock. We also describe how address-space migration between sandboxes can be guaranteed without violating service constraints. Experimental results on a working system show the conditions under which Quest-V performs real-time communication and migration.

National Science Foundation (1117025)
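As an illustration of the kind of shared-memory channel the abstract describes, the sketch below shows a single-producer/single-consumer ring buffer of the sort one sandbox could use to pass messages to another. This is not Quest-V source code; the class name, capacity, and message are assumptions for the example.

```python
# Illustrative sketch (not Quest-V code): a single-producer /
# single-consumer ring buffer over a shared array, the simplest
# shape a shared-memory inter-sandbox channel can take.

class Channel:
    def __init__(self, capacity):
        self.buf = [None] * capacity
        self.capacity = capacity
        self.head = 0  # next slot the consumer reads
        self.tail = 0  # next slot the producer writes

    def send(self, msg):
        """Producer side; returns False when the channel is full."""
        if self.tail - self.head == self.capacity:
            return False
        self.buf[self.tail % self.capacity] = msg
        self.tail += 1
        return True

    def recv(self):
        """Consumer side; returns None when the channel is empty."""
        if self.head == self.tail:
            return None
        msg = self.buf[self.head % self.capacity]
        self.head += 1
        return msg

ch = Channel(4)
ch.send("migrate-request")
print(ch.recv())  # -> migrate-request
```

In a real multikernel the buffer would live in a physical memory region mapped into both sandboxes, with the monitor mediating setup; bounding the communication delay then comes down to how often each side's local timer lets it poll the channel.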
A Novel Approach to Multiagent based Scheduling for Multicore Architecture
In a multicore architecture, each package consists of a large number of processor cores. This increase in processor cores brings a new evolution in parallel computing. Besides enormous performance enhancement, the multicore package introduces many challenges and opportunities from the operating system scheduling point of view. Multiagent systems are concerned with the development and analysis of optimization problems; their main objective is to provide methodologies that let developers build complex systems capable of solving sophisticated problems that would be difficult for an individual agent to solve. In this paper we combine the AMAS theory of multiagent systems with the operating system scheduler to develop a new process scheduling algorithm for multicore architectures. This multiagent-based scheduling algorithm promises to minimize the average waiting time of the processes in the centralized queue and also reduces the work of the scheduler. We modified and simulated the Linux 2.6.11 kernel process scheduler to incorporate the multiagent system concept. The comparison is made for different numbers of cores with multiple combinations of processes, and the results are shown for average waiting time versus number of cores in the centralized queue.
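The metric the abstract optimizes, average waiting time in a centralized queue, can be made concrete with a toy comparison. The sketch below is not the paper's multiagent algorithm; it simply computes average waiting time for an invented set of burst times under arrival order (FCFS) versus shortest-job-first ordering, to show how service order moves the metric.

```python
# Toy illustration (not the paper's algorithm): average waiting time
# of processes served one after another from a single centralized queue.

def average_waiting_time(bursts):
    """Mean time each process waits before starting, given service order."""
    waiting, elapsed = 0, 0
    for burst in bursts:
        waiting += elapsed   # this process waited for everything before it
        elapsed += burst
    return waiting / len(bursts)

bursts = [8, 4, 2, 6]                       # made-up service times, arrival order
fcfs = average_waiting_time(bursts)         # -> 8.5
sjf = average_waiting_time(sorted(bursts))  # -> 5.0, shortest-job-first
print(fcfs, sjf)
```

Any scheduler that reorders the centralized queue, whether by agent negotiation or a simple heuristic, is evaluated against exactly this kind of figure.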
A survey of techniques for reducing interference in real-time applications on multicore platforms
This survey reviews the scientific literature on techniques for reducing interference in real-time multicore systems, focusing on the approaches proposed between 2015 and 2020. It also presents proposals that use interference-reduction techniques without considering the predictability issue. The survey highlights interference sources and categorizes proposals from the perspective of the shared resource. It covers techniques for reducing contention in main memory, cache memory, and the memory bus, and the integration of interference effects into schedulability analysis. Every section contains an overview of each proposal and an assessment of its advantages and disadvantages.

This work was supported in part by the Comunidad de Madrid Government "Nuevas Técnicas de Desarrollo de Software de Tiempo Real Embarcado para Plataformas MPSoC de Próxima Generación" under Grant IND2019/TIC-17261.
Perf&Fair: A Progress-Aware Scheduler to Enhance Performance and Fairness in SMT Multicores
Nowadays, high-performance multicore processors implement multithreading capabilities. The processes running concurrently on these processors are continuously competing for the shared resources, not only among cores but also within the core. While resource sharing increases resource utilization, the interference among processes accessing the shared resources can strongly affect the performance of individual processes and its predictability. In this scenario, process scheduling plays a key role in dealing with performance and fairness. In this work we present a process scheduler for SMT multicores that simultaneously addresses both performance and fairness. This is a major design issue, since scheduling for only one of the two targets tends to damage the other. To address performance, the scheduler tackles bandwidth contention at the L1 cache and main memory. To deal with fairness, the scheduler estimates the progress experienced by the processes and gives priority to the processes with lower accumulated progress. Experimental results on an Intel Xeon E5645 featuring six dual-threaded SMT cores show that the proposed scheduler improves both performance and fairness over two state-of-the-art schedulers and the Linux OS scheduler. Compared to Linux, unfairness is halved while performance still improves by 5.6 percent.

We thank the anonymous reviewers for their constructive and insightful feedback. This work was supported in part by the Spanish Ministerio de Economía y Competitividad (MINECO) and Plan E funds, under grants TIN2015-66972-C5-1-R and TIN2014-62246-EXP, and by the Intel Early Career Faculty Honor Program Award.

Feliu-Pérez, J.; Sahuquillo Borrás, J.; Petit Martí, S.V.; Duato Marín, J.F. (2017). Perf&Fair: A Progress-Aware Scheduler to Enhance Performance and Fairness in SMT Multicores. IEEE Transactions on Computers. 66(5):905-911. https://doi.org/10.1109/TC.2016.2620977
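The fairness mechanism described in the abstract, prioritizing the process with the lowest accumulated progress, reduces to a very small selection step once progress estimates exist. The sketch below illustrates only that step; the progress values are invented, and estimating them online (performance achieved under sharing relative to running alone) is the hard part the paper addresses.

```python
# Minimal sketch of progress-aware prioritization: each process
# carries an accumulated-progress estimate (fraction of the progress
# it would have made running in isolation); the scheduler picks the
# process that has fallen furthest behind.  Values are illustrative.

def pick_next(progress):
    """Return the process id with the lowest accumulated progress."""
    return min(progress, key=progress.get)

progress = {"A": 0.92, "B": 0.78, "C": 0.85}
print(pick_next(progress))  # -> B
```

Pairing this fairness rule with contention-aware placement (avoiding co-scheduling two bandwidth-hungry processes on the same core) is what lets the scheduler pursue both targets at once.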
Improving early design stage timing modeling in multicore based real-time systems
This paper presents a modelling approach for the timing behavior of real-time embedded systems (RTES) in early design phases. The model focuses on multicore processors, accepted as the next computing platform for RTES, and in particular it predicts the contention tasks suffer in the access to multicore on-chip shared resources. The model has the key properties of not requiring the application's source code or binary and of offering high accuracy and low overhead. The former is of paramount importance in the common scenario in which several software suppliers work in parallel implementing different applications for a system integrator, subject to different intellectual property (IP) constraints. Our model helps reduce the risk of exceeding the budgets assigned to each application in late design stages, and its associated costs.

This work has received funding from the European Space Agency under Project Reference AO=17722=13=NL=LvH, and has also been supported by the Spanish Ministry of Science and Innovation grant TIN2015-65316-P. Jaume Abella has been partially supported by MINECO under Ramón y Cajal postdoctoral fellowship number RYC-2013-14717.
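A generic early-design contention model in the spirit of the abstract can be sketched as follows: predicted execution time is the task's time in isolation plus, for each shared resource, its access count times a per-access worst-case contention delay. This is a hedged illustration, not the paper's model; the resource names, counts, and delay figures are all assumptions, and crucially the access counts can come from high-level profiles rather than source code or binaries.

```python
# Hedged sketch of an additive early-stage contention model:
#   predicted = isolated + sum_r accesses[r] * worst_case_delay[r]
# All figures below are invented for the example.

def predicted_time(isolated_cycles, accesses, delay_per_access):
    """Upper estimate of execution time (cycles) under contention."""
    extra = sum(accesses[r] * delay_per_access[r] for r in accesses)
    return isolated_cycles + extra

accesses = {"bus": 10_000, "llc": 4_000}  # accesses per shared resource
delay = {"bus": 6, "llc": 20}             # worst-case extra cycles per access
print(predicted_time(1_000_000, accesses, delay))  # -> 1140000
```

An integrator can evaluate such a formula per supplier-provided profile and flag, before late design stages, any application whose predicted time exceeds its assigned budget.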
Parallelism-Aware Memory Interference Delay Analysis for COTS Multicore Systems
In modern Commercial Off-The-Shelf (COTS) multicore systems, each core can generate many parallel memory requests at a time. The processing of these parallel requests in the DRAM controller greatly affects the memory interference delay experienced by tasks running on the platform. In this paper, we model a modern COTS multicore system which has a non-blocking last-level cache (LLC) and a DRAM controller that prioritizes reads over writes. To minimize interference, we focus on LLC- and DRAM-bank-partitioned systems. Based on the model, we propose an analysis that computes a safe upper bound for the worst-case memory interference delay. We validated our analysis on a real COTS multicore platform with a set of carefully designed synthetic benchmarks as well as SPEC2006 benchmarks. Evaluation results show that our analysis more accurately captures the worst-case memory interference delay and provides safer upper bounds compared to a recently proposed analysis that significantly underestimates the delay.

Comment: Technical Report
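A back-of-the-envelope version of such a bound can be sketched as follows: even with the LLC and DRAM banks partitioned, each of a task's memory requests may queue behind the requests the other cores have in flight, so a simple (pessimistic) bound multiplies the interfering requests per access by a worst-case DRAM service latency. This is an illustration only, not the paper's parallelism-aware analysis, and every figure below is invented.

```python
# Illustrative, deliberately simple interference bound: each of this
# task's requests can wait behind every outstanding request of every
# other core, each taking up to the worst-case DRAM service latency.

def interference_bound(my_requests, outstanding_per_core, service_latency_ns):
    """Upper bound (ns) on total extra memory delay for this task."""
    per_request = sum(outstanding_per_core) * service_latency_ns
    return my_requests * per_request

# 3 interfering cores, each with up to 4 parallel requests in flight
print(interference_bound(1_000, [4, 4, 4], 50))  # -> 600000
```

A parallelism-aware analysis tightens exactly this kind of product by accounting for how the controller overlaps and reorders requests, which is why naive per-request bounds can be both loose in places and, if reordering is ignored, unsafe.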