Capacity sharing and stealing in server-based real-time systems
A dynamic scheduler supporting the coexistence of guaranteed and non-guaranteed bandwidth servers is proposed.
Overloads are handled by efficiently reclaiming residual capacities originated by early completions, as well as by allowing
the stealing of reserved capacity from non-guaranteed bandwidth servers. The proposed dynamic budget accounting mechanism
ensures that, at any given time, the currently executing server is either consuming a residual capacity, consuming its own capacity, or stealing
some reserved capacity, eliminating the need for additional server states or unbounded queues. The server to which the
budget accounting is applied is determined dynamically at the instant when a capacity is needed. This
paper describes and evaluates the proposed scheduling algorithm, showing that it can efficiently reduce the mean tardiness
of periodic jobs. The achieved results become even more significant when tasks' computation times have a large variance.
The capacity exchange protocol
This paper proposes a new strategy to integrate shared resources and precedence constraints among real-time tasks, assuming
that no precise information on critical sections and computation times is available. The concept of bandwidth inheritance
is combined with a capacity sharing and stealing mechanism to efficiently exchange bandwidth among tasks, minimising the
degree of deviation from the ideal system behaviour caused by inter-application blocking.
The proposed Capacity Exchange Protocol (CXP) is simpler than other solutions proposed for sharing resources in open
real-time systems, since it does not attempt to return the inherited capacity to blocked servers in exactly the same amount. This
loss of optimality is worth the reduced complexity: the protocol's behaviour nevertheless tends to be fair, and it outperforms
previous solutions in highly dynamic scenarios, as demonstrated by extensive simulations.
A formal analysis of CXP is presented, and the conditions under which it is possible to guarantee hard real-time tasks are
discussed.
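The exchange idea can be illustrated with a toy capacity ledger. Everything below — the `CxpServer` class, the two helpers, and the repayment order — is a hypothetical sketch of the general mechanism (a lock holder runs on the blocked server's capacity, and debts are repaid opportunistically from residual capacity rather than exactly), not the actual CXP rules.

```python
class CxpServer:
    def __init__(self, name, budget):
        self.name = name
        self.budget = budget   # remaining reserved capacity
        self.borrowed = 0.0    # capacity consumed while inheriting

def inherit_and_run(holder, blocked, demand):
    """Bandwidth inheritance: the lock holder executes on the blocked
    server's capacity so the critical section finishes sooner."""
    used = min(demand, blocked.budget)
    blocked.budget -= used
    holder.borrowed += used
    return used

def exchange_residual(residual, debtors):
    """CXP-style exchange: residual capacity freed by early completions
    is handed to indebted servers, with no attempt to repay each debt
    in exactly the same amount to the exact creditor."""
    for s in sorted(debtors, key=lambda s: -s.borrowed):
        if residual <= 0:
            break
        pay = min(s.borrowed, residual)
        s.borrowed -= pay
        s.budget += pay
        residual -= pay
    return residual
```

The deliberate imprecision of `exchange_residual` is the point of the protocol: giving up exact repayment removes the bookkeeping that makes optimal solutions complex.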
Cooperative framework for open real-time systems
Embedded systems are nowadays present everywhere. Although most of the people who use them are
unaware of their presence, if these systems suddenly ceased to exist, society would feel their absence.
Their massive use is due to the fact that they are incorporated into almost all consumer electronics,
telecommunications, industrial automation and automotive devices.
Influenced by this growth, the scientific community has been confronted with new problems spread
across several scientific domains, among which quality-of-service management and resource management
stand out, the latter being in charge of solving problems related to the optimal allocation of physical
resources such as network, memory and CPU.
A wide range of models proposing solutions to problems in these domains exists in the literature.
However, no models can be found that handle resource management in open, cooperative execution
environments with timing constraints by using coalitions between different nodes so as to satisfy the
applications' non-functional requirements.
Because these systems are dynamic by nature, it is not possible to know a priori the amount of
resources an application will require from the system on which it will execute; this knowledge is only
acquired when the application runs.
To guarantee efficient management of the available resources in systems with highly dynamic execution
of tasks with and without timing constraints, two fundamental aspects must be ensured. The first concerns
obtaining guarantees for the execution of real-time tasks, which must always execute within the required
time window. The second refers to the need to guarantee that all the resources required for task execution
are provided, so as to maintain the performance levels of both the applications and the system itself.
Taking these two aspects into account, the CooperatES project was specified with the goal of allowing
resource-constrained devices to execute services collectively with their neighbours, in order to meet the
complex quality-of-service constraints imposed by users or applications.
Carried out in the context of the CooperatES project, the work resulting from this thesis has as its
main goal evaluating the practicability of the core concepts proposed within the project. The work
involved the selection and analysis of a platform, requirements analysis, and the implementation and
evaluation of a framework enabling the cooperative execution of applications and services with
quality-of-service requirements.
The work developed resulted in the following contributions:
Analysis of the open-source platforms that could be used to implement the concepts related to the
CooperatES project;
The criteria that influenced the choice of the Android platform, and a study focused on analysing the
platform from a real-time systems perspective;
Experience in implementing the project's concepts on the Android platform;
Evaluation of the practicability of the concepts proposed in the CooperatES project;
A proposal of extensions allowing open real-time system characteristics to be incorporated into the
Android platform.
Embedded devices are reaching a point where society does not notice their presence; however, if suddenly
taken away, everyone would notice their absence. The new, small embedded devices used in consumer
electronics, telecommunications, industrial automation or automotive systems are the reason for their
massive spread.
Influenced by this growth and pervasiveness, the scientific community is faced with new challenges
in several domains. Important among these are the management of the quality of the provided services
and the management of the underlying resources, both interconnected in solving the problem of optimally
allocating physical resources (such as CPU, memory and network), whilst providing the
best possible quality to users.
Although several models have been presented in the literature, a recent proposal handles resource management
by using coalitions of nodes in open real-time cooperative environments, as a solution to guarantee
that applications' non-functional requirements are met and to provide the best possible quality
of service to users. This proposal, the CooperatES framework, provides better models and mechanisms to
handle resource management in open real-time systems, allowing resource-constrained devices to collectively
execute services with their neighbours, in order to fulfil the complex Quality of Service constraints
imposed by users and applications.
Within the context of the CooperatES framework, the work presented in this thesis evaluates the feasibility
of implementing the framework's Quality of Service concepts within current embedded
Java platforms, and proposes a solution and architecture for a specific platform: the Android operating
system. To this purpose, the work provides an evaluation of the suitability of Java solutions for real-time
and embedded systems, an evaluation of the Android platform for open real-time systems, and a discussion
of the extensions required to allow Android to be used within real-time systems. Furthermore,
this thesis presents a prototype implementation of the CooperatES framework on the Android platform,
which allows determining the suitability of the proposed platform extensions for open real-time
applications.
Architecture multi-coeurs et temps d'exécution au pire cas (Multicore architectures and worst-case execution time)
Critical tasks in real-time systems are subject to both timing and correctness constraints. Hence, the validation of a real-time system relies on the estimation of its tasks' worst-case execution times. Resource sharing, as it occurs on multicore architectures, hinders the computation of such estimates: the timing behaviour of a task is affected by the tasks running concurrently, whether because of resource access arbitration or because of concurrent modifications of a resource's state. This study focuses on estimating the contribution of the memory hierarchy to tasks' worst-case execution times.
Existing analysis methods, defined for instruction caches, are extended to support private and shared data caches, hence allowing for the analysis of rich memory hierarchies. Cache bypass is then used to reduce the pressure placed by concurrent tasks on shared cache levels; to this end, we propose different bypass heuristics based on capturing the reuse of cache blocks between memory accesses. Our second proposal is the Preti partitioning scheme, which allocates each task a cache space free from inter-task conflicts. Preti also preserves the average-case performance of non-critical tasks running alongside real-time ones in mixed-criticality systems.
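The reuse-based bypass idea can be illustrated on a block reference trace: blocks that are never re-referenced gain nothing from occupying a shared cache line, so their accesses are candidates for bypassing. This is a deliberately simplified, profile-style sketch (the function name and threshold are invented for illustration); the heuristics described above work on static analysis of reuse between memory accesses rather than on a concrete trace.

```python
from collections import Counter

def bypass_candidates(trace, min_reuse=2):
    """Flag cache blocks referenced fewer than `min_reuse` times:
    caching them cannot pay off, so letting their accesses bypass
    the shared cache reduces inter-task cache pressure."""
    counts = Counter(trace)
    return {blk for blk, n in counts.items() if n < min_reuse}
```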
Cache Related Pre-emption Delays in Embedded Real-Time Systems
Real-time systems are subject to stringent deadlines which make their temporal behaviour just as important as their functional behaviour. In multi-tasking real-time systems, the execution time of each task must be determined and then combined with information about the scheduling policy to ensure that there are enough resources to schedule all of the tasks. This is usually achieved by performing timing analysis on the individual tasks, and then schedulability analysis on the system as a whole.
In systems with cache, multiple tasks can share this common resource which can lead to cache-related pre-emption delays (CRPD) being introduced. CRPD is the additional cost incurred from resuming a pre-empted task that no longer has the instructions or data it was using in cache, because the pre-empting task(s) evicted them from cache. It is therefore important to be able to account for CRPD when performing schedulability analysis.
This thesis focuses on the effects of CRPD on a single-processor system, further expanding our understanding of CRPD and our ability to analyse and optimise for it. We present new CRPD analysis for Earliest Deadline First (EDF) scheduling that significantly outperforms existing analysis, and then perform the first comparison between Fixed Priority (FP) and EDF scheduling that accounts for CRPD. In this comparison, we explore the effects of CRPD across a wide range of system and taskset parameters. We introduce a new task layout optimisation technique that maximises system schedulability by reducing CRPD. Finally, we extend CRPD analysis to hierarchical systems, allowing the effects of cache to be analysed when scheduling multiple independent applications on a single processor.
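The way CRPD enters schedulability analysis can be sketched with the classic fixed-priority response-time recurrence extended with a per-pre-emption delay term. This is a common textbook formulation, not the thesis' exact analysis (which also covers EDF and hierarchical scheduling); the `gamma` layout and the implicit-deadline assumption are choices made for this sketch.

```python
from math import ceil

def response_time(i, C, T, gamma, max_iter=1000):
    """Iterate R_i = C_i + sum_{j<i} ceil(R_i/T_j) * (C_j + gamma[i][j]),
    where tasks are indexed in decreasing priority, gamma[i][j] bounds
    the cache-related pre-emption delay that one pre-emption by task j
    adds to task i, and deadlines equal periods."""
    R = C[i]
    for _ in range(max_iter):
        nxt = C[i] + sum(ceil(R / T[j]) * (C[j] + gamma[i][j])
                         for j in range(i))
        if nxt == R:
            return R      # converged: worst-case response time
        if nxt > T[i]:
            return None   # exceeds deadline: unschedulable
        R = nxt
    return None
```

For two tasks with C = [1, 2] and T = [4, 10], the low-priority task's response time grows from 3 with no CRPD to 3.5 when each pre-emption adds gamma[1][0] = 0.5 of cache reload cost, which is exactly the effect the schedulability analysis must account for.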