535 research outputs found
A programming-language extension for distributed real-time systems
In this paper we propose a method for extending programming languages that enables the specification of timing properties of systems. The way time is treated is not language-specific, and the extension can therefore be included in many existing programming languages. The presented method includes a view of the system development process. An essential feature is that it enables the construction of (hard) real-time programs that may be proven correct independently of the properties of the machines used for their execution. It therefore provides an abstraction from the execution platform similar to that which is normal for non-real-time languages. The aim of this paper is to illustrate the method and demonstrate its applicability to actual real-time problems. To this end we define a simple programming language that includes the timing extension. We present a formal semantics for a characteristic part of the language constructs and apply formal methods to prove the correctness of a small example program. We consider in detail a larger example, namely the mine-pump problem known from the literature. We construct a real-time program for this problem and describe various ways to map the program to an implementation for different platforms.
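The paper's own timing syntax is not reproduced here, but the core idea of pairing a statement block with a machine-independent timing bound can be sketched as follows. The `deadline` construct and all names in this sketch are hypothetical, chosen only to illustrate the kind of extension described above:

```python
import time

class DeadlineExceeded(Exception):
    """Raised when a timed block overruns its bound."""

class deadline:
    """Illustrative 'deadline' construct: pairs a statement block with an
    upper bound on its execution time, checked when the block exits."""
    def __init__(self, bound_s):
        self.bound_s = bound_s

    def __enter__(self):
        self.start = time.monotonic()
        return self

    def __exit__(self, exc_type, exc, tb):
        elapsed = time.monotonic() - self.start
        if exc_type is None and elapsed > self.bound_s:
            raise DeadlineExceeded(f"block took {elapsed:.3f}s > {self.bound_s}s")
        return False

# A control step that must complete within 50 ms:
with deadline(0.05):
    sample = 42            # read sensor (placeholder)
    command = sample * 2   # compute actuation value
```

Proving such a program correct, as the paper proposes, means showing the bound holds on every platform the program is mapped to, rather than checking it at run time as this sketch does.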
Resilient architecture (preliminary version)
The main objectives of WP2 are to define a resilient architecture and to develop a range of middleware solutions (i.e. algorithms, protocols, services) for resilience, to be applied in the design of highly available, reliable and trustworthy networking solutions. This is the first deliverable within this work package, a preliminary version of the resilient architecture. The deliverable builds on previous results from WP1, the definition of a set of applications and use cases, and provides a perspective of the middleware services that are considered fundamental to address the dependability requirements of those applications. It then describes the architectural organisation of these services according to a number of factors, such as their purpose, their function within the communication stack, or their criticality/specificity for resilience. WP2 proposes an architecture that differentiates between two classes of services: a class including timeliness and trustworthiness oracles, and a class of so-called complex services. The resulting architecture is referred to as a "hybrid architecture". The hybrid architecture is motivated and discussed in this document, and the services considered within each of its service classes are described. This sets the background for the work to be carried out in the scope of tasks 2.2 and 2.3 of the work package. Finally, the deliverable also considers high-level interfacing aspects, providing a discussion of the possibility of using existing Service Availability Forum standard interfaces within HIDENETS, and in particular of possibly necessary extensions to those interfaces to accommodate specific HIDENETS services suited to the ad-hoc domain.
Secure Virtualization of Latency-Constrained Systems
Virtualization is a mature technology in server and desktop environments, where multiple systems are consolidated onto a single physical hardware platform, increasing the utilization of today's multi-core systems and saving resources such as energy, space and cost compared to multiple single systems. Embedded environments, in contrast, often contain multiple separate computing systems with real-time and isolation requirements. For example, modern high-comfort cars use up to a hundred embedded computing systems. Consolidating such diverse configurations promises to save resources such as energy and weight.
In my work I propose a secure software architecture that allows consolidating multiple embedded software systems with timing constraints. The base of the architecture is a microkernel-based operating system that supports a variety of virtualization approaches through a generic interface, including hardware-assisted virtualization and paravirtualization, on multiple architectures. Studying guest systems with latency constraints with regard to virtualization showed that standard techniques such as high-frequency time-slicing are not a viable approach.
Generally, guest systems combine best-effort and real-time work and thus form a mixed-criticality system. Further analysis showed that such systems need to export relevant internal scheduling information to the hypervisor in order to support multiple guests with latency constraints. I propose a mechanism to export those relevant events that is secure, flexible, performant and easy to use. The thesis concludes with an evaluation covering the virtualization approach on the ARM and x86 architectures and two guest operating systems, Linux and FreeRTOS, as well as an evaluation of the export mechanism.
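The export mechanism itself is specific to the thesis, but the underlying idea can be roughly illustrated: if guests export their pending real-time deadlines, the hypervisor can favour the guest holding the earliest one. All names and the EDF-style policy in this sketch are assumptions for illustration, not the actual design:

```python
import heapq

class Guest:
    """A guest VM that exports pending real-time events to the hypervisor.
    Names and structure are illustrative."""
    def __init__(self, name):
        self.name = name
        self.pending = []  # min-heap of exported (deadline, label) pairs

    def export_event(self, deadline, label):
        # In a real system this write would cross a shared-memory region
        # validated by the hypervisor; here it is a plain heap push.
        heapq.heappush(self.pending, (deadline, label))

def pick_next(guests):
    """EDF-style choice: run the guest holding the earliest exported
    deadline; fall back to the first guest when nothing is pending."""
    urgent = [(g.pending[0][0], i, g) for i, g in enumerate(guests) if g.pending]
    return min(urgent)[2] if urgent else guests[0]

linux, freertos = Guest("linux"), Guest("freertos")
freertos.export_event(5, "motor-irq")
linux.export_event(9, "audio")
assert pick_next([linux, freertos]).name == "freertos"
```

Without such exported events, the hypervisor sees only opaque guests and must fall back to coarse time-slicing, which is exactly what the analysis above found inadequate.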
Constructing fail-controlled nodes for distributed systems: a software approach
PhD Thesis

Designing and implementing distributed systems which continue to provide specified services in the presence of processing site and communication failures is a difficult task. To facilitate their development, distributed systems have been built assuming that their underlying hardware components are fail-controlled, i.e. present a well defined failure mode. However, if conventional hardware cannot provide the assumed failure mode, there is a need to build processing sites or nodes, and a communication infrastructure, that present the assumed fail-controlled behaviour.
Coupling a number of redundant processors within a replicated node is a well known way of constructing fail-controlled nodes. Computation is replicated and executed simultaneously at each processor and, by applying suitable validation techniques (e.g. majority voting, comparison) to the outputs generated by the processors, outputs from faulty processors can be prevented from appearing at the application level.
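The output-validation step can be illustrated with a minimal majority vote over replica outputs. This is a generic sketch of the technique, not the thesis's specific mechanism:

```python
from collections import Counter

def vote(outputs):
    """Majority vote over replica outputs: returns the agreed value, or
    None when no strict majority exists (a detected, uncorrectable
    fault, so no wrong value reaches the application level)."""
    value, count = Counter(outputs).most_common(1)[0]
    return value if count > len(outputs) // 2 else None

assert vote([7, 7, 7]) == 7      # all replicas agree
assert vote([7, 7, 9]) == 7      # one faulty replica is outvoted
assert vote([7, 8, 9]) is None   # no majority: fail-controlled, not wrong
```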
One way of constructing replicated nodes is to introduce hardwired mechanisms that couple replicated processors with specialised validation hardware circuits. Processors are tightly synchronised at the clock-cycle level and have their outputs validated by reliable validation hardware. Another approach is to use software mechanisms to perform the synchronisation of processors and the validation of outputs. The main advantage of hardware-based nodes is the minimal performance overhead incurred. However, the introduction of special circuits may increase the complexity of the design tremendously, and every new microprocessor architecture requires considerable redesign effort. Software-based nodes do not present these problems; on the other hand, they impose much larger performance overheads on the system.
In this thesis we investigate alternative ways of constructing efficient fail-controlled, software-based replicated nodes. In particular, we present much more efficient order protocols, which are necessary for the implementation of these nodes. Our protocols, unlike others published to date, do not require processors' physical clocks to be explicitly synchronised. The main contribution of this thesis is the precise definition of the semantics of a software-based fail-silent node, along with its efficient design, implementation and performance evaluation.

The Brazilian National Research Council (CNPq/Brasil)
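Order protocols that avoid explicitly synchronised physical clocks typically build on logical time. The standard Lamport-clock rules below are shown only as background for the technique; the thesis's own protocols are not reproduced here:

```python
class LamportClock:
    """Logical clock: orders events across processors without
    synchronised physical clocks, using the standard Lamport rules."""
    def __init__(self):
        self.t = 0

    def local_event(self):
        self.t += 1
        return self.t

    def send(self):
        self.t += 1
        return self.t            # timestamp carried in the message

    def receive(self, msg_ts):
        # Advance past both our own time and the sender's timestamp.
        self.t = max(self.t, msg_ts) + 1
        return self.t

a, b = LamportClock(), LamportClock()
ts = a.send()
b.local_event()
rt = b.receive(ts)
assert rt > ts  # the receive is ordered after the send
```

Timestamps like these let replicas agree on a total order of inputs, which is the property an order protocol must provide for replicated execution.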
Flexibilização em sistemas distribuídos: uma perspectiva holística (Flexibility in distributed systems: a holistic perspective)
Doutoramento em Engenharia Informática (PhD in Informatics Engineering)

In distributed systems the communication paradigm used for inter-task interaction is message passing. Several approaches have been proposed that allow the specification of the data flow between tasks, but in real-time systems a more rigorous definition of these data flows is mandatory. Namely, the specification of the required task and message parameters, and the derivation of the unspecified parameters, have to be possible. Such an approach could allow automatic scheduling and dispatching of tasks and messages or, at least, could reduce the number of iterations during the system's design. Data streams are one possible approach to holistic scheduling and dispatching in real-time distributed systems, where different types of analysis that correlate the various parameters are performed. The results can be used to define the level of buffering that is required at each node of the distributed system.
In FTT-based distributed systems it is possible to implement centralized holistic scheduling that takes into consideration the interdependences between producer/consumer tasks and messages. A set of constraints that guarantees system feasibility can then be derived from task and message parameters such as periods and execution/transmission times. In this thesis two perspectives are studied: a net-centric perspective, in which the scheduling of messages is done prior to the scheduling of tasks, and a node-centric perspective.
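The feasibility constraints mentioned above can be illustrated with a deliberately simplified end-to-end check over one data stream. The sketch ignores queuing, jitter and preemption, which a full holistic analysis must account for, and all numbers are hypothetical:

```python
def stream_response_time(producer_wcet, transmission_time, consumer_wcet):
    """Upper bound on one traversal of a data stream: produce the data,
    transmit it on the bus, then consume it at the receiving node."""
    return producer_wcet + transmission_time + consumer_wcet

def meets_deadline(stream, deadline):
    """Feasibility check for a single stream against its deadline."""
    return stream_response_time(*stream) <= deadline

# Hypothetical stream: 2 ms producer, 1 ms CAN frame, 3 ms consumer.
assert meets_deadline((2, 1, 3), deadline=10)
assert not meets_deadline((2, 1, 3), deadline=5)
```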
A simple mechanism to dispatch tasks and messages in CAN-based distributed systems is also proposed in this work. This mechanism extends the one that exists in FTT for the dispatching of messages. The study of the implementation of this mechanism in the nodes led to the specification of an operating-system kernel. A goal for this kernel was a low overhead, so that it could be included in nodes with low processing power.

This work also presents a simulator, SimHol, to predict the timeliness of the transmission of messages and of the execution of tasks in a distributed system. The inputs to the simulator are the so-called data streams, which include the producer tasks, the corresponding messages and the tasks that use the transmitted data. Using worst-case execution and transmission times, the simulator is able to verify whether deadlines are fulfilled in every node of the system and in the network.

Escola Superior de Tecnologia de Castelo Branco; PRODEP III, eixo 3, medida 5, acção 5.3; FCT; SAPIENS99 - POSI/SRI/34244/99; IEETA da Universidade de Aveiro; ARTIST - European Union Advanced Real Time System
PROPOSED MIDDLEWARE SOLUTION FOR RESOURCE-CONSTRAINED DISTRIBUTED EMBEDDED NETWORKS
The explosion in processing power of embedded systems has enabled distributed embedded networks to perform more complicated tasks. Middleware encapsulates common and network/operating-system-specific functionality into generic, reusable frameworks to manage such distributed networks. This thesis surveys and categorizes popular middleware implementations into three adapted layers: host-infrastructure, distribution, and common services. It then applies a quantitative approach to grading and proposing a single middleware solution across all layers for two target platforms: CubeSats and autonomous unmanned aerial vehicles (UAVs). CubeSats are 10x10x10 cm nanosatellites popular in university-level space missions, which impose power and volume constraints. Autonomous UAVs are similarly popular hobbyist-level vehicles that exhibit similar power and volume constraints. The MAVLink middleware from the host-infrastructure layer is proposed as the middleware to manage the distributed embedded networks powering these platforms in future projects. Finally, this thesis presents a performance analysis of MAVLink managing the ARM Cortex-M 32-bit processors that power the target platforms.
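A quantitative grading approach of the kind described above can be sketched as a weighted sum over per-criterion ratings. The criteria, weights and scores below are purely illustrative, not the thesis's actual rubric:

```python
def grade(candidate, weights):
    """Weighted sum of per-criterion scores (0-10 scale assumed)."""
    return sum(weights[criterion] * score for criterion, score in candidate.items())

# Hypothetical rubric favouring small footprint for CubeSats/UAVs.
weights = {"footprint": 0.4, "portability": 0.3, "tooling": 0.3}

lightweight = {"footprint": 9, "portability": 8, "tooling": 7}
heavyweight = {"footprint": 3, "portability": 9, "tooling": 9}

# Under these (made-up) weights the lightweight candidate wins.
assert grade(lightweight, weights) > grade(heavyweight, weights)
```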
Preliminary Specification of Basic Services and Protocols
The objective of D5 is to provide a preliminary definition of the basic services and protocols that will be necessary to program CORTEX applications made of sentient objects. Furthermore, the aim of D5 is also to provide an architectural view of the possible composition of services and the relations among them. In this view, some services are intended to facilitate communication with certain required properties, others are fundamentally event-oriented services providing extra functionality at the middleware level and, finally, the remaining services are essentially supporting services, which can be used by event and communication services as well as directly by applications. More specifically, in terms of event and communication services the deliverable describes a content- and cell-based predictive routing protocol to provide predictability in mobile ad hoc environments as envisaged in CORTEX; it specifies the messages used by the TBMAC protocol and studies the inaccessibility of the latter; it specifies an event service that implements anonymous communication based on the publish-subscribe paradigm; it describes the deployment of event channels on a CAN-bus network; and, finally, it provides a preliminary specification of the interface of an adaptable timed event service (ATES). In terms of supporting services, the deliverable describes protocols for the implementation of all the basic services defined within the Timely Computing Base (TCB) and provides a specification of resource management services defined according to a resource and task mode
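An anonymous, publish-subscribe-based event service of the kind mentioned above can be illustrated with a minimal sketch in which publishers and subscribers are coupled only through a subject name. This illustrates the paradigm, not the ATES interface itself:

```python
from collections import defaultdict

class EventChannel:
    """Minimal publish-subscribe sketch: publishers never learn who
    subscribes and subscribers never learn who publishes, so the
    communication is anonymous."""
    def __init__(self):
        self.subs = defaultdict(list)

    def subscribe(self, subject, handler):
        self.subs[subject].append(handler)

    def publish(self, subject, event):
        # Deliver to every current subscriber of this subject.
        for handler in self.subs[subject]:
            handler(event)

chan = EventChannel()
seen = []
chan.subscribe("speed", seen.append)
chan.publish("speed", 42)
assert seen == [42]
```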
Replication and fault-tolerance in real-time systems
PhD Thesis

The increased availability of sophisticated computer hardware and the corresponding decrease in its cost has led to a widespread growth in the use of computer systems for real-time plant and process control applications. Such applications typically place very high demands upon computer control systems, and the development of appropriate control software for these application areas can present a number of problems not normally encountered in other applications.
First of all, real-time applications must be correct in the time domain as well as the value
domain: returning results which are not only correct but also delivered on time. Further,
since the potential for catastrophic failures can be high in a process or plant control
environment, many real-time applications also have to meet high reliability requirements.
These requirements will typically be met by means of a combination of fault avoidance and
fault tolerance techniques.
This thesis is intended to address some of the problems encountered in the provision of fault tolerance in real-time application programs. Specifically, it considers the use of replication to ensure the availability of services in real-time systems. In a real-time environment, providing support for replicated services can introduce a number of problems. In particular, the scope for non-deterministic behaviour in real-time applications can be quite large, and this can lead to difficulties in maintaining consistent internal states across the members of a replica group. To tackle this problem, a model is proposed for fault-tolerant real-time objects which not only allows such objects to perform application-specific recovery operations and real-time processing activities such as event handling, but also allows the objects to be replicated. The architectural support required for such replicated objects is also discussed and, to conclude, the run-time overheads associated with the use of such replicated services are considered.

The Science and Engineering Research Council
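One common way to tame the replica-consistency problem described above is to let a single replica resolve each non-deterministic choice and forward the result to the group. The sketch below, with a random draw standing in for a non-deterministic input such as a timer reading, is a generic illustration of that technique, not the model proposed in the thesis:

```python
import random

class Replica:
    """A replicated object whose internal state must stay consistent
    across the group."""
    def __init__(self):
        self.state = 0

    def apply(self, value):
        self.state += value

def replicated_update(leader, followers):
    """The leader resolves the non-deterministic choice once and
    forwards it, so every member applies the same value."""
    choice = random.randint(1, 6)  # stand-in for a timer read, etc.
    leader.apply(choice)
    for follower in followers:
        follower.apply(choice)

group = [Replica() for _ in range(3)]
replicated_update(group[0], group[1:])
assert len({r.state for r in group}) == 1  # internal states stay identical
```

Had each replica drawn its own value, their states would diverge, which is exactly the consistency difficulty the abstract identifies.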
MAFTIA Conceptual Model and Architecture
This document builds on the work reported in MAFTIA deliverable D1. It contains a refinement of the MAFTIA conceptual model and a discussion of the MAFTIA architecture. It also introduces the work done in WP6 on verification and assessment of security properties, which is reported on in more detail in MAFTIA deliverable D
Qduino: a cyber-physical programming platform for multicore Systems-on-Chip
Emerging multicore Systems-on-Chip are enabling new cyber-physical applications such as autonomous drones, driverless cars and smart manufacturing using web-connected 3D printers. Common to those applications is a communicating task pipeline, to acquire and
process sensor data and produce outputs that control actuators. As a result, these applications usually have timing requirements both for individual tasks and for the task pipelines formed for sensor data processing and actuation. Current cyber-physical programming platforms, such as Arduino and embedded Linux with the POSIX interface, do not allow application developers to specify those timing requirements. Moreover, none of them provides a programming interface to schedule tasks and map them to processor cores while managing I/O in a predictable manner on multicore hardware platforms. Hence, this thesis presents the Qduino programming platform. Qduino adopts the simplicity of the Arduino API, with additional support for real-time multithreaded sketches on multicore architectures. Qduino allows application developers to specify timing properties of individual tasks as well as task pipelines at the design stage. To this end, we propose a mathematical framework to derive each task's budget and period from the specified end-to-end timing requirements.
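As a toy stand-in for such a framework (not the thesis's actual mathematics), an end-to-end deadline for an n-stage pipeline could be split evenly, giving each task a period of D/n and a budget equal to its worst-case execution time:

```python
def derive_budget_period(wcets, end_to_end_deadline):
    """Naive split of an end-to-end deadline over a pipeline: each of
    the n tasks gets period D/n and a budget equal to its WCET.
    Returns (budget, period) pairs, or None when some WCET does not
    fit in its slice. Illustrative only; a real framework must also
    account for interference between tasks sharing a core."""
    n = len(wcets)
    period = end_to_end_deadline / n
    if any(c > period for c in wcets):
        return None  # infeasible under this naive split
    return [(c, period) for c in wcets]

# Hypothetical pipeline: sense (1 ms), fuse (2 ms), actuate (1 ms); D = 12 ms.
assert derive_budget_period([1, 2, 1], 12) == [(1, 4.0), (2, 4.0), (1, 4.0)]
assert derive_budget_period([5, 2, 1], 12) is None
```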
The second part of the thesis is motivated by the observation that at the center of these pipelines are tasks that typically require complex software support, such as sensor data fusion or image processing algorithms. These features usually represent many person-years of engineering effort and are thus commonly found on General-Purpose Operating Systems (GPOS). Therefore, in order to support modern, intelligent cyber-physical applications, we enhance the Qduino platform's extensibility by taking advantage of the Quest-V virtualized partitioning kernel. The platform's usability is demonstrated by building a novel web-connected 3D printer and a prototypical autonomous drone framework in Qduino.