50 research outputs found

    Deployment and Debugging of Real-Time Applications on Multicore Architectures

    It is essential to enable information extraction from software. Program tracing is one such technique: it extracts information from a program during execution, which helps with testing and validation to ensure that the software under test is correct. Information extraction is done by instrumenting the program. Logged information can be stored in dedicated logging memories or buffered and streamed off-chip to an external monitor. The designer inspects the trace after execution to identify potentially erroneous state information; the trace can also provide the state information needed to reproduce an erroneous output. Information extraction can be difficult and expensive due to the increasing size and complexity of modern software systems. For the sub-class of software systems known as real-time systems, these issues are further aggravated, because real-time systems demand timing guarantees in addition to functional correctness. Consequently, any instrumentation added to the original program code for the purpose of information extraction may affect the temporal behavior of the program. This perturbation can lead to the violation of timing constraints, which may bias the program execution and/or cause the program to miss its deadline. As a result, there is considerable interest in devising techniques that allow information extraction without missing a program's deadline, known as time-aware instrumentation. This thesis investigates time-aware instrumentation mechanisms that instrument programs while respecting their timing constraints and functional behavior.

    Knowledge of the underlying hardware on which the software runs enables the extraction of more information via the instrumentation process. Chip-multiprocessors offer a solution to the performance bottleneck of uni-processors. Providing timing guarantees for hard real-time systems on chip-multiprocessors, however, is difficult, because conventional communication interconnects are designed to optimize average-case performance. Researchers have therefore proposed interconnects such as priority-aware networks to satisfy the requirements of hard real-time systems. Priority-aware interconnects, however, lack the analysis techniques needed to facilitate the deployment of real-time systems. This thesis also investigates latency and buffer-space analysis techniques for pipelined communication resource models, as well as algorithms for the proper deployment of real-time applications to these platforms. The proposed analysis techniques provide guarantees on the schedulability of real-time systems on chip-multiprocessors; these guarantees are based on reducing contention in the interconnect while accurately computing the worst-case communication latencies. The worst-case latencies bound the overall worst-case execution time of applications on chip-multiprocessors, and they also provide a means of assigning the instrumentation budgets required by time-aware instrumentation. Leveraging these platform-specific analysis techniques for the assignment of instrumentation budgets allows more information to be extracted during instrumentation.
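    The budget-driven flavor of time-aware instrumentation can be illustrated with a small sketch. The snippet below is not the thesis's algorithm; it is a minimal illustration, assuming a single task with a known WCET and deadline, of how instrumentation probes might be admitted only while the remaining slack covers their overhead. All names and numbers are invented for the example.

```python
# Minimal sketch (not the thesis's algorithm): keep instrumentation probes only
# while the instrumented WCET still meets the deadline. Names and numbers are
# illustrative assumptions.

def select_probes(wcet_us, deadline_us, probes):
    """Greedily keep probes while the instrumented WCET stays within the deadline.

    probes: list of (name, overhead_us, value) tuples, where `value` is a
    hypothetical score for how much information the probe extracts.
    """
    budget = deadline_us - wcet_us          # slack available for instrumentation
    chosen = []
    # Prefer probes that extract more information per microsecond of overhead.
    for name, overhead, value in sorted(probes, key=lambda p: p[2] / p[1], reverse=True):
        if overhead <= budget:
            chosen.append(name)
            budget -= overhead
    return chosen

# Example: 200 us of slack; only probes that fit within it are instrumented.
print(select_probes(wcet_us=800, deadline_us=1000,
                    probes=[("log_state", 120, 5), ("trace_queue", 150, 4), ("dump_regs", 90, 2)]))
```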

    Dynamic contracts for verification and enforcement of real-time systems properties

    Doctoral Programme in Informatics (MAP-i) of the Universities of Minho, Aveiro and Porto.

    Runtime verification is an emerging discipline that investigates methods and tools to enable the verification of program properties during the execution of the application. The goal is to complement static analysis approaches, in particular when static verification leads to an explosion of states. Non-functional properties, such as those present in real-time systems, are an ideal target for this kind of verification methodology, as they are usually beyond the power and expressiveness of classic static analyses. Current real-time embedded systems development frameworks lack support for the verification of properties using explicit time, where counting time (i.e., durations) may play an important role in the development process. Temporal logics targeting real-time systems are traditionally undecidable. Based on a restricted fragment of metric temporal logic with durations (MTL-R), we present synthesis mechanisms 1) for target systems, as runtime monitors, and 2) for SMT solvers, as a way to obtain, respectively, a verdict at runtime and a schedulability problem to be solved before execution. The latter partially solves the schedulability analysis for periodic resource models and fixed-priority scheduling algorithms. A domain-specific language is also proposed to describe such schedulability analysis problems at a higher level. Finally, we validate both approaches, the first using empirical scheduling scenarios for uni- and multi-processor settings, and the second using the use case of the lightweight autopilot system Px4/Ardupilot, widely used for industrial and entertainment purposes. The former shows that certain classes of real-time scheduling problems can be solved, although without scaling well. The latter shows that, for the cases where the former cannot be used, the proposed synthesis technique for monitors is applicable in a real-world scenario such as an embedded autopilot flight stack.

    This thesis was partially supported by National Funds through FCT/MEC (Portuguese Foundation for Science and Technology) and co-financed by ERDF (European Regional Development Fund) under the PT2020 Partnership, within the CISTER Research Unit (CEC/04234); FCOMP-01-0124-FEDER-015006 (VIPCORE) and FCOMP-01-0124-FEDER-020486 (AVIACC); also by FCT and EU ARTEMIS JU, within project ARTEMIS/0003/2012, JU grant nr. 333053 (CONCERTO); and by FCT/MEC and the EU ARTEMIS JU within project ARTEMIS/0001/2013, JU grant nr. 621429 (EMC2).
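    As a rough illustration of the runtime-monitoring side (not the synthesis mechanism proposed in the thesis), the sketch below checks a simple duration-style property online, assuming a sliding window and an accumulated-busy-time bound chosen purely for the example.

```python
# Minimal sketch (assumptions, not the thesis's synthesized monitors): an online
# monitor for a duration property in the spirit of MTL with durations, e.g.
# "the accumulated time spent busy over the last WINDOW seconds must not exceed
#  MAX_BUSY seconds".

from collections import deque

class DurationMonitor:
    def __init__(self, window, max_busy):
        self.window = window
        self.max_busy = max_busy
        self.events = deque()            # (timestamp, busy_duration) samples

    def observe(self, t, busy_for):
        """Feed one observation: at time t the task was busy for busy_for seconds."""
        self.events.append((t, busy_for))
        # Drop samples that fell out of the sliding window.
        while self.events and self.events[0][0] < t - self.window:
            self.events.popleft()
        total_busy = sum(d for _, d in self.events)
        return "violation" if total_busy > self.max_busy else "ok"

monitor = DurationMonitor(window=10.0, max_busy=3.0)
print(monitor.observe(1.0, 1.5))   # ok
print(monitor.observe(4.0, 2.0))   # violation: 3.5 s of busy time within the last 10 s
```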

    Adaptive Performance and Power Management in Distributed Computing Systems

    The complexity of distributed computing systems has raised two unprecedented challenges for system management. First, customers need assurance that their required service-level agreements, such as response time and throughput, are met. Second, system power consumption must be controlled in order to avoid system failures caused by power-capacity overload or system overheating due to increasingly high server density. Unfortunately, most existing work either relies on open-loop estimations based on off-line profiled system models, evolves in an ad hoc fashion that requires exhaustive iterations of tuning and testing, or oversimplifies the problem by ignoring the coupling between different system characteristics (i.e., response time and throughput, or the power consumption of different servers). As a result, the majority of previous work lacks rigorous guarantees on the performance and power consumption of computing systems and may degrade overall system performance. In this thesis, we extensively study adaptive performance/power management and power-efficient performance management for distributed computing systems such as information dissemination systems, power grid management systems, and data centers, by proposing Multiple-Input-Multiple-Output (MIMO) control and hierarchical designs based on feedback control theory. For adaptive performance management, we design an integrated solution that controls both the average response time and CPU utilization in an example information dissemination system to achieve bounded response time for high-priority information and maximized system throughput. In addition, we design a hierarchical control solution to guarantee the deadlines of real-time tasks in power grid computing by grouping them based on their characteristics. For adaptive power management, we design MIMO optimal control solutions for power control at the cluster and server levels and a hierarchical solution for large-scale data centers. Our MIMO control design captures the coupling among different system characteristics, while our hierarchical design coordinates controllers at different levels. For power-efficient performance management, we present a two-layer coordinated management solution for virtualized data centers. Experimental results on both physical testbeds and in simulations demonstrate that all the solutions outperform state-of-the-art management schemes by significantly improving overall system performance.
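    To give a flavor of feedback-control-based management, the sketch below uses a deliberately simplified single proportional-integral (PI) loop, not the dissertation's MIMO optimal or hierarchical controllers, to steer a measured response time toward a set point by adjusting a power cap. The gains and the toy plant model are assumptions made only for this example.

```python
# Minimal sketch, not the dissertation's MIMO controllers: a single PI feedback
# loop that trims a server's power cap so the measured response time tracks its
# SLA set point. Gains and the plant model are illustrative assumptions.

def make_pi_controller(kp, ki, setpoint):
    integral = 0.0
    def control(measured):
        nonlocal integral
        error = measured - setpoint          # positive error: response time too high
        integral += error
        return kp * error + ki * integral    # positive output: raise the power cap
    return control

# Toy "plant": response time shrinks as the power cap grows (hypothetical model).
def plant(power_cap_w):
    return 2000.0 / power_cap_w              # response time in ms

controller = make_pi_controller(kp=0.5, ki=0.1, setpoint=20.0)   # 20 ms SLA target
power_cap = 50.0
for step in range(5):
    rt = plant(power_cap)
    power_cap += controller(rt)              # adjust the actuator from the feedback signal
    print(f"step {step}: response time {rt:.1f} ms, power cap {power_cap:.1f} W")
```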

    WCET and Priority Assignment Analysis of Real-Time Systems using Search and Machine Learning

    Real-time systems have become indispensable to human life, as they are used in numerous industries such as vehicles, medical devices, and satellite systems. These systems are very sensitive to violations of their time constraints (deadlines), which can have catastrophic consequences. To verify whether systems meet their time constraints, engineers perform schedulability analysis from early stages and throughout development. However, obtaining precise results from schedulability analysis is challenging, because it depends on estimating worst-case execution times (WCETs) and assigning optimal priorities to tasks. Estimating WCETs is an important activity at early design stages of real-time systems. Based on such WCET estimates, engineers make design and implementation decisions to ensure that task executions always complete before their specified deadlines. In practice, however, engineers often cannot provide precise WCET point estimates and prefer to provide plausible WCET ranges. Task priority assignment is an important decision, as it determines the order of task executions and has a substantial impact on schedulability results. It thus requires finding optimal priority assignments so that tasks not only complete their execution but also maximize the safety margins from their deadlines. Optimal priority values increase the tolerance of real-time systems to unexpected overheads in task executions so that they can still meet their deadlines. However, finding optimal priority assignments is hard, because their evaluation relies on uncertain WCET values and because complex engineering constraints must be accounted for. This dissertation proposes three approaches to estimate WCETs and assign optimal priorities at design stages. Combining a genetic algorithm and logistic regression, we first propose an automatic approach to infer safe WCET ranges with a probabilistic guarantee based on worst-case scheduling scenarios. We then introduce an extended approach that accounts for weakly hard real-time systems using an industrial schedule simulator. We evaluate our approaches by applying them to industrial systems from different domains and to several synthetic systems. The results suggest that our approaches can estimate probabilistic safe WCET ranges efficiently and accurately, so that deadline constraints are likely to be satisfied with a high degree of confidence. Moreover, we propose an automated technique that aims to identify the best possible priority assignments in real-time systems. The approach handles multiple objectives regarding safety margins and engineering constraints using a coevolutionary algorithm. Evaluation with synthetic and industrial systems shows that the approach significantly outperforms both a baseline approach and solutions defined by practitioners. All the solutions in this dissertation scale to complex industrial systems for offline analysis within an acceptable time, i.e., at most 27 hours.
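    The sketch below illustrates, under strong simplifying assumptions, the idea of pairing a schedule simulator with logistic regression to delimit a safe WCET range. The one-task "simulator", threshold, and data are invented for the example and do not reproduce the dissertation's genetic-algorithm-driven pipeline.

```python
# Minimal sketch (assumptions only): fit a logistic model that maps a candidate
# WCET value to the probability of a deadline miss observed in simulations, then
# keep the largest WCET whose predicted miss probability stays below a threshold.
# The "simulator" is a stand-in: one task with a 10 ms deadline plus random
# interference.

import random
import numpy as np
from sklearn.linear_model import LogisticRegression

random.seed(0)

def simulate_miss(wcet_ms):
    """Hypothetical schedule simulation: response time = WCET + random interference."""
    return wcet_ms + random.uniform(0.0, 3.0) > 10.0   # True if the deadline is missed

# Label sampled WCET candidates by simulating each one.
wcets = np.linspace(4.0, 10.0, 200).reshape(-1, 1)
misses = np.array([simulate_miss(w[0]) for w in wcets], dtype=int)

model = LogisticRegression().fit(wcets, misses)

# Safe WCET range: candidates whose predicted miss probability is below 1%.
p_miss = model.predict_proba(wcets)[:, 1]
safe = wcets[p_miss < 0.01]
if safe.size:
    print(f"estimated safe WCET upper bound: {safe.max():.2f} ms")
else:
    print("no candidate met the 1% miss threshold")
```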

    Foundations for Safety-Critical on-Demand Medical Systems

    In current medical practice, therapy is delivered in critical-care environments (e.g., the ICU) by clinicians who manually coordinate sets of medical devices: the clinicians monitor patient vital signs and then reconfigure devices (e.g., infusion pumps) as needed. Unfortunately, the current state of practice is both burdensome on clinicians and error prone. Recently, clinicians have been speculating whether medical devices supporting "plug & play interoperability" would make it easier to automate current medical workflows and thereby reduce medical errors, reduce costs, and reduce the burden on overworked clinicians. This type of plug & play interoperability would allow clinicians to attach devices to a local network and then run software applications to create a new medical system "on demand" that automates clinical workflows by automatically coordinating those devices via the network. Plug & play devices would let clinicians build new medical systems compositionally. Unfortunately, safety is not in general a compositional property. For example, two independently "safe" devices may interact in unsafe ways; indeed, even the definition of "safe" may differ between two device types. In this dissertation we propose a framework and define conditions that permit reasoning about the safety of plug & play medical systems. The framework includes a logical formalism that permits formal reasoning about the safety of many device combinations at once, as well as a platform that actively prevents unintended timing interactions between devices or applications sharing a resource such as a network or CPU. We describe the various pieces of the framework, report experimental results, and show how the pieces work together to enable the safety assessment of plug & play medical systems via two case studies.
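    The following sketch is purely hypothetical and uses no real device API; it only illustrates the kind of on-demand coordination application described above, in which software observes one networked device and reconfigures another according to a safety rule. The device classes, threshold, and method names are invented for this sketch.

```python
# Purely hypothetical illustration (not the dissertation's framework or any real
# device interface): an "on-demand" app that coordinates two networked devices --
# a pulse oximeter and an infusion pump -- and pauses infusion when oxygen
# saturation drops below a configured threshold.

class PulseOximeter:
    def __init__(self, readings):
        self.readings = iter(readings)
    def read_spo2(self):
        return next(self.readings)

class InfusionPump:
    def __init__(self):
        self.running = True
    def pause(self):
        self.running = False
        print("pump paused")

def interlock_app(oximeter, pump, spo2_threshold=90):
    """One monitoring cycle of the coordination app."""
    spo2 = oximeter.read_spo2()
    if spo2 < spo2_threshold and pump.running:
        pump.pause()                      # safety interlock: stop drug delivery
    return spo2, pump.running

oximeter = PulseOximeter(readings=[97, 95, 88])
pump = InfusionPump()
for _ in range(3):
    print(interlock_app(oximeter, pump))
```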

    Scheduling and Allocation in Multiprocessor Systems

    The problem of allocation has always been one of the fundamental issues of building applications in multiprocessor systems. For real-time applications, the allocation problem should directly address the issues of task and communication scheduling. In this context, the allocation of tasks has to fully utilize the available processors, and the scheduling of tasks has to meet the specified timing constraints. Clearly, the execution of tasks under the allocation and schedule has to satisfy the precedence, resource, and synchronization constraints. Traditionally, time constraints for real-time tasks have been specified in terms of ready times and deadlines. Many application tasks, however, have relative timing constraints, in which the constraints for the execution of a task are defined in terms of the actual execution instances of prior tasks. In this dissertation we consider the allocation and scheduling problem of periodic tasks with relative timing requirements. We take a time-based scheduling approach to generate a multiprocessor schedule for a set of periodic tasks. A simulated annealing algorithm is developed as the overall search algorithm for a feasible solution. Our results show that the algorithm performs well and finds feasible allocations and schedules. We also investigate how to exploit replication techniques to increase the schedulability and performance of the systems. We adopt a computation model in which each task may have more than one copy and a task may start its execution after receiving the necessary data from a copy of each of its predecessors. Based on this model, replication techniques are developed to increase the schedulability of applications in real-time systems and to reduce the execution cost of applications in non-real-time systems.
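    A minimal sketch of the simulated-annealing search idea is shown below. The task set, the cost function (worst processor load), and the cooling schedule are illustrative assumptions rather than the dissertation's formulation, which also handles precedence, communication, and relative timing constraints.

```python
# Minimal sketch (illustrative assumptions): simulated annealing over
# task-to-processor assignments, where the cost is the most loaded processor.

import math
import random

random.seed(1)
wcets = [5, 3, 8, 2, 7, 4]        # execution demand of each task (assumed values)
num_procs = 3

def cost(assignment):
    """Makespan-style cost: the total demand on the most loaded processor."""
    loads = [0] * num_procs
    for task, proc in enumerate(assignment):
        loads[proc] += wcets[task]
    return max(loads)

assignment = [random.randrange(num_procs) for _ in wcets]
temperature = 10.0
while temperature > 0.01:
    neighbor = assignment[:]
    neighbor[random.randrange(len(wcets))] = random.randrange(num_procs)   # move one task
    delta = cost(neighbor) - cost(assignment)
    # Accept improvements always, and worse moves with a temperature-dependent probability.
    if delta <= 0 or random.random() < math.exp(-delta / temperature):
        assignment = neighbor
    temperature *= 0.95               # geometric cooling

print(assignment, cost(assignment))
```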

    Parametric Polynomial-Time Algorithms for Computing Response-Time Bounds for Static-Priority Tasks with Release Jitters

    No full text
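    No abstract is provided for this entry. For orientation only, the classical response-time recurrence for static-priority tasks with release jitter is reproduced below; the paper's parametric polynomial-time algorithms presumably compute bounds on the response times that this standard pseudo-polynomial fixed-point analysis determines exactly, but that is an assumption from the title, not a summary of the paper.

```latex
% Standard response-time analysis with release jitter, shown for context;
% not the paper's parametric algorithm.
\begin{align*}
  w_i^{(k+1)} &= C_i + \sum_{j \in hp(i)} \left\lceil \frac{w_i^{(k)} + J_j}{T_j} \right\rceil C_j,\\
  R_i &= J_i + w_i^{*}, \qquad w_i^{*} \text{ the smallest fixed point of the recurrence.}
\end{align*}
```

    Here C_i, T_i, and J_i denote the worst-case execution time, period, and release jitter of task i, hp(i) is the set of higher-priority tasks, and R_i is the worst-case response time.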