14 research outputs found

    Advancing mixed criticality scheduling techniques to support industrial applications

    Get PDF
    Safety-critical software development is an extremely costly endeavour; software developers must continually pursue efficient processes that reduce software cost while allowing significant increases in system size. The key challenge is how to reduce software cost without compromising safety or quality. The focus of this thesis is the development and temporal proof of a mixed criticality system. The thesis, which attempts to define an end-to-end process, begins by studying appropriate and efficient methods for assessing the timing performance of system components, the key being an approach that can be applied automatically at an early point in the design lifecycle. The thesis then progresses to study how existing mixed criticality research needs to be advanced and matured in order to support an industrial safety-critical application. This includes the definition of a scheduling model designed to provide the protections advised by international aviation guidelines. The final part of the thesis brings the timing process and the mixed criticality system model together to explore how a real system using these techniques could be validated.
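The dual-criticality task model that underlies much mixed-criticality scheduling research gives each task a low-assurance and a conservative execution-time budget, and sheds low-criticality work after a mode switch. A minimal utilisation-based admission check in that style is sketched below; this is a generic illustration of the model, not the scheduling model defined in the thesis, and the task parameters are invented.

```python
from dataclasses import dataclass

@dataclass
class Task:
    period: float    # minimum inter-arrival time
    wcet_lo: float   # execution budget validated at low assurance
    wcet_hi: float   # conservative budget for high-criticality tasks
    critical: bool   # True = high criticality

def admissible(tasks):
    # LO mode: every task runs within its LO budget.
    u_lo = sum(t.wcet_lo / t.period for t in tasks)
    # HI mode: low-criticality tasks are shed; HI tasks use HI budgets.
    u_hi = sum(t.wcet_hi / t.period for t in tasks if t.critical)
    return u_lo <= 1.0 and u_hi <= 1.0

taskset = [Task(10, 2, 4, True), Task(20, 5, 5, False), Task(40, 8, 16, True)]
print(admissible(taskset))   # True: utilisation fits in both modes
```

A real system would use response-time analysis rather than a utilisation bound, but the two-budget, two-mode structure is the same.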

    Scheduling and locking in multiprocessor real-time operating systems

    Get PDF
    With the widespread adoption of multicore architectures, multiprocessors are now a standard deployment platform for (soft) real-time applications. This dissertation addresses two questions fundamental to the design of multicore-ready real-time operating systems: (1) which scheduling policies offer the greatest flexibility in satisfying temporal constraints, and (2) which locking algorithms should be used to avoid unpredictable delays? With regard to Question 1, LITMUS^RT, a real-time extension of the Linux kernel, is presented and its design is discussed in detail. Notably, LITMUS^RT implements link-based scheduling, a novel approach to controlling blocking due to non-preemptive sections. Each implemented scheduler (22 configurations in total) is evaluated with overheads taken into account on a 24-core Intel Xeon platform. The experiments show that partitioned earliest-deadline-first (EDF) scheduling is generally preferable in a hard real-time setting, whereas global and clustered EDF scheduling are effective in a soft real-time setting. With regard to Question 2, real-time locking protocols are required to ensure that the maximum delay due to priority inversion can be bounded a priori. Several spinlock- and semaphore-based multiprocessor real-time locking protocols for mutual exclusion (mutex), reader-writer (RW) exclusion, and k-exclusion are proposed and analyzed. A new category of RW locks suited to worst-case analysis, termed phase-fair locks, is proposed, and three efficient phase-fair spinlock implementations are provided (one with few atomic operations, one with low space requirements, and one with constant RMR complexity). Maximum priority-inversion blocking is proposed as a natural complexity measure for semaphore protocols. It is shown that there are two classes of schedulability analysis, namely suspension-oblivious and suspension-aware analysis, that yield two different lower bounds on blocking. Five asymptotically optimal locking protocols are designed and analyzed: a family of mutex, RW, and k-exclusion protocols for global, partitioned, and clustered scheduling that are asymptotically optimal in the suspension-oblivious case, and a mutex protocol for partitioned scheduling that is asymptotically optimal in the suspension-aware case. A LITMUS^RT-based empirical evaluation is presented that shows these protocols to be practical.
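Partitioned EDF, the configuration the experiments favour for hard real-time workloads, statically assigns each task to one core and then runs plain uniprocessor EDF per core. A minimal first-fit-decreasing partitioner using the EDF utilisation bound (total utilisation at most 1 per core) might look as follows; the task sets and heuristic are illustrative, not LITMUS^RT code.

```python
def partition_first_fit(tasks, num_cores):
    """First-fit-decreasing partitioning of (wcet, period) tasks under
    the uniprocessor EDF utilisation bound (U <= 1 per core)."""
    cores = [[] for _ in range(num_cores)]
    load = [0.0] * num_cores
    # Place heaviest tasks first; this tends to reduce fragmentation.
    for wcet, period in sorted(tasks, key=lambda t: t[0] / t[1], reverse=True):
        u = wcet / period
        for i in range(num_cores):
            if load[i] + u <= 1.0:
                cores[i].append((wcet, period))
                load[i] += u
                break
        else:
            return None  # no feasible partition found by first-fit
    return cores

tasks = [(2, 4), (3, 6), (1, 10), (5, 10)]   # (wcet, period) pairs
print(partition_first_fit(tasks, num_cores=2) is not None)   # True: fits on 2 cores
```

Bin packing is NP-hard in general, so first-fit may reject task sets a smarter assignment could place; this is the usual trade-off partitioned schedulers accept.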

    Reengineering and development of IoT Systems for Home Automation

    Get PDF
    BEng Thesis, Instituto Superior de Engenharia do Porto. With the increasing adoption of technology in today's houses, electricity is in all-time-high demand. Given the plethora of vital electricity-powered appliances used every day, such as refrigerators and washing machines, it has proven difficult to manage all devices' electric consumption. To reduce consumption costs and make the process more manageable, the concept of flex-offers was created. A flex-offer is built around scheduling energy usage in conjunction with the prices of electricity, as provided by an energy market. More specifically, a flex-offer is an energy consumption offer containing the user's energy consumption flexibility. It is sent to an entity called the Aggregator, which aggregates flex-offers from multiple parties, bargains with the energy market, and responds to each flex-offer with a schedule that meets the lowest consumption prices while still satisfying the user's needs. Applying flex-offers to a house's equipment gave rise to the idea of FlexHousing. The goal of the CISTER Research Center's FlexHousing project is to deliver a platform where users can register their smart appliances, regardless of brand and distributor, set up preferences for the devices' usage, and let the system manage energy consumption and device-activation schedules based on energy market prices. A previous project had already built a prototype of the FlexHousing system; nevertheless, the original platform had many limitations and lacked maturity from a software-engineering point of view. The goal of this internship was to apply a reengineering process to the FlexHousing project while also adding new features. Thus, the project's domain model, its database, and its class structures were altered to satisfy the new requirements, and its web platform was rebuilt from the ground up. A new interface was also developed to facilitate support for devices of different brands; as a proof of concept for the benefits of this interface, a connection with a new device (Sonoff Pow) was established. Moreover, a new functionality was developed to identify a device's type of appliance based on its energy consumption, that is, to determine whether a device is, for instance, a refrigerator. Finally, another feature was added in which, based on a device's type and its energy consumption pattern, flex-offer creation is automated, minimizing user input. As planned, the FlexHousing platform now supports multiple types of devices and has a software interface to support more types in the future with minimal effort. The flex-offer creation process has been simplified and is now partially automated. Finally, the web platform's UI has been updated, becoming more intuitive and appealing to the user.
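A flex-offer essentially pairs an energy demand with timing flexibility, and the Aggregator's response picks the cheapest feasible slot against market prices. The single-offer sketch below conveys the idea; the price series, slot granularity, and function name are invented for illustration and do not come from the FlexHousing platform.

```python
def schedule_flex_offer(prices, earliest, latest_end, duration, energy_per_slot):
    """Pick the cheapest start slot for a fixed-duration load that must
    run within [earliest, latest_end) at `energy_per_slot` per slot."""
    best_start, best_cost = None, float("inf")
    for start in range(earliest, latest_end - duration + 1):
        cost = sum(prices[start:start + duration]) * energy_per_slot
        if cost < best_cost:
            best_start, best_cost = start, cost
    return best_start, best_cost

hourly = [30, 28, 22, 18, 19, 25, 35, 40]   # hypothetical day-ahead prices per slot
start, cost = schedule_flex_offer(hourly, earliest=1, latest_end=7,
                                  duration=2, energy_per_slot=1.5)
print(start, cost)   # slot 3 is cheapest: (18 + 19) * 1.5 = 55.5
```

A real Aggregator additionally aggregates many offers before bargaining with the market, but each accepted schedule still reduces to choosing a start time inside the offered flexibility window.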

    gbeta - a Language with Virtual Attributes, Block Structure, and Propagating, Dynamic Inheritance

    Get PDF
    A language-design development process is presented which leads to a language, gbeta, with a tight integration of virtual classes, general block structure, and a multiple-inheritance mechanism based on coarse-grained structural type equivalence. From this emerges the concept of propagating specialization. The power lies in the fact that a simple expression can have far-reaching but well-organized consequences, e.g., in one step causing the combination of families of classes, then by propagation the members of those families, and finally by propagation the methods of the members. Moreover, classes are first-class values which can be constructed at run-time, and it is possible to inherit from classes whether or not they are compile-time constants and whether or not they were created dynamically. It is also possible to change the class and structure of an existing object at run-time, preserving object identity. Even though such dynamism is normally not seen in statically type-checked languages, these constructs have been integrated without compromising the static type safety of the language.
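The run-time re-classification gbeta supports, changing the class of an existing object while preserving its identity, can be mimicked in a dynamic language, though without gbeta's static type safety. A hypothetical Python sketch (the class names are invented):

```python
class Point:
    def describe(self):
        return "point"

class ColorPoint(Point):
    color = "red"
    def describe(self):
        return "color point"

p = Point()
original_identity = id(p)
p.__class__ = ColorPoint       # re-classify the existing object
assert id(p) == original_identity   # same object, new class
print(p.describe())            # now dispatches to ColorPoint.describe
```

In Python such reassignment is unchecked and can fail at run-time; gbeta's contribution is making this kind of dynamism statically type-safe.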

    Machine virtuelle universelle pour codage vidéo reconfigurable

    Get PDF
    This thesis proposes a new paradigm for representing applications in virtual machines, capable of abstracting the architecture of computer systems. Current virtual machines rely on a single application representation that abstracts machine instructions, and on an execution model that maps the behaviour of those instructions onto target machines. While these two models make applications portable across a wide range of systems, they cannot express concurrency between instructions. Yet such concurrency is essential for optimising the processing of applications according to the resources available on the target platform. We first developed a universal application representation for virtual machines based on dataflow-graph modelling. An application is modelled as a directed graph whose vertices are computation units (the actors) and whose edges represent the flow of data through those vertices. Each computation unit can be processed independently of the others on separate resources, making instruction-level concurrency in the application explicit. Exploiting this new application description formalism requires new programming rules. To that end, we introduce and define the concept of a Minimal and Canonical Representation of an actor, founded both on the actor-oriented programming language CAL and on the instruction-abstraction models of existing virtual machines. Our major contribution, which integrates the two proposed representations, is the development of a Universal Virtual Machine (UVM) whose specificity is to manage adaptation, optimisation, and scheduling mechanisms on top of the Low-Level Virtual Machine (LLVM) compiler infrastructure. The relevance of this UVM is demonstrated in the normative context of Reconfigurable Video Coding (RVC): MPEG RVC provides reference decoder applications conforming to the MPEG-4 Part 2 Simple Profile in the form of dataflow graphs. One application of this thesis is the dataflow-graph modelling of a decoder conforming to the MPEG-4 Part 10 Constrained Baseline Profile, which is twice as complex as the MPEG RVC reference applications. Experimental results show a roughly twofold execution speed-up on two-core platforms compared with single-core execution, and the optimisations developed yield a further 25% performance gain while halving compilation times. This work demonstrates the operational and universal character of the standard, whose scope extends beyond video to other signal-processing domains (3D, sound, photo...).
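The dataflow representation described above, actors connected by token-carrying edges, can be sketched with a toy sequential interpreter. The three-actor pipeline and its scheduling loop below are illustrative inventions, not the UVM's implementation; independent actors with tokens available could equally be fired on separate cores, which is exactly the concurrency the representation exposes.

```python
from collections import deque

class Actor:
    def __init__(self, fn):
        self.fn = fn
        self.inbox = deque()   # FIFO edge feeding this actor
        self.outputs = []      # downstream actors' inboxes

    def fire(self):
        """Consume one token, apply the computation, push the result downstream."""
        if not self.inbox:
            return False
        token = self.fn(self.inbox.popleft())
        for out in self.outputs:
            out.append(token)
        return True

# A three-stage pipeline: source -> double -> emit.
results = []
emit = Actor(lambda x: results.append(x) or x)
double = Actor(lambda x: 2 * x)
source = Actor(lambda x: x + 1)
source.outputs.append(double.inbox)
double.outputs.append(emit.inbox)

for v in [1, 2, 3]:
    source.inbox.append(v)
# Fire actors until no tokens remain (a trivial sequential schedule).
while any(a.fire() for a in (source, double, emit)):
    pass
print(results)   # [4, 6, 8]
```

The scheduling freedom lies in the `while` loop: any policy that eventually fires every fireable actor produces the same results, so the runtime can map actors to cores however the platform's resources allow.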

    Actes de l'Ecole d'Eté Temps Réel 2005 - ETR'2005

    Get PDF
    PDF of the proceedings available at http://etr05.loria.fr/. The programme of the Ecole d'été Temps Réel 2005 (Real-Time Summer School) is built around survey lectures given by specialists from industry and academia, enabling ETR participants, and doctoral students in particular, to build a scientific culture in the field. This fourth edition is centred on the major themes in the design of real-time systems: languages and architecture description techniques; validation, testing, and proof using deterministic and stochastic approaches; scheduling and real-time operating systems; and distribution, real-time networks, and quality of service.

    The Role of Computers in Research and Development at Langley Research Center

    Get PDF
    This document is a compilation of presentations given at a workshop on the role of computers in research and development at the Langley Research Center. The objective of the workshop was to inform the Langley Research Center community of the current software systems and software practices in use at Langley. The workshop was organized in 10 sessions: Software Engineering; Software Engineering Standards, Methods, and CASE Tools; Solutions of Equations; Automatic Differentiation; Mosaic and the World Wide Web; Graphics and Image Processing; System Design Integration; CAE Tools; Languages; and Advanced Topics.

    A Schedulable Garbage Collection for Embedded Applications in CLI

    No full text