481 research outputs found

    Modeling Mixed-critical Systems in Real-time BIP

    No full text
    The proliferation of multi- and manycores creates an important design problem: designing and verifying systems under mixed-criticality timing and safety constraints while taking resource sharing and hardware faults into account. In our work, we aim to contribute towards the solution of these problems by using a formal design language, real-time BIP, to model both hardware and software, functionality and scheduling. In this paper we present initial experiments in modeling mixed-criticality systems in BIP.
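
    To make the modeling style concrete, here is a toy sketch in Python, not real-time BIP syntax, of the component-based idea the paper builds on: components are automata whose transitions fire through ports, and a priority rule arbitrates when several components are ready. The Component class and the HI-first rule are illustrative assumptions, not the BIP toolset's API.

        from dataclasses import dataclass, field

        @dataclass
        class Component:
            # A BIP-like component: an automaton whose transitions are labeled by ports.
            name: str
            state: str
            transitions: dict = field(default_factory=dict)  # (state, port) -> next state

            def can_fire(self, port):
                return (self.state, port) in self.transitions

            def fire(self, port):
                self.state = self.transitions[(self.state, port)]

        # Two tasks of different criticality competing for one processor.
        hi_task = Component("HI", "idle", {("idle", "acquire"): "run", ("run", "release"): "idle"})
        lo_task = Component("LO", "idle", {("idle", "acquire"): "run", ("run", "release"): "idle"})

        def step(components, port):
            # Priority rule: among ready components, the high-criticality one wins.
            ready = [c for c in components if c.can_fire(port)]
            if ready:
                chosen = min(ready, key=lambda c: c.name != "HI")
                chosen.fire(port)
                print(f"{chosen.name} fires '{port}' -> {chosen.state}")

        for port in ["acquire", "release", "acquire", "release"]:
            step([hi_task, lo_task], port)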

    The design and use of a digital radio telemetry system for measuring internal combustion engine piston parameters.

    Get PDF
    During the course of this project, a digital radio telemetry system has been designed and shown to be capable of measuring parameters from the piston of an internal combustion engine under load. The impetus for the work stems from the need to sample the data required for oil degradation analysis and the unavailability of a system to perform such sampling. The prototype system was designed for installation within a small Norton Villiers C-30 industrial engine. This choice of engine presented significant design challenges due to the small size of the engine (components and construction) and the crankcase environment. These challenges manifested themselves in the choice of carrier frequency, antenna size and location, modulation scheme, data encoding scheme, signal attenuation, error checking and correction, choice of components, manufacturing techniques and physical mounting to reciprocating parts. To overcome these challenges, a detailed analysis of the radio frequency spectrum was undertaken in order to minimise attenuation from mechanisms such as absorption, reflection, motion, spatial arrangement and noise. Another aspect of the project concerned the development of a flexible modus operandi to facilitate a number of sampling regimes. To achieve such flexibility, a two-way communication protocol was implemented, enabling the sampling system to be programmed into a particular mode of operation while in use. Additionally, the system was designed to accommodate the range of signals output by most transducer devices. The sampling capabilities of the prototype system were extended by enabling it to support multiple transducers providing a mixture of output signals; for example, both analogue and digital signals have been sampled. A facility to sample data in response to triggering stimuli has also been tested; specifically, a sampling trigger may be derived from the motion of the piston via an accelerometer. Ancillary components, such as interface hardware and software, have been developed which are suitable for recording the data accessed by the system. This work has demonstrated that multi-transducer, mixed-signal monitoring of piston parameters (such as temperature and acceleration) using a two-way, programmable, digital radio frequency telemetry system is not only possible but provides a means for more advanced instrumentation.
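
    As a concrete illustration of the framing and error-checking concerns the abstract raises, here is a minimal Python sketch of a telemetry frame with a CRC-16 integrity check. The [sync | channel | 12-bit sample | CRC] layout and the CRC-CCITT polynomial are assumptions made for the example; the thesis's actual encoding and error-correction scheme is not reproduced here.

        import struct

        def crc16_ccitt(data: bytes, crc: int = 0xFFFF) -> int:
            # Bitwise CRC-16/CCITT: detects corruption of the frame in transit.
            for byte in data:
                crc ^= byte << 8
                for _ in range(8):
                    crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
                    crc &= 0xFFFF
            return crc

        def encode_frame(channel: int, sample: int) -> bytes:
            # sync byte, channel id, 12-bit sample, then the checksum.
            payload = struct.pack(">BBH", 0xAA, channel, sample & 0x0FFF)
            return payload + struct.pack(">H", crc16_ccitt(payload))

        def decode_frame(frame: bytes):
            payload, received = frame[:-2], struct.unpack(">H", frame[-2:])[0]
            if crc16_ccitt(payload) != received:
                raise ValueError("CRC mismatch: frame corrupted in transit")
            sync, channel, sample = struct.unpack(">BBH", payload)
            return channel, sample

        frame = encode_frame(channel=3, sample=2048)   # e.g. a piston temperature reading
        print(decode_frame(frame))                     # -> (3, 2048)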

    Scheduling of Systems with Different Criticality Levels

    Get PDF
    Real-time safety-critical systems must complete their tasks within a given time limit. Failure to successfully perform their operations, or missing a deadline, can have severe consequences such as destruction of property and/or loss of life. Examples of such systems include automotive systems, drones and avionics, among others. Safety guarantees must be provided before these systems can be deemed usable. This is usually done through certification performed by a certification authority. Safety evaluation and certification are complicated and costly even for smaller systems. One answer to these difficulties is the isolation of the critical functionality. Executing tasks of different criticalities on separate platforms prevents non-critical tasks from interfering with critical ones, provides a higher guarantee of safety and simplifies the certification process by limiting it to the critical functions. But this separation, in turn, introduces undesirable results: inefficient resource utilization and an increase in cost, weight, size and energy consumption, which can put a system at a competitive disadvantage. To overcome the drawbacks of isolation, Mixed Criticality (MC) systems can be used. These systems allow functionalities with different criticalities to execute on the same platform. In 2007, Vestal proposed a model to represent MC-systems in which tasks have multiple Worst Case Execution Times (WCETs), one for each criticality level. In addition, correctness conditions for scheduling policies were formally defined, allowing lower-criticality jobs to miss deadlines or even be dropped in case of failure or emergency situations. The introduction of multiple WCETs and different correctness conditions increased the difficulty of the scheduling problem for MC-systems. Conventional scheduling policies and schedulability tests proved inadequate and the need for new algorithms arose. Since then, a lot of work has been done in this field. In this thesis, we contribute to the study of schedulability in MC-systems. The workload of a system is represented as a set of jobs that can describe the execution over the hyper-period of tasks or over a duration in time. This model allows us to study the viability of simulation-based correctness tests in MC-systems. We show that simulation tests can still be used in mixed-criticality systems, but in this case the schedulability of the worst-case scenario is no longer sufficient to guarantee the schedulability of the system, even in the fixed-priority scheduling case. We show that scheduling policies are not predictable in general, and define the concept of weak predictability for MC-systems. We prove that a specific class of fixed-priority policies is weakly predictable and propose two simulation-based correctness tests that work for weakly predictable policies. We also demonstrate that, contrary to what was believed, testing for correctness cannot be done with only a linear number of preemptions. The majority of the related work focuses on systems with two criticality levels due to the difficulty of the problem. But for automotive and airborne systems, industrial standards define four or five criticality levels, which motivated us to propose a scheduling algorithm that can schedule mixed-criticality systems with, in theory, any number of criticality levels.
We show experimentally that it has higher success rates compared to the state of the art. We illustrate how our scheduling algorithm, or any algorithm that generates a single time-triggered table for each criticality mode, can be used as a recovery strategy to ensure the safety of the system in case of certain failures. Finally, we propose a high-level concurrency language and a model for designing an MC-system with coarse-grained multi-core interference.
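
    As a pointer to what Vestal's model looks like in practice, here is a toy Python sketch of dual-criticality jobs with one WCET budget per level and the usual mode-switch rule: when a high-criticality job overruns its low-mode budget, the system switches to HI mode and low-criticality jobs may be dropped. The field names and the fixed-priority order are illustrative assumptions, not the thesis's algorithm.

        from dataclasses import dataclass

        @dataclass
        class Job:
            name: str
            criticality: str      # "LO" or "HI"
            wcet: dict            # per-level budgets, e.g. {"LO": 2, "HI": 5}
            actual: int           # execution time observed in this scenario

        def simulate(jobs):
            mode = "LO"
            for job in sorted(jobs, key=lambda j: j.criticality != "HI"):  # HI first
                if mode == "HI" and job.criticality == "LO":
                    print(f"{job.name}: dropped (system in HI mode)")
                    continue
                print(f"{job.name}: runs {job.actual} against budget {job.wcet[mode]}")
                if job.criticality == "HI" and job.actual > job.wcet["LO"]:
                    mode = "HI"   # overrun triggers the criticality mode switch

        simulate([
            Job("hi_ctrl", "HI", {"LO": 2, "HI": 5}, actual=3),  # overruns its LO budget
            Job("lo_log",  "LO", {"LO": 1},          actual=1),  # dropped after the switch
        ])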

    A 4-channel 12-bit high-voltage radiation-hardened digital-to-analog converter for low orbit satellite applications

    Get PDF
    This paper presents the circuit design and implementation of a four-channel 12-bit digital-to-analog converter (DAC) with high-voltage operation and radiation tolerance, fabricated in the CSMC H8312 0.5-μm Bi-CMOS technology and functional across a wide temperature range from -55 °C to 125 °C. An R-2R resistor network is adopted in the DAC to provide the necessary resistor matching, which, together with both global and local common-centroid layout, improves the DAC's precision and linearity. Therefore, no additional complicated digital calibration or laser trimming is needed in this design. The experimental and measurement results show that the maximum operating frequency of the single-chip four-channel 12-bit R-2R ladder high-voltage radiation-tolerant DAC is 100 kHz, and that the DAC achieves a maximum differential non-linearity of 0.18 LSB and a maximum integral non-linearity of -0.53 LSB at 125 °C, which is close to optimal DAC performance. The performance of the proposed DAC remains constant over the whole temperature range from -55 °C to 125 °C. Furthermore, an enhanced radiation-hardened design has been demonstrated using a radiation chamber experimental setup. The fabricated radiation-tolerant DAC chipset occupies a die area of 7 mm x 7 mm in total including pads (core active area of 4 mm x 5 mm excluding pads), consumes less than 525 mW, and has an output voltage range of -10 V to +10 V.
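
    For readers unfamiliar with the two linearity figures quoted above, here is a short Python sketch of how DNL and INL are computed from a DAC transfer curve, assuming the paper's -10 V to +10 V range and 12-bit resolution; the 'measured' curve here is synthetic noise, not the paper's silicon data.

        import numpy as np

        BITS = 12
        codes = np.arange(2 ** BITS)
        lsb = 20.0 / (2 ** BITS - 1)               # -10 V to +10 V full scale
        ideal = -10.0 + codes * lsb
        # Synthetic 'measurement': the ideal line plus small random errors.
        measured = ideal + np.random.default_rng(0).normal(0, 0.05 * lsb, codes.size)

        dnl = np.diff(measured) / lsb - 1.0        # step-size error per code, in LSB
        inl = (measured - ideal) / lsb             # deviation from the ideal line, in LSB
        print(f"max |DNL| = {abs(dnl).max():.2f} LSB, max |INL| = {abs(inl).max():.2f} LSB")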

    Design choices for agent-based control of AGVs in the dough making process

    Get PDF
    In this paper we consider a multi-agent system (MAS) for the logistics control of Automatic Guided Vehicles (AGVs) that are used in the dough-making process at an industrial bakery. Here, logistics control refers to constructing robust schedules for all transportation jobs. The paper discusses how alternative MAS designs can be developed and compared using cost, the frequency of messages between agents, and the computation time for evaluating control rules as performance indicators. Qualitative design guidelines turn out to be insufficient to select the best agent architecture. Therefore, we also use simulation to support decision making, using real-life data from the bakery to evaluate several alternative designs. We find that architectures in which line agents initiate the allocation of transportation jobs, and AGV agents schedule multiple jobs in advance, perform best. We conclude by discussing the benefits of our MAS design approach for real-life applications.
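
    The winning design, line agents initiating job allocation, resembles a contract-net-style round; the following Python sketch illustrates that pattern under simple assumptions (bids are earliest completion times, one job per announcement). The class names and the cost model are illustrative, not the paper's simulation model.

        from dataclasses import dataclass, field

        @dataclass
        class AGVAgent:
            name: str
            schedule: list = field(default_factory=list)   # committed job finish times

            def bid(self, duration: float) -> float:
                start = self.schedule[-1] if self.schedule else 0.0
                return start + duration                    # earliest completion time

        def line_agent_allocate(duration, agvs):
            # The line agent announces a job; the cheapest bidder wins and commits.
            winner = min(agvs, key=lambda a: a.bid(duration))
            finish = winner.bid(duration)
            winner.schedule.append(finish)
            return winner.name, finish

        agvs = [AGVAgent("AGV-1"), AGVAgent("AGV-2")]
        for job_duration in [5.0, 3.0, 4.0]:               # dough transport jobs (minutes)
            print(line_agent_allocate(job_duration, agvs))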

    Hardware Software Synthesis of a H.264 / AVC Baseline Profile Decoder

    Get PDF
    The latest video compression standard is a joint effort between ITU and MPEG known as H.264/AVC. As with any video compression standard, H.264/AVC uses computationally intensive algorithms to maximize performance. During decompression these algorithms must be applied in real time, processing 30 frames a second. This can be done in software, in specialized hardware, or in a combination of the two. Software solutions allow for maximum portability and ease of design, but General Purpose Processors (GPPs) cannot take full advantage of the parallelizable algorithms that the H.264 decoder is based upon. Specialized hardware solutions, on the other hand, allow concurrent data and instruction paths, but do not offer a high level of abstraction for cross-platform development. Recent work by Xilinx has resulted in the advent of the MicroBlaze soft-processor, a stand-alone microcontroller built from an FPGA. The MicroBlaze provides a specialized hardware medium to run software on-chip with VHDL entities. The goal of this thesis was to model and simulate a software/hardware hybrid H.264/AVC Baseline Profile decoder using VHDL and a soft-processor. It was proposed to divide all highly sequential calculations (run-length and CAVLC decoding) and control data flow into software, and to perform the remaining calculations (prediction, inverse transform, inverse quantization, etc.) in hardware modules. The software runs on Xilinx's MicroBlaze soft-processor and the hardware was designed using VHDL. A major advantage of soft-processors over GPPs is that hardware instantiations reside on-chip with the processor. The software and MicroBlaze soft-processor were simulated in a test bench and the results showed that the MicroBlaze could not handle the encoded bit-stream in real time. For this reason the hardware interface and hardware decoder were never fully implemented. The scope of the thesis covers the H.264 Baseline Profile standard, the MicroBlaze processor, the implemented software solution, and the proposed hardware counterpart.
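
    To illustrate why the bitstream-parsing stages are called highly sequential, here is a small Python sketch of Exp-Golomb ue(v) decoding, one of the entropy-coding primitives used in H.264 headers: each symbol's length depends on the bits that precede it, so symbols cannot be decoded in parallel. The BitReader helper is an assumption made for the example, not code from the thesis.

        class BitReader:
            # Naive MSB-first bit reader over a byte string (fine for a sketch).
            def __init__(self, data: bytes):
                self.bits = "".join(f"{b:08b}" for b in data)
                self.pos = 0

            def read_bit(self) -> int:
                bit = int(self.bits[self.pos])
                self.pos += 1
                return bit

            def read_bits(self, n: int) -> int:
                value = int(self.bits[self.pos:self.pos + n], 2) if n else 0
                self.pos += n
                return value

        def read_ue(r: BitReader) -> int:
            # Exp-Golomb: count leading zeros, then read that many suffix bits.
            zeros = 0
            while r.read_bit() == 0:
                zeros += 1
            return (1 << zeros) - 1 + r.read_bits(zeros)

        r = BitReader(bytes([0b00100101]))
        print(read_ue(r), read_ue(r))   # '00100' -> 3, then '1' -> 0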

    Automated Fault Tolerance Augmentation in Model-Driven Engineering for CPS

    Get PDF
    Cyber-Physical Systems are usually subject to dependability requirements such as safety and reliability constraints. Over the last 50 years, a body of efficient fault-tolerance mechanisms has been devised to handle faults occurring at run time. However, properly implementing those mechanisms is a time-consuming task that requires a great deal of know-how. In this paper, we propose a general framework which allows system designers to decouple functional and non-functional concerns, and to express non-functional properties at design time using domain-specific languages. In the spirit of generative programming, functional models are then automatically “augmented” with dependability mechanisms. Importantly, the real-time behavior of the initial models, in terms of sampling times and meeting deadlines, is preserved. The practicality of the approach is demonstrated with the automated implementation of one prominent software fault-tolerance pattern, namely N-Version Programming, in the CPAL model-driven engineering workflow.
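
    For reference, the N-Version Programming pattern mentioned above amounts to running N independently developed variants of a function and voting on their outputs. The following minimal Python sketch shows the voting idea with stand-in variants; the fault in variant_c is injected for demonstration, and none of this is CPAL or the paper's generated code.

        from collections import Counter

        def variant_a(x):
            return x * x

        def variant_b(x):
            return x ** 2

        def variant_c(x):
            return x * x + 1 if x == 3 else x * x   # injected fault at x == 3

        def nvp(x, variants=(variant_a, variant_b, variant_c)):
            # Run all versions and return the majority result, masking a faulty minority.
            results = [v(x) for v in variants]
            value, votes = Counter(results).most_common(1)[0]
            if votes <= len(results) // 2:
                raise RuntimeError(f"no majority among {results}")
            return value

        print(nvp(3))   # -> 9: the voter outvotes the faulty variant_c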

    Cache design strategies for efficient adaptive line placement

    Get PDF
    Efficient memory hierarchy design is critical due to the large difference between the speed of processors and that of memory. In this context, cache memories play a crucial role in bridging this gap. Cache management has become even more significant with the appearance of chip multiprocessors (CMPs), which imply larger memory bandwidth requirements and greater working sets for many emerging applications, and which also need a fair and efficient distribution of cache resources between the cores on a single chip. This dissertation aims to analyze some of the problems commonly found in modern caches and to propose cost-effective solutions to improve their performance. Most of the approaches proposed in this Thesis reduce cache miss rates by taking advantage of the different levels of demand that cache sets may experience. This way, lines are placed in underutilized cache blocks of other cache sets if they are likely to be reused in the near future and there is not enough space in their native cache set. When this does not suffice, this dissertation proposes to modify the insertion policies of oversubscribed sets in a coordinated way. Hence, our proposals retain the most useful part of the working set in the cache while discarding temporary data as soon as possible. These ideas, initially developed in the context of last-level caches (LLCs) in single-core systems, are successfully adapted in this Thesis to first-level caches and multicore systems. Regarding first-level caches, a novel design is presented that allows banks to be dynamically allocated to the instruction or the data cache depending on their degree of pressure. As for multicore systems, our designs are first provided with thread-awareness in shared caches in order to give a particular treatment to each stream of requests depending on its owner. Finally, we explore the sharing of resources by means of the spilling of lines among private LLCs in CMPs using several innovative features, such as a neutral state, which prevents caches from taking part in the spilling mechanism when this could be harmful; variable granularities for the management of the caches; and the coordinated management of the cache insertion policy. Throughout this process we have used a simple and cost-effective metric called the Set Saturation Level (SSL) to track the state of each cache set. It is also worth pointing out that our approaches are very competitive and often outperform many of the most recent techniques in the field, despite implying very small storage and power consumption overheads.
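
    The following toy Python sketch illustrates the core placement idea under simple assumptions: a per-set pressure counter (a stand-in for the dissertation's Set Saturation Level) decides whether a missing line is installed in its native set or spilled into an underutilized neighbor set. The threshold, the neighbor choice and the counter update rule are all illustrative, not the dissertation's exact policy.

        from collections import OrderedDict

        SETS, WAYS, SATURATED = 4, 2, 3               # tiny cache; the threshold is arbitrary

        sets = [OrderedDict() for _ in range(SETS)]   # per-set LRU order: tag -> None
        pressure = [0] * SETS                         # stand-in Set Saturation Level

        def access(addr):
            s, tag = addr % SETS, addr // SETS
            native, neighbor = s, (s + 1) % SETS
            for where in (native, neighbor):
                if tag in sets[where]:
                    sets[where].move_to_end(tag)      # LRU update on hit
                    pressure[s] = max(0, pressure[s] - 1)
                    return "hit"
            pressure[s] += 1
            # Spill: native set saturated while its neighbor still has headroom.
            target = neighbor if pressure[s] >= SATURATED and pressure[neighbor] < SATURATED else native
            if len(sets[target]) >= WAYS:
                sets[target].popitem(last=False)      # evict the LRU line
            sets[target][tag] = None
            return "miss"

        for addr in [0, 4, 8, 12, 0, 4]:              # all map to set 0: pressure builds
            print(addr, access(addr))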