Quirks and Challenges in the Design and Verification of Efficient, High-Load Real-Time Software Systems
Existing concepts for ensuring the correctness of the timing behavior of real-time systems are often based on schedulability analysis methods using exact proofs. Due to the complexity of the scheduling problem, worst-case approximations are typically used today to judge the reliability of the timing behavior of software systems. In industrial practice, however, this leads to large safety margins in product designs that are commercially unacceptable in many application domains. For such highly efficient systems, overly pessimistic schedulability analysis methods are of limited benefit. As a consequence, the penetration of real-time analysis in industrial software development is suboptimal, which can lead to insufficient quality of the developed products. New approaches are therefore needed to support the design and validation of high-load real-time systems with an average CPU load of 90% or above, in order to improve the situation.
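The gap between worst-case and average behavior described above can be illustrated with a toy schedulability check. This is a hypothetical sketch (all task parameters are invented, not from the paper): a classic rate-monotonic utilization test based on worst-case execution times rejects a task set whose average CPU load would be comfortably schedulable.

```python
def utilization(tasks, key):
    """Sum of execution-time / period over all tasks."""
    return sum(t[key] / t["period"] for t in tasks)

# Each task: period in ms, worst-case (wcet) and average (acet) execution time.
tasks = [
    {"period": 10, "wcet": 4.0, "acet": 2.5},
    {"period": 20, "wcet": 9.0, "acet": 5.0},
    {"period": 50, "wcet": 15.0, "acet": 8.0},
]

n = len(tasks)
rm_bound = n * (2 ** (1 / n) - 1)     # Liu & Layland bound for rate-monotonic scheduling

u_worst = utilization(tasks, "wcet")  # 0.40 + 0.45 + 0.30 = 1.15
u_avg = utilization(tasks, "acet")    # 0.25 + 0.25 + 0.16 = 0.66

print(f"worst-case load {u_worst:.2f}, average load {u_avg:.2f}, "
      f"RM bound {rm_bound:.2f}")
# Worst-case analysis rejects the set outright, even though its
# average load of 66% leaves considerable headroom in practice.
```

This is exactly the safety margin the abstract criticizes: a design sized for the worst-case load wastes most of the CPU in the average case, which is unacceptable for systems targeting 90%+ average load.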
Digital transformation of education & training in photonical measurement engineering & quality assurance (PMQ)
Abstract: The aim of the paper is to promote the worldwide new situation in which
- the contactless acquisition of measured quantities becomes more convenient, reliable, and affordable (c+r+a) through photonic sensor systems,
- the mobile processing of measured values becomes more c+r+a through mobile microcomputers, for example smartphones,
- basing quality assurance on objectively acquired (measured) data becomes more c+r+a through digital transformation, and
- the unified measurement of physical, physiological, and chemo-analytical quantities becomes more c+r+a through photonic measurement sensor systems with filters on CMOS chips, which can determine, for example:
  • geo-physical quantities (points, lengths, areas, and volumes of solids, which nowadays can be acquired contactlessly with laser beams),
  • medico-physiological quantities (color sensitivities of test persons in accordance with standardized eye color sensitivity curves, based on the normally about 7 million red-green-blue cone cells, which can distinguish up to 600,000 color shades), and
  • chemo-analytical quantities, i.e. the composition of solids, fluids, and gases, which is measurable with spectrometers or nowadays with hyperspectral cameras.
Deterministic Execution Sequence in Component Based Multi-Contributor Powertrain Control Systems
Modern complex control applications, e.g. engine management systems, are typically built using a component-based architecture, enabling the reuse of components and allowing the complexity of the application to be managed in terms of functional content, size, and interfaces. This approach of independently developed components is supported by the concepts available in AUTOSAR and can therefore be expected to gain increasing importance. However, due to the nature of control applications, there is still a strong coupling between individual parts of the components, resulting in signal chains and consequently in sequencing requirements. The challenge of implementing such execution sequences correctly is increased because the components are often delivered by different, external parties. Our approach extends the idea of functionally partitioning the application into the time domain by defining a system of phases with a fixed sequence and a defined content. This allows components to be designed into this sequencing frame right from the beginning, just as they are designed into the component partitioning frame today, and allows a system sequencing to be defined across different suppliers.
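The phase concept described above can be sketched in a few lines. This is a minimal, hypothetical illustration (all phase and component names are invented, not from the paper): components register runnables into named phases, and the fixed phase order alone guarantees a deterministic signal chain, independent of which supplier registers first.

```python
# Fixed sequence of phases with a defined content, agreed system-wide.
PHASES = ["acquire_inputs", "compute_setpoints", "write_outputs"]

class Schedule:
    def __init__(self):
        self.slots = {phase: [] for phase in PHASES}

    def register(self, phase, runnable):
        # Components from different suppliers register into named phases
        # instead of negotiating pairwise ordering constraints.
        self.slots[phase].append(runnable)

    def run_cycle(self, log):
        # The phase order is fixed, so the execution sequence across
        # components is deterministic regardless of registration order.
        for phase in PHASES:
            for runnable in self.slots[phase]:
                runnable(log)

sched = Schedule()
sched.register("write_outputs", lambda log: log.append("actuator"))
sched.register("acquire_inputs", lambda log: log.append("sensor"))
sched.register("compute_setpoints", lambda log: log.append("controller"))

trace = []
sched.run_cycle(trace)
print(trace)  # ['sensor', 'controller', 'actuator']
```

Note how the registrations arrive in an arbitrary order, yet the trace always follows the phase sequence, which is the sequencing frame the abstract proposes to fix at design time.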
Regular and chaotic current oscillations in n-type GaAs in transverse and longitudinal magnetic fields
Self-sustained current oscillations due to impurity breakdown in n-type GaAs epitaxial layers are reported for various magnetic-field strengths and orientations of the field with respect to the direction of current flow. In all cases, relaxation oscillations at the onset of breakdown and the formation of a current filament in the postbreakdown regime are observed. A magnetic field normal to the epitaxial layer destabilizes the filament and causes multifrequency oscillations and chaotic fluctuations. On the other hand, if the magnetic field lies in the plane of the layer, the filamentary current flow is stable up to field strengths of 2 T.
State Space and Input Output Approaches to Optimal Process Design
This dissertation examines input-output approaches, including analytical and CFD models, aimed at assessing reactor performance for optimal process design. While analytical models can rapidly provide reactor output metrics by forgoing levels of complexity, they are often limited by many underlying assumptions. Conversely, slow-to-solve CFD models are more versatile in their application due to a reduced number of necessary simplifications. To demonstrate the applicability of both approaches in this work, first, an analytical method is used to accurately calculate outlet concentrations for laminar flow reactors undergoing a steady-state process. Subsequently, detailed CFD modeling approaches are employed to optimize complex adsorption and membrane enhanced reactive processes. Chapter one introduces an analytical model capable of calculating the reactor effluent concentrations for steady-state, non-isothermal Segregated Laminar Flow Reactors (SLFRs) regardless of the reactor geometry. The only requirement is knowledge of the SLFR's residence time density function. The existing analytical model for isothermal SLFRs is extended to also be applicable to non-isothermal SLFRs for incompressible fluids. The accuracy of both the existing isothermal and the developed non-isothermal SLFR models is demonstrated in four case studies with two different reactor geometries by comparing the analytical results with results obtained from equivalent CFD simulations. The analytical results are shown to be within 2% of the CFD-generated values, rendering a CFD approach superfluous for SLFRs. Chapter two introduces a novel model of a Partial Pressure Temperature Swing Adsorptive Reactor (PPTSAR) process with application to the water gas shift reaction. In this Sorption Enhanced Water Gas Shift (SEWGS) process, a set of two PPTSARs, filled with both catalyst and adsorbent, is used.
Alternating between the two PPTSARs, a continuous syngas feed is converted to a stream of hydrogen and steam, free of carbon dioxide, in one reactor, while concurrently, steam is fed to the other PPTSAR to regenerate the adsorbent and capture the released carbon dioxide. This is realized by coupling a temperature and partial pressure swing between the reaction/adsorption and regeneration steps. These intensified PPTSARs can replace conventional water gas shift packed bed reactors in Integrated Gasification Combined Cycle (IGCC) power plants. The PPTSARs achieve carbon monoxide conversions greater than 98% with simultaneous carbon dioxide capture. Chapter three presents the detailed derivation of the multidomain PPTSAR model, including all assumptions employed in Chapter two. Chapter four presents a significant performance improvement on the PPTSAR process of Chapter two by demonstrating the effect of performing the regeneration step counter-currently to the reaction/adsorption step rather than co-currently. As a result, an analogous process using Partial Pressure Swing Adsorptive Reactors (PPSARs), without the need for an additional temperature swing, is presented. Finally, Chapters five and six present transient and steady-state multiscale membrane reactor models, respectively.
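The segregated-flow idea behind the Chapter one model can be sketched numerically. This is a hedged illustration under simplifying assumptions the dissertation does not make (isothermal operation, a first-order reaction, fully developed laminar pipe flow); the dissertation's model is more general. In the segregated-flow picture, the outlet concentration is the batch-reactor solution weighted by the residence time density function E(t):

```python
import math

def rtd_laminar_pipe(t, tau):
    """Residence time density E(t) of fully developed laminar pipe flow:
    E(t) = tau^2 / (2 t^3) for t >= tau/2, zero before the first fluid
    element (on the centerline) can exit."""
    return tau**2 / (2 * t**3) if t >= tau / 2 else 0.0

def outlet_concentration(c_in, k, tau, n=200_000):
    """First-order reaction A -> B, isothermal: the batch solution
    c_in * exp(-k t) is integrated against E(t) by the midpoint rule."""
    t_max = 50 * tau                       # truncate the long RTD tail
    dt = (t_max - tau / 2) / n
    total = 0.0
    for i in range(n):
        t = tau / 2 + (i + 0.5) * dt
        total += c_in * math.exp(-k * t) * rtd_laminar_pipe(t, tau) * dt
    return total

# Mean residence time tau = 1, rate constant k = 1 (invented numbers).
c_out = outlet_concentration(c_in=1.0, k=1.0, tau=1.0)
print(f"outlet concentration: {c_out:.4f}")
```

Only knowledge of E(t) enters the calculation, which mirrors the abstract's claim that the SLFR's residence time density function is the model's only requirement.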
Experiences with HPX on embedded real-time systems
Recently, more and more embedded devices use multi-core processors. For example, the current generation of high-end engine control units features triple-core processors. To reliably exploit the parallelism of these cores, an advanced programming environment is needed, such as the current C++17 standard or the upcoming C++20 standard. Using C++ to cooperatively parallelize software has been investigated comprehensively, but not in the context of embedded multi-core devices with real-time requirements. For this paper, we took two algorithms from Continental AG's powertrain domain that are characteristic of real-time embedded devices and examined the effect of parallelizing them with C++17/20, represented by HPX as a C++17/20 runtime implementation. Different data sizes were used to increase the execution times of the parallel sections. According to Gustafson's Law, with these increased data sizes the benefit of parallelization grows and greater speed-ups become possible. When keeping Continental AG's original data sizes, however, HPX is not able to reduce the execution time of the algorithms.
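The Gustafson's Law argument in the abstract can be made concrete with a small calculation. This is a toy sketch with invented numbers (not measurements from the paper): on a fixed core count, the attainable scaled speedup depends on the serial fraction of the work, which shrinks as the parallel sections are given more data.

```python
def gustafson_speedup(serial_fraction, n_cores):
    """Gustafson's scaled speedup: S = s + (1 - s) * N,
    where s is the serial fraction of the (scaled) workload."""
    return serial_fraction + (1 - serial_fraction) * n_cores

# With small data sizes, per-task overhead (task spawning, synchronization)
# dominates and the serial fraction is large; larger data sizes shrink it.
for s in (0.8, 0.4, 0.1):
    print(f"serial fraction {s:.0%}: speedup on 3 cores = "
          f"{gustafson_speedup(s, 3):.2f}x")
```

On a triple-core engine-control unit this is the whole story of the paper's result: at the original small data sizes the serial fraction (and runtime overhead) eats the benefit, while enlarged data sizes let the three cores pay off.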