1,495 research outputs found

    Timing verification of interface specifications and controllers

    Full text link
    Thesis digitized by the Direction des bibliothèques de l'Université de Montréal

    Design and Optimization of In-Cycle Closed-Loop Combustion Control with Multiple Injections

    Get PDF
    With the increasing demand for transportation, biofuels play a fundamental role in the transition to sustainable powertrains. Given the increased uncertainty of biofuel combustion properties, advanced combustion control systems have the potential to operate the engine with high flexibility while maintaining high efficiency and robustness. To that end, this thesis investigates the analysis, design, implementation, and application of closed-loop Diesel combustion control algorithms. Using fast in-cylinder pressure measurements, the combustion evolution can be monitored to adjust a multi-pulse fuel injection within the same cycle. This is referred to as in-cycle closed-loop combustion control. The design of the controller is based on an experimental characterization of the combustion dynamics through heat-release analysis, improved by the proposed cylinder-volume deviation model. The pilot combustion, its robustness and dynamics, and its effects on the main injection were analyzed. The pilot burnt mass significantly affects the main combustion timing and the heat-release shape, which determine the engine efficiency and emissions. Using feedback from a pilot-mass virtual sensor, these variations can be compensated by closed-loop control of the main injection. Predictive models are introduced to overcome the limitations imposed by the intrinsic delay between the control action (fuel injection) and the output measurements (pressure increase). High prediction accuracy is achieved through on-line model adaptation, and a reduced multi-cylinder method is proposed to limit its complexity. The predictive control strategy reduces the stochastic cyclic variations of the controlled combustion metrics. In-cycle controllability of the combustion requires simultaneous observability of the pilot combustion and control authority over the main injection.
The imposition of this restriction may decrease the indicated efficiency and increase violations of the operational constraints compared to open-loop operation. This is especially significant for pilot misfire. For in-cycle detection of pilot misfire, stochastic and deterministic methods were investigated. The on-line pilot-misfire diagnosis was fed back to compensate for the misfire with a second pilot injection. High flexibility of the combustion control strategy was achieved by a modular design of the controller. A finite-state machine was investigated for the synchronization of the feedback signals (measurements and model-based predictions), the active controller, and the output action. The experimental results showed improved tracking performance and shorter transients, regardless of the operating conditions and fuel used. To increase the indicated efficiency, direct and indirect optimization methods for the combustion control were investigated. An in-cycle controller targeting the maximum indicated efficiency increased it by 0.42 %-units. The indirect method took advantage of the reduced cyclic variations to optimize the indicated efficiency under hardware constraints and emission limits. By including the probability and in-cycle compensation of pilot misfire, the optimization of the CA50 set-point reference increased the indicated efficiency by 0.6 %-units at mid loads, compared to open-loop operation. Tools to evaluate the total cost of the system were provided by quantifying the hardware requirements of each controller module.
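The in-cycle compensation described above can be sketched as a single correction step: estimate the pilot burnt mass from the heat-release feedback, then adjust the main injection before it fires in the same cycle. The sketch below is purely illustrative and is not the thesis's controller: the nominal values, the gains k_soi and k_dur, and the locally linear sensitivity model are all invented.

```python
def compensate_main_injection(pilot_mass_est_mg,
                              pilot_mass_nom_mg=1.5,   # hypothetical nominal [mg]
                              main_soi_nom_deg=2.0,    # nominal start of injection [deg ATDC]
                              main_dur_nom_us=600.0,   # nominal duration [us]
                              k_soi=0.8, k_dur=40.0):  # invented linear gains
    """Correct the main injection, within the same cycle, for a deviation of
    the estimated pilot burnt mass from its nominal value."""
    dm = pilot_mass_est_mg - pilot_mass_nom_mg   # pilot burnt-mass error [mg]
    soi = main_soi_nom_deg + k_soi * dm          # under-burnt pilot -> advance SOI
    dur = main_dur_nom_us - k_dur * dm           # under-burnt pilot -> inject longer
    return soi, dur
```

With a nominal pilot estimate the command is unchanged; an under-burnt pilot (e.g. 1.0 mg instead of 1.5 mg) advances and lengthens the main injection to recover the lost heat release.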

    Timing verification of cyclic and acyclic systems based on constraint analysis

    Full text link
    We present a new approach for formulating and computing the time separation of events, used in the timing analysis and verification of different types of cyclic and acyclic systems that obey linear-min-max constraints with finite and infinite bounded component delays. Our approach consists of formulating the problem as a mixed integer program and then using the CPLEX solver to obtain the time separations between events. To demonstrate the practical use of our approach, we apply it to the verification and analysis of an Intel asynchronous differential-equation solver chip. Compared to previous work, our approach is based on an exact formulation, and it not only computes maximum separations but can also provide cyclic schedules and compute bounds on the possible periods of such schedules.
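For intuition, the following toy computes the maximum separation of two events in a tiny acyclic graph of max-type constraints with interval delays, by brute force over the delay-bound corners. This is only a conceptual illustration; the thesis instead solves an exact mixed integer program with CPLEX, which also handles min constraints and cyclic systems.

```python
from itertools import product

# Events 0..3; edges (src, dst, lo, hi) give delay bounds, listed in
# topological order; each event fires at the max over its incoming arcs.
EDGES = [(0, 1, 2, 4), (0, 2, 1, 5), (1, 3, 3, 3), (2, 3, 1, 2)]
N_EVENTS = 4

def max_separation(a, b):
    """Max of t[b] - t[a] over all delay choices (checking the bound
    extremes suffices here because every constraint is a max of terms
    that increase with each delay)."""
    best = float("-inf")
    for corner in product(*[(lo, hi) for (_, _, lo, hi) in EDGES]):
        t = [0.0] * N_EVENTS
        for (src, dst, _, _), delay in zip(EDGES, corner):
            t[dst] = max(t[dst], t[src] + delay)
        best = max(best, t[b] - t[a])
    return best
```

For this instance, the separation between events 1 and 3 is maximized by stretching the 0→2→3 path while shortening the 0→1 delay.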

    Adaptive Imaging with a Cylindrical, Time-Encoded Imaging System

    Full text link
    Most imaging systems for terrestrial nuclear imaging are static, in that the design of the system and the data-acquisition protocol are defined prior to the experiment. Often, these systems are designed for general use and are not optimized for any specific task. The core concept of adaptive imaging is to modify the imaging system during a measurement based on the collected data. This enables scenario-specific adaptation of the imaging system, which leads to better performance on a given task. This dissertation presents the first adaptive, cylindrical, time-encoded imaging (c-TEI) system and evaluates its performance on tasks relevant to nuclear non-proliferation and international safeguards. We explore two methods of adapting a c-TEI system, adaptive detector movements and adaptive mask movements, and apply these methods to three tasks: improving angular resolution, detecting a weak source in the vicinity of a strong source, and reconstructing complex source scenes. The results indicate that adaptive imaging significantly improves performance in each case. For the MATADOR imager, we find that adaptive detector movements improve the angular resolution of a point source by 20% and improve the angular resolution of two point sources by up to 50%. For the problem of detecting a weak source in the vicinity of a strong source, we find that adaptive mask movements achieve the same detection performance as a similar, non-adaptive system in 20%-40% less time, depending on the relative position of the weak source. Additionally, we developed an adaptive detection algorithm that doubles the probability of detection of the weak source at a 5% false-alarm rate. Finally, we applied adaptive imaging concepts to reconstruct complex arrangements of special nuclear material at Idaho National Laboratory. We find that combining data from multiple detector positions improves the image uniformity of extended sources by 38% and reduces the background noise by 50%.
We also demonstrate 2D (azimuthal and radial) imaging in a crowded source scene. These promising experimental results highlight the potential for adaptive imaging using a c-TEI system and motivate further research toward specific, real-world applications.
    PhD, Nuclear Engineering & Radiological Sciences, University of Michigan, Horace H. Rackham School of Graduate Studies
    http://deepblue.lib.umich.edu/bitstream/2027.42/163009/1/nirpshah_1.pd
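The adaptive idea, measure, update, and re-aim the system, can be illustrated with a deliberately simplified toy: a discrete set of candidate source directions, Poisson counting statistics, and a rule that always points the next measurement at the current maximum a posteriori direction. Everything here (rates, geometry, update rule) is invented for illustration and is unrelated to the actual MATADOR hardware or algorithms.

```python
import math, random

random.seed(0)                  # deterministic toy run
K = 12                          # candidate source directions (angular bins)
TRUE_DIR = 7                    # hidden source direction
BKG, SIG = 2.0, 10.0            # invented background / aligned-signal count rates

def rate(meas_dir, src_dir):
    """Idealized detector response: extra counts only when aimed at the source."""
    return BKG + (SIG if meas_dir == src_dir else 0.0)

def poisson(lam):
    """Knuth's Poisson sampler (adequate for small rates)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

log_w = [0.0] * K               # uniform log-prior over directions
meas_dir = 0
for _ in range(40):
    counts = poisson(rate(meas_dir, TRUE_DIR))
    for s in range(K):          # Poisson log-likelihood update per hypothesis
        lam = rate(meas_dir, s)
        log_w[s] += counts * math.log(lam) - lam
    meas_dir = max(range(K), key=lambda s: log_w[s])  # adapt: aim at current MAP

map_dir = max(range(K), key=lambda s: log_w[s])
```

The loop scans through directions, rules each one out as its likelihood drops, and then locks onto the true direction once the aligned count rate is observed.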

    Model-based Design, Operation and Control of Pressure Swing Adsorption Systems

    No full text
    This thesis is concerned with the design, operation and control of pressure swing adsorption (PSA) systems, employing state-of-the-art systems engineering tools. A detailed mathematical model is developed which captures the hydrodynamic, mass-transfer and equilibrium effects in detail, to represent real PSA operation. The first detailed case study presented in this work deals with the design of an explicit/multi-parametric model predictive controller for the operation of a PSA system comprising four adsorbent beds undergoing nine process steps, separating a 70 % H2, 30 % CH4 mixture into high-purity hydrogen. The key controller objective is to rapidly track a H2-purity set point of 99.99 %, manipulating the time duration of the adsorption step under the effect of process disturbances. To perform the task, a rigorous and systematic framework is employed, comprising four main steps: model development, system identification, the mp-MPC formulation, and in-silico closed-loop validation. Detailed comparison studies of the derived explicit MPC controller with conventional PID controllers are also performed for a multitude of disturbance scenarios. Following the controller design, a detailed design and control optimization study is presented which incorporates the design, operational and control aspects of PSA operation simultaneously, with the objective of improving real-time operability. This is in complete contrast to the traditional approach to the design of process systems, which employs a two-step sequential method of design first, then control. A systematic and rigorous methodology is employed towards this purpose and applied to a two-bed, six-step PSA system represented by a rigorous mathematical model, where the key optimization objective is to maximize the expected H2 recovery while achieving a closed-loop product H2 purity of 99.99 % for separating a 70 % H2, 30 % CH4 feed.
Furthermore, two detailed comparative studies are also conducted. In the first study, the optimal design and control configurations obtained from the simultaneous and sequential approaches are compared in detail. In the second study, an mp-MPC controller is designed to investigate any further improvements in the closed-loop response of the optimal PSA system. The final area of research work is related to the development of an industrial-scale, integrated PSA-membrane separation system. Here, the key objective is to enhance the overall recovery of "fuel cell ready" 99.99 % pure hydrogen produced via the steam methane reforming route, where PSA is usually employed as the purification system. In the first stage, the stand-alone PSA and membrane configurations are optimized by performing dynamic simulations on the mathematical model. During this procedure, both upstream and downstream membrane configurations are investigated in detail. For the hybrid configuration, the membrane area and PSA cycle time are chosen as the key design parameters. Furthermore, life-cycle analysis studies are performed on the hybrid system to evaluate its environmental impact in comparison to the stand-alone PSA system.
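As a rough illustration of the set-point-tracking problem given to the controller, the toy below steers a hypothetical first-order purity model toward the 99.99 % set point with a discrete PI law acting on the adsorption-step duration. The plant gain, time constant and PI gains are invented; the thesis uses a rigorous PSA model and an explicit mp-MPC rather than this PI sketch.

```python
def simulate_pi(kp=50.0, ki=20.0, n_cycles=200):
    """PI tracking of a 99.99 % H2-purity set point by adjusting the
    adsorption-step duration (toy first-order plant, invented numbers)."""
    sp = 99.99                      # purity set point [%]
    purity, integ = 99.90, 0.0      # initial purity, integral state
    u_nom = 10.0                    # nominal adsorption-step duration [s]
    for _ in range(n_cycles):
        e = sp - purity             # purity tracking error [%]
        integ += e
        u = u_nom + kp * e + ki * integ          # PI law on step duration
        # toy plant: first-order response toward a duration-dependent purity
        purity += 0.3 * ((99.90 + 0.008 * (u - u_nom)) - purity)
    return purity
```

With the gains above the loop settles on the set point; with zero gains the purity stays at its open-loop value of 99.90 %, mimicking the uncontrolled baseline.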

    Modeling and checking Real-Time system designs

    Get PDF
    Real-time systems are found in an increasing variety of application fields. Usually, they are embedded systems controlling devices that may risk lives or damage property: they are safety-critical systems. Hard real-time requirements (late means wrong) make the development of such systems a formidable and daunting task. The need to predict the temporal behavior of critical real-time systems has encouraged the development of a useful collection of models, results and tools for analyzing the schedulability of applications (e.g., [log]). However, there is no general analytical support for verifying other kinds of high-level timing requirements on complex software architectures. On the other hand, the verification of specifications and designs of real-time systems has been considered an interesting application field for automatic analysis techniques such as model-checking. Unfortunately, there is a natural trade-off between the sophistication of supported features and the practicality of formal analysis. To cope with the challenges of formally analyzing real-time system designs, we focus on three aspects that, we believe, are fundamental to obtaining practical tools: model generation, model reduction and model-checking. Firstly, we extend our ideas presented in [30] and develop an automatic approach to model and verify designs of real-time systems against complex timing requirements, based on scheduling theory and timed automata theory [7] (a well-known and well-studied formalism for modeling and verifying timed systems). That is, to enhance the practicality of formal analysis, we focus our analysis on designs adhering to fixed-priority scheduling. In essence, we exploit known scheduling theory to automatically derive simple and compositional formal models. To the best of our knowledge, this is the first proposal to integrate scheduling theory into the framework of automatic formal verification.
To model such systems, we present I/O Timed Components, a notion and discipline for building non-blocking, live timed systems. I/O Timed Components, which are built on top of timed automata, provide other important methodological advantages such as influence detection and compositional reasoning. Secondly, we provide a battery of automatic and rather generic abstraction techniques that, given a requirement to be analyzed, reduce the model while preserving the behaviors relevant to checking it. Thus, we do not feed the verification tools with the whole model, as previous formal approaches did. To argue the correctness of those abstractions, we present a notion of Continuous Observational Bisimulation that is weaker than strong timed bisimulation yet preserves many well-known logics for timed systems, such as TCTL [3]. Finally, since we choose timed automata as our formal kernel, we adapt and apply their deeply studied analysis theory, as well as their practical tools. Moreover, we also describe from scratch an algorithm to model-check duration properties, a feature that is not addressed by available tools. That algorithm extends the one presented in [28].
Fil: Braberman, Víctor Adrián. Universidad de Buenos Aires. Facultad de Ciencias Exactas y Naturales; Argentina
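The scheduling theory the approach builds on includes the classic worst-case response-time analysis for fixed-priority preemptive scheduling, which can be stated compactly; the task set below is an arbitrary example, not one from the thesis.

```python
import math

def response_time(tasks, i):
    """Worst-case response time of task i under fixed-priority preemptive
    scheduling; tasks = [(C, T), ...] sorted highest priority first, with
    deadlines equal to periods. Returns None if the deadline is missed."""
    C, T = zip(*tasks)
    r = C[i]
    while True:
        # interference from all higher-priority tasks released in [0, r)
        r_next = C[i] + sum(math.ceil(r / T[j]) * C[j] for j in range(i))
        if r_next == r:
            return r
        if r_next > T[i]:
            return None
        r = r_next

tasks = [(1, 4), (2, 6), (3, 12)]     # (C, T), highest priority first
wcrt = [response_time(tasks, i) for i in range(len(tasks))]
```

The fixed-point iteration converges here to response times of 1, 3 and 10, so every task meets its deadline.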

    Composition and synchronization of real-time components upon one processor

    Get PDF
    Many industrial systems have various hardware and software functions for controlling mechanics. If these functions act independently, as they do in legacy situations, their overall performance is not optimal. There is a trend towards optimizing the overall system performance and creating a synergy between the different functions in a system, which is achieved by replacing more and more dedicated, single-function hardware by software components running on programmable platforms. This increases the re-usability of the functions, but their synergy also requires that (parts of) the multiple software functions share the same embedded platform. In this work, we look at the composition of inter-dependent software functions on a shared platform from a timing perspective. We consider platforms comprising one preemptive processor resource and, optionally, multiple non-preemptive resources. Each function is implemented by a set of tasks; the group of tasks of a function that executes on the same processor, along with its scheduler, is called a component. The tasks of a component typically have hard timing constraints, and fulfilling these constraints requires analysis. Looking at a single function, co-operative scheduling of the tasks within a component has already proven to be a powerful tool for making the implementation of a function more predictable. For example, co-operative scheduling can accelerate the execution of a task (making it easier to satisfy timing constraints), it can reduce the cost of arbitrary preemptions (leading to more realistic execution-time estimates) and it can guarantee access to other resources without the need for arbitration by other protocols. Since timeliness is an important functional requirement, (re-)use of a component for composition and integration on a platform must deal with timing.
To enable us to analyze and specify the timing requirements of a particular component in isolation from other components, we reserve and enforce the availability of all its specified resources during run-time. The real-time systems community has proposed hierarchical scheduling frameworks (HSFs) to implement this isolation between components. After being admitted to a shared platform, a component in an HSF keeps meeting its timing constraints as long as it behaves as specified. If it violates its specification, it may be penalized, but other components are temporally isolated from the harmful effects. A component in an HSF is said to execute on a virtual platform with a dedicated processor at a speed proportional to its reserved processor supply. Three effects disturb this view. Firstly, processor time is supplied discontinuously. Secondly, the actual processor is faster. Thirdly, the HSF no longer guarantees the isolation of an individual component when two arbitrary components violate their specification during access to non-preemptive resources, even when access is arbitrated via well-defined real-time protocols. The scientific contributions of this work focus on these three issues. Our solutions cover the system design from component requirements to run-time allocation. Firstly, we present a novel scheduling method that enables us to integrate a component into an HSF. It guarantees that each integrated component executes its tasks in exactly the same order regardless of a continuous or a discontinuous supply of processor time. Using our method, the component executes on a virtual platform and merely experiences a processor speed that differs from the actual processor speed. As a result, we can focus on the traditional scheduling problem of meeting the deadline constraints of tasks on a uni-processor platform.
For such platforms, we show how scheduling tasks co-operatively within a component helps to meet the deadlines of that component. We compare the strength of these co-operative scheduling techniques to theoretically optimal schedulers. Secondly, we standardize the way the resource requirements of a component are computed, even in the presence of non-preemptive resources. We can therefore apply the same timing analysis to the components in an HSF as to the tasks inside them, regardless of their scheduler or the protocol used for non-preemptive resources. This increases the re-usability of the timing analysis of components. We also make non-preemptive resources transparent during the development cycle of a component, i.e., the developer of a component can be unaware of the actual protocol being used in an HSF. Components can therefore be unaware that access to non-preemptive resources requires arbitration. Finally, we complement the existing real-time protocols for arbitrating access to non-preemptive resources with mechanisms that confine temporal faults to those components in the HSF that share the same non-preemptive resources. We compare the overheads of sharing non-preemptive resources between components with and without mechanisms for the confinement of temporal faults. We do this by means of experiments within an HSF-enabled real-time operating system.
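The "virtual platform" view can be made concrete with the standard periodic resource model: a component reserved a budget Θ every period Π is guaranteed, in the worst case, at least a linear lower bound on supply in any window of length t, reflecting both the slower rate Θ/Π and the start-up latency caused by discontinuous supply. The formula below is the well-known linear supply bound function; the parameter values are an arbitrary example, not taken from this work.

```python
def lsbf(theta, pi, t):
    """Linear supply bound function of a periodic resource (budget theta
    every period pi): guaranteed supply in any window of length t, i.e.
    rate theta/pi after a worst-case startup delay of 2*(pi - theta)."""
    return max(0.0, (theta / pi) * (t - 2.0 * (pi - theta)))

# Example: a component reserved 2 ms every 5 ms behaves like a 0.4-speed
# processor that may additionally stay silent for up to 6 ms.
supply_at_6ms = lsbf(2.0, 5.0, 6.0)    # still inside the worst-case blackout
supply_at_11ms = lsbf(2.0, 5.0, 11.0)  # 0.4 * (11 - 6) = 2 ms guaranteed
```

This captures the first two disturbing effects named above: the supply is both slower and discontinuous compared to the physical processor.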

    Approximate Algorithms for the Combined Arrival-Departure Aircraft Sequencing and Reactive Scheduling Problems on Multiple Runways

    Get PDF
    The problem addressed in this dissertation is the Aircraft Sequencing Problem (ASP), in which a schedule must be developed that determines the assignment of each aircraft to a runway, the appropriate sequence of aircraft on each runway, and their departure or landing times. The dissertation examines the ASP over multiple runways, under mixed-mode operations, with the objective of minimizing the total weighted tardiness of aircraft landings and departures simultaneously. To prevent the dangers associated with wake-vortex effects, the separation times enforced by aviation administrations (e.g., the FAA) are considered, adding another level of complexity given that such times are sequence-dependent. Because the problem is NP-hard, it is computationally difficult to solve large-scale instances in a reasonable amount of time. Therefore, three greedy algorithms are proposed: the Adapted Apparent Tardiness Cost with Separation and Ready Times (AATCSR), the Earliest Ready Time (ERT) and the Fast Priority Index (FPI). Moreover, metaheuristics, including Simulated Annealing (SA) and the Metaheuristic for Randomized Priority Search (Meta-RaPS), are introduced to improve the solutions initially constructed by the proposed greedy algorithms. The performance (solution quality and computational time) of the various algorithms is compared to the optimal solutions and to one another. The dissertation also addresses the Aircraft Reactive Scheduling Problem (ARSP), as air traffic systems frequently encounter disruptions due to unexpected events such as inclement weather, aircraft failures or personnel shortages, rendering the initial plan suboptimal or even obsolete in some cases. This research considers disruptions including the arrival of new aircraft, flight cancellations and aircraft delays. The ARSP is formulated as a multi-objective optimization problem in which both the schedule's quality and its stability are of interest.
The objectives consist of the total weighted start times (solution quality), the total weighted start-time deviation, and the total weighted runway deviation (instability measures). Repair and complete-regeneration approximate algorithms are developed for each type of disruptive event. The algorithms are tested against difficult benchmark problems, and the solutions are compared to optimal solutions in terms of solution quality, schedule stability and computational time.
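The flavor of the greedy constructors (ERT, FPI, AATCSR) can be conveyed with a single-runway toy: repeatedly pick the remaining aircraft with the best priority index and schedule it no earlier than its ready time plus the sequence-dependent separation from the previously sequenced aircraft. The data, separation matrix and earliest-due-date priority below are invented and much simpler than the dissertation's algorithms.

```python
# aircraft: (ready time, due time, tardiness weight, wake class)
# wake classes: 0 = Heavy, 1 = Small (hypothetical two-class example)
AIRCRAFT = [(0, 5, 2, 0), (1, 4, 3, 1), (2, 9, 1, 0)]
SEP = [[2, 4],            # SEP[prev_class][next_class]: required separation
       [1, 2]]

def greedy_schedule(aircraft, sep):
    """Greedy single-runway sequencing: earliest due date first, ties broken
    by earliest feasible start; returns (sequence, total weighted tardiness)."""
    remaining = list(range(len(aircraft)))
    t, prev_cls, order, tardiness = 0, None, [], 0
    while remaining:
        def start(i):
            ready = aircraft[i][0]
            gap = 0 if prev_cls is None else sep[prev_cls][aircraft[i][3]]
            return max(ready, t + gap)
        i = min(remaining, key=lambda i: (aircraft[i][1], start(i)))
        t = start(i)
        prev_cls = aircraft[i][3]
        order.append(i)
        tardiness += aircraft[i][2] * max(0, t - aircraft[i][1])
        remaining.remove(i)
    return order, tardiness
```

On this tiny instance the rule sequences the tightest-deadline aircraft first and, thanks to the small separations, incurs no tardiness at all; the dissertation's indices additionally weigh tardiness costs and ready times.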