
    New Fault Detection, Mitigation and Injection Strategies for Current and Forthcoming Challenges of HW Embedded Designs

    Thesis by compendium. [EN] The relevance of electronics to the safety of everyday devices has only grown, as an ever larger share of product functionality is assigned to electronic components. This, of course, comes along with a constant need for higher performance to fulfill such functionality requirements while keeping power consumption and cost low. In this scenario, industry is struggling to provide a technology that meets all the performance, power, and price specifications, at the cost of an increased vulnerability to several types of known faults or the appearance of new ones. To provide a solution for the new and growing faults in these systems, designers have been using traditional techniques from safety-critical applications, which in general offer suboptimal results. In fact, modern embedded architectures offer the possibility of optimizing dependability properties by enabling hardware, firmware, and software levels to interact in the process; however, this potential has not yet been realized. Advances at every level in that direction are much needed if flexible, robust, resilient, and cost-effective fault tolerance is desired. The work presented here focuses on the hardware level, with the background consideration of a potential integration into a holistic approach. The efforts in this thesis have focused on several issues: (i) introducing additional fault models, as required to adequately represent the physical effects emerging in modern manufacturing technologies; (ii) providing tools and methods to efficiently inject both the proposed fault models and the classical ones; (iii) analyzing the optimal method for assessing system robustness through extensive fault injection and subsequent correlation with higher-level layers, in an effort to cut development time and cost; (iv) providing new detection methodologies to cope with the challenges captured by the proposed fault models; (v) proposing mitigation strategies aimed at tackling such new threat scenarios; and (vi) devising an automated methodology for deploying many fault tolerance mechanisms in a systematic, robust way. The outcomes of the thesis constitute a suite of tools and methods that help the designer of critical systems develop robust, validated, on-time designs tailored to the application.
    Espinosa García, J. (2016). New Fault Detection, Mitigation and Injection Strategies for Current and Forthcoming Challenges of HW Embedded Designs [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/73146
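
    The classical fault models such work injects (e.g., single-event upsets that flip one bit of stored state) are easy to illustrate. The following minimal sketch, a generic illustration and not the thesis's tooling, injects random single-bit flips into a simulated 32-bit register and measures how often an even-parity check detects them; the register width, campaign size, and parity scheme are all illustrative assumptions.

```python
import random

WIDTH = 32  # illustrative register width

def parity(word: int) -> int:
    """Even-parity bit over a WIDTH-bit word."""
    return bin(word).count("1") % 2

def inject_bit_flip(word: int, bit: int) -> int:
    """Classical single-event-upset fault model: flip one bit."""
    return word ^ (1 << bit)

def campaign(n_experiments: int = 10_000) -> float:
    """Run a toy fault-injection campaign; return detection coverage."""
    detected = 0
    for _ in range(n_experiments):
        golden = random.getrandbits(WIDTH)        # fault-free reference value
        stored_parity = parity(golden)            # checker state saved with the data
        faulty = inject_bit_flip(golden, random.randrange(WIDTH))
        if parity(faulty) != stored_parity:       # any single flip changes parity
            detected += 1
    return detected / n_experiments

if __name__ == "__main__":
    print(f"detection coverage: {campaign():.3f}")  # 1.000 for single-bit flips
```

    Parity catches every single flip but misses any even number of flips, which hints at why richer fault models and detection mechanisms are needed for modern manufacturing technologies.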

    Accuracy, Efficiency, and Parallelism in Network Target Coordination Optimization

    The optimal design of complex engineering systems requires knowledge in various domains. It is thus often split into smaller parts assigned to different design teams with specialized backgrounds. Decomposition-based optimization is a multidisciplinary design optimization (MDO) technique that models and improves this process by partitioning the whole design optimization task into many manageable sub-problems. These sub-problems can be treated separately, and a coordination strategy is employed to coordinate their couplings and drive their individual solutions to a consistent overall optimum. Many methods have been proposed in the literature, applying mathematical theories from nonlinear programming to decomposition-based optimization and testing them on engineering problems. These methods include Analytical Target Cascading (ATC), using quadratic penalty methods, and Augmented Lagrangian Coordination (ALC), using augmented Lagrangian relaxation. The decomposition structure has also been expanded from the special hierarchical structure to the general network structure. However, accuracy, efficiency, and parallelism remain the focus of decomposition-based optimization research when dealing with complex problems, and more work is needed both to improve existing methods and to develop new ones. In this research, a hybrid network partition, in which additional sub-problems can be either disciplines added to a component network or components added to a discipline network, is proposed, and two hybrid test problems are formulated. The newly developed consensus optimization method is applied to these test problems and shows good performance. For the ALC method, when the problem partition is given, various alternative structures are analyzed and compared through numerical tests. A new theory of the dual residual, based on the Karush-Kuhn-Tucker (KKT) conditions, is developed, which leads to a new flexible weight update strategy for both centralized and distributed ALC. Numerical tests show that optimization accuracy is greatly improved by considering the dual residual in the iteration process. Furthermore, ALC with the new update converges to a good solution from various initial weights, while the traditional update fails to guide the optimization to a reasonable solution when the initial weight is outside a narrow range. Finally, a new coordination method is developed by utilizing both the ordinary Lagrangian duality theorem and the alternating direction method of multipliers (ADMM). Unlike the methods in the literature, which employ duality theorems just once, the proposed method uses them twice, and the resulting algorithm can optimize all sub-problems in parallel while requiring the fewest copies of the linking variables. Numerical tests show that the new method consistently reaches more accurate solutions and consumes fewer computational resources than another popular parallel method, the centralized ALC.
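
    As a concrete illustration of the coordination idea (a generic consensus ADMM toy, not the dissertation's algorithms), the sketch below coordinates two sub-problems that share one linking variable: each sub-problem is solved separately, a coordination step averages the local copies, and a dual update drives them to a consistent optimum. The quadratic objectives and penalty weight are illustrative assumptions.

```python
# Consensus ADMM on min (x - 1)^2 + (x - 3)^2, split into two sub-problems
# that each hold a local copy of the shared (linking) design variable x.
rho = 1.0                      # penalty weight (cf. the weight updates in ALC)
targets = [1.0, 3.0]           # each sub-problem pulls x toward its own target
x = [0.0, 0.0]                 # local copies of the linking variable
u = [0.0, 0.0]                 # scaled dual variables (multipliers)
z = 0.0                        # coordinated consensus value

for it in range(100):
    # Sub-problem solves (parallelizable): argmin (x - a)^2 + (rho/2)(x - z + u)^2
    x = [(2 * a + rho * (z - ui)) / (2 + rho) for a, ui in zip(targets, u)]
    z = sum(xi + ui for xi, ui in zip(x, u)) / len(x)   # coordination step
    u = [ui + xi - z for ui, xi in zip(u, x)]           # dual update
    if max(abs(xi - z) for xi in x) < 1e-8:             # consistency gap
        break

print(f"consensus value z = {z:.4f} after {it + 1} iterations")  # z -> 2.0000
```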

    Data-Driven Methods and Applications for Optimization under Uncertainty and Rare-Event Simulation

    For most decisions or system designs in practice, there is a chance of severe hazards or system failures that can be catastrophic. The occurrence of such hazards is usually uncertain, so it is important to measure and analyze the associated risks. As a powerful tool for estimating risks, rare-event simulation techniques are used to improve the efficiency of the estimation when the risk occurs with an extremely small probability. Furthermore, one can utilize the risk measurements to achieve better decisions or designs by modeling the task as a chance-constrained optimization problem, which optimizes an objective subject to a controlled risk level. However, recent practical problems have become more data-driven and hence bring new challenges to the existing literature in these two domains. In this dissertation, we discuss challenges and remedies in data-driven problems for rare-event simulation and chance-constrained optimization. We propose a robust-optimization-based framework for approaching chance-constrained optimization problems in a data-driven setting. We also analyze the impact of tail uncertainty in data-driven rare-event simulation tasks. Separately, recent breakthroughs in machine learning have spurred the development of intelligent physical systems such as autonomous vehicles. Since failures of these systems can be catastrophic for public safety, evaluating their machine learning components and overall system performance is crucial. This dissertation covers problems arising in the evaluation of such systems: we propose an importance sampling scheme for estimating rare events defined by machine learning predictors, and we discuss an application project evaluating the safety of autonomous vehicle driving algorithms. PhD dissertation, Industrial & Operations Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/163270/1/zhyhuang_1.pd
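
    To make the rare-event estimation idea concrete (a standard textbook construction, not the dissertation's scheme), the sketch below estimates the small Gaussian tail probability P(X > 4) by importance sampling with exponential tilting: samples are drawn from a proposal centered on the rare region and re-weighted by the likelihood ratio. The threshold and sample size are illustrative assumptions.

```python
import math
import random

t = 4.0            # rare-event threshold: estimate p = P(X > t) for X ~ N(0, 1)
n = 100_000

# Crude Monte Carlo: almost no samples land in the tail, so the estimate is noisy.
crude = sum(random.gauss(0.0, 1.0) > t for _ in range(n)) / n

# Importance sampling: draw from N(t, 1) and re-weight hits by the likelihood
# ratio phi(x) / phi(x - t) = exp(-t * x + t^2 / 2).
acc = 0.0
for _ in range(n):
    x = random.gauss(t, 1.0)                   # proposal centered on the tail
    if x > t:
        acc += math.exp(-t * x + 0.5 * t * t)  # N(0,1) density over N(t,1) density
is_estimate = acc / n

exact = 0.5 * math.erfc(t / math.sqrt(2))      # exact tail probability, ~3.2e-5
print(f"crude MC: {crude:.2e}   IS: {is_estimate:.2e}   exact: {exact:.2e}")
```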

    Self-resilient production systems : framework for design synthesis of multi-station assembly systems

    Product design changes are inevitable in the current trend of time-based competition, where product models such as automotive bodies and aircraft fuselages are frequently upgraded, causing assembly process design changes. In recent years, several studies in engineering change management and reconfigurable systems have addressed the challenges of frequent product and process design changes. However, the results of these studies are limited in their applications due to shortcomings in three aspects: (i) they rely heavily on past records, which may comprise only a few relevant cases and be insufficient for a reliable analysis; (ii) they focus mainly on managing design changes in the product architecture rather than in both the product and process architectures; and (iii) they consider design changes at the station level rather than the multi-station level. To address these challenges, this thesis proposes three interrelated research areas to simulate design adjustments of an existing process architecture. These research areas involve: (i) methodologies to model the existing process architecture design, so that the developed models can serve as assembly response functions for assessing Key Performance Indices (KPIs); (ii) KPIs to assess the quality, cost, and design complexity of the existing process architecture design, used when deciding whether to change that design; and (iii) a methodology to change the process architecture design to new optimal design solutions at the multi-station level. In the first research area, the methodology for modeling the functional dependence of process variables within the process architecture design is presented, together with the relations between process variables and the product architecture design. To understand the engineering change propagation chain among process variables, a functional dependence model is introduced that represents design dependency by cascading relationships from customer requirements, product architecture, and process architecture down to the design tasks that optimise process variable design. This model is used to estimate the level of process variable design change propagation in the existing process architecture design. Next, process yield, cost, and complexity indices are introduced as KPIs to measure product quality, the cost of changing the current process design, and the dependency of process variables (i.e., change propagation), respectively. The process yield and complexity indices are obtained using the Stream-of-Variation (SOVA) model and the functional dependence model, respectively. The costing KPI is obtained by determining the cost of optimizing the tolerances of process variables; its implication for the overall cost of changing the process architecture design is also discussed. These three indices are used to support decision-making when redesigning the existing process architecture. Finally, a framework driven by functional optimisation is proposed to adjust the existing process architecture to meet the engineering change requirements. The framework provides a platform to integrate and analyze the individual design synthesis tasks necessary to optimise multi-stage assembly processes, such as the tolerances of process variables, fixture layouts, or part-to-part joints.
The developed framework is based on the transversal of a hypergraph and a task connectivity matrix, which together lead to the optimal sequence of these design tasks. To enhance visibility of the dependencies and hierarchy of design tasks, a Design Structure Matrix and a Task Flow Chain are also adopted. Three scenarios of engineering changes in industrial automotive design illustrate the application of the proposed redesign methodology. The thesis concludes that it is not necessary to optimise all functional designs of process variables to accommodate the engineering changes: selecting only the relevant functional designs is sufficient, provided the design optimisation of the process variables is conducted at the system level, with consideration of the dependencies between the selected functional designs.
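
    The change-propagation reasoning behind the functional dependence model can be pictured with a small Design Structure Matrix: reading the DSM as the adjacency matrix of a dependency graph, the process variables potentially affected by a design change form its reachable set. The matrix and variable names below are assumed for illustration and are not taken from the thesis.

```python
# Toy Design Structure Matrix: dsm[i][j] = 1 means a change in variable j
# propagates to variable i (assumed dependencies among five process variables).
dsm = [
    [0, 1, 0, 0, 0],   # v0 depends on v1
    [0, 0, 0, 0, 0],   # v1 is independent
    [1, 0, 0, 0, 0],   # v2 depends on v0
    [0, 0, 1, 0, 0],   # v3 depends on v2
    [0, 0, 0, 0, 0],   # v4 is independent
]

def affected_by(changed: int) -> set[int]:
    """Variables reachable from a changed variable (its change-propagation chain)."""
    frontier, seen = [changed], {changed}
    while frontier:
        j = frontier.pop()
        for i, row in enumerate(dsm):
            if row[j] and i not in seen:   # a change in v_j propagates to v_i
                seen.add(i)
                frontier.append(i)
    return seen - {changed}

print(affected_by(1))   # changing v1 propagates to {0, 2, 3}; v4 is untouched
```

    Only the reachable variables need their functional designs re-optimised, which mirrors the thesis's conclusion that optimising every process variable is unnecessary.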

    An overview of decision table literature 1982-1995.

    This report gives an overview of the literature on decision tables over the past 15 years. As much as possible, for each reference an author-supplied abstract, a number of keywords, and a classification are provided. In some cases our own comments are added; their purpose is to show where, how and why decision tables are used. The literature is classified according to application area, theoretical versus practical character, year of publication, country of origin (not necessarily the country of publication), and the language of the document. After a description of the scope of the review, the classification results and the classification by topic are presented. The main body of the paper is the ordered list of publications with abstracts, classifications and comments.
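
    For readers unfamiliar with the technique the survey covers, a decision table pairs combinations of condition entries with actions. The minimal sketch below (an illustrative example, not drawn from any surveyed paper) encodes one as a first-match rule list with don't-care entries and evaluates it.

```python
# A decision table as a list of rules: (condition entries, action).
# None plays the role of the '-' (don't care) entry in classic tables.
# Condition order: order_amount_over_100, customer_is_member
rules = [
    ((True,  True),  "free shipping + discount"),
    ((True,  False), "free shipping"),
    ((False, True),  "discount"),
    ((False, None),  "standard rate"),
]

def decide(*conditions):
    """Return the action of the first rule whose entries all match."""
    for entries, action in rules:
        if all(e is None or e == c for e, c in zip(entries, conditions)):
            return action
    raise ValueError("incomplete decision table: no rule matches")

print(decide(True, False))    # -> "free shipping"
print(decide(False, False))   # -> "standard rate"
```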

    Adaptive Gradient Assisted Robust Optimization with Applications to LNG Plant Enhancement

    About 8% of the natural gas feed to a Liquefied Natural Gas (LNG) plant is consumed for liquefaction. A significant challenge in optimizing engineering systems, including LNG plants, is uncertainty. For example, each natural gas field has a different gas composition, which imposes an important uncertainty on LNG plant design. One class of optimization techniques that can handle uncertainty is robust optimization. A robust optimum is one that is both optimal and relatively insensitive to the uncertainty. For instance, a mobile LNG plant should be energy efficient and its performance should be insensitive to the natural gas composition. In this dissertation, to enhance the energy efficiency of LNG plants, several new options are first investigated. These options involve both liquefaction cycle enhancements and driver cycle (i.e., power plant) enhancements. Two new liquefaction cycle enhancement options are proposed and studied. For enhancing driver cycle performance, ten novel driver cycle configurations for propane pre-cooled mixed refrigerant cycles are proposed, explored, and compared with five conventional driver cycle options. Also, two novel robust optimization techniques applicable to black-box engineering problems are developed. The first, gradient assisted robust optimization (GARO), has a built-in numerical verification scheme. The other, quasi-concave gradient assisted robust optimization (QC-GARO), has a built-in robustness verification tailored to problems whose functions are quasi-concave with respect to the uncertain variables. The performance of the GARO and QC-GARO methods is evaluated on seventeen numerical and engineering test problems, comparing their results against three previous methods from the literature. Based on the results, GARO was the only method that could solve all test problems, though with a higher computational effort than QC-GARO. QC-GARO's computational cost was of the same order of magnitude as the fastest previous method from the literature, though it could not solve all the test problems. Lastly, the GARO robust optimization method is used to devise a refrigerant for LNG plants that is relatively insensitive to the uncertainty in the natural gas mixture composition.
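
    GARO and QC-GARO themselves are not spelled out in the abstract, but the underlying notion of a robust optimum, a design whose worst-case performance over the uncertainty set is best, can be sketched with a generic scenario-based min-max search. The objective, the uncertainty range standing in for gas composition, and the grids are all illustrative assumptions.

```python
import numpy as np

# Toy robust design: choose x to minimize the worst case of f(x, u) over an
# uncertain parameter u (a stand-in for the natural gas composition).
def f(x, u):
    return (x - 2.0) ** 2 + u * x      # illustrative performance model

xs = np.linspace(0.0, 4.0, 401)        # candidate designs
us = np.linspace(-0.5, 0.5, 101)       # sampled uncertainty scenarios

F = f(xs[:, None], us[None, :])        # evaluate every (design, scenario) pair
worst = F.max(axis=1)                  # inner maximization: worst case per design
nominal = f(xs, 0.0)                   # nominal objective, for comparison

x_robust = xs[worst.argmin()]          # outer minimization: robust optimum
x_nominal = xs[nominal.argmin()]
print(f"nominal optimum x = {x_nominal:.2f}, robust optimum x = {x_robust:.2f}")
```

    The robust optimum backs off the nominal one (1.75 versus 2.00 here) in exchange for insensitivity to u; gradient-assisted methods such as GARO target this kind of solution with far fewer function evaluations than exhaustive scenario enumeration.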