
    Risk-based reliability allocation at component level in non-repairable systems by using evolutionary algorithm

    The approach for setting system reliability in the risk-based reliability allocation (RBRA) method is driven solely by the amount of 'total losses' (the sum of reliability investment and risk of failure) associated with a non-repairable system failure. For a system consisting of many components, reliability allocation by the RBRA method becomes a very complex combinatorial optimisation problem, particularly if a large number of alternatives, with different levels of reliability and associated cost, is considered for each component. The complexity of this problem is magnified further when the relationship between cost and reliability is assumed to be nonlinear and non-monotone. An optimisation algorithm (OA) is therefore developed in this research to solve such difficult problems. The core design of the OA originates from the fundamental concepts of basic evolutionary algorithms, which are well known for emulating the natural process of evolution in solving complex optimisation problems through computer simulation of the key genetic operations of 'reproduction', 'crossover' and 'mutation'. However, the OA employs a significantly different model of evolution (for identifying valuable parent solutions and subsequently turning them into even better child solutions) compared to the classical genetic model, ensuring rapid and efficient convergence of the search towards an optimum solution. The vital features of this model are the generation of all populations (samples) with unique chromosomes (solutions), working exclusively with the elite chromosomes in each iteration, and the application of prudently designed genetic operators on the elite chromosomes with extra emphasis on the mutation operation. For each possible combination of alternatives, both system reliability and cost of failure are computed by means of the Monte Carlo simulation technique.
For validation purposes, the optimisation algorithm is first applied to an already published reliability optimisation problem with a constraint on a target level of system reliability to be achieved at minimum system cost. After successful validation, the viability of the OA is demonstrated by applying it to optimise four different non-repairable sample systems under the risk-based reliability allocation method. Each system is assumed to have a discrete set of component alternatives, exhibiting a monotonically increasing cost-reliability relationship, and a fixed cost of failure. While this optimisation process is the main objective of the study, two variations are also introduced for the purpose of parametric studies. To study the effect of changes in reliability investment on system reliability and total loss, the first variation uses a different discrete data set exhibiting a non-monotonically increasing relationship between cost and reliability among the alternatives. To study the effect of the risk of failure, the second variation uses a different cost of failure associated with a given non-repairable system failure. The optimisation processes reveal interesting relationships between system reliability and total loss. For instance, while maximum reliability is generally associated with high total loss and low risk of failure, the minimum observed value of the total loss is not always associated with minimum system reliability. The results therefore exhibit various levels of system reliability and total loss, with both values showing strong sensitivity to the selected combination of component alternatives.
The first parametric study shows that the second (non-monotone) data set creates more opportunities for the optimisation process to produce better values of the loss function, since cheaper components with higher reliabilities can be selected with higher probability. The second parametric study shows that reducing the cost of failure reduces the risk of failure, which in turn increases the chances of using cheaper components with lower levels of reliability, hence producing lower values of the loss function. The study concludes that the risk-based reliability allocation method, together with the optimisation algorithm, can be used as a powerful tool for highlighting the various levels of system reliability and associated total losses for any given system. This notion can be further extended to selecting the optimal system configuration from various competing topologies. With such information to hand, reliability engineers can streamline complicated system designs in view of the required level of system reliability with the minimum associated total cost of premature failure. In all cases studied, the run time of the optimisation algorithm increases linearly with the complexity of the problem, and due to its unique model of evolution it conducts a very detailed multi-directional search across the solution space in fewer generations, a very important attribute for solving the kind of problem studied in this research. Consequently, it converges rapidly towards the optimum solution, unlike the classical genetic algorithm, which reaches the optimum only gradually, when successful. The research also identifies key areas for future development, with scope to expand in various other directions owing to its interdisciplinary applications.
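    To make the elitist, mutation-heavy scheme described above concrete, the following minimal Python sketch allocates one alternative per component of a hypothetical three-component series system so as to minimise the total loss (reliability investment plus risk of failure). This is an illustration, not the thesis's own code: the component data and cost-of-failure figure are invented, an exact series-reliability product stands in for the Monte Carlo estimation used in the research, and only mutation (plus occasional random immigrants, which keep each population's chromosomes unique) is used for variation.

```python
import random

# Hypothetical component data: for each component, a list of
# (cost, reliability) alternatives. Numbers are illustrative only.
ALTERNATIVES = [
    [(1.0, 0.90), (2.0, 0.95), (4.0, 0.99)],
    [(1.5, 0.85), (3.0, 0.93), (5.0, 0.98)],
    [(0.5, 0.80), (1.0, 0.90), (2.5, 0.97)],
]
COST_OF_FAILURE = 50.0  # assumed fixed loss on system failure

def total_loss(chrom):
    """Reliability investment plus risk of failure for a series system."""
    cost = sum(ALTERNATIVES[i][g][0] for i, g in enumerate(chrom))
    rel = 1.0
    for i, g in enumerate(chrom):
        rel *= ALTERNATIVES[i][g][1]
    return cost + COST_OF_FAILURE * (1.0 - rel)

def evolve(generations=100, pop_size=20, elite=5, seed=1):
    rng = random.Random(seed)
    n = len(ALTERNATIVES)
    def rand_chrom():
        return tuple(rng.randrange(len(ALTERNATIVES[i])) for i in range(n))
    # every population holds only unique chromosomes (a set of tuples)
    pop = set()
    while len(pop) < pop_size:
        pop.add(rand_chrom())
    for _ in range(generations):
        # work exclusively with the elite chromosomes of this iteration
        elites = sorted(pop, key=total_loss)[:elite]
        children = set(elites)  # elitism: the best solutions survive
        while len(children) < pop_size:
            if rng.random() < 0.2:
                children.add(rand_chrom())  # random immigrant
            else:
                parent = list(rng.choice(elites))
                i = rng.randrange(n)        # mutation-heavy variation
                parent[i] = rng.randrange(len(ALTERNATIVES[i]))
                children.add(tuple(parent))
        pop = children
    return min(pop, key=total_loss)

best = evolve()
```

With only 27 possible combinations here the search is trivially small, but the same loop applies unchanged when each component has many alternatives and exhaustive enumeration is infeasible.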

    Birnbaum Importance Patterns and Their Applications in the Component Assignment Problem

    The Birnbaum importance (BI) is a well-known measure that evaluates the relative contribution of components to system reliability, and it has been successfully applied to a range of reliability problems. This dissertation investigates two topics related to the BI: the patterns of component BIs, and BI-based heuristics and meta-heuristics for solving the component assignment problem (CAP). Certain patterns of component BIs (i.e., the relative order of the BI values of individual components) exist for linear consecutive-k-out-of-n (Lin/Con/k/n) systems when all components have the same reliability p. This study summarizes and annotates the existing BI patterns for Lin/Con/k/n systems, proves new BI patterns conditioned on the value of p, disproves some patterns that were conjectured or claimed in the literature, and makes new conjectures based on comprehensive computational tests and analysis. More importantly, this study defines the concept of a segment in Lin/Con/k/n systems for analyzing BI patterns, and investigates the relationship between the BI and the common component reliability p, and between the BI and the system size n. These relationships can then be used to further understand the proved, disproved, and conjectured BI patterns. The CAP is to find the optimal assignment of n available components to n positions in a system such that the system reliability is maximized. The ordering of component BIs has been successfully used to design heuristics for the CAP. This study proposes five new BI-based heuristics and discusses their properties. Based on comprehensive numerical experiments, a BI-based two-stage approach (BITA) is proposed for solving the CAP, with each stage using different BI-based heuristics. The two-stage approach is much more efficient and capable of generating solutions of higher quality than the GAMS/CoinBonmin solver and a randomization method. The dissertation then presents a meta-heuristic, a BI-based genetic local search (BIGLS) algorithm for the CAP, in which a BI-based local search is embedded into a genetic algorithm. Comprehensive numerical experiments show the robustness and effectiveness of the BIGLS algorithm, and in particular its advantages over the BITA in terms of solution quality.
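    The BI of component i is the difference between system reliability with component i forced to work and with it forced to fail. As a sketch (not the dissertation's own code), the following Python computes the BIs of a small Lin/Con/2/4 system by exhaustive state enumeration; with a common component reliability p it reproduces the familiar pattern that interior positions are more important than the end positions.

```python
from itertools import product

def works(state, k):
    """Lin/Con/k/n: the system works unless k consecutive components fail."""
    run = 0
    for x in state:
        run = run + 1 if x == 0 else 0
        if run >= k:
            return False
    return True

def system_reliability(p, k):
    """Exact reliability by enumerating all 2^n component states."""
    r = 0.0
    for state in product((0, 1), repeat=len(p)):
        prob = 1.0
        for xi, pi in zip(state, p):
            prob *= pi if xi else (1.0 - pi)
        if works(state, k):
            r += prob
    return r

def birnbaum(p, k, i):
    """BI_i = R(system | component i works) - R(system | component i fails)."""
    hi = list(p); hi[i] = 1.0
    lo = list(p); lo[i] = 0.0
    return system_reliability(hi, k) - system_reliability(lo, k)

# Lin/Con/2/4 with common reliability p = 0.9: the middle positions
# matter more than the ends, and the BIs are mirror-symmetric.
p = [0.9] * 4
bi = [birnbaum(p, 2, i) for i in range(4)]
```

Enumeration is exponential in n, so it only serves to check patterns on small systems; the dissertation's interest is precisely in proving such orderings without enumeration.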

    ADVANCES IN SYSTEM RELIABILITY-BASED DESIGN AND PROGNOSTICS AND HEALTH MANAGEMENT (PHM) FOR SYSTEM RESILIENCE ANALYSIS AND DESIGN

    Failures of engineered systems can lead to significant economic and societal losses. Despite tremendous efforts (e.g., $200 billion annually) devoted to reliability and maintenance, unexpected catastrophic failures still occur. To minimize the losses, the reliability of engineered systems must be ensured throughout their life-cycle amidst uncertain operational conditions and manufacturing variability. In most engineered systems, the required system reliability level under adverse events is achieved by adding system redundancies and/or conducting system reliability-based design optimization (RBDO). However, a high level of system redundancy increases a system's life-cycle cost (LCC), and system RBDO cannot ensure system reliability when unexpected loading/environmental conditions are applied and unexpected system failures develop. In contrast, a new design paradigm, referred to as resilience-driven system design, can ensure highly reliable system designs under any loading/environmental conditions and system failures while considerably reducing systems' LCC. To facilitate the development of formal methodologies for this design paradigm, this research aims at advancing two essential and co-related research areas: Research Thrust 1, system RBDO, and Research Thrust 2, system prognostics and health management (PHM). In Research Thrust 1, reliability analyses under uncertainty will be carried out at both the component and system levels against critical failure mechanisms. In Research Thrust 2, highly accurate and robust PHM systems will be designed for engineered systems with single or multiple time-scales. To demonstrate the effectiveness of the proposed system RBDO and PHM techniques, multiple engineering case studies will be presented and discussed. Following the development of Research Thrusts 1 and 2, Research Thrust 3, resilience-driven system design, will establish a theoretical basis and design framework for engineering resilience in a mathematical and statistical context: engineering resilience will be formulated in terms of system reliability and restoration, and the proposed design framework will be demonstrated on a simplified aircraft control actuator design problem.
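    As a small, self-contained illustration of a component-level reliability analysis under uncertainty (the kind of computation Research Thrust 1 concerns), the sketch below estimates a failure probability by stress-strength interference with Monte Carlo sampling. The normal distributions and their parameters are assumptions chosen purely for illustration, not values from this research.

```python
import random

def failure_probability(n_samples=100_000, seed=0):
    """Stress-strength interference: estimate P(stress > strength)
    under assumed normal distributions (illustrative parameters:
    strength ~ N(100, 5), stress ~ N(80, 8))."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_samples):
        strength = rng.gauss(100.0, 5.0)
        stress = rng.gauss(80.0, 8.0)
        if stress > strength:
            failures += 1
    return failures / n_samples
```

For these particular parameters the margin strength - stress is N(20, sqrt(89)), so the estimate should settle near the analytical value of roughly 1.7%; sampling like this generalizes to failure mechanisms with no closed form.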

    Fail Over Strategy for Fault Tolerance in Cloud Computing Environment

    Cloud fault tolerance is an important issue in cloud computing platforms and applications. In the event of an unexpected system failure or malfunction, a robust fault-tolerant design may allow the cloud to continue functioning correctly, possibly at a reduced level, instead of failing completely. Various fault-tolerant techniques exist for building self-autonomous cloud systems that ensure the high availability of critical cloud services, application execution, and hardware performance. In comparison to current approaches, this paper proposes a more robust and reliable architecture using an optimal checkpointing strategy to ensure high system availability and reduced task service finish time. Using pass rates and virtualised mechanisms, the proposed Smart Failover Strategy (SFS) scheme uses components such as a cloud fault manager, cloud controller, cloud load balancer and a selection mechanism, providing fault tolerance via redundancy, optimized selection and checkpointing. In our approach, the cloud fault manager repairs faults generated before the task deadline is reached, blocking unrecoverable faulty nodes as well as their virtual nodes. The scheme is also able to remove temporary software faults from recoverable faulty nodes, making them available for future requests. We argue that the proposed SFS algorithm makes the system highly fault tolerant by considering forward and backward recovery using diverse software tools. Compared to existing approaches, preliminary experiments with the SFS algorithm indicate an increase in pass rates and a consequent decrease in failure rates, showing overall good performance in task allocation. We present these results using experimental validation tools, with comparison to other techniques, laying a foundation for a fully fault-tolerant IaaS cloud environment.
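    A minimal sketch of the backward-recovery idea behind checkpointing (an illustration of the general technique, not the paper's SFS implementation): work is periodically checkpointed, and a fault rolls the task back to the last checkpoint rather than restarting it from scratch. The failure probability, checkpoint interval and workload are invented parameters.

```python
import random

def run_with_checkpoints(work_units=100, ckpt_every=10,
                         fail_prob=0.01, seed=7):
    """Backward recovery: on a unit failure, roll back to the last
    checkpoint instead of restarting the whole task. Returns the
    total units executed, including redone work."""
    rng = random.Random(seed)
    done, last_ckpt, executed = 0, 0, 0
    while done < work_units:
        executed += 1
        if rng.random() < fail_prob:
            done = last_ckpt          # roll back (backward recovery)
            continue
        done += 1
        if done % ckpt_every == 0:
            last_ckpt = done          # persist a checkpoint
    return executed
```

Choosing `ckpt_every` trades checkpointing overhead against redone work on failure, which is exactly the tuning problem an optimal checkpointing strategy addresses.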

    Contribution to reliable control of dynamic systems

    This thesis presents some contributions to the field of Health-Aware Control (HAC) of dynamic systems. In the first part of the thesis, a review of the concepts and methodologies related to reliability versus degradation and fault-tolerant control versus health-aware control is presented. Firstly, in an attempt to unify concepts, an overview of HAC, degradation, and reliability modeling is given, including some of the most relevant theoretical and applied contributions. Reliability modeling is then formalized and exemplified using the structure function, Bayesian networks (BNs) and dynamic Bayesian networks (DBNs) as modeling tools in reliability analysis, and some Reliability Importance Measures (RIMs) are presented. In particular, this thesis develops BN models for overall system reliability analysis through the use of Bayesian inference techniques. Bayesian networks are powerful tools in system reliability assessment due to their flexibility in modeling the reliability structure of complex systems. For the implementation of the HAC scheme, this thesis presents and discusses the integration of actuator health information, by means of RIMs and degradation, into Model Predictive Control (MPC) and Linear Quadratic Regulator (LQR) algorithms. In the proposed strategies, the cost function parameters are tuned using RIMs. The methodology is able to avoid the occurrence of catastrophic and incipient faults by monitoring the overall system reliability. The proposed HAC strategies are applied to a Drinking Water Network (DWN) and a multirotor UAV system. Moreover, a third approach, which uses MPC and restricts the degradation of the system components, is applied to a twin-rotor system. Finally, this thesis presents and discusses two reliability interpretations, namely instantaneous and expected, which differ in how reliability is evaluated and how its evolution over time is considered.
This comparison is made within a HAC framework, studying the system reliability under both approaches.
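    One way to picture RIM-based tuning of a control cost function is to compute the Birnbaum importance of each actuator from the system's structure function and penalise the use of the most critical actuators more heavily. The sketch below does this for an assumed topology of two redundant actuators in series with a third; the topology, reliabilities and tuning rule are illustrative assumptions, not those of the thesis.

```python
def system_reliability(r):
    """Structure function of an assumed topology: actuators 0 and 1
    redundant (parallel), in series with actuator 2."""
    return (1 - (1 - r[0]) * (1 - r[1])) * r[2]

def birnbaum_rim(r, i):
    """Birnbaum RIM: dR_sys/dr_i, computed from the structure function
    as R(component i works) - R(component i fails)."""
    hi = list(r); hi[i] = 1.0
    lo = list(r); lo[i] = 0.0
    return system_reliability(hi) - system_reliability(lo)

# Actuator health (current reliabilities) and RIM-derived weights:
# the more a component matters to system reliability, the more its
# usage is penalised in the quadratic cost (one plausible tuning rule).
r = [0.95, 0.90, 0.99]
rims = [birnbaum_rim(r, i) for i in range(3)]
weights = [1.0 + rim for rim in rims]
```

Here the non-redundant actuator has by far the largest RIM, so a controller tuned this way would shift effort towards the redundant pair, which is the intuition behind embedding health information in the MPC/LQR cost.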


    Architecting Fail-Safe Supply Chains / Networks

    Get PDF
    Disruptions are large-scale stochastic events that rarely happen but have a major effect on a supply network's topology. Examples include air traffic being suspended due to weather or terrorism, labor union strikes, sanctions being imposed or lifted, and company mergers. Variations are small-scale stochastic events that happen frequently but have only a trivial effect on the efficiency of flow planning in supply networks. Examples include fluctuations in market demand (demand is always stochastic in competitive markets) and in the performance of production facilities (no production system is perfect in practice). A fail-safe supply network is one that mitigates the impact of variations and disruptions and provides an acceptable level of service. This is achieved by maintaining connectivity in its topology under disruptions (structurally fail-safe) and coordinating the flow through its facilities under variations (operationally fail-safe). In this talk, I will show that to have a structurally fail-safe supply network, its topology should be robust against disruptions, by positioning mitigation strategies, and resilient in executing those strategies. Considering “Flexibility” as a risk mitigation strategy, I answer the question “What are the best flexibility levels and flexibility speeds for facilities in structurally fail-safe supply networks?” I will also show that to have an operationally fail-safe supply network, its flow dynamics should be reliable against demand- and supply-side variations. In the presence of these variations, I answer the question “What is the most profitable flow dynamics throughout a supply network that is reliable against variations?” The method is verified using data from an engine maker.
Findings include: i) there is a tradeoff between robustness and resilience in profit-based supply networks; ii) this tradeoff is more stable in larger supply networks with higher product supply quantities; and iii) supply networks with higher reliability in their flow planning require more flexibility to be robust. Finally, I will touch upon possible extensions of the work to non-profit relief networks for disaster management.

    Measurement in marketing

    Get PDF
    We distinguish three senses of the concept of measurement (measurement as the selection of observable indicators of theoretical concepts, measurement as the collection of data from respondents, and measurement as the formulation of measurement models linking observable indicators to latent factors representing the theoretical concepts), and we review important issues related to measurement in each of these senses. With regard to measurement in the first sense, we distinguish the steps of construct definition and item generation, and we review scale development efforts reported in three major marketing journals since 2000 to illustrate these steps and derive practical guidelines. With regard to measurement in the second sense, we look at the survey process from the respondent's perspective and discuss the goals that may guide participants' behavior during a survey, the cognitive resources that respondents devote to answering survey questions, and the problems that may occur at the various steps of the survey process. Finally, with regard to measurement in the third sense, we cover both reflective and formative measurement models, and we explain how researchers can assess the quality of measurement in both types of measurement models and how they can ascertain the comparability of measurements across different populations of respondents or conditions of measurement. We also provide a detailed empirical example of measurement analysis for reflective measurement models.
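One standard quality check for a reflective measurement model of the kind discussed above is internal-consistency reliability. As a minimal sketch, the snippet below computes Cronbach's alpha from a small, entirely hypothetical matrix of item scores (rows are items, columns are respondents):

```python
import statistics


def cronbach_alpha(items):
    """Cronbach's alpha for a reflective scale.

    `items` is a list of per-item score lists, aligned across respondents."""
    k = len(items)
    item_var_sum = sum(statistics.variance(scores) for scores in items)
    scale_totals = [sum(resp) for resp in zip(*items)]  # per-respondent totals
    return (k / (k - 1)) * (1 - item_var_sum / statistics.variance(scale_totals))


# Hypothetical scores: three items measured on five respondents.
scores = [
    [1, 2, 3, 4, 5],   # item 1
    [2, 3, 4, 5, 6],   # item 2
    [1, 3, 3, 5, 5],   # item 3
]
alpha = cronbach_alpha(scores)
```

Alpha is only one of several indices the measurement literature uses for reflective models; it is not appropriate for formative models, where indicators are not assumed to be interchangeable.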