    Optimal Allocation of Resources in Reliability Growth

    Reliability growth testing seeks to identify and remove failure modes in order to improve system reliability. This dissertation centers on the allocation of resources across the components of a multi-component system to maximize system reliability. We summarize this dissertation's contributions to optimal resource allocation in reliability growth. Chapter 2 seeks to deploy limited testing resources across the components of a series-parallel system in an effort to maximize system reliability, under the assumption that each component's reliability grows according to an AMSAA model with known parameters. An optimization model for this problem is developed and then extended to consider the allocation of testing resources in a series-parallel system with the possibility of testing at different levels (system, subsystem, and component). We contribute a class of exact algorithms that decomposes the problem based upon the series-parallel structure. We prove the algorithm is finite, compare it with heuristic approaches on a set of test instances, and provide detailed analyses of numerical examples. In Chapter 3, we extend the model from Chapter 2 to a robust optimization version of the problem in which the AMSAA parameters are uncertain but assumed to lie within a budget-restricted uncertainty set. We model the robust allocation of testing resources to maximize system reliability for both series and series-parallel systems, and we develop and analyze exact solution approaches based on a cutting plane algorithm. Computational results demonstrate the value of the robust optimization approach compared to deterministic alternatives. In the last chapter, we develop a new model that merges component testing and redundancy installation within an integrated optimization model that maximizes system reliability. Specifically, our model considers a series-parallel system in which system reliability can be improved both by testing components and by installing redundant components. We contribute an exact algorithm that decomposes the problem into smaller integer linear programs. We prove that this algorithm is finite and apply it to a set of instances. Experiments demonstrate that the integrated approach achieves higher reliability than applying test planning and redundancy allocation models iteratively, and moreover yields significant savings in computational time.
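
    To make the growth-testing setting concrete, below is a minimal sketch (not the dissertation's optimization model) of how a test-time allocation maps to system reliability when each component's failure intensity follows an AMSAA/Crow curve. The parameter values, the mission length tau, and the function names are illustrative assumptions only.

```python
import math

# Hedged sketch: each component's failure intensity follows an AMSAA/Crow
# growth curve rho(t) = lam * beta * t**(beta - 1), and its mission reliability
# is approximated as exp(-rho(t) * tau) for a mission of length tau.

def component_reliability(test_time, lam, beta, tau=1.0):
    """Mission reliability of one component after `test_time` hours of growth testing."""
    rho = lam * beta * test_time ** (beta - 1.0)   # current failure intensity
    return math.exp(-rho * tau)

def series_parallel_reliability(allocation, params, tau=1.0):
    """allocation/params: list of subsystems, each a list of per-component entries."""
    r_sys = 1.0
    for sub_alloc, sub_params in zip(allocation, params):
        q_sub = 1.0  # probability that every redundant component in the subsystem fails
        for t, (lam, beta) in zip(sub_alloc, sub_params):
            q_sub *= 1.0 - component_reliability(t, lam, beta, tau)
        r_sys *= 1.0 - q_sub  # subsystem works if at least one component works
    return r_sys

# Two subsystems in series; the first has two redundant components.
params = [[(0.02, 0.6), (0.03, 0.7)], [(0.01, 0.5)]]
allocation = [[400.0, 200.0], [400.0]]           # hypothetical split of 1000 test hours
print(series_parallel_reliability(allocation, params))
```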

    CFA optimizer: A new and powerful algorithm inspired by Franklin's and Coulomb's laws theory for solving the economic load dispatch problems

    This paper presents a new efficient algorithm, inspired by Franklin's and Coulomb's laws, referred to as the CFA algorithm, for finding global solutions of optimal economic load dispatch problems in power systems. CFA is based on the effect that electrically charged particles have on each other through electrical attraction and repulsion forces. The effectiveness of the CFA is first tested on basic benchmark problems. The ability of the CFA to achieve accurate results is then examined and demonstrated on economic load dispatch problems of four different sizes: 6-, 10-, 15-, and 110-unit test systems. Finally, the results are compared with other nature-inspired algorithms as well as results reported in the literature. The simulation results provide evidence of the well-organized and efficient performance of the CFA algorithm in solving a great diversity of nonlinear optimization problems.
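
    As a rough illustration of the problem class, the sketch below sets up a small economic load dispatch objective with quadratic fuel costs and a soft power-balance constraint, and applies a generic Coulomb-style attraction step. It is not the paper's CFA update rule; the cost coefficients, demand, and limits are invented for illustration.

```python
import numpy as np

# Hedged sketch of the economic load dispatch (ELD) objective that CFA-style
# metaheuristics search; the charge-based move is a generic Coulomb-inspired
# step, not the published CFA update.

rng = np.random.default_rng(0)

a = np.array([100.0, 120.0, 90.0])      # fixed cost per unit ($)
b = np.array([2.0, 1.8, 2.2])           # linear cost ($/MW)
c = np.array([0.010, 0.012, 0.008])     # quadratic cost ($/MW^2)
p_min, p_max, demand = 50.0, 200.0, 420.0

def cost(p):
    fuel = np.sum(a + b * p + c * p ** 2)
    balance_penalty = 1e3 * abs(np.sum(p) - demand)  # soft power-balance constraint
    return fuel + balance_penalty

# Population of candidate dispatches ("charged particles").
pop = rng.uniform(p_min, p_max, size=(20, 3))
for _ in range(200):
    fitness = np.array([cost(p) for p in pop])
    best = pop[fitness.argmin()]
    # Coulomb-like attraction: better (lower-cost) particles exert a stronger pull.
    charge = (fitness.max() - fitness) / (fitness.max() - fitness.min() + 1e-12)
    step = charge[:, None] * (best - pop) * rng.random(pop.shape)
    pop = np.clip(pop + step, p_min, p_max)

print("best dispatch:", pop[np.argmin([cost(p) for p in pop])])
```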

    A hybrid Jaya algorithm for reliability–redundancy allocation problems

    This article proposes an efficient improved hybrid Jaya algorithm based on time-varying acceleration coefficients (TVACs) and the learning phase introduced in teaching–learning-based optimization (TLBO), named the LJaya-TVAC algorithm, for solving various types of nonlinear mixed-integer reliability–redundancy allocation problems (RRAPs) and standard real-parameter test functions. The RRAPs include series, series–parallel, complex (bridge), and overspeed protection systems. The search power of the proposed LJaya-TVAC algorithm is first tested on standard real-parameter unimodal and multi-modal functions with dimensions of 30–100, and then on various types of nonlinear mixed-integer RRAPs. The results are compared with the original Jaya algorithm and the best results reported in the recent literature. The optimal results obtained with the proposed LJaya-TVAC algorithm provide evidence of better optimization performance than the original Jaya algorithm and other reported results.
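
    For context, the sketch below shows the standard Jaya update on a simple sphere test function with an illustrative time-varying coefficient schedule. The paper's LJaya-TVAC additionally includes a TLBO-style learning phase and mixed-integer RRAP handling, neither of which is reproduced here; the schedule for c1 and c2 is an assumption.

```python
import numpy as np

# Hedged sketch of the basic Jaya update with illustrative time-varying
# acceleration coefficients, applied to the sphere test function.

rng = np.random.default_rng(1)
dim, pop_size, max_iter = 30, 25, 500

def sphere(x):
    return np.sum(x ** 2, axis=-1)

pop = rng.uniform(-100, 100, size=(pop_size, dim))
for it in range(max_iter):
    f = sphere(pop)
    best, worst = pop[f.argmin()], pop[f.argmax()]
    # Illustrative schedule: emphasis shifts from avoiding the worst solution
    # toward exploiting the best solution as iterations progress.
    c1 = 0.5 + 2.0 * it / max_iter          # pull toward the best solution
    c2 = 2.5 - 2.0 * it / max_iter          # push away from the worst solution
    r1, r2 = rng.random(pop.shape), rng.random(pop.shape)
    cand = pop + c1 * r1 * (best - np.abs(pop)) - c2 * r2 * (worst - np.abs(pop))
    cand = np.clip(cand, -100, 100)
    # Greedy acceptance, as in the original Jaya algorithm.
    improved = sphere(cand) < f
    pop[improved] = cand[improved]

print("best sphere value:", sphere(pop).min())
```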

    Robust Assignments via Ear Decompositions and Randomized Rounding

    Many real-life planning problems require making a priori decisions before all parameters of the problem have been revealed. An important special case of such problems arises in scheduling, where a set of tasks needs to be assigned to an available set of machines or personnel (resources) such that every task has an assigned resource and no two tasks share the same resource. In its nominal form, the resulting computational problem is the assignment problem on general bipartite graphs. This paper deals with a robust variant of the assignment problem that models situations in which certain edges of the corresponding graph are vulnerable and may become unavailable after a solution has been chosen. The goal is to choose a minimum-cost collection of edges such that if any vulnerable edge becomes unavailable, the remaining part of the solution still contains an assignment of all tasks. We present approximation results and hardness proofs for this type of problem, and establish several connections to well-known concepts from matching theory, robust optimization, and LP-based techniques. (Full version of an ICALP 2016 paper.)
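
    The toy sketch below illustrates the problem setting only, not the paper's ear-decomposition or randomized-rounding techniques: a candidate edge set is robust if deleting any single vulnerable edge still leaves a complete assignment of the tasks. The instance, the chosen edges, and the helper contains_assignment are hypothetical.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hedged sketch of robust assignment feasibility checking: a chosen edge set is
# robust if, after deleting any single vulnerable edge, the remaining edges
# still contain an assignment covering every task.

BIG = 10**6  # stands in for "edge not in the chosen set"

def contains_assignment(cost, edge_set):
    """True if `edge_set` (set of (task, resource) pairs) covers all tasks."""
    masked = np.full_like(cost, BIG, dtype=float)
    for (i, j) in edge_set:
        masked[i, j] = cost[i, j]
    rows, cols = linear_sum_assignment(masked)
    return masked[rows, cols].sum() < BIG  # no forbidden edge was needed

# 3 tasks x 4 resources; a toy instance with one vulnerable edge.
cost = np.array([[4.0, 2.0, 9.0, 7.0],
                 [3.0, 8.0, 5.0, 6.0],
                 [9.0, 4.0, 2.0, 8.0]])
vulnerable = {(0, 1)}
# Candidate solution: an optimal matching plus one backup edge for task 0.
chosen = {(0, 1), (1, 0), (2, 2), (0, 3)}

robust = all(contains_assignment(cost, chosen - {e}) for e in vulnerable)
print("robust:", robust, "  total edge cost:", sum(cost[i, j] for i, j in chosen))
```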

    Management issues in systems engineering

    When applied to a system, the doctrine of successive refinement is a divide-and-conquer strategy. Complex systems are successively divided into pieces that are less complex, until they are simple enough to be conquered. This decomposition results in several structures for describing the product system and the producing system. These structures play important roles in systems engineering and project management, and many of the remaining sections in this chapter are devoted to describing some of these key structures. Structures that describe the product system include, but are not limited to, the requirements tree, the system architecture, and certain symbolic information such as system drawings, schematics, and databases. The structures that describe the producing system include the project's work breakdown, schedules, cost accounts, and organization.

    Essays on Optimization and Modeling Methods for Reliability and Reliability Growth

    This research proposes novel solution techniques in the realm of reliability and reliability growth. We first consider a redundancy allocation problem to design a system that maximizes the reliability of a complex series-parallel system comprised of components with deterministic reliability. We propose a new meta-heuristic, inspired by the behavior of bats hunting prey, to find component allocation and redundancy levels that provide optimal or near-optimal system reliability levels. Each component alternative has an associated cost and weight, and the system is constrained by cost and weight factors. We allow for component mixing within a subsystem, with a pre-defined maximum level of component redundancy per subsystem, which adds to problem complexity and prevents an optimal solution from being derived analytically. The second problem of interest involves how we model a system's reliability growth as it undergoes testing and how we minimize deviation from planned growth. We propose a Grey Model, GM(1,1), for modeling reliability growth on complex systems when failure data is sparse. The GM(1,1) model's performance is benchmarked against the Army Materiel Systems Analysis Activity (AMSAA) model, the standard within the reliability growth modeling community. For continuous and discrete (one-shot) testing, the GM(1,1) model shows itself to be superior to the AMSAA model when modeling reliability growth with small failure data sets. Finally, to ensure the reliability growth planning curve is followed as closely as possible, we determine the best level of corrective action to employ on a discovered failure mode, with corrective action levels allowed to vary based upon the amount of resources allocated for failure mode improvement. We propose a Markov Decision Process (MDP) approach to handle the stochasticity of failure data and its corresponding system reliability estimate. By minimizing a weighted deviation from the planning curve, systems will ideally meet the reliability milestones specified by the planning curve, while simultaneously avoiding system over-development and unnecessary resource expenditure for over-correction of failure modes.
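
    As a pointer to the modeling approach, the sketch below fits a textbook GM(1,1) grey model to a short, made-up failure-count series. The dissertation's exact formulation and its benchmarking against the AMSAA model are not reproduced; the function gm11_forecast and the data are illustrative.

```python
import numpy as np

# Hedged sketch of a standard GM(1,1) grey model fit, as one might use to
# track failure counts when data are sparse.

def gm11_forecast(x0, steps=1):
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                               # accumulated generating operation
    z1 = 0.5 * (x1[1:] + x1[:-1])                    # background values
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0] # development / grey input coefficients
    k = np.arange(len(x0) + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x0_hat = np.diff(x1_hat, prepend=x1_hat[0])      # back to the original series
    x0_hat[0] = x0[0]
    return x0_hat

failures_per_phase = [8, 6, 5, 5, 4]                 # illustrative sparse failure counts
print(gm11_forecast(failures_per_phase, steps=2))
```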

    Edge Intelligence: Empowering Intelligence to the Edge of Network

    Edge intelligence refers to a set of connected systems and devices for data collection, caching, processing, and analysis in proximity to where the data are captured, based on artificial intelligence. Edge intelligence aims at enhancing data processing and protecting the privacy and security of the data and users. Although the field emerged only recently, around 2011, it has shown explosive growth over the past five years. In this article, we present a thorough and comprehensive survey of the literature surrounding edge intelligence. We first identify four fundamental components of edge intelligence, i.e., edge caching, edge training, edge inference, and edge offloading, based on theoretical and practical results pertaining to proposed and deployed systems. We then aim for a systematic classification of the state of the solutions by examining research results and observations for each of the four components, and present a taxonomy that includes practical problems, adopted techniques, and application goals. For each category, we elaborate, compare, and analyze the literature from the perspectives of adopted techniques, objectives, performance, advantages and drawbacks, and so on. This article provides a comprehensive survey of edge intelligence and its application areas. In addition, we summarize the development of the emerging research fields and the current state of the art, and discuss the important open issues and possible theoretical and technical directions.

    Design, Development, Test, and Evaluation (DDT&E) Considerations for Safe and Reliable Human Rated Spacecraft Systems

    A team directed by the NASA Engineering and Safety Center (NESC) collected methodologies for how best to develop safe and reliable human rated systems and how to identify the drivers that provide the basis for assessing safety and reliability. The team also identified techniques, methodologies, and best practices to assure that NASA can develop safe and reliable human rated systems. The results are drawn from a wide variety of resources, from experts involved with the space program since its inception to the best practices espoused in contemporary engineering doctrine. This report focuses on safety and reliability considerations and does not duplicate or update any existing references, nor does it intend to replace existing standards and policy.

    On-Line Steady-State Data Reconciliation for Advanced Cost Analysis in the Pulp and Paper Industry

    The North American pulp and paper industry currently faces many challenges. Due to its commodity-focused and capital-intensive nature, the industry finds it increasingly difficult to survive in the current global marketplace. Knowing individual product margins becomes essential to determine optimal unit prices, thus uncovering the real operating profitability of manufacturing. However, current cost accounting systems that are based on predetermined resource spending can provide only an ad-hoc assessment of these values, and thus serve only as a mill benchmark for cost performance evaluation.
    The implementation of information management systems in pulp and paper companies has enabled a better understanding of both business and production processes. This allows for the development of advanced methodologies that assist the forestry sector in better controlling costs and improving profits. These systems are of especially critical importance for commodity producers such as newsprint mills, where low production costs are essential for business survivability. The objective of this thesis was to develop a methodology for online manufacturing cost analysis, using real-time process and cost data available from the information management systems, that is capable of assessing actual product margin costs and of using this information for operational and tactical decision-making. Furthermore, the knowledge gained from applying this methodology can be exploited at the strategic decision-making level for addressing the process-cost impact of retrofit design alternatives. The methodology consists of three major steps. First, a signal processing technique, based on multiscale wavelet transformation and filtering, is applied to simultaneously analyze every segment of the plant-wide instrumentation network. Second, process data quality is further improved by confronting the plant-wide set of variables with the underlying fundamental process model using data reconciliation techniques; the plant-wide manufacturing information is updated by coaptation and correction of biased measurements. Third, this operational knowledge is integrated with the financial data in an operations-driven cost model to calculate and analyze the production costs of operating regimes, for the short- and long-term benefit of the company. The methodology is applied to a case study considering current newsprint mill characteristics and a potential retrofit biorefinery implementation. It was found that combining the wavelet technique with a classical data reconciliation technique provides multiple facility-level benefits. An online implementation of this technique was able to provide a significant number of data sets, extracted from the information management systems, as potential candidates to represent plant-wide and near steady-state operation. These data sets provided a sufficient statistical basis for characterising manufacturing operation per operating regime. By doing this automatically, the methodology was able to enhance the quality of the data and highlight the manufacturing region where the uncertainty in measurements is significant. The number of near steady-state candidates that could be detected increased when the state identification parameters were relaxed; however, it was shown that the uncertainty in the resulting data sets increases as the steady-state assumption is relaxed. For the relatively simple newsprint operation analyzed, the technique was able to acquire multiple near steady-state data sets representing plant-wide operation with a satisfactory level of accuracy. Moreover, the online implementation, in combination with the data reconciliation method, helped to improve measurement sensor performance by identifying sensors with systematic errors. This valuable information was then used to further tune individual instruments and hence improve the overall performance of the methodology.
    Furthermore, it was shown that if this technique were implemented at the facility level for everyday use, it would help identify biased measurements in real time and thus significantly improve instrumentation and process troubleshooting. The manufacturing cost assessment based on these data sets, each representing an individual operating regime, provided new insights into the cost structure of the facility, with transparent and visible process-cost interpretation capabilities. The application of the overall methodological framework, in the context of real production processes, demonstrated the ability to identify favourable and costly operating regimes when producing the same product grade. The characterisation and interpretation of the cost variances between individual regimes, as well as within the same operating regime, helped to identify process problems. Further application of the methodology for evaluating the manufacturing costs of retrofit design scenarios showed the ability to exploit the current operations-driven manufacturing knowledge, organized by regime, to enhance strategic decision-making at the facility. The results from this application showed that the operational profitability of new integrated production lines strongly depends on the operational differences among the current manufacturing regimes of the core business products. These differences in manufacturing costs are visible from a process perspective and enable assessment of individual future product and product-mix margins. This information is essential for margin-centric supply chain planning of the enterprise and for exploiting process flexibility to achieve an optimal production profile according to market conditions. Future work includes the expansion of this work into strategic investment decision-making at the corporate level in order to enhance tactical and strategic planning. Furthermore, marginal cost analysis based on real data and operations-performance analysis could be included in the methodological framework in order to obtain more flexible forest biorefinery retrofit designs with a good strategic fit.
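
    To illustrate the reconciliation step in the second stage of the methodology, the sketch below applies the classical closed-form weighted least-squares reconciliation of flow measurements subject to linear mass balances. The two-node flow network, measurement values, and standard deviations are invented for illustration and do not come from the thesis.

```python
import numpy as np

# Hedged sketch of classical steady-state data reconciliation: adjust flow
# measurements as little as possible (weighted by measurement variance) while
# forcing them to satisfy linear mass balances A x = 0.

def reconcile(y, sigma, A):
    """Closed-form reconciled estimate x_hat = y - S A^T (A S A^T)^-1 A y."""
    S = np.diag(sigma ** 2)                      # measurement covariance (assumed diagonal)
    correction = S @ A.T @ np.linalg.solve(A @ S @ A.T, A @ y)
    return y - correction

# Streams: F1 -> node1 -> F2 -> node2 -> (F3, F4).  Balances: F1 - F2 = 0, F2 - F3 - F4 = 0.
A = np.array([[1.0, -1.0,  0.0,  0.0],
              [0.0,  1.0, -1.0, -1.0]])
y = np.array([100.4,  98.7, 60.2, 40.1])         # raw (noisy) flow measurements
sigma = np.array([1.0, 1.0, 0.8, 0.6])           # measurement standard deviations

x_hat = reconcile(y, sigma, A)
print("reconciled flows:", np.round(x_hat, 2), " residuals:", np.round(A @ x_hat, 6))
```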