4 research outputs found

    A contribution to the evaluation and optimization of networks reliability

    Efficient computation of system reliability is required in many sensitive networks. Despite the increasing power of computers and the proliferation of algorithms, finding good solutions quickly for large systems remains an open problem. Recently, efficient computation techniques have been recognized as a significant advance towards solving the problem in reasonable time; however, they apply only to special categories of networks, and more effort is still needed to arrive at a unified method giving exact solutions. Evaluating network reliability is a highly complex combinatorial problem that demands powerful computing resources. Several methods have been proposed in the literature: some have been implemented, notably minimal-set enumeration and factoring methods, while others remained purely theoretical. This thesis addresses the evaluation and optimization of network reliability. Several problems are treated, including the development of a methodology for modelling networks with a view to evaluating their reliability. The methodology was validated on a wide-area radio communication network recently deployed to serve the whole province of Québec. Several algorithms were also devised to generate the minimal paths and minimal cuts of a given network; generating these paths and cuts is an important contribution to the reliability evaluation and optimization process. The algorithms handled several test networks, as well as the provincial radio communication network, quickly and efficiently, and were subsequently used to evaluate reliability with a method based on binary decision diagrams. Several theoretical contributions also made it possible to obtain, within the framework of factoring methods, an exact solution for the reliability of imperfect stochastic networks in which both edges and nodes are subject to failure. From this research, several tools were implemented to evaluate and optimize network reliability, and the results clearly show significant gains in execution time and memory usage compared with many other implementations.
Keywords: reliability, networks, optimization, binary decision diagrams, minimal path sets and minimal cut sets, algorithms, Birnbaum importance index, radio telecommunication systems, programs
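The factoring approach the abstract refers to (pivotal decomposition on one edge at a time) can be sketched in a few lines. The following is a minimal illustration under an edge-failure-only assumption, not the thesis's implementation: it omits the series-parallel reductions, BDD machinery, and imperfect-node handling that the actual work contributes.

```python
def two_terminal_reliability(edges, s, t):
    """Exact s-t reliability by the factoring (pivotal decomposition) theorem:
        R(G) = p_e * R(G/e) + (1 - p_e) * R(G - e)

    edges: list of (u, v, p) with p = probability the edge works.
    Plain recursion without reductions, so exponential in general.
    """
    if s == t:
        return 1.0              # terminals merged by contraction: connected
    if not edges:
        return 0.0              # no edges left and s != t: disconnected
    (u, v, p), rest = edges[0], edges[1:]
    # Delete e: simply drop it from the graph.
    r_del = two_terminal_reliability(rest, s, t)
    # Contract e: merge node v into node u, dropping any self-loops.
    merged = []
    for (a, b, q) in rest:
        a2 = u if a == v else a
        b2 = u if b == v else b
        if a2 != b2:
            merged.append((a2, b2, q))
    s2 = u if s == v else s
    t2 = u if t == v else t
    r_con = two_terminal_reliability(merged, s2, t2)
    return p * r_con + (1.0 - p) * r_del
```

The recursion is exact but blows up exponentially without reductions; the thesis's contribution lies precisely in taming that blow-up with reductions and binary decision diagrams.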

    Combinatorial and graph theoretical aspects of two-edge connected reliability

    The study of reliability networks goes back to the early 20th century. This thesis is mainly concerned with two-edge-connected reliability. First, simple algorithms that are not efficient for general graphs are presented, together with reduction techniques. Characterizations of edges with respect to path sets are then given, and new structural conditions for them are introduced. New results are also obtained for graphs of high density and symmetry, more precisely for complete and complete bipartite graphs. Naturally, graphs of low density are easier to analyze here; the thesis presents results for cycles, wheels, and ladder structures. Graphs of bounded path-width or tree-width admit polynomial-time algorithms and, in special cases, simple formulas, which are likewise presented. The concluding part deals with bounds and approximations.
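The quantity studied here, two-edge-connected reliability, is the probability that the subgraph of surviving edges is connected and bridge-free. A brute-force sketch makes the definition concrete (hypothetical helper names; exhaustive over all edge states, so exponential in the edge count, unlike the polynomial algorithms the thesis obtains for bounded tree-width):

```python
from itertools import product

def is_connected(nodes, edges):
    """Depth-first-search connectivity test over an explicit node set."""
    nodes = list(nodes)
    if not nodes:
        return True
    adj = {n: [] for n in nodes}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, stack = {nodes[0]}, [nodes[0]]
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == len(adj)

def is_two_edge_connected(nodes, edges):
    """Connected and bridge-free: removing any one edge keeps it connected."""
    if not is_connected(nodes, edges):
        return False
    return all(is_connected(nodes, edges[:i] + edges[i + 1:])
               for i in range(len(edges)))

def two_edge_connected_reliability(nodes, edges, probs):
    """Probability that the random subgraph of working edges is
    two-edge-connected; exhaustive over all 2^m edge states."""
    total = 0.0
    for state in product([False, True], repeat=len(edges)):
        pr, alive = 1.0, []
        for works, e, p in zip(state, edges, probs):
            pr *= p if works else 1.0 - p
            if works:
                alive.append(e)
        if is_two_edge_connected(nodes, alive):
            total += pr
    return total
```

For a cycle C_n every edge is needed (deleting one leaves a path full of bridges), so the reliability collapses to the product of the edge probabilities, matching the simple closed formulas the thesis gives for sparse families.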

    Reliability Analysis of the Hypercube Architecture.

    This dissertation presents improved techniques for analyzing network-connected (NCF), 2-connected (2CF), task-based (TBF), and subcube (SF) functionality measures in a hypercube multiprocessor with faulty processing elements (PEs) and/or communication elements (CEs). These measures help study system-level fault-tolerance issues and relate to various application modes of the hypercube. The solutions discussed fall into probabilistic and deterministic models. The probabilistic model assumes a stochastic graph of the hypercube in which PEs and/or CEs may fail with certain probabilities, while the deterministic model considers some system components as already failed and aims to determine the remaining system functionality. For the probabilistic model, MIL-HDBK-217F is used to predict PE and CE failure rates for an Intel iPSC system. First, a technique called CAREL is presented; a proof of its correctness is included in an appendix. Using the shelling-ordering concept, CAREL is shown to solve the exact probabilistic NCF measure for a hypercube in time polynomial in the number of spanning trees. However, this number increases exponentially with the hypercube dimension. The dissertation therefore aims to obtain lower and upper bounds on the measures more efficiently. The algorithms presented generate tighter bounds than had been obtained previously and run in time polynomial in the cube dimension. The proposed algorithms for the probabilistic 2CF measure consider PE and/or CE failures. For the deterministic measures, a hybrid method for fault-tolerant broadcasting in the hypercube is proposed, combining the favorable features of redundant and non-redundant techniques. A generalized result on the deterministic TBF measure for the hypercube is then described. Two distributed algorithms are proposed to identify the largest operational subcubes in a hypercube C_n with faulty PEs.
    Method 1, called LOS1, requires a list of faulty components and uses the CMB operator of CAREL to solve the problem. When the number of unavailable nodes (faulty or busy) increases, an alternative distributed approach, called LOS2, processes m available nodes in O(mn) time. The proposed techniques are simple and efficient.
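As a concrete (centralized, non-distributed) counterpart to the subcube-identification problem that LOS1 and LOS2 solve, a brute-force sketch can enumerate all 3^n subcubes of C_n, written as ternary addresses over {0, 1, *}, and keep the fault-free ones of maximal dimension. This is purely illustrative and is not the dissertation's algorithm:

```python
from itertools import product

def largest_operational_subcubes(n, faulty):
    """Return (dimension, subcubes) of the largest fault-free subcubes
    of an n-cube. A subcube is a ternary address over '0', '1', '*',
    where each '*' is a free coordinate; its dimension is the '*' count.

    faulty: set of binary node addresses, e.g. {"010", "111"} for n = 3.
    Brute force over all 3^n subcubes, for illustration only.
    """
    best_dim, best = -1, []
    for spec in product("01*", repeat=n):
        free = [i for i, c in enumerate(spec) if c == "*"]
        # Expand the subcube into its member node addresses.
        members = []
        for bits in product("01", repeat=len(free)):
            addr = list(spec)
            for i, b in zip(free, bits):
                addr[i] = b
            members.append("".join(addr))
        if any(m in faulty for m in members):
            continue                       # subcube touches a faulty PE
        d = len(free)
        if d > best_dim:
            best_dim, best = d, ["".join(spec)]
        elif d == best_dim:
            best.append("".join(spec))
    return best_dim, best
```

For example, with n = 3 and node 000 faulty, the largest operational subcubes are the three 2-dimensional subcubes 1**, *1*, and **1, each avoiding the faulty corner.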

    Building safety into the conceptual design of complex systems. An aircraft systems perspective.

    Safety is a critical consideration during the design of an aircraft, as it constrains how primary functions of the system can be achieved. It is essential to include safety considerations from early design stages to avoid low-performance solutions or high costs associated with the substantial redesign that is commonly required when the system is found not to be safe at late stages of the design. Additionally, safety is a crucial element in the certification process of aircraft, which requires compliance with safety requirements to be demonstrated. Existing methods for safety assessment are limited in their ability to inform architectural decisions from early design stages. Current techniques often require large amounts of manual work and are not well integrated with other system engineering tools, which translates into increased time to synthesise and analyse architectures, thus reducing the number of alternative architectures that can be studied. This lack of timely safety assessment also results in a situation where safety models evolve at a different pace and become outdated with respect to the architecture definition, which limits their ability to provide valuable feedback. Within this context, the aim is to improve the efficiency and effectiveness of design for safety as an integral part of the systems architecting process. Three objectives are proposed to achieve the stated aim: automate and integrate the hazard assessment process with the systems architecting process; facilitate the interactive introduction of safety principles; and enable a faster assessment of safety and performance of architectures. The scope is restricted to the earlier (conceptual) design stages, the use of model-based systems engineering for systems architecting (RFLP paradigm) and steady-state models for rapid analysis. Regarding the first objective, an enabler to support the generation of safety requirements through hazard assessment was created. 
    The enabler integrates the RFLP architecting process with System-Theoretic Process Analysis to ensure consistency of the safety assessment and derive safety requirements more efficiently. Concerning the second objective, interactive enablers were developed to support the designer when synthesising architectures featuring a combination of safety principles such as physical redundancy, functional redundancy, and containment. To ensure consistency and reduce the amount of work required for adding safety, these methods leverage the ability to trace dependencies within the logical view and between the RFLP domains of the architecture. As required by the third objective, methods were developed to automate substantial parts of the process of creating analysis models. In particular, the methods enable rapid generation of models for Fault Tree Analysis and for subsystem sizing, taking into account advanced contextual information such as mission, environment, and system configurations. To evaluate this research, the methods were implemented in AirCADia Architect, an object-oriented architecting tool, and were verified and evaluated through application to two aircraft-related use cases: the first involves the wheel brake system, the second several subsystems. The results were presented for evaluation to a group of design specialists from a major airframe manufacturer. The experts concluded that the proposed framework allows architects to define and analyse safe architectures faster, thus enabling more effective and efficient design-space exploration during conceptual design.
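The kind of model that automated Fault Tree Analysis produces can be illustrated with a minimal evaluator for AND/OR gates over independent basic events. All event names and probabilities below are made up for illustration, and real FTA additionally handles repeated events, common-cause failures, and further gate types:

```python
def ft_probability(gate, basic):
    """Top-event probability of a fault tree, assuming independent
    basic events that each appear at most once in the tree.

    gate: either a basic-event name (str), or a tuple
          ("AND" | "OR", [child gates]).
    basic: dict mapping basic-event name -> failure probability.
    """
    if isinstance(gate, str):
        return basic[gate]
    op, children = gate
    probs = [ft_probability(c, basic) for c in children]
    if op == "AND":                 # all children must fail
        out = 1.0
        for p in probs:
            out *= p
        return out
    out = 1.0                       # OR: 1 - prod(1 - p_i)
    for p in probs:
        out *= 1.0 - p
    return 1.0 - out

# Toy example: loss of braking = both redundant hydraulic channels fail,
# each channel failing if its pump OR its valve fails (made-up numbers).
tree = ("AND", [("OR", ["pump1", "valve1"]),
                ("OR", ["pump2", "valve2"])])
basic = {"pump1": 0.01, "valve1": 0.02, "pump2": 0.01, "valve2": 0.02}
top = ft_probability(tree, basic)
```

Physical redundancy shows up directly in the tree shape: the AND gate over the two channels is what drives the top-event probability down to the square of a single channel's failure probability.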