
    Evaluation of the Causes of Levee Failure Using Bayesian Networks (Evaluación de las causas de falla de un dique usando redes bayesianas)

    Context: Forensic geotechnical engineering aims to determine the most likely causes of geotechnical failures. Standard practice tests a set of credible hypotheses against the collected evidence using backward analysis and complex but deterministic geotechnical models. Geotechnical models involving uncertainty are not usually employed to analyze the causes of failure, even though soil parameters are uncertain and evidence is often incomplete. Method: This paper introduces a probabilistic modeling approach based on Bayesian networks to test hypotheses in light of collected evidence. Bayesian networks simulate patterns of human reasoning under uncertainty through a bidirectional inference process known as "explaining away." In this study, Bayesian networks are used to test several credible hypotheses about the causes of levee failures. The hypotheses are assessed using probability queries and the K-Most Probable Explanation (K-MPE) algorithm. Results: The approach was applied to the analysis of a well-known levee failure in Breitenhagen, Germany, where previous forensic studies had produced a multiplicity of competing explanations for the causes of failure. The approach supports the conclusion that the failure was most likely caused by a combination of high phreatic levels, a conductive layer, and weak soils, making it possible to discard a significant number of competing explanations. Conclusions: The proposed approach is expected to improve the accuracy and transparency of conclusions about the causes of failure in levee structures.
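    As a minimal illustration of how such probability queries and K-MPE queries can be posed, the sketch below builds a toy Bayesian network over hypothetical binary variables (HighPhreatic, ConductiveLayer, WeakSoil, Failure) and answers both query types by brute-force enumeration; the structure and probabilities are illustrative assumptions only, not the network used in the paper.

```python
# Minimal sketch (not the paper's model): a toy Bayesian network over binary
# variables, queried by brute-force enumeration. Variable names, network
# structure, and probabilities are illustrative assumptions only.
from itertools import product

PRIORS = {"HighPhreatic": 0.3, "ConductiveLayer": 0.2, "WeakSoil": 0.4}

def p_failure(high, conductive, weak):
    """Illustrative CPT for P(Failure = 1 | parents), roughly noisy-OR shaped."""
    base = 0.01 + 0.30 * high + 0.25 * conductive + 0.35 * weak
    return min(base, 0.99)

VARS = ["HighPhreatic", "ConductiveLayer", "WeakSoil", "Failure"]
ALL = [dict(zip(VARS, vals)) for vals in product([0, 1], repeat=len(VARS))]

def joint(a):
    """Joint probability of a full 0/1 assignment over VARS."""
    p = 1.0
    for var, prior in PRIORS.items():
        p *= prior if a[var] else 1.0 - prior
    pf = p_failure(a["HighPhreatic"], a["ConductiveLayer"], a["WeakSoil"])
    return p * (pf if a["Failure"] else 1.0 - pf)

def posterior(query_var, evidence):
    """Probability query: P(query_var = 1 | evidence), by enumeration."""
    consistent = [a for a in ALL if all(a[v] == val for v, val in evidence.items())]
    z = sum(joint(a) for a in consistent)
    return sum(joint(a) for a in consistent if a[query_var] == 1) / z

def k_mpe(evidence, k=3):
    """The K most probable full configurations consistent with the evidence."""
    consistent = [a for a in ALL if all(a[v] == val for v, val in evidence.items())]
    return sorted(consistent, key=joint, reverse=True)[:k]

evidence = {"Failure": 1}                   # the levee is observed to have failed
print(posterior("HighPhreatic", evidence))  # hypothesis probability given the evidence
for a in k_mpe(evidence):
    print(a, round(joint(a), 4))            # ranked candidate explanations
```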

    Developing a computational framework for explanation generation in knowledge-based systems and its application in automated feature recognition

    A Knowledge-Based System (KBS) is essentially an intelligent computer system that explicitly or tacitly possesses a knowledge repository which helps the system solve problems. Research on building KBSs for industrial applications, to improve design quality and shorten the research cycle, is attracting increasing interest. In early models, explainability was considered one of the major benefits of using a KBS, since most were rule-based systems and explanations could be generated from the rule traces of the reasoning process. As KBSs have developed, the notion of a knowledge base has become much more general than a set of rules, and the techniques used to solve problems in a KBS extend far beyond rule-based reasoning. Many Artificial Intelligence (AI) techniques have been introduced, such as neural networks and genetic algorithms, improving the effectiveness and efficiency of KBSs. As a trade-off, however, their explainability has weakened: more and more KBSs are perceived as black-box systems that do not operate transparently for users, resulting in a loss of trust. Developing an explanation model for modern KBSs has a positive impact on user acceptance of these systems and the advice they provide. This thesis proposes a novel computational framework for explanation generation in KBSs. Unlike existing models, which are usually built inside a KBS and generate explanations from the actual decision-making process, the explanation model in this framework stands outside the KBS and generates explanations by producing an alternative justification that is independent of the decision-making process actually used by the system. The knowledge and reasoning approaches in the explanation model can therefore be optimized specifically for explanation generation, improving the quality of the explanations. Another contribution of this study is that the system covers three types of explanations (most existing models address only the first two): 1) decision explanation, which helps users understand how a KBS reached its conclusion; 2) domain explanation, which provides detailed descriptions of the concepts and relationships within the domain; and 3) software diagnosis, which diagnoses user observations of unexpected behaviors of the system or of relevant domain phenomena. The framework is demonstrated on a case of Automated Feature Recognition (AFR). The resulting explanatory system uses Semantic Web languages to implement a separate knowledge base built solely for explanatory purposes and integrates a novel reasoning approach for generating explanations. The system is tested on an industrial STEP file and delivers good-quality explanations for user queries about how a given feature is recognized.
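    The separation described above, with an explanation model standing outside the KBS and justifying its output from its own knowledge base, can be sketched as follows; the class names, rules, and feature-recognition example are hypothetical stand-ins, not the thesis's actual Semantic Web implementation.

```python
# Hypothetical sketch of the framework's architecture: the recognizer is treated
# as a black box, and a separate explanation model with its own small knowledge
# base justifies results independently of how the recognizer computed them.
# Class names, rules, and the feature example are illustrative assumptions.

class BlackBoxRecognizer:
    """Stands in for the KBS (here, an automated feature recognizer)."""
    def recognize(self, faces):
        # Opaque decision logic: only the result is visible to users.
        return "through_hole" if {"cylindrical", "open_both_ends"} <= set(faces) else "unknown"

class ExplanationModel:
    """Separate knowledge base and reasoning used only to generate explanations."""
    DOMAIN = {
        "through_hole": "A through hole is a cylindrical feature open at both ends.",
        "blind_hole": "A blind hole is a cylindrical feature closed at one end.",
    }
    RULES = {  # alternative justifications, unrelated to the recognizer's internals
        "through_hole": ["cylindrical", "open_both_ends"],
        "blind_hole": ["cylindrical", "closed_at_one_end"],
    }

    def explain_decision(self, feature, faces):
        met = [c for c in self.RULES.get(feature, []) if c in faces]
        return f"'{feature}' is justified because the evidence satisfies: {', '.join(met)}."

    def explain_domain(self, concept):
        return self.DOMAIN.get(concept, "No domain description available.")

    def diagnose(self, feature, faces):
        missing = [c for c in self.RULES.get(feature, []) if c not in faces]
        return (f"'{feature}' was not recognized; missing evidence: {', '.join(missing)}."
                if missing else f"No inconsistency found for '{feature}'.")

faces = ["cylindrical", "open_both_ends", "planar"]
kbs, expl = BlackBoxRecognizer(), ExplanationModel()
feature = kbs.recognize(faces)
print(expl.explain_decision(feature, faces))   # 1) decision explanation
print(expl.explain_domain(feature))            # 2) domain explanation
print(expl.diagnose("blind_hole", faces))      # 3) software diagnosis
```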

    Simplifying Explanations in Bayesian Belief Networks

    Abductive inference in Bayesian belief networks is understood as the process of generating the K most probable configurations given observed evidence. These configurations are called explanations, and in most of the approaches found in the literature all explanations contain the same number of literals. In this paper we study how to simplify the explanations in such a way that the resulting configurations still account for the observed facts.
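    A hedged sketch of this simplification idea: starting from a full configuration, literals are dropped greedily as long as the remaining partial explanation still gives the observed evidence a high conditional probability. The toy distribution, variable names, and the 0.9 threshold below are illustrative assumptions, not the paper's exact simplification criterion.

```python
# Hedged sketch (not the paper's exact criterion): literals are removed from a
# full explanation greedily, as long as the remaining partial configuration
# still gives the observed evidence a high conditional probability. The toy
# joint distribution and the 0.9 threshold are illustrative assumptions.
from itertools import product

VARS = ["A", "B", "C", "E"]              # E plays the role of the observed evidence

def joint(a):
    """Toy joint over four binary variables; E is almost determined by A alone."""
    p = (0.4 if a["A"] else 0.6) * 0.5 * (0.7 if a["C"] else 0.3)
    pe = 0.95 if a["A"] else 0.05
    return p * (pe if a["E"] else 1.0 - pe)

ALL = [dict(zip(VARS, v)) for v in product([0, 1], repeat=len(VARS))]

def cond_prob(target, given):
    """P(target | given) for partial assignments, by enumeration."""
    match = lambda a, part: all(a[k] == v for k, v in part.items())
    num = sum(joint(a) for a in ALL if match(a, target) and match(a, given))
    den = sum(joint(a) for a in ALL if match(a, given))
    return num / den if den else 0.0

def simplify(explanation, evidence, threshold=0.9):
    """Drop literals while P(evidence | partial explanation) stays >= threshold."""
    expl = dict(explanation)
    for var in list(expl):
        trial = {k: v for k, v in expl.items() if k != var}
        if trial and cond_prob(evidence, trial) >= threshold:
            expl = trial                 # this literal is not needed to account for E
    return expl

evidence = {"E": 1}
full_mpe = max((a for a in ALL if a["E"] == 1), key=joint)       # most probable explanation
full_expl = {k: v for k, v in full_mpe.items() if k != "E"}
print("full explanation:      ", full_expl)
print("simplified explanation:", simplify(full_expl, evidence))  # typically just {'A': 1}
```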