Deep learning for safety assessment of nuclear power reactors: Reliability, explainability, and research opportunities

Abstract

Deep learning algorithms offer plausible benefits for the efficient prediction and analysis of nuclear reactor safety phenomena. However, research that examines the critical challenges of deep learning models from a reactor safety perspective is limited. This article presents the state of the art in deep learning applications for nuclear reactor safety analysis, along with the inherent limitations of deep learning models. In addition, critical issues such as deep learning model explainability, sensitivity and uncertainty constraints, model reliability, and trustworthiness are discussed from a nuclear safety perspective, and robust solutions to the identified issues are presented. As a major contribution, a deep feedforward neural network is developed as a surrogate model to predict turbulent eddy viscosity in Reynolds-averaged Navier–Stokes (RANS) simulation. Further, the deep feedforward neural network's performance is compared with the conventional Spalart–Allmaras closure model in RANS turbulence closure simulation. In addition, the Shapley Additive Explanations (SHAP) and local interpretable model-agnostic explanations (LIME) APIs are introduced to explain the deep feedforward neural network's predictions. Finally, promising research opportunities to optimize deep learning-based reactor safety analysis are presented.

The work of AA and HA is funded through the Sêr Cymru II 80761-BU-103 project by the Welsh European Funding Office (WEFO) under the European Regional Development Fund (ERDF).
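To illustrate the surrogate-modelling idea in miniature, the sketch below trains a small feedforward network to reproduce the algebraic eddy-viscosity relation used in the Spalart–Allmaras model, nu_t = nu_tilde * f_v1 with f_v1 = chi^3 / (chi^3 + c_v1^3) and chi = nu_tilde / nu. This is a heavily simplified, hypothetical example on synthetic data, not the paper's actual RANS surrogate; the viscosity value, sampling range, scaling factors, and network sizes are all assumptions for illustration only.

```python
# Illustrative sketch (not the paper's model): a feedforward network
# acting as a surrogate for the Spalart-Allmaras eddy-viscosity relation
#   nu_t = nu_tilde * f_v1,  f_v1 = chi^3 / (chi^3 + c_v1^3),  chi = nu_tilde / nu
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
nu = 1.5e-5        # kinematic viscosity of air (assumed value)
c_v1 = 7.1         # standard Spalart-Allmaras constant

# Synthetic training data: sample the SA working variable nu_tilde
nu_tilde = rng.uniform(0.0, 1.0e-3, size=5000)
chi = nu_tilde / nu
f_v1 = chi**3 / (chi**3 + c_v1**3)
nu_t = nu_tilde * f_v1                  # "ground truth" eddy viscosity

# Rescale input and target to O(1) so the network trains easily
X = (nu_tilde * 1.0e3).reshape(-1, 1)
y = nu_t * 1.0e3

model = MLPRegressor(hidden_layer_sizes=(32, 32, 32),
                     activation="relu", solver="lbfgs",
                     max_iter=5000, random_state=0)
model.fit(X, y)

pred = model.predict(X) / 1.0e3         # undo target scaling
rel_err = np.abs(pred - nu_t).mean() / nu_t.mean()
print(f"mean relative error of surrogate: {rel_err:.3f}")
```

In the full RANS setting the surrogate would take local flow features rather than a single working variable, and its pointwise predictions could then be passed to SHAP or LIME to attribute each prediction to its input features.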