
    Formal Verification of Input-Output Mappings of Tree Ensembles

    Recent advances in machine learning and artificial intelligence are now being considered in safety-critical autonomous systems, where software defects may cause severe harm to humans and the environment. Design organizations in these domains are currently unable to provide convincing arguments that their systems are safe to operate when machine learning algorithms are used to implement their software. In this paper, we present an efficient method to extract equivalence classes from decision trees and tree ensembles, and to formally verify that their input-output mappings comply with requirements. The idea is that, given that safety requirements can be traced to desirable properties of system input-output patterns, positive verification outcomes can be used in safety arguments. This paper presents the implementation of the method in the tool VoTE (Verifier of Tree Ensembles) and evaluates its scalability on two case studies from the current literature. We demonstrate that our method is practical for tree ensembles trained on low-dimensional data with up to 25 decision trees and tree depths of up to 20. Our work also studies the limitations of the method on high-dimensional data and preliminarily investigates the trade-off between the number of trees and verification time.
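    As a rough illustration of the equivalence-class idea (this is not the VoTE implementation or its API; all names below are hypothetical), the Python sketch that follows enumerates the hyperrectangles induced by a single decision tree's root-to-leaf paths, on each of which the output is constant, and checks an input-output property box by box. For ensembles, a verifier would additionally intersect boxes across trees and combine the leaf outputs, which is where the scalability limits discussed in the abstract arise.

```python
# Minimal sketch of equivalence-class extraction from one decision tree.
# Each root-to-leaf path defines a box of inputs with a constant output,
# so a property can be verified exhaustively over finitely many boxes.
from dataclasses import dataclass

@dataclass
class Node:
    feature: int = None        # index of the feature tested at this node
    threshold: float = None    # go left if x[feature] <= threshold
    left: "Node" = None
    right: "Node" = None
    value: float = None        # leaf prediction (None on internal nodes)

def equivalence_classes(node, box):
    """Yield (box, output) pairs; box maps feature index -> interval (lo, hi]."""
    if node.value is not None:                 # leaf: output is constant on box
        yield dict(box), node.value
        return
    lo, hi = box.get(node.feature, (float("-inf"), float("inf")))
    if lo < node.threshold:                    # left subtree is reachable
        yield from equivalence_classes(
            node.left, {**box, node.feature: (lo, min(hi, node.threshold))})
    if hi > node.threshold:                    # right subtree is reachable
        yield from equivalence_classes(
            node.right, {**box, node.feature: (max(lo, node.threshold), hi)})

def verify(tree, property_holds):
    """True iff the property holds on every equivalence class of the tree."""
    return all(property_holds(box, out)
               for box, out in equivalence_classes(tree, {}))

# Example: a depth-1 tree over feature 0; check that outputs stay in [0, 1].
tree = Node(feature=0, threshold=0.5, left=Node(value=0.0), right=Node(value=1.0))
assert verify(tree, lambda box, out: 0.0 <= out <= 1.0)
```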

    Artificial Intelligence for Natural Hazards Risk Analysis: Potential, Challenges, and Research Needs

    Artificial intelligence (AI) methods have seen increasingly widespread use in everything from consumer products and driverless cars to fraud detection and weather forecasting. The use of AI has transformed many of these application domains. There are ongoing efforts at leveraging AI for disaster risk analysis. This article takes a critical look at the use of AI for disaster risk analysis. What is the potential? How is the use of AI in this field different from its use in nondisaster fields? What challenges need to be overcome for this potential to be realized? And, what are the potential pitfalls of an AI-based approach for disaster risk analysis that we as a society must be cautious of?
    Peer Reviewed
    https://deepblue.lib.umich.edu/bitstream/2027.42/155885/1/risa13476_am.pdf
    https://deepblue.lib.umich.edu/bitstream/2027.42/155885/2/risa13476.pd

    Training Adversarial Agents to Exploit Weaknesses in Deep Control Policies

    Deep learning has become an increasingly common technique for various control problems, such as robotic arm manipulation, robot navigation, and autonomous vehicles. However, the downside of using deep neural networks to learn control policies is their opaque nature and the difficulty of validating their safety. As the networks used to obtain state-of-the-art results become increasingly deep and complex, the rules they have learned and how they operate become more challenging to understand. This presents an issue, since in safety-critical applications the safety of the control policy must be ensured to a high confidence level. In this paper, we propose an automated black-box testing framework based on adversarial reinforcement learning. The technique uses an adversarial agent whose goal is to degrade the performance of the target model under test. We test the approach on an autonomous vehicle problem by training an adversarial reinforcement learning agent that aims to cause a deep neural network-driven autonomous vehicle to collide. Two neural networks trained for autonomous driving are compared, and the test results are used to compare the robustness of their learned control policies. We show that the proposed framework is able to find weaknesses in both control policies that were not evident during online testing, and it therefore demonstrates a significant benefit over manual testing methods.
    Comment: 2020 IEEE International Conference on Robotics and Automation (ICRA).
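    The paper's framework is not reproduced here; the sketch below only shows the general shape of such a black-box adversarial testing loop. It assumes a gym-style environment whose `step` also accepts an adversary-chosen disturbance, plus hypothetical `adversary`, `target_policy`, and `info["collision"]` interfaces; none of these names come from the paper.

```python
# Sketch of one episode of black-box adversarial testing: the adversary
# picks disturbances (e.g. other-vehicle behavior), the fixed target
# policy acts, and the adversary is rewarded for degrading the target.
def adversarial_episode(env, target_policy, adversary, max_steps=1000):
    """Run one episode and return the failure cases the adversary found."""
    obs = env.reset()
    failures, adv_return = [], 0.0
    for t in range(max_steps):
        disturbance = adversary.act(obs)           # adversary's move
        action = target_policy(obs)                # black box: no gradients used
        obs, reward, done, info = env.step(action, disturbance)  # assumed interface
        adv_return += -reward                      # adversary reward = -target reward
        if info.get("collision"):
            failures.append((t, obs))              # record the exposed weakness
        if done:
            break
    adversary.update(adv_return)                   # any RL update rule fits here
    return failures
```

    Comparing two driving policies then amounts to training one such adversary against each and comparing how many failures it uncovers and how quickly.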

    Estimating uncertainty of earthquake rupture using Bayesian neural network

    Bayesian neural networks (BNNs) are probabilistic models that combine the strengths of neural networks (NNs) and stochastic processes. As a result, a BNN can combat overfitting and perform well in applications where data is limited. Earthquake rupture is one such problem where data is insufficient and scientists have to rely on many trial-and-error numerical or physical models. Because of limited resources and computational expense, it is often hard to determine the reasons behind an earthquake rupture. In this work, a BNN has been used (1) to combat the small-data problem, (2) to find the parameter combinations responsible for earthquake rupture, and (3) to estimate the uncertainty associated with earthquake rupture. Two thousand rupture simulations are used to train and test the model. A simple 2D rupture geometry is considered where the fault has a Gaussian geometric heterogeneity at the center, and eight parameters vary in each simulation. The BNN achieves a test F1-score of 0.8334, which is 2.34% higher than that of a plain NN. Results show that the parameters of rupture propagation have higher uncertainty than those of rupture arrest. Normal stresses play a vital role in determining rupture propagation and are also the highest source of uncertainty, followed by the dynamic friction coefficient. Shear stress has a moderate role, whereas geometric features such as the width and height of the fault are the least significant and least uncertain.
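    As a hedged illustration of how per-prediction uncertainty can be read off such a classifier: the paper trains a Bayesian neural network, while the sketch below substitutes Monte Carlo dropout (Gal & Ghahramani), a common approximation that keeps dropout active at test time and treats the spread of sampled predictions as uncertainty. The 8-dimensional input matches the eight varied parameters in the abstract; the architecture and all names are assumptions.

```python
# MC-dropout stand-in for a BNN on a binary rupture/arrest classifier.
import torch
import torch.nn as nn

model = nn.Sequential(              # 8 inputs: the varied rupture parameters
    nn.Linear(8, 64), nn.ReLU(), nn.Dropout(p=0.2),
    nn.Linear(64, 64), nn.ReLU(), nn.Dropout(p=0.2),
    nn.Linear(64, 1),               # logit for P(rupture propagates)
)

@torch.no_grad()
def predict_with_uncertainty(x, n_samples=100):
    """Mean probability and std-dev across stochastic forward passes."""
    model.train()                   # keep dropout stochastic at prediction time
    probs = torch.stack([torch.sigmoid(model(x)) for _ in range(n_samples)])
    return probs.mean(dim=0), probs.std(dim=0)
```

    Thresholding the mean probability at 0.5 gives the class label used for an F1-score, while a large standard deviation flags the high-uncertainty parameter regions the abstract describes.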