Contour Extraction of Inertial Confinement Fusion Images By Data Augmentation

Abstract

X-ray radiographs are one of the primary results of inertial confinement fusion (ICF) experiments. Scarcity of experimental data, high noise levels, the lack of ground-truth annotations, and low image resolution limit the use of machine and deep learning for automated analysis of radiographs. In this work we address these roadblocks by creating a synthetic radiograph dataset that resembles experimental radiographs. Each synthetic radiograph is accompanied by the corresponding contour of each capsule shell shape, which allows neural networks to be trained on the synthetic data for contour extraction and then applied to the experimental images. We therefore train an instance of the convolutional neural network U-Net to segment the outer capsule shell shape using the synthetic dataset, and we apply this trained network to a set of radiographs taken at the National Ignition Facility. We show that the network extracts the outer shell shape for a small number of capsules, as an initial demonstration of deep learning for automatic contour extraction of ICF images. Future work may include extracting outer shells from the full dataset, applying other kinds of neural networks, and extracting inner shell contours as well.

Comment: 6 pages, 9 figures
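The abstract describes training a U-Net on synthetic radiograph/contour pairs and then applying it to experimental images. The sketch below is a minimal illustration of that setup in PyTorch, not the authors' code: the miniature U-Net architecture, the per-pixel binary loss, and the placeholder random tensors standing in for synthetic (radiograph, outer-shell mask) pairs are all assumptions for illustration only.

```python
# Hypothetical sketch: train a small U-Net-style network to segment outer-shell
# masks from synthetic radiographs. Random tensors stand in for the synthetic data.
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """One-level encoder/decoder with a skip connection (a U-Net in miniature)."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 16, 3, padding=1), nn.ReLU())
        self.down = nn.MaxPool2d(2)
        self.mid = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec = nn.Sequential(nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 1, 1))  # 1-channel mask logits

    def forward(self, x):
        e = self.enc(x)                               # encoder features
        u = self.up(self.mid(self.down(e)))           # bottleneck, then upsample
        return self.dec(torch.cat([u, e], dim=1))     # skip connection + decode

# Placeholder synthetic data: 8 radiographs with binary outer-shell masks, 64x64.
images = torch.rand(8, 1, 64, 64)
masks = (torch.rand(8, 1, 64, 64) > 0.5).float()

model = TinyUNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()  # per-pixel binary segmentation loss

for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(images), masks)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")
```

After training on the synthetic set, the same forward pass would be run on experimental radiographs to produce candidate outer-shell masks, from which contours can be extracted.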
