Deep learning assisted sparse array ultrasound imaging.

Abstract

This study aims to suppress grating lobe artifacts and improve the image resolution of sparse array ultrasonography via a deep learning predictive model. A deep learning assisted sparse array was developed using only 64 or 16 of the 128 channels, yielding a pitch two or eight times that of the original array. The deep learning assisted sparse array imaging system was demonstrated on ex vivo porcine teeth. The 64- and 16-channel sparse array images were used as the input, and the corresponding 128-channel dense array images were used as the ground truth. The structural similarity index measure (SSIM), mean squared error (MSE), and peak signal-to-noise ratio (PSNR) of the predicted images improved significantly (p < 0.0001). The resolution of the predicted images was close to that of the ground truth images (0.18 mm and 0.15 mm versus 0.15 mm). Gingival thickness measurements showed a high level of agreement between the predicted sparse array images and the ground truth images, as indicated by a bias of -0.01 mm and 0.02 mm for the 64- and 16-channel predicted images, respectively, and a Pearson's r = 0.99 (p < 0.0001) for both. The gingival thickness bias between deep learning assisted sparse array imaging and clinical probing was <0.05 mm. To conclude, the deep learning assisted sparse array can reconstruct high-resolution ultrasound images using only 16 of the 128 channels. The deep learning model generalized well for the 64-channel array, while generalization for the 16-channel array would require further optimization.
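The channel-subsampling scheme and the evaluation metrics described above can be sketched as follows. This is an illustrative NumPy sketch, not the authors' code: the array shapes, the decimation-style subsampling, and the synthetic data are assumptions, and the MSE/PSNR helpers are generic textbook definitions rather than the paper's exact implementation.

```python
# Illustrative sketch (assumptions, not the paper's code): emulate the 64-
# and 16-channel sparse arrays by keeping every 2nd or 8th element of a
# 128-channel acquisition, and score a reconstruction against the dense-
# array ground truth with MSE and PSNR.
import numpy as np

def sparse_subsample(rf: np.ndarray, keep: int) -> np.ndarray:
    """Keep every k-th channel so 128 channels -> `keep` channels,
    increasing the effective pitch by a factor of 128 // keep."""
    step = rf.shape[0] // keep  # 2 for 64 channels, 8 for 16 channels
    return rf[::step]

def mse(a: np.ndarray, b: np.ndarray) -> float:
    """Mean squared error between two images of equal shape."""
    return float(np.mean((a - b) ** 2))

def psnr(a: np.ndarray, b: np.ndarray, peak: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB (infinite for identical images)."""
    m = mse(a, b)
    return float(10.0 * np.log10(peak**2 / m)) if m > 0 else float("inf")

# Toy stand-in for a 128-channel acquisition (128 channels x 1024 samples).
rng = np.random.default_rng(0)
dense = rng.random((128, 1024))
sparse64 = sparse_subsample(dense, 64)  # pitch doubled
sparse16 = sparse_subsample(dense, 16)  # pitch x8
```

In the study's pipeline, images beamformed from the sparse channel sets play the role of the network input and the 128-channel images the ground truth; SSIM (e.g. `skimage.metrics.structural_similarity`) would be computed alongside MSE and PSNR on the beamformed images.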
