Using vision transformer to synthesize computed tomography perfusion images in ischemic stroke patients
Computed tomography perfusion (CTP) imaging is crucial for diagnosing and determining the extent of damage in cerebral stroke patients [1]. Automatic segmentation of ischemic core and penumbra regions in CTP images is desirable given the limitations of manual examination. Self-supervised segmentation has gained attention [2], but it requires a large training set, which can be obtained by synthesizing CTP images. Deep convolutional generative adversarial networks (DCGANs) have been used for this purpose [3], but high-resolution image synthesis remains a challenge. To address this, we propose to tailor the high-resolution transformer-based generative adversarial network (HiT-GAN) model of Zhao et al. [4], which uses vision transformers and self-attention mechanisms, to generate high-quality CTP data.
Our proposed model was trained on CTP images from 157 patients, categorized by vessel occlusion. The dataset comprised 70,050 raw images, which were normalized and downsampled. In a comparative evaluation, HiT-GAN achieved a substantially lower Fréchet inception distance (FID) of 77.4 versus 143.0 for the DCGAN, indicating superior image generation performance. The generated images were visually compared with real samples, demonstrating promising results. While the current focus is on generating 2D images, future work aims to extend the model to generate 3D CTP data conditioned on labeled brain slices.
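The FID metric used above compares the Gaussian statistics of feature embeddings from real and generated images: FID = ||μ_r − μ_g||² + Tr(Σ_r + Σ_g − 2(Σ_r Σ_g)^{1/2}). A minimal sketch of this computation is shown below; note that for FID proper the inputs would be Inception-v3 activations of the CTP images, whereas here `feats_a` and `feats_b` are placeholder feature arrays, and the function name is illustrative rather than from the paper.

```python
import numpy as np
from scipy import linalg

def frechet_distance(feats_a, feats_b):
    """Fréchet distance between Gaussians fitted to two feature sets.

    feats_a, feats_b: arrays of shape (n_samples, n_features).
    For FID, these would be Inception-v3 activations of real and
    generated images; any feature vectors work for illustration.
    """
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    # Matrix square root of the covariance product; small imaginary
    # components can appear from numerical error, so take the real part.
    covmean = linalg.sqrtm(cov_a @ cov_b)
    if np.iscomplexobj(covmean):
        covmean = covmean.real
    diff = mu_a - mu_b
    return float(diff @ diff + np.trace(cov_a + cov_b - 2.0 * covmean))
```

Identical feature sets give a distance near zero, and the score grows as the two distributions diverge, which is why the lower HiT-GAN score of 77.4 indicates samples statistically closer to the real CTP images.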
Overall, our study highlights the potential of HiT-GAN for synthesizing high-resolution CTP images, although its value for advancing automatic segmentation techniques in ischemic stroke analysis remains to be examined.