
    PHE-SICH-CT-IDS: A Benchmark CT Image Dataset for Evaluation of Semantic Segmentation, Object Detection and Radiomic Feature Extraction of Perihematomal Edema in Spontaneous Intracerebral Hemorrhage

    Intracerebral hemorrhage is one of the diseases with the highest mortality and poorest prognosis worldwide. Spontaneous intracerebral hemorrhage (SICH) typically presents acutely, so prompt radiological examination is crucial for diagnosing, localizing, and quantifying the hemorrhage. Early detection and accurate segmentation of perihematomal edema (PHE) play a critical role in guiding appropriate clinical intervention and improving patient prognosis. However, the development and assessment of computer-aided diagnostic methods for PHE segmentation and detection are hampered by the scarcity of publicly accessible brain CT image datasets. This study establishes a publicly available CT dataset, PHE-SICH-CT-IDS, for perihematomal edema in spontaneous intracerebral hemorrhage. The dataset comprises 120 brain CT scans and 7,022 CT images, along with the corresponding medical information of the patients. To demonstrate its effectiveness, classical algorithms for semantic segmentation, object detection, and radiomic feature extraction are evaluated. The experimental results confirm the suitability of PHE-SICH-CT-IDS for assessing the performance of segmentation, detection, and radiomic feature extraction methods. To the best of our knowledge, this is the first publicly available dataset for PHE in SICH, provided in multiple data formats suitable for diverse medical scenarios. We believe that PHE-SICH-CT-IDS will attract researchers to explore novel algorithms, providing valuable support for clinicians and patients in the clinical setting. PHE-SICH-CT-IDS is freely published for non-commercial purposes at: https://figshare.com/articles/dataset/PHE-SICH-CT-IDS/23957937
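
    As a sketch of the radiomic feature extraction task this dataset is meant to support, the snippet below uses pyradiomics to compute features inside a perihematomal-edema mask; the file names and NIfTI layout are illustrative assumptions, not the dataset's documented structure.

```python
# Hypothetical sketch: radiomic feature extraction for one PHE-SICH-CT-IDS case
# with pyradiomics. File names and the NIfTI layout are assumptions.
from radiomics import featureextractor

# The default extractor computes shape, first-order, and texture (GLCM, GLRLM, ...) features.
extractor = featureextractor.RadiomicsFeatureExtractor()

ct_path = "case_001_ct.nii.gz"              # assumed brain CT volume
phe_mask_path = "case_001_phe_mask.nii.gz"  # assumed perihematomal edema mask

features = extractor.execute(ct_path, phe_mask_path)
for name, value in features.items():
    if not name.startswith("diagnostics_"):  # skip extractor metadata entries
        print(name, value)
```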

    ECPC-IDS: A benchmark endometrial cancer PET/CT image dataset for evaluation of semantic segmentation and detection of hypermetabolic regions

    Endometrial cancer is one of the most common tumors of the female reproductive system and the third most common cause of death among gynecological malignancies, after ovarian and cervical cancer. Early diagnosis can significantly improve the 5-year survival rate of patients. With the development of artificial intelligence, computer-assisted diagnosis plays an increasingly important role in improving the accuracy and objectivity of diagnosis and reducing the workload of doctors. However, the absence of publicly available endometrial cancer image datasets restricts the application of computer-assisted diagnostic techniques. In this paper, a publicly available Endometrial Cancer PET/CT Image Dataset for Evaluation of Semantic Segmentation and Detection of Hypermetabolic Regions (ECPC-IDS) is published. Specifically, the segmentation section includes PET and CT images, with a total of 7,159 images in multiple formats. To demonstrate the effectiveness of segmentation methods on ECPC-IDS, five classical deep learning semantic segmentation methods are selected to test the image segmentation task. The object detection section also includes PET and CT images, with a total of 3,579 images and XML files containing annotation information; six deep learning methods are selected for experiments on the detection task. This study conducts extensive experiments using deep learning-based semantic segmentation and object detection methods to demonstrate the differences between various methods on ECPC-IDS. As far as we know, this is the first publicly available endometrial cancer dataset with such a large number of images, including the information required for both segmentation and object detection. ECPC-IDS can aid researchers in exploring new algorithms to enhance computer-assisted technology, greatly benefiting both clinical doctors and patients.
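
    Since the detection annotations are distributed as XML files, a minimal reading sketch is given below; it assumes the common Pascal VOC schema (object/name, bndbox/xmin, ...), which may not match the exact field names used in ECPC-IDS.

```python
# Minimal sketch of reading detection annotations, assuming Pascal VOC-style XML.
# The actual tag names and file naming in ECPC-IDS may differ.
import xml.etree.ElementTree as ET

def load_boxes(xml_path):
    """Return a list of (label, xmin, ymin, xmax, ymax) tuples from one XML file."""
    root = ET.parse(xml_path).getroot()
    boxes = []
    for obj in root.iter("object"):
        label = obj.findtext("name")
        bb = obj.find("bndbox")
        boxes.append((
            label,
            int(float(bb.findtext("xmin"))),
            int(float(bb.findtext("ymin"))),
            int(float(bb.findtext("xmax"))),
            int(float(bb.findtext("ymax"))),
        ))
    return boxes

print(load_boxes("patient_0001_slice_042.xml"))  # assumed file name
```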

    AATCT-IDS: A Benchmark Abdominal Adipose Tissue CT Image Dataset for Image Denoising, Semantic Segmentation, and Radiomics Evaluation

    Methods: In this study, a benchmark Abdominal Adipose Tissue CT Image Dataset (AATTCT-IDS) containing 300 subjects is prepared and published. AATTCT-IDS releases 13,732 raw CT slices, and the researchers individually annotate the subcutaneous and visceral adipose tissue regions of 3,213 of those slices that share the same slice distance, in order to validate denoising methods, train semantic segmentation models, and study radiomics. For each task, this paper compares and analyzes the performance of various methods on AATTCT-IDS by combining visualization results and evaluation metrics, thereby verifying the research potential of the dataset for these three types of tasks. Results: In the comparative study of image denoising, algorithms using a smoothing strategy suppress mixed noise at the expense of image detail and obtain better evaluation scores, whereas methods such as BM3D preserve the original image structure better although their scores are slightly lower; the differences among the methods are significant. In the comparative study of semantic segmentation of abdominal adipose tissue, the segmentation results of the models show different structural characteristics. Among them, BiSeNet obtains segmentation results only slightly inferior to U-Net with the shortest training time and effectively separates small, isolated adipose tissue. In addition, the radiomics study based on AATTCT-IDS reveals three adipose distributions in the subject population. Conclusion: AATTCT-IDS contains the ground truth of adipose tissue regions in abdominal CT slices. This open-source dataset can attract researchers to explore the multi-dimensional characteristics of abdominal adipose tissue and thus help physicians and patients in clinical practice. AATTCT-IDS is freely published for non-commercial purposes at: https://figshare.com/articles/dataset/AATTCT-IDS/23807256
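
    The denoising comparison described above can be reproduced in miniature as follows: the sketch contrasts a smoothing strategy (Gaussian filtering) with structure-preserving BM3D on a noisy stand-in image, scored by PSNR. The noise model, parameter values, test image, and use of the bm3d Python package are illustrative assumptions, not the paper's setup.

```python
# Sketch: Gaussian smoothing vs. BM3D on a noisy stand-in for a CT slice,
# compared by PSNR. Noise level and parameters are illustrative assumptions.
import numpy as np
import bm3d
from skimage import data, filters, metrics, util

clean = util.img_as_float(data.camera())                 # stand-in for a CT slice
noisy = clean + np.random.normal(0.0, 0.05, clean.shape)  # additive Gaussian noise

smoothed = filters.gaussian(noisy, sigma=1.0)   # smoothing strategy
bm3d_out = bm3d.bm3d(noisy, sigma_psd=0.05)     # structure-preserving BM3D

for name, img in [("gaussian", smoothed), ("bm3d", bm3d_out)]:
    psnr = metrics.peak_signal_noise_ratio(clean, np.clip(img, 0, 1), data_range=1.0)
    print(name, psnr)
```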

    A non-enhanced CT-based deep learning diagnostic system for COVID-19 infection at high risk among lung cancer patients

    Background: Pneumonia and lung cancer have a mutually reinforcing relationship. Lung cancer patients are prone to contracting COVID-19 and have poorer prognoses. Additionally, COVID-19 infection can impact anticancer treatment for lung cancer patients. An early diagnostic system for COVID-19 pneumonia can therefore help improve the prognosis of lung cancer patients with COVID-19 infection. Method: This study proposes a neural network for COVID-19 diagnosis based on non-enhanced CT scans, consisting of two 3D convolutional neural networks (CNNs) connected in series to form two diagnostic modules. The first diagnostic module classifies COVID-19 pneumonia patients versus other pneumonia patients, while the second diagnostic module distinguishes severe COVID-19 patients from ordinary COVID-19 patients. We also analyzed the correlation between the deep learning features of the two diagnostic modules and various laboratory parameters, including KL-6. Result: The first diagnostic module achieved an accuracy of 0.9669 on the training set and 0.8884 on the test set, while the second diagnostic module achieved an accuracy of 0.9722 on the training set and 0.9184 on the test set. A strong correlation was observed between the deep learning features of the second diagnostic module and KL-6. Conclusion: Our neural network can differentiate COVID-19 pneumonia from other pneumonias on CT images, while also distinguishing ordinary COVID-19 patients from those with white lung. Patients with white lung have greater alveolar damage than ordinary COVID-19 patients, and our deep learning features can serve as an imaging biomarker.
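
    To make the two-stage design concrete, the sketch below wires up a small 3D CNN binary classifier used twice in series (COVID-19 vs. other pneumonia, then severe vs. ordinary COVID-19). This is not the paper's architecture; all layer sizes are assumptions, and the pooled features merely stand in for the deep learning features that were correlated with laboratory parameters such as KL-6.

```python
# Illustrative two-stage setup with a small 3D CNN (assumed architecture, not the paper's).
import torch
import torch.nn as nn

class Small3DCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.AdaptiveAvgPool3d(1),
        )
        self.classifier = nn.Linear(32, 2)   # binary decision

    def forward(self, x):                    # x: (batch, 1, depth, height, width)
        feats = self.features(x).flatten(1)  # pooled deep features
        return self.classifier(feats), feats

stage1 = Small3DCNN()   # COVID-19 pneumonia vs. other pneumonia
stage2 = Small3DCNN()   # severe ("white lung") vs. ordinary COVID-19

ct = torch.randn(2, 1, 32, 64, 64)   # dummy non-enhanced CT volumes
logits1, _ = stage1(ct)
logits2, feats2 = stage2(ct)
print(logits1.shape, feats2.shape)
```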

    Spatio-temporal segmentation for video surveillance


    Comparison of K-means and fuzzy c-means algorithm performance for automated determination of the arterial input function.

    The arterial input function (AIF) plays a crucial role in the quantification of cerebral perfusion parameters. The traditional method for AIF detection is based on manual operation, which is time-consuming and subjective. Two automatic methods have been reported, based on two frequently used clustering algorithms: fuzzy c-means (FCM) and K-means. However, it is still not clear which is better for AIF detection. Hence, we compared the performance of these two clustering methods using both simulated and clinical data. The results demonstrate that K-means analysis can yield more accurate and robust AIF results, although it takes longer to execute than the FCM method. We consider this longer execution time trivial relative to the total time required for image manipulation in a PACS setting, and acceptable if an ideal AIF is obtained. Therefore, the K-means method is preferable to FCM for AIF detection.
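
    A minimal sketch of clustering-based AIF selection is shown below, using K-means on simulated time-concentration curves; the gamma-variate simulation and the highest-peak selection rule are common heuristics assumed for illustration, not necessarily the exact criteria used in this study.

```python
# Sketch: K-means clustering of simulated time-concentration curves, then
# picking the cluster with the highest mean peak as the AIF candidate cluster.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
t = np.linspace(0, 60, 40)   # seconds

def gamma_variate(t0, ymax, alpha=3.0, beta=1.5):
    """Toy gamma-variate bolus curve starting at time t0 with peak ymax."""
    dt = np.clip(t - t0, 0.0, None)
    c = (dt ** alpha) * np.exp(-dt / beta)
    return ymax * c / c.max()

# Arterial-like curves: early, high peak; tissue-like curves: later, lower peak.
curves = np.vstack(
    [gamma_variate(8, 1.0) + rng.normal(0, 0.02, t.size) for _ in range(20)] +
    [gamma_variate(14, 0.3) + rng.normal(0, 0.02, t.size) for _ in range(200)]
)

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(curves)
aif_cluster = max(range(2), key=lambda k: curves[labels == k].mean(axis=0).max())
aif = curves[labels == aif_cluster].mean(axis=0)
print("AIF cluster:", aif_cluster, "peak:", aif.max())
```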

    Block2vec: An Approach for Identifying Urban Functional Regions by Integrating Sentence Embedding Model and Points of Interest

    Urban functional regions are essential information for parsing urban spatial structure, and their rapid and accurate identification is important for improving urban planning and management. Thanks to its low cost and fast update cycle, the Point of Interest (POI) is one of the most common types of open-access data. Most existing approaches identify urban functional regions by analyzing the potential correlation between POI data and the regions. Although the spatial correlation between regions is an important manifestation of urban function, it has rarely been considered in previous studies. To extract the spatial semantic information among regions, a new model called Block2vec is proposed, built on the idea of the Skip-gram framework. Block2vec maps the spatial correlation between POIs, as well as between regions, to high-dimensional vectors, in which classification of urban functional regions can be better performed. Cluster analysis showed that the extracted high-dimensional vectors can clearly distinguish regions with different functions, and the random forest classification result (overall accuracy = 0.7186, Kappa = 0.6429) illustrates the effectiveness of the proposed method. This study also verifies the potential of sentence embedding models for extracting semantic information from POIs.
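
    A toy sketch of the Skip-gram idea behind Block2vec: treat the POIs of each block as a "sentence" of categories, learn embeddings with word2vec (sg=1 selects Skip-gram), and average them into a block vector that can be clustered or classified. The POI sentences below are made-up examples; how Block2vec actually constructs its sequences follows the paper, not this sketch.

```python
# Toy Skip-gram embedding of POI "sentences" per block (illustrative data).
from gensim.models import Word2Vec
import numpy as np

block_sentences = [
    ["restaurant", "cafe", "cinema", "mall", "restaurant"],    # commercial block
    ["school", "library", "dormitory", "canteen"],             # campus block
    ["apartment", "kindergarten", "supermarket", "pharmacy"],  # residential block
]

model = Word2Vec(block_sentences, vector_size=32, window=3, min_count=1, sg=1, seed=1)

# A simple block vector: the mean of its POI embeddings, which could then be
# clustered or fed to a random forest classifier of functional regions.
block_vecs = np.array([
    np.mean([model.wv[w] for w in sent], axis=0) for sent in block_sentences
])
print(block_vecs.shape)  # (3, 32)
```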