
    FetusMapV2: Enhanced Fetal Pose Estimation in 3D Ultrasound

    Fetal pose estimation in 3D ultrasound (US) involves identifying a set of associated fetal anatomical landmarks. Its primary objective is to provide comprehensive information about the fetus through landmark connections, thus benefiting critical applications such as biometric measurement, plane localization, and fetal movement monitoring. However, accurately estimating the 3D fetal pose in a US volume faces several challenges, including poor image quality, limited GPU memory for handling high-dimensional data, symmetrical or ambiguous anatomical structures, and considerable variation in fetal poses. In this study, we propose a novel 3D fetal pose estimation framework (called FetusMapV2) to overcome these challenges. Our contribution is three-fold. First, we propose a heuristic scheme that explores complementary, network-structure-unconstrained and activation-unreserved GPU memory management approaches, which enlarge the input image resolution for better results under limited GPU memory. Second, we design a novel Pair Loss to mitigate confusion caused by symmetrical and similar anatomical structures. It separates the hidden classification task from the landmark localization task and thus progressively eases model learning. Last, we propose shape-prior-based self-supervised learning that selects the relatively stable landmarks to refine the pose online. Extensive experiments and diverse applications on a large-scale fetal US dataset of 1000 volumes with 22 landmarks per volume demonstrate that our method outperforms other strong competitors. Comment: 16 pages, 11 figures; accepted by Medical Image Analysis (2023)
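    The order-free intuition behind such a Pair Loss can be sketched as follows. This is an illustrative reading of the abstract's symmetric-pair idea, not the paper's actual formulation; `pair_loss` and its weighting are hypothetical:

```python
import numpy as np

def pair_loss(pred_a, pred_b, gt_a, gt_b):
    """Hypothetical order-free loss for a symmetric landmark pair.

    Localization term: the better of the two assignments, so the model is
    not penalized for left/right confusion while learning *where* the
    landmarks are. Classification term: penalizes only the wrong ordering,
    gradually pushing the model to commit to the correct side.
    """
    direct  = np.mean((pred_a - gt_a) ** 2) + np.mean((pred_b - gt_b) ** 2)
    swapped = np.mean((pred_a - gt_b) ** 2) + np.mean((pred_b - gt_a) ** 2)
    loc = min(direct, swapped)        # order-free localization error
    cls = max(0.0, direct - swapped)  # nonzero only when the pair is swapped
    return loc + 0.5 * cls            # 0.5 is an arbitrary trade-off weight
```

    With perfect but swapped predictions the localization term vanishes while the classification term stays positive, which is the intended separation of the two tasks.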

    AI deployment on GBM diagnosis: a novel approach to analyze histopathological images using image feature-based analysis

    Background: Glioblastoma (GBM) is one of the most common malignant primary brain tumors, accounting for 60–70% of all gliomas. Conventional diagnosis and post-operative treatment planning for glioblastoma are mainly based on feature-based qualitative analysis of hematoxylin and eosin-stained (H&E) histopathological slides by both an experienced medical technologist and a pathologist. The recent development of digital whole-slide scanners makes AI-based histopathological image analysis feasible and helps to diagnose cancer by accurately counting cell types and/or performing quantitative analysis. However, the technology available for digital slide image analysis is still very limited. This study aimed to build an image-feature-based computer model using histopathology whole-slide images to differentiate patients with glioblastoma (GBM) from healthy controls (HC). Method: Two independent cohorts of patients were used. The first cohort was composed of 262 GBM patients from the Cancer Genome Atlas Glioblastoma Multiforme Collection (TCGA-GBM) dataset in the Cancer Imaging Archive (TCIA) database. The second cohort was composed of 60 GBM patients collected from a local hospital, together with a group of 60 participants with no known brain disease. All the H&E slides were collected. Thirty-three image features (22 GLCM and 11 GLRLM) were retrieved from the tumor volume delineated by a medical technologist on the H&E slides. Five machine-learning algorithms, including decision tree (DT), extreme boost (EB), support vector machine (SVM), random forest (RF), and linear model (LM), were used to build five models using the image features extracted from the first cohort of patients. The models were then deployed on the second cohort (local patients) as a test set, using the selected key image features, to identify and verify key image features for GBM diagnosis.
Results: All five machine learning algorithms demonstrated excellent performance in GBM diagnosis and achieved an overall accuracy of 100% in the training and validation stage. A total of 12 GLCM and 3 GLRLM image features were identified, and they showed a significant difference between normal and GBM images. However, only the SVM model maintained its excellent performance when the models were deployed on the independent local cohort, with an accuracy of 93.5%, sensitivity of 86.95%, and specificity of 99.73%. Conclusion: In this study, we have identified 12 GLCM and 3 GLRLM image features which can aid GBM diagnosis. Among the five models built, the SVM model proposed in this study demonstrated excellent accuracy with very good sensitivity and specificity. It could potentially be used for GBM diagnosis and future clinical application.
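    As an illustration of the feature-extraction step, the following toy sketch computes a few GLCM texture features in plain NumPy. The paper extracts 22 GLCM and 11 GLRLM features; the function name, the single offset, and the number of gray levels here are illustrative choices, not the study's configuration:

```python
import numpy as np

def glcm_features(img, levels=4, dx=1, dy=0):
    """Toy GLCM texture features from an integer image quantized to
    values in [0, levels). One offset (dx, dy) only; a real pipeline
    would average over several offsets and angles."""
    P = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            P[img[y, x], img[y + dy, x + dx]] += 1  # co-occurrence counts
    P /= P.sum()                                    # normalize to probabilities
    i, j = np.indices(P.shape)
    return {
        "contrast":    float(np.sum(P * (i - j) ** 2)),
        "energy":      float(np.sum(P ** 2)),
        "homogeneity": float(np.sum(P / (1.0 + (i - j) ** 2))),
    }
```

    In practice one would use `skimage.feature.graycomatrix`/`graycoprops` for the GLCM features and an `sklearn.svm.SVC` for the classifier rather than hand-rolled code like this.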

    MRI noise estimation and denoising using non-local PCA

    NOTICE: this is the author’s version of a work that was accepted for publication in Medical Image Analysis. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms, may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. A definitive version was subsequently published in Medical Image Analysis, Volume 22, Issue 1, May 2015, Pages 35–47, DOI 10.1016/j.media.2015.01.004.
    This paper proposes a novel method for MRI denoising that exploits both the sparseness and self-similarity properties of MR images. The proposed method is a two-stage approach that first filters the noisy image using a non-local PCA thresholding strategy, automatically estimating the local noise level present in the image, and second uses this filtered image as a guide image within a rotationally invariant non-local means filter. The proposed method internally estimates the amount of local noise present in the images, which enables applying it automatically to images with spatially varying noise levels, and also locally corrects the bias induced by Rician noise. The proposed approach has been compared with related state-of-the-art methods, showing competitive results in all the studied cases.
    We are grateful to Dr. Matteo Maggioni and Dr. Alessandro Foi for their help in running their BM4D method in our comparisons. We also want to thank Dr. Luis Marti-Bonmati and Dr. Angel Alberich-Bayarri from Quiron Hospital of Valencia for providing the real clinical data used in this paper. This study has been carried out with financial support from the French State, managed by the French National Research Agency (ANR), in the frame of the Investments for the Future Programme IdEx Bordeaux (ANR-10-IDEX-03-02), Cluster of excellence CPU and TRAIL (HR-DTI ANR-10-LABX-57).
    Manjón Herrera, J. V.; Coupé, P.; Buades, A. (2015). MRI noise estimation and denoising using non-local PCA. Medical Image Analysis, 22(1):35–47. doi:10.1016/j.media.2015.01.004
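    The first stage (PCA thresholding of similar patches) can be sketched for a single patch group. The actual method groups patches non-locally and estimates the noise level from the data itself; this minimal sketch assumes the patches are already grouped and the noise level `sigma` is given:

```python
import numpy as np

def pca_threshold_denoise(patches, sigma):
    """Hard-threshold the PCA coefficients of a group of similar patches.

    `patches` is (n_patches, patch_len). Principal components whose
    variance does not exceed the noise variance are treated as pure
    noise and discarded; the rest reconstruct the denoised patches.
    """
    mean = patches.mean(axis=0)
    X = patches - mean
    cov = X.T @ X / len(X)                 # patch covariance
    vals, vecs = np.linalg.eigh(cov)       # PCA basis (ascending eigenvalues)
    coeffs = X @ vecs                      # project onto the basis
    coeffs[:, vals <= sigma ** 2] = 0.0    # kill noise-level components
    return coeffs @ vecs.T + mean          # reconstruct in image space
```

    The key design point is that the threshold is driven by the (locally estimated) noise variance, which is what lets the full method adapt to spatially varying noise.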

    3D medical volume segmentation using hybrid multiresolution statistical approaches

    This article is available through the Brunel Open Access Publishing Fund. Copyright © 2010 S. AlZu’bi and A. Amira. 3D volume segmentation is the process of partitioning voxels into 3D regions (subvolumes) that represent meaningful physical entities, making them easier to analyze and use in future applications. Multiresolution Analysis (MRA) enables the representation of an image at several levels of resolution or blurring. Because of this multiresolution quality, wavelets have been deployed in image compression, denoising, and classification. This paper focuses on the implementation of efficient medical volume segmentation techniques. Multiresolution analysis, including the 3D wavelet and ridgelet transforms, has been used for feature extraction, and the extracted features can be modeled using Hidden Markov Models (HMMs) to segment the volume slices. A comparison study evaluating 2D and 3D techniques reveals that 3D methodologies can accurately detect the Region Of Interest (ROI). Automatic segmentation has been achieved using HMMs, where the ROI is detected accurately but at the cost of long computation times.
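    One level of a 3D wavelet decomposition of the kind used for feature extraction can be sketched with a Haar filter in NumPy. The subband coefficients (or statistics of them) would then feed the HMM; the function name and normalization here are illustrative:

```python
import numpy as np

def haar3d_level(vol):
    """One level of a 3D Haar wavelet transform: eight subbands from
    taking the average (L) or difference (H) of adjacent samples along
    each of the three axes in turn. Axis lengths must be even."""
    def split(a, axis):
        even = a.take(np.arange(0, a.shape[axis], 2), axis)
        odd  = a.take(np.arange(1, a.shape[axis], 2), axis)
        return (even + odd) / np.sqrt(2), (even - odd) / np.sqrt(2)
    bands = {"": vol}
    for axis in range(3):  # refine every band along the next axis
        bands = {name + tag: sub
                 for name, b in bands.items()
                 for tag, sub in zip(("L", "H"), split(b, axis))}
    return bands  # keys "LLL" (approximation) through "HHH" (finest detail)
```

    Libraries such as PyWavelets (`pywt.dwtn`) provide the same decomposition with a full choice of wavelet families; the hand-rolled Haar version above just makes the subband structure explicit.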

    Multiresolution analysis using wavelet, ridgelet, and curvelet transforms for medical image segmentation

    Copyright © 2011 Shadi AlZubi et al. This article has been made available through the Brunel Open Access Publishing Fund. The experimental study presented in this paper is aimed at the development of an automatic image segmentation system for classifying regions of interest (ROIs) in medical images obtained from different medical scanners such as PET, CT, or MRI. Multiresolution analysis (MRA) using the wavelet, ridgelet, and curvelet transforms has been used in the proposed segmentation system. Classifying cancers in human organs from scanner output using shape or gray-level information is a particularly challenging task: organ shapes change through different slices in a medical stack, and gray-level intensities overlap in soft tissues. The curvelet transform is a recent extension of the wavelet and ridgelet transforms that aims to deal with interesting phenomena occurring along curves. The curvelet transform has been tested on medical data sets, and the results are compared with those obtained from the other transforms. Tests indicate that using curvelets significantly improves the classification of abnormal tissues in the scans and reduces the surrounding noise.
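    The relation between the transforms can be illustrated crudely: a ridgelet transform is essentially a 1D wavelet applied to Radon projections, so line-like structures in the image become point-like in projection space. The sketch below stands in two axis-aligned projections for the full Radon transform; `crude_ridgelet` is a toy illustration, not any of the paper's implementations:

```python
import numpy as np

def crude_ridgelet(img):
    """Toy ridgelet sketch: 1D Haar wavelet of two axis-aligned
    projections (a crude two-angle Radon transform). Returns, per
    angle, the (approximation, detail) Haar coefficient pair."""
    def haar1d(x):
        return (x[0::2] + x[1::2]) / np.sqrt(2), (x[0::2] - x[1::2]) / np.sqrt(2)
    projections = {
        "0deg":  img.sum(axis=0),  # project along rows: columns collapse
        "90deg": img.sum(axis=1),  # project along columns: rows collapse
    }
    return {angle: haar1d(p) for angle, p in projections.items()}
```

    A vertical line in the image produces a sharp spike in the 0° projection (large Haar detail coefficient) and a flat 90° projection (zero detail), which is exactly the directional sensitivity that ridgelets, and curvelets along curves, exploit.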

    An Unsupervised Learning Model for Deformable Medical Image Registration

    We present a fast learning-based algorithm for deformable, pairwise 3D medical image registration. Current registration methods optimize an objective function independently for each pair of images, which can be time-consuming for large data. We define registration as a parametric function, and optimize its parameters given a set of images from a collection of interest. Given a new pair of scans, we can quickly compute a registration field by directly evaluating the function using the learned parameters. We model this function using a convolutional neural network (CNN), and use a spatial transform layer to reconstruct one image from another while imposing smoothness constraints on the registration field. The proposed method does not require supervised information such as ground-truth registration fields or anatomical landmarks. We demonstrate registration accuracy comparable to state-of-the-art 3D image registration, while operating orders of magnitude faster in practice. Our method promises to significantly speed up medical image analysis and processing pipelines, while facilitating novel directions in learning-based registration and its applications. Our code is available at https://github.com/balakg/voxelmorph. Comment: 9 pages, in CVPR 2018
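    The spatial transform layer at the heart of this approach resamples the moving image at positions displaced by the predicted registration field. A minimal 2D NumPy sketch of that warp follows; the actual layer is implemented in a differentiable framework (and in 3D), and `warp2d` and the field layout are illustrative:

```python
import numpy as np

def warp2d(moving, flow):
    """Resample `moving` (H, W) at positions displaced by `flow`
    (H, W, 2: dy then dx), using bilinear interpolation with
    border clamping -- the sampling scheme a spatial transform
    layer applies to the CNN-predicted registration field."""
    H, W = moving.shape
    ys, xs = np.mgrid[0:H, 0:W].astype(float)
    ys = np.clip(ys + flow[..., 0], 0, H - 1)
    xs = np.clip(xs + flow[..., 1], 0, W - 1)
    y0, x0 = np.floor(ys).astype(int), np.floor(xs).astype(int)
    y1, x1 = np.minimum(y0 + 1, H - 1), np.minimum(x0 + 1, W - 1)
    wy, wx = ys - y0, xs - x0
    top = moving[y0, x0] * (1 - wx) + moving[y0, x1] * wx
    bot = moving[y1, x0] * (1 - wx) + moving[y1, x1] * wx
    return top * (1 - wy) + bot * wy
```

    Because the interpolation weights depend smoothly on the field, gradients of an image-similarity loss on the warped result can flow back into the CNN that predicts the field, which is what makes the unsupervised training possible.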