
    FetusMapV2: Enhanced Fetal Pose Estimation in 3D Ultrasound

    Fetal pose estimation in 3D ultrasound (US) involves identifying a set of associated fetal anatomical landmarks. Its primary objective is to provide comprehensive information about the fetus through landmark connections, thus benefiting various critical applications, such as biometric measurements, plane localization, and fetal movement monitoring. However, accurately estimating the 3D fetal pose in a US volume presents several challenges, including poor image quality, limited GPU memory for handling high-dimensional data, symmetrical or ambiguous anatomical structures, and considerable variation in fetal poses. In this study, we propose a novel 3D fetal pose estimation framework (called FetusMapV2) to overcome these challenges. Our contribution is three-fold. First, we propose a heuristic scheme that explores complementary, network-structure-unconstrained and activation-unreserved GPU memory management approaches, which enlarge the input image resolution for better results under limited GPU memory. Second, we design a novel Pair Loss to mitigate confusion caused by symmetrical and similar anatomical structures. It separates the hidden classification task from the landmark localization task and thus progressively eases model learning. Last, we propose shape-prior-based self-supervised learning that selects the relatively stable landmarks to refine the pose online. Extensive experiments and diverse applications on a large-scale fetal US dataset comprising 1000 volumes with 22 landmarks per volume demonstrate that our method outperforms other strong competitors. Comment: 16 pages, 11 figures; accepted by Medical Image Analysis (2023).
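    The Pair Loss is described only at a high level in the abstract. As a purely illustrative sketch (not the paper's actual formulation), one way to make the localization term tolerant of left/right confusion for a single pair of symmetric landmarks looks like this in PyTorch; the function name and tensor shapes are assumptions.

        import torch
        import torch.nn.functional as F

        def pair_loss(pred_a, pred_b, gt_a, gt_b):
            """Hypothetical swap-tolerant localization loss for one pair of
            symmetric landmarks (coordinates of shape (B, 3)). Taking the
            cheaper of the two matchings keeps the regression term independent
            of which prediction is assigned to which side, so the left/right
            classification can be learned separately."""
            direct = (F.mse_loss(pred_a, gt_a, reduction="none").sum(-1)
                      + F.mse_loss(pred_b, gt_b, reduction="none").sum(-1))
            swapped = (F.mse_loss(pred_a, gt_b, reduction="none").sum(-1)
                       + F.mse_loss(pred_b, gt_a, reduction="none").sum(-1))
            # Localization term: tolerate a left/right swap of the pair.
            return torch.minimum(direct, swapped).mean()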

    Segment Anything Model for Medical Images?

    The Segment Anything Model (SAM) is the first foundation model for general image segmentation. It introduced a novel promptable segmentation task, enabling zero-shot image segmentation with the pre-trained model via two main modes: automatic everything and manual prompt. SAM has achieved impressive results on various natural image segmentation tasks. However, medical image segmentation (MIS) is more challenging due to complex modalities, fine anatomical structures, uncertain and complex object boundaries, and wide-ranging object scales. Meanwhile, zero-shot and efficient MIS can greatly reduce annotation time and boost the development of medical image analysis. Hence, SAM appears to be a potential tool, and its performance on large medical datasets should be further validated. We collected and sorted 52 open-source datasets and built a large medical segmentation dataset with 16 modalities, 68 objects, and 553K slices. We conducted a comprehensive analysis of different SAM testing strategies on the so-called COSMOS 553K dataset. Extensive experiments validate that SAM performs better with manual hints like points and boxes for object perception in medical images, leading to better performance in prompt mode than in everything mode. Additionally, SAM shows remarkable performance on some specific objects and modalities, but is imperfect or even fails completely in other situations. Finally, we analyze the influence of different factors (e.g., the Fourier-based boundary complexity and the size of the segmented objects) on SAM's segmentation performance. Extensive experiments validate that SAM's zero-shot segmentation capability is not sufficient to ensure its direct application to MIS. Comment: 23 pages, 14 figures, 12 tables.
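    To make the two testing modes concrete, the sketch below uses the official segment-anything package to run everything mode and prompt mode on a single slice. The checkpoint filename, the click location, and the box are placeholders, and a medical slice would first have to be converted to 8-bit RGB; this is not the paper's evaluation code.

        import numpy as np
        from segment_anything import (SamAutomaticMaskGenerator, SamPredictor,
                                      sam_model_registry)

        # Load a SAM backbone (checkpoint path is a placeholder).
        sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")

        # Stand-in for an RGB-converted medical slice.
        image = np.zeros((512, 512, 3), dtype=np.uint8)

        # Everything mode: automatic mask proposals, no prompts.
        auto_masks = SamAutomaticMaskGenerator(sam).generate(image)

        # Prompt mode: one foreground click plus a bounding box around the target.
        predictor = SamPredictor(sam)
        predictor.set_image(image)
        masks, scores, _ = predictor.predict(
            point_coords=np.array([[256, 256]]),
            point_labels=np.array([1]),          # 1 = foreground point
            box=np.array([200, 200, 320, 320]),  # xyxy box prompt
            multimask_output=False,
        )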

    Saliency-based segmentation of optic disc in retinal images

    Accurate segmentation of the optic disc (OD) is significant for the automation of retinal analysis and retinal disease screening. This paper proposes a novel optic disc segmentation method based on saliency. It includes two stages: optic disc location and saliency-based segmentation. In the location stage, the OD is detected using a matched template and the density of the vessels. In the segmentation stage, we treat the OD as the salient object and formulate segmentation as a saliency detection problem. To measure the saliency of a region, the boundary prior and the connectivity prior are exploited: the geodesic distance to the window boundary is computed to measure the cost a region incurs to reach the window boundary. After thresholding and ellipse fitting, we obtain the OD. Experimental results on two public databases for OD segmentation show that the proposed method achieves state-of-the-art performance.
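    As a rough illustration of the final thresholding and ellipse-fitting step (the geodesic, boundary-prior saliency computation itself is omitted), an OpenCV sketch might look as follows; the threshold fraction is an assumption, not the paper's setting.

        import cv2
        import numpy as np

        def fit_od_ellipse(saliency, frac=0.6):
            """Threshold an OD-window saliency map and fit an ellipse to the
            largest salient region. Returns ((cx, cy), (major, minor), angle)
            or None if no usable region is found."""
            binary = (saliency >= frac * saliency.max()).astype(np.uint8)
            contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                           cv2.CHAIN_APPROX_SIMPLE)
            if not contours:
                return None
            largest = max(contours, key=cv2.contourArea)
            if len(largest) < 5:      # cv2.fitEllipse needs at least 5 points
                return None
            return cv2.fitEllipse(largest)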

    A location-to-segmentation strategy for automatic exudate segmentation in colour retinal fundus images

    The automatic segmentation of exudates in colour retinal fundus images is an important task in computer-aided diagnosis and screening systems for diabetic retinopathy. In this paper, we present a location-to-segmentation strategy for automatic exudate segmentation in colour retinal fundus images, which includes three stages: anatomic structure removal, exudate location, and exudate segmentation. In the anatomic structure removal stage, a matched-filter-based main vessel segmentation method and a saliency-based optic disc segmentation method are proposed. The main vessels and optic disc are then removed to eliminate the adverse effects they bring to the second stage. In the location stage, we learn a random forest classifier to classify patches into two classes, exudate patches and exudate-free patches, in which histograms of completed local binary patterns are extracted to describe the texture structure of the patches. Finally, the local variance, a size prior on the exudate regions, and a local contrast prior are used to segment the exudate regions out of the patches classified as exudate patches in the location stage. We evaluate our method at both the exudate level and the image level. For exudate-level evaluation, we test our method on the e-ophtha EX dataset, which provides pixel-level annotations from specialists. The experimental results show that our method achieves 76% sensitivity and 75% positive predictive value (PPV), both of which significantly outperform the state-of-the-art methods. For image-level evaluation, we test our method on DiaRetDB1 and achieve competitive performance compared to the state-of-the-art methods.
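    The patch-classification stage can be sketched as follows; plain uniform LBP from scikit-image is used here as a stand-in for the completed LBP (CLBP) descriptor named in the abstract, and the forest size is arbitrary.

        import numpy as np
        from skimage.feature import local_binary_pattern
        from sklearn.ensemble import RandomForestClassifier

        def lbp_histogram(patch, P=8, R=1):
            """Texture descriptor for one grey-level patch (uniform LBP here,
            standing in for CLBP)."""
            codes = local_binary_pattern(patch, P, R, method="uniform")
            hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
            return hist

        def train_patch_classifier(patches, labels):
            """patches: list of 2-D grey-level arrays; labels: 1 = exudate patch,
            0 = exudate-free patch."""
            X = np.stack([lbp_histogram(p) for p in patches])
            clf = RandomForestClassifier(n_estimators=200, random_state=0)
            clf.fit(X, labels)
            return clf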

    Retinal vessel segmentation from simple to difficult

    In this paper, we propose two vesselness maps and a simple-to-difficult learning framework for retinal vessel segmentation that is ground-truth free. The first vesselness map is the multiscale centreline-boundary contrast map, which is inspired by the appearance of vessels. The other is the difference-of-diffusion map, which measures the difference between the diffused image and the original one. Meanwhile, two existing vesselness maps are generated, giving four vesselness maps in total. In each vesselness map, pixels with large vesselness values are regarded as positive samples, and pixels around the positive samples with small vesselness values are regarded as negative samples. Then, for each vesselness map, we learn a strong classifier for the retinal image based on the other three vesselness maps to decide the pixels with mediocre values in that single map. Finally, pixels supported by two classifiers are labelled as vessel pixels. Experimental results on DRIVE and STARE show that our method outperforms state-of-the-art unsupervised methods and achieves performance competitive with supervised methods.
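    The seed-selection step of the simple-to-difficult scheme, choosing confident positives and nearby confident negatives from a single vesselness map, could be sketched like this; the percentile thresholds and neighbourhood radius are illustrative choices, not values from the paper.

        import numpy as np
        from scipy.ndimage import binary_dilation

        def select_seed_samples(vesselness, hi_pct=95, lo_pct=40, radius=5):
            """Confident positives are the highest-valued pixels of one
            vesselness map; confident negatives are low-valued pixels lying
            in the neighbourhood of those positives."""
            hi = np.percentile(vesselness, hi_pct)
            lo = np.percentile(vesselness, lo_pct)
            positives = vesselness >= hi
            near_pos = binary_dilation(positives, iterations=radius)
            negatives = near_pos & (vesselness <= lo) & ~positives
            return positives, negatives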

    Functional Characterisation of Anticancer Activity in the Aqueous Extract of Helicteres angustifolia L. Roots.

    Helicteres angustifolia L. is a shrub that is a common ingredient of several cancer treatment recipes in traditional medicine systems in both China and Laos. In order to investigate the molecular mechanisms of its anticancer activity, we prepared an aqueous extract of Helicteres angustifolia L. roots (AQHAR) and performed several in vitro assays using human normal fibroblasts (TIG-3) and osteosarcoma cells (U2OS). We found that AQHAR caused growth arrest/apoptosis of U2OS cells in a dose-dependent manner. It showed no cytotoxicity to TIG-3 cells at doses up to 50 μg/ml. Biochemical, imaging, and cell cycle analyses revealed that it induces ROS signaling and the DNA damage response selectively in cancer cells. The latter showed upregulation of p53 and p21 and downregulation of Cyclin B1 and phospho-Rb. Furthermore, AQHAR-induced apoptosis was mediated by an increase in pro-apoptotic proteins including cleaved PARP, caspases, and Bax, while the anti-apoptotic protein Bcl-2 decreased in AQHAR-treated U2OS cells. In vivo xenograft tumor assays in nude mice revealed dose-dependent suppression of tumor growth and lung metastasis with no toxicity to the animals, suggesting that AQHAR could be a potent and safe natural drug for cancer treatment.

    Cell cycle analysis of control and AQHAR treated cancer and normal human cells.

    Cell cycle distributions of control and AQHAR-treated U2OS (A and C) and TIG-3 (B and D) cells are shown. Results from three independent experiments are represented as mean ± SD. *p < 0.05 denotes a statistically significant difference between the control and treated groups.

    Effect of AQHAR on hnRNP-K and mortalin, tumor metastasis mediating proteins, in A549 and U2OS cells.

    (A) Western blot analysis for hnRNP-K and mortalin showing a decrease in hnRNP-K. (B) Immunostaining showing a decrease in nuclear hnRNP-K and clustering of mortalin staining. Results are represented as mean ± SD of three independent experiments. *p < 0.05 denotes a statistically significant difference between the treated and control groups.