
    Magnification-independent Histopathological Image Classification with Similarity-based Multi-scale Embeddings

    The classification of histopathological images is of great value in both cancer diagnosis and pathological studies. However, variations caused by differing magnification factors and class imbalance make it a challenging task on which conventional methods that learn from image-label datasets often perform unsatisfactorily. We observe that tumours of the same class often share common morphological patterns. To exploit this fact, we propose an approach that learns similarity-based multi-scale embeddings (SMSE) for magnification-independent histopathological image classification. In particular, a pair loss and a triplet loss are leveraged to learn similarity-based embeddings from image pairs and image triplets. The learned embeddings provide accurate measurements of similarity between images, which serve as a more effective representation of histopathological morphology than ordinary image features. Furthermore, to ensure the resulting models are magnification-independent, images acquired at different magnification factors are fed to the networks simultaneously during training to learn multi-scale embeddings. In addition, to eliminate the impact of class imbalance, instead of using a hard-sample mining strategy that simply discards easy samples, we introduce a new reinforced focal loss that penalizes hard, misclassified samples while suppressing easy, well-classified ones. Experimental results show that the SMSE improves performance on histopathological image classification for both breast and liver cancers by a large margin compared to previous methods. In particular, the SMSE achieves the best performance on the BreakHis benchmark, with an improvement ranging from 5% to 18% over previous methods that use traditional features.
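
    As a minimal sketch of the similarity-learning and class-rebalancing losses described above, the PyTorch snippet below pairs a standard triplet margin loss with a generic focal loss; the focal form is an assumption standing in for the paper's reinforced focal loss, whose exact formulation is not given in the abstract.

```python
# Sketch only: a standard triplet loss plus a generic focal loss as a
# stand-in for the paper's reinforced focal loss (exact form unspecified).
import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Pull same-class embeddings together and push different-class apart."""
    d_pos = F.pairwise_distance(anchor, positive)
    d_neg = F.pairwise_distance(anchor, negative)
    return F.relu(d_pos - d_neg + margin).mean()

def focal_loss(logits, targets, gamma=2.0):
    """Down-weight easy, well-classified samples via the (1 - p_t)^gamma factor."""
    log_p = F.log_softmax(logits, dim=1)
    log_pt = log_p.gather(1, targets.unsqueeze(1)).squeeze(1)
    pt = log_pt.exp()
    return (-((1.0 - pt) ** gamma) * log_pt).mean()

# Toy usage with placeholder 128-d embeddings and 2-class logits.
emb = torch.randn(8, 128)
print(triplet_loss(emb, torch.randn(8, 128), torch.randn(8, 128)))
print(focal_loss(torch.randn(8, 2), torch.randint(0, 2, (8,))))
```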

    CTooth+: A Large-scale Dental Cone Beam Computed Tomography Dataset and Benchmark for Tooth Volume Segmentation

    Accurate tooth volume segmentation is a prerequisite for computer-aided dental analysis. Deep learning-based tooth segmentation methods have achieved satisfactory performance but require large quantities of tooth data with ground truth. Publicly available dental data are limited, meaning that existing methods cannot be reproduced, evaluated, or applied in clinical practice. In this paper, we establish CTooth+, a 3D dental CBCT dataset with 22 fully annotated volumes and 146 unlabeled volumes. We further evaluate several state-of-the-art tooth volume segmentation strategies based on fully supervised learning, semi-supervised learning, and active learning, and define performance principles for the task. This work provides a new benchmark for tooth volume segmentation, and the experiments can serve as baselines for future AI-based dental imaging research and clinical application development.
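
    For benchmarks of this kind, a volumetric overlap metric such as the Dice coefficient is the usual yardstick; the snippet below is a hedged sketch of such an evaluation (the metric choice and array shapes are assumptions, as the abstract does not specify the evaluation protocol).

```python
# Sketch: Dice-coefficient scoring of a predicted 3D tooth mask against a
# fully annotated ground-truth volume. Shapes and metric are assumptions.
import numpy as np

def dice_coefficient(pred, gt, eps=1e-7):
    """Volumetric overlap between two binary segmentation masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    return (2.0 * intersection + eps) / (pred.sum() + gt.sum() + eps)

# Placeholder volumes standing in for a CBCT prediction and its label.
pred = np.random.rand(96, 96, 96) > 0.5
gt = np.random.rand(96, 96, 96) > 0.5
print(f"Dice: {dice_coefficient(pred, gt):.3f}")
```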

    Targeted Inactivation of Kinesin-1 in Pancreatic β-Cells In Vivo Leads to Insulin Secretory Deficiency

    Suppression of Kinesin-1 by antisense oligonucleotides, or overexpression of a dominant-negative-acting kinesin heavy chain, has been reported to affect the sustained phase of glucose-stimulated insulin secretion in β-cells in vitro. In this study, we examined the in vivo physiological role of Kinesin-1 in β-cell development and function.

    POST-IVUS: A perceptual organisation-aware selective transformer framework for intravascular ultrasound segmentation

    Intravascular ultrasound (IVUS) is recommended for guiding coronary intervention. Segmentation of the coronary lumen and external elastic membrane (EEM) borders in IVUS images is a key step, but the manual process is time-consuming, error-prone, and subject to inter-observer variability. In this paper, we propose a novel perceptual organisation-aware selective transformer framework that achieves accurate and robust segmentation of vessel walls in IVUS images. In this framework, temporal context-based feature encoders extract efficient motion features of vessels. A perceptual organisation-aware selective transformer module is then proposed to extract accurate boundary information, supervised by a dedicated boundary loss. The resulting EEM and lumen segmentations are fused in a temporal constraining and fusion module to determine the most likely correct boundaries, with robustness to vessel morphology. Our proposed methods are extensively evaluated on non-selected IVUS sequences, including normal, bifurcated, and calcified vessels with shadow artifacts. The results show that the proposed methods outperform the state of the art, with a Jaccard measure of 0.92 for the lumen and 0.94 for the EEM on the IVUS 2011 open challenge dataset. This work has been integrated into the QCU-CMS software to automatically segment IVUS images in a user-friendly environment.
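
    The reported Jaccard measure is the intersection-over-union of predicted and reference masks; the snippet below is a small illustrative sketch of how such a score is computed (mask names and shapes are assumptions, not the paper's evaluation code).

```python
# Sketch: Jaccard (IoU) measure for binary lumen/EEM segmentation masks.
import numpy as np

def jaccard(pred, gt, eps=1e-7):
    """Intersection over union of two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return (inter + eps) / (union + eps)

# Placeholder lumen masks on a 512 x 512 IVUS frame.
lumen_pred = np.zeros((512, 512), dtype=bool)
lumen_pred[100:300, 100:300] = True
lumen_gt = np.zeros((512, 512), dtype=bool)
lumen_gt[110:310, 110:310] = True
print(f"Lumen Jaccard: {jaccard(lumen_pred, lumen_gt):.2f}")
```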