
    Deep Multi-instance Networks with Sparse Label Assignment for Whole Mammogram Classification

    Mammogram classification is directly related to computer-aided diagnosis of breast cancer. Traditional methods rely on regions of interest (ROIs), which require great effort to annotate. Inspired by the success of using deep convolutional features for natural image analysis and multi-instance learning (MIL) for labeling a set of instances/patches, we propose end-to-end trained deep multi-instance networks for mass classification based on the whole mammogram, without the aforementioned ROIs. We explore three different schemes to construct deep multi-instance networks for whole mammogram classification. Experimental results on the INbreast dataset demonstrate the robustness of the proposed networks compared to previous work using segmentation and detection annotations. Comment: MICCAI 2017 Camera Ready
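    As a rough illustration of the multi-instance idea described above, the sketch below scores every spatial patch of a whole mammogram with a small CNN and aggregates the patch scores by max pooling into an image-level prediction. It is a minimal PyTorch example of one common MIL scheme, not the exact networks proposed in the paper; the layer sizes and input resolution are placeholders.

```python
import torch
import torch.nn as nn

class MaxPoolingMIL(nn.Module):
    """Minimal multi-instance network: a small CNN scores every patch
    (instance) of the whole mammogram, and max pooling over the patch
    scores gives the image-level (bag) malignancy probability."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(                  # toy feature extractor
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.instance_score = nn.Conv2d(32, 1, 1)       # per-patch malignancy logit

    def forward(self, x):                               # x: (B, 1, H, W) whole mammogram
        feats = self.backbone(x)                        # (B, 32, H/4, W/4)
        logits = self.instance_score(feats)             # one logit per spatial patch
        bag_logit = logits.flatten(1).max(dim=1).values # MIL max pooling over patches
        return torch.sigmoid(bag_logit)                 # whole-image probability

model = MaxPoolingMIL()
prob = model(torch.randn(2, 1, 224, 224))               # two whole mammograms
print(prob.shape)                                        # torch.Size([2])
```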

    SwinCross: Cross-modal Swin Transformer for Head-and-Neck Tumor Segmentation in PET/CT Images

    Radiotherapy (RT) combined with cetuximab is the standard treatment for patients with inoperable head and neck cancers. Segmentation of head and neck (H&N) tumors is a prerequisite for radiotherapy planning but a time-consuming process. In recent years, deep convolutional neural networks (DCNNs) have become the de facto standard for automated image segmentation. However, due to the expensive computational cost associated with enlarging the field of view in DCNNs, their ability to model long-range dependency is still limited, and this can result in sub-optimal segmentation performance for objects with background context spanning over long distances. On the other hand, Transformer models have demonstrated excellent capabilities in capturing such long-range information in several semantic segmentation tasks performed on medical images. Inspired by the recent success of Vision Transformers and advances in multi-modal image analysis, we propose a novel segmentation model, dubbed Cross-Modal Swin Transformer (SwinCross), with a cross-modal attention (CMA) module to incorporate cross-modal feature extraction at multiple resolutions. To validate the effectiveness of the proposed method, we performed experiments on the HECKTOR 2021 challenge dataset and compared it with nnU-Net (the backbone of the top-5 methods in HECKTOR 2021) and other state-of-the-art transformer-based methods such as UNETR and Swin UNETR. The proposed method is experimentally shown to outperform these competing methods thanks to the ability of the CMA module to capture better inter-modality complementary feature representations between PET and CT for the task of head-and-neck tumor segmentation. Comment: 9 pages, 3 figures. Med Phys. 202
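    The following sketch shows one way a cross-modal attention block between PET and CT token streams could be wired in PyTorch. It is an illustrative assumption about the CMA mechanism (each modality attends to the other and the result is added residually), not the published SwinCross implementation; the embedding dimension and head count are arbitrary.

```python
import torch
import torch.nn as nn

class CrossModalAttention(nn.Module):
    """Illustrative cross-modal attention block: tokens from one modality
    (e.g. CT) query tokens from the other (PET), so each stream is enriched
    with complementary features from its counterpart."""
    def __init__(self, dim: int = 96, num_heads: int = 4):
        super().__init__()
        self.ct_from_pet = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.pet_from_ct = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm_ct = nn.LayerNorm(dim)
        self.norm_pet = nn.LayerNorm(dim)

    def forward(self, ct_tok, pet_tok):                          # (B, N, dim) token sequences
        ct_out, _ = self.ct_from_pet(ct_tok, pet_tok, pet_tok)   # CT queries, PET keys/values
        pet_out, _ = self.pet_from_ct(pet_tok, ct_tok, ct_tok)   # PET queries, CT keys/values
        return self.norm_ct(ct_tok + ct_out), self.norm_pet(pet_tok + pet_out)

cma = CrossModalAttention()
ct, pet = torch.randn(1, 64, 96), torch.randn(1, 64, 96)
fused_ct, fused_pet = cma(ct, pet)                               # same shapes, cross-enriched
```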

    Automated 5-year Mortality Prediction using Deep Learning and Radiomics Features from Chest Computed Tomography

    We propose new methods for the prediction of 5-year mortality in elderly individuals using chest computed tomography (CT). The methods consist of a classifier that performs this prediction using a set of features extracted from the CT image and segmentation maps of multiple anatomic structures. We explore two approaches: 1) a unified framework based on deep learning, where features and classifier are automatically learned in a single optimisation process; and 2) a multi-stage framework based on the design and selection/extraction of hand-crafted radiomics features, followed by the classifier learning process. Experimental results, based on a dataset of 48 annotated chest CTs, show that the deep learning model produces a mean 5-year mortality prediction accuracy of 68.5%, while radiomics produces a mean accuracy that varies between 56% and 66% (depending on the feature selection/extraction method and classifier). The successful development of the proposed models has the potential to make a profound impact on preventive and personalised healthcare. Comment: 9 pages
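    A minimal sketch of the second, multi-stage route (hand-crafted radiomics features, feature selection, then a classifier) is given below using scikit-learn. The feature matrix, selector, and classifier choices are placeholders for illustration, not the study's actual pipeline.

```python
# Multi-stage radiomics route: features per scan -> feature selection -> classifier.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(48, 120))        # 48 chest CTs x 120 radiomics features (toy data)
y = rng.integers(0, 2, size=48)       # 5-year mortality label (toy data)

pipeline = Pipeline([
    ("scale", StandardScaler()),
    ("select", SelectKBest(f_classif, k=20)),      # keep the 20 most informative features
    ("clf", RandomForestClassifier(n_estimators=200, random_state=0)),
])
scores = cross_val_score(pipeline, X, y, cv=5)     # cross-validated accuracy
print("mean CV accuracy:", scores.mean())
```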

    Matching of Mammographic Lesions in Different Breast Projections

    Of all cancer diseases, breast cancer is the most lethal among women. It has been shown that breast cancer screening programs can decrease mortality, since early detection increases the chances of survival. Usually, a pair of radiologists interpret the screening mammograms, but the process is long and exhausting. This has encouraged the development of computer-aided diagnosis (CADx) systems to replace the second radiologist, making better use of human experts' time. However, CADx systems are associated with high false positive rates, since most of them only use one view (craniocaudal or mediolateral oblique) of the screening mammogram. Radiologists, on the other hand, use both views, frequently reasoning about the diagnosis from noticeable differences between the two views.
    When considering both projections of a mammogram, lesion matching is a necessary step in making a diagnosis. However, this is a complex task, since there may be several lesion candidates to match in each projection. In this work, a matching system is proposed. The system is a cascade of three blocks: candidate detection, feature extraction and lesion matching. The first is a replication of Ribli et al.'s Faster R-CNN, and its purpose is to find possible lesion candidates. The second extracts a feature vector for each candidate, either using the candidate detector's backbone, hand-crafted features, or a siamese network trained with the triplet loss to distinguish lesions. The third computes the distance between feature vectors, also using heuristics to reject implausible candidate pairs, and ranks the distances to match the lesions. This work provides several options for feature extractors and heuristics that can be incorporated into a CADx system based on object detectors. The fact that the triplet-loss-trained models obtained results competitive with the other feature extractors is valuable, since it offers some independence between the detection and matching tasks. "Hard" and "soft" heuristics are introduced as methods to restrain matching. The system is able to match lesions satisfactorily, since its accuracy (70%-85%) is significantly higher than the chance level (30%-40%) of the data used. The "hard" heuristics achieved encouraging results on precision@k, owing to their match and candidate exclusion methods, which reject a significant number of false positives generated by the object detector.
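    To make the third block concrete, the sketch below computes pairwise distances between candidate feature vectors from the two views, applies a simple "hard" heuristic before ranking, and greedily assigns matches by increasing distance. The particular heuristic shown (a band on a normalised distance-to-nipple coordinate) and the threshold are assumptions for illustration, not the thesis's exact implementation.

```python
import numpy as np

def match_lesions(feats_cc, feats_mlo, pos_cc, pos_mlo, band=0.15):
    """Distance-based matching of lesion candidates across CC and MLO views."""
    # pairwise Euclidean distances between candidate feature vectors
    d = np.linalg.norm(feats_cc[:, None, :] - feats_mlo[None, :, :], axis=-1)
    # "hard" heuristic (illustrative): candidates whose normalised distance to the
    # nipple differs by more than `band` between views cannot be the same lesion
    gate = np.abs(pos_cc[:, None] - pos_mlo[None, :]) > band
    d[gate] = np.inf
    matches = []
    while np.isfinite(d).any():                    # greedy best-first assignment
        i, j = np.unravel_index(np.argmin(d), d.shape)
        matches.append((i, j, d[i, j]))
        d[i, :], d[:, j] = np.inf, np.inf          # each candidate is matched at most once
    return matches

feats_cc, feats_mlo = np.random.rand(3, 128), np.random.rand(4, 128)
pos_cc, pos_mlo = np.random.rand(3), np.random.rand(4)
print(match_lesions(feats_cc, feats_mlo, pos_cc, pos_mlo))
```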

    Studies on deep learning approach in breast lesions detection and cancer diagnosis in mammograms

    In recent years, breast cancer has accounted for the largest proportion of newly diagnosed cancers in women. Early diagnosis of breast cancer can improve treatment outcomes and reduce mortality. Mammography is convenient and reliable, and is the most commonly used method for breast cancer screening. However, manual examination is limited by cost and by radiologists' experience, which leads to high false positive rates and missed findings. A high-performance computer-aided diagnosis (CAD) system is therefore important for lesion detection and cancer diagnosis. Traditional CADs for cancer diagnosis require a large number of manually selected features and still suffer from high false positive rates. Methods based on deep learning can automatically extract image features through the network, but their performance is limited by multicenter data biases, the complexity of lesion features, and the high cost of annotations. It is therefore necessary to propose a CAD system that improves lesion detection and cancer diagnosis while addressing these problems. This thesis aims to use deep learning methods to improve the performance and effectiveness of CAD systems for lesion detection and cancer diagnosis. Starting from the detection of multiple lesion types with deep learning methods that take full account of the characteristics of mammography, the thesis explores a microcalcification detection method based on multiscale feature fusion and a mass detection method based on multi-view enhancement. A classification method based on multi-instance learning is then developed, which integrates the detection results of the above methods to realize precise lesion detection and cancer diagnosis in mammography.
    For the detection of microcalcifications, a microcalcification detection network named MCDNet is proposed to overcome multicenter data biases, the low resolution of network inputs, and scale differences between microcalcifications. In MCDNet, Adaptive Image Adjustment mitigates the impact of multicenter biases and maximizes the effective input pixels. The proposed pyramid network with shortcut connections ensures that the feature maps used for detection contain more precise localization and classification information about multiscale objects. Within this structure, a trainable Weighted Feature Fusion is proposed to improve detection performance for objects at both scales by learning the contribution of the feature maps from different stages. Experiments show that MCDNet outperforms other methods in robustness and precision: at an average of one false positive per image, the recall rates for benign and malignant microcalcifications are 96.8% and 98.9%, respectively. MCDNet can effectively help radiologists detect microcalcifications in clinical applications.
    For the detection of breast masses, a weakly supervised multi-view enhancing mass detection network named MVMDNet is proposed to address the lack of lesion-level labels. MVMDNet can be trained on image-level labeled datasets and extracts additional localization information by exploring the geometric relation between multi-view mammograms. In Multi-view Enhancing, Spatial Correlation Attention is proposed to extract corresponding location information between different views, while a Sigmoid Weighted Fusion module fuses diagnostic and auxiliary features to improve localization precision. A CAM-based Detection module is proposed to provide mass detections from the classification labels. Experimental results on both an in-house dataset and a public dataset, [email protected] and [email protected] (recall rate at the average number of false positives per image), show that MVMDNet achieves state-of-the-art performance among weakly supervised methods and generalizes robustly, alleviating multicenter biases.
    For cancer diagnosis, a breast cancer classification network named CancerDNet based on multi-instance learning is proposed. CancerDNet addresses the complexity of lesion features in whole-image classification by utilizing the lesion detection results from the previous chapters. Whole Case Bag Learning is proposed to combine the features extracted from the four views, working like a radiologist to classify each case. Low-capacity Instance Learning and High-capacity Instance Learning integrate the detections of multiple lesion types into CancerDNet, so that the model can fully consider lesions with complex features in the classification task. CancerDNet achieves an AUC of 0.907 and an AUC of 0.925 on the in-house and public datasets, respectively, which is better than current methods. Across these three parts, the thesis fully considers the characteristics of mammograms and proposes deep-learning-based methods for lesion detection and cancer diagnosis. Experimental results on in-house and public datasets show that the proposed methods achieve the state of the art in microcalcification detection, mass detection, and case-level cancer classification, and generalize well across centers. The results also show that the proposed methods can effectively assist radiologists in making diagnoses while saving labor costs.
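    As an illustration of the trainable Weighted Feature Fusion idea mentioned for MCDNet, the sketch below learns one softmax-normalised weight per pyramid stage and sums the resized stage feature maps accordingly. It is a generic PyTorch sketch of learned multiscale fusion under those assumptions, not the published MCDNet module.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WeightedFeatureFusion(nn.Module):
    """Illustrative trainable weighted fusion: feature maps from different
    pyramid stages are resized to a common resolution and summed with
    learned, softmax-normalised weights, so the network can learn how much
    each stage contributes to detecting small vs. large objects."""
    def __init__(self, num_stages: int = 3):
        super().__init__()
        self.weights = nn.Parameter(torch.ones(num_stages))   # one weight per stage

    def forward(self, feature_maps):                 # list of (B, C, Hi, Wi) maps
        target = feature_maps[0].shape[-2:]          # fuse at the finest resolution
        w = torch.softmax(self.weights, dim=0)       # contributions sum to 1
        return sum(
            w[i] * F.interpolate(f, size=target, mode="bilinear", align_corners=False)
            for i, f in enumerate(feature_maps)
        )

fusion = WeightedFeatureFusion()
maps = [torch.randn(1, 64, s, s) for s in (64, 32, 16)]
print(fusion(maps).shape)                            # torch.Size([1, 64, 64, 64])
```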