12 research outputs found

    Feature learning based on connectivity estimation for unbiased mammography mass classification

    Breast cancer is the most commonly diagnosed female malignancy worldwide. Recent developments in deep convolutional neural networks have shown promising performance for breast cancer detection and classification. However, due to variations in appearance and small datasets, the networks can learn biased features when distinguishing malignant from benign instances. To investigate these aspects, we trained a densely connected convolutional network (DenseNet) to obtain representative features of breast tissue, selecting texture features that represent different physical morphological characteristics as the network's inputs. Connectivity estimation, represented by a connection matrix, is proposed for feature learning. To make the network provide an unbiased prediction, we used k-nearest neighbors to find the k training samples whose connection matrices are closest to the test case. When evaluated on OMI-DB, we achieved an improved diagnostic accuracy of 73.89±2.89%, compared with 71.35±2.66% for the initial CNN model, a statistically significant difference (p=0.00036). The k retrieved training samples also provide visual explanations that are useful for understanding the model's predictions and failures.
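    The abstract does not detail the retrieval step, so the following is only a minimal sketch of a k-nearest-neighbor lookup over connection matrices, assuming each matrix is flattened to a vector and compared by Euclidean distance; the names train_matrices and train_labels are hypothetical.

```python
# Minimal sketch of the k-NN retrieval step described above (assumptions,
# not the paper's implementation): connection matrices are flattened and
# compared with Euclidean distance; labels are 0 (benign) / 1 (malignant).
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_explain(test_matrix, train_matrices, train_labels, k=5):
    """Return the k training samples whose connection matrices are
    closest to the test case, plus a majority-vote prediction."""
    X = train_matrices.reshape(len(train_matrices), -1)  # flatten each matrix
    nn = NearestNeighbors(n_neighbors=k).fit(X)
    _, idx = nn.kneighbors(test_matrix.reshape(1, -1))
    neighbours = idx[0]
    # Majority vote over the retrieved neighbours gives the prediction;
    # the neighbours themselves serve as visual explanations.
    vote = np.bincount(train_labels[neighbours]).argmax()
    return neighbours, vote
```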

    Deep Learning for Medical Imaging in a Biased Environment

    Deep learning (DL) based applications have successfully solved numerous problems in machine perception. In radiology, DL-based image analysis systems are rapidly evolving and show progress in guiding treatment decisions, diagnosing and localizing disease on medical images, and improving radiologists' workflow. However, many DL-based radiological systems fail to generalize when deployed in new hospital settings, and the causes of these failures are not always clear. Although significant effort continues to be invested in applying DL algorithms to radiological data, many open questions and issues that arise from incomplete datasets remain. To bridge the gap, we first review the current state of artificial intelligence applied to radiology data, then juxtapose the use of classical computer vision features (i.e., hand-crafted features) with the recent advances brought about by deep learning. Using DL, however, is not an excuse for a lack of rigorous study design, which we demonstrate by proposing sanity tests that determine when a DL system is right for the wrong reasons. Having established the appropriate way to assess DL systems, we then turn to improving their efficacy and generalizability by leveraging prior information about human physiology and data derived from dual-energy computed tomography scans. In this dissertation, we address these gaps in the radiology literature by introducing new tools, testing strategies, and methods to mitigate the influence of dataset biases.
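    The abstract does not specify the proposed sanity tests. One common example of a check for a model being "right for the wrong reasons" is an occlusion test, sketched below; the Keras-style predict API, the lesion_mask input, and the 0.1 threshold are all assumptions, not the dissertation's method.

```python
# Generic occlusion sanity test (an illustrative assumption, not the
# dissertation's specific test): if a binary classifier stays confident
# after the pathology region is masked out, it is likely relying on
# spurious cues rather than the disease itself.
def occlusion_sanity_test(model, image, lesion_mask, threshold=0.1):
    """Flag a model whose prediction barely changes when the lesion is
    hidden, i.e. whose evidence is not where the disease is."""
    occluded = image.copy()
    occluded[lesion_mask] = image.mean()       # replace lesion with mean intensity
    p_orig = float(model.predict(image[None]).ravel()[0])
    p_masked = float(model.predict(occluded[None]).ravel()[0])
    return abs(p_orig - p_masked) < threshold  # True = suspicious
```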

    Data synthesis and adversarial networks: A review and meta-analysis in cancer imaging

    Despite technological and medical advances, the detection, interpretation, and treatment of cancer based on imaging data continue to pose significant challenges. These include inter-observer variability, class imbalance, dataset shifts, inter- and intra-tumour heterogeneity, malignancy determination, and treatment effect uncertainty. Given the recent advancements in image synthesis, Generative Adversarial Networks (GANs), and adversarial training, we assess the potential of these technologies to address a number of key challenges of cancer imaging. We categorise these challenges into (a) data scarcity and imbalance, (b) data access and privacy, (c) data annotation and segmentation, (d) cancer detection and diagnosis, and (e) tumour profiling, treatment planning and monitoring. Based on our analysis of 164 publications that apply adversarial training techniques in the context of cancer imaging, we highlight multiple underexplored solutions with research potential. We further contribute the Synthesis Study Trustworthiness Test (SynTRUST), a meta-analysis framework for assessing the validation rigour of medical image synthesis studies. SynTRUST is based on 26 concrete measures of thoroughness, reproducibility, usefulness, scalability, and tenability. Based on SynTRUST, we analyse 16 of the most promising cancer imaging challenge solutions and observe generally high validation rigour, but also identify several desirable improvements. With this work, we strive to bridge the gap between the needs of the clinical cancer imaging community and the current and prospective research on data synthesis and adversarial networks in the artificial intelligence community.
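    The 26 SynTRUST measures are not enumerated in the abstract; the sketch below only illustrates the kind of checklist-based scoring such a meta-analysis framework implies. The five dimension names come from the text, while the pass/fail inputs and the aggregation rule are assumptions.

```python
# Illustrative checklist scoring in the spirit of SynTRUST (the real
# framework's measures and weighting are not given in the abstract).
SYNTRUST_DIMENSIONS = ["thoroughness", "reproducibility", "usefulness",
                       "scalability", "tenability"]

def syntrust_score(measures: dict[str, list[bool]]) -> float:
    """measures maps each dimension to pass/fail results for its
    individual measures; returns the fraction of measures satisfied."""
    checks = [m for dim in SYNTRUST_DIMENSIONS for m in measures.get(dim, [])]
    return sum(checks) / len(checks) if checks else 0.0

# Example: a study passing 2 of 3 assessed measures scores ~0.67.
print(syntrust_score({"thoroughness": [True, False], "usefulness": [True]}))
```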

    Deep Learning in Medical Image Analysis

    The accelerating power of deep learning in diagnosing diseases will empower physicians and speed up decision making in clinical environments. Applications of modern medical instruments and the digitalization of medical care have generated enormous amounts of medical images in recent years. In this big-data arena, new deep learning methods and computational models for efficient data processing, analysis, and modeling of the generated data are crucially important for clinical applications and for understanding the underlying biological processes. This book presents and highlights novel algorithms, architectures, techniques, and applications of deep learning for medical image analysis.

    WiFi-Based Human Activity Recognition Using Attention-Based BiLSTM

    Recently, significant efforts have been made to explore human activity recognition (HAR) techniques that use information gathered by existing indoor wireless infrastructures through WiFi signals, without requiring the monitored subject to carry a dedicated device. The key intuition is that different activities introduce different multipath effects in WiFi signals and generate different patterns in the time series of channel state information (CSI). In this paper, we propose and evaluate a full pipeline for a CSI-based human activity recognition framework covering 12 activities in three different spatial environments, using two deep learning models: ABiLSTM and CNN-ABiLSTM. Evaluation experiments demonstrate that the proposed models outperform state-of-the-art models. The experiments also show that the proposed models can be applied to other environments with different configurations, albeit with some caveats. The proposed ABiLSTM model achieves overall accuracies of 94.03%, 91.96%, and 92.59% across the three target environments, while the proposed CNN-ABiLSTM model reaches 98.54%, 94.25%, and 95.09% across those same environments.
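    The paper's exact architecture is not given in the abstract; below is a minimal PyTorch sketch of an attention-based BiLSTM over CSI time series, where the layer sizes, the additive attention form, and the input shape (batch, time, subcarriers) are assumptions rather than the authors' configuration.

```python
# Minimal sketch of an attention-based BiLSTM (ABiLSTM) for CSI-based
# activity classification; dimensions below are assumed for illustration.
import torch
import torch.nn as nn

class ABiLSTM(nn.Module):
    def __init__(self, n_subcarriers=90, hidden=128, n_classes=12):
        super().__init__()
        self.bilstm = nn.LSTM(n_subcarriers, hidden, batch_first=True,
                              bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)    # scores each time step
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                       # x: (batch, time, subcarriers)
        h, _ = self.bilstm(x)                   # (batch, time, 2*hidden)
        w = torch.softmax(self.attn(h), dim=1)  # attention weights over time
        context = (w * h).sum(dim=1)            # weighted sum of hidden states
        return self.head(context)               # class logits

logits = ABiLSTM()(torch.randn(8, 200, 90))     # e.g. 200 CSI frames per sample
```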

    Irish Machine Vision and Image Processing Conference, Proceedings
