595 research outputs found

    Cancer diagnosis using deep learning: A bibliographic review

    Get PDF
    In this paper, we first describe the basics of the field of cancer diagnosis, covering the steps of cancer diagnosis and the typical classification methods used by doctors, to give readers a historical view of cancer classification techniques. These methods include the Asymmetry, Border, Color and Diameter (ABCD) method, the seven-point detection method, the Menzies method, and pattern analysis. They are used regularly by doctors for cancer diagnosis, although they are not considered efficient enough to deliver the desired diagnostic performance. Moreover, with all types of audience in mind, the basic evaluation criteria are also discussed: the receiver operating characteristic (ROC) curve, the area under the ROC curve (AUC), F1 score, accuracy, specificity, sensitivity, precision, Dice coefficient, average accuracy, and Jaccard index. Since the previously used methods are considered inefficient, better and smarter methods for cancer diagnosis are needed. Artificial intelligence applied to cancer diagnosis is gaining attention as a way to build better diagnostic tools; in particular, deep neural networks can be used successfully for intelligent image analysis. The basic framework of how such machine learning operates on medical imaging is provided in this study, i.e., pre-processing, image segmentation, and post-processing. The second part of this manuscript describes the different deep learning techniques, such as convolutional neural networks (CNNs), generative adversarial networks (GANs), deep autoencoders (DAEs), restricted Boltzmann machines (RBMs), stacked autoencoders (SAEs), convolutional autoencoders (CAEs), recurrent neural networks (RNNs), long short-term memory (LSTM) networks, multi-scale convolutional neural networks (M-CNNs), and multi-instance learning convolutional neural networks (MIL-CNNs). For each technique, we provide Python code, to allow interested readers to experiment with the cited algorithms on their own diagnostic problems.
The third part of this manuscript compiles the deep learning models that have been applied successfully to different types of cancer. Given the length of the manuscript, we restrict ourselves to breast cancer, lung cancer, brain cancer, and skin cancer. The purpose of this bibliographic review is to give researchers who opt to implement deep learning and artificial neural networks for cancer diagnosis a from-scratch overview of the state-of-the-art achievements.
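As a concrete illustration of the evaluation criteria listed above, the overlap-based scores can all be computed from a confusion matrix. The following NumPy sketch is our own illustrative helper, not code from the reviewed paper:

```python
import numpy as np

def binary_metrics(y_true, y_pred):
    """Common diagnostic evaluation metrics from binary labels or masks."""
    y_true = np.asarray(y_true).astype(bool)
    y_pred = np.asarray(y_pred).astype(bool)
    tp = np.sum(y_true & y_pred)    # true positives
    tn = np.sum(~y_true & ~y_pred)  # true negatives
    fp = np.sum(~y_true & y_pred)   # false positives
    fn = np.sum(y_true & ~y_pred)   # false negatives
    sensitivity = tp / (tp + fn)    # recall / true positive rate
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    dice = 2 * tp / (2 * tp + fp + fn)   # equals F1 for binary masks
    jaccard = tp / (tp + fp + fn)
    return dict(sensitivity=sensitivity, specificity=specificity,
                precision=precision, accuracy=accuracy,
                f1=f1, dice=dice, jaccard=jaccard)

m = binary_metrics([1, 1, 1, 0, 0, 0, 0, 1], [1, 1, 0, 0, 0, 1, 0, 1])
```

Note that for hard binary predictions the Dice coefficient coincides with the F1 score, while the Jaccard index is always less than or equal to it; the ROC curve and AUC additionally require continuous scores rather than hard labels.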

    Incorporating Deep Learning Techniques into Outcome Modeling in Non-Small Cell Lung Cancer Patients after Radiation Therapy

    Full text link
    Radiation therapy (radiotherapy), together with surgery, chemotherapy, and immunotherapy, is a common modality in cancer treatment. In radiotherapy, patients are given high doses of ionizing radiation aimed at killing cancer cells and shrinking tumors. Conventional radiotherapy usually gives a standard prescription to all patients; however, as patients are likely to respond heterogeneously to treatment owing to multiple prognostic factors, personalization of radiotherapy treatment is desirable. Outcome models can serve as clinical decision support tools in personalized treatment, helping to evaluate patients’ treatment options before or during fractionated treatment, and can further provide insights into the design of new clinical protocols. In outcome modeling, two indices are usually investigated: tumor control probability (TCP) and normal tissue complication probability (NTCP). Current outcome models, e.g., analytical models and data-driven models, either fail to take into account complex interactions between physical and biological variables or require complicated feature selection procedures. Therefore, in our studies, deep learning (DL) techniques are incorporated into outcome modeling to predict local control (LC), which is TCP in our case, and radiation pneumonitis (RP), which is NTCP in our case, in non-small-cell lung cancer (NSCLC) patients after radiotherapy. These techniques can improve the prediction performance of outcome models and simplify model development. Additionally, longitudinal data association, actuarial prediction, and multi-endpoint prediction are considered in our models. This work was carried out in three consecutive studies. In the first study, a composite architecture consisting of a variational autoencoder (VAE) and a multi-layer perceptron (MLP) was investigated and applied to RP prediction. The architecture enabled simultaneous dimensionality reduction and prediction.
The novel VAE-MLP joint architecture, with an area under the receiver operating characteristic (ROC) curve (AUC) [95% CI] of 0.781 [0.737-0.808], outperformed a strategy involving separate VAEs and classifiers (AUC 0.624 [0.577-0.658]). In the second study, composite architectures consisting of a 1D convolutional layer or locally connected layer plus an MLP, which take longitudinal associations into account, were applied to predict LC. A composite convolutional neural network (CNN)-MLP architecture that can model both longitudinal and non-longitudinal data yielded an AUC of 0.832 [0.807-0.841], while a plain MLP yielded only an AUC of 0.785 [0.752-0.792] for LC prediction. In the third study, rather than binary classification, time-to-event information was also incorporated for actuarial prediction. DL architectures ADNN-DVH, which considers dosimetric information, ADNN-com, which further combines biological and imaging data, and ADNN-com-joint, which realizes multi-endpoint prediction, were investigated. Analytical models were also built for comparison purposes. Among all the models, ADNN-com-joint performed best, yielding c-indexes of 0.705 [0.676-0.734] for RP2 and 0.740 [0.714-0.765] for LC, and an AU-FROC of 0.720 [0.671-0.801] for joint prediction. The performance of the proposed models was also tested on a cohort of newly treated patients and on the multi-institutional RTOG 0617 dataset. Taken together, these studies indicate that DL techniques can be utilized to improve the performance of outcome models and potentially provide guidance to physicians during decision making. Specifically, a VAE-MLP joint architecture can realize simultaneous dimensionality reduction and prediction, boosting the performance of conventional outcome models; a 1D CNN-MLP joint architecture can utilize temporally associated variables generated during the span of radiotherapy; and the DL model ADNN-com-joint can realize multi-endpoint prediction, which allows considering competing risk factors.
All of those contribute to a step toward enabling outcome models as real clinical decision support tools.
Ph.D., Applied Physics, University of Michigan, Horace H. Rackham School of Graduate Studies
http://deepblue.lib.umich.edu/bitstream/2027.42/162923/1/sunan_1.pd
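The core idea of the VAE-MLP joint architecture, one latent space shared by a reconstruction objective and an outcome classifier and trained under a single joint loss, can be sketched as an untrained forward pass. The dimensions, weights, and loss weighting below are toy assumptions for illustration, not the thesis implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def dense(x, w, b):
    return x @ w + b

# Toy dimensions: 100 input features (e.g. dosimetric variables), 8-dim latent space.
d_in, d_lat = 100, 8
enc_w, enc_b = rng.normal(0, 0.1, (d_in, 2 * d_lat)), np.zeros(2 * d_lat)  # encoder -> (mu, logvar)
dec_w, dec_b = rng.normal(0, 0.1, (d_lat, d_in)), np.zeros(d_in)           # decoder
clf_w, clf_b = rng.normal(0, 0.1, (d_lat, 1)), np.zeros(1)                 # MLP classifier head

def vae_mlp_joint_loss(x, y):
    h = dense(x, enc_w, enc_b)
    mu, logvar = h[:, :d_lat], h[:, d_lat:]
    z = mu + rng.standard_normal(mu.shape) * np.exp(0.5 * logvar)  # reparameterisation trick
    x_hat = dense(z, dec_w, dec_b)                                 # reconstruction branch
    p = 1.0 / (1.0 + np.exp(-dense(z, clf_w, clf_b)))              # outcome (e.g. RP) probability
    recon = np.mean((x - x_hat) ** 2)                              # reconstruction error
    kl = -0.5 * np.mean(1 + logvar - mu ** 2 - np.exp(logvar))     # KL divergence to N(0, I)
    bce = -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))        # classification loss
    return recon + kl + bce  # one joint loss -> simultaneous reduction and prediction

x = rng.standard_normal((4, d_in))
y = np.array([[0.0], [1.0], [0.0], [1.0]])
loss = vae_mlp_joint_loss(x, y)
```

Training all three terms together is what distinguishes the joint architecture from the baseline it is compared against, where the VAE is fitted first and a separate classifier is fitted on its latent codes afterwards.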

    Deep Learning Meets Hyperspectral Image Analysis: A Multidisciplinary Review

    Get PDF
    Modern hyperspectral imaging systems produce huge datasets that potentially convey a great abundance of information; such a resource, however, poses many challenges in the analysis and interpretation of these data. Deep learning approaches certainly offer a great variety of opportunities for solving classical imaging tasks and also for approaching new, stimulating problems in the spatial–spectral domain. This is fundamental in the driving sector of Remote Sensing, where hyperspectral technology was born and has mostly developed, but it is perhaps even more true in the multitude of current and evolving application sectors that involve these imaging technologies. The present review develops on two fronts: on the one hand, it is aimed at domain professionals who want an up-to-date overview of how hyperspectral acquisition techniques can combine with deep learning architectures to solve specific tasks in different application fields; on the other hand, we target machine learning and computer vision experts by giving them a picture of how deep learning technologies are applied to hyperspectral data from a multidisciplinary perspective. The presence of these two viewpoints and the inclusion of application fields other than Remote Sensing are the original contributions of this review, which also highlights some potentialities and critical issues related to the observed development trends.

    Deep Learning in Medical Image Analysis

    Get PDF
    The accelerating power of deep learning in diagnosing diseases will empower physicians and speed up decision making in clinical environments. Applications of modern medical instruments and the digitalization of medical care have generated enormous amounts of medical images in recent years. In this big-data arena, new deep learning methods and computational models for efficient processing, analysis, and modeling of the generated data are crucially important for clinical applications and for understanding the underlying biological processes. This book presents and highlights novel algorithms, architectures, techniques, and applications of deep learning for medical image analysis.

    Double Backpropagation with Applications to Robustness and Saliency Map Interpretability

    Get PDF
    This thesis is concerned with work connected to double backpropagation, which arises when first-order optimization methods are applied to a neural network's loss function if that loss itself contains derivatives. Its connection to robustness and saliency-map interpretability is explained.
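A minimal worked example of the setting described above: for the toy model f(x; w) = wx with an input-gradient penalty, the derivative df/dx = w appears inside the loss, so computing the gradient with respect to w differentiates through a derivative ("double" backpropagation). The closed forms below are our illustrative toy, not code from the thesis; the hand-derived gradient is checked against finite differences:

```python
# Toy model f(x; w) = w * x with an input-gradient penalty, so the loss
# itself contains the derivative df/dx = w:
#   L(w) = (w*x - y)**2 + lam * (df/dx)**2 = (w*x - y)**2 + lam * w**2
# Differentiating L with respect to w therefore differentiates through
# df/dx, the double-backpropagation step; here it is done in closed form.

def loss(w, x, y, lam):
    return (w * x - y) ** 2 + lam * w ** 2

def grad_loss(w, x, y, lam):
    # gradient of the data term plus gradient of the input-gradient penalty
    return 2 * x * (w * x - y) + 2 * lam * w

w, x, y, lam = 0.7, 1.5, 2.0, 0.1
analytic = grad_loss(w, x, y, lam)
eps = 1e-6
numeric = (loss(w + eps, x, y, lam) - loss(w - eps, x, y, lam)) / (2 * eps)
```

In a real network, automatic differentiation performs the same step: a first backward pass produces the input gradient inside the loss, and a second backward pass through that result yields the parameter update.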

    Automated Distinct Bone Segmentation from Computed Tomography Images using Deep Learning

    Get PDF
    Large-scale CT scans are frequently performed for forensic and diagnostic purposes, to plan and direct surgical procedures, and to track the development of bone-related diseases. This often involves radiologists who have to annotate bones manually or semi-automatically, which is a time-consuming task. Their annotation workload can be reduced by automated segmentation and detection of individual bones. This automation of distinct bone segmentation not only has the potential to accelerate current workflows but also opens up new possibilities for processing and presenting medical data for planning, navigation, and education. In this thesis, we explored the use of deep learning for automating the segmentation of all individual bones within an upper-body CT scan. To do so, we had to find a network architecture that provides a good trade-off between the problem's high computational demands and the accuracy of the results. After finding a baseline method and enlarging the dataset, we set out to eliminate the most prevalent types of error. To this end, we introduced a novel method called binary-prediction-enhanced multi-class (BEM) inference, which separates the task into two: distinguishing bone from non-bone is conducted separately from identifying the individual bones, and both predictions are then merged, which leads to superior results. Another type of error is tackled by our developed architecture, the Sneaky-Net, which receives additional inputs with larger fields of view but at a smaller resolution. We can thus sneak more extensive areas of the input into the network while keeping the growth in additional pixels in check. Overall, we present a deep-learning-based method that reliably segments most of the over one hundred distinct bones present in upper-body CT scans in an end-to-end trained manner, quickly enough to be used in interactive software.
Our algorithm has been included in our group's virtual reality medical image visualisation software SpectoVR, with the plan to be used as one of the puzzle pieces in surgical planning and navigation, as well as in the education of future doctors.
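The BEM merging step can be sketched in a few lines of NumPy. This is an illustrative reading of the idea under assumed conventions (class channel 0 is background, a fixed bone-probability threshold), not the thesis code:

```python
import numpy as np

def bem_merge(multiclass_prob, binary_prob, thr=0.5):
    """Merge a multi-class bone prediction with a binary bone/non-bone one.

    multiclass_prob: (C, ...) per-class probabilities, channel 0 = background.
    binary_prob:     (...)    probability of "any bone" per voxel.
    """
    labels = np.argmax(multiclass_prob, axis=0)
    bone = binary_prob >= thr
    # Binary net says bone but multi-class chose background:
    # reassign to the most likely non-background class.
    fix = bone & (labels == 0)
    labels[fix] = 1 + np.argmax(multiclass_prob[1:, fix], axis=0)
    # Binary net says non-bone: force background.
    labels[~bone] = 0
    return labels

# Four voxels, background + two bone classes.
mc = np.array([[0.6, 0.2, 0.1, 0.8],
               [0.3, 0.5, 0.2, 0.1],
               [0.1, 0.3, 0.7, 0.1]])
bp = np.array([0.9, 0.8, 0.3, 0.1])
merged = bem_merge(mc, bp)
```

The binary task (bone vs. non-bone) is much easier than the 100-plus-class task, so letting it veto the multi-class background decision removes a common failure mode of the multi-class network alone.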

    Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries

    Get PDF
    This two-volume set, LNCS 12962 and 12963, constitutes the thoroughly refereed proceedings of the 7th International MICCAI Brainlesion Workshop, BrainLes 2021, as well as the RSNA-ASNR-MICCAI Brain Tumor Segmentation (BraTS) Challenge, the Federated Tumor Segmentation (FeTS) Challenge, the Cross-Modality Domain Adaptation (CrossMoDA) Challenge, and the challenge on Quantification of Uncertainties in Biomedical Image Quantification (QUBIQ). These were held jointly with the 24th International Conference on Medical Image Computing and Computer Assisted Intervention, MICCAI 2021, in September 2021. The 91 revised papers presented in these volumes were selected from 151 submissions. Due to the COVID-19 pandemic, the conference was held virtually. This is an open access book.

    Deep learning in medical imaging and radiation therapy

    Full text link
    Peer Reviewed
    https://deepblue.lib.umich.edu/bitstream/2027.42/146980/1/mp13264_am.pdf
    https://deepblue.lib.umich.edu/bitstream/2027.42/146980/2/mp13264.pd

    Solving inverse problems for medical applications

    Get PDF
    An accurate feedback system is essential for improving the navigation of surgical tools. This thesis investigates how to solve inverse problems using the example of two medical prototypes. The first aims to detect the Sentinel Lymph Node (SLN) during biopsy, which would allow the surgeon to remove the SLN through a small incision, reducing trauma to the patient. The second investigates how to extract depth and tissue-characteristic information during bone ablation from the emitted acoustic wave. We solved inverse problems to find our desired solution, investigating three approaches. In Chapter 3, we had an accurate simulation of the forward problem and used a fingerprinting algorithm: we compared the measurement with simulations of the forward problem, and the simulation most similar to the measurement was taken as a good approximation of the solution. To do so, we used a dictionary of solutions, which offers high computational speed. However, depending on how fine the grid is, it takes a long time to simulate all solutions of the forward problem, and considerable memory is needed to store the dictionary. In Chapter 4, we examined the Adaptive Eigenspace method for solving the Helmholtz equation (the Fourier-transformed wave equation), using a conjugate quasi-Newton (CqN) algorithm. We solved the Helmholtz equation and reconstructed the source shape and the medium velocity from the acoustic wave at the boundary of the area of interest. We accomplished this in a 2D model; we note that the computation for the 3D model was very long and expensive. In addition, we simplified some conditions and could not confirm the results of our simulations in an ex-vivo experiment. In Chapter 5, we developed a different approach: we conducted multiple experiments, acquired many acoustic measurements during the ablation process, and trained a Neural Network (NN) to predict the ablation depth in an end-to-end model.
The computational cost of predicting the depth is relatively low once training is complete, and an end-to-end network requires almost no pre-processing. However, there were some drawbacks, e.g., it is cumbersome to obtain the ground truth. This thesis has investigated several approaches to solving inverse problems in medical applications. From Chapter 3 we conclude that if the forward problem is well known, the fingerprinting algorithm can drastically improve the speed of the reconstruction; it is ideal for reconstructing a position or for providing a first guess for more complex reconstructions. The conclusion of Chapter 4 is that the Adaptive Eigenspace method can drastically reduce the number of unknown parameters, and we were able to reconstruct the medium velocity and the acoustic wave generator; however, the model is expensive for 3D simulations, and the number of transducers required was not feasible for our intended setup. In Chapter 5 we found a correlation between the depth of the laser cut and the acoustic wave using only a single air-coupled transducer, which encourages further investigation into characterizing the tissue during the ablation process.
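The fingerprinting idea from Chapter 3 (precompute a dictionary of forward-problem simulations over a parameter grid, then map a measurement to the grid point whose simulated signal is closest) can be sketched as follows. The forward model, grid, and noise level here are invented for illustration and are not the thesis setup:

```python
import numpy as np

def forward(depth, t):
    # Hypothetical forward model: an acoustic signal whose phase shifts with depth.
    return np.sin(2 * np.pi * (t - depth))

t = np.linspace(0.0, 1.0, 200)
grid = np.linspace(0.1, 0.9, 81)                      # candidate depths, step 0.01
dictionary = np.stack([forward(d, t) for d in grid])  # simulated once, stored

def fingerprint_match(measurement):
    # Nearest dictionary entry in the least-squares sense.
    errors = np.sum((dictionary - measurement) ** 2, axis=1)
    return grid[np.argmin(errors)]

true_depth = 0.43
noise = 0.05 * np.random.default_rng(1).standard_normal(t.size)
estimate = fingerprint_match(forward(true_depth, t) + noise)
```

The trade-off noted above is visible here: matching is a single vectorized comparison, but the dictionary must be simulated and stored in advance, and its size grows with the fineness of the parameter grid.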