
    A Survey on Deep Learning in Medical Image Analysis

    Deep learning algorithms, in particular convolutional networks, have rapidly become a methodology of choice for analyzing medical images. This paper reviews the major deep learning concepts pertinent to medical image analysis and summarizes over 300 contributions to the field, most of which appeared in the last year. We survey the use of deep learning for image classification, object detection, segmentation, registration, and other tasks, and provide concise overviews of studies per application area. Open challenges and directions for future research are discussed. Comment: Revised survey includes an expanded discussion section and a reworked introductory section on common deep architectures. Added missed papers from before Feb 1st 201

    Towards Deeper Understanding in Neuroimaging

    Neuroimaging is a growing domain of research, with advances in machine learning holding tremendous potential to expand understanding in neuroscience and improve public health. Deep neural networks have recently and rapidly achieved historic success in numerous domains and, as a consequence, have redefined the landscape of automated learners, promising significant advances in many areas of research. Despite these advances and their advantages over traditional machine learning methods, deep neural networks have yet to permeate neuroscience studies significantly, particularly as a tool for discovery. This dissertation presents well-established and novel tools for unsupervised learning that aid feature discovery, with relevant applications to neuroimaging. Through the works within, this dissertation presents strong evidence that deep learning is a viable and important tool for neuroimaging studies.

    State-space model with deep learning for functional dynamics estimation in resting-state fMRI

    Studies on resting-state functional Magnetic Resonance Imaging (rs-fMRI) have shown that different brain regions still actively interact with each other while a subject is at rest, and that such functional interaction is not stationary but changes over time. In terms of a large-scale brain network, in this paper we focus on time-varying patterns of functional networks, i.e., functional dynamics, inherent in rs-fMRI, one of the emerging issues in network modelling. Specifically, we propose a novel methodological architecture that combines deep learning and state-space modelling, and apply it to rs-fMRI-based Mild Cognitive Impairment (MCI) diagnosis. We first devise a Deep Auto-Encoder (DAE) to discover hierarchical non-linear functional relations among regions, by which we transform the regional features into an embedding space whose bases are complex functional networks. Given the embedded functional features, we then use a Hidden Markov Model (HMM) to estimate the dynamic characteristics of functional networks inherent in rs-fMRI via internal states, which are unobservable but can be inferred statistically from observations. By building a generative model with an HMM, we estimate the likelihood that the input rs-fMRI features belong to the corresponding status, i.e., MCI or normal healthy control, based on which we identify the clinical label of a testing subject. To validate the effectiveness of the proposed method, we performed experiments on two different datasets and compared our method with state-of-the-art methods in the literature. We also analyzed the functional networks learned by the DAE, estimated the functional connectivities by decoding hidden states in the HMM, and investigated the estimated functional connectivities by means of a graph-theoretic approach.
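    The diagnostic rule described in this abstract is generative: one model per class is trained on embedded feature sequences, and a test subject is assigned to the class whose model gives the higher likelihood. A minimal sketch of that decision rule, using simple i.i.d. Gaussian sequence models as a hypothetical stand-in for the trained HMMs (all names, parameters, and data below are illustrative, not taken from the paper):

    ```python
    import numpy as np

    def log_likelihood(seq, mean, var):
        """Log-likelihood of a feature sequence under an i.i.d. Gaussian model
        (a stand-in for an HMM's forward-algorithm likelihood)."""
        return float(np.sum(-0.5 * (np.log(2 * np.pi * var) + (seq - mean) ** 2 / var)))

    def classify(seq, models):
        """Assign the label whose generative model best explains the sequence."""
        scores = {label: log_likelihood(seq, *params) for label, params in models.items()}
        return max(scores, key=scores.get)

    # Hypothetical class-conditional models, as if fitted on embedded rs-fMRI features.
    models = {"MCI": (1.0, 0.5), "NC": (-1.0, 0.5)}

    test_seq = np.array([0.9, 1.2, 0.8, 1.1])  # embedded features of a test subject
    print(classify(test_seq, models))  # prints "MCI": the sequence is closer to that model
    ```

    In the paper itself the per-class likelihoods come from HMM inference over hidden states rather than an i.i.d. model; the argmax decision rule is the same.
    
    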

    Deep Learning in Medical Image Analysis

    Computer-assisted analysis for better interpretation of images has been a longstanding issue in the medical imaging field. On the image-understanding front, recent advances in machine learning, especially in the form of deep learning, have made a big leap toward helping identify, classify, and quantify patterns in medical images. Specifically, exploiting hierarchical feature representations learned solely from data, instead of handcrafted features designed mostly on the basis of domain-specific knowledge, lies at the core of these advances. In that way, deep learning is rapidly proving to be the state-of-the-art foundation, achieving enhanced performance in various medical applications. In this article, we introduce the fundamentals of deep learning methods and review their successes in image registration, detection of anatomical and cellular structures, tissue segmentation, computer-aided disease diagnosis and prognosis, and so on. We conclude by raising research issues and suggesting future directions for further improvement.

    Enhancing Breast Cancer Prediction Using Unlabeled Data

    This thesis presents a deep learning (DL) approach, using unlabeled data, for the automatic classification of invasive ductal carcinoma (IDC) tissue regions in whole-slide images (WSI) of breast cancer (BC). DL methods operate similarly to the human brain, working across multiple levels of interpretation. These techniques have been shown to outperform traditional approaches on some of the most complex problems, such as image classification and object detection. However, DL requires a large set of labeled data, which is difficult to obtain, especially in the medical field, as neither hospitals nor patients are willing to reveal such sensitive information. Moreover, machine learning (ML) systems achieve better performance at the cost of becoming increasingly complex. Because of that, they become less interpretable, which causes distrust from users. Model interpretability is a way to enhance trust in a system. 
    It is a very desirable property, especially crucial with the pervasive adoption of ML-based models in critical domains such as medicine, where predictions cannot be blindly followed, as this may result in harm to the patient. IDC is one of the most common and aggressive subtypes of breast cancer, accounting for nearly 80% of all cases. Assessment of the disease is a very time-consuming and challenging task for pathologists, as it involves scanning large swaths of benign regions to identify areas of malignancy. Meanwhile, accurate delineation of IDC in WSI is crucial for grading cancer aggressiveness. In this study, a semi-supervised learning (SSL) scheme is developed using a deep convolutional neural network (CNN) for IDC diagnosis. The proposed framework first augments a small set of labeled data with synthetic medical images generated by a generative adversarial network (GAN); this is followed by feature extraction with a network pre-trained on a larger dataset, and by a data-labeling algorithm that labels a much broader set of unlabeled data. After feeding the newly labeled set into the proposed CNN model, acceptable performance is achieved: an AUC of 0.86 and an F-measure of 0.77. Moreover, the proposed interpretability techniques produce explanations for the medical predictions and build trust in the presented CNN. The study demonstrates that a better understanding of the CNN's decisions can be enabled by visualizing the areas most important for a particular prediction and by finding the elements that drive the network's IDC and non-IDC decisions.
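    The data-labeling step of the SSL pipeline described above can be illustrated with a minimal pseudo-labeling sketch: starting from (assumed precomputed) feature vectors, class centroids are formed from the small labeled set, and each unlabeled sample receives the label of its nearest centroid. The nearest-centroid rule and all names and data here are illustrative assumptions, not the thesis's exact algorithm:

    ```python
    import numpy as np

    def pseudo_label(labeled_feats, labels, unlabeled_feats):
        """Assign each unlabeled feature vector the label of the nearest class
        centroid -- a simple stand-in for the labeling step of an SSL pipeline."""
        classes = sorted(set(labels))
        centroids = np.stack(
            [labeled_feats[np.array(labels) == c].mean(axis=0) for c in classes]
        )
        # Distance from every unlabeled sample to every class centroid.
        dists = np.linalg.norm(unlabeled_feats[:, None, :] - centroids[None, :, :], axis=2)
        return [classes[i] for i in dists.argmin(axis=1)]

    # Toy 2-D "features": IDC-like points near (1, 1), non-IDC-like near (-1, -1).
    labeled = np.array([[1.0, 1.1], [0.9, 1.0], [-1.0, -0.9], [-1.1, -1.0]])
    labels = ["IDC", "non-IDC"][:1] * 2 + ["non-IDC"] * 2
    unlabeled = np.array([[0.8, 1.2], [-0.9, -1.1]])

    print(pseudo_label(labeled, labels, unlabeled))  # prints ['IDC', 'non-IDC']
    ```

    The newly labeled samples would then be merged with the original labeled set to train the downstream classifier, which is the point of the semi-supervised scheme.
    
    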