A Survey on Deep Learning in Medical Image Analysis
Deep learning algorithms, in particular convolutional networks, have rapidly
become a methodology of choice for analyzing medical images. This paper reviews
the major deep learning concepts pertinent to medical image analysis and
summarizes over 300 contributions to the field, most of which appeared in the
last year. We survey the use of deep learning for image classification, object
detection, segmentation, registration, and other tasks and provide concise
overviews of studies per application area. Open challenges and directions for
future research are discussed. Comment: Revised survey includes expanded discussion section and reworked introductory section on common deep architectures. Added missed papers from before Feb 1st 201
Towards Deeper Understanding in Neuroimaging
Neuroimaging is a growing domain of research, with advances in machine learning having tremendous potential to expand understanding in neuroscience and improve public health. Deep neural networks have recently and rapidly achieved historic success in numerous domains, and as a consequence have completely redefined the landscape of automated learners, promising significant advances in numerous areas of research. Despite recent advances and advantages over traditional machine learning methods, deep neural networks have yet to permeate significantly into neuroscience studies, particularly as a tool for discovery. This dissertation presents well-established and novel tools for unsupervised learning which aid in feature discovery, with relevant applications to neuroimaging. Through the work presented within, this dissertation provides strong evidence that deep learning is a viable and important tool for neuroimaging studies.
State-space model with deep learning for functional dynamics estimation in resting-state fMRI
Studies on resting-state functional Magnetic Resonance Imaging (rs-fMRI) have shown that different brain regions still actively interact with each other while a subject is at rest, and that such functional interaction is not stationary but changes over time. In this paper, we focus on time-varying patterns of functional networks, i.e., functional dynamics, inherent in rs-fMRI, one of the emerging issues in large-scale brain network modelling. Specifically, we propose a novel methodological architecture that combines deep learning and state-space modelling, and apply it to rs-fMRI based Mild Cognitive Impairment (MCI) diagnosis. We first devise a Deep Auto-Encoder (DAE) to discover hierarchical non-linear functional relations among regions, by which we transform the regional features into an embedding space whose bases are complex functional networks. Given the embedded functional features, we then use a Hidden Markov Model (HMM) to estimate the dynamic characteristics of functional networks inherent in rs-fMRI via internal states, which are unobservable but can be inferred statistically from observations. By building a generative model with an HMM, we estimate the likelihood of the input rs-fMRI features under each clinical status, i.e., MCI or normal healthy control, and identify the label of a testing subject accordingly. To validate the effectiveness of the proposed method, we performed experiments on two different datasets and compared our method with state-of-the-art methods in the literature. We also analyzed the functional networks learned by the DAE, estimated functional connectivities by decoding hidden states in the HMM, and investigated the estimated functional connectivities by means of a graph-theoretic approach.
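The generative classification step described in this abstract — scoring a subject's feature sequence under one HMM per clinical status and taking the more likely class — can be sketched in miniature. The sketch below is a simplified, hypothetical stand-in, not the paper's model: discrete symbols replace the DAE-embedded features, Gaussian emissions are replaced by a discrete emission matrix, and all transition/emission values are made up for illustration.

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Scaled forward algorithm: log p(obs | HMM) for a discrete-emission HMM.
    pi: initial state probs (K,), A: transitions (K, K), B: emissions (K, M)."""
    alpha = pi * B[:, obs[0]]          # joint prob of each state and first symbol
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()               # rescale to avoid numerical underflow
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]  # propagate states, weight by emission prob
        loglik += np.log(alpha.sum())
        alpha /= alpha.sum()
    return loglik

# Illustrative 2-state models, one per class (all parameter values invented)
pi    = np.array([0.5, 0.5])
A_mci = np.array([[0.9, 0.1], [0.1, 0.9]])  # "sticky" states: slow dynamics
A_hc  = np.array([[0.5, 0.5], [0.5, 0.5]])  # fast state switching
B     = np.array([[0.8, 0.2], [0.2, 0.8]])  # P(symbol | state)

def classify(obs):
    """Assign the class whose generative HMM scores the sequence higher."""
    ll_mci = forward_loglik(obs, pi, A_mci, B)
    ll_hc  = forward_loglik(obs, pi, A_hc, B)
    return "MCI" if ll_mci > ll_hc else "HC"
```

The point of the sketch is only the decision rule: per-class likelihoods from a generative sequence model, compared to label a test subject.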
Deep Learning in Medical Image Analysis
Computer-assisted analysis for better interpretation of images has been a longstanding issue in the medical imaging field. On the image-understanding front, recent advances in machine learning, especially deep learning, have made a big leap toward helping identify, classify, and quantify patterns in medical images. Specifically, exploiting hierarchical feature representations learned solely from data, instead of handcrafted features designed mostly on the basis of domain-specific knowledge, lies at the core of these advances. In this way, deep learning is rapidly proving to be the state-of-the-art foundation, achieving enhanced performance in various medical applications. In this article, we introduce the fundamentals of deep learning methods and review their successes in image registration, detection of anatomical and cellular structures, tissue segmentation, computer-aided disease diagnosis and prognosis, and so on. We conclude by raising research issues and suggesting future directions for further improvement.
Enhancing Breast Cancer Prediction Using Unlabeled Data
The following thesis presents a deep learning (DL) approach for automatic classification of invasive ductal carcinoma (IDC) tissue regions in whole slide images (WSI) of breast cancer (BC) using unlabeled data. DL methods work across multiple interpretation levels, similarly to the human brain. These techniques have been shown to outperform traditional approaches on highly complex problems such as image classification and object detection. However, DL requires a large set of labeled data that is difficult to obtain, especially in the medical field, as neither hospitals nor patients are willing to reveal such sensitive information. Moreover, machine learning (ML) systems achieve better performance at the cost of becoming increasingly complex; as a result, they become less interpretable, which causes distrust among users. Model interpretability is a way to enhance trust in a system.
It is a very desirable property, especially crucial with the pervasive adoption of ML-based models in critical domains like medicine. In medical diagnostics, predictions cannot be blindly followed, as that may result in harm to the patient. IDC is one of the most common and aggressive subtypes of breast cancer, accounting for nearly 80% of all cases. Assessment of the disease is a very time-consuming and challenging task for pathologists, as it involves scanning large swathes of benign regions to identify an area of malignancy. Meanwhile, accurate delineation of IDC in WSI is crucial for grading cancer aggressiveness. In this study, a semi-supervised learning (SSL) scheme is developed using a deep convolutional neural network (CNN) for IDC diagnosis. The proposed framework first augments a small set of labeled data with synthetic medical images generated by a generative adversarial network (GAN); this is followed by feature extraction using a network pre-trained on a larger dataset, and by a data labeling algorithm that labels a much broader set of unlabeled data. After feeding the newly labeled set into the proposed CNN model, acceptable performance is achieved: an AUC of 0.86 and an F-measure of 0.77. Moreover, the proposed interpretability techniques produce explanations for medical predictions and build trust in the presented CNN. The study demonstrates that it is possible to enable a better understanding of the CNN's decisions by visualizing the areas most important for a particular prediction and by finding the elements that drive the network's IDC and non-IDC decisions.
Trends in Computer-Aided Diagnosis Using Deep Learning Techniques: A Review of Recent Studies on Algorithm Development
With the recent focus on deep neural network architectures for the development of computer-aided diagnosis (CAD) algorithms, we provide a review of studies from the last three years (2015-2017) reported in selected top journals and conferences. 29 studies that met our inclusion criteria were reviewed to identify trends in this field and to inform future development. Studies have focused mostly on cancer-related diseases within internal medicine, while diseases within gender- or age-focused fields like gynaecology and pediatrics have not received much attention. All reviewed studies employed image datasets, mostly sourced from publicly available databases (55.2%), with fewer based on data from human subjects (31%) and non-medical datasets (13.8%); a CNN architecture was employed in most (70%) of the studies. Confirmation of the effect of data manipulation on output quality, and adoption of multi-class rather than binary classification, also require more focus. Future studies should leverage collaborations with medical experts to aid actual clinical testing, with reporting based on a generally applicable index to enable comparison. Our next steps for CAD development in osteoarthritis (OA), which include multi-class classification and comparison across deep learning approaches and unsupervised architectures, were also highlighted.