
    Inteligência Artificial em Radiologia: Do Processamento de Imagem ao Diagnóstico

    The objective of this article is to present a view of the potential impact of Artificial Intelligence (AI) on the processing of medical images, in particular in relation to diagnosis. This topic is currently attracting major attention in both the medical and engineering communities, as demonstrated by the number of recent tutorials [1-3] and review articles [4-6] that address it, with large research hospitals as well as engineering research centers contributing to the area. Furthermore, several large companies such as General Electric (GE), IBM/Merge, Siemens, Philips, and Agfa, as well as more specialized companies and startups, are integrating AI into their medical imaging products. The evolution of GE in this respect is interesting. GE's SmartSignal software was developed for industrial applications to identify impending equipment failures well before they happen. As written in the GE prospectus, this added lead time allows a shift from reactive maintenance to a more proactive maintenance process, letting the workforce focus on fixing problems rather than looking for them. With this background experience from the industrial field, GE developed predictive analytics products for clinical imaging, embodying the predictive component of P4 medicine (predictive, personalized, preventive, participatory). Another interesting example is the Illumeo software from Philips, which embeds adaptive intelligence, i.e., the capacity to improve its automatic reasoning process from past experience, to automatically surface related prior radiology exams relevant to the case at hand. Indeed, with its capacity to handle massive amounts of data of different sorts (imaging data, patient exam reports, pathology reports, patient monitoring signals, data from implantable electrophysiology devices, and data from many other sources), AI can certainly make a decisive contribution to all the components of P4 medicine. For instance, in the presence of a rare disease, AI methods have the capacity to review huge amounts of prior information when confronted with the patient's clinical data.

    Vec2SPARQL: integrating SPARQL queries and knowledge graph embeddings

    Recent developments in machine learning have led to a rise in the number of methods for extracting features from structured data. The features are represented as vectors and may encode some semantic aspects of the data. They can be used in machine learning models for different tasks or to compute similarities between the entities of the data. SPARQL is a query language for structured data, originally developed for querying Resource Description Framework (RDF) data. It has been in use for over a decade as a standardized NoSQL query language. Many different tools have been developed to enable data sharing with SPARQL. For example, SPARQL endpoints make data interoperable and available to the world, and SPARQL queries can be executed across multiple endpoints. We have developed Vec2SPARQL, a general framework for integrating structured data and their vector space representations. Vec2SPARQL allows jointly querying vector functions, such as computing similarities (cosine, correlations) or classifications with machine learning models, within a single SPARQL query. We demonstrate applications of our approach for biomedical and clinical use cases. Our source code is freely available at https://github.com/bio-ontology-research-group/vec2sparql and we make a Vec2SPARQL endpoint available at http://sparql.bio2vec.net/
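    As a rough illustration of the kind of vector function Vec2SPARQL exposes inside a query, the following Python sketch ranks entities by cosine similarity over toy knowledge-graph embeddings. The IRIs and vectors are invented for the example; in Vec2SPARQL itself such a similarity function would be evaluated inside a SPARQL query rather than in plain Python.

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy "knowledge graph embeddings": entity IRI -> feature vector
# (hypothetical entities, purely for illustration).
embeddings = {
    "http://example.org/geneA": [1.0, 0.0, 1.0],
    "http://example.org/geneB": [1.0, 0.1, 0.9],
    "http://example.org/geneC": [0.0, 1.0, 0.0],
}

def most_similar(query_iri, embeddings):
    """Rank all other entities by cosine similarity to the query entity."""
    q = embeddings[query_iri]
    return sorted(
        ((iri, cosine_similarity(q, vec))
         for iri, vec in embeddings.items() if iri != query_iri),
        key=lambda pair: pair[1], reverse=True)

print(most_similar("http://example.org/geneA", embeddings))
```

    In a query context, the similarity call would appear as a function inside the SPARQL SELECT/FILTER clause, letting symbolic constraints and vector-space ranking be combined in one request.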

    A Survey on Deep Learning in Medical Image Analysis

    Deep learning algorithms, in particular convolutional networks, have rapidly become a methodology of choice for analyzing medical images. This paper reviews the major deep learning concepts pertinent to medical image analysis and summarizes over 300 contributions to the field, most of which appeared in the last year. We survey the use of deep learning for image classification, object detection, segmentation, registration, and other tasks and provide concise overviews of studies per application area. Open challenges and directions for future research are discussed. Comment: Revised survey includes expanded discussion section and reworked introductory section on common deep architectures. Added missed papers from before Feb 1st 201

    Enhancing Breast Cancer Prediction Using Unlabeled Data

    This thesis presents a deep learning (DL) approach for the automatic classification of invasive ductal carcinoma (IDC) tissue regions in whole-slide images (WSI) of breast cancer (BC) using unlabeled data. DL methods operate across multiple levels of interpretation, similar to the way the human brain works. These techniques have been shown to outperform traditional approaches on highly complex problems such as image classification and object detection. However, DL requires a large set of labeled data that is difficult to obtain, especially in the medical field, as neither hospitals nor patients are willing to reveal such sensitive information. Moreover, machine learning (ML) systems are achieving better performance at the cost of becoming increasingly complex; as they become less interpretable, users grow distrustful of them. Model interpretability is a way to enhance trust in a system. It is a very desirable property, especially crucial given the pervasive adoption of ML-based models in critical domains such as medicine. In medical diagnostics, predictions cannot be followed blindly, as doing so may result in harm to the patient. IDC is one of the most common and aggressive subtypes of breast cancer, accounting for nearly 80% of cases. Assessment of the disease is a very time-consuming and challenging task for pathologists, as it involves scanning large swathes of benign regions to identify areas of malignancy. Meanwhile, accurate delineation of IDC in WSI is crucial for grading cancer aggressiveness. In this study, a semi-supervised learning (SSL) scheme is developed using a deep convolutional neural network (CNN) for IDC diagnosis. The proposed framework first augments a small set of labeled data with synthetic medical images generated by a generative adversarial network (GAN). Features are then extracted using a network pre-trained on a larger dataset, and a data labeling algorithm labels a much broader set of unlabeled data. After feeding the newly labeled set into the proposed CNN model, acceptable performance is achieved: an AUC of 0.86 and an F-measure of 0.77. Moreover, the proposed interpretability techniques produce explanations for the medical predictions and build trust in the presented CNN. The study demonstrates that a better understanding of the CNN's decisions can be enabled by visualizing the areas that are most important for a particular prediction and by finding the elements that drive the IDC or non-IDC decisions made by the network.
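    The labeling step in the pipeline above can be illustrated with a deliberately simplified pseudo-labeling sketch. The one-dimensional threshold "model" and the confidence margin below are stand-ins for the CNN features and the thesis's actual labeling algorithm, invented purely for illustration.

```python
def train_threshold(labeled):
    """Fit a 1-D decision threshold: the midpoint between class means."""
    pos = [x for x, y in labeled if y == 1]
    neg = [x for x, y in labeled if y == 0]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def pseudo_label(unlabeled, threshold, margin=0.5):
    """Assign labels only to points far enough from the decision boundary,
    i.e. only to confident predictions."""
    out = []
    for x in unlabeled:
        if abs(x - threshold) >= margin:
            out.append((x, 1 if x > threshold else 0))
    return out

# Small labeled set plus a pool of unlabeled points (toy 1-D "features").
labeled = [(0.1, 0), (0.3, 0), (1.7, 1), (1.9, 1)]
unlabeled = [0.0, 0.9, 1.1, 2.2]

t = train_threshold(labeled)                       # midpoint of class means
augmented = labeled + pseudo_label(unlabeled, t)   # enlarged training set
```

    The ambiguous points near the boundary (0.9 and 1.1) are left unlabeled, which is the essential safeguard of confidence-based pseudo-labeling.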

    Explainable artificial intelligence (XAI) in deep learning-based medical image analysis

    With an increase in deep learning-based methods, the call for explainability of such methods grows, especially in high-stakes decision-making areas such as medical image analysis. This survey presents an overview of eXplainable Artificial Intelligence (XAI) used in deep learning-based medical image analysis. A framework of XAI criteria is introduced to classify deep learning-based medical image analysis methods. Papers on XAI techniques in medical image analysis are then surveyed and categorized according to the framework and according to anatomical location. The paper concludes with an outlook on future opportunities for XAI in medical image analysis. Comment: Submitted for publication. Comments welcome by email to first author
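    One widely used model-agnostic technique of the kind such surveys cover is occlusion sensitivity: mask part of the input and measure how much the model's score drops. The following minimal sketch applies it to a toy 3x3 "image" with a stand-in scoring function; both are invented for the example and do not come from the surveyed paper.

```python
def occlusion_map(image, score_fn):
    """For each pixel, zero it out and record the drop in the model's score.
    Larger drops mark pixels more important to the decision."""
    base = score_fn(image)
    heat = []
    for i, row in enumerate(image):
        heat_row = []
        for j, _ in enumerate(row):
            occluded = [r[:] for r in image]   # copy, then mask one pixel
            occluded[i][j] = 0.0
            heat_row.append(base - score_fn(occluded))
        heat.append(heat_row)
    return heat

# Toy "classifier": its score is simply the centre pixel, so the centre
# should dominate the resulting heatmap.
score = lambda img: img[1][1]
image = [[0.2, 0.1, 0.3],
         [0.4, 0.9, 0.5],
         [0.1, 0.2, 0.3]]
heat = occlusion_map(image, score)
```

    For real CNNs the same loop runs over patches rather than single pixels, and the heatmap is overlaid on the input image to show which regions drove the prediction.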

    Medical image retrieval for augmenting diagnostic radiology

    Even though the use of medical imaging to diagnose patients is ubiquitous in clinical settings, interpreting the images remains challenging for radiologists. Many factors make this interpretation task difficult, one of which is that medical images sometimes present subtle clues that are nonetheless crucial for diagnosis. Worse, similar clues can indicate multiple diseases, making it challenging to reach a definitive diagnosis. To help radiologists quickly and accurately interpret medical images, there is a need for a tool that can augment their diagnostic procedures and increase efficiency in their daily workflow. A general-purpose medical image retrieval system can be such a tool, as it allows them to search and retrieve similar, already diagnosed cases and make comparative analyses that complement their diagnostic decisions. In this thesis, we contribute to developing such a system by proposing approaches to be integrated as modules of a single system, enabling it to handle various information needs of radiologists and thus augment their diagnostic processes during the interpretation of medical images. We have mainly studied the following retrieval approaches to handle radiologists' different information needs: i) retrieval based on contents; ii) retrieval based on contents, patients' demographics, and disease predictions; and iii) retrieval based on contents and radiologists' text descriptions. For the first study, we aimed to find an effective feature representation method to distinguish medical images considering their semantics and modalities. To do that, we experimented with different representation techniques based on handcrafted methods (mainly texture features) and deep learning (deep features). Based on the experimental results, we propose an effective feature representation approach and deep learning architectures for learning and extracting medical image contents. 
For the second study, we present a multi-faceted method that complements image contents with patients' demographics and deep learning-based disease predictions, enabling it to identify similar cases accurately while considering the clinical context the radiologists seek. For the last study, we propose a guided search method that integrates an image with a radiologist's text description to guide the retrieval process. This method ensures that the retrieved images are suitable for the comparative analysis needed to confirm or rule out initial diagnoses (the differential diagnosis procedure). Furthermore, our method is based on a deep metric learning technique and outperforms traditional content-based approaches that rely only on image features and thus sometimes retrieve irrelevant images.
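    The content-based retrieval idea underlying all three studies can be sketched as nearest-neighbour search in an embedding space. The case IDs and two-dimensional embeddings below are invented for illustration; in the thesis the vectors would come from a trained (metric-learning) network, and the index would hold thousands of diagnosed cases.

```python
import math

def euclidean(u, v):
    """Euclidean distance between two equal-length embedding vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def retrieve(query_vec, index, k=3):
    """Return the k diagnosed cases whose embeddings are closest to the
    query image's embedding."""
    ranked = sorted(index.items(), key=lambda kv: euclidean(query_vec, kv[1]))
    return [case_id for case_id, _ in ranked[:k]]

# Toy index: case ID -> embedding (hypothetical values for the example).
index = {
    "case-001": [0.9, 0.1],
    "case-002": [0.8, 0.2],
    "case-003": [0.1, 0.9],
}

top = retrieve([0.88, 0.12], index, k=2)
print(top)  # the two cases nearest the query in embedding space
```

    A metric-learning objective shapes this space so that clinically similar cases end up close together, which is what makes the nearest neighbours meaningful for comparative diagnosis.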

    Deep Learning in Chest Radiography: From Report Labeling to Image Classification

    Chest X-ray (CXR) is the most common examination performed by a radiologist. Through CXR, radiologists must correctly and immediately diagnose a patient's thorax to avoid the progression of life-threatening diseases. Not only are certified radiologists hard to find, but stress, fatigue, and lack of experience all affect the quality of an examination. As a result, a technique to aid radiologists in reading CXRs, and a tool to help bridge the gap for communities without adequate access to radiological services, would yield a huge advantage for patients and patient care. This thesis considers one essential task, CXR image classification, with Deep Learning (DL) technologies from the following three aspects: understanding the intersection of CXR interpretation and DL; extracting multiple image labels from radiology reports to facilitate the training of DL classifiers; and developing CXR classifiers using DL. First, we explain the core concepts and categorize the existing data and literature for researchers entering this field for ease of reference. Using CXRs and DL for medical image diagnosis is a relatively recent field of study, because large, publicly available CXR datasets have not been around for very long. Second, we contribute to labeling large datasets with multi-label image annotations extracted from CXR reports. We describe the development of a DL-based report labeler named CXRlabeler, focusing on inductive sequential transfer learning. Lastly, we explain the design of three novel Convolutional Neural Network (CNN) classifiers, i.e., MultiViewModel, Xclassifier, and CovidXrayNet, for binary image classification, multi-label image classification, and multi-class image classification, respectively. This dissertation showcases significant progress in the field of automated CXR interpretation using DL; all source code used is publicly available. It provides methods and insights that can be applied to other medical image interpretation tasks.
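    The report-labeling step can be illustrated with a deliberately naive keyword-plus-negation sketch. CXRlabeler itself is a learned sequence model, so the label vocabulary, the fixed negation window, and the cue words below are invented stand-ins, not the thesis's method.

```python
# Hypothetical label vocabulary: label name -> phrase to search for.
LABELS = {"cardiomegaly": "cardiomegaly",
          "effusion": "pleural effusion",
          "pneumonia": "pneumonia"}
# Negation cues checked in a short window before each matched phrase.
NEGATIONS = ("no ", "without ", "negative for ")

def label_report(report):
    """Return the set of labels whose phrase occurs non-negated in the
    report text (multi-label output: zero, one, or several labels)."""
    text = report.lower()
    found = set()
    for label, phrase in LABELS.items():
        idx = text.find(phrase)
        if idx == -1:
            continue
        window = text[max(0, idx - 15):idx]   # text just before the match
        if not any(neg in window for neg in NEGATIONS):
            found.add(label)
    return found

report = "Mild cardiomegaly. No pleural effusion. Findings suggest pneumonia."
print(label_report(report))  # cardiomegaly and pneumonia, but not effusion
```

    Rule sets like this break down on varied report language, which is exactly why the thesis trains a transfer-learned sequence model instead.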