
    Inteligência Artificial em Radiologia: Do Processamento de Imagem ao Diagnóstico (Artificial Intelligence in Radiology: From Image Processing to Diagnosis)

    The objective of this article is to present a view on the potential impact of Artificial Intelligence (AI) on the processing of medical images, in particular in relation to diagnosis. This topic is currently attracting major attention in both the medical and engineering communities, as demonstrated by the number of recent tutorials [1-3] and review articles [4-6] that address it, with large research hospitals as well as engineering research centers contributing to the area. Furthermore, several large companies such as General Electric (GE), IBM/Merge, Siemens, Philips, and Agfa, as well as more specialized companies and startups, are integrating AI into their medical imaging products. The evolution of GE in this respect is interesting. GE's SmartSignal software was developed for industrial applications to identify impending equipment failures well before they happen. As stated in the GE prospectus, this added lead time allows a shift from reactive maintenance to a more proactive maintenance process, letting the workforce focus on fixing problems rather than looking for them. With this background experience from the industrial field, GE developed predictive analytics products for clinical imaging that embody the predictive component of P4 medicine (predictive, personalized, preventive, participatory). Another interesting example is the Illumeo software from Philips, which embeds adaptive intelligence, i.e., the capacity to improve its automatic reasoning from past experience, in order to automatically surface related prior exams in radiology for the case at hand. Indeed, with its capacity to handle massive amounts of data of different kinds (imaging data, patient exam reports, pathology reports, patient monitoring signals, data from implantable electrophysiology devices, and data from many other sources), AI can certainly make a decisive contribution to all the components of P4 medicine. For instance, in the presence of a rare disease, AI methods have the capacity to review huge amounts of prior information when confronted with the patient's clinical data.

    A Survey on Deep Learning in Medical Image Analysis

    Deep learning algorithms, in particular convolutional networks, have rapidly become the methodology of choice for analyzing medical images. This paper reviews the major deep learning concepts pertinent to medical image analysis and summarizes over 300 contributions to the field, most of which appeared in the last year. We survey the use of deep learning for image classification, object detection, segmentation, registration, and other tasks, and provide concise overviews of studies per application area. Open challenges and directions for future research are discussed.
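    As a concrete illustration of the kind of model this survey centers on, the following is a minimal sketch of a convolutional classifier for 2-D grayscale medical images. It is not taken from the survey; the architecture, layer sizes, and number of classes are illustrative assumptions, written here in PyTorch.

```python
# Minimal sketch of a convolutional classifier for 2-D grayscale medical images.
# Architecture and hyperparameters are illustrative assumptions, not from the survey.
import torch
import torch.nn as nn

class SmallMedicalCNN(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                      # global average pooling
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, H, W) grayscale images
        return self.classifier(self.features(x).flatten(1))

# Example forward pass on a dummy batch of 128x128 images
model = SmallMedicalCNN(n_classes=3)
logits = model(torch.randn(4, 1, 128, 128))               # -> shape (4, 3)
```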

    Radon Projections as Image Descriptors for Content-Based Retrieval of Medical Images

    Clinical analysis and medical diagnosis of diverse diseases adopt medical imaging techniques to empower specialists to perform their tasks by visualizing internal body organs and tissues, so that diseases can be classified and treated at an early stage. Content-Based Image Retrieval (CBIR) systems are a set of computer vision techniques to retrieve similar images from a large database based on proper image representations. Particularly in radiology and histopathology, CBIR is a promising approach for effectively screening, understanding, and retrieving images with a similar level of semantic description from a database of previously diagnosed cases, providing physicians with reliable assistance for diagnosis, treatment planning, and research. Over the past decade, the development of CBIR systems in medical imaging has accelerated due to the increase in digitized modalities, the increase in computational efficiency (e.g., availability of GPUs), and progress in algorithm development in computer vision and artificial intelligence. Hence, medical specialists may use CBIR prototypes to query similar cases from a large image database based solely on the image content (and no text). Understanding the semantics of an image requires an expressive descriptor that is able to capture and represent the unique and invariant features of an image. The Radon transform, one of the oldest techniques widely used in medical imaging, can capture the shape of organs in the form of a one-dimensional histogram by projecting parallel rays through a two-dimensional object of concern at a specific angle. In this work, the Radon transform is re-designed to (i) extract features and (ii) generate a descriptor for content-based retrieval of medical images. The Radon transform is applied to feed a deep neural network instead of raw images in order to improve the generalization of the network. Specifically, the framework provides the Radon projections of an image to a deep autoencoder, from which the deepest layer is isolated and fed into a multi-layer perceptron for classification. This approach enables the network to (a) train much faster, as the Radon projections are computationally inexpensive compared to raw input images, and (b) perform more accurately, as Radon projections present more pronounced and salient features to the network than raw images. The framework is validated on a publicly available radiography data set called "Image Retrieval in Medical Applications" (IRMA), consisting of 12,677 training and 1,733 test images, on which a classification accuracy of approximately 82% is achieved, outperforming all autoencoder strategies reported on the IRMA dataset. The classification accuracy is calculated by dividing the total IRMA error, a measure defined by the authors of the data set, by the total number of test images.

    Finally, a compact handcrafted image descriptor based on the Radon transform, called "Forming Local Intersections of Projections" (FLIP), was designed in this work for representing histopathology images. The FLIP descriptor applies parallel Radon projections in local 3x3 neighborhoods with a 2-pixel overlap over gray-level images (the staining of histopathology images is ignored). Using four equidistant projection directions in each window, the characteristics of the neighborhood are quantified by taking an element-wise minimum between each pair of adjacent projections. The FLIP histogram (descriptor) for each image is then constructed from these values. A multi-resolution FLIP (mFLIP) scheme is also proposed and is observed to outperform many state-of-the-art methods, including deep features, when applied to the KIMIA Path24 histopathology data set. Experiments show a total classification accuracy of approximately 72% using SVM classification, which surpasses the current benchmark of approximately 66% on the KIMIA Path24 data set.
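    The following is a minimal sketch of the Radon-projection pipeline described above: projections of an image at several angles are concatenated into a feature vector, compressed by an autoencoder, and the bottleneck code is passed to a small classifier. The number of projection angles, the layer sizes, and the use of scikit-image and PyTorch are illustrative assumptions, not the authors' exact configuration.

```python
# Sketch of the Radon projections -> deep autoencoder -> MLP classification pipeline.
# Angle count, layer sizes, and library choices are illustrative assumptions.
import numpy as np
import torch
import torch.nn as nn
from skimage.transform import radon

def radon_descriptor(image: np.ndarray, n_angles: int = 16) -> np.ndarray:
    """Project a 2-D grayscale image at n_angles equidistant angles and
    concatenate the projections into a single normalized feature vector."""
    thetas = np.linspace(0.0, 180.0, n_angles, endpoint=False)
    sinogram = radon(image.astype(np.float64), theta=thetas, circle=False)
    vec = sinogram.flatten(order="F")                 # one projection after another
    return (vec / (np.linalg.norm(vec) + 1e-12)).astype(np.float32)

class RadonAutoencoder(nn.Module):
    """Autoencoder over Radon feature vectors; the bottleneck is the code used downstream."""
    def __init__(self, in_dim: int, code_dim: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 512), nn.ReLU(),
                                     nn.Linear(512, code_dim), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(code_dim, 512), nn.ReLU(),
                                     nn.Linear(512, in_dim))

    def forward(self, x):
        code = self.encoder(x)
        return self.decoder(code), code

def make_classifier(code_dim: int = 64, n_classes: int = 10) -> nn.Module:
    # Small MLP over the bottleneck code; the number of classes is dataset-dependent.
    return nn.Sequential(nn.Linear(code_dim, 128), nn.ReLU(), nn.Linear(128, n_classes))

# Example shapes on a dummy image (training loops omitted)
feat = radon_descriptor(np.random.rand(128, 128))
ae = RadonAutoencoder(in_dim=feat.size)
recon, code = ae(torch.from_numpy(feat).unsqueeze(0))
logits = make_classifier()(code)                      # -> shape (1, 10)
```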

    An Efficient Gabor Walsh-Hadamard Transform Based Approach for Retrieving Brain Tumor Images from MRI

    Brain tumors are a serious, life-threatening disease. Finding an appropriate brain tumor image in a magnetic resonance imaging (MRI) archive is a challenging job for the radiologist. Most search engines retrieve images on the basis of traditional text-based approaches. The main challenge in MRI image analysis is the semantic gap between the low-level visual information captured by the MRI machine and the high-level information identified by the assessor. This gap is addressed in this study by designing a new feature extraction technique. In this paper, we introduce a Content-Based Medical Image Retrieval (CBMIR) system for retrieving brain tumor images from a large database. Firstly, we remove noise from the MRI images using several filtering techniques. Afterward, we design a feature extraction scheme combining the Gabor filtering technique (which focuses on specific frequency content in image regions) and the Walsh-Hadamard transform (WHT) (an efficient transform for compact image representation) to discover representative features of the MRI images. Then, to retrieve accurate and reliable images, we employ Fuzzy C-Means clustering with the Minkowski distance metric to evaluate the similarity between the query image and the database images. The proposed methodology was tested on a publicly available brain tumor MRI image database. The experimental results demonstrate that our proposed approach outperforms most existing techniques, such as Gabor, wavelet, and Hough transform based methods, in detecting brain tumors, and also takes less time. The proposed approach will be beneficial for radiologists and technologists in building an automatic decision support system that produces reproducible and objective results with high accuracy.
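    A minimal sketch of the feature-extraction side of such a pipeline is given below: Gabor filter responses and Walsh-Hadamard coefficients are combined into one feature vector, and candidate images are ranked by Minkowski distance. The filter parameters, patch size, and retained coefficients are illustrative assumptions, and the Fuzzy C-Means clustering step is not shown; a plain Minkowski-distance ranking stands in for the retrieval stage.

```python
# Sketch of Gabor + Walsh-Hadamard feature extraction with Minkowski-distance retrieval.
# Parameters are illustrative assumptions; the paper's Fuzzy C-Means step is omitted here.
import numpy as np
from scipy.linalg import hadamard
from skimage.filters import gabor
from skimage.restoration import denoise_nl_means
from skimage.transform import resize

def extract_features(image: np.ndarray, size: int = 64) -> np.ndarray:
    """Return a feature vector combining Gabor filter responses and WHT coefficients."""
    img = resize(image.astype(np.float64), (size, size), anti_aliasing=True)
    img = denoise_nl_means(img)                          # noise-removal step

    # Gabor responses at a few frequencies/orientations (illustrative choices)
    gabor_feats = []
    for freq in (0.1, 0.2, 0.3):
        for theta in (0, np.pi / 4, np.pi / 2, 3 * np.pi / 4):
            real, imag = gabor(img, frequency=freq, theta=theta)
            mag = np.hypot(real, imag)
            gabor_feats.extend([mag.mean(), mag.std()])

    # 2-D Walsh-Hadamard transform: H @ img @ H; keep a fixed block of coefficients
    H = hadamard(size) / np.sqrt(size)
    wht = H @ img @ H
    wht_feats = np.abs(wht[:8, :8]).ravel()

    return np.concatenate([np.asarray(gabor_feats), wht_feats])

def retrieve(query_feat: np.ndarray, db_feats: np.ndarray, p: float = 2.0, k: int = 5):
    """Rank database images by Minkowski distance to the query; return the top-k indices."""
    dists = np.sum(np.abs(db_feats - query_feat) ** p, axis=1) ** (1.0 / p)
    return np.argsort(dists)[:k]
```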

    Content-Based Image Retrieval: A Comprehensive User Interactive Simulation Tool for Endoscopic Image Databases

    Until a few years ago, radiological methods were widely used for the examination and investigation of the digestive tract. Today, wireless capsule endoscopy represents an innovative, noninvasive, and effective solution that does not carry a risk of irradiation. Due to the impressive number of images captured over the entire “trip” covered by the video capsule, diagnostic accuracy is greatly improved, and certain areas of the digestive tract that were previously inaccessible can now be visualized. The captured images can be analyzed by a specialist who can identify lesions or possible active bleeding within the digestive tract. This paper presents the implementation of a retrieval system for endoscopic images based on the Content-Based Image Retrieval (CBIR) technique.