271 research outputs found

    Deep Learning in Cardiology

    The medical field is generating large amounts of data that physicians are unable to decipher and use efficiently. Moreover, rule-based expert systems are inefficient at solving complicated medical tasks or at creating insights from big data. Deep learning has emerged as a more accurate and effective technology for a wide range of medical problems such as diagnosis, prediction and intervention. Deep learning is a representation learning method consisting of layers that transform the data non-linearly, thus revealing hierarchical relationships and structures. In this review we survey deep learning application papers that use structured data, signal and imaging modalities from cardiology. We discuss the advantages and limitations of applying deep learning in cardiology, which also apply to medicine in general, and propose certain directions as the most viable for clinical use. Comment: 27 pages, 2 figures, 10 tables.

    Automatic methods for detection of midline brain abnormalities

    In collaboration with the Universitat de Barcelona (UB) and the Universitat Rovira i Virgili (URV). Different studies have demonstrated an increased prevalence of several midline brain abnormalities in patients with both mood and psychotic disorders. One of these abnormalities is the cavum septum pellucidum (CSP), which occurs when the septum pellucidum fails to fuse. Studying these abnormalities in Magnetic Resonance Imaging requires a tedious and time-consuming process of manually analyzing all the images in order to detect them. It is also problematic when the same abnormality is analyzed manually by different experts, because different criteria can be applied. In this context, it would be useful to develop an automatic method for localizing the abnormality and measuring its depth. In this project, several methods have been developed using machine learning and deep learning techniques. To test the proposed approaches, data from 861 subjects from FIDMAG were gathered and processed for use with our algorithms. Among the subjects, 639 are patients with mood or psychotic disorders and 223 are healthy controls. This same dataset was previously used in a study where the authors manually analyzed the prevalence of this abnormality across the subjects; the depth of the abnormality reported there is compared with the results obtained from our automatic methods. The first proposed method is an unsupervised algorithm that segments the abnormality over 2D slices with an accuracy of 89.72% and a sensitivity of 84%. Then, a comparison among different machine learning classifiers was conducted, along with different dimensionality reduction methods. Among them, K Nearest Neighbors produced the best overall performance, with an accuracy of 99.1% and a sensitivity of 99% when using HOG descriptors to reduce the dimensionality of the data. The last group of methods comprises different deep learning architectures. Three different implementations have been designed, and for the best-performing architecture, two different input options that take contextual information into account have been proposed. The best deep learning method reached an accuracy of 98.8% and a specificity and sensitivity of 99%. Finally, an analysis of the similarity of the results with respect to the manually handcrafted ground truth was conducted to validate the usability of the automatic method in further analysis of the abnormality.
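    The HOG-plus-KNN pipeline that performed best above can be sketched as follows. This is a minimal illustration in plain Python, not the authors' code: the toy two-dimensional descriptors stand in for real HOG vectors extracted from MRI slices, and the labels are hypothetical.

```python
import math
from collections import Counter

def knn_predict(train, labels, query, k=3):
    """Classify `query` by majority vote among its k nearest
    training descriptors (Euclidean distance)."""
    dists = sorted(
        (math.dist(x, query), y) for x, y in zip(train, labels)
    )
    votes = Counter(y for _, y in dists[:k])
    return votes.most_common(1)[0][0]

# Toy stand-ins for HOG descriptors of 2D slices (hypothetical):
# label 1 = slice showing a CSP, label 0 = no abnormality.
train = [[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]]
labels = [1, 1, 0, 0]

print(knn_predict(train, labels, [0.85, 0.15]))  # → 1
```

In practice one would extract descriptors with an off-the-shelf HOG implementation and use a library KNN classifier; the logic is the same as this sketch.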

    MedGAN: Medical Image Translation using GANs

    Image-to-image translation is considered a new frontier in the field of medical image analysis, with numerous potential applications. However, a large portion of recent approaches offers individualized solutions based on specialized task-specific architectures or requires refinement through non-end-to-end training. In this paper, we propose a new framework, named MedGAN, for medical image-to-image translation which operates on the image level in an end-to-end manner. MedGAN builds upon recent advances in the field of generative adversarial networks (GANs) by merging the adversarial framework with a new combination of non-adversarial losses. We utilize a discriminator network as a trainable feature extractor which penalizes the discrepancy between the translated medical images and the desired modalities. Moreover, style-transfer losses are utilized to match the textures and fine structures of the desired target images to the translated images. Additionally, we present a new generator architecture, titled CasNet, which enhances the sharpness of the translated medical outputs through progressive refinement via encoder-decoder pairs. Without any application-specific modifications, we apply MedGAN to three different tasks: PET-CT translation, correction of MR motion artefacts and PET image denoising. Perceptual analysis by radiologists and quantitative evaluations illustrate that MedGAN outperforms other existing translation approaches. Comment: 16 pages, 8 figures.
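    A minimal sketch of the kind of non-adversarial loss mix the abstract describes: a perceptual (feature-matching) term plus a style term built from Gram matrices. The tiny feature maps and loss weights here are hypothetical; in MedGAN these terms are computed over discriminator and feature-extractor activations, which this toy omits.

```python
def gram(features):
    """Gram matrix of a feature map given as a list of flattened
    channels: entry (i, j) is the dot product of channels i and j."""
    return [[sum(a * b for a, b in zip(fi, fj)) for fj in features]
            for fi in features]

def mse(xs, ys):
    """Mean squared error between two equal-shape lists of rows."""
    flat_x = [v for row in xs for v in row]
    flat_y = [v for row in ys for v in row]
    return sum((a - b) ** 2 for a, b in zip(flat_x, flat_y)) / len(flat_x)

def translation_loss(fake_feats, target_feats, w_percep=1.0, w_style=0.1):
    """Combine a feature-matching (perceptual) term with a
    Gram-matrix (style) term; weights are illustrative only."""
    percep = mse(fake_feats, target_feats)
    style = mse(gram(fake_feats), gram(target_feats))
    return w_percep * percep + w_style * style

# Toy feature maps: 2 channels of 2 values each (hypothetical data).
fake = [[1.0, 2.0], [0.5, 0.0]]
real = [[1.1, 1.9], [0.4, 0.1]]
print(translation_loss(fake, fake))  # → 0.0 (identical features)
```

The design point is that matching Gram matrices penalizes texture discrepancies independently of exact spatial alignment, which is why style terms help sharpen fine structure.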

    A review on methods to estimate a CT from MRI data in the context of MRI-alone RT

    Background: In recent years, Radiation Therapy (RT) has undergone many developments and driven progress in the field of cancer treatment. However, dose optimisation at each treatment session puts the patient at risk of repeated X-ray exposure from Computed Tomography (CT) scans, since this imaging modality is the reference for dose planning. Added to this are difficulties related to contour propagation. Thus, approaches are focusing on the use of MRI as the only modality in RT. In this paper, we review methods for creating pseudo-CT images from MRI data for MRI-alone RT. Each class of methods is explained and the underlying works are presented in detail with performance results. We discuss the advantages and limitations of each class. Methods: We classified recent works on deriving a pseudo-CT from MR images into four classes: segmentation-based, intensity-based, atlas-based and hybrid methods; the classification was based on the general technique applied. Results: Most research focused on the brain and the pelvic regions. The mean absolute error ranged from 80 to 137 HU for the brain and from 36.4 to 74 HU for the pelvis. In addition, interest in the Dixon MR sequence is increasing, since it has the advantage of producing multiple contrast images with a single acquisition. Conclusion: Radiation therapy is moving towards the generalisation of MRI-only RT thanks to advances in techniques for generating pseudo-CT images and the development of specialised MR sequences favouring bone visualisation. However, a benchmark needs to be established to set common performance metrics to assess the quality of the generated pseudo-CT and judge the efficiency of a given method.
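    The mean-absolute-error figures quoted above are computed voxel-wise in Hounsfield units between the generated pseudo-CT and the reference CT. A minimal sketch, with hypothetical flattened voxel values:

```python
def mae_hu(pseudo_ct, reference_ct):
    """Mean absolute error in Hounsfield units between a generated
    pseudo-CT and the reference CT, voxel by voxel."""
    diffs = [abs(p - r) for p, r in zip(pseudo_ct, reference_ct)]
    return sum(diffs) / len(diffs)

# Toy flattened voxel values in HU (hypothetical):
pseudo = [40, -100, 980, 15]
reference = [30, -80, 900, 20]
print(mae_hu(pseudo, reference))  # → 28.75
```

This per-volume MAE is the metric behind the 80–137 HU (brain) and 36.4–74 HU (pelvis) ranges reported in the review.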

    Multiple Instance Learning: A Survey of Problem Characteristics and Applications

    Multiple instance learning (MIL) is a form of weakly supervised learning where training instances are arranged in sets, called bags, and a label is provided for the entire bag. This formulation is gaining interest because it naturally fits various problems and makes it possible to leverage weakly labeled data. Consequently, it has been used in diverse application fields such as computer vision and document classification. However, learning from bags raises important challenges that are unique to MIL. This paper provides a comprehensive survey of the characteristics which define and differentiate the types of MIL problems. Until now, these problem characteristics have not been formally identified and described. As a result, the variations in performance of MIL algorithms from one data set to another are difficult to explain. In this paper, MIL problem characteristics are grouped into four broad categories: the composition of the bags, the types of data distribution, the ambiguity of instance labels, and the task to be performed. Methods specialized to address each category are reviewed. Then, the extent to which these characteristics manifest themselves in key MIL application areas is described. Finally, experiments are conducted to compare the performance of 16 state-of-the-art MIL methods on selected problem characteristics. This paper provides insight into how problem characteristics affect MIL algorithms, recommendations for future benchmarking and promising avenues for research.
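    The bag/instance setup can be sketched under the standard MIL assumption, where a bag is positive if at least one of its instances is positive (max pooling over instance scores). The instance scorer below is a hypothetical stand-in for a trained model:

```python
def bag_predict(bag, instance_score, threshold=0.5):
    """Standard MIL assumption: a bag is positive when the maximum
    instance score exceeds the threshold (max pooling)."""
    return max(instance_score(x) for x in bag) > threshold

# Hypothetical instance scorer: the score is just the feature value.
score = lambda x: x

print(bag_predict([0.1, 0.2, 0.9], score))  # → True  (one positive instance)
print(bag_predict([0.1, 0.2, 0.3], score))  # → False (all instances negative)
```

Other aggregation rules (mean, attention-weighted pooling) correspond to the relaxed assumptions the survey catalogues; max pooling is the classical baseline.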

    Adaptive Feature Engineering Modeling for Ultrasound Image Classification for Decision Support

    Ultrasonography is considered a relatively safe option for the diagnosis of benign and malignant cancer lesions due to the low-energy sound waves used. However, the visual interpretation of ultrasound images is time-consuming and usually produces many false alerts due to speckle noise. Improved methods of collecting image-based data have been proposed to reduce noise in the images; however, this has not solved the problem, due to the complex nature of the images and the exponential growth of biomedical datasets. Secondly, the target class in real-world biomedical datasets, that is, the focus of interest of a biopsy, is usually significantly underrepresented compared to the non-target class. This makes it difficult to train standard classification models like Support Vector Machines (SVM), Decision Trees, and Nearest Neighbor techniques on biomedical datasets, because they assume an equal class distribution or an equal misclassification cost. Resampling techniques that either oversample the minority class or under-sample the majority class have been proposed to mitigate the class imbalance problem, but with minimal success. We propose a method of resolving the class imbalance problem through the design of a novel data-adaptive feature engineering model for extracting, selecting, and transforming textural features into a feature space that is inherently relevant to the application domain. We hypothesize that maximizing the variance and preserving as much variability as possible in well-engineered features, prior to applying a classifier model, will boost the differentiation of thyroid nodules (benign or malignant) through effective model building. We propose a hybrid approach that applies regression and rule-based techniques to build our feature engineering model and a Bayesian classifier, respectively.
In the feature engineering model, we transformed image pixel intensity values into a high-dimensional structured dataset and fitted a regression analysis model to estimate relevant kernel parameters to be applied in the proposed filter method. We adopted an Elastic Net regularization path to control the maximum log-likelihood estimation of the regression model. Finally, we applied Bayesian network inference to estimate a subset of the textural features with a significant conditional dependency in the classification of the thyroid lesion. This is performed to establish the conditional influence of the random factors generated through our feature engineering model on the textural features, and to evaluate the success criterion of our approach. The proposed approach was tested and evaluated on a public dataset of thyroid cancer ultrasound diagnostic data. Analysis of the results showed that classification performance improved significantly, in both accuracy and area under the curve, when the proposed feature engineering model was applied to the data. We show that a high performance of 96.00% accuracy, with a sensitivity of 99.64% and a specificity of 90.23%, was achieved for a filter size of 13 × 13.
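    The random-oversampling baseline that the abstract contrasts with its feature-engineering approach can be sketched in a few lines; the toy benign/malignant labels are hypothetical.

```python
import random

def oversample_minority(samples, labels, seed=0):
    """Duplicate randomly chosen minority-class samples until all
    classes have the same count (random oversampling baseline)."""
    rng = random.Random(seed)
    by_class = {}
    for x, y in zip(samples, labels):
        by_class.setdefault(y, []).append(x)
    target = max(len(v) for v in by_class.values())
    out_x, out_y = [], []
    for y, xs in by_class.items():
        extra = [rng.choice(xs) for _ in range(target - len(xs))]
        for x in xs + extra:
            out_x.append(x)
            out_y.append(y)
    return out_x, out_y

# Toy imbalanced data (hypothetical): 4 benign (0) vs 1 malignant (1).
X, y = oversample_minority([1, 2, 3, 4, 5], [0, 0, 0, 0, 1])
print(y.count(0), y.count(1))  # → 4 4
```

Because duplicated minority samples add no new information, such resampling often yields only the "minimal success" noted above, which motivates working on the feature space instead.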

    Prostate cancer detection using deep learning

    Cancer detection is one of the principal topics of research in medical science. Whether it is breast, lung, brain or prostate cancer, advances are being made to improve detection precision and time. Research is being carried out on a broad range of procedures at different stages of cancer to understand it better. Prostate cancer, in particular, has seen some novel approaches to detection using both magnetic resonance imaging (MRI) and histopathology data. The approaches include detection using deep neural networks, deep convolutional neural networks in particular, because of their human-level precision in image recognition tasks. In this thesis, we analysed a dataset containing multiparametric magnetic resonance imaging (mpMRI) prostate scans. The objective of the research was Gleason grade group classification from mpMRI scans, which has not been attempted before on a small dataset. We first trained several conventional machine learning algorithms on handcrafted features from the dataset to predict the Gleason grade group of the cases. The dataset was then augmented using two different augmentation techniques for further experimentation with deep convolutional neural networks. Convolutional neural networks of varying depths were used to understand the effect of network depth on classification accuracy. Furthermore, we made an attempt to shed light on the pitfalls of using a small dataset for solving problems of this nature.
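    Augmentation for small imaging datasets, as used above, can be sketched with simple geometric transforms on a 2D slice (a flip and a 90-degree rotation). This is an illustrative toy, not the thesis's pipeline; real mpMRI augmentation typically also includes elastic deformations and intensity shifts.

```python
def hflip(img):
    """Horizontal flip of a 2D slice given as a list of rows."""
    return [row[::-1] for row in img]

def rot90(img):
    """Rotate a 2D slice 90 degrees counter-clockwise."""
    return [list(col) for col in zip(*img)][::-1]

def augment(img):
    """Produce simple augmented variants of one slice."""
    return [img, hflip(img), rot90(img), hflip(rot90(img))]

slice2d = [[1, 2],
           [3, 4]]
print(len(augment(slice2d)))  # → 4
print(hflip(slice2d))         # → [[2, 1], [4, 3]]
```

Each labeled slice thus yields several training samples at no acquisition cost, which is the main lever for training deeper networks on a small dataset.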