
    Denoising Adversarial Autoencoders: Classifying Skin Lesions Using Limited Labelled Training Data

    We propose a novel deep learning model for classifying medical images in the setting where there is a large amount of unlabelled medical data available, but labelled data is in limited supply. We consider the specific case of classifying skin lesions as either malignant or benign. In this setting, the proposed approach -- the semi-supervised, denoising adversarial autoencoder -- is able to utilise vast amounts of unlabelled data to learn a representation for skin lesions, and small amounts of labelled data to assign class labels based on the learned representation. We analyse the contributions of both the adversarial and denoising components of the model and find that the combination yields superior classification performance in the setting of limited labelled training data. Comment: under consideration for the IET Computer Vision Journal special issue on "Computer Vision in Cancer Data Analysis".
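
    A minimal PyTorch sketch of the general idea described above, assuming flattened inputs, a fully connected encoder/decoder and a Gaussian latent prior; layer sizes, the noise level and the loss weights are illustrative assumptions, not the authors' configuration.

    # Sketch only (not the authors' implementation): a denoising autoencoder whose
    # latent code is adversarially matched to a Gaussian prior, with a small
    # classifier head trained on the limited labelled subset.
    import torch
    import torch.nn as nn

    latent_dim = 32
    encoder = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, latent_dim))
    decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, 784))
    discriminator = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, 1))
    classifier = nn.Linear(latent_dim, 2)  # malignant vs. benign

    opt_ae = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
    opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
    opt_cls = torch.optim.Adam(classifier.parameters(), lr=1e-3)
    bce = nn.BCEWithLogitsLoss()

    def unsupervised_step(x_clean):
        """Denoising reconstruction + adversarial regularisation on unlabelled images."""
        x_noisy = x_clean + 0.3 * torch.randn_like(x_clean)      # corrupt the input
        z = encoder(x_noisy)
        recon_loss = nn.functional.mse_loss(decoder(z), x_clean)

        # Discriminator: samples from the prior are "real", encoded codes are "fake".
        z_prior = torch.randn_like(z)
        d_loss = bce(discriminator(z_prior), torch.ones(len(z), 1)) + \
                 bce(discriminator(z.detach()), torch.zeros(len(z), 1))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()

        # Encoder/decoder: reconstruct the clean image and fool the discriminator.
        g_loss = recon_loss + 0.1 * bce(discriminator(z), torch.ones(len(z), 1))
        opt_ae.zero_grad(); g_loss.backward(); opt_ae.step()

    def supervised_step(x, y):
        """Classification loss on the small labelled subset."""
        loss = nn.functional.cross_entropy(classifier(encoder(x)), y)
        opt_cls.zero_grad(); opt_ae.zero_grad(); loss.backward()
        opt_cls.step(); opt_ae.step()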

    Medical image synthesis using generative adversarial networks: towards photo-realistic image synthesis

    This work addresses photo-realism in synthetic medical images. We introduce StencilGAN, a modified, perceptually-aware generative adversarial network that synthesizes images based on overlaid labelled masks. This technique can be a promising solution to the scarcity of imaging resources in the healthcare sector.

    An interactive evolution strategy based deep convolutional generative adversarial network for 2D video game level procedural content generation.

    The generation of desirable video game content has been a challenge in game level design and production. In this research, we propose a player-flow-experience-driven interactive latent variable evolution strategy, incorporated with a Deep Convolutional Generative Adversarial Network (DCGAN), for game content generation in a 2D Super Mario video game. Since Generative Adversarial Network (GAN) models tend to capture the high-level style of the input images by learning latent vectors, they are used here to generate game scenarios and context images. However, because GANs employ arbitrary inputs for game image generation without taking specific features into account, they generate game level images in an incoherent manner, lacking specific playable game level properties (for example, producing a broken pipe in a Mario game level image). To overcome such drawbacks, we propose a player-flow-experience-driven optimised mechanism with human intervention to guide the game level content generation process, so that only plausible and even enjoyable images are generated as candidates for the final game design and production.
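
    The interactive latent variable evolution described above can be summarised as an evolution strategy over GAN latent vectors. Below is a minimal sketch of such a loop, where `generator` and `flow_experience_score` are hypothetical stand-ins for a trained DCGAN decoder and the human/flow-based fitness; names and hyperparameters are illustrative, not the paper's.

    # Sketch of a (mu + lambda) evolution strategy over latent vectors.
    import numpy as np

    def evolve_latents(generator, flow_experience_score, latent_dim=32,
                       mu=4, lam=16, sigma=0.2, generations=20, seed=0):
        rng = np.random.default_rng(seed)
        parents = rng.standard_normal((mu, latent_dim))          # initial latent population
        for _ in range(generations):
            # Each parent produces offspring by Gaussian mutation of its latent vector.
            offspring = np.repeat(parents, lam // mu, axis=0)
            offspring += sigma * rng.standard_normal(offspring.shape)
            population = np.vstack([parents, offspring])
            # Decode every latent vector into a candidate level and score it; in the
            # interactive setting the score would come from a player or designer.
            scores = np.array([flow_experience_score(generator(z)) for z in population])
            parents = population[np.argsort(scores)[-mu:]]        # keep the best mu
        return parents

    # Toy usage with dummy stand-ins:
    dummy_generator = lambda z: z                  # would normally decode z into a level image
    dummy_score = lambda level: -np.sum(level**2)  # would normally be a playability/flow rating
    best_latents = evolve_latents(dummy_generator, dummy_score)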

    Integrating Narcissus-derived galanthamine production into traditional upland farming systems

    Alzheimer’s disease (AD) is a disorder associated with progressive degeneration of memory and cognitive function. Galantamine is a licensed treatment for AD, but supplies of galanthamine, the plant alkaloid from which it is produced, are limited. This three-year system study tested the potential to combine Narcissus-derived galanthamine production with grassland-based ruminant production. Replicate plots of permanent pasture were prepared with and without bulbs of Narcissus pseudonarcissus sown as lines into the sward. Two different fertiliser regimes were imposed. The above-ground green biomass of N. pseudonarcissus was harvested in early spring and the galanthamine yield determined. In the second harvest year, a split-plot design was implemented with lines of N. pseudonarcissus cut annually and biennially. All plots were subsequently grazed by ewes and lambs and animal performance was recorded. Incorporation of N. pseudonarcissus into grazed permanent pasture had no detrimental effects on the health or performance of the sheep which subsequently grazed the pasture. Fertiliser rate had no consistent effect on galanthamine yield. There was no difference in overall galanthamine yield when N. pseudonarcissus was cut biennially rather than annually (1.64 vs. 1.75 kg galanthamine/ha for annual combined vs. biennial cuts, respectively; s.e.d. = 0.117 kg galanthamine/ha; ns). This study verified the feasibility of a dual cropping approach to producing plant-derived galanthamine.

    Partial order label decomposition approaches for melanoma diagnosis

    Melanoma is a type of cancer that develops from the pigment-containing cells known as melanocytes. Usually occurring on the skin, its early detection and diagnosis are strongly related to survival rates. Melanoma recognition is a challenging task that nowadays is performed by well-trained dermatologists, who may produce varying diagnoses due to the task complexity. This motivates the development of automated diagnosis tools, in spite of the inherent difficulties (intra-class variation and visual similarity between melanoma and non-melanoma lesions, among others). In the present work, we propose a system combining image analysis and machine learning to detect melanoma presence and severity. The severity is assessed in terms of melanoma thickness, which is measured by the Breslow index. Previous works mainly focus on the binary problem of detecting the presence of melanoma. The system proposed in this paper goes a step further by also considering the stage of the lesion in the classification task. To do so, we extract 100 features that consider the shape, colour, pigment network and texture of the benign and malignant lesions. The problem is tackled as a five-class classification problem, where the first class represents benign lesions and the remaining four classes represent the different stages of the melanoma (via the Breslow index). Based on the problem definition, we identify the learning setting as a partial order problem, in which the patterns belonging to the different melanoma stages present an order relationship, but where there is no order arrangement with respect to the benign lesions. Under this assumption about the class topology, we design several proposals to exploit this structure and improve data preprocessing. We experimentally demonstrate that the proposals exploiting the partial order assumption achieve better performance than 12 baseline nominal and ordinal classifiers (including a deep learning model) which do not consider this partial order. To deal with class imbalance, we additionally propose specific over-sampling techniques that consider the structure of the problem when creating synthetic patterns. The experimental study is carried out with clinician-curated images from the Interactive Atlas of Dermoscopy, which eases reproducibility of the experiments. Concerning the results obtained, despite the increased complexity of the classification problem with more classes, the performance of our proposals on the binary problem is similar to that reported in the literature.
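
    As a concrete illustration of one way such a partial order can be decomposed (an assumption for illustration, not necessarily the exact scheme used in the paper), the five classes can be split into a nominal benign-vs-melanoma decision plus cumulative binary targets over the four ordered Breslow stages:

    # Illustrative label decomposition: class 0 = benign, classes 1-4 = increasing
    # Breslow stages. The benign/melanoma decision is a nominal binary task; the
    # stages get cumulative ("ordinal") binary targets so their order is preserved.
    import numpy as np

    N_STAGES = 4

    def decompose_label(y):
        """Return (is_melanoma, cumulative stage targets or None)."""
        if y == 0:                                   # benign: no ordinal information
            return 0, None
        cumulative = np.array([1 if y > k else 0 for k in range(1, N_STAGES)])
        return 1, cumulative                         # e.g. stage 3 -> [1, 1, 0]

    def recompose_label(is_melanoma, cumulative):
        """Invert the decomposition back to the original five-class label."""
        if not is_melanoma:
            return 0
        return 1 + int(np.sum(cumulative))

    for y in range(5):
        print(y, decompose_label(y), recompose_label(*decompose_label(y)))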

    Computational Methods for Image Acquisition and Analysis with Applications in Optical Coherence Tomography

    The computational approach to image acquisition and analysis plays an important role in medical imaging and optical coherence tomography (OCT). This thesis is dedicated to the development and evaluation of algorithmic solutions for better image acquisition and analysis, with a focus on OCT retinal imaging. For image acquisition, we first developed, implemented, and systematically evaluated a compressive sensing approach to image/signal acquisition for single-pixel camera architectures and an OCT system. Our evaluation provides detailed insight into implementing compressive data acquisition for those imaging systems. We further proposed a convolutional neural network model, LSHR-Net, as the first deep-learning imaging solution for the single-pixel camera. This method achieves more accurate and more hardware-efficient image acquisition and reconstruction than conventional compressive sensing algorithms. Three image analysis methods were proposed to achieve retinal OCT image analysis with high accuracy and robustness. We first proposed a framework for healthy retinal layer segmentation, consisting of several image processing algorithms specifically aimed at segmenting a total of 12 thin retinal cell layers and outperforming other segmentation methods. Furthermore, we proposed two deep-learning-based models to segment retinal oedema lesions in OCT images, with particular attention to small-scale datasets. The first model leverages transfer learning for oedema segmentation and achieves better accuracy than comparable methods. Based on the meta-learning concept, the second model was designed as a solution for general medical image segmentation. The results of this work indicate that our model can be applied to retinal OCT images and other small-scale medical image data, such as skin cancer images, as demonstrated in this thesis.
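
    For the compressive acquisition part, the generic measurement-and-recovery model (y = Φx with a random sensing matrix, followed by sparse recovery) can be sketched as follows; this is a textbook illustration, not the thesis implementation, and all sizes and parameters are illustrative.

    # Minimal compressive-sensing sketch: a sparse signal is measured with a random
    # sensing matrix (y = Phi @ x), as in a single-pixel camera, and recovered with
    # a few iterations of ISTA (iterative shrinkage-thresholding).
    import numpy as np

    rng = np.random.default_rng(0)
    n, m, k = 256, 80, 8                             # signal length, measurements, sparsity

    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)   # sparse ground truth

    Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix
    y = Phi @ x_true                                 # compressive measurements (m << n)

    def ista(Phi, y, lam=0.02, n_iter=500):
        """Solve min_x 0.5*||Phi x - y||^2 + lam*||x||_1 by proximal gradient steps."""
        L = np.linalg.norm(Phi, 2) ** 2              # Lipschitz constant of the gradient
        x = np.zeros(Phi.shape[1])
        for _ in range(n_iter):
            z = x - Phi.T @ (Phi @ x - y) / L        # gradient step
            x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
        return x

    x_hat = ista(Phi, y)
    print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))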

    Deep Domain Adaptation Learning Framework for Associating Image Features to Tumour Gene Profile

    While medical imaging and general pathology are routine in cancer diagnosis, genetic sequencing is not always accessible due to the strong phenotypic and genetic heterogeneity of human cancers. Image-genomics integrates medical imaging and genetics to provide a complementary approach to optimise cancer diagnosis by associating tumour imaging traits with clinical data, and it has demonstrated its potential in identifying imaging surrogates for tumour biomarkers. However, existing image-genomics research has focused on quantifying tumour visual traits according to human understanding, which may not be optimal across different cancer types. The challenge hence lies in extracting optimised imaging representations in an objective, data-driven manner. Such an approach requires large volumes of annotated image data that are difficult to acquire. We propose a deep domain adaptation learning framework for associating image features with tumour genetic information, exploiting the ability of domain adaptation techniques to learn relevant image features from close knowledge domains. Our framework leverages the current state-of-the-art in image object recognition to provide image features that encode subtle variations of tumour phenotypic characteristics via domain adaptation. The framework was evaluated against the current state-of-the-art in: (i) tumour histopathology image classification; and (ii) image-genomics associations. It demonstrated improved accuracy of tumour classification, as well as additional data-derived representations of tumour phenotypic characteristics that exhibit strong image-genomics associations. This thesis advances image-genomics research and indicates its potential to reveal additional imaging surrogates for genetic biomarkers, which may facilitate cancer diagnosis.
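
    The basic mechanism the framework builds on, reusing an ImageNet-pretrained object-recognition backbone and adapting it to histopathology classes, can be sketched as below. The class count, learning rate and choice of ResNet-18 are assumptions for illustration (and a recent torchvision is assumed); this is not the thesis framework itself.

    # Sketch of pretrained-backbone adaptation for histopathology image classification.
    import torch
    import torch.nn as nn
    from torchvision import models

    num_tumour_classes = 4                              # assumption for illustration

    backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    for p in backbone.parameters():                     # freeze generic ImageNet features
        p.requires_grad = False
    backbone.fc = nn.Linear(backbone.fc.in_features, num_tumour_classes)  # new task head

    optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
    criterion = nn.CrossEntropyLoss()

    def train_step(images, labels):
        """One adaptation step on labelled histopathology images (NCHW tensor, label tensor)."""
        loss = criterion(backbone(images), labels)
        optimizer.zero_grad(); loss.backward(); optimizer.step()
        return loss.item()

    # The penultimate-layer activations can then serve as data-driven image features
    # for downstream image-genomics association analysis.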

    How well do self-supervised models transfer to medical imaging?

    Self-supervised learning approaches have seen success transferring between similar medical imaging datasets; however, there has been no large-scale attempt to compare the transferability of self-supervised models against each other on medical images. In this study, we compare the generalisability of seven self-supervised models, two of which were trained in-domain, against supervised baselines across eight different medical datasets. We find that ImageNet-pretrained self-supervised models are more generalisable than their supervised counterparts, scoring up to 10% better on medical classification tasks. The two in-domain pretrained models outperformed the other models by over 20% on in-domain tasks; however, they suffered a significant loss of accuracy on all other tasks. Our investigation of the feature representations suggests that this trend may be due to the models learning to focus too heavily on specific areas.
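
    One common protocol for this kind of transferability comparison (a sketch under that assumption, not necessarily the exact setup of this study) is a linear probe on frozen encoder features; the encoder names and random feature arrays below are placeholders.

    # Linear-probe evaluation: freeze each pretrained encoder, extract features on a
    # medical dataset, and compare linear classification accuracy across encoders.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    def linear_probe_accuracy(features, labels, seed=0):
        """Accuracy of a linear classifier trained on frozen encoder features."""
        X_tr, X_te, y_tr, y_te = train_test_split(features, labels, test_size=0.3,
                                                  random_state=seed, stratify=labels)
        clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
        return clf.score(X_te, y_te)

    # Dummy usage: in practice `encoders` would map model names to feature arrays
    # computed on the same medical dataset; random features stand in here.
    rng = np.random.default_rng(0)
    labels = rng.integers(0, 2, size=200)
    encoders = {"simclr_imagenet": rng.standard_normal((200, 128)),
                "supervised_imagenet": rng.standard_normal((200, 128))}
    for name, feats in encoders.items():
        print(name, linear_probe_accuracy(feats, labels))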