
    Preliminary Result: AI-Generated Neutrophil Image using Deep Convolution GAN for Data Augmentation.

    GANs (Generative Adversarial Networks) have advanced impressively, producing photorealistic visuals that imitate the content of the datasets they were trained on. A GAN is essentially two neural networks that feed into each other: one generates increasingly realistic data, while the other improves its capacity to classify such data over time. A recurring question in medical imaging is whether GANs can be as effective at producing usable medical data as they are at producing realistic images. Deep learning models are data-hungry by nature and require many example images to train well. Because medical images are scarce, data augmentation with a GAN can generate additional images. This paper aims to generate microscopic peripheral blood cell images, specifically neutrophils, as a form of data augmentation to optimize haematological diagnosis. To accomplish this, we developed a Deep Convolutional GAN (DCGAN) and trained it on 3329 neutrophil images. As a preliminary result, we present our work on the impact of different learning rates and optimizers on the images generated by the DCGAN and on its training losses. The quality of the generated images is still far from that of the dataset we want to imitate, and the convergence of the model is slow and unstable. Nevertheless, the model produced reasonable images at points during training where it had captured a rough idea of the neutrophil structure.
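    The adversarial interplay the abstract describes (the two losses whose balance the learning-rate and optimizer experiments probe) can be sketched numerically. Below is a minimal, hypothetical illustration of the standard GAN losses computed from discriminator output probabilities; it is not the paper's DCGAN implementation, just the objective its training curves track:

    ```python
    import numpy as np

    def gan_losses(d_real, d_fake, eps=1e-8):
        """Standard (non-saturating) GAN losses from discriminator outputs.

        d_real: discriminator probabilities on real neutrophil images, in (0, 1).
        d_fake: discriminator probabilities on generator samples, in (0, 1).
        """
        d_real = np.asarray(d_real, dtype=float)
        d_fake = np.asarray(d_fake, dtype=float)
        # Discriminator: push d_real toward 1 and d_fake toward 0.
        d_loss = -np.mean(np.log(d_real + eps) + np.log(1.0 - d_fake + eps))
        # Generator (non-saturating form): push d_fake toward 1.
        g_loss = -np.mean(np.log(d_fake + eps))
        return d_loss, g_loss

    # A confident, correct discriminator yields a small d_loss but a large
    # g_loss -- the imbalance that slow, unstable convergence reflects.
    d_loss, g_loss = gan_losses([0.9, 0.95], [0.1, 0.05])
    ```

    Training alternates gradient steps on these two losses; when one network's learning rate outpaces the other's, the losses diverge rather than settle, which is the instability the preliminary results examine.
    
    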

    INPAINTING OF DENTAL PANORAMIC TOMOGRAPHY VIA DEEP LEARNING METHOD

    The tradition of image inpainting has existed for a long time; it is used to restore old and corrupted images. More recently, progress in deep learning has allowed artificial neural networks to perform inpainting on clinical images to reduce image artifacts. In this paper, we demonstrate how various neural network models can perform inpainting on dental panoramic tomography images acquired with cone-beam computed tomography (CBCT). Experiments were conducted to compare the outputs of three artificial neural network models: a shallow convolutional autoencoder, a deep convolutional autoencoder, and a U-Net architecture. The dataset was taken from an open online dataset provided by the Noor Medical Imaging Center. Qualitative assessment shows that the U-Net model reproduces the best output images with minimal blurriness. This result is also supported by quantitative measurement: the U-Net model has the smallest root mean squared error and the highest structural similarity index measure. The experimental results give an early indication that it is feasible to use U-Net to repair and reduce image artifacts in dental panoramic tomography.
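    The two quantitative metrics used to rank the models, root mean squared error and the structural similarity index measure, can be sketched as follows. This is an assumed numpy illustration with a simplified single-window SSIM (the standard SSIM averages the same formula over local windows), not the paper's evaluation code:

    ```python
    import numpy as np

    def rmse(a, b):
        """Root mean squared error between two images (lower is better)."""
        return float(np.sqrt(np.mean((a - b) ** 2)))

    def ssim_global(a, b, data_range=1.0):
        """Global (single-window) SSIM in [-1, 1]; 1 means identical images.
        Uses the conventional stabilizing constants c1, c2."""
        c1 = (0.01 * data_range) ** 2
        c2 = (0.03 * data_range) ** 2
        mu_a, mu_b = a.mean(), b.mean()
        var_a, var_b = a.var(), b.var()
        cov = ((a - mu_a) * (b - mu_b)).mean()
        return float(((2 * mu_a * mu_b + c1) * (2 * cov + c2)) /
                     ((mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2)))

    # Synthetic stand-ins for a ground-truth image and a blurry reconstruction:
    rng = np.random.default_rng(0)
    img = rng.random((64, 64))
    degraded = np.clip(img + rng.normal(0.0, 0.05, img.shape), 0.0, 1.0)

    # A perfect reconstruction scores RMSE 0 and SSIM 1; degradation
    # raises RMSE and lowers SSIM, which is why the best model is the
    # one with the smallest RMSE and the highest SSIM.
    ```

    Under this scoring, comparing each model's output against the uncorrupted ground truth gives exactly the ranking the abstract reports.
    
    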