4 research outputs found

    Classifier-guided multi-style tile image generation method

    No full text
    Image generative models for ceramic tile design lack style diversity and offer little control over the styles of high-quality generated images. Series of ceramic tiles sharing the same texture but rendered in distinct styles are hard to find, so users must choose from a limited number of single-style tiles. Although Generative Adversarial Networks (GANs) can slightly increase the style diversity of tile images, style controllability remains very weak. In addition, concatenating generated tile image blocks into a larger texture region easily produces seams at the block boundaries, which degrades image quality. In this paper, we propose a style transfer method for ceramic tile texture generation that combines a classifier-guided StyleGAN with an AdaIN-GAN to overcome these limitations. First, we introduce a new conditional classifier-guided module into StyleGAN. Guided by an input condition vector, the generator produces images whose tile-style characteristics match that vector, and fusing condition vectors yields gradient effects between tile styles, further expanding style diversity. Second, we use the AdaIN-GAN to colorize the original texture in a given tile style. The style images generated by StyleGAN serve as the training dataset, which improves the generalization ability of the model and achieves style transfer with fixed texture features but markedly diverse styles. Finally, a linear weighted image stitching method is adopted: an adaptive-kernel linear weighting matrix covers and splices arbitrary seams between image blocks, eliminating the seams and enhancing image continuity. When applied to high-resolution tile image generation, the method still maintains high continuity and clear image quality. Extensive experiments and human evaluation confirm the superior performance of the proposed method compared with other state-of-the-art (SOTA) methods. The results also verify that the tile images generated by the proposed algorithm have diverse styles and meet the design requirement for tile style diversity.
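    The abstract gives no implementation details; the following is a minimal NumPy sketch of two generic building blocks it names: AdaIN-style feature re-normalization and linearly weighted blending of overlapping tile blocks to hide seams. The function names, tensor shapes, and the simple one-dimensional ramp are assumptions for illustration, not the paper's exact adaptive-kernel weighting matrix.

```python
import numpy as np

def adain(content_feat: np.ndarray, style_feat: np.ndarray, eps: float = 1e-5) -> np.ndarray:
    """Adaptive instance normalization: re-scale a (C, H, W) content feature map
    so its per-channel mean/std match those of the style feature map."""
    c_mean = content_feat.mean(axis=(1, 2), keepdims=True)
    c_std = content_feat.std(axis=(1, 2), keepdims=True) + eps
    s_mean = style_feat.mean(axis=(1, 2), keepdims=True)
    s_std = style_feat.std(axis=(1, 2), keepdims=True)
    return s_std * (content_feat - c_mean) / c_std + s_mean

def blend_overlap(left: np.ndarray, right: np.ndarray, overlap: int) -> np.ndarray:
    """Stitch two (H, W, 3) image blocks that share `overlap` columns, using a
    linear weight ramp across the overlap so intensities change gradually
    instead of jumping at a hard boundary (illustrative seam removal)."""
    ramp = np.linspace(1.0, 0.0, overlap)[None, :, None]  # weights for the left block
    seam = left[:, -overlap:] * ramp + right[:, :overlap] * (1.0 - ramp)
    return np.concatenate([left[:, :-overlap], seam, right[:, overlap:]], axis=1)
```

    A blend of this kind is the basic intuition behind seam-free stitching; the paper's adaptive kernel presumably generalizes the fixed ramp so that arbitrarily shaped seams can be covered.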

    Multi-model Ensemble Learning Architecture Based on 3D CNN for Lung Nodule Malignancy Suspiciousness Classification

    No full text
    © 2020, Society for Imaging Informatics in Medicine. Classifying lung nodules as benign or malignant in chest CT images is a key step in the diagnosis of early-stage lung cancer and an effective way to improve patients' survival rates. However, because lung nodules are heterogeneous and visually similar to their surrounding tissues, it is difficult to build a robust classification model with conventional deep learning–based diagnostic methods. To address this problem, we propose a multi-model ensemble learning architecture based on 3D convolutional neural networks (MMEL-3DCNN). The approach incorporates three key ideas: (1) a multi-model network architecture that adapts well to the heterogeneity of lung nodules; (2) an input formed by concatenating the intensity image under the nodule mask, the original image, and an enhanced version of the image, which helps the model extract higher-level features with more discriminative capacity; and (3) dynamic selection of the model matching the nodule size at prediction time, which effectively improves the generalization ability of the model. In addition, ensemble learning is applied to further improve the robustness of the nodule classification model. The proposed method was experimentally verified on the public LIDC-IDRI dataset, and the results show that the MMEL-3DCNN architecture obtains satisfactory classification performance.
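    The abstract likewise omits code; as a rough, hypothetical sketch of ideas (2) and (3), the snippet below stacks the three input cues as channels of one 3D volume, picks a sub-model by nodule size, and soft-votes an ensemble. The size bins, function names, and the assumption that each model is a callable returning a malignancy score are illustrative only, not the paper's configuration.

```python
import numpy as np

def build_input(ct_patch: np.ndarray, nodule_mask: np.ndarray,
                enhanced: np.ndarray) -> np.ndarray:
    """Stack the three cues described in the abstract as channels of one 3D
    volume: the intensity image under the nodule mask, the original patch,
    and an enhanced version of the patch. All inputs are (D, H, W)."""
    intensity = ct_patch * nodule_mask
    return np.stack([intensity, ct_patch, enhanced], axis=0)  # (3, D, H, W)

def select_model(models_by_size: dict, nodule_diameter_mm: float):
    """Pick the sub-network trained for the matching nodule-size range.
    The size bins here are illustrative, not the paper's."""
    if nodule_diameter_mm < 10:
        return models_by_size["small"]
    if nodule_diameter_mm < 20:
        return models_by_size["medium"]
    return models_by_size["large"]

def ensemble_predict(models, volume: np.ndarray) -> float:
    """Average the malignancy scores of several 3D CNNs (simple soft voting)."""
    scores = [m(volume) for m in models]
    return float(np.mean(scores))
```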