
    Various generative adversarial networks model for synthetic prohibitory sign image generation

    Synthetic image generation is a critical issue for computer vision. Traffic sign images synthesized from standard models are commonly used to build recognition algorithms, providing a low-cost way to study a variety of research problems. Convolutional Neural Networks (CNNs) achieve excellent traffic sign detection and recognition when sufficient annotated training data are available, and the consistency of the entire vision system depends on these networks. However, obtaining traffic sign datasets for most countries in the world is difficult. This work uses several generative adversarial network (GAN) models to construct synthetic sign images: Least Squares Generative Adversarial Networks (LSGAN), Deep Convolutional Generative Adversarial Networks (DCGAN), and Wasserstein Generative Adversarial Networks (WGAN). The paper also examines the quality of the images produced by the different GANs under different parameter settings; each experiment uses a specific number of input images at a specific scale. The Structural Similarity Index (SSIM) and Mean Squared Error (MSE) are used to measure image consistency, with SSIM values compared between each generated image and its corresponding real image. The generated images show a stronger similarity to the real images when more training images are used. LSGAN outperformed the other GAN models in the experiment, achieving the maximum SSIM values with 200 images as inputs, 2000 epochs, and an image size of 32 × 32.
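
    As a rough illustration of the evaluation step described above, the sketch below computes SSIM and MSE between one real and one generated 32 × 32 grayscale image using scikit-image. The random arrays are placeholders standing in for actual sign images; this is a minimal sketch of the metric computation, not code from the paper.

```python
# Minimal sketch: measure the consistency of a GAN-generated traffic-sign image
# against its real counterpart with SSIM and MSE (32x32 grayscale assumed).
import numpy as np
from skimage.metrics import structural_similarity, mean_squared_error

def image_consistency(real, generated):
    """Return (SSIM, MSE) for two images rescaled to [0, 1]."""
    real = real.astype(np.float64) / 255.0
    generated = generated.astype(np.float64) / 255.0
    ssim = structural_similarity(real, generated, data_range=1.0)
    mse = mean_squared_error(real, generated)
    return ssim, mse

# Hypothetical usage with random placeholders instead of real data:
real_img = np.random.randint(0, 256, (32, 32), dtype=np.uint8)
fake_img = np.random.randint(0, 256, (32, 32), dtype=np.uint8)
print(image_consistency(real_img, fake_img))
```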

    Recent Advances in Transfer Learning for Cross-Dataset Visual Recognition: A Problem-Oriented Perspective

    This paper takes a problem-oriented perspective and presents a comprehensive review of transfer learning methods, both shallow and deep, for cross-dataset visual recognition. Specifically, it categorises cross-dataset recognition into seventeen problems based on a set of carefully chosen data and label attributes. Such a problem-oriented taxonomy has allowed us to examine how different transfer learning approaches tackle each problem and how well each problem has been researched to date. This comprehensive, problem-oriented review of advances in transfer learning has not only revealed the challenges in transfer learning for visual recognition, but also identified the problems (eight of the seventeen) that have scarcely been studied. The survey thus presents an up-to-date technical review for researchers, as well as a systematic approach and a reference for machine learning practitioners to categorise a real problem and look up a possible solution accordingly.

    Troika Generative Adversarial Network (T-GAN): A Synthetic Image Generator That Improves Neural Network Training for Handwriting Classification

    Training an artificial neural network for handwriting classification requires a sufficiently large annotated dataset in order to avoid overfitting. In the absence of sufficient instances, data augmentation techniques are normally considered. In this paper, we propose the troika generative adversarial network (T-GAN) for data augmentation to address the scarcity of publicly available labeled handwriting datasets. T-GAN has three generator subnetworks designed with partial weight-sharing so that they learn the joint distribution of three specific domains. We used T-GAN to augment data from a subset of the IAM Handwriting Database and compared it with other data augmentation techniques by measuring the improvement each technique brought to handwriting classification accuracy in three types of artificial neural networks (ANNs): a deep ANN, a convolutional neural network (CNN), and a deep CNN. The T-GAN-based augmentation yielded the highest accuracy improvement for each of the three ANN classifier types, outperforming the standard techniques of image rotation, affine transformation, and their combination, as well as the technique that uses another GAN-based model, the coupled GAN (CoGAN). Furthermore, a paired t-test between the 10-fold cross-validation results of T-GAN and CoGAN, the second-best augmentation technique in this study, on a deep CNN classifier confirmed the superiority of the T-GAN-based augmentation. Finally, when the synthetic instances generated by T-GAN were further enhanced with pepper noise removal and a median filter, the classification accuracies of the trained CNN and deep CNN classifiers improved further to 93.54% and 95.45%, respectively, compared with 67.43% and 68.32% for the same two classifiers trained on the original, unaugmented dataset. Thus, data augmentation using T-GAN, coupled with the two image noise removal techniques mentioned, can be a preferred pre-training technique for augmenting handwriting datasets with insufficient data samples.
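
    The weight-sharing idea can be outlined as follows. This is an illustrative PyTorch sketch under assumed layer sizes and latent dimension; it shows only the three generator heads over a shared trunk, not the authors' full T-GAN (which also includes discriminators and the adversarial training loop).

```python
# Illustrative sketch: three generator sub-networks sharing their early layers,
# so that common structure across three domains is learned jointly.
import torch
import torch.nn as nn

class TroikaGenerators(nn.Module):
    def __init__(self, latent_dim=100, img_pixels=28 * 28):
        super().__init__()
        # Shared trunk: parameters common to all three generators.
        self.shared = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(inplace=True),
            nn.Linear(256, 512), nn.ReLU(inplace=True),
        )
        # Three domain-specific heads, one per generator sub-network.
        self.heads = nn.ModuleList(
            [nn.Sequential(nn.Linear(512, img_pixels), nn.Tanh()) for _ in range(3)]
        )

    def forward(self, z):
        h = self.shared(z)
        return [head(h) for head in self.heads]  # one image batch per domain

z = torch.randn(16, 100)                 # a batch of latent vectors
fake_a, fake_b, fake_c = TroikaGenerators()(z)
```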

    Investigating human-perceptual properties of "shapes" using 3D shapes and 2D fonts

    Shapes are generally used to convey meaning. They are used in video games, films, and other multimedia in diverse ways. 3D shapes may be destined for virtual scenes or represent objects to be constructed in the real world. Fonts add character to an otherwise plain block of text, allowing the writer to make important points more visually prominent or distinct from other text, and they can indicate the structure of a document at a glance. Rather than studying shapes through traditional geometric shape descriptors, we provide alternative methods to describe and analyse shapes through the lens of human perception, via the concepts of Schelling Points and Image Specificity. Schelling Points are the choices people make when they aim to match what they expect others to choose, but cannot communicate with them to agree on an answer. We study whole-mesh selections in this setting, where Schelling Meshes are the most frequently selected shapes. The key idea behind Image Specificity is that different images evoke different descriptions, but ‘specific’ images yield more consistent descriptions than others; we apply Specificity to 2D fonts. We show that Specificity can be learned and predicted for fonts, and the Schelling concept for 3D shapes, using a depth image-based convolutional neural network. Results are shown for a range of fonts and 3D shapes, and we demonstrate that font Specificity and the Schelling Meshes concept are useful for visualisation, clustering, and search applications. Overall, we find that each concept captures similarities between shapes of its respective type, even when there are discontinuities between the shape geometries themselves; the ‘context’ of these similarities is an abstract or subjective meaning that is consistent across different people.
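
    To make the prediction side concrete, the sketch below shows a small PyTorch CNN regressor that maps a rendered depth image to a single perceptual score (for example, a Schelling selection frequency or a Specificity value). The input resolution, layer sizes, and training details are assumptions for illustration, not the network used in the paper.

```python
# Illustrative sketch: CNN regressor from a 1x64x64 depth render to a scalar score.
import torch
import torch.nn as nn

class DepthScoreCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                       # 64 -> 32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                       # 32 -> 16
        )
        self.regressor = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 128), nn.ReLU(inplace=True),
            nn.Linear(128, 1),                     # predicted perceptual score
        )

    def forward(self, depth):
        return self.regressor(self.features(depth))

depth_batch = torch.randn(8, 1, 64, 64)            # placeholder depth renders
print(DepthScoreCNN()(depth_batch).shape)           # torch.Size([8, 1])
```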