
    ASCII Art Classification Model by Transfer Learning and Data Augmentation

    In this study, we propose an ASCII art category classification method based on transfer learning and data augmentation. ASCII art is a form of nonverbal expression that visually conveys emotions and intentions. Similar expressions exist, such as emoticons and pictograms, but most consist of a single character or are embedded inline in a sentence, whereas ASCII art spans various styles, including dot-art and line-art illustration. Because ASCII art can depict almost any object, its categories are highly diverse. Many existing image classification algorithms use color information; however, since most ASCII art is composed from character sets, no color information is available for categorization. We built an ASCII art category classifier trained on grayscale edge images and the ASCII art images converted from them, and fine-tuned the pre-trained networks VGG16, ResNet-50, Inception v3, and Xception for the task. In experiments combining VGG16 fine-tuning with data augmentation, an accuracy of 80% or higher was obtained for the "human" category.
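    A minimal sketch of the kind of fine-tuning setup the abstract describes, written in Keras. The number of categories, input size, augmentation parameters, and learning rate are illustrative assumptions, not the paper's reported configuration.

```python
# Sketch only: assumed category count, input size, and hyperparameters.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

NUM_CATEGORIES = 5         # assumed number of ASCII-art categories
IMG_SHAPE = (224, 224, 3)  # grayscale edge images replicated to 3 channels

# VGG16 pre-trained on ImageNet, without its classification head.
base = VGG16(weights="imagenet", include_top=False, input_shape=IMG_SHAPE)
base.trainable = False     # freeze the convolutional base at first

model = models.Sequential([
    # Geometric-only augmentation: ASCII art carries no color information,
    # so color jitter would be pointless (our assumption, not the paper's).
    layers.RandomTranslation(0.1, 0.1, input_shape=IMG_SHAPE),
    layers.RandomZoom(0.1),
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dense(NUM_CATEGORIES, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# Fine-tuning proper would later unfreeze the top VGG16 blocks and
# continue training at a lower learning rate.
```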

    MetaSleepLearner: A Pilot Study on Fast Adaptation of Bio-signals-Based Sleep Stage Classifier to New Individual Subject Using Meta-Learning.

    Identifying sleep stages from bio-signals requires the time-consuming and tedious labor of skilled clinicians. Deep learning approaches have been introduced to tackle automatic sleep stage classification. However, replacing clinicians with an automatic system remains difficult because individual bio-signals differ in many respects, making the model's performance inconsistent across incoming individuals. We therefore explore the feasibility of a novel approach capable of assisting clinicians and lessening their workload. We propose a transfer learning framework, MetaSleepLearner, based on Model-Agnostic Meta-Learning (MAML), to transfer sleep staging knowledge acquired on a large dataset to new individual subjects. The framework requires clinicians to label only a few sleep epochs, leaving the remainder to be handled by the system. Layer-wise Relevance Propagation (LRP) was also applied to examine the learning course of our approach. Across all acquired datasets, MetaSleepLearner achieved a 5.4% to 17.7% improvement over the conventional approach, with a statistically significant difference between the means of the two approaches. Model interpretation after adaptation to each subject also confirmed that the performance gains reflected reasonable learning. MetaSleepLearner outperformed the conventional approaches when fine-tuned on recordings of both healthy subjects and patients. This is the first work to investigate MAML as a non-conventional pre-training method, opening a path toward human-machine collaboration in sleep stage classification and easing clinicians' labelling burden to only several epochs rather than an entire recording.
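    A minimal sketch of a MAML inner/outer loop of the kind the abstract invokes, written in PyTorch. The model, learning rates, and per-subject task sampling are illustrative assumptions, not MetaSleepLearner's actual architecture or EEG pipeline.

```python
# Sketch only: assumes PyTorch 2.x (torch.func) and tasks given as
# (support_x, support_y, query_x, query_y) tensors per subject.
import torch
import torch.nn.functional as F

def maml_meta_step(model, meta_opt, tasks, inner_lr=0.01, inner_steps=1):
    """One meta-update over a batch of tasks (here, individual subjects)."""
    meta_loss = 0.0
    for support_x, support_y, query_x, query_y in tasks:
        # Inner loop: adapt a functional copy of the parameters to the
        # support set (the few epochs a clinician would label per subject).
        fast = dict(model.named_parameters())
        for _ in range(inner_steps):
            loss = F.cross_entropy(
                torch.func.functional_call(model, fast, (support_x,)),
                support_y)
            grads = torch.autograd.grad(loss, list(fast.values()),
                                        create_graph=True)
            fast = {name: p - inner_lr * g
                    for (name, p), g in zip(fast.items(), grads)}
        # Outer loop: the adapted parameters are scored on held-out query
        # epochs; gradients flow back to the shared initial weights.
        meta_loss = meta_loss + F.cross_entropy(
            torch.func.functional_call(model, fast, (query_x,)), query_y)
    meta_opt.zero_grad()
    meta_loss.backward()
    meta_opt.step()
```

    After meta-training, adapting to a new subject reuses only the inner loop on that subject's few labelled epochs, which is what lets the clinician stop after labelling a handful of them.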

    Learning Finer-class Networks for Universal Representations

    Many real-world visual recognition use-cases cannot directly benefit from state-of-the-art CNN-based approaches because of a lack of annotated data. The usual way to deal with this is to transfer a representation pre-learned on a large annotated source-task to a target-task of interest. This raises the question of how "universal" the original representation is, that is, how directly it adapts to many different target-tasks. To improve such universality, the state of the art trains networks on a diversified source problem, modified by adding either generic or specific categories to the initial set of categories. In this vein, we propose a method that exploits classes finer than the most specific ones available, for which no annotation exists. We rely on unsupervised learning and a bottom-up split-and-merge strategy. We show that our method learns more universal representations than the state of the art, leading to significantly better results on 10 target-tasks from multiple domains, using several network architectures, either alone or combined with networks learned at a coarser semantic level.
    Comment: British Machine Vision Conference (BMVC) 2018.
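    The "split" half of the bottom-up strategy can be pictured as unsupervised clustering inside each annotated class. The sketch below is one reading of that idea, assuming pre-extracted CNN features; the function name, cluster count, and merge threshold are hypothetical, not the paper's procedure.

```python
# Sketch only: split each coarse class into finer pseudo-classes by
# clustering its features, folding undersized clusters back ("merge").
import numpy as np
from sklearn.cluster import KMeans

def split_into_finer_classes(features, labels, n_splits=2, min_size=20):
    """Relabel each coarse class into up to n_splits finer pseudo-classes."""
    finer = np.empty_like(labels)
    next_id = 0
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        if len(idx) < min_size * n_splits:
            finer[idx] = next_id          # too small to split: keep whole
            next_id += 1
            continue
        sub = KMeans(n_clusters=n_splits, n_init=10).fit_predict(features[idx])
        # "Merge": sub-clusters below min_size fold into the largest one.
        counts = np.bincount(sub, minlength=n_splits)
        biggest = counts.argmax()
        sub[np.isin(sub, np.where(counts < min_size)[0])] = biggest
        for s in np.unique(sub):
            finer[idx[sub == s]] = next_id
            next_id += 1
    return finer  # the network is then trained on these finer labels
```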