
    Emotion Estimation Method Based on Emoticon Image Features and Distributed Representations of Sentences

    This paper proposes an emotion recognition method for tweets containing emoticons that uses both the emoticon's image features and the tweet's language features. Some existing methods register emoticons and their facial-expression categories in a dictionary, while others recognize emoticon facial expressions from the emoticons' constituent elements. However, highly accurate emotion recognition requires combining the features of sentences and emoticons. We therefore propose a model that extracts the shape features of emoticons from their image data and recognizes emotions from a feature vector that combines these image features with features extracted from the text of the tweets. Evaluation experiments confirm that the proposed method achieves high accuracy and is more effective than methods that use text features only.
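The combination step described above can be sketched as a simple concatenation of two pre-extracted feature vectors; the vectors and their dimensions below are hypothetical, not the paper's actual features.

```python
# Sketch: combining emoticon image features with sentence features, assuming
# both have already been extracted as fixed-length vectors (made-up values).
def combine_features(image_vec, text_vec):
    """Concatenate image and text feature vectors into one model input."""
    return image_vec + text_vec  # list concatenation

image_features = [0.12, 0.85, 0.03]   # e.g. emoticon shape features
text_features = [0.40, 0.22]          # e.g. sentence representation (truncated)
combined = combine_features(image_features, text_features)
print(len(combined))  # → 5
```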

    ASCII Art Classification Model by Transfer Learning and Data Augmentation

    In this study, we propose an ASCII art category classification method based on transfer learning and data augmentation. ASCII art is a form of nonverbal expression that visually conveys emotions and intentions. Similar expressions exist, such as emoticons and pictograms, but most are represented by a single character or embedded inline in a statement. ASCII art is drawn in various styles, including dot-art and line-art illustration, and can represent almost any object, so its categories are very diverse. Many existing image classification algorithms rely on color information; however, since most ASCII art is written with character sets alone, no color information is available for categorization. We created an ASCII art category classifier trained on grayscale edge images and the ASCII art images generated from them. We also fine-tuned the pre-trained VGG16, ResNet-50, Inception v3, and Xception networks for the classification task. Fine-tuning VGG16 with data augmentation yielded an accuracy of 80% or more for the “human” category.
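The grayscale edge preprocessing mentioned above can be illustrated with a minimal gradient-magnitude filter; the paper does not specify which edge detector it uses, so the finite-difference operator below is purely an assumption.

```python
# Sketch of grayscale edge extraction using a simple finite-difference
# gradient (an assumption; the paper's actual edge detector is not specified).
def edge_image(gray):
    """Return a gradient-magnitude map for a 2D grayscale image (list of lists)."""
    h, w = len(gray), len(gray[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h - 1):
        for x in range(w - 1):
            gx = gray[y][x + 1] - gray[y][x]  # horizontal intensity change
            gy = gray[y + 1][x] - gray[y][x]  # vertical intensity change
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

img = [[0, 0, 255],
       [0, 0, 255],
       [0, 0, 255]]
edges = edge_image(img)  # strong response along the vertical boundary
```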

    Preference Analysis Method Applying Relationship between Electroencephalogram Activities and Egogram in Prefrontal Cortex Activities: How to Collaborate between Engineering Techniques and Psychology

    This paper introduces a preference analysis method based on electroencephalogram (EEG) analysis of prefrontal cortex activity. The proposed method exploits the relationship between EEG activity and the egogram. The EEG is sensed at a single point and recorded with a dry-type sensor and a small number of electrodes. The analysis applies feature mining and clustering to EEG patterns using a self-organizing map (SOM). Because prefrontal EEG activity shows individual differences, we construct the SOM input as a feature vector that takes these differences into account: it concatenates the extracted EEG feature vector with a human character vector, i.e., the personality quantified through ego analysis using psychological testing. In preprocessing, we extract the EEG feature vector by computing the time average in each frequency band: θ, low-β, and high-β. To demonstrate the effectiveness of the proposed method, we performed experiments using real EEG data. The results show that the EEG pattern classification accuracy is higher than before the input vector was improved.
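One SOM training step on such a concatenated input vector can be sketched as follows. This is a deliberately minimal version: the neighbourhood update around the winner is omitted, and the node count, learning rate, and feature values are illustrative assumptions, not the paper's settings.

```python
# Minimal SOM step: find the best matching unit (BMU) for an input vector that
# concatenates EEG band features with egogram scores, then pull its weights
# toward the input. Neighbourhood updates are omitted for brevity.
def bmu_index(nodes, x):
    """Index of the best matching unit (smallest squared Euclidean distance)."""
    def dist2(w):
        return sum((wi - xi) ** 2 for wi, xi in zip(w, x))
    return min(range(len(nodes)), key=lambda i: dist2(nodes[i]))

def som_step(nodes, x, lr=0.5):
    """Update the winning node's weights in place; return its index."""
    i = bmu_index(nodes, x)
    nodes[i] = [w + lr * (xi - w) for w, xi in zip(nodes[i], x)]
    return i

# theta / low-beta / high-beta averages followed by egogram scores (made up)
x = [0.3, 0.6, 0.1, 0.8, 0.2]
nodes = [[0.0] * 5, [1.0] * 5]
winner = som_step(nodes, x)
```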

    An Illustration Image Classification Focusing on Infrequent Colors

    Illustration images used in comics or animation have a style that constitutes an emotional feature, represented by the various elements of the image. Few research studies have treated the style of illustration images. In this paper, we model the style with an image feature and classify illustration images by style. We hypothesized that infrequently appearing colors represent the style of an illustration image, and therefore proposed a method for creating a color histogram that emphasizes such colors, which we term the “infrequency histogram” (IF-hist). To test its effectiveness, we experimented with classifying two styles, defined as “For boys” and “For girls,” based on the IF-hist. The results indicate that, when using the IF-hist, the precision of the classification result for the style “For girls” is 93%, which is 50% higher than when using an ordinary color histogram. The precision of the other classification results also improved.
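An inverse-frequency-weighted histogram in the spirit of the IF-hist above can be sketched as follows; the actual weighting function in the paper is not reproduced here, so the emphasis formula is purely an assumption for illustration.

```python
# Sketch: weight each color bin so that rarely occurring colors dominate the
# histogram (assumed weighting; not the paper's exact IF-hist formula).
from collections import Counter

def if_hist(pixels):
    """Map each color to 1 - relative frequency, emphasizing rare colors."""
    counts = Counter(pixels)
    total = len(pixels)
    return {color: 1.0 - n / total for color, n in counts.items()}

pixels = ["white"] * 90 + ["pink"] * 10
hist = if_hist(pixels)
print(hist["pink"] > hist["white"])  # → True (the rare color gets more weight)
```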

    ASCII Art Classification based on Deep Neural Networks Using Image Feature of Characters

    In recent years, many non-verbal expressions have been used on social media. ASCII art (AA) is an expression that uses characters as a visual technique. In this paper, we set up an experiment to classify AA pictures using character features and image features, aiming to clarify which feature is more effective. We proposed five methods: 1) a method based on character frequency, 2) a method based on character importance values, 3) a method based on image features, 4) a method based on image features using pre-trained neural networks, and 5) a method based on image features of characters. We trained neural networks using these five features. In the experimental results, the best classification accuracy was obtained by the feed-forward neural network that used image features of characters.
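The character-frequency feature (method 1 above) can be sketched as a normalized count vector over a character vocabulary; the AA string and vocabulary below are toy examples, not the paper's data.

```python
# Sketch of a character-frequency feature for an AA picture: the relative
# frequency of each vocabulary character in the text (toy example).
from collections import Counter

def char_freq_features(aa_text, vocab):
    """Relative frequency of each vocabulary character in the AA string."""
    counts = Counter(aa_text)
    total = sum(counts[c] for c in vocab) or 1  # avoid division by zero
    return [counts[c] / total for c in vocab]

aa = "/-\\|/-\\|"
features = char_freq_features(aa, vocab="/-\\|")
print(features)  # → [0.25, 0.25, 0.25, 0.25]
```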

    Emotion Recognition of Emoticons Based on Character Embedding

    This paper proposes a method for estimating the emotions expressed by emoticons based on a distributed representation of the meanings of the emoticon's characters. Existing studies on emoticons have focused on extracting emoticons from text and estimating the associated emotions by separating them into constituent parts and using the combination of parts as the feature. Applying a recently developed word-embedding technique, we propose a versatile approach to emotion estimation that learns the meanings of the characters constituting an emoticon and uses them as the emoticon's feature unit. A cross-validation test was conducted for the proposed model, a deep convolutional neural network that takes the distributed character representations as features. The results show that the proposed method estimates the emotions of unknown emoticons with a higher F1-score than a baseline method based on character n-grams.
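The character n-gram baseline mentioned above can be sketched in a few lines: an emoticon is represented by the overlapping character n-grams it contains (here bigrams, as one illustrative choice).

```python
# Sketch of the character n-gram baseline: represent an emoticon by its
# overlapping character n-grams (bigrams shown as an example).
def char_ngrams(text, n=2):
    """All overlapping character n-grams of a string."""
    return [text[i:i + n] for i in range(len(text) - n + 1)]

print(char_ngrams("(^_^)"))  # → ['(^', '^_', '_^', '^)']
```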

    A Lightweight Transmission Parameter Selection Scheme Using Reinforcement Learning for LoRaWAN

    The number of IoT devices is predicted to reach 125 billion by 2023. This growth will intensify collisions between devices and degrade communication performance. Selecting appropriate transmission parameters, such as the channel and spreading factor (SF), can effectively reduce collisions between long-range (LoRa) devices. However, most schemes proposed in the current literature are difficult to implement on IoT devices with limited computational capability and memory. To solve this issue, we propose a lightweight transmission-parameter selection scheme, i.e., a joint channel and SF selection scheme using reinforcement learning for low-power wide-area networking (LoRaWAN). In the proposed scheme, appropriate transmission parameters can be selected through simple arithmetic operations (the four basic operations) using only acknowledgement (ACK) information. Additionally, we theoretically analyze the computational complexity and memory requirements of the proposed scheme, verifying that it can select transmission parameters with extremely low computational and memory cost. Moreover, extensive experiments on real-world LoRa devices evaluate the effectiveness of the proposed scheme. The experimental results demonstrate the following: (1) compared with other lightweight transmission-parameter selection schemes, the proposed scheme efficiently avoids collisions between LoRa devices in LoRaWAN irrespective of changes in the available channels; (2) the frame success rate (FSR) can be improved by selecting both access channels and SFs, as opposed to selecting access channels only; and (3) since interference exists between adjacent channels, FSR and fairness can be improved by increasing the interval between adjacent available channels.
    Comment: 14 pages, 12 figures, 8 tables. This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.
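ACK-driven parameter selection of this kind can be sketched as an epsilon-greedy bandit over (channel, SF) pairs, where the only state kept per pair is a running ACK success rate updated with one addition and one division. The paper's exact update rule is not reproduced here; this sketch merely illustrates "simple arithmetic on ACK feedback", and the channel/SF lists and epsilon are assumptions.

```python
# Sketch: epsilon-greedy selection of a (channel, SF) pair using only ACK
# feedback and an incremental-mean update (assumed rule, not the paper's).
import random

class ParamSelector:
    def __init__(self, channels, sfs, epsilon=0.1):
        self.arms = [(ch, sf) for ch in channels for sf in sfs]
        self.value = {arm: 0.0 for arm in self.arms}  # running ACK success rate
        self.count = {arm: 0 for arm in self.arms}
        self.epsilon = epsilon

    def select(self):
        if random.random() < self.epsilon:
            return random.choice(self.arms)          # explore
        return max(self.arms, key=self.value.get)    # exploit best-known pair

    def update(self, arm, acked):
        """Incremental mean: one addition and one division per ACK result."""
        self.count[arm] += 1
        self.value[arm] += (float(acked) - self.value[arm]) / self.count[arm]

sel = ParamSelector(channels=[0, 1], sfs=[7, 12])
arm = sel.select()
sel.update(arm, acked=True)
```

Keeping only a per-arm mean and count is what makes such a scheme cheap enough for a constrained end device: memory grows linearly in the number of (channel, SF) pairs and each update is constant-time.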

    Method to Classify Matching Patterns between Music and Human’s Mood Using EEG Analysis Technique Considering Personality

    In this paper we introduce a method to classify matching patterns between music and human mood using an electroencephalogram (EEG) analysis technique that considers personality. We analyse the EEG of the left prefrontal cortex by single-point sensing; the EEG recording device uses dry-type sensors. The feature vector is created by concatenating the personality quantification results with the EEG features. Egograms, the Yatabe-Guilford personality inventory, and a Kretschmer-type personality inventory are used to quantify personality. The EEG features are extracted using the fast Fourier transform, and the matching patterns are then classified using the k-nearest neighbour method. To show the effectiveness of the proposed method, we conduct experiments using real EEG data.
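The final classification step can be sketched as a plain k-nearest neighbour vote over vectors that concatenate EEG features with quantified personality scores; the training data, labels, and k below are illustrative assumptions.

```python
# Sketch: k-nearest neighbour classification over concatenated EEG-plus-
# personality feature vectors (toy data; not the paper's dataset or k).
from collections import Counter

def knn_classify(train, query, k=3):
    """train: list of (feature_vector, label). Majority vote among k nearest."""
    def dist2(v):
        return sum((a - b) ** 2 for a, b in zip(v, query))
    nearest = sorted(train, key=lambda item: dist2(item[0]))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

train = [([0.1, 0.2, 0.9], "match"), ([0.2, 0.1, 0.8], "match"),
         ([0.9, 0.9, 0.1], "mismatch")]
print(knn_classify(train, [0.15, 0.15, 0.85], k=3))  # → match
```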

    Preference Classification Method Using EEG Analysis Based on Gray Theory and Personality Analysis

    This paper introduces a method to classify preference patterns for sounds on the basis of electroencephalogram (EEG) analysis and personality analysis. We analyze the EEG of the left prefrontal cortex by single-point sensing; a dry-type sensor with a few electrodes was used for EEG recording. The proposed feature extraction method applies gray relational grade detection to the EEG frequency bands and the egogram: the gray relational grade extracts the EEG feature, while the egogram quantifies the subject's personality. The preference patterns generated while the subject hears a sound are classified using the nearest-neighbor method. To show the effectiveness of the proposed method, we conduct experiments using real EEG data. The results show that the classification accuracy of the proposed method is better than that of a method that does not consider the subject's personality.
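The gray relational grade used for feature extraction above can be sketched with the standard Deng formulation; note that here the min/max deviations are taken within the single pair for simplicity (standard gray relational analysis takes them over all comparison sequences), and the distinguishing coefficient rho = 0.5 is a conventional default, not necessarily the paper's setting.

```python
# Sketch of the gray relational grade (Deng's formulation) between a reference
# sequence and one comparison sequence; rho = 0.5 is an assumed default.
def gray_relational_grade(reference, comparison, rho=0.5):
    """Mean gray relational coefficient between two equal-length sequences."""
    deltas = [abs(r - c) for r, c in zip(reference, comparison)]
    d_min, d_max = min(deltas), max(deltas)
    if d_max == 0:
        return 1.0  # identical sequences: maximal relation
    coeffs = [(d_min + rho * d_max) / (d + rho * d_max) for d in deltas]
    return sum(coeffs) / len(coeffs)

ref = [0.2, 0.5, 0.9]
g = gray_relational_grade(ref, [0.3, 0.5, 0.8])  # ≈ 0.556
```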