Hate Speech Detection for Banjarese Languages on Instagram Using Machine Learning Methods
Hate speech refers to verbal expression or communication that aims to provoke or discriminate against individuals. The Ministry of Communication and Information of Indonesia encountered and dealt with 3,640 cases of hate speech transmitted through digital channels between 2018 and 2021. Particularly in South Kalimantan, hate speech in the local language, Banjarese, has become increasingly prevalent in recent years. Surprisingly, there is a lack of research on using machine learning to detect hate speech in the Banjarese language, specifically on Instagram. Therefore, this study aimed to address this gap by constructing a dataset of Banjarese-language hate speech and comparing various feature extraction and machine learning models to detect Banjarese-language hate speech effectively. This research used several feature extraction techniques and machine learning methods to detect Banjarese-language hate speech. The feature extraction methods used were Word N-Gram, Term Frequency-Inverse Document Frequency (TF-IDF), a combination of Word N-Gram and TF-IDF, Word2Vec, and GloVe, while the machine learning methods used were Support Vector Machine (SVM), Naïve Bayes, and Decision Tree. The results of this study revealed that the combination of TF-IDF for feature extraction and SVM as the model achieves exceptional performance. The average Recall, Precision, Accuracy, and F1-Score exceeded 90%, demonstrating the model's ability to identify Banjarese hate speech accurately.
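The best-performing pipeline the abstract names (TF-IDF features fed to an SVM) can be sketched with scikit-learn; the toy texts and labels below are placeholder stand-ins, not the paper's Banjarese dataset:

```python
# Minimal sketch of a TF-IDF + SVM hate-speech classifier.
# The texts/labels here are illustrative placeholders only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

texts = [
    "contoh komentar netral",
    "contoh ujaran kebencian",
    "komentar biasa",
    "ujaran kasar sekali",
]
labels = [0, 1, 0, 1]  # 1 = hate speech, 0 = not hate speech

# Word unigrams+bigrams weighted by TF-IDF, then a linear SVM
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
model.fit(texts, labels)

pred = model.predict(["komentar biasa"])
```

In practice the paper also compares Word N-Gram, Word2Vec, and GloVe features; only the vectorizer in the pipeline would change.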
FedClassAvg: Local Representation Learning for Personalized Federated Learning on Heterogeneous Neural Networks
Personalized federated learning is aimed at allowing numerous clients to
train personalized models while participating in collaborative training in a
communication-efficient manner without exchanging private data. However, many
personalized federated learning algorithms assume that clients have the same
neural network architecture, and those for heterogeneous models remain
understudied. In this study, we propose a novel personalized federated learning
method called federated classifier averaging (FedClassAvg). Deep neural
networks for supervised learning tasks consist of feature extractor and
classifier layers. FedClassAvg aggregates classifier weights as an agreement on
decision boundaries on feature spaces so that clients with not independently
and identically distributed (non-iid) data can learn about scarce labels. In
addition, local feature representation learning is applied to stabilize the
decision boundaries and improve the local feature extraction capabilities for
clients. While existing methods require collecting auxiliary data or
model weights to generate a counterpart, FedClassAvg only requires clients to
communicate a couple of fully connected layers, which is highly
communication-efficient. Moreover, FedClassAvg does not require solving extra
optimization problems such as knowledge transfer, which incurs intensive
computation overhead. We evaluated FedClassAvg through extensive experiments
and demonstrated that it outperforms the current state-of-the-art algorithms on
heterogeneous personalized federated learning tasks.
Comment: Accepted to ICPP 2022. Code: https://github.com/hukla/fedclassav
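The aggregation step described above can be sketched as plain averaging of the shared classifier heads; `average_classifiers`, the head shapes, and the two-client setup below are illustrative assumptions, not the authors' released code:

```python
# Hedged sketch of classifier-head averaging (the core FedClassAvg step):
# clients keep heterogeneous feature extractors locally and exchange only
# the final fully connected classifier, which the server averages.
import torch

def average_classifiers(client_heads):
    """Element-wise average of the clients' classifier state dicts."""
    avg = {}
    for key in client_heads[0]:
        avg[key] = torch.stack(
            [head[key].float() for head in client_heads]
        ).mean(dim=0)
    return avg

# Two clients whose classifier heads share a shape (64-dim features, 10 classes)
heads = [torch.nn.Linear(64, 10).state_dict() for _ in range(2)]
global_head = average_classifiers(heads)
```

Each client would then load `global_head` into its own classifier layer and continue local representation learning with its private feature extractor.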
MIXTURE FEATURE EXTRACTION BASED ON LOCAL BINARY PATTERN AND GREY-LEVEL CO-OCCURRENCE MATRIX TECHNIQUES FOR MOUTH EXPRESSION RECOGNITION
Some academics struggle to recognize facial emotions based on pattern recognition. In general, this recognition utilizes all facial features; however, this study was limited to identifying facial emotions from a single facial region. In this study, the lips, one of the facial features that can reveal a person's expression, are utilized. Feature extraction from the facial images combines the local binary pattern (LBP) and grey-level co-occurrence matrix (GLCM) methods with a multiclass support vector machine classification approach. The concept begins with image segmentation to create an image of the mouth. Experiments were also conducted for various tests, and their outcomes revealed a recognition performance of up to 95%. This result was obtained through experiments in which 10% to 40% of the data were used for evaluation. These findings are beneficial and can be applied to expression recognition in online learning media to monitor the audience's condition directly.