A Semantic Modular Framework for Events Topic Modeling in Social Media
The growth of social media has led to an ever-increasing volume of content
that users share. These platforms give people a convenient place
to report various real-life events. Detecting these events with the help of
natural language processing has received researchers' attention, and various
algorithms have been developed for this goal. In this paper, we propose a
Semantic Modular Model (SMM) consisting of 5 different modules, namely
Distributional Denoising Autoencoder, Incremental Clustering, Semantic
Denoising, Defragmentation, and Ranking and Processing. The proposed model aims
to (1) cluster various documents and ignore the documents that might not
contribute to the identification of events, (2) identify more important and
descriptive keywords. Compared to the state-of-the-art methods, the results
show that the proposed model has a higher performance in identifying events
with lower ranks and extracting keywords for more important events in three
English Twitter datasets: FACup, SuperTuesday, and USElection. The proposed
method outperformed the best reported results in the mean keyword-precision
metric by 7.9%.
Comment: 32 pages, 2 figures
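The abstract's "Incremental Clustering" module assigns documents to event clusters as they arrive. The paper's actual algorithm is not given here; the following is a minimal sketch of the general idea, assuming a bag-of-words representation and a fixed cosine-similarity threshold (both illustrative choices, not the paper's):

```python
# Illustrative sketch of incremental clustering for event detection:
# each incoming document joins the most similar existing event cluster,
# or starts a new one when similarity falls below a threshold.
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def incremental_cluster(docs, threshold=0.3):
    clusters = []  # each cluster: {"centroid": Counter, "docs": [...]}
    for doc in docs:
        vec = Counter(doc.lower().split())
        best, best_sim = None, 0.0
        for c in clusters:
            sim = cosine(vec, c["centroid"])
            if sim > best_sim:
                best, best_sim = c, sim
        if best is not None and best_sim >= threshold:
            best["centroid"] += vec  # merge document into the cluster
            best["docs"].append(doc)
        else:
            clusters.append({"centroid": Counter(vec), "docs": [doc]})
    return clusters
```

A single pass like this is what makes the clustering "incremental": no document is revisited, so the method scales to streaming social-media data.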
Explaining Recommendation System Using Counterfactual Textual Explanations
Currently, there is a significant amount of research being conducted in the
field of artificial intelligence to improve the explainability and
interpretability of deep learning models. End-users find it easier to trust
a system when they understand why it produced a particular output. Recommender
systems are one example of systems for which great effort has been devoted to
making outputs more explainable. One method for
producing a more explainable output is using counterfactual reasoning, which
involves altering minimal features to generate a counterfactual item that
results in changing the output of the system. This process allows the
identification of input features that have a significant impact on the desired
output, leading to effective explanations. In this paper, we present a method
for generating counterfactual explanations for both tabular and textual
features. We evaluated the performance of our proposed method on three
real-world datasets and demonstrated a +5% improvement in finding effective
features (based on model-based measures) compared to the baseline method.
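The core idea of counterfactual reasoning described above can be sketched with a toy search over minimal feature perturbations. This is not the paper's method; the toy recommender, the binary-feature assumption, and the feature names are all illustrative:

```python
# Hedged sketch of counterfactual explanation by minimal perturbation:
# try flipping ever-larger sets of binary features until the model's
# decision changes; the smallest such set is the explanation.
from itertools import combinations

def counterfactual(model, x, candidates):
    """Return the smallest set of feature flips that changes model(x)."""
    original = model(x)
    for size in range(1, len(candidates) + 1):
        for subset in combinations(candidates, size):
            x_cf = dict(x)
            for feat in subset:
                x_cf[feat] = 1 - x_cf[feat]  # flip the binary feature
            if model(x_cf) != original:
                return subset  # minimal change that alters the output
    return None  # no counterfactual found within the candidate features

# Toy recommender: recommends when at least two positive signals are present
model = lambda x: sum(x.values()) >= 2
x = {"liked_genre": 1, "friend_liked": 1, "recent_view": 0}
print(counterfactual(model, x, list(x)))  # → ('liked_genre',)
```

The returned feature set is exactly the kind of "effective features" explanation the abstract refers to: the minimal input change that would have altered the recommendation.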
X-CapsNet For Fake News Detection
News consumption has significantly increased with the growing popularity and
use of web-based forums and social media. This sets the stage for misinforming
and confusing people. To help reduce the impact of misinformation on users'
potential health-related decisions and other intents, it is desired to have
machine learning models to detect and combat fake news automatically. This
paper proposes a novel transformer-based model using Capsule Neural
Networks (CapsNet), called X-CapsNet. This model pairs a CapsNet with a
dynamic routing algorithm in parallel with a size-based classifier for detecting short
and long fake news statements. We use two size-based classifiers, a Deep
Convolutional Neural Network (DCNN) for detecting long fake news statements and
a Multi-Layer Perceptron (MLP) for detecting short news statements. To resolve
the problem of representing short news statements, we use indirect features
created by concatenating the news speaker's profile vector with a vector of
polarity, sentiment, and word-count features of the statement. To evaluate
the proposed architecture, we use the Covid-19 and the Liar datasets. The
results in terms of the F1-score for the Covid-19 dataset and accuracy for the
Liar dataset show that the proposed models outperform the state-of-the-art
baselines.
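The size-based routing and indirect-feature ideas from this abstract can be sketched as follows. The length threshold, the polarity lexicon, and the feature layout are assumptions for illustration only; the paper's actual classifiers are a learned DCNN and MLP:

```python
# Illustrative sketch of X-CapsNet's size-based routing: long statements
# go to one classifier branch (a DCNN in the paper), short statements to
# another (an MLP), with short statements represented by indirect features.
LENGTH_THRESHOLD = 15  # assumed token cutoff between "short" and "long"

def route(statement: str) -> str:
    """Pick the classifier branch based on statement length."""
    return "DCNN" if len(statement.split()) > LENGTH_THRESHOLD else "MLP"

def short_statement_features(statement: str, speaker_profile: list) -> list:
    """Concatenate a speaker-profile vector with polarity and count features."""
    tokens = statement.lower().split()
    positive = {"good", "true", "safe", "effective"}   # toy lexicon
    negative = {"bad", "false", "hoax", "dangerous"}   # toy lexicon
    polarity = (sum(t in positive for t in tokens)
                - sum(t in negative for t in tokens))
    return speaker_profile + [polarity, len(tokens)]

print(route("Vaccines cause microchips"))  # → MLP (short statement)
print(short_statement_features("this claim is false", [0.2, 0.8]))  # → [0.2, 0.8, -1, 4]
```

Concatenating speaker-profile and polarity features compensates for the sparsity of very short statements, which is the representation problem the abstract mentions.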
Multi Sentence Description of Complex Manipulation Action Videos
Automatic video description requires the generation of natural language
statements about the actions, events, and objects in the video. When humans
describe a video, an important trait is the ability to do so at variable
levels of detail. In contrast, existing approaches to automatic video
description mostly focus on single-sentence generation at a fixed level of
detail. Here, instead, we address the description of
manipulation actions, where different levels of detail are needed to convey
the hierarchical structure of these actions, which is also relevant for
modern approaches to robot learning. We propose one hybrid
statistical and one end-to-end framework to address this problem. The hybrid
method needs much less training data because it models uncertainties within
the video clips statistically, while the more data-hungry end-to-end method
connects the visual encoder directly to the language decoder without any
intermediate (statistical) processing step. Both frameworks
use LSTM stacks to allow for different levels of description granularity and
videos can be described by a simple single sentence or by complex
multi-sentence descriptions. In addition, quantitative results demonstrate
that these methods produce more realistic descriptions than competing
approaches.
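The variable-granularity idea in this abstract can be illustrated with a toy template-based renderer: the same action sequence is emitted either as one coarse sentence or as one sentence per sub-action. The paper's frameworks learn this with LSTM stacks; the templates and action labels below are purely illustrative assumptions:

```python
# Toy sketch of variable description granularity for a manipulation action:
# "coarse" yields a single summary sentence, "fine" yields one sentence per
# sub-action, mirroring the hierarchical structure of the action.
def describe(actions, granularity="coarse"):
    if granularity == "coarse":
        return [f"A person {actions[0]} and then {actions[-1]}."]
    return [f"The person {a}." for a in actions]  # one sentence per sub-action

steps = ["picks up the cup", "pours water into it", "places the cup down"]
print(describe(steps))          # single-sentence summary
print(describe(steps, "fine"))  # multi-sentence description
```

The point of the sketch is only the interface: a description system that exposes granularity as a parameter can serve both quick summaries and the detailed step-by-step accounts needed for robot learning.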