Sequential Prediction of Social Media Popularity with Deep Temporal Context Networks
Prediction of popularity has a profound impact for social media, since it
offers opportunities to reveal individual preferences and public attention in
evolving social systems. Previous research, although it achieves promising
results, neglects one distinctive characteristic of social data:
sequentiality. For example, the popularity of online content is generated
over time from the sequential post streams of social media. To investigate
the sequential prediction of popularity, we propose a novel prediction
framework called Deep Temporal Context Networks (DTCN) that takes both
temporal context and temporal attention into account. Our DTCN contains three
main components, spanning embedding, learning, and prediction. With a joint
embedding network, we obtain a unified deep representation of multi-modal
user-post data in a common embedding space. Then, based on the embedded data
sequence over time, temporal context learning recurrently learns two adaptive
temporal contexts for sequential popularity. Finally, a novel temporal
attention mechanism is designed to predict new popularity (the popularity of
a new user-post pair) with temporal coherence across multiple time-scales.
Experiments on our released image dataset of about 600K Flickr photos
demonstrate that DTCN outperforms state-of-the-art deep prediction
algorithms, with an average relative improvement of 21.51% in popularity
prediction (Spearman Ranking Correlation).
Comment: accepted in IJCAI-1
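The multi-time-scale temporal attention described above can be illustrated with a minimal NumPy sketch. This is not the authors' DTCN implementation: the function name, the dot-product scoring, the fixed window lengths in `scales`, and the averaging used to fuse scales are all illustrative assumptions; the abstract only specifies that attention is applied with temporal coherence across multiple time-scales.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def temporal_attention(query, history, scales=(1, 4, 16)):
    """Score a new post embedding (query) against historical post
    embeddings at several time-scales and return a fused context vector.

    query:   (d,)  embedding of the new user-post pair
    history: (T, d) embeddings of earlier posts, oldest first
    scales:  window lengths (in posts) over which attention is applied
             (hypothetical values for illustration)
    """
    contexts = []
    for w in scales:
        window = history[-w:]              # most recent w posts
        scores = window @ query            # dot-product relevance scores
        weights = softmax(scores)          # attention weights over window
        contexts.append(weights @ window)  # weighted context, shape (d,)
    return np.mean(contexts, axis=0)       # fuse time-scales by averaging

# Usage: attend over 5 historical posts at three window sizes.
rng = np.random.default_rng(0)
ctx = temporal_attention(rng.standard_normal(8),
                         rng.standard_normal((5, 8)),
                         scales=(1, 2, 4))
```

The returned context vector would then feed a regressor that outputs the popularity score for the new user-post pair.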
Conclusion-Supplement Answer Generation for Non-Factoid Questions
This paper tackles the goal of conclusion-supplement answer generation for
non-factoid questions, which is a critical issue in the field of Natural
Language Processing (NLP) and Artificial Intelligence (AI), as users often
require supplementary information before accepting a conclusion. The current
encoder-decoder framework, however, has difficulty generating such answers,
since it may become confused when it tries to learn several different long
answers to the same non-factoid question. Our solution, called an ensemble
network, goes beyond single short sentences and fuses logically connected
conclusion statements and supplementary statements. It extracts the context
from the conclusion decoder's output sequence and uses it to create
supplementary decoder states on the basis of an attention mechanism. It also
assesses the closeness of the question encoder's output sequence to the
separate outputs of the conclusion and supplement decoders, as well as to
their combination. As a result, it generates answers that match the questions and
have natural-sounding supplementary sequences in line with the context
expressed by the conclusion sequence. Evaluations conducted on datasets
including "Love Advice" and "Arts & Humanities" categories indicate that our
model outputs much more accurate results than the tested baseline models do.
Comment: AAAI-2020 (Accepted)
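The two key operations the abstract describes, attending from the supplement decoder over the conclusion decoder's outputs and scoring closeness against the question encoding, can be sketched as follows. This is a minimal NumPy illustration under assumed shapes, not the paper's ensemble network: the function names, dot-product attention, and cosine similarity as the closeness measure are assumptions made for the example.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def supplement_context(supp_state, conclusion_outputs):
    """Attention step: the supplement decoder attends over the conclusion
    decoder's output sequence so the supplement stays in line with the
    context expressed by the conclusion.

    supp_state:         (d,)   current supplement-decoder hidden state
    conclusion_outputs: (T, d) conclusion-decoder output sequence
    """
    scores = conclusion_outputs @ supp_state   # relevance of each step
    weights = softmax(scores)                  # attention distribution
    return weights @ conclusion_outputs        # context vector, shape (d,)

def closeness(question_enc, answer_repr):
    """Cosine similarity as a simple stand-in for the closeness score
    between the question encoding and an answer representation."""
    denom = np.linalg.norm(question_enc) * np.linalg.norm(answer_repr) + 1e-9
    return float(question_enc @ answer_repr / denom)
```

In the full model, such closeness scores for the conclusion, the supplement, and their combination would contribute to the training objective, encouraging both decoders to stay relevant to the question.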
Prediction of Answer Keywords using Char-RNN
Generating sequences of characters using a Recurrent Neural Network (RNN) is a tried and tested method for creating unique, context-aware words, and is fundamental in Natural Language Processing tasks. These types of neural networks can also be used as the basis of a question-answering system. The main drawback of most such systems is that they work from a factoid database of information, so when queried about new and current information, the responses are usually bleak. In this paper, the author proposes a novel approach to finding answer keywords in a given body of news text or a headline, based on the query that was asked, where the query concerns current affairs or recent news, using the Gated Recurrent Unit (GRU) variant of RNNs. This ensures that the answers provided are relevant to the content of the query that was put forth.
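The GRU variant of the RNN mentioned above can be written out from its standard gate equations. This is a generic NumPy sketch of a single GRU cell, not the paper's model: the hidden size, initialization scale, and the idea of feeding one character embedding per step are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GRUCell:
    """A single GRU cell processing one (character) embedding per step."""

    def __init__(self, input_size, hidden_size, seed=0):
        rng = np.random.default_rng(seed)
        def W(rows, cols):
            return rng.standard_normal((rows, cols)) * 0.1
        d, h = input_size, hidden_size
        self.Wz, self.Uz = W(h, d), W(h, h)   # update-gate weights
        self.Wr, self.Ur = W(h, d), W(h, h)   # reset-gate weights
        self.Wh, self.Uh = W(h, d), W(h, h)   # candidate-state weights
        self.hidden_size = h

    def step(self, x, h):
        z = sigmoid(self.Wz @ x + self.Uz @ h)          # how much to update
        r = sigmoid(self.Wr @ x + self.Ur @ h)          # how much to reset
        h_tilde = np.tanh(self.Wh @ x + self.Uh @ (r * h))  # candidate
        return (1 - z) * h + z * h_tilde                # interpolated state

    def run(self, xs):
        """Run the cell over a sequence and return the final hidden state."""
        h = np.zeros(self.hidden_size)
        for x in xs:
            h = self.step(x, h)
        return h
```

In a keyword-prediction setting, the final hidden state summarizing the query and the news text would feed an output layer that scores candidate answer keywords.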