Bivariate Beta-LSTM
Long Short-Term Memory (LSTM) captures long-term dependencies through a cell
state maintained by the input and forget gate structures, which model a
gate output as a value in [0,1] through a sigmoid function. However, because the
sigmoid function changes only gradually, the sigmoid gate cannot flexibly
represent multi-modality or skewness. Moreover, previous models do not model
the correlation between the gates, which offers a way to encode an inductive
bias about the relationship between the previous and current input.
This paper proposes a new gate structure with the bivariate Beta distribution.
The proposed gate structure enables probabilistic modeling on the gates within
the LSTM cell so that the modelers can customize the cell state flow with
priors and distributions. Moreover, we theoretically show that the gradient has
a higher upper bound than that of the sigmoid function, and we empirically
observe that the bivariate Beta distribution gate structure yields larger
gradient values during training. We demonstrate the effectiveness of the
bivariate Beta gate structure on sentence classification, image classification,
polyphonic music modeling, and image caption generation. Comment: AAAI 2020
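To make the gate idea concrete, here is a minimal numpy sketch (not the authors' implementation) of drawing a gate value from a Beta distribution whose parameters come from network pre-activations; the softplus parameterization and all names here are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def softplus(x):
    # maps any real pre-activation to a positive Beta parameter
    return np.log1p(np.exp(x))

def beta_gate(pre_alpha, pre_beta):
    """Sample a gate value in [0, 1] from a Beta distribution
    instead of squashing a single pre-activation with a sigmoid."""
    alpha = softplus(pre_alpha)
    beta = softplus(pre_beta)
    return rng.beta(alpha, beta)

# With alpha, beta < 1 the Beta density is U-shaped (mass near 0 and 1),
# a shape that a sigmoid of a single scalar cannot represent.
samples = np.array([beta_gate(-2.0, -2.0) for _ in range(1000)])
print(samples.min() >= 0.0 and samples.max() <= 1.0)  # True: gates stay in [0, 1]
```

The sigmoid gate is recovered as a special case in spirit: a point mass in [0,1], whereas the Beta gate lets the modeler place a prior over the whole interval.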
An Analysis of Convolutional Neural Networks for Sentence Classification
Over the past few years, neural networks have reemerged as powerful machine-learning models, yielding state-of-the-art results in fields such as image recognition and speech processing. More recently, neural network models have also been applied to textual natural language signals, again with very promising results. This paper presents a series of experiments with Convolutional Neural Networks for sentence-level classification tasks under different hyperparameter settings and examines how sensitive model performance is to changes in these configurations. Sociedad Argentina de Informática e Investigación Operativa (SADIO)
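A toy numpy sketch of the core operation shared by such CNN sentence classifiers: convolving an n-gram filter bank over word embeddings and then max-pooling over time. The dimensions, filter count, and names are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sentence: 7 tokens, each a 16-dimensional word embedding.
sentence = rng.standard_normal((7, 16))

# Filters spanning a window of 3 consecutive tokens,
# analogous to the n-gram filters of CNN sentence classifiers.
window, dim, n_filters = 3, 16, 4
filters = rng.standard_normal((n_filters, window, dim))
bias = np.zeros(n_filters)

def conv_and_pool(tokens):
    """Slide each filter over all token windows, apply ReLU,
    then max-pool over time to get one feature per filter."""
    n = tokens.shape[0] - window + 1
    feats = np.empty((n, n_filters))
    for i in range(n):
        patch = tokens[i:i + window]  # shape (window, dim)
        feats[i] = np.tensordot(filters, patch, axes=([1, 2], [0, 1])) + bias
    feats = np.maximum(feats, 0.0)    # ReLU
    return feats.max(axis=0)          # max-over-time pooling

features = conv_and_pool(sentence)
print(features.shape)  # (4,): one pooled feature per filter
```

The pooled vector would then feed a fully connected softmax layer; hyperparameters such as `window` and `n_filters` are exactly the kind of settings whose sensitivity the paper studies.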
End-to-End Differentiable Proving
We introduce neural networks for end-to-end differentiable proving of queries
to knowledge bases by operating on dense vector representations of symbols.
These neural networks are constructed recursively by taking inspiration from
the backward chaining algorithm as used in Prolog. Specifically, we replace
symbolic unification with a differentiable computation on vector
representations of symbols using a radial basis function kernel, thereby
combining symbolic reasoning with learning subsymbolic vector representations.
By using gradient descent, the resulting neural network can be trained to infer
facts from a given incomplete knowledge base. It learns to (i) place
representations of similar symbols in close proximity in a vector space, (ii)
make use of such similarities to prove queries, (iii) induce logical rules, and
(iv) use provided and induced logical rules for multi-hop reasoning. We
demonstrate that this architecture outperforms ComplEx, a state-of-the-art
neural link prediction model, on three out of four benchmark knowledge bases
while at the same time inducing interpretable function-free first-order logic
rules. Comment: NIPS 2017 camera-ready
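The soft unification step can be illustrated with a small sketch of an RBF kernel over symbol embeddings; the vectors, symbol names, and bandwidth below are made up for illustration and one common form of the kernel is assumed:

```python
import numpy as np

def rbf_unify(u, v, sigma=1.0):
    """Soft unification score: 1.0 for identical symbol vectors,
    decaying smoothly with squared Euclidean distance."""
    sq_dist = np.sum((u - v) ** 2)
    return np.exp(-sq_dist / (2.0 * sigma ** 2))

# Hypothetical learned embeddings for three predicate symbols.
grandpa = np.array([0.9, 0.1, 0.0])
grandfather = np.array([0.85, 0.15, 0.05])
lives_in = np.array([0.0, 0.2, 0.95])

# Similar symbols receive a higher unification score than dissimilar ones,
# which is what lets a proof succeed despite different surface forms.
print(rbf_unify(grandpa, grandfather) > rbf_unify(grandpa, lives_in))  # True
```

Because the score is differentiable in both arguments, gradient descent can pull the embeddings of symbols that should unify closer together during training.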
A Fair and Comprehensive Comparison of Multimodal Tweet Sentiment Analysis Methods
Opinion and sentiment analysis is a vital task to characterize subjective
information in social media posts. In this paper, we present a comprehensive
experimental evaluation and comparison of six state-of-the-art methods, one of
which we have re-implemented. In addition, we investigate different
textual and visual feature embeddings that cover different aspects of the
content, as well as the recently introduced multimodal CLIP embeddings.
Experimental results are presented for two different publicly available
benchmark datasets of tweets and corresponding images. In contrast to the
evaluation methodology of previous work, we introduce a reproducible and fair
evaluation scheme to make results comparable. Finally, we conduct an error
analysis to outline the limitations of the methods and possibilities for
future work. Comment: Accepted at the Workshop on Multi-Modal Pre-Training for
Multimedia Understanding (MMPT 2021), co-located with ICMR 2021
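As an illustration of one simple way to combine modalities (a generic late-fusion baseline, not a specific method from the paper), the sketch below normalizes and concatenates text and image embeddings; the dimensions and names are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder vectors standing in for text and image embeddings
# (real CLIP embeddings are 512-dimensional; 8 keeps the sketch small).
text_emb = rng.standard_normal(8)
image_emb = rng.standard_normal(8)

def l2_normalize(v):
    return v / np.linalg.norm(v)

def fuse(text_vec, image_vec):
    """Late fusion: L2-normalize each modality, then concatenate,
    giving one feature vector for a downstream sentiment classifier."""
    return np.concatenate([l2_normalize(text_vec), l2_normalize(image_vec)])

features = fuse(text_emb, image_emb)
print(features.shape)  # (16,)
```

Normalizing per modality before concatenation keeps one modality from dominating the classifier purely through embedding scale, which is one reason late fusion is a common baseline in multimodal comparisons.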
Cognitive Representation Learning of Self-Media Online Article Quality
The automatic quality assessment of self-media online articles is a new and
pressing problem, of great value to online recommendation and search.
Unlike traditional, well-formed articles, self-media online articles are mainly
created by users and exhibit varied text levels and multi-modal hybrid editing
in their appearance, along with diverse content, differing styles, large
semantic spans, and a demand for a good interactive experience. To address
these challenges, we build a joint model, CoQAN, that combines layout
organization, writing characteristics, and text semantics, designing separate
representation-learning subnetworks tailored to the feature learning process
and to interactive reading habits on mobile devices. This design is more
consistent with how an expert cognitively evaluates articles. We have also
constructed a large-scale real-world assessment dataset. Extensive experimental
results show that the proposed framework significantly outperforms
state-of-the-art methods and effectively learns and integrates the different
factors of online article quality assessment. Comment: Accepted at the
Proceedings of the 28th ACM International Conference on Multimedia