Detection of Review Abuse via Semi-Supervised Binary Multi-Target Tensor Decomposition
Product reviews and ratings on e-commerce websites provide customers with
detailed insights about various aspects of the product such as quality,
usefulness, etc. Since they influence customers' buying decisions, product
reviews have become a fertile ground for abuse by sellers (colluding with
reviewers) to promote their own products or to tarnish the reputation of
competitors' products. In this paper, our focus is on detecting such abusive
entities (both sellers and reviewers) by applying tensor decomposition on the
product reviews data. While tensor decomposition is mostly unsupervised, we
formulate our problem as a semi-supervised binary multi-target tensor
decomposition, to take advantage of currently known abusive entities. We
empirically show that our multi-target semi-supervised model achieves higher
precision and recall in detecting abusive entities as compared to unsupervised
techniques. Finally, we show that our proposed stochastic partial natural
gradient inference for our model empirically achieves faster convergence than
stochastic gradient and Online-EM with sufficient statistics.
Comment: Accepted to the 25th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 2019. Contains supplementary material. arXiv admin note: text overlap with arXiv:1804.0383
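The abstract's core idea, factorizing a (seller, reviewer, product) interaction tensor while using already-known abusive entities as supervision, can be illustrated with a toy CP decomposition. The sketch below is a loose simplification under stated assumptions: plain full-gradient descent on synthetic data with a hypothetical logistic label head, not the paper's binary multi-target model or its stochastic partial natural gradient inference.

```python
import numpy as np

# Toy semi-supervised CP decomposition (a sketch only; the paper's model is
# a binary multi-target formulation trained with stochastic partial natural
# gradient inference, not the plain full-gradient descent used here).
rng = np.random.default_rng(0)
I, J, K, R = 30, 40, 20, 5                  # hypothetical tensor sizes, rank
X = rng.poisson(0.1, size=(I, J, K)).astype(float)  # seller x reviewer x product
labels = {0: 1.0, 3: 0.0}                   # known sellers: id -> abusive flag

A = rng.normal(scale=0.1, size=(I, R))      # seller factors
B = rng.normal(scale=0.1, size=(J, R))      # reviewer factors
C = rng.normal(scale=0.1, size=(K, R))      # product factors
w = rng.normal(scale=0.1, size=R)           # hypothetical logistic label head

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr, lam = 1e-2, 1.0                         # step size, supervision weight
for epoch in range(200):
    E = np.einsum('ir,jr,kr->ijk', A, B, C) - X   # reconstruction residual
    gA = np.einsum('ijk,jr,kr->ir', E, B, C)
    gB = np.einsum('ijk,ir,kr->jr', E, A, C)
    gC = np.einsum('ijk,ir,jr->kr', E, A, B)
    for i, yi in labels.items():            # semi-supervised logistic term
        p = sigmoid(A[i] @ w)
        gA[i] += lam * (p - yi) * w
        w -= lr * lam * (p - yi) * A[i]
    A -= lr * gA; B -= lr * gB; C -= lr * gC

abuse_scores = sigmoid(A @ w)               # abuse scores for all sellers
```

The supervised term only touches the rows of A with known labels, which is how the sketch "takes advantage of currently known abusive entities" while the rest of the factorization remains unsupervised.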
Online Tensor Methods for Learning Latent Variable Models
We introduce an online tensor decomposition based approach for two latent
variable modeling problems, namely (1) community detection, in which we learn
the latent communities that social actors in social networks belong to, and
(2) topic modeling, in which we infer the hidden topics of text articles. We
consider decomposition of moment tensors using stochastic gradient descent. We
optimize the multilinear operations within SGD and avoid forming the tensors
directly, saving computational and storage costs. We present optimized
implementations on two platforms. Our GPU-based implementation exploits the
parallelism of SIMD architectures to allow for maximum speed-up by a careful
optimization of storage and data transfer, whereas our CPU-based implementation
uses efficient sparse matrix computations and is suitable for large sparse
datasets. For the community detection problem, we demonstrate accuracy and
computational efficiency on Facebook, Yelp and DBLP datasets, and for the topic
modeling problem, we also demonstrate good performance on the New York Times
dataset. We compare our results to state-of-the-art algorithms such as the
variational method, and report a gain in accuracy and a speed-up of several
orders of magnitude in execution time.
Comment: JMLR 201
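The computational trick named in the abstract, optimizing the multilinear operations inside SGD so that the moment tensor is never materialized, can be sketched as stochastic tensor power iteration on the empirical third moment. This is a minimal illustration on assumed stand-in Gaussian data; the full method additionally whitens the moments and uses the model-specific moment forms for community detection and topic modeling.

```python
import numpy as np

# Sketch of an online tensor update that never forms the d^3 moment tensor:
# the multilinear map T(I, v, v) = E[(x.v)^2 x] is computed directly from
# minibatches of samples (assumed stand-in data, not the paper's pipeline).
rng = np.random.default_rng(1)
d, k = 50, 3                                # ambient dimension, components
X = rng.normal(size=(10000, d))             # stand-in data stream

V, _ = np.linalg.qr(rng.normal(size=(d, k)))  # current factor estimates
lr = 0.05
for t in range(0, len(X), 64):              # stream over minibatches
    batch = X[t:t + 64]
    proj = batch @ V                        # (batch, k) inner products x.v
    G = batch.T @ proj**2 / len(batch)      # multilinear op, tensor-free
    V, _ = np.linalg.qr(V + lr * G)         # ascent step + re-orthonormalize
```

Forming the third moment explicitly would cost O(d^3) memory; the contraction above costs only O(batch * d * k) per step, which is the saving the abstract refers to.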
Block stochastic gradient iteration for convex and nonconvex optimization
The stochastic gradient (SG) method can minimize an objective function
composed of a large number of differentiable functions, or solve a stochastic
optimization problem, to a moderate accuracy. The block coordinate
descent/update (BCD) method, on the other hand, handles problems with multiple
blocks of variables by updating them one at a time; when the blocks of
variables are easier to update individually than together, BCD has a lower
per-iteration cost. This paper introduces a method that combines the features
of SG and BCD for problems with many components in the objective and with
multiple (blocks of) variables.
Specifically, a block stochastic gradient (BSG) method is proposed for
solving both convex and nonconvex programs. At each iteration, BSG approximates
the gradient of the differentiable part of the objective by randomly sampling a
small set of data or sampling a few functions from the sum term in the
objective, and then, using those samples, it updates all the blocks of
variables in either a deterministic or a randomly shuffled order. Its
convergence for both the convex and nonconvex cases is established in different
senses. In the convex case, the proposed method has the same order of
convergence rate as the SG method. In the nonconvex case, its convergence is
established in terms of the expected violation of a first-order optimality
condition. The proposed method was numerically tested on problems including
stochastic least squares and logistic regression, which are convex, as well as
low-rank tensor recovery and bilinear logistic regression, which are nonconvex.
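The BSG iteration pattern described above, sample a few terms of the objective, then update the blocks one at a time in a randomly shuffled order with a diminishing step size, can be sketched on one concrete nonconvex instance. The two-block least-squares factorization below is a stand-in example chosen here for brevity, not a problem taken from the paper.

```python
import numpy as np

# Minimal BSG sketch on a stand-in nonconvex problem: two-block low-rank
# least squares, min over U, V of 0.5 * ||U V^T - M||^2. The paper covers
# general multi-block convex and nonconvex objectives; this only shows the
# iteration pattern (sample a few sum terms, update blocks one at a time).
rng = np.random.default_rng(2)
m, n, r = 200, 100, 5
M = rng.normal(size=(m, r)) @ rng.normal(size=(r, n))   # synthetic data
U = rng.normal(scale=0.1, size=(m, r))
V = rng.normal(scale=0.1, size=(n, r))

for it in range(1, 2001):
    lr = 0.5 / np.sqrt(it)                  # diminishing step size
    rows = rng.choice(m, size=20, replace=False)  # sample a few sum terms
    order = ['U', 'V']
    rng.shuffle(order)                      # randomly shuffled block order
    for block in order:                     # one block at a time (BCD-style)
        E = U[rows] @ V.T - M[rows]         # residual on sampled rows only
        if block == 'U':
            U[rows] -= lr * (E @ V) / len(rows)
        else:
            V -= lr * (E.T @ U[rows]) / len(rows)
```

Note that the residual is recomputed before each block update, so the second block sees the freshly updated first block, a Gauss-Seidel-style pass that gives BSG its lower per-iteration cost relative to updating everything jointly.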
Unsupervised Generative Modeling Using Matrix Product States
Generative modeling, which learns the joint probability distribution from data
and generates samples according to it, is an important task in machine learning
and artificial intelligence. Inspired by the probabilistic interpretation of
quantum physics, we propose a generative model using matrix product states,
which is a tensor network originally proposed for describing (particularly
one-dimensional) entangled quantum states. Our model enjoys efficient learning
analogous to the density matrix renormalization group method, which allows
dynamically adjusting dimensions of the tensors and offers an efficient direct
sampling approach for generative tasks. We apply our method to generative
modeling of several standard datasets including the Bars and Stripes, random
binary patterns, and the MNIST handwritten digits to illustrate the abilities,
features, and drawbacks of our model compared with popular generative models
such as the Hopfield model, Boltzmann machines, and generative adversarial
networks. Our work sheds light on many interesting directions for future
exploration in the development of quantum-inspired algorithms for unsupervised
machine learning, which could plausibly be realized on quantum devices.
Comment: 11 pages, 12 figures (not including the TNs). GitHub page: https://congzlwag.github.io/UnsupGenModbyMPS
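The Born-rule connection underlying the abstract, where an MPS assigns each configuration a probability proportional to the squared amplitude obtained by contracting one tensor per site, can be sketched for binary strings. This toy uses random, untrained tensors and a brute-force normalizer over all strings; the paper instead trains the tensors with a DMRG-like sweep, adapts bond dimensions dynamically, and samples directly without enumeration.

```python
import numpy as np
from itertools import product

# Born-machine MPS toy: p(x) proportional to psi(x)^2, where psi(x) is the
# contraction of one rank-3 tensor per bit. Tensors are random and untrained
# here, and normalization enumerates all 2^n strings for illustration only.
rng = np.random.default_rng(3)
n, D = 8, 4                                 # sites (bits), bond dimension
tensors = [rng.normal(size=(1 if i == 0 else D, 2,
                            1 if i == n - 1 else D))
           for i in range(n)]               # (left bond, bit, right bond)

def amplitude(bits):
    v = np.ones(1)
    for A, b in zip(tensors, bits):
        v = v @ A[:, b, :]                  # contract the chain left to right
    return v[0]

amps = np.array([amplitude(b) for b in product((0, 1), repeat=n)])
probs = amps**2 / np.sum(amps**2)           # Born-rule distribution
```

Evaluating one amplitude costs only O(n * D^2), which is why the trained model admits the efficient direct sampling the abstract highlights rather than requiring the exponential enumeration used above for the toy normalizer.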