168 research outputs found
Hybrid Collaborative Filtering with Autoencoders
Collaborative Filtering aims at exploiting the feedback of users to provide
personalised recommendations. Such algorithms look for latent variables in a
large sparse matrix of ratings. They can be enhanced by adding side information
to tackle the well-known cold start problem. While Neural Networks have had
tremendous success in image and speech recognition, they have received less
attention in Collaborative Filtering. This is all the more surprising given that
Neural Networks are able to discover latent variables in large and
heterogeneous datasets. In this paper, we introduce a Collaborative Filtering
Neural Network architecture, called CFN, which computes a non-linear Matrix
Factorization from sparse rating inputs and side information. We show
experimentally on the MovieLens and Douban datasets that CFN outperforms the
state of the art and benefits from side information. We provide an
implementation of the algorithm as a reusable plugin for Torch, a popular
Neural Network framework.
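
The paper ships CFN as a Torch plugin; as a rough Python sketch of the general idea (an autoencoder that encodes a sparse rating row, optionally concatenated with side information, into a latent code and reconstructs the full rating row), the snippet below may be helpful. The layer sizes, the masked reconstruction loss, and the way side information is concatenated are illustrative assumptions, not the authors' CFN architecture.

import torch
import torch.nn as nn

class RatingAutoencoder(nn.Module):
    # Sketch of autoencoder-based collaborative filtering: a non-linear
    # "matrix factorization" in which the encoder produces the latent user
    # representation and the decoder weights act as item factors.
    def __init__(self, n_items, n_side, n_latent=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_items + n_side, n_latent), nn.Tanh())
        self.decoder = nn.Linear(n_latent, n_items)

    def forward(self, ratings, side_info):
        # ratings: dense vector with zeros at unobserved entries
        z = self.encoder(torch.cat([ratings, side_info], dim=-1))
        return self.decoder(z)

def masked_mse(pred, ratings, mask):
    # Reconstruction loss computed only on the observed ratings.
    return ((pred - ratings) ** 2 * mask).sum() / mask.sum()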
Selective Sampling with Drift
Recently there has been much work on selective sampling, an online active
learning setting, in which algorithms work in rounds. On each round an
algorithm receives an input and makes a prediction. Then, it can decide whether
to query a label, and if so to update its model, otherwise the input is
discarded. Most of this work is focused on the stationary case, where it is
assumed that there is a fixed target model, and the performance of the
algorithm is compared to a fixed model. However, in many real-world
applications, such as spam prediction, the best target function may drift over
time, or have shifts from time to time. We develop a novel selective sampling
algorithm for the drifting setting, analyze it under no assumptions on the
mechanism generating the sequence of instances, and derive new mistake bounds
that depend on the amount of drift in the problem. Simulations on synthetic and
real-world datasets demonstrate the superiority of our algorithm as a
selective sampling algorithm in the drifting setting.
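
The round structure described above can be made concrete with a generic margin-based selective sampler. The query rule b / (b + |margin|) is the classic rule from the stationary selective-sampling literature and is used here only to illustrate the protocol; it is not the drift-aware algorithm proposed in the paper.

import numpy as np

def selective_sampling(stream, dim, b=1.0, lr=0.1, seed=0):
    # Generic online selective-sampling loop (illustrative only).
    # `stream` yields (x, get_label) pairs; get_label() is called only
    # if the algorithm decides to pay for the label.
    rng = np.random.default_rng(seed)
    w = np.zeros(dim)
    for x, get_label in stream:
        margin = w @ x
        pred = 1 if margin >= 0 else -1            # make a prediction
        if rng.random() < b / (b + abs(margin)):   # query more often when uncertain
            y = get_label()                        # request the true label
            if y != pred:                          # update only on a mistake
                w += lr * y * x
        # otherwise the example is discarded without its label being seen
    return w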
Scaling Graph-based Semi Supervised Learning to Large Number of Labels Using Count-Min Sketch
Graph-based Semi-supervised learning (SSL) algorithms have been successfully
used in a large number of applications. These methods classify initially
unlabeled nodes by propagating label information over the structure of the graph,
starting from seed nodes. Graph-based SSL algorithms usually scale linearly
with the number of distinct labels (m), and require O(m) space on each node.
Unfortunately, there exist many applications of practical significance with
very large m over large graphs, demanding better space and time complexity. In
this paper, we propose MAD-SKETCH, a novel graph-based SSL algorithm which
compactly stores label distribution on each node using Count-min Sketch, a
randomized data structure. We present theoretical analysis showing that under
mild conditions, MAD-SKETCH can reduce space complexity at each node from O(m)
to O(log m), and achieve similar savings in time complexity as well. We support
our analysis through experiments on multiple real world datasets. We observe
that MAD-SKETCH achieves performance similar to that of existing state-of-the-art
graph-based SSL algorithms, while requiring a smaller memory footprint and at
the same time achieving up to a 10x speedup. We find that MAD-SKETCH is able to
scale to datasets with one million labels, which is beyond the scope of
existing graph-based SSL algorithms.
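
As a rough sketch of the underlying data structure (not of the MAD-SKETCH propagation itself), a Count-min Sketch keeps a small table of hashed counters per node instead of an explicit length-m label vector, so the per-node space depends on the sketch width and depth rather than on m. The width and depth below are illustrative choices.

import hashlib
import numpy as np

class CountMinSketch:
    # Approximate map from label -> score using `depth` hash rows of
    # size `width`; space is O(depth * width), independent of the
    # number of distinct labels.
    def __init__(self, width=200, depth=5):
        self.width, self.depth = width, depth
        self.table = np.zeros((depth, width))

    def _buckets(self, label):
        for row in range(self.depth):
            h = hashlib.blake2b(f"{row}:{label}".encode(), digest_size=8)
            yield row, int.from_bytes(h.digest(), "big") % self.width

    def add(self, label, score):
        for row, col in self._buckets(label):
            self.table[row, col] += score

    def estimate(self, label):
        # The minimum over rows over-estimates the true score by at most
        # a small additive error, with high probability.
        return min(self.table[row, col] for row, col in self._buckets(label))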
Efficient Algorithms and Error Analysis for the Modified Nyström Method
Many kernel methods suffer from high time and space complexities and are thus
prohibitive in big-data applications. To tackle the computational challenge,
the Nystr\"om method has been extensively used to reduce time and space
complexities by sacrificing some accuracy. The Nystr\"om method speedups
computation by constructing an approximation of the kernel matrix using only a
few columns of the matrix. Recently, a variant of the Nystr\"om method called
the modified Nystr\"om method has demonstrated significant improvement over the
standard Nystr\"om method in approximation accuracy, both theoretically and
empirically.
In this paper, we propose two algorithms that make the modified Nyström
method practical. First, we devise a simple column selection algorithm with a
provable error bound. Our algorithm is more efficient than, easier to implement
than, and nearly as accurate as the state-of-the-art algorithm. Second, with the
selected columns at hand, we propose an algorithm that computes the
approximation in lower time complexity than the approach in the previous work.
Furthermore, we prove that the modified Nyström method is exact under certain
conditions, and we establish a lower error bound for the modified Nyström
method.
Comment: 9-page paper plus appendix. In Proceedings of the 17th International
Conference on Artificial Intelligence and Statistics (AISTATS) 2014,
Reykjavik, Iceland. JMLR: W&CP volume 3
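
To make the distinction concrete, here is a small NumPy sketch of the standard Nyström approximation C W^+ C^T and of the modified variant C (C^+ K (C^+)^T) C^T built from the same sampled columns. The uniform column sampling in the example is only a placeholder, not the column selection algorithm proposed in the paper.

import numpy as np

def nystrom(K, idx):
    # Standard Nystrom: K ~ C @ pinv(W) @ C.T with C = K[:, idx], W = K[idx, idx].
    C = K[:, idx]
    W = K[np.ix_(idx, idx)]
    return C @ np.linalg.pinv(W) @ C.T

def modified_nystrom(K, idx):
    # Modified Nystrom: K ~ C @ U @ C.T with U = pinv(C) @ K @ pinv(C).T,
    # the intersection matrix minimizing the Frobenius error for this C.
    C = K[:, idx]
    Cp = np.linalg.pinv(C)
    return C @ (Cp @ K @ Cp.T) @ C.T

# Toy usage with an RBF kernel matrix and uniformly sampled columns.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
K = np.exp(-np.square(X[:, None] - X[None, :]).sum(-1))
idx = rng.choice(K.shape[0], size=20, replace=False)
err_std = np.linalg.norm(K - nystrom(K, idx), "fro")
err_mod = np.linalg.norm(K - modified_nystrom(K, idx), "fro")  # never larger than err_std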