Learning Random Kernel Approximations for Object Recognition
Approximations based on random Fourier features have recently emerged as an
efficient and formally consistent methodology to design large-scale kernel
machines. By expressing the kernel as a Fourier expansion, these methods
generate features from a finite set of random basis projections sampled from
the Fourier transform of the kernel, so that feature inner products are Monte
Carlo approximations of the original kernel. Based on the observation that different
kernel-induced Fourier sampling distributions correspond to different kernel
parameters, we show that an optimization process in the Fourier domain can be
used to identify the different frequency bands that are useful for prediction
on training data. Moreover, applying the group Lasso to random feature
vectors corresponding to a linear combination of multiple kernels leads to
efficient and scalable reformulations of the standard multiple kernel learning
model \cite{Varma09}. In this paper we develop the linear Fourier approximation
methodology for gradient-based single and multiple kernel learning and
show that it produces fast and accurate predictors on a complex dataset such as
the PASCAL Visual Object Classes Challenge 2011 (VOC2011).
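The random Fourier feature construction summarized above can be sketched as follows for the Gaussian (RBF) kernel. This is a minimal illustration of the general Rahimi-Recht recipe, not the paper's implementation; the width parameter `gamma` and the feature count `D` are illustrative choices, and frequencies are sampled from the kernel's Fourier transform (a Gaussian) so that feature inner products are Monte Carlo estimates of the kernel.

```python
import numpy as np

rng = np.random.default_rng(0)
d, D, gamma = 5, 2000, 0.5  # input dim, number of random features, RBF width

# The RBF kernel k(x, y) = exp(-gamma * ||x - y||^2) has a Gaussian Fourier
# transform; sample random frequencies from it, plus uniform phase offsets.
W = rng.normal(scale=np.sqrt(2 * gamma), size=(D, d))
b = rng.uniform(0, 2 * np.pi, size=D)

def z(x):
    """Random Fourier feature map: z(x) @ z(y) is a Monte Carlo
    approximation of k(x, y)."""
    return np.sqrt(2.0 / D) * np.cos(W @ x + b)

# Compare the exact kernel value with its random-feature approximation.
x, y = rng.normal(size=d), rng.normal(size=d)
exact = np.exp(-gamma * np.sum((x - y) ** 2))
approx = z(x) @ z(y)
```

The approximation error shrinks at the usual Monte Carlo rate O(1/sqrt(D)), which is what makes linear learners on `z(x)` a scalable surrogate for the full kernel machine.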