Personalized Cinemagraphs using Semantic Understanding and Collaborative Learning
Cinemagraphs are a compelling way to convey dynamic aspects of a scene. In
these media, dynamic and still elements are juxtaposed to create an artistic
and narrative experience. Creating a high-quality, aesthetically pleasing
cinemagraph requires isolating objects in a semantically meaningful way and
then selecting good start times and looping periods for those objects to
minimize visual artifacts (such as tearing). To achieve this, we present a new
technique that uses object recognition and semantic segmentation as part of an
optimization method to automatically create cinemagraphs from videos that are
both visually appealing and semantically meaningful. Given a scene with
multiple objects, there are many cinemagraphs one could create. Our method
evaluates these multiple candidates and presents the best one, as determined by
a model trained to predict human preferences in a collaborative way. We
demonstrate the effectiveness of our approach with multiple results and a user
study.
Comment: To appear in ICCV 2017. 17 pages in total, including the supplementary
material.
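The candidate-selection step described above can be sketched as follows. This is a minimal illustration in our own notation, not the paper's trained preference model: each candidate cinemagraph (an object choice plus a loop start time and period) is scored by a preference function minus a tearing penalty at the loop boundary, and the best-scoring candidate is kept.

```python
def tearing_penalty(start, period, frame_diffs):
    """Penalty for loop-boundary mismatch (the 'tearing' artifact):
    sum of frame differences at the loop's start and end frames."""
    end = (start + period) % len(frame_diffs)
    return frame_diffs[start] + frame_diffs[end]

def select_cinemagraph(candidates, frame_diffs, preference_score):
    """Return the candidate maximizing preference score minus the
    tearing penalty over its (start, period) loop."""
    best, best_score = None, float("-inf")
    for cand in candidates:
        score = (preference_score(cand)
                 - tearing_penalty(cand["start"], cand["period"], frame_diffs))
        if score > best_score:
            best, best_score = cand, score
    return best
```

In the paper the preference score comes from a model trained on human judgments in a collaborative way; here it is simply a caller-supplied function.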
Coupled Depth Learning
In this paper we propose a method for estimating depth from a single image
using a coarse to fine approach. We argue that modeling the fine depth details
is easier after a coarse depth map has been computed. We express a global
(coarse) depth map of an image as a linear combination of a depth basis learned
from training examples. The depth basis captures spatial and statistical
regularities and reduces the problem of global depth estimation to the task of
predicting the input-specific coefficients in the linear combination. This is
formulated as a regression problem from a holistic representation of the image.
Crucially, the depth basis and the regression function are {\bf coupled} and
jointly optimized by our learning scheme. We demonstrate that this results in a
significant improvement in accuracy compared to direct regression of depth
pixel values or approaches learning the depth basis disjointly from the
regression function. The global depth estimate is then used as a guidance by a
local refinement method that introduces depth details that were not captured at
the global level. Experiments on the NYUv2 and KITTI datasets show that our
method outperforms the existing state-of-the-art at a considerably lower
computational cost for both training and testing.
Comment: 10 pages, 3 figures, 4 tables with quantitative evaluation.
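The core linear-algebraic step described above, writing a coarse depth map as a linear combination of a learned basis, can be sketched with made-up data. Note that the paper jointly trains the basis and a regressor that predicts the coefficients from a holistic image representation; here we only illustrate the reconstruction, recovering coefficients by least squares.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_basis = 64, 8

# learned depth basis: each column is one basis depth map (random stand-in)
B = rng.standard_normal((n_pixels, n_basis))

# a "ground-truth" coarse depth map lying in the span of the basis
w_true = rng.standard_normal(n_basis)
d = B @ w_true

# recovering the input-specific coefficients is a least-squares projection;
# the paper instead *regresses* w from image features, coupled with B
w_hat, *_ = np.linalg.lstsq(B, d, rcond=None)
coarse_depth = B @ w_hat
```

The local refinement stage then adds fine details on top of this global estimate.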
A representer theorem for deep kernel learning
In this paper we provide a finite-sample and an infinite-sample representer
theorem for the concatenation of (linear combinations of) kernel functions of
reproducing kernel Hilbert spaces. These results serve as mathematical
foundation for the analysis of machine learning algorithms based on
compositions of functions. As a direct consequence in the finite-sample case,
the corresponding infinite-dimensional minimization problems can be recast into
(nonlinear) finite-dimensional minimization problems, which can be tackled with
nonlinear optimization algorithms. Moreover, we show how concatenated machine
learning problems can be reformulated as neural networks and how our
representer theorem applies to a broad class of state-of-the-art deep learning
methods.
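As a simplified sketch of the finite-sample statement, in our own notation and restricted to two layers (the paper's result covers general concatenations), a regularized empirical risk minimization over two composed RKHSs

```latex
\min_{f_1 \in H_1,\, f_2 \in H_2}\;
\sum_{i=1}^{n} L\bigl(y_i,\, f_2(f_1(x_i))\bigr)
+ \lambda_1 \|f_1\|_{H_1}^2 + \lambda_2 \|f_2\|_{H_2}^2
```

admits minimizers expressible through the training points,

```latex
f_1(\cdot) = \sum_{i=1}^{n} \alpha_i\, k_1(x_i, \cdot),
\qquad
f_2(\cdot) = \sum_{i=1}^{n} \beta_i\, k_2\bigl(f_1(x_i), \cdot\bigr),
```

so the infinite-dimensional problem reduces to a (nonlinear) finite-dimensional optimization over the coefficient vectors $\alpha, \beta \in \mathbb{R}^n$.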
Deep and self-taught learning for protein accessible surface area prediction
Accessible surface area (ASA) captures the degree of burial or surface accessibility of a protein residue and is an important indicator of the behavior of amino acids within a protein. It can be used to identify protein interactions, interfaces, folding states, and more. Calculating ASA requires the structure of the protein; however, structure determination for proteins is expensive and requires significant technical effort. As a consequence, the prediction of ASA is an important and fundamental problem in bioinformatics and proteomics. In this work, we have investigated self-taught machine learning methods along with deep neural networks to predict the residue-level ASA of a protein. We have found that deep neural networks can predict the ASA of the residues in a protein accurately. Furthermore, the proposed deep-learning-based method does not require computationally demanding features such as the position-specific scoring matrix (PSSM) used in previous works: a simple Blosum62-based, position-dependent representation of the amino acids in a sequence window gives comparable performance. This is particularly attractive for proteome-wide prediction of ASA. We have used various self-taught learning schemes for obtaining an optimal feature representation from unlabeled data, including a sparse, regularized autoencoder neural network and a dictionary-based learning scheme, drawing unlabeled data from the protein universe in an attempt to improve the feature representation. We have also evaluated the performance of a stochastic-gradient-based predictor of accessible surface area for different feature representations.
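The substitution-matrix window encoding mentioned above can be sketched as follows. The real encoding uses rows of the 20x20 BLOSUM62 matrix; the tiny three-letter substitution table here is a made-up stand-in, and the function name and window convention are ours, not the paper's.

```python
import numpy as np

# Stand-in for BLOSUM62: made-up scores over a 3-letter "alphabet".
TOY_SUB = {
    "A": [4, 0, -1],
    "C": [0, 9, -3],
    "G": [-1, -3, 6],
}

def window_features(seq, pos, half_width, sub=TOY_SUB, dim=3):
    """Concatenate substitution-matrix rows for the residues in a window
    centered at `pos`; positions past the sequence ends are zero-padded."""
    feats = []
    for i in range(pos - half_width, pos + half_width + 1):
        if 0 <= i < len(seq):
            feats.extend(sub[seq[i]])
        else:
            feats.extend([0] * dim)  # pad beyond the sequence boundary
    return np.array(feats, dtype=float)
```

The resulting fixed-length vector is what a deep network (or the self-taught feature learners) would consume per residue.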
Connections Between Adaptive Control and Optimization in Machine Learning
This paper demonstrates many immediate connections between adaptive control
and optimization methods commonly employed in machine learning. Starting from
common output error formulations, similarities in update law modifications are
examined. Concepts in stability, performance, and learning common to both
fields are then discussed. Building on the similarities in update laws and
common concepts, new intersections and opportunities for improved algorithm
analysis are provided. In particular, a specific problem related to higher
order learning is solved through insights obtained from these intersections.
Comment: 18 pages.
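The similarity in update laws that the abstract mentions can be made concrete for the simplest case: with a linear-in-parameters model, a stochastic-gradient step on the squared output error has the same form as a discrete-time gradient adaptive law. This toy illustration uses our own notation, not the paper's.

```python
import numpy as np

def output_error_update(theta, x, y, gamma):
    """theta <- theta - gamma * e * x with output error e = theta.x - y:
    simultaneously an SGD step on 0.5 * e**2 and a discrete-time
    gradient adaptive law with adaptation gain gamma."""
    e = theta @ x - y
    return theta - gamma * e * x

# noiseless identification: the estimate converges to the true parameters
rng = np.random.default_rng(1)
theta_true = np.array([2.0, -1.0])
theta = np.zeros(2)
for _ in range(500):
    x = rng.standard_normal(2)  # persistently exciting regressor
    theta = output_error_update(theta, x, theta_true @ x, gamma=0.1)
```

In adaptive control, gamma is an adaptation gain chosen with stability in mind; in machine learning, the same quantity is a learning rate tuned for convergence speed, which is exactly the kind of parallel the paper builds on.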
A Bayesian Poisson-Gaussian Process Model for Popularity Learning in Edge-Caching Networks
Edge-caching is recognized as an efficient technique for future cellular
networks to improve network capacity and user-perceived quality of experience.
To enhance the performance of caching systems, designing an accurate content
request prediction algorithm plays an important role. In this paper, we develop
a flexible model, a Poisson regressor based on a Gaussian process, for the
content request distribution.
The first important advantage of the proposed model is that it encourages the
already existing or seen contents with similar features to be correlated in the
feature space and therefore it acts as a regularizer for the estimation.
Second, it allows predicting the popularities of newly added or unseen contents
whose statistical data is not available in advance. In order to learn the model
parameters, which yield the Poisson arrival rates or alternatively the content
\textit{popularities}, we invoke the Bayesian approach which is robust against
over-fitting.
However, the resulting posterior distribution is analytically intractable to
compute. To tackle this, we apply a Markov Chain Monte Carlo (MCMC) method to
approximate this distribution which is also asymptotically exact. Nevertheless,
the MCMC is computationally demanding especially when the number of contents is
large. Thus, we employ the Variational Bayes (VB) method as an alternative low
complexity solution. More specifically, the VB method addresses the
approximation of the posterior distribution through an optimization problem.
Subsequently, we present a fast block-coordinate descent algorithm to solve
this optimization problem. Finally, extensive simulation results both on
synthetic and real-world datasets are provided to show the accuracy of our
prediction algorithm and the cache hit ratio (CHR) gain compared to existing
methods from the literature.
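The generative model described above can be sketched as follows (the notation and kernel choice are ours): content feature vectors induce a GP-correlated latent function, and request counts are Poisson with rate given by the exponentiated latent. Similar features yield correlated latents, which is the regularizing effect the abstract mentions, and a new content's rate can be predicted from its features alone via the GP prior.

```python
import numpy as np

def rbf_kernel(X, lengthscale=1.0):
    """Squared-exponential covariance over content feature vectors."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 3))              # one feature vector per content
K = rbf_kernel(X) + 1e-8 * np.eye(5)         # GP covariance (with jitter)
f = rng.multivariate_normal(np.zeros(5), K)  # latent log-rates, f ~ N(0, K)
counts = rng.poisson(np.exp(f))              # Poisson request counts
```

Bayesian inference over f given observed counts is what the paper approximates, exactly via MCMC or cheaply via Variational Bayes with block-coordinate descent.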