
    Learning Multiple Visual Tasks while Discovering their Structure

    Multi-task learning is a natural approach for computer vision applications that require the simultaneous solution of several distinct but related problems, e.g. object detection, classification, tracking of multiple agents, or denoising, to name a few. The key idea is that exploring task relatedness (structure) can lead to improved performance. In this paper, we propose and study a novel sparse, non-parametric approach exploiting the theory of Reproducing Kernel Hilbert Spaces for vector-valued functions. We develop a suitable regularization framework which can be formulated as a convex optimization problem, and is provably solvable using an alternating minimization approach. Empirical tests show that the proposed method compares favorably to state-of-the-art techniques and further allows the recovery of interpretable structures, a problem of interest in its own right.
    Comment: 19 pages, 3 figures, 3 tables
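    As a rough, generic illustration of the alternating-minimization idea mentioned in the abstract (not the paper's vector-valued RKHS estimator), a linear multi-task sketch could alternate between per-task weights and a task-relationship matrix. The learning rate, regularization weight, and the closed-form structure update below are assumptions borrowed from standard multi-task relationship learning, not the authors' algorithm.

```python
# Minimal sketch: linear multi-task learning with alternating minimization
# between per-task weights W and a task-relationship matrix A.
import numpy as np

def sqrtm_psd(M):
    """Matrix square root of a symmetric PSD matrix via eigendecomposition."""
    vals, vecs = np.linalg.eigh(M)
    return (vecs * np.sqrt(np.clip(vals, 0.0, None))) @ vecs.T

def multitask_alt_min(Xs, ys, lam=0.1, n_iter=50, lr=1e-3):
    """Xs, ys: lists of (n_t, d) design matrices and (n_t,) targets, one per task."""
    d, T = Xs[0].shape[1], len(Xs)
    W = np.zeros((d, T))               # column t holds the weights of task t
    A = np.eye(T) / T                  # task-structure (relationship) matrix
    for _ in range(n_iter):
        # W-step: gradient step on sum_t ||X_t w_t - y_t||^2 + lam * tr(W A^-1 W^T)
        A_inv = np.linalg.inv(A + 1e-8 * np.eye(T))
        grad = 2.0 * lam * W @ A_inv
        for t in range(T):
            grad[:, t] += 2.0 * Xs[t].T @ (Xs[t] @ W[:, t] - ys[t])
        W -= lr * grad
        # A-step: closed form A = (W^T W)^(1/2) / tr((W^T W)^(1/2))
        S = sqrtm_psd(W.T @ W + 1e-8 * np.eye(T))
        A = S / np.trace(S)
    return W, A                        # A hints at the recovered task structure
```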

    Non-convex regularization in remote sensing

    In this paper, we study the effect of different regularizers and their implications in high-dimensional image classification and sparse linear unmixing. Although kernelization or sparse methods are globally accepted solutions for processing data in high dimensions, we present here a study on the impact of the form of regularization used and its parametrization. We consider regularization via traditional squared (ℓ2) and sparsity-promoting (ℓ1) norms, as well as more unconventional non-convex regularizers (ℓp and Log Sum Penalty). We compare their properties and advantages on several classification and linear unmixing tasks and provide advice on the choice of the best regularizer for the problem at hand. Finally, we also provide a fully functional toolbox for the community.
    Comment: 11 pages, 11 figures
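    To make the four penalty families concrete, a minimal sketch of their values on a coefficient vector is given below (penalty values only, not the solvers); the choice of p for ℓp and the epsilon in the Log Sum Penalty are illustrative assumptions, not the paper's settings.

```python
# Values of the compared regularizers for a weight vector w.
import numpy as np

def l2_penalty(w):
    return np.sum(w ** 2)                         # squared l2 (ridge)

def l1_penalty(w):
    return np.sum(np.abs(w))                      # l1 (sparsity-promoting)

def lp_penalty(w, p=0.5):
    return np.sum(np.abs(w) ** p)                 # non-convex lp, 0 < p < 1

def log_sum_penalty(w, eps=1e-3):
    return np.sum(np.log(1.0 + np.abs(w) / eps))  # Log Sum Penalty (LSP)

w = np.array([0.0, 0.1, -2.0, 5.0])
for name, fn in [("l2", l2_penalty), ("l1", l1_penalty),
                 ("lp", lp_penalty), ("LSP", log_sum_penalty)]:
    print(name, fn(w))
```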

    Robust Image Analysis by L1-Norm Semi-supervised Learning

    This paper presents a novel L1-norm semi-supervised learning algorithm for robust image analysis by giving a new L1-norm formulation of Laplacian regularization, which is the key step of graph-based semi-supervised learning. Since our L1-norm Laplacian regularization is defined directly over the eigenvectors of the normalized Laplacian matrix, we successfully formulate semi-supervised learning as an L1-norm linear reconstruction problem which can be effectively solved with sparse coding. By working with only a small subset of eigenvectors, we further develop a fast sparse coding algorithm for our L1-norm semi-supervised learning. Due to the sparsity induced by sparse coding, the proposed algorithm can deal with noise in the data to some extent and thus has important applications to robust image analysis, such as noise-robust image classification and noise reduction for visual and textual bag-of-words (BOW) models. In particular, this paper is the first attempt to obtain robust image representation by sparse co-refinement of visual and textual BOW models. Experimental results show the promising performance of the proposed algorithm.
    Comment: This is an extension of our long paper in ACM MM 201
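    A rough sketch of the idea described above (not the authors' exact algorithm or their fast solver): build a neighborhood graph, take the smallest eigenvectors of the normalized Laplacian, and fit the labeled part of each class indicator with an L1-regularized (Lasso) reconstruction. The number of eigenvectors, neighborhood size, and sparsity weight below are illustrative assumptions, and scikit-learn's Lasso stands in for the paper's sparse coding step.

```python
# L1 reconstruction over Laplacian eigenvectors for semi-supervised labeling.
import numpy as np
from sklearn.neighbors import kneighbors_graph
from sklearn.linear_model import Lasso

def l1_ssl_predict(X, y_labeled, labeled_idx, k=20, n_neighbors=10, alpha=1e-3):
    # Symmetric k-NN affinity graph
    W = kneighbors_graph(X, n_neighbors, mode="connectivity").toarray()
    W = np.maximum(W, W.T)
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    L = np.eye(len(X)) - D_inv_sqrt @ W @ D_inv_sqrt   # normalized Laplacian
    _, vecs = np.linalg.eigh(L)
    U = vecs[:, :k]                                    # k smoothest eigenvectors
    n_classes = int(y_labeled.max()) + 1
    scores = np.zeros((len(X), n_classes))
    for c in range(n_classes):
        target = (y_labeled == c).astype(float)        # class-c indicator on labeled points
        reg = Lasso(alpha=alpha, fit_intercept=False).fit(U[labeled_idx], target)
        scores[:, c] = U @ reg.coef_                   # propagate to all points
    return scores.argmax(axis=1)                       # predicted labels for every point
```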