Simultaneous Feature Learning and Hash Coding with Deep Neural Networks
Similarity-preserving hashing is a widely used method for nearest neighbour
search in large-scale image retrieval tasks. In most existing hashing methods,
an image is first encoded as a vector of hand-engineered visual features,
followed by another separate projection or quantization step that generates
binary codes. However, such visual feature vectors may not be optimally
compatible with the coding process, thus producing sub-optimal hashing codes.
In this paper, we propose a deep architecture for supervised hashing, in which
images are mapped into binary codes via carefully designed deep neural
networks. The pipeline of the proposed deep architecture consists of three
building blocks: 1) a sub-network with a stack of convolution layers to produce
effective intermediate image features; 2) a divide-and-encode module to
divide the intermediate image features into multiple branches, each encoded
into one hash bit; and 3) a triplet ranking loss designed to characterize that
one image is more similar to the second image than to the third one. Extensive
evaluations on several benchmark image datasets show that the proposed
simultaneous feature learning and hash coding pipeline brings substantial
improvements over other state-of-the-art supervised or unsupervised hashing
methods.
Comment: This paper has been accepted to the IEEE Conference on Computer Vision
and Pattern Recognition (CVPR), 2015.
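To make building block 3 concrete, the following is a minimal NumPy sketch of
a triplet ranking hinge loss on relaxed (real-valued) hash codes; the margin
and the squared Euclidean distance are illustrative assumptions rather than the
paper's exact formulation.

import numpy as np

def triplet_ranking_loss(h_query, h_similar, h_dissimilar, margin=1.0):
    # Encourage the query code to be closer to the similar image's code
    # than to the dissimilar image's code by at least `margin`
    # (the margin value here is an illustrative choice).
    d_pos = np.sum((h_query - h_similar) ** 2)     # distance to similar image
    d_neg = np.sum((h_query - h_dissimilar) ** 2)  # distance to dissimilar image
    return max(0.0, margin + d_pos - d_neg)

# Toy usage with 8-bit relaxed codes in [0, 1]
rng = np.random.default_rng(0)
h_q, h_p, h_n = rng.random(8), rng.random(8), rng.random(8)
print(triplet_ranking_loss(h_q, h_p, h_n))

The loss is zero once the ranking constraint holds with the required margin,
so training effort concentrates on triplets whose ranking is still wrong.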
Nonconvex Nonsmooth Low-Rank Minimization via Iteratively Reweighted Nuclear Norm
The nuclear norm is widely used as a convex surrogate of the rank function in
compressive sensing for low-rank matrix recovery, with applications in image
recovery and signal processing. However, solving the nuclear-norm-based relaxed
convex problem usually leads to a suboptimal solution of the original rank
minimization problem. In this paper, we propose to apply a family of
nonconvex surrogates of the $\ell_0$-norm to the singular values of a matrix to
approximate the rank function. This leads to a nonconvex nonsmooth minimization
problem. Then we propose to solve the problem by Iteratively Reweighted Nuclear
Norm (IRNN) algorithm. IRNN iteratively solves a Weighted Singular Value
Thresholding (WSVT) problem, which has a closed form solution due to the
special properties of the nonconvex surrogate functions. We also extend IRNN to
solve the nonconvex problem with two or more blocks of variables. In theory, we
prove that IRNN decreases the objective function value monotonically, and any
limit point is a stationary point. Extensive experiments on both synthetic
data and real images demonstrate that IRNN enhances low-rank matrix
recovery compared with state-of-the-art convex algorithms.
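The key computational step of IRNN admits a closed form. Below is a minimal
NumPy sketch of the WSVT subproblem
min_X sum_i w_i * sigma_i(X) + (1/2)||X - Y||_F^2, whose solution shrinks each
singular value of Y by its weight; this closed form is valid when the weights
are nonnegative and nondecreasing, which is exactly what the decreasing
gradient of a concave penalty produces on sorted singular values.

import numpy as np

def wsvt(Y, w):
    # Weighted Singular Value Thresholding: shrink each singular value
    # of Y by its weight and keep the nonnegative part. Valid as a closed
    # form when w is nonnegative and nondecreasing (sigma_1 >= sigma_2 >= ...).
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s_shrunk = np.maximum(s - w, 0.0)  # per-value soft-thresholding
    return U @ np.diag(s_shrunk) @ Vt

# Toy usage: shrink a random 5x4 matrix with nondecreasing weights
rng = np.random.default_rng(0)
Y = rng.standard_normal((5, 4))
X = wsvt(Y, np.array([0.1, 0.2, 0.3, 0.4]))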
Proximal Iteratively Reweighted Algorithm with Multiple Splitting for Nonconvex Sparsity Optimization
This paper proposes the Proximal Iteratively REweighted (PIRE) algorithm for
solving a general problem that covers a large body of nonconvex sparse and
structured-sparse problems. Compared with previous iterative solvers for
nonconvex sparse problems, PIRE is much more general and efficient: its
per-iteration computational cost is usually as low as that of
state-of-the-art convex solvers. We further propose the PIRE algorithm with
Parallel Splitting (PIRE-PS) and the PIRE algorithm with Alternative Updating
(PIRE-AU) to handle multi-variable problems. In theory, we prove that our
proposed methods converge and any limit solution is a stationary point.
Extensive experiments on both synthetic and real data sets demonstrate that our
methods achieve learning performance comparable to previous nonconvex solvers
while being much more efficient.
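As an illustration of the reweighting scheme, here is a minimal NumPy sketch
of a PIRE-style solver for the special case
min_x (1/2)||Ax - b||^2 + lam * sum_i log(1 + |x_i|/eps); the least-squares
loss, the concave log penalty, and the step size are assumptions chosen for
illustration, not PIRE's general setting.

import numpy as np

def pire_like_solver(A, b, lam=0.1, eps=0.1, n_iter=200):
    # Each iteration reweights the l1 penalty by the concave penalty's
    # gradient at the current iterate, then takes one proximal-gradient
    # (weighted soft-thresholding) step, so the per-iteration cost matches
    # that of a convex l1 solver.
    x = np.zeros(A.shape[1])
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth part
    for _ in range(n_iter):
        w = lam / (eps + np.abs(x))        # weights = gradient of the log penalty
        z = x - A.T @ (A @ x - b) / L      # gradient step on the smooth loss
        x = np.sign(z) * np.maximum(np.abs(z) - w / L, 0.0)  # weighted soft-threshold
    return x

# Toy usage on a small sparse recovery problem
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100); x_true[:5] = 1.0
x_hat = pire_like_solver(A, A @ x_true)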
Nonconvex Sparse Spectral Clustering by Alternating Direction Method of Multipliers and Its Convergence Analysis
Spectral Clustering (SC) is a widely used data clustering method which first
learns a low-dimensional embedding $U$ of the data by computing the eigenvectors
of the normalized Laplacian matrix, and then performs k-means on the rows of $U$
to get the final clustering result. The Sparse Spectral Clustering (SSC) method
extends SC with a sparse regularization on $UU^\top$, using the block
diagonal structure prior of $UU^\top$ in the ideal case. However, encouraging
$UU^\top$ to be sparse leads to a heavily nonconvex problem which is
challenging to solve, so the work (Lu, Yan, and Lin 2016) proposes a convex
relaxation that pursues this aim indirectly. However, the convex
relaxation generally leads to a loose approximation, and the quality of its
solution is unclear. This work instead considers solving the nonconvex
formulation of SSC, which directly encourages $UU^\top$ to be sparse. We propose
an efficient Alternating Direction Method of Multipliers (ADMM) to solve the
nonconvex SSC and provide a convergence guarantee. In particular, we prove
that the sequence generated by ADMM always has a limit point and that any limit
point is a stationary point. Our analysis does not impose any assumptions on
the iterates and is thus practical. Our proposed ADMM for nonconvex problems
allows the stepsize to be increasing but upper bounded, and this makes it very
efficient in practice. Experimental analysis on several real data sets verifies
the effectiveness of our method.
Comment: Proceedings of the AAAI Conference on Artificial Intelligence (AAAI),
2018.
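The increasing-but-bounded stepsize rule is the practically important detail.
Here is a minimal NumPy sketch of a generic two-block ADMM with that schedule,
applied to the toy splitting min_{x,z} (1/2)||x - c||^2 + lam*||z||_1 subject
to x = z; the toy objective and the schedule constants mu and rho_max are
illustrative assumptions, and the actual subproblems for nonconvex SSC differ.

import numpy as np

def admm_increasing_rho(c, lam=0.5, rho=1.0, mu=1.1, rho_max=1e4, n_iter=100):
    # Two-block ADMM whose penalty parameter rho grows by a factor mu each
    # iteration but is capped at rho_max, mirroring the increasing-but-
    # upper-bounded stepsize rule analyzed in the paper.
    x = np.zeros_like(c)
    z = np.zeros_like(c)
    y = np.zeros_like(c)                                       # dual variable
    for _ in range(n_iter):
        x = (c + rho * z - y) / (1.0 + rho)                    # x-step (closed form)
        v = x + y / rho
        z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0)  # z-step (soft-threshold)
        y = y + rho * (x - z)                                  # dual ascent
        rho = min(rho * mu, rho_max)                           # increasing but bounded
    return x, z

x, z = admm_increasing_rho(np.array([2.0, -0.3, 0.8, 0.0]))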
Generalized Nonconvex Nonsmooth Low-Rank Minimization
As surrogate functions of the $\ell_0$-norm, many nonconvex penalty functions
have been proposed to enhance sparse vector recovery. It is easy to extend these
nonconvex penalty functions to the singular values of a matrix to enhance low-rank
matrix recovery. However, different from convex optimization, solving the
nonconvex low-rank minimization problem is much more challenging than the
nonconvex sparse minimization problem. We observe that all the existing
nonconvex penalty functions are concave and monotonically increasing on
$[0, \infty)$. Thus their gradients are decreasing functions. Based on this
property, we propose an Iteratively Reweighted Nuclear Norm (IRNN) algorithm to
solve the nonconvex nonsmooth low-rank minimization problem. IRNN iteratively
solves a Weighted Singular Value Thresholding (WSVT) problem. By setting the
weight vector as the gradient of the concave penalty function, the WSVT problem
has a closed form solution. In theory, we prove that IRNN decreases the
objective function value monotonically, and any limit point is a stationary
point. Extensive experiments on both synthetic data and real images demonstrate
that IRNN enhances low-rank matrix recovery compared with state-of-the-art
convex algorithms.
Comment: IEEE Conference on Computer Vision and Pattern Recognition (CVPR),
2014.
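To see how the reweighting and the thresholding step fit together, here is a
minimal NumPy sketch of the IRNN loop for the simple smooth loss
(1/2)||X - M||_F^2 with an illustrative concave log penalty on the singular
values; with this loss each proximal step reduces to a single closed-form WSVT
on M with updated weights. The penalty, its parameters, and the loss are
assumptions made for illustration.

import numpy as np

def irnn_denoise(M, lam=1.0, eps=0.1, n_iter=30):
    # IRNN sketch for  min_X sum_i g(sigma_i(X)) + 0.5*||X - M||_F^2
    # with g(t) = lam * log(1 + t/eps). Each iteration sets the weights to
    # g's gradient at the current singular values (decreasing gradient, so
    # the weights are nondecreasing) and solves one WSVT problem exactly.
    U, s_m, Vt = np.linalg.svd(M, full_matrices=False)
    X = M.copy()
    for _ in range(n_iter):
        s = np.linalg.svd(X, compute_uv=False)
        w = lam / (eps + s)                               # reweighting step
        X = U @ np.diag(np.maximum(s_m - w, 0.0)) @ Vt    # closed-form WSVT step
    return X

# Toy usage: denoise a noisy low-rank matrix
rng = np.random.default_rng(0)
L0 = rng.standard_normal((20, 3)) @ rng.standard_normal((3, 20))
X_hat = irnn_denoise(L0 + 0.1 * rng.standard_normal((20, 20)))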
- …