Exploring Algorithmic Limits of Matrix Rank Minimization under Affine Constraints
Many applications require recovering a matrix of minimal rank within an
affine constraint set, with matrix completion a notable special case. Because
the problem is NP-hard in general, it is common to replace the matrix rank with
the nuclear norm, which acts as a convenient convex surrogate. While elegant
theoretical conditions elucidate when this replacement is likely to be
successful, they are highly restrictive and convex algorithms fail when the
ambient rank is too high or when the constraint set is poorly structured.
Non-convex alternatives fare somewhat better when carefully tuned; however,
convergence to locally optimal solutions remains a continuing source of
failure. Against this backdrop we derive a deceptively simple and
parameter-free probabilistic PCA-like algorithm that is capable, over a wide
battery of empirical tests, of successful recovery even at the theoretical
limit where the number of measurements equals the degrees of freedom in the
unknown low-rank matrix. Somewhat surprisingly, this is possible even when the
affine constraint set is highly ill-conditioned. While proving general recovery
guarantees remains elusive for non-convex algorithms, Bayesian-inspired or
otherwise, we nonetheless show conditions whereby the underlying cost function
has a unique stationary point located at the global optimum; no existing cost
function we are aware of satisfies this same property. We conclude with a
simple computer vision application involving image rectification and a standard
collaborative filtering benchmark.
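The nuclear-norm surrogate discussed above is commonly minimized by singular value thresholding (SVT), whose proximal step shrinks the singular values of the current iterate. The sketch below illustrates that convex baseline on a matrix completion instance; it is not the parameter-free probabilistic PCA-like algorithm the abstract derives, and the hyperparameters (`tau`, `step`, `iters`) are illustrative assumptions, not values from the paper.

```python
import numpy as np

def svt_complete(M, mask, tau=150.0, step=2.0, iters=300):
    """Illustrative singular value thresholding for matrix completion.

    M    : matrix with observed entries (values outside `mask` are ignored)
    mask : boolean array, True where an entry of M is observed
    tau, step, iters : assumed hyperparameters, not from the paper
    """
    Y = np.zeros_like(M)
    for _ in range(iters):
        # Proximal step for the nuclear norm: soft-threshold singular values
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X = U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt
        # Dual ascent step enforcing agreement on the observed entries only
        Y = Y + step * mask * (M - X)
    return X

# Example: recover a rank-2 matrix from roughly 60% of its entries
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 30))
mask = rng.random(A.shape) < 0.6
X = svt_complete(A * mask, mask)
```

Here the number of observed entries comfortably exceeds the degrees of freedom of a rank-2 matrix, so the convex surrogate succeeds; the abstract's point is that such convex methods break down nearer the theoretical limit or under ill-conditioned constraint sets.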
Acceleration of Nonlinear Dimensionality Reduction Algorithm for Matrix Completion Based on Probability Density Maximization
This paper deals with a nonlinear matrix completion problem in which the column vectors of the target matrix belong to a low-dimensional manifold. This problem arises in fields such as image processing and audio processing. Traditionally, the problem is solved by exploiting the low rank of the matrix, under the assumption that the column vectors belong to a low-dimensional linear subspace. However, since minimizing the rank of the matrix directly is NP-hard, various alternative methods have been proposed. The accuracy of these methods deteriorates when the column vectors of the target matrix instead belong to a low-dimensional manifold. Therefore, an algorithm has been proposed that focuses on local neighborhoods, assuming that each column vector of the matrix follows a Gaussian distribution; this algorithm solves the problem with high accuracy by maximizing a weighted mean of the log joint probability density of the column vectors. However, the calculation takes a long time when the number of columns is large. We therefore propose a method that reduces the amount of computation by applying a threshold function to make the computation sparse. Numerical examples show the effectiveness of the proposed method.
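The sparsification idea can be illustrated as follows: when Gaussian weights between column vectors are computed for the local-neighborhood objective, most weights between distant columns are negligible, and a hard threshold zeroes them so downstream sums only touch a few neighbors per column. This is a hypothetical sketch under assumed names and parameters (`sigma`, `threshold`), not the authors' implementation.

```python
import numpy as np

def sparse_gaussian_weights(X, sigma=0.5, threshold=1e-3):
    """Pairwise Gaussian weights between the columns of X, with
    negligible weights zeroed by a hard threshold to sparsify later
    computation. All parameter values here are illustrative assumptions.
    """
    # Squared Euclidean distances between all pairs of columns
    sq = np.sum(X**2, axis=0)
    d2 = sq[:, None] + sq[None, :] - 2.0 * (X.T @ X)
    W = np.exp(-d2 / (2.0 * sigma**2))
    # Hard threshold: drop near-zero weights so each column keeps
    # only its effective local neighborhood
    W[W < threshold] = 0.0
    return W

# Example: 50 column vectors in 5 dimensions; most pairs are distant,
# so most off-diagonal weights fall below the threshold
rng = np.random.default_rng(0)
W = sparse_gaussian_weights(rng.standard_normal((5, 50)))
```

Because the thresholded weight matrix is mostly zeros, the weighted log-density sums in the original algorithm need only iterate over the surviving entries, which is the computational saving the abstract describes.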