Gradient descent for sparse rank-one matrix completion for crowd-sourced aggregation of sparsely interacting workers
We consider worker skill estimation for the single-coin
Dawid-Skene crowdsourcing model. In
practice, skill estimation is challenging because
worker assignments are sparse and irregular due
to the arbitrary and uncontrolled availability of
workers. We formulate skill estimation as a
rank-one correlation-matrix completion problem,
where the observed components correspond to
observed label correlation between workers. We
show that the correlation matrix can be successfully
recovered, and the skills are identifiable, if and only
if the sampling matrix (observed components) is
irreducible and aperiodic. We then propose an
efficient gradient descent scheme and show that
skill estimates converge to the desired global optimum
for such sampling matrices. Our proof is
original and the results are surprising in light of
the fact that even the weighted rank-one matrix
factorization problem is NP-hard in general. Next,
we derive sample complexity bounds for the noisy
case in terms of spectral properties of the signless
Laplacian of the sampling matrix. Our proposed
scheme achieves state-of-the-art performance on a
number of real-world datasets.
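The core idea can be illustrated with a small numerical sketch. The example below is a hypothetical toy setup (the data, sampling density, and step size are assumptions, not taken from the paper): a skill vector is recovered from sparsely observed off-diagonal entries of the rank-one correlation matrix by plain gradient descent on the observed-entry squared loss.

```python
import numpy as np

# Hypothetical sketch: recover a skill vector s from partially observed
# entries of the rank-one correlation matrix C = s s^T via gradient descent.
rng = np.random.default_rng(0)
n = 10
s_true = rng.uniform(0.5, 1.0, n)      # ground-truth worker skills
C = np.outer(s_true, s_true)           # rank-one correlation matrix

mask = rng.random((n, n)) < 0.4        # sparse sampling pattern
mask = np.triu(mask, 1)                # observe off-diagonal entries only
mask = mask | mask.T                   # symmetric sampling matrix

s = rng.uniform(0.5, 1.0, n)           # positive initialization
lr = 0.05
for _ in range(2000):
    R = mask * (np.outer(s, s) - C)    # residual on observed entries only
    grad = 2.0 * R @ s                 # gradient of the squared loss
    s = s - lr * grad

# The skill vector is identifiable only up to a global sign, so compare
# against both s_true and -s_true.
err = min(np.linalg.norm(s - s_true), np.linalg.norm(s + s_true))
print(err)
```

With a sampling pattern this dense the underlying graph is (with high probability) connected, which plays the role of the irreducibility condition in the abstract; the iterate then converges close to the true skills.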
The Matrix Ridge Approximation: Algorithms and Applications
We are concerned with an approximation problem for a symmetric positive
semidefinite matrix, motivated by a class of nonlinear machine
learning methods. We discuss an approximation approach that we call the
matrix ridge approximation. In particular, we define the matrix ridge approximation
as an incomplete matrix factorization plus a ridge term. Moreover, we present
probabilistic interpretations using a normal latent variable model and a
Wishart model for this approximation approach. The idea behind the latent
variable model in turn leads us to an efficient EM iterative method for
handling the matrix ridge approximation problem. Finally, we illustrate the
applications of the approximation approach in multivariate data analysis.
Empirical studies in spectral clustering and Gaussian process regression show
that the matrix ridge approximation with the EM iteration is potentially
useful.
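The "incomplete factorization plus ridge" form S ≈ A Aᵀ + δI can be sketched directly. The snippet below is not the paper's EM method; it is a simple eigenvalue-based construction (a common alternative) in which the ridge δ is set from the trailing spectrum. The function name and test matrix are illustrative assumptions.

```python
import numpy as np

# Hedged sketch (not the paper's EM algorithm): an eigenvalue-based
# rank-r "factorization plus ridge" approximation S ≈ A A^T + delta * I
# for a symmetric positive semidefinite matrix S.
def matrix_ridge_approx(S, r):
    vals, vecs = np.linalg.eigh(S)              # eigenvalues in ascending order
    vals, vecs = vals[::-1], vecs[:, ::-1]      # reorder to descending
    n = S.shape[0]
    delta = vals[r:].mean() if r < n else 0.0   # ridge = mean trailing eigenvalue
    A = vecs[:, :r] * np.sqrt(np.maximum(vals[:r] - delta, 0.0))
    return A, delta

rng = np.random.default_rng(1)
B = rng.standard_normal((50, 5))
S = B @ B.T + 0.1 * np.eye(50)                  # PSD matrix: rank-5 signal + ridge
A, delta = matrix_ridge_approx(S, 5)
approx = A @ A.T + delta * np.eye(50)
rel_err = np.linalg.norm(S - approx) / np.linalg.norm(S)
print(rel_err)
```

Because the test matrix is exactly "rank-5 plus a uniform ridge", this construction recovers it essentially exactly; on general PSD matrices it yields a low-rank-plus-ridge compression.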
A Compact Formulation for the Mixed-Norm Minimization Problem
Parameter estimation from multiple measurement vectors (MMVs) is a
fundamental problem in many signal processing applications, e.g., spectral
analysis and direction-of-arrival estimation. Recently, this problem has been
addressed using prior information in the form of a jointly sparse signal structure. A
prominent approach for exploiting joint sparsity considers mixed-norm
minimization in which, however, the problem size grows with the number of
measurements and the desired resolution. In this work, we derive
an equivalent, compact reformulation of the mixed-norm
minimization problem, which provides new insights into the relation between
different existing approaches for jointly sparse signal reconstruction. The
reformulation builds upon a compact parameterization, which models the
row-norms of the sparse signal representation as parameters of interest,
resulting in a significant reduction of the MMV problem size. Given the sparse
vector of row-norms, the jointly sparse signal can be computed from the MMVs in
closed form. For the special case of uniform linear sampling, we present an
extension of the compact formulation for gridless parameter estimation by means
of semidefinite programming. Furthermore, we derive in this case from our
compact problem formulation the exact equivalence between the
mixed-norm minimization and the atomic-norm minimization. Additionally, for the
case of irregular sampling or a large number of samples, we present a
low-complexity, grid-based implementation based on the coordinate descent method
- …
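The mixed-norm (ℓ2,1) MMV problem the abstract refers to can be sketched numerically. The example below is an illustrative assumption, not the paper's compact reformulation: it solves min_X ½‖AX − Y‖²_F + λ Σᵢ ‖X[i,:]‖₂ by proximal gradient descent (ISTA with row-wise soft thresholding), where the dictionary, sparsity level, and λ are all made up for the demo.

```python
import numpy as np

# Hedged sketch of l2,1 mixed-norm minimization for the MMV problem:
#   min_X 0.5 * ||A X - Y||_F^2 + lam * sum_i ||X[i, :]||_2
# solved by ISTA with row-wise soft thresholding (proximal operator of
# the l2,1 norm).
def row_soft_threshold(X, tau):
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
    return X * scale

rng = np.random.default_rng(2)
m, n, L = 30, 60, 8                         # measurements, atoms, snapshots (MMVs)
A = rng.standard_normal((m, n)) / np.sqrt(m)
X_true = np.zeros((n, L))
support = rng.choice(n, 3, replace=False)   # jointly sparse row support
X_true[support] = rng.standard_normal((3, L))
Y = A @ X_true                              # noiseless multiple measurement vectors

lam = 0.05
step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1 / Lipschitz constant of the gradient
X = np.zeros((n, L))
for _ in range(500):
    X = row_soft_threshold(X - step * A.T @ (A @ X - Y), step * lam)

# Rows with non-negligible norm should match the true joint support.
est_support = np.where(np.linalg.norm(X, axis=1) > 0.1)[0]
print(sorted(est_support), sorted(support))
```

The row norms ‖X[i,:]‖₂ are exactly the quantities the compact reformulation in the abstract treats as the parameters of interest: once the sparse vector of row norms is known, the jointly sparse signal follows from the MMVs in closed form.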