
    Decomposition and dictionary learning for 3D trajectories

    A new model for describing a three-dimensional (3D) trajectory is proposed in this paper. The studied trajectory is viewed as a linear combination of rotatable 3D patterns, so the resulting model is 3D rotation invariant (3DRI); the temporal patterns are additionally considered shift-invariant. The paper is divided into two parts based on this model. On the one hand, the 3DRI decomposition estimates the active patterns together with their coefficients, rotations and shift parameters. Based on sparse approximation, this is carried out by two non-convex optimizations: 3DRI matching pursuit (3DRI-MP) and 3DRI orthogonal matching pursuit (3DRI-OMP). On the other hand, a 3DRI dictionary learning algorithm (3DRI-DLA) learns the characteristic patterns of a database. The proposed algorithms are first applied to simulated data to evaluate their performance and to compare them with other algorithms. They are then applied to real motion data of cued speech, to learn the 3D trajectory patterns characteristic of this gestural language.
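The greedy decomposition the abstract describes builds on classical matching pursuit. As a minimal sketch of that base algorithm (deliberately omitting the paper's 3D-rotation and shift invariance, and using a toy hypothetical dictionary), one iteration picks the atom most correlated with the residual and subtracts its contribution:

```python
# Minimal sketch of plain matching pursuit (NOT the paper's 3DRI variant):
# greedily select the unit-norm atom with the largest correlation to the
# current residual, accumulate its coefficient, and update the residual.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def matching_pursuit(y, D, n_iter):
    """y: signal (list of floats); D: list of unit-norm atoms.
    Returns (coefficients, final residual)."""
    residual = list(y)
    coeffs = [0.0] * len(D)
    for _ in range(n_iter):
        # correlate every atom with the residual, keep the strongest
        corrs = [dot(residual, atom) for atom in D]
        k = max(range(len(D)), key=lambda i: abs(corrs[i]))
        coeffs[k] += corrs[k]
        # remove the selected atom's contribution from the residual
        residual = [r - corrs[k] * d for r, d in zip(residual, D[k])]
    return coeffs, residual
```

For example, with the orthonormal toy dictionary `[[1,0,0],[0,1,0]]` and signal `[2,3,0]`, two iterations recover the coefficients `[2, 3]` exactly. The 3DRI-MP of the paper additionally searches over a rotation and a time shift of each atom before the correlation step.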

    A Tensor-Based Dictionary Learning Approach to Tomographic Image Reconstruction

    We consider tomographic reconstruction using priors in the form of a dictionary learned from training images. The reconstruction has two stages: first we construct a tensor dictionary prior from our training data, and then we pose the reconstruction problem in terms of recovering the expansion coefficients in that dictionary. Our approach differs from past approaches in that a) we use a third-order tensor representation for our images and b) we recast the reconstruction problem using the tensor formulation. The dictionary learning problem is presented as a non-negative tensor factorization problem with sparsity constraints. The reconstruction problem is formulated in a convex optimization framework by looking for a solution with a sparse representation in the tensor dictionary. Numerical results show that our tensor formulation leads to very sparse representations of both the training images and the reconstructions, owing to its ability to represent repeated features compactly in the dictionary.
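The dictionary-learning stage above is a non-negative factorization. As an illustrative sketch of the matrix analogue (not the paper's tensor formulation, and without its sparsity constraints), the classic Lee-Seung multiplicative updates factor a non-negative data matrix V into non-negative factors W and H:

```python
# Hedged toy sketch: plain NMF via multiplicative updates, V ~ W H with all
# entries >= 0. The paper uses a third-order *tensor* factorization with
# sparsity constraints; this matrix version only illustrates the idea.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

def nmf(V, r, n_iter=500, eps=1e-9):
    m, n = len(V), len(V[0])
    # small positive deterministic initialization (illustrative choice)
    W = [[0.5 + 0.01 * ((i + j) % 3) for j in range(r)] for i in range(m)]
    H = [[0.5 + 0.01 * ((i + j) % 5) for j in range(n)] for i in range(r)]
    for _ in range(n_iter):
        # H <- H * (W^T V) / (W^T W H)   (elementwise)
        WtV = matmul(transpose(W), V)
        WtWH = matmul(transpose(W), matmul(W, H))
        H = [[H[i][j] * WtV[i][j] / (WtWH[i][j] + eps)
              for j in range(n)] for i in range(r)]
        # W <- W * (V H^T) / (W H H^T)   (elementwise)
        VHt = matmul(V, transpose(H))
        WHHt = matmul(W, matmul(H, transpose(H)))
        W = [[W[i][j] * VHt[i][j] / (WHHt[i][j] + eps)
              for j in range(r)] for i in range(m)]
    return W, H
```

Because the updates are multiplicative on non-negative data, W and H stay non-negative throughout; for an exactly rank-1 positive matrix the product W H converges to V.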

    An MDL framework for sparse coding and dictionary learning

    The power of sparse signal modeling with learned over-complete dictionaries has been demonstrated in a variety of applications and fields, from signal processing to statistical inference and machine learning. However, the statistical properties of these models, such as under-fitting or over-fitting on given sets of data, are still not well characterized in the literature. As a result, the success of sparse modeling depends on hand-tuning critical parameters for each dataset and application. This work aims to address this by providing a practical and objective characterization of sparse models by means of the Minimum Description Length (MDL) principle -- a well-established information-theoretic approach to model selection in statistical inference. The resulting framework yields a family of efficient sparse coding and dictionary learning algorithms which, by virtue of the MDL principle, are completely parameter free. Furthermore, the framework makes it possible to incorporate additional prior information into existing models, such as Markovian dependencies, or to define completely new problem formulations, including in the matrix analysis area, in a natural way. These virtues are demonstrated with parameter-free algorithms for the classic image denoising and classification problems, and for low-rank matrix recovery in video applications.
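The MDL idea behind "parameter-free" sparsity is that the sparsity level is chosen by minimizing a total description length rather than hand-tuning. As a rough illustrative sketch (a textbook two-part code, not the paper's actual codelength derivation, and assuming an orthonormal dictionary so each kept coefficient removes its energy from the residual):

```python
# Hedged MDL-style sketch: choose how many coefficients k to keep by
# minimizing  L(residual | k) + L(model), where the residual is coded
# under a Gaussian model (~ n/2 * log2(RSS/n) bits) and each kept
# coefficient is charged a fixed cost. Illustrative only.
import math

def mdl_select_k(y, coeffs_sorted, bits_per_coeff):
    """y: signal; coeffs_sorted: candidate coefficients, largest
    magnitude first (orthonormal-dictionary assumption: keeping the
    top-k subtracts their squared values from the residual energy)."""
    n = len(y)
    rss = sum(v * v for v in y)        # residual energy with k = 0
    best_k, best_len = 0, float("inf")
    for k in range(len(coeffs_sorted) + 1):
        # total description length = data bits + model bits
        length = 0.5 * n * math.log2(max(rss, 1e-12) / n) + k * bits_per_coeff
        if length < best_len:
            best_k, best_len = k, length
        if k < len(coeffs_sorted):
            rss -= coeffs_sorted[k] ** 2
    return best_k
```

On a toy signal with two large coefficients and small noise, the minimum description length is reached at k = 2: adding the large coefficients pays for itself in residual bits, while coding the noise coefficients does not.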