19 research outputs found

    Joint Majorization-Minimization for Nonnegative Matrix Factorization with the β-divergence

    Full text link
    This article proposes new multiplicative updates for nonnegative matrix factorization (NMF) with the β\beta-divergence objective function. Our new updates are derived from a joint majorization-minimization (MM) scheme, in which an auxiliary function (a tight upper bound of the objective function) is built for the two factors jointly and minimized at each iteration. This is in contrast with the classic approach in which a majorizer is derived for each factor separately. Like that classic approach, our joint MM algorithm also results in multiplicative updates that are simple to implement. They however yield a significant drop of computation time (for equally good solutions), in particular for some β\beta-divergences of important applicative interest, such as the squared Euclidean distance and the Kullback-Leibler or Itakura-Saito divergences. We report experimental results using diverse datasets: face images, an audio spectrogram, hyperspectral data and song play counts. Depending on the value of β\beta and on the dataset, our joint MM approach can yield CPU time reductions from about 13%13\% to 78%78\% in comparison to the classic alternating scheme
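For reference, the classic alternating multiplicative updates that the paper improves upon can be sketched as follows for the Kullback-Leibler case (β = 1). This is a minimal illustration of the baseline scheme, not the paper's joint MM algorithm; the function name and interface are our own.

```python
import numpy as np

def nmf_kl(V, rank, n_iter=200, seed=0):
    """Classic alternating multiplicative updates for KL-divergence NMF
    (beta = 1). Baseline sketch only: the paper's contribution is a *joint*
    MM scheme that majorizes the objective in W and H together."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, rank)) + 1e-3
    H = rng.random((rank, n)) + 1e-3
    eps = 1e-12  # avoids division by zero
    for _ in range(n_iter):
        # H <- H .* (W^T (V ./ WH)) ./ (W^T 1)
        H *= (W.T @ (V / (W @ H + eps))) / (W.sum(axis=0)[:, None] + eps)
        # W <- W .* ((V ./ WH) H^T) ./ (1 H^T)
        W *= ((V / (W @ H + eps)) @ H.T) / (H.sum(axis=1)[None, :] + eps)
    return W, H
```

By the MM construction, each update monotonically decreases the KL divergence, which is what the joint scheme preserves while cutting per-iteration cost.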

    A robust shifted proper orthogonal decomposition: Proximal methods for decomposing flows with multiple transports

    Full text link
    We present a new methodology for decomposing flows with multiple transports that further extends the shifted proper orthogonal decomposition (sPOD). The sPOD tries to approximate transport-dominated flows by a sum of co-moving data fields. The proposed methods stem from sPOD but optimize the co-moving fields directly and penalize their nuclear norm to promote low rank of the individual data in the decomposition. Furthermore, we add a robustness term to the decomposition that can deal with interpolation error and data noises. Leveraging tools from convex optimization, we derive three proximal algorithms to solve the decomposition problem. We report a numerical comparison with existing methods against synthetic data benchmarks and then show the separation ability of our methods on 1D and 2D incompressible and reactive flows. The resulting methodology is the basis of a new analysis paradigm that results in the same interpretability as the POD for the individual co-moving fields.Comment: 22 pages, 9 figures, preprin
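The central building block of such nuclear-norm-penalized proximal algorithms is the proximal operator of the nuclear norm, i.e. singular value soft-thresholding. A minimal sketch (our own naming and interface, not the paper's):

```python
import numpy as np

def prox_nuclear(X, tau):
    """Proximal operator of tau * ||.||_* (nuclear norm): soft-threshold the
    singular values of X. Core step of nuclear-norm proximal splitting."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    # shrink each singular value toward zero, dropping the small ones
    return (U * np.maximum(s - tau, 0.0)) @ Vt
```

Iterating this operator inside a splitting scheme promotes low rank of each co-moving field while the data-fit and robustness terms are handled by their own proximal steps.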

    How to globally solve non-convex optimization problems involving an approximate ℓ0 penalization

    Get PDF
    For dealing with sparse models, a large number of continuous approximations of the ℓ0 penalization have been proposed. However, the most accurate ones lead to non-convex optimization problems. In this paper, by observing that many such approximations are piecewise rational functions, we show that the original optimization problem can be recast as a multivariate polynomial problem. The latter is then globally solved by using recent optimization methods which consist of building a hierarchy of convex problems. Finally, experimental results illustrate that our method always provides a global optimum of the initial problem for standard ℓ0 approximations. This is in contrast with existing local algorithms, whose results depend on the initialization.
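A typical member of this family of piecewise rational ℓ0 surrogates can be written down in one line. The specific form below is an illustrative choice, not necessarily the one analyzed in the paper:

```python
def ell0_rational(x, delta=1e-2):
    """One common rational surrogate for the l0 penalty:
    phi(x) = x^2 / (x^2 + delta), with phi(0) = 0 and phi(x) -> 1 as |x|
    grows. Smaller delta tightens the approximation of the l0 indicator."""
    return x * x / (x * x + delta)
```

Because such a surrogate is a ratio of polynomials, summing it over the signal entries and adding a polynomial data-fit term yields exactly the kind of rational objective the paper recasts as a polynomial problem.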

    Rational optimization for nonlinear reconstruction with approximate ℓ0 penalization

    Get PDF
    Recovering a nonlinearly degraded signal in the presence of noise is a challenging problem. In this work, this problem is tackled by minimizing the sum of a nonconvex least-squares fit criterion and a penalty term. We assume that the nonlinearity of the model can be accounted for by a rational function. In addition, we suppose that the signal to be sought is sparse, and a rational approximation of the ℓ0 pseudo-norm thus constitutes a suitable penalization. The resulting composite cost function belongs to the broad class of semi-algebraic functions. To find a globally optimal solution to such an optimization problem, it can be transformed into a generalized moment problem, for which a hierarchy of semidefinite programming relaxations can be built. Global optimality comes at the expense of an increased dimension and, to overcome computational limitations concerning the number of involved variables, the structure of the problem has to be carefully addressed. A situation of practical interest is when the nonlinear model consists of a convolutive transform followed by a componentwise nonlinear rational saturation. We then propose to use a sparse relaxation able to deal with up to several hundreds of optimized variables. In contrast with the naive approach consisting of linearizing the model, our experiments show that the proposed approach offers good performance.
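The degradation model described above (convolution followed by a componentwise rational saturation) can be sketched as follows. The saturation s(t) = t/(1+|t|) is an assumed illustrative form — monotone, bounded, and piecewise rational — not necessarily the one used in the paper's experiments:

```python
import numpy as np

def saturate(t):
    """Componentwise saturation s(t) = t / (1 + |t|): monotone, bounded in
    (-1, 1), and piecewise rational (t/(1+t) for t >= 0, t/(1-t) for t < 0).
    Assumed illustrative form."""
    return t / (1.0 + np.abs(t))

def forward_model(x, h):
    """Convolutive transform followed by the rational saturation above."""
    return saturate(np.convolve(x, h, mode="same"))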

    Weight Identification Through Global Optimization in a New Hysteretic Neural Network Model

    Get PDF
    Unlike their biological counterparts, simple artificial neural networks are unable to retain information from their past state to influence their behavior. In this contribution, we propose to consider new nonlinear activation functions, whose outputs depend on both the current and past inputs through a hysteresis effect. This hysteresis model is developed in the framework of convolutional neural networks. We then show that, by choosing the nonlinearity in the vast class of rational functions, the identification of the weights amounts to solving a rational optimization problem. For the latter, recent methods are applicable that come with a global optimality guarantee, contrary to most optimization methods used in the neural network community. Finally, simulations show that such hysteresis nonlinear activation functions cannot be approximated by traditional ones and illustrate the effectiveness of our weight identification method.
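The key idea — an activation whose output depends on past as well as current inputs — can be illustrated with the simplest possible memory nonlinearity, a Schmitt-trigger-style relay. This toy is only meant to show what "activation with hysteresis" means; the paper's model is a rational function of current and past inputs, not this relay:

```python
def hysteresis_activation(inputs, lo=-0.5, hi=0.5):
    """Toy Schmitt-trigger hysteresis: the output keeps its previous value
    until the input crosses a threshold, so identical inputs can produce
    different outputs depending on history (illustration only)."""
    state, out = 0.0, []
    for u in inputs:
        if u > hi:
            state = 1.0
        elif u < lo:
            state = 0.0
        out.append(state)
    return out
```

Note that the input 0.0 maps to 1.0 or 0.0 depending on the preceding samples — precisely the memory effect that memoryless activations like ReLU cannot reproduce.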

    Exactly optimized rational models for solving signal processing problems

    No full text
    A wide class of nonconvex optimization problems is represented by rational optimization problems. The latter appear naturally in many areas such as signal processing or chemical engineering. However, finding the global optima of such problems is intricate. A recent approach, called Lasserre's hierarchy, provides a sequence of convex problems that has the theoretical guarantee to converge to the global optima. Nevertheless, this approach is computationally challenging due to the high dimensions of the convex relaxations. In this thesis, we tackle this challenge for various signal processing problems. First, we formulate the reconstruction of sparse signals as a rational optimization problem. We show that the latter has a structure that we can exploit in order to reduce the complexity of the associated relaxations. We thus solve several practical problems such as the reconstruction of chromatography signals. We also extend our method to the reconstruction of various types of signals corrupted by different noise models. In a second part, we study the convex relaxations generated by our problems, which take the form of high-dimensional semidefinite programming problems. We consider several algorithms, mainly based on proximal operators, to solve those high-dimensional problems efficiently. The last part of this thesis is dedicated to the link between polynomial optimization and symmetric tensor decomposition. Indeed, both can be seen as instances of the moment problem. We thereby propose a detection method as well as a decomposition algorithm for symmetric tensors based on the tools used in polynomial optimization. In parallel, we suggest a robust extraction method for polynomial optimization based on tensor decomposition algorithms. Those methods are illustrated on signal processing problems.

    Sparse signal reconstruction with a sign oracle

    Get PDF
    Sparse signal reconstruction is performed by minimizing the sum of a least-squares fit regularized with a piecewise rational approximation of the ℓ0 pseudo-norm. We show the benefit of an oracle that yields the sign of the signal when using a recent methodology for global polynomial or semi-algebraic minimization. The computational time and memory cost are both decreased.
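Why a sign oracle helps can be seen from a simple substitution: if the signs s of the signal are known, one can write x = s ⊙ z with z ≥ 0, so only nonnegative magnitudes remain to optimize. The sketch below illustrates this reduction with a plain projected-gradient least-squares solver — our own illustration, not the paper's global semi-algebraic method:

```python
import numpy as np

def reconstruct_with_sign_oracle(A, y, s, n_iter=1000):
    """Given a sign oracle s in {-1, +1}^n, substitute x = s * z with z >= 0:
    the signs are fixed and only magnitudes are optimized. Solved here by
    projected gradient on the least-squares fit (illustrative sketch)."""
    As = A * s                                  # flip column signs
    z = np.zeros(A.shape[1])
    step = 1.0 / (np.linalg.norm(As, 2) ** 2)   # 1 / Lipschitz constant
    for _ in range(n_iter):
        z = np.maximum(z - step * (As.T @ (As @ z - y)), 0.0)
    return s * z
```

The same substitution, applied inside the polynomial formulation, shrinks the feasible set and is one way the oracle can reduce time and memory.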

    Robust reconstruction with nonconvex subset constraints: a polynomial optimization approach

    Get PDF
    In this paper, we are interested in the recovery of an unknown signal corrupted by a linear operator, a nonlinear function, and additive Gaussian noise. In addition, some of the observations contain outliers. Many robust data-fit functions which alleviate sensitivity to outliers can be expressed as piecewise rational functions. Based on this fact, we reformulate the robust inverse problem as a rational optimization problem. The considered framework allows us to incorporate nonconvex constraints such as unions of subsets. The rational problem is then solved using recent optimization techniques which offer guarantees of global optimality. Finally, experimental results illustrate the validity of the recovered global solutions and the good quality of the reconstructed signals despite the presence of outliers.
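A standard example of a rational robust data-fit term is the Geman-McClure loss, whose influence saturates for large residuals so that outliers stop dominating the fit. This is one illustrative member of the family; the paper covers a broader class:

```python
def geman_mcclure(r):
    """Geman-McClure robust loss rho(r) = r^2 / (1 + r^2): behaves like r^2
    for small residuals but saturates toward 1 for outliers, and is a
    rational function, so it fits the paper's reformulation (illustrative)."""
    return r * r / (1.0 + r * r)
```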