
    Meta-learning for Multi-variable Non-convex Optimization Problems: Iterating Non-optimums Makes Optimum Possible

    In this paper, we aim to address the problem of solving a non-convex optimization problem over an intersection of multiple variable sets. This kind of problem is typically solved with an alternating minimization (AM) strategy, which splits the overall problem into a set of sub-problems, one per variable, and then iteratively minimizes each sub-problem using a fixed updating rule. However, due to the intrinsic non-convexity of the overall problem, the optimization can be trapped in a bad local minimum even when each sub-problem is globally optimized at every iteration. To tackle this problem, we propose a meta-learning based Global Scope Optimization (GSO) method. It adaptively generates optimizers for the sub-problems via meta-learners and continually updates these meta-learners with respect to the global loss of the overall problem, so that each sub-problem is optimized specifically toward minimizing the global loss. We evaluate the proposed model on a number of simulations, including a bilinear inverse problem (matrix completion) and a non-linear problem (Gaussian mixture models). The experimental results show that our proposed approach outperforms AM-based methods in standard settings and achieves effective optimization in some challenging cases where other methods typically fail.

    Comment: 15 pages, 8 figures
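
    As a point of reference for the AM baseline the abstract describes, here is a minimal sketch of alternating minimization for low-rank matrix completion, using ridge-regularized least-squares updates for each factor with the other held fixed. The function name, regularization strength, and random initialization are illustrative assumptions; this is the generic fixed-update-rule baseline, not the authors' GSO implementation.

```python
import numpy as np

def am_matrix_completion(M, mask, rank, iters=100, lam=1e-3):
    """Baseline alternating minimization for low-rank matrix completion.

    M    : (m, n) array; values at unobserved entries are ignored
    mask : (m, n) boolean array, True where M is observed

    Alternately solves the ridge-regularized least-squares sub-problem
    for each factor of X ~= U @ V.T while the other is held fixed --
    the fixed updating rule the abstract contrasts with GSO.
    """
    m, n = M.shape
    rng = np.random.default_rng(0)
    U = rng.standard_normal((m, rank))
    V = rng.standard_normal((n, rank))
    reg = lam * np.eye(rank)
    for _ in range(iters):
        for i in range(m):            # row-wise update of U
            obs = mask[i]             # observed columns in row i
            Vo = V[obs]
            U[i] = np.linalg.solve(Vo.T @ Vo + reg, Vo.T @ M[i, obs])
        for j in range(n):            # symmetric update of V
            obs = mask[:, j]          # observed rows in column j
            Uo = U[obs]
            V[j] = np.linalg.solve(Uo.T @ Uo + reg, Uo.T @ M[obs, j])
    return U, V
```

    Because each sub-problem here is a convex least-squares fit, every inner update can be solved globally, yet the overall iteration can still stall in a poor local minimum of the joint non-convex objective, which is the failure mode the paper targets.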

    Robust alternative minimization for matrix completion

    Recently, much attention has been drawn to the problem of matrix completion, which arises in a number of fields, including computer vision, pattern recognition, sensor networks, and recommendation systems. This paper proposes a novel algorithm, named robust alternative minimization (RAM), which completes an unknown matrix under a low-rank constraint. The proposed RAM algorithm can effectively reduce the relative reconstruction error of the recovered matrix. Compared with other existing methods, its objective function is numerically easier to minimize, and it is more stable for large-scale matrix completion. It is robust and efficient for low-rank matrix completion, and the convergence of the RAM algorithm is also established. Numerical results show that both the recovery accuracy and running time of the RAM algorithm are competitive with other reported methods. Moreover, applications of the RAM algorithm to low-rank image recovery demonstrate that it achieves satisfactory performance.
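
    The abstract does not give RAM's update rules, so the sketch below only reproduces the kind of numerical check it reports: completing a synthetic low-rank matrix from partial observations and measuring the relative reconstruction error ||M_hat - M||_F / ||M||_F. It uses the generic AM routine from the previous block as a stand-in solver; the problem sizes and sampling rate are arbitrary choices for illustration.

```python
import numpy as np

# Assumes am_matrix_completion from the sketch above is in scope.
# Build a random rank-5 ground-truth matrix and observe 40% of entries.
rng = np.random.default_rng(1)
m, n, r = 200, 150, 5
M = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
mask = rng.random((m, n)) < 0.4

# Recover the matrix and report relative reconstruction error,
# the accuracy metric the abstract refers to.
U, V = am_matrix_completion(M, mask, rank=r, iters=50)
M_hat = U @ V.T
rel_err = np.linalg.norm(M_hat - M) / np.linalg.norm(M)
print(f"relative reconstruction error: {rel_err:.3e}")
```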