Optimization Algorithms for Structured Machine Learning and Image Processing Problems
Optimization algorithms are often the solution engine for machine learning and image processing techniques, but they can also become the bottleneck in applying these techniques if they are unable to cope with the size of the data. With the rapid advancement of modern technology, data of unprecedented size have become increasingly available, and there is a growing demand to process and interpret these data. Traditional optimization methods, such as the interior-point method, can solve a wide array of problems arising from the machine learning domain, but it is also this generality that often prevents them from dealing with large data efficiently. Hence, specialized algorithms that can readily take advantage of the problem structure are highly desirable and of immediate practical interest. This thesis focuses on developing efficient optimization algorithms for machine learning and image processing problems of diverse types, including supervised learning (e.g., the group lasso), unsupervised learning (e.g., robust tensor decompositions), and total-variation image denoising. These algorithms are of wide interest to the optimization, machine learning, and image processing communities.

Specifically, (i) we present two algorithms to solve the Group Lasso problem. First, we propose a general version of the Block Coordinate Descent (BCD) algorithm for the Group Lasso that employs an efficient approach for optimizing each subproblem exactly. We show that it exhibits excellent performance when the groups are of moderate size. For groups of large size, we propose an extension of the proximal gradient algorithm based on variable step-lengths that can be viewed as a simplified version of BCD. By combining the two approaches we obtain an implementation that is very competitive and often outperforms other state-of-the-art approaches for this problem. We show how these methods fit into the globally convergent general block coordinate gradient descent framework of (Tseng and Yun, 2009), and that the proposed approach is more efficient in practice than the one implemented there. In addition, we apply our algorithms to the Multiple Measurement Vector (MMV) recovery problem, which can be viewed as a special case of the Group Lasso problem, and compare their performance to other methods in this particular instance; (ii) we further investigate sparse linear models with two commonly adopted general sparsity-inducing regularization terms, the overlapping Group Lasso penalty ($\ell_1/\ell_2$-norm) and the $\ell_1/\ell_\infty$-norm. We propose a unified framework based on the augmented Lagrangian method, under which problems with both types of regularization and their variants can be efficiently solved. As one of the core building blocks of this framework, we develop new algorithms using a partial-linearization/splitting technique and prove that the accelerated versions of these algorithms require $O(1/\sqrt{\epsilon})$ iterations to obtain an $\epsilon$-optimal solution. We compare the performance of these algorithms against that of the alternating direction augmented Lagrangian and FISTA methods on a collection of data sets and apply them to two real-world problems to compare the relative merits of the two norms; (iii) we study the problem of robust low-rank tensor recovery in a convex optimization framework, drawing upon recent advances in robust Principal Component Analysis and tensor completion.
We propose tailored optimization algorithms with global convergence guarantees for solving both the constrained and the Lagrangian formulations of the problem. These algorithms are based on the highly efficient alternating direction augmented Lagrangian and accelerated proximal gradient methods. We also propose a nonconvex model that can often improve the recovery results of the convex models. We investigate the empirical recoverability properties of the convex and nonconvex formulations and compare the computational performance of the algorithms on simulated data. We demonstrate through a number of real applications the practical effectiveness of this convex optimization framework for robust low-rank tensor recovery; (iv) we consider the image denoising problem using total variation regularization. This problem is computationally challenging to solve due to the non-differentiability and non-linearity of the regularization term. We propose a new alternating direction augmented Lagrangian method, involving subproblems that can be solved efficiently and exactly. The global convergence of the new algorithm is established for the anisotropic total variation model. We compare our method with the split Bregman method and demonstrate its superior computational performance on a set of standard test images.
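To make the Group Lasso machinery in point (i) concrete, the following is a minimal NumPy sketch of a fixed step-length proximal gradient loop built around the blockwise soft-thresholding operator that such methods apply to each group. The function names, the fixed step size, and the toy data are illustrative assumptions; the thesis's actual algorithms use variable step-lengths and an exact BCD subproblem solver, which are not reproduced here.

```python
import numpy as np

def group_soft_threshold(v, tau):
    """Proximal operator of tau * ||v||_2: shrink the whole block toward zero."""
    norm = np.linalg.norm(v)
    if norm <= tau:
        return np.zeros_like(v)
    return (1.0 - tau / norm) * v

def group_lasso_prox_grad(A, b, groups, lam, step, n_iter=500):
    """Simple fixed step-length proximal gradient loop for
    min_x 0.5 * ||A x - b||^2 + lam * sum_g ||x_g||_2  (non-overlapping groups)."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)              # gradient of the smooth least-squares term
        z = x - step * grad                   # gradient step
        for g in groups:                      # blockwise proximal (shrinkage) step
            x[g] = group_soft_threshold(z[g], step * lam)
    return x

# Toy usage: 3 groups of 2 coordinates each (illustrative only).
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 6))
b = rng.standard_normal(20)
groups = [np.arange(0, 2), np.arange(2, 4), np.arange(4, 6)]
x_hat = group_lasso_prox_grad(A, b, groups, lam=1.0,
                              step=1.0 / np.linalg.norm(A, 2) ** 2)
```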
A rank-adaptive robust integrator for dynamical low-rank approximation
A rank-adaptive integrator for the dynamical low-rank approximation of matrix and tensor differential equations is presented. The fixed-rank integrator recently proposed by two of the authors is extended to allow for an adaptive choice of the rank, using subspaces that are generated by the integrator itself. The integrator first updates the evolving bases and then does a Galerkin step in the subspace generated by both the new and old bases, which is followed by rank truncation to a given tolerance. It is shown that the adaptive low-rank integrator retains the exactness, robustness and symmetry-preserving properties of the previously proposed fixed-rank integrator. Beyond that, up to the truncation tolerance, the rank-adaptive integrator preserves the norm when the differential equation does, it preserves the energy for Schrödinger equations and Hamiltonian systems, and it preserves the monotonic decrease of the functional in gradient flows. Numerical experiments illustrate the behaviour of the rank-adaptive integrator.
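As a rough illustration of the structure described above (basis updates, a Galerkin step in the augmented subspace, then truncation to a tolerance), the NumPy sketch below performs one step for a matrix differential equation dY/dt = F(Y) with Y approximated by U S V^T. The forward-Euler substeps and the tail-based truncation rule are simplifying assumptions made here for brevity, not the integrator analysed in the paper.

```python
import numpy as np

def rank_adaptive_step(U, S, V, F, h, tol):
    """One step of a rank-adaptive basis-update-and-Galerkin integrator for
    dY/dt = F(Y), with Y ~ U @ S @ V.T (U: n x r, S: r x r, V: m x r).
    Forward-Euler substeps are used purely to keep the sketch short."""
    Y = U @ S @ V.T
    # K-step: propagate the left factor, then augment with the old left basis.
    K = U @ S + h * (F(Y) @ V)
    U_hat, _ = np.linalg.qr(np.hstack([K, U]))        # up to 2r columns
    # L-step: propagate the right factor, then augment with the old right basis.
    L = V @ S.T + h * (F(Y).T @ U)
    V_hat, _ = np.linalg.qr(np.hstack([L, V]))
    # Galerkin step for the small coefficient matrix in the augmented subspace.
    S0 = (U_hat.T @ U) @ S @ (V.T @ V_hat)
    S1 = S0 + h * (U_hat.T @ F(U_hat @ S0 @ V_hat.T) @ V_hat)
    # Rank truncation: keep the smallest rank whose discarded tail stays below tol.
    P, sig, Qt = np.linalg.svd(S1)
    tail = np.sqrt(np.cumsum(sig[::-1] ** 2))[::-1]   # tail[k] = ||sig[k:]||_2
    r_new = max(1, int(np.sum(tail > tol)))
    return U_hat @ P[:, :r_new], np.diag(sig[:r_new]), V_hat @ Qt[:r_new].T
```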
Convex Optimization Algorithms and Recovery Theories for Sparse Models in Machine Learning
Sparse modeling is a rapidly developing topic that arises frequently in areas such as machine learning, data analysis, and signal processing. One important application of sparse modeling is the recovery of a high-dimensional object from a relatively small number of noisy observations, which is the main focus of Compressed Sensing, Matrix Completion (MC), and Robust Principal Component Analysis (RPCA). However, the power of sparse models is hampered by the unprecedented size of the data that has become more and more available in practice. Therefore, it has become increasingly important to better harness convex optimization techniques to take advantage of any underlying "sparsity" structure in problems of extremely large size.
This thesis focuses on two main aspects of sparse modeling. From the modeling perspective, it extends convex programming formulations for matrix completion and robust principal component analysis to the case of tensors, and derives theoretical guarantees for exact tensor recovery under a framework of strongly convex programming. On the optimization side, an efficient first-order algorithm with the optimal convergence rate is proposed and studied for a wide range of linearly constrained sparse modeling problems.
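As a concrete instance of the convex machinery this abstract refers to, the sketch below shows the singular value thresholding operator (the proximal map of the nuclear norm) inside a plain proximal gradient loop for a matrix completion model. The regularization weight, step size, and iteration count are illustrative assumptions; the thesis's accelerated, linearly constrained algorithms and their tensor extensions are not reproduced here.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the proximal operator of tau * ||.||_*
    (the nuclear norm that convex matrix-completion/RPCA models minimize)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def matrix_completion(M_obs, mask, tau=5.0, step=1.0, n_iter=300):
    """A plain proximal-gradient loop for
    min_X 0.5 * ||mask * (X - M_obs)||_F^2 + tau * ||X||_*  (illustrative only)."""
    X = np.zeros_like(M_obs, dtype=float)
    for _ in range(n_iter):
        grad = mask * (X - M_obs)             # gradient of the fit term on observed entries
        X = svt(X - step * grad, step * tau)  # gradient step, then nuclear-norm prox
    return X
```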
Clutter Suppression in Ultrasound: Performance Evaluation of Low-Rank and Sparse Matrix Decomposition Methods
Vessel diseases are often accompanied by abnormalities related to vascular shape and size. Therefore, a clear visualization of vasculature is of high clinical significance. Ultrasound Color Flow Imaging (CFI) is one of the prominent techniques for flow visualization. However, clutter signals originating from slow-moving tissue are one of the main obstacles to obtaining a clear view of the vascular network. Enhancing the vasculature by suppressing clutter is an essential step for many applications of ultrasound CFI. In this thesis, we focus on a state-of-the-art framework for ultrasound clutter suppression called Decomposition into Low-rank and Sparse Matrices (DLSM).
Currently, ultrasound clutter suppression is often performed by Singular Value Decomposition (SVD) of the data matrix, a branch of eigen-based filtering. This approach exhibits two well-known limitations. First, the performance of SVD is sensitive to the proper manual selection of the ranks corresponding to the clutter and blood subspaces. Second, SVD is prone to failure in the presence of large random noise in the data set. A potential solution to these issues is the DLSM framework. SVD is also one of the most widely used tools for solving the minimization problems that arise under the DLSM framework. Because a full SVD is computationally expensive, many other DLSM algorithms avoid it and rely on approximate SVDs or SVD-free ideas, which can offer better performance, higher robustness, and lower computing time. In practice, these models separate blood from clutter based on the assumption that steady clutter forms a low-rank structure while the moving blood component is sparse.
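A minimal sketch of the SVD clutter filter described above, assuming the slow-time data have been stacked into a Casorati matrix (one column per frame): the leading singular components, selected manually through a clutter_rank parameter, are removed as tissue clutter, which is precisely the rank-selection sensitivity noted above. Function and argument names are illustrative.

```python
import numpy as np

def svd_clutter_filter(casorati, clutter_rank, noise_rank=0):
    """SVD clutter filter on a Casorati matrix (pixels x slow-time frames):
    discard the leading singular components (slow-moving tissue clutter) and,
    optionally, the trailing ones (noise); keep the middle 'blood' subspace."""
    U, s, Vt = np.linalg.svd(casorati, full_matrices=False)
    keep = np.ones_like(s)
    keep[:clutter_rank] = 0.0                     # manually chosen clutter subspace
    if noise_rank > 0:
        keep[len(s) - noise_rank:] = 0.0          # optional noise tail
    return (U * (s * keep)) @ Vt                  # reconstructed blood (flow) signal
```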
In this thesis, we investigate the feasibility of applying low-rank and sparse decomposition schemes, originally developed in the field of computer vision, to ultrasound clutter suppression. Since ultrasound images have different texture and statistical properties than images in computer vision, it is of high importance to evaluate how these methods translate to ultrasound CFI. We conduct this evaluation study by adapting 106 DLSM algorithms and validating them against simulation, phantom, and in vivo rat data sets.
The advantage of the simulation and phantom experiments is that the ground-truth vessel map is known, while the advantage of the in vivo data set is that it enables us to test the algorithms in a realistic setting. Two conventional quality metrics, Signal-to-Noise Ratio (SNR) and Contrast-to-Noise Ratio (CNR), are used for performance evaluation. In addition, the computation times required by the different algorithms to generate the clutter-suppressed images are reported. Our extensive analysis shows that the DLSM framework can be successfully applied to ultrasound clutter suppression.
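To illustrate the DLSM assumption in code, here is a textbook Principal Component Pursuit model solved by a basic ADMM/augmented Lagrangian loop, splitting a Casorati matrix into a low-rank (clutter) part and a sparse (blood) part. The default lam and mu choices follow common RPCA heuristics and are assumptions for illustration; this is a generic sketch, not one of the 106 algorithms evaluated in the thesis.

```python
import numpy as np

def soft(X, tau):
    """Entrywise soft-thresholding: the proximal operator of tau * ||.||_1."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(M, tau):
    """Singular value thresholding: the proximal operator of tau * ||.||_*."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def pcp_admm(M, lam=None, mu=None, n_iter=200):
    """Principal Component Pursuit, min ||L||_* + lam * ||S||_1 s.t. L + S = M,
    solved by a basic ADMM. For a Casorati matrix M, L models the low-rank
    tissue clutter and S the sparse blood signal."""
    m, n = M.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = mu if mu is not None else 0.25 * m * n / (np.abs(M).sum() + 1e-12)
    L = np.zeros_like(M, dtype=float)
    S = np.zeros_like(M, dtype=float)
    Y = np.zeros_like(M, dtype=float)               # dual variable for L + S = M
    for _ in range(n_iter):
        L = svt(M - S + Y / mu, 1.0 / mu)           # nuclear-norm prox -> low-rank part
        S = soft(M - L + Y / mu, lam / mu)          # l1 prox -> sparse part
        Y = Y + mu * (M - L - S)                    # dual ascent on the constraint
    return L, S
```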
Low-Rank Iterative Solvers for Large-Scale Stochastic Galerkin Linear Systems
Otto-von-Guericke-Universität Magdeburg, Faculty of Mathematics, dissertation, 2016, by Dr. rer. pol. Akwum Agwu Onwunta. Bibliography: pages 135-14