
    Adaptive Alternating Minimization Algorithms

    The classical alternating minimization (or projection) algorithm has been successful in the context of solving optimization problems over two variables. The iterative nature and simplicity of the algorithm have led to its application in many areas such as signal processing, information theory, control, and finance. A general set of sufficient conditions for the convergence and correctness of the algorithm is well known when the underlying problem parameters are fixed. In many practical situations, however, the underlying problem parameters change over time, and the use of an adaptive algorithm is more appropriate. In this paper, we study such an adaptive version of the alternating minimization algorithm. As a main result of this paper, we provide a general set of sufficient conditions for the convergence and correctness of the adaptive algorithm. Perhaps surprisingly, these conditions seem to be the minimal ones one would expect in such an adaptive setting. We present applications of our results to adaptive decomposition of mixtures, adaptive log-optimal portfolio selection, and adaptive filter design. Comment: 12 pages, to appear in IEEE Transactions on Information Theory.
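    The non-adaptive algorithm referenced above can be sketched in a few lines. The following is a minimal illustration of generic two-block alternating minimization with closed-form subproblem solvers; it is not the paper's adaptive scheme, and the toy objective, stopping rule, and tolerances are assumptions made for illustration.

import numpy as np

def alternating_minimization(f, argmin_x, argmin_y, x0, y0, tol=1e-10, max_iter=1000):
    """Alternately minimize f over x (with y fixed) and over y (with x fixed)."""
    x, y = x0, y0
    prev = f(x, y)
    for _ in range(max_iter):
        x = argmin_x(y)       # x_{k+1} = argmin_x f(x, y_k)
        y = argmin_y(x)       # y_{k+1} = argmin_y f(x_{k+1}, y)
        cur = f(x, y)
        if prev - cur < tol:  # the objective decreases monotonically; stop when progress stalls
            break
        prev = cur
    return x, y

# Toy example with closed-form block updates:
# f(x, y) = ||x - y||^2 + ||x - a||^2 + ||y - b||^2
a, b = np.array([1.0, 2.0]), np.array([3.0, -1.0])
f = lambda x, y: np.sum((x - y) ** 2) + np.sum((x - a) ** 2) + np.sum((y - b) ** 2)
argmin_x = lambda y: (y + a) / 2.0  # minimizer of f in x for fixed y
argmin_y = lambda x: (x + b) / 2.0  # minimizer of f in y for fixed x
x_opt, y_opt = alternating_minimization(f, argmin_x, argmin_y, np.zeros(2), np.zeros(2))
print(x_opt, y_opt)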

    Alternating minimization algorithms for graph regularized tensor completion

    We consider a low-rank tensor completion (LRTC) problem, which aims to recover a tensor from incomplete observations. LRTC plays an important role in many applications such as signal processing, computer vision, machine learning, and neuroscience. A widely used approach is to combine the tensor completion data fitting term with a regularizer based on a convex relaxation of the multilinear ranks of the tensor. For the data fitting function, we model the tensor variable using the Canonical Polyadic (CP) decomposition, and for the low-rank-promoting regularization function, we consider a graph Laplacian-based function that exploits correlations between the rows of the matrix unfoldings. For solving our LRTC model, we propose an efficient alternating minimization algorithm. Furthermore, based on the Kurdyka-Łojasiewicz property, we show that the sequence generated by the proposed algorithm globally converges to a critical point of the objective function. In addition, an alternating direction method of multipliers (ADMM) algorithm is developed for the LRTC model. Extensive numerical experiments on synthetic and real data indicate that the proposed algorithms are effective and efficient.
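    A hedged sketch of the kind of update such a model suggests is shown below: alternating updates of the CP factors of a 3-way tensor under a masked data-fitting term with a graph Laplacian penalty on each factor. The gradient-step subproblem solvers, stepsize, rank, and regularization weight are illustrative assumptions, not the exact algorithm or parameters of the paper.

import numpy as np

def khatri_rao(B, C):
    """Column-wise Khatri-Rao product of B (J x R) and C (K x R) -> (J*K x R)."""
    R = B.shape[1]
    return np.einsum('jr,kr->jkr', B, C).reshape(-1, R)

def cp_reconstruct(A, B, C):
    """Rebuild the I x J x K tensor from CP factors."""
    return np.einsum('ir,jr,kr->ijk', A, B, C)

def graph_reg_cp_completion(T, mask, L, rank=5, lam=0.1, step=1e-3, iters=200):
    """Alternating gradient updates on the CP factors A, B, C.

    T    : observed tensor (zeros where unobserved), shape (I, J, K)
    mask : boolean tensor of observed entries
    L    : list of symmetric graph Laplacians, one per mode (I x I, J x J, K x K)
    Objective: 0.5*||mask*(cp(A,B,C) - T)||_F^2 + lam*sum_n tr(U_n^T L_n U_n)
    """
    I, J, K = T.shape
    rng = np.random.default_rng(0)
    A, B, C = (rng.standard_normal((n, rank)) * 0.1 for n in (I, J, K))
    for _ in range(iters):
        R = mask * (cp_reconstruct(A, B, C) - T)      # masked residual
        # mode-1 gradient: R_(1) (C (.) B) + 2*lam*L_1*A
        A -= step * (R.reshape(I, -1) @ khatri_rao(B, C) + 2 * lam * L[0] @ A)
        R = mask * (cp_reconstruct(A, B, C) - T)
        # mode-2 gradient
        B -= step * (R.transpose(1, 0, 2).reshape(J, -1) @ khatri_rao(A, C) + 2 * lam * L[1] @ B)
        R = mask * (cp_reconstruct(A, B, C) - T)
        # mode-3 gradient
        C -= step * (R.transpose(2, 0, 1).reshape(K, -1) @ khatri_rao(A, B) + 2 * lam * L[2] @ C)
    return A, B, C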

    Alternating Minimization Algorithms for Dual-Energy X-Ray CT Imaging and Information Optimization

    This dissertation contributes toward solutions to two distinct problems linked through the use of common information optimization methods. The first problem is the X-ray computed tomography (CT) imaging problem and the second is the computation of Berger-Tung bounds for the lossy distributed source coding problem. The first problem, discussed through most of the dissertation, is motivated by applications in radiation oncology, including dose prediction in proton therapy and brachytherapy. In proton therapy dose prediction, the stopping power calculation is based on estimates of the electron density and mean excitation energy. In turn, the estimates of the linear attenuation coefficients or the component images from dual-energy CT image reconstruction are used to estimate the electron density and mean excitation energy. Therefore, the quantitative accuracy of the estimates of the linear attenuation coefficients or the component images affects the accuracy of proton therapy dose prediction. In brachytherapy, photons with low energies (approximately 20 keV) are often used for internal treatment. Those photons are attenuated through their interactions with tissues. The dose distribution in the tissue obeys an exponential decay with the linear attenuation coefficient as the parameter in the exponential. Therefore, the accuracy of the estimates of the linear attenuation coefficients at low energy levels has a strong influence on dose prediction in brachytherapy. Numerical studies of the regularized dual-energy alternating minimization (DE-AM) algorithm with different regularization parameters were performed to find ranges of the parameters that can achieve the desired image quality in terms of estimation accuracy and image smoothness. The DE-AM algorithm is an extension of the AM algorithm proposed by O'Sullivan and Benac. Both simulated-data and real-data reconstructions, as well as system bias and variance experiments, were carried out to demonstrate that the DE-AM algorithm is incapable of reconstructing a high-density material accurately with a limited number of iterations (1000 iterations with 33 ordered subsets). This slow convergence phenomenon was then studied via a toy, or scaled-down, problem, indicating a highly ridged objective function. Motivated by these studies demonstrating the slow convergence of the DE-AM algorithm, a new algorithm, the linear integral alternating minimization (LIAM) algorithm, was developed, which estimates the linear integrals of the component images first; the component images can then be recovered by an expectation-maximization (EM) algorithm or linear regression methods. Both simulated and real data were reconstructed by the LIAM algorithm while varying the regularization parameters to ascertain good choices (δ = 500, λ = 50 for the I0 = 100000 scenario). The results from the DE-AM algorithm applied to the same data were used for comparison. While using only 1/10 of the computation time of the DE-AM algorithm, the LIAM algorithm achieves at least a two-fold improvement in the relative absolute error of the component images in the presence of Poisson noise. This work also explored the reconstruction of image differences from tomographic Poisson data. An alternating minimization algorithm was developed, and a monotonic decrease in the objective function was achieved at each iteration.
Simulations with random images and tomographic data were presented to demonstrate that the algorithm can recover the difference images with 100% accuracy in both the number and identity of the pixels that differ. An extension to 4D CT with simulated tomographic data was also presented, and an approach to 4D PET was described. Different approaches for X-ray adaptive sensing were also proposed, and reconstructions of simulated data were computed to test these approaches. Early simulation results show improved image reconstruction performance in terms of normalized L2-norm error compared to a non-adaptive sensing method. For the second problem, an optimization and computational approach was described for characterizing the inner and outer bounds for the achievable rate regions for distributed source coding, known as the Berger-Tung inner and outer bounds. Several two-variable examples were presented to demonstrate the computational capability of the algorithm. For each problem considered that has a sum of distortions on the encoded variables, the inner and outer bound regions coincided. For a problem defined by Wagner and Anantharam with a single joint distortion for the two variables, a gap between the inner and outer bounds was observed in our results. These boundary regions can motivate hypothesized optimal distributions, which can be tested against the first-order necessary conditions for the optimal distributions.
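    To make the LIAM idea more concrete, the fragment below sketches only the second stage implied by the description above: recovering per-ray component line integrals from estimated attenuation line integrals at two energies via a least-squares solve. The attenuation matrix and all numbers are hypothetical placeholders; the dissertation's actual EM and regression procedures are not reproduced here.

import numpy as np

# mu[e, c]: attenuation per unit component amount at energy e for basis component c
# (assumed illustrative values, not measured coefficients)
mu = np.array([[0.40, 0.25],    # low-energy row
               [0.22, 0.18]])   # high-energy row

def decompose_line_integrals(ell):
    """ell: (n_rays, 2) estimated attenuation line integrals at the two energies.
    Returns (n_rays, 2) component line integrals p solving mu @ p ~= ell per ray."""
    # lstsq solves the shared (possibly ill-conditioned) 2x2 system for all rays at once
    p, *_ = np.linalg.lstsq(mu, ell.T, rcond=None)
    return p.T

rays = np.array([[1.10, 0.70],
                 [0.90, 0.55]])
print(decompose_line_integrals(rays))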

    On the convergence of adaptive first order methods: proximal gradient and alternating minimization algorithms

    Building upon recent work on linesearch-free adaptive proximal gradient methods, this paper proposes AdaPG^{π,r}, a framework that unifies and extends existing results by providing larger stepsize policies and improved lower bounds. Different choices of the parameters π and r are discussed, and the efficacy of the resulting methods is demonstrated through numerical simulations. To better understand the underlying theory, the convergence of the framework is established in a more general setting that allows for time-varying parameters. Finally, an adaptive alternating minimization algorithm is presented by exploring the dual setting. This algorithm not only incorporates additional adaptivity, but also expands its applicability beyond standard strongly convex settings.
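    The sketch below illustrates the general shape of a linesearch-free adaptive proximal gradient iteration of the kind this line of work studies: stepsizes are derived from a local curvature estimate rather than a backtracking linesearch. The specific stepsize rule, constants, and the LASSO example are illustrative assumptions, not the paper's AdaPG^{π,r} policy or its lower bounds.

import numpy as np

def adaptive_prox_grad(grad_f, prox_g, x0, gamma0=1e-2, iters=500):
    """Minimize f(x) + g(x) with adaptively chosen stepsizes gamma_k."""
    x_prev, g_prev = x0, grad_f(x0)
    gamma = gamma0
    x = prox_g(x_prev - gamma * g_prev, gamma)      # first proximal-gradient step
    for _ in range(iters):
        g = grad_f(x)
        # local curvature estimate L_k ~ ||grad f(x_k) - grad f(x_{k-1})|| / ||x_k - x_{k-1}||
        dx, dg = x - x_prev, g - g_prev
        L = np.linalg.norm(dg) / max(np.linalg.norm(dx), 1e-12)
        # let the stepsize grow slowly, but never exceed the curvature-based bound
        gamma = min(np.sqrt(1.5) * gamma, 1.0 / (2.0 * L + 1e-12))
        x_prev, g_prev = x, g
        x = prox_g(x - gamma * g, gamma)            # proximal-gradient step with new stepsize
    return x

# Example: LASSO, f(x) = 0.5*||Ax - b||^2, g(x) = 0.1*||x||_1
rng = np.random.default_rng(1)
A, b = rng.standard_normal((40, 20)), rng.standard_normal(40)
grad_f = lambda x: A.T @ (A @ x - b)
soft = lambda z, t: np.sign(z) * np.maximum(np.abs(z) - t, 0.0)   # soft-thresholding
prox_g = lambda z, gamma: soft(z, 0.1 * gamma)                    # prox of gamma*g
x_star = adaptive_prox_grad(grad_f, prox_g, np.zeros(20))
print(x_star)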