8 research outputs found

    Exact algorithms for $L^1$-TV regularization of real-valued or circle-valued signals

    We consider $L^1$-TV regularization of univariate signals with values on the real line or on the unit circle. While the real data space leads to a convex optimization problem, the problem is non-convex for circle-valued data. In this paper, we derive exact algorithms for both data spaces. A key ingredient is the reduction of the infinite search spaces to a finite set of configurations, which can be scanned by the Viterbi algorithm. To reduce the computational complexity of the involved tabulations, we extend the technique of distance transforms to non-uniform grids and to the circular data space. In total, the proposed algorithms have complexity $\mathscr{O}(KN)$, where $N$ is the length of the signal and $K$ is the number of different values in the data set. In particular, the complexity is $\mathscr{O}(N)$ for quantized data. The circle-valued algorithm is the first exact algorithm for TV regularization with circle-valued data, and the real-valued algorithm is competitive with state-of-the-art methods for scalar data, assuming the data are quantized.
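    For intuition, the sketch below gives a minimal Viterbi-style dynamic program for the real-valued $L^1$-TV problem, restricting the solution values to the set of distinct data values. It omits the paper's distance-transform speedup, so it runs in $\mathscr{O}(NK^2)$ rather than $\mathscr{O}(KN)$; the function name and interface are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def l1_tv_dp(f, lam):
    """Naive O(N*K^2) dynamic program (Viterbi) for 1D L1-TV with real data,
    restricting u to the K distinct data values; the paper's distance-transform
    technique would reduce the inner minimization over predecessors to O(K)."""
    f = np.asarray(f, dtype=float)
    labels = np.unique(f)                       # K candidate values
    N, K = len(f), len(labels)
    cost = np.abs(labels - f[0])                # cost[j]: best energy ending in labels[j]
    back = np.zeros((N, K), dtype=int)          # backpointers for the Viterbi pass
    for n in range(1, N):
        # transition cost lam*|labels[i] - labels[j]| from predecessor i to label j
        trans = cost[:, None] + lam * np.abs(labels[:, None] - labels[None, :])
        back[n] = np.argmin(trans, axis=0)
        cost = trans[back[n], np.arange(K)] + np.abs(labels - f[n])
    # backtrack the optimal label sequence
    j = int(np.argmin(cost))
    u = np.empty(N)
    for n in range(N - 1, -1, -1):
        u[n] = labels[j]
        j = back[n, j]
    return u

# Example: u = l1_tv_dp([0.1, 0.2, 3.0, 0.15, 0.1], lam=0.5)
```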

    Fast Multiscale Functional Estimation in Optimal EMG Placement for Robotic Prosthesis Controllers

    Electromyography (EMG) signals play a significant role in decoding muscle contraction information for robotic hand prosthesis controllers. Widely applied decoders require a large number of EMG sensors, resulting in complicated computations and unsatisfactory predictions. Owing to the biomechanics of single degree-of-freedom human hand movements, only a few EMG signals are essential for accurate predictions. Recently, a novel predictor of hand movements adopted a multistage Sequential, Adaptive Functional Estimation (SAFE) method based on a historical Functional Linear Model (FLM) to select important EMG signals and provide precise predictions. However, SAFE repeatedly performs matrix-vector multiplications with a dense representation matrix of the integral operator of the FLM, which is computationally expensive. Noting that, with a properly chosen basis, the representation of the integral operator concentrates on a few bands of the basis, the goal of this study is to develop a fast Multiscale SAFE (MSAFE) method that reduces computational costs while preserving (or even improving) the accuracy of the original SAFE method. Specifically, a multiscale piecewise polynomial basis is adopted to discretize the integral operator of the FLM, resulting in an approximately sparse representation matrix, which is then truncated to a sparse one. This approach not only accelerates computation but also improves robustness against noise. When applied to real hand movement data, MSAFE saves 85%–90% of computing time compared with SAFE, while producing better sensor selection and comparable accuracy. In a simulation study, MSAFE shows stronger stability than SAFE in sensor selection and prediction accuracy in the presence of correlated noise.
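    As a rough illustration of the truncation idea, the sketch below thresholds the small entries of an approximately sparse operator matrix and stores the result in compressed sparse row form so that each matrix-vector product costs O(nnz) rather than O(n^2). The matrix, tolerance, and function name are placeholders; the multiscale piecewise polynomial basis construction itself is not reproduced here.

```python
import numpy as np
from scipy.sparse import csr_matrix

def truncate_operator(A, tol=1e-6):
    """Zero out entries of an (approximately sparse) operator representation
    that fall below a relative tolerance and return a CSR sparse matrix,
    mimicking the truncation step assumed in the MSAFE description."""
    A = np.asarray(A, dtype=float)
    mask = np.abs(A) >= tol * np.abs(A).max()
    return csr_matrix(A * mask)

# Hypothetical usage: A_dense would be the integral operator expressed in a
# multiscale piecewise polynomial basis (not constructed in this sketch).
# A_sparse = truncate_operator(A_dense, tol=1e-5)
# y = A_sparse @ x   # fast matrix-vector product inside the estimation loop
```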

    Sparsity Promoting Regularization for Effective Noise Suppression in SPECT Image Reconstruction

    The purpose of this research is to develop an advanced reconstruction method for low-count, and hence high-noise, Single-Photon Emission Computed Tomography (SPECT) image reconstruction. It consists of a novel reconstruction model that suppresses noise during reconstruction and an efficient algorithm to solve the model. A novel regularizer is introduced as a nonconvex denoising term based on the approximate sparsity of the image in a geometric tight frame transform domain. The deblurring term is based on the negative log-likelihood of the SPECT data model. To solve the resulting nonconvex optimization problem, a Preconditioned Fixed-point Proximity Algorithm (PFPA) is introduced. We prove that, under appropriate assumptions, PFPA converges to a local solution of the optimization problem at a global O(1/k) convergence rate. Substantial numerical results for simulation data are presented to demonstrate the superiority of the proposed method in denoising, artifact suppression, and reconstruction accuracy. We simulate noisy 2D SPECT data from two phantoms, hot Gaussian spheres on a random lumpy warm background and an anthropomorphic brain phantom, at high- and low-noise levels (64k and 90k counts, respectively), and reconstruct them using PFPA. We also perform limited comparative studies with selected competing state-of-the-art total variation (TV) and higher-order TV (HOTV) transform-based methods, as well as the widely used post-filtered maximum-likelihood expectation-maximization. We investigate the imaging performance of these methods using Contrast-to-Noise Ratio (CNR), Ensemble Variance Images (EVI), Background Ensemble Noise (BEN), Normalized Mean-Square Error (NMSE), and Channelized Hotelling Observer (CHO) detectability. Each of the competing methods is independently optimized for each metric. We establish that the proposed method outperforms the other approaches in all image quality metrics except NMSE, where it is matched by HOTV. The superiority of the proposed method is especially evident in the CHO detectability test results. We also perform a qualitative evaluation of the presence and severity of image artifacts, where the proposed method again performs better than the TV methods at suppressing staircase artifacts, although edge artifacts in high-contrast regions persist. We conclude that the proposed method may offer a powerful tool for detection tasks in high-noise SPECT imaging.
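    For orientation, the sketch below shows a generic preconditioned fixed-point proximity iteration of the proximal-gradient type with a diagonal preconditioner and an l1 stand-in for the sparsity term. It is not the paper's PFPA, which handles a nonconvex tight-frame regularizer and the SPECT Poisson likelihood; all names, parameters, and the nonnegativity step are assumptions for illustration only.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximity operator of t*||.||_1 (a convex stand-in for the paper's
    nonconvex sparsity term)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def preconditioned_prox_gradient(grad_g, x0, P_diag, lam, n_iter=100):
    """Generic iteration x_{k+1} = prox(x_k - P^{-1} grad g(x_k)) with a
    diagonal preconditioner P; a simplified sketch, not the paper's PFPA."""
    x = x0.copy()
    for _ in range(n_iter):
        x = soft_threshold(x - grad_g(x) / P_diag, lam / P_diag)
        x = np.maximum(x, 0.0)   # keep the reconstruction nonnegative (simple projection)
    return x
```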

    Implicit Fixed-point Proximity Framework for Optimization Problems and Its Applications

    A variety of optimization problems, especially in the field of image processing, are not differentiable in nature. The non-differentiability of the objective functions, together with the large dimension of the underlying images, makes minimizing the objective function theoretically challenging and numerically difficult. The fixed-point proximity framework that we systematically study in this dissertation provides a direct and unified methodology for finding solutions to those optimization problems. The framework approaches the models arising from applications directly, using various fixed point techniques as well as convex analysis tools such as the subdifferential and the proximity operator. With the notion of the proximity operator, we can convert these optimization problems into finding fixed points of nonlinear operators. Under the fixed-point proximity framework, such fixed point problems are often solved through iterative schemes in which each iteration can be computed in explicit form. We further explore this fixed point formulation and develop implicit iterative schemes for finding fixed points of the nonlinear operators associated with the underlying problems, with the goal of relaxing the restrictions that arise when deriving schemes for solving the fixed point equations. Theoretical analysis of convergence is provided for the implicit algorithms proposed under the framework. Numerical experiments on image reconstruction models demonstrate that the proposed implicit fixed-point proximity algorithms compare well with existing explicit fixed-point proximity algorithms in terms of computational time and solution accuracy.
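    To make the explicit/implicit distinction concrete, here is a toy sketch in which one proximal-gradient-type step is computed either explicitly (the operator is evaluated at the current iterate) or implicitly (the new iterate appears inside the operator and is resolved by a short inner fixed-point loop). The l1 regularizer, step sizes, and inner-loop count are illustrative assumptions, not the schemes developed in the dissertation.

```python
import numpy as np

def prox_l1(v, t):
    """Proximity operator of t*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def explicit_prox_step(x, grad_g, gamma, t):
    """Explicit step: y = prox_{t*l1}(x - gamma*grad_g(x))."""
    return prox_l1(x - gamma * grad_g(x), t)

def implicit_prox_step(x, grad_g, gamma, t, inner=10):
    """Implicit step: find y satisfying y = prox_{t*l1}(x - gamma*grad_g(y))
    by a short inner fixed-point sweep (convergence conditions not checked)."""
    y = x.copy()
    for _ in range(inner):
        y = prox_l1(x - gamma * grad_g(y), t)
    return y
```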