
    Accelerated Proximal Algorithm for Finding the Dantzig Selector and Source Separation Using Dictionary Learning

    In most applications, signals acquired from different sensors are composite and corrupted by noise. In the presence of noise, separating a composite signal into its components without losing information is challenging, and it becomes more difficult when only a few samples of the noisy, undersampled composite signal are available. In this paper, we aim to find the Dantzig selector with overcomplete dictionaries using an Accelerated Proximal Gradient Algorithm (APGA) for recovery and separation of undersampled composite signals. We successfully diagnose leukemia using our model and compare it with the Alternating Direction Method of Multipliers (ADMM). As a test case, we also recover an Electrocardiogram (ECG) signal with high accuracy from its noisy version using this model, with a Proximity Operator based Algorithm (POA) for comparison. With lower computational complexity than ADMM and POA, APGA shows good clustering capability, as demonstrated by the leukemia diagnosis.
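The following is a minimal sketch of an accelerated proximal gradient iteration in the FISTA style, applied to an l1-regularized least-squares surrogate with an overcomplete dictionary. It is meant only to make the algorithmic ingredient concrete; the names D, y, and lam are illustrative, and this is not the paper's exact Dantzig-selector formulation.

```python
# Sketch (not the paper's exact formulation): accelerated proximal gradient
# (FISTA-style) for  min_x 0.5*||D x - y||_2^2 + lam*||x||_1 , where D is an
# overcomplete dictionary. Names D, y, lam are placeholders.
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 (elementwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def accelerated_proximal_gradient(D, y, lam, n_iter=500):
    n = D.shape[1]
    L = np.linalg.norm(D, 2) ** 2            # Lipschitz constant of the smooth gradient
    x = np.zeros(n)
    z = x.copy()
    t = 1.0
    for _ in range(n_iter):
        grad = D.T @ (D @ z - y)              # gradient of the smooth term at z
        x_new = soft_threshold(z - grad / L, lam / L)
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        z = x_new + ((t - 1.0) / t_new) * (x_new - x)   # Nesterov momentum step
        x, t = x_new, t_new
    return x
```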

    Comparison of echo state network output layer classification methods on noisy data

    Echo state networks are a recently developed type of recurrent neural network in which the internal layer is fixed with random weights and only the output layer is trained on specific data. They are increasingly being used to process spatiotemporal data in real-world settings, including speech recognition, event detection, and robot control. A strength of echo state networks is the simple method used to train the output layer, typically a collection of linear readout weights found with a least squares approach. Although this method is straightforward to train and has a low computational cost, it may not yield acceptable accuracy on noisy data. This study compares the performance of three echo state network output layer methods for classification on noisy data: trained linear weights, sparse trained linear weights, and trained low-rank approximations of reservoir states. The methods are investigated experimentally on both synthetic and natural datasets. The experiments suggest that using regularized least squares to train linear output weights is superior on data with low noise, but that the low-rank approximations may significantly improve accuracy on datasets contaminated with higher noise levels. Comment: 8 pages. International Joint Conference on Neural Networks (IJCNN 2017).
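As a rough illustration of the first of the three compared methods, the sketch below drives a fixed random reservoir and trains a linear readout with ridge-regularized least squares. The reservoir construction, shapes, and parameter values are assumptions for illustration only, not the authors' experimental setup.

```python
# Minimal sketch, assuming a simple tanh reservoir: echo state network with a
# linear readout trained by ridge-regularized least squares. All names and
# hyperparameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def run_reservoir(inputs, n_reservoir=200, spectral_radius=0.9):
    """Drive a fixed random reservoir with the input sequence; return all states."""
    n_in = inputs.shape[1]
    W_in = rng.uniform(-0.5, 0.5, (n_reservoir, n_in))
    W = rng.uniform(-0.5, 0.5, (n_reservoir, n_reservoir))
    W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))  # rescale spectral radius
    states = np.zeros((len(inputs), n_reservoir))
    x = np.zeros(n_reservoir)
    for t, u in enumerate(inputs):
        x = np.tanh(W_in @ u + W @ x)
        states[t] = x
    return states

def train_readout(states, targets, ridge=1e-6):
    """Ridge regression: W_out = Y^T S (S^T S + ridge*I)^{-1}."""
    n = states.shape[1]
    return np.linalg.solve(states.T @ states + ridge * np.eye(n),
                           states.T @ targets).T

# Usage: states = run_reservoir(U); W_out = train_readout(states, Y);
# a prediction at time t is then W_out @ states[t].
```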

    Generalized Dantzig Selector: Application to the k-support norm

    We propose a Generalized Dantzig Selector (GDS) for linear models, in which any norm encoding the parameter structure can be leveraged for estimation. We investigate both computational and statistical aspects of the GDS. Based on the conjugate proximal operator, a flexible inexact ADMM framework is designed for solving the GDS, and non-asymptotic high-probability bounds are established on the estimation error; these bounds rely on the Gaussian widths of the unit norm ball and of a suitable set encompassing the estimation error. Further, we consider a non-trivial example of the GDS using the k-support norm. We derive an efficient method to compute the proximal operator for the k-support norm, since existing methods are inapplicable in this setting. For the statistical analysis, we provide upper bounds for the Gaussian widths needed in the GDS analysis, yielding the first statistical recovery guarantee for estimation with the k-support norm. The experimental results confirm our theoretical analysis. Comment: Updates to boun
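To make the GDS problem concrete, the sketch below states its l1 specialization (the classical Dantzig selector) with CVXPY rather than the paper's inexact ADMM framework, and it does not cover the k-support norm case. The names X, y, and lam are placeholders.

```python
# Minimal sketch: the l1 special case of the Generalized Dantzig Selector,
#   min ||theta||_1  s.t.  ||X^T (y - X theta)||_inf <= lam ,
# solved with an off-the-shelf CVXPY solver (not the paper's inexact ADMM).
import cvxpy as cp

def dantzig_selector_l1(X, y, lam):
    p = X.shape[1]
    theta = cp.Variable(p)
    residual_correlation = X.T @ (y - X @ theta)          # affine in theta
    objective = cp.Minimize(cp.norm1(theta))               # structure-encoding norm
    constraints = [cp.norm_inf(residual_correlation) <= lam]  # dual-norm constraint
    cp.Problem(objective, constraints).solve()
    return theta.value
```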

    Peaceman-Rachford splitting for a class of nonconvex optimization problems

    We study the applicability of the Peaceman-Rachford (PR) splitting method for solving nonconvex optimization problems. When the method is applied to minimizing the sum of a strongly convex Lipschitz differentiable function and a proper closed function, we show that if the strongly convex function has a sufficiently large strong convexity modulus and the step-size parameter is chosen below a computable threshold, then any cluster point of the generated sequence, if one exists, gives a stationary point of the optimization problem. We also give sufficient conditions guaranteeing boundedness of the generated sequence. We then discuss one way to split the objective so that the proposed method can be applied to optimization problems with a coercive objective that is the sum of a (not necessarily strongly) convex Lipschitz differentiable function and a proper closed function; this setting covers a large class of nonconvex feasibility problems and constrained least squares problems. Finally, we illustrate the proposed algorithm numerically.
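The sketch below shows the basic Peaceman-Rachford iteration z_{k+1} = (2 prox_{gamma g} - I)(2 prox_{gamma f} - I) z_k for a strongly convex quadratic f and the proper closed function g = lam*||.||_1. This is an illustrative instance of the splitting scheme, not the paper's test problems; A, b, lam, and gamma are placeholder names, and gamma must satisfy the paper's step-size condition for the convergence guarantees to apply.

```python
# Sketch, under illustrative assumptions: Peaceman-Rachford splitting for
#   f(x) = 0.5*||A x - b||_2^2   (strongly convex when A has full column rank)
#   g(x) = lam*||x||_1           (proper closed, possibly inducing nonsmoothness)
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def peaceman_rachford(A, b, lam, gamma, n_iter=500):
    n = A.shape[1]
    M = np.eye(n) + gamma * (A.T @ A)             # system matrix for prox of f
    Atb = A.T @ b
    z = np.zeros(n)
    for _ in range(n_iter):
        x = np.linalg.solve(M, z + gamma * Atb)   # prox_{gamma f}(z)
        w = 2.0 * x - z                           # reflection through f
        y = soft_threshold(w, gamma * lam)        # prox_{gamma g}(w)
        z = 2.0 * y - w                           # reflection through g
    return y                                      # candidate stationary point
```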