
    RLS Adaptive Filtering Algorithms Based on Parallel Computations

    The paper presents a family of sliding-window RLS adaptive filtering algorithms with regularization of the adaptive filter correlation matrix. The algorithms are developed in forms suited to implementation by means of parallel computations. The family includes RLS and fast RLS algorithms based on the generalized matrix inversion lemma, fast RLS algorithms based on square-root-free inverse QR decomposition, and linearly constrained RLS algorithms. The considered algorithms are mathematically identical to the corresponding algorithms with sequential computations. The computational procedures of the developed algorithms are presented, together with the results of algorithm simulations.
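    For orientation, a minimal NumPy sketch of a conventional exponentially weighted RLS update is given below; it is not the paper's sliding-window, regularized, or parallelized variants, and the filter length, forgetting factor, and initialization constant are illustrative assumptions.

```python
import numpy as np

def rls_filter(x, d, order=4, lam=0.99, delta=1e2):
    """Exponentially weighted RLS adaptive filter (illustrative sketch only)."""
    w = np.zeros(order)                # filter weights
    P = delta * np.eye(order)          # estimate of the inverse correlation matrix
    y = np.zeros(len(d))               # filter output
    e = np.zeros(len(d))               # a priori error
    for n in range(order, len(d)):
        u = x[n - order:n][::-1]       # regressor, most recent sample first
        k = P @ u / (lam + u @ P @ u)  # gain vector
        y[n] = w @ u
        e[n] = d[n] - y[n]
        w = w + k * e[n]               # weight update
        P = (P - np.outer(k, u @ P)) / lam   # Riccati update of P
    return w, y, e
```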

    Exploring the Optimized Value of Each Hyperparameter in Various Gradient Descent Algorithms

    In recent years, various gradient descent algorithms, including plain gradient descent, gradient descent with momentum, adaptive gradient (AdaGrad), root-mean-square propagation (RMSProp) and adaptive moment estimation (Adam), have been applied to the parameter optimization of several deep learning models, yielding higher accuracies or lower errors. These optimization algorithms require setting the values of several hyperparameters, which include a learning rate, momentum coefficients, etc. Furthermore, the convergence speed and solution accuracy may be influenced by the values of the hyperparameters. Therefore, this study proposes an analytical framework that uses mathematical models to analyze the mean error of each objective function under the various gradient descent algorithms. Moreover, a suitable value for each hyperparameter can be determined by minimizing the mean error. Principles of hyperparameter value setting are generalized from the analysis results for model optimization. The experimental results show that faster convergence and lower errors can be obtained by the proposed method. Comment: in Chinese language
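    To make the hyperparameters concrete, here is a hedged sketch of the update rules for the optimizers named in the abstract; the names lr, beta1, beta2 and eps follow common conventions rather than the paper's notation, and the default values are illustrative.

```python
import numpy as np

def optimize(grad, x0, method="adam", lr=0.01, beta1=0.9, beta2=0.999,
             eps=1e-8, steps=1000):
    """Illustrative updates for SGD, momentum, AdaGrad, RMSProp and Adam."""
    x = np.asarray(x0, dtype=float)
    v = np.zeros_like(x)      # momentum term / first moment
    s = np.zeros_like(x)      # squared-gradient accumulator / second moment
    for t in range(1, steps + 1):
        g = grad(x)
        if method == "sgd":
            x -= lr * g
        elif method == "momentum":
            v = beta1 * v + g
            x -= lr * v
        elif method == "adagrad":
            s += g * g
            x -= lr * g / (np.sqrt(s) + eps)
        elif method == "rmsprop":
            s = beta2 * s + (1 - beta2) * g * g
            x -= lr * g / (np.sqrt(s) + eps)
        elif method == "adam":
            v = beta1 * v + (1 - beta1) * g
            s = beta2 * s + (1 - beta2) * g * g
            v_hat = v / (1 - beta1 ** t)       # bias correction
            s_hat = s / (1 - beta2 ** t)
            x -= lr * v_hat / (np.sqrt(s_hat) + eps)
    return x

# Example: minimize f(x) = ||x||^2, whose gradient is 2x.
x_opt = optimize(lambda x: 2 * x, x0=[3.0, -2.0], method="adam")
```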

    Computing the eigenvalues of symmetric H2-matrices by slicing the spectrum

    The computation of eigenvalues of large-scale matrices arising from finite element discretizations has gained significant interest in the last decade. Here we present a new algorithm based on slicing the spectrum that takes advantage of the rank structure of resolvent matrices in order to compute m eigenvalues of the generalized symmetric eigenvalue problem in $\mathcal{O}(n m \log^\alpha n)$ operations, where $\alpha > 0$ is a small constant.
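    A dense, hedged sketch of the underlying spectrum-slicing idea follows: Sylvester's law of inertia applied to $A - \sigma B$ counts the generalized eigenvalues below a shift, and bisection on that count isolates individual eigenvalues. This assumes symmetric A and positive definite B and uses a plain LDL factorization, whereas the paper exploits the H2-matrix structure of the shifted matrices.

```python
import numpy as np
from scipy.linalg import ldl

def count_eigs_below(A, B, sigma):
    """Number of generalized eigenvalues of (A, B) below sigma,
    via the inertia of A - sigma*B (dense sketch; B assumed SPD)."""
    _, D, _ = ldl(A - sigma * B)
    return int(np.sum(np.linalg.eigvalsh(D) < 0))   # D is block diagonal

def slice_eigenvalue(A, B, k, lo, hi, tol=1e-10):
    """Bisection for the k-th smallest generalized eigenvalue inside [lo, hi]."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if count_eigs_below(A, B, mid) >= k:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```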

    Robustness maximization of parallel multichannel systems

    Bit error rate (BER) minimization and SNR-gap maximization, two robustness optimization problems, are solved, under average power and bit-rate constraints, according to the waterfilling policy. Under a peak-power constraint the solutions differ, and this paper gives bit-loading solutions of both robustness optimization problems over independent parallel channels. The study is based on an analytical approach using the generalized Lagrangian relaxation tool and on a greedy-type algorithm approach. Tight BER expressions are used for square and rectangular quadrature amplitude modulations. Integer-bit solutions of the analytical continuous bit rates are obtained with a new generalized secant method. The asymptotic convergence of both robustness optimizations is proved for both the analytical and the algorithmic approaches. We also prove that, in the conventional margin maximization problem, the equivalence between SNR-gap maximization and power minimization does not hold under a peak-power limitation. Based on a defined dissimilarity measure, bit-loading solutions are compared over a power line communication channel for multicarrier systems. Simulation results confirm the asymptotic convergence of both allocation policies. In the non-asymptotic regime the allocation policies can be interchanged depending on the robustness measure and the operating point of the communication system. The low computational effort of the suboptimal solution based on the analytical approach leads to a good trade-off between performance and complexity. Comment: 27 pages, 8 figures, submitted to IEEE Trans. Inform. Theory
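    As background for the greedy-type approach mentioned above, here is a hedged sketch of classical Hughes-Hartogs-style bit loading over parallel channels using the standard SNR-gap power model P(b) = gap * (2^b - 1) / g; the channel gains, gap value and bit target are illustrative, and this is not the paper's peak-power-constrained algorithm.

```python
import numpy as np

def greedy_bit_loading(gains, target_bits, gap=1.0, max_bits_per_channel=10):
    """Greedy bit loading: repeatedly give one more bit to the channel
    whose incremental power cost is smallest."""
    gains = np.asarray(gains, dtype=float)
    bits = np.zeros(len(gains), dtype=int)

    def incr_power(b, g):
        # extra power needed to go from b to b+1 bits on a channel with gain g
        return gap * (2.0 ** (b + 1) - 2.0 ** b) / g

    for _ in range(target_bits):
        costs = np.array([incr_power(b, g) if b < max_bits_per_channel else np.inf
                          for b, g in zip(bits, gains)])
        i = int(np.argmin(costs))
        bits[i] += 1
    power = gap * (2.0 ** bits - 1.0) / gains
    return bits, power

# Example: 12 bits spread over four channels with unequal gains.
bits, power = greedy_bit_loading(gains=[8.0, 4.0, 1.0, 0.5], target_bits=12)
```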

    Linear Convergence of Comparison-based Step-size Adaptive Randomized Search via Stability of Markov Chains

    In this paper, we consider comparison-based adaptive stochastic algorithms for solving numerical optimisation problems. We consider a specific subclass of algorithms that we call comparison-based step-size adaptive randomized search (CB-SARS), where the state variables at a given iteration are a vector of the search space and a positive parameter, the step-size, typically controlling the overall standard deviation of the underlying search distribution. We investigate the linear convergence of CB-SARS on scaling-invariant objective functions. Scaling-invariant functions preserve the ordering of points with respect to their function value when the points are scaled with the same positive parameter (the scaling is done w.r.t. a fixed reference point). This class of functions includes norms composed with strictly increasing functions as well as many non-quasi-convex and non-continuous functions. On scaling-invariant functions, we show the existence of a homogeneous Markov chain, as a consequence of natural invariance properties of CB-SARS (essentially scale-invariance and invariance to strictly increasing transformations of the objective function). We then derive sufficient conditions for global linear convergence of CB-SARS, expressed in terms of different stability conditions of the normalised homogeneous Markov chain (irreducibility, positivity, Harris recurrence, geometric ergodicity), and thus define a general methodology for proving global linear convergence of CB-SARS algorithms on scaling-invariant functions. As a by-product we provide a connection between comparison-based adaptive stochastic algorithms and Markov chain Monte Carlo algorithms. Comment: SIAM Journal on Optimization, Society for Industrial and Applied Mathematics, 201
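    A simple member of the CB-SARS class is the (1+1)-ES with a one-fifth success rule: the state is the current point plus a step-size, and both updates depend only on a comparison of function values. The sketch below is illustrative; the adaptation constants are arbitrary and it is not the paper's general framework.

```python
import numpy as np

def one_plus_one_es(f, x0, sigma0=1.0, iters=2000, seed=0):
    """(1+1)-ES with a 1/5th success rule (comparison-based, step-size adaptive)."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    sigma = sigma0
    fx = f(x)
    for _ in range(iters):
        y = x + sigma * rng.standard_normal(x.shape)   # isotropic Gaussian candidate
        fy = f(y)
        if fy <= fx:                  # only the comparison is used, not the values
            x, fx = y, fy
            sigma *= 1.5              # success: enlarge the step-size
        else:
            sigma *= 1.5 ** (-0.25)   # failure: shrink; neutral at a 1/5 success rate
    return x, fx, sigma

# Example on a scaling-invariant function (a norm).
x_best, f_best, sigma = one_plus_one_es(lambda z: np.linalg.norm(z), x0=[5.0, -3.0])
```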

    No penalty no tears: Least squares in high-dimensional linear models

    Ordinary least squares (OLS) is the default method for fitting linear models, but it is not applicable to problems with dimensionality larger than the sample size. For these problems, we advocate the use of a generalized version of OLS motivated by ridge regression, and propose two novel three-step algorithms involving least squares fitting and hard thresholding. The algorithms are methodologically simple to understand, computationally easy to implement, and theoretically appealing for consistent model selection. Numerical exercises comparing our methods with penalization-based approaches in simulations and data analyses illustrate the great potential of the proposed algorithms. Comment: Added results for non-sparse models; added results for elliptical distributions; added simulations for adaptive lasso
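    The three-step recipe described in the abstract might look roughly as follows; the ridge-motivated generalized OLS estimator, the thresholding rule, and the sparsity level s used here are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def ols_threshold_refit(X, y, s, ridge=1e-4):
    """Sketch: (1) ridge-motivated generalized OLS, defined even when p > n;
    (2) hard thresholding keeping the s largest coefficients;
    (3) OLS refit on the selected support."""
    n, p = X.shape
    # Step 1: generalized OLS via the dual (n x n) system.
    beta = X.T @ np.linalg.solve(X @ X.T + ridge * np.eye(n), y)
    # Step 2: hard thresholding -- keep the s largest coefficients in magnitude.
    support = np.argsort(np.abs(beta))[-s:]
    # Step 3: ordinary least squares restricted to the selected support.
    beta_refit = np.zeros(p)
    beta_refit[support], *_ = np.linalg.lstsq(X[:, support], y, rcond=None)
    return beta_refit
```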

    A black-box rational Arnoldi variant for Cauchy-Stieltjes matrix functions

    Rational Arnoldi is a powerful method for approximating functions of large sparse matrices times a vector. The selection of asymptotically optimal parameters for this method is crucial for its fast convergence. We present and investigate a novel strategy for the automated parameter selection when the function to be approximated is of Cauchy-Stieltjes (or Markov) type, such as the matrix square root or the logarithm. The performance of this approach is demonstrated by numerical examples involving symmetric and nonsymmetric matrices. These examples suggest that our black-box method performs at least as well as, and typically better than, the standard rational Arnoldi method with parameters manually optimized for a given matrix.
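    For context, a bare-bones rational Arnoldi sketch is given below: it builds an orthonormal basis of a rational Krylov space with one shifted solve per pole and approximates f(A)b by Galerkin projection. The pole sequence, test matrix, and the inverse-square-root example (a Cauchy-Stieltjes function) are assumptions; automating the pole choice is precisely the paper's contribution and is not reproduced here.

```python
import numpy as np
from scipy.linalg import sqrtm, solve

def rational_arnoldi_fAb(A, b, poles, f):
    """Approximate f(A)b by projection onto a rational Krylov space
    spanned using the given (negative real) poles."""
    n = len(b)
    V = np.empty((n, len(poles) + 1))
    V[:, 0] = b / np.linalg.norm(b)
    for j, xi in enumerate(poles):
        w = solve(A - xi * np.eye(n), V[:, j])   # shifted linear solve
        for i in range(j + 1):                   # modified Gram-Schmidt
            w -= (V[:, i] @ w) * V[:, i]
        V[:, j + 1] = w / np.linalg.norm(w)
    Am = V.T @ A @ V                             # projected (small) matrix
    return V @ (f(Am) @ (V.T @ b))

# Example: approximate A^{-1/2} b for a random SPD matrix with fixed poles.
rng = np.random.default_rng(0)
Q = rng.standard_normal((50, 50))
A = Q @ Q.T + 50 * np.eye(50)
b = rng.standard_normal(50)
approx = rational_arnoldi_fAb(A, b, poles=[-1e-2, -1e-1, -1.0, -10.0, -100.0],
                              f=lambda M: np.real(np.linalg.inv(sqrtm(M))))
```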