3,731 research outputs found

    Blind adaptive constrained reduced-rank parameter estimation based on constant modulus design for CDMA interference suppression

    This paper proposes a multistage decomposition for blind adaptive parameter estimation in the Krylov subspace with the code-constrained constant modulus (CCM) design criterion. Based on constrained optimization of the constant modulus cost function and utilizing the Lanczos algorithm and Arnoldi-like iterations, a multistage decomposition is developed for blind parameter estimation. A family of computationally efficient blind adaptive reduced-rank stochastic gradient (SG) and recursive least squares (RLS) type algorithms, along with an automatic rank selection procedure, is also devised and evaluated against existing methods. An analysis of the convergence properties of the method is carried out, and convergence conditions for the reduced-rank adaptive algorithms are established. Simulation results consider the application of the proposed techniques to the suppression of multiaccess and intersymbol interference in DS-CDMA systems.
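    The two ingredients the abstract describes, a Krylov-subspace projection built from the code vector and a code-constrained constant-modulus stochastic-gradient update in the reduced-rank domain, can be illustrated with the minimal sketch below. The real-valued toy data, the step sizes, and the plain Gram-Schmidt recursion are illustrative assumptions standing in for the paper's multistage decomposition and DS-CDMA simulation setup, not the authors' exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
M, D, N = 31, 4, 500                        # full dimension, Krylov rank, snapshots

def krylov_basis(R, s, D):
    """Orthonormal basis of span{s, Rs, ..., R^(D-1)s} via an Arnoldi-like
    Gram-Schmidt recursion (a stand-in for the multistage decomposition)."""
    T = np.zeros((len(s), D))
    T[:, 0] = s / np.linalg.norm(s)
    for d in range(1, D):
        v = R @ T[:, d - 1]
        v -= T[:, :d] @ (T[:, :d].T @ v)    # orthogonalize against earlier stages
        T[:, d] = v / np.linalg.norm(v)
    return T

def ccm_sg_step(w, r, s, mu=0.01, nu=1.0):
    """One code-constrained constant-modulus stochastic-gradient step."""
    y = w @ r
    w = w - mu * (y * y - 1.0) * y * r      # constant-modulus gradient step
    w += s * (nu - s @ w) / (s @ s)         # re-impose the code constraint w^T s = nu
    return w

s = rng.choice([-1.0, 1.0], size=M) / np.sqrt(M)   # spreading code (placeholder)
R = np.eye(M)
w_bar = None
for i in range(N):
    r = rng.standard_normal(M)              # placeholder for a received chip vector
    R = 0.95 * R + 0.05 * np.outer(r, r)    # running covariance estimate
    if i < 10:
        continue                            # let R become well-conditioned first
    T = krylov_basis(R, s, D)
    r_bar, s_bar = T.T @ r, T.T @ s         # reduced-rank observation and code
    if w_bar is None:
        w_bar = s_bar / (s_bar @ s_bar)     # initialize on the constraint
    w_bar = ccm_sg_step(w_bar, r_bar, s_bar)
```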

    Importance sampling for stochastic programming

    Stochastic programming models are large-scale optimization problems used to facilitate decision-making under uncertainty. Optimization algorithms for such problems need to evaluate the expected future cost of current decisions, often referred to as the recourse function. In practice, this calculation is computationally difficult because it requires evaluating a multidimensional integral whose integrand is itself an optimization problem. The recourse function therefore has to be estimated using techniques such as scenario trees or Monte Carlo methods, both of which require numerous function evaluations to produce accurate results for large-scale problems with multiple periods and high-dimensional uncertainty. In this thesis, we introduce an importance sampling framework for stochastic programming that can produce accurate estimates of the recourse function from a small number of samples. Previous approaches to importance sampling in stochastic programming were limited to problems where the uncertainty was modelled with discrete random variables and the recourse function was additively separable in the uncertain dimensions. Our framework avoids these restrictions by pairing Markov chain Monte Carlo methods with kernel density estimation to build a non-parametric importance sampling distribution, which is then used to produce a low-variance estimate of the recourse function.

    We demonstrate the increased accuracy and efficiency of our approach on variants of well-known multistage stochastic programming problems. The numerical results show that the framework produces more accurate estimates of the optimal value of stochastic programming models, especially for problems with moderate-to-high-variance or rare-event distributions. For example, when the random variables are drawn from a rare-event distribution, the proposed algorithm achieves a fourfold reduction in mean square error and variance relative to existing methods (e.g. SDDP with crude Monte Carlo or SDDP with quasi-Monte Carlo sampling) for the same number of samples. When the random variables are drawn from a high-variance distribution, the proposed algorithm reduces the variance by a factor of two on average compared with other methods, at approximately the same mean square error and for a fixed number of samples.

    We use the proposed algorithm to solve a capacity expansion planning problem in the electric power industry. The model includes the unit commitment problem and maintenance scheduling, and allows investors to make optimal decisions on the capacity and type of generators to build in order to minimize capital and operating costs over a long planning horizon. It computes the optimal schedule for each generator while meeting demand and respecting each generator's engineering constraints. To reduce the problem size, we use an aggregation method that groups generators with similar features. The numerical experiments show that by clustering generators of the same technology and similar size and applying the SDDP algorithm with the proposed sampling framework to this simplified formulation, we can solve the problem in roughly one quarter of the time required by conventional algorithms on the original problem, without a significant reduction in solution quality.
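    The core sampling idea described above, a Markov chain targeting a density proportional to the recourse function times the prior, followed by kernel density estimation to obtain a tractable proposal, can be sketched as below. The stand-in recourse function Q, the Gaussian prior, and all tuning constants are assumptions for illustration only and are not taken from the thesis.

```python
import numpy as np
from scipy.stats import gaussian_kde, multivariate_normal

rng = np.random.default_rng(1)
dim = 2
prior = multivariate_normal(mean=np.zeros(dim), cov=np.eye(dim))

def Q(xi):
    """Stand-in 'recourse function': nonzero only in a rare corner of the space."""
    return np.maximum(np.atleast_2d(xi).sum(axis=-1) - 4.0, 0.0)

def target_log(xi, eps=1e-12):
    """Log of the (unnormalized) zero-variance density g*(xi) ~ Q(xi) p(xi)."""
    return np.log(Q(xi)[0] + eps) + prior.logpdf(xi)

# 1) Metropolis-Hastings chain targeting g*; start near the important region.
chain, x = [], np.full(dim, 2.5)
for _ in range(6000):
    prop = x + 0.5 * rng.standard_normal(dim)
    if np.log(rng.uniform()) < target_log(prop) - target_log(x):
        x = prop
    chain.append(x.copy())
chain = np.array(chain[1000:])              # drop burn-in

# 2) Fit a non-parametric (kernel density) importance-sampling proposal.
kde = gaussian_kde(chain.T)

# 3) Sample the proposal and form a self-normalized importance-sampling estimate.
N = 2000
samples = kde.resample(N, seed=2).T
w = np.exp(prior.logpdf(samples) - kde.logpdf(samples.T))   # weights p / g
is_estimate = np.sum(w * Q(samples)) / np.sum(w)

crude = Q(prior.rvs(size=N, random_state=3)).mean()         # crude Monte Carlo baseline
print(f"importance sampling: {is_estimate:.5f}   crude Monte Carlo: {crude:.5f}")
```

    Because Q is nonzero only on a rare event under the prior, most crude Monte Carlo samples contribute nothing to the estimate, whereas the fitted proposal concentrates its samples where Q is large; this is the mechanism behind the variance reductions the abstract reports.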

    Reduced-Rank STAP Schemes for Airborne Radar Based on Switched Joint Interpolation, Decimation and Filtering Algorithm

    In this paper, we propose a reduced-rank space-time adaptive processing (STAP) technique for airborne phased array radar applications. The proposed STAP method performs dimensionality reduction by using a reduced-rank switched joint interpolation, decimation and filtering algorithm (RR-SJIDF). In this scheme, a multiple-processing-branch (MPB) framework, which contains a set of jointly optimized interpolation, decimation and filtering units, is proposed to adaptively process the observations and suppress jammers and clutter. The output is switched to the branch with the best performance according to the minimum variance criterion. In order to design the decimation unit, we present an optimal decimation scheme and a low-complexity decimation scheme. We also develop two adaptive implementations of the proposed scheme, one based on a recursive least squares (RLS) algorithm and the other on a constrained conjugate gradient (CCG) algorithm. The proposed adaptive algorithms are tested with simulated radar data. The simulation results show that the proposed RR-SJIDF STAP schemes with both the RLS and the CCG algorithms converge very quickly and provide a considerable SINR improvement over state-of-the-art reduced-rank schemes.
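    A much-simplified sketch of the branch-switching idea follows: several decimation patterns are tried, an MVDR-style reduced-rank weight is computed for each, and the branch with the minimum output variance is selected. The decimation patterns, the diagonal loading, the direct matrix inversion (used here in place of the paper's RLS/CCG recursions and jointly optimized interpolators), and the toy jammer-plus-noise snapshots are illustrative assumptions, not the authors' scheme.

```python
import numpy as np

rng = np.random.default_rng(0)
M, D, N = 64, 16, 300                      # full space-time dimension, reduced rank, snapshots
steer = np.exp(1j * np.pi * np.arange(M) * 0.2) / np.sqrt(M)   # toy space-time steering vector

# toy jammer-plus-noise snapshots (placeholders for simulated radar data)
jam = np.exp(1j * np.pi * np.arange(M) * 0.7)
X = (10 * jam[:, None] * rng.standard_normal(N)
     + rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))

# a few decimation branches: keep every (M // D)-th element with different offsets
branches = [np.arange(b, M, M // D)[:D] for b in range(4)]

best = None
for idx in branches:
    Xd, sd = X[idx, :], steer[idx]                  # decimated data and steering vector
    Rd = Xd @ Xd.conj().T / N + 1e-3 * np.eye(D)    # loaded sample covariance
    w = np.linalg.solve(Rd, sd)
    w /= (sd.conj() @ w)                            # MVDR constraint w^H s_d = 1
    var = np.real(w.conj() @ Rd @ w)                # output variance of this branch
    if best is None or var < best[0]:               # switch to the minimum-variance branch
        best = (var, idx, w)

print(f"selected branch offset {best[1][0]}, output variance {best[0]:.3f}")
```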