
    A survey of the state-of-the-art and focused research in range systems

    In this one-year renewal of NASA Contract No. 2-304, basic research, development, and implementation in the areas of modern estimation algorithms and digital communication systems have been performed. In the first area, the conversion of general classes of practical signal processing algorithms into systolic array algorithms was studied, producing four publications. The finite word length effects and convergence rates of lattice algorithms were also studied, producing two publications. In the second area, the use of efficient importance sampling simulation techniques for the evaluation of digital communication system performance was studied, producing two publications.
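
    The importance sampling technique mentioned above is a standard rare-event simulation tool; the following minimal Python sketch (not taken from the report) illustrates the idea for estimating the error probability of binary antipodal signalling in Gaussian noise, where samples are drawn from a density shifted toward the decision boundary and reweighted by the likelihood ratio. The amplitude, noise level, and bias point are arbitrary choices for illustration.

```python
# Minimal illustration (not from the report) of importance sampling for a
# digital-communication error probability: estimate P(n > A) for Gaussian
# noise n ~ N(0, sigma^2), i.e. the error probability of binary antipodal
# signalling with amplitude A over an AWGN channel.
from math import erfc, sqrt
import numpy as np

rng = np.random.default_rng(0)
A, sigma = 4.0, 1.0            # assumed signal amplitude and noise std. dev.
num_samples = 10_000

# Biased sampling density g: a Gaussian centred on the decision boundary A,
# so roughly half of the samples fall in the rare error region {n > A}.
n = rng.normal(loc=A, scale=sigma, size=num_samples)

# Likelihood ratio w(n) = f(n)/g(n) between the true and biased densities.
log_w = (-(n ** 2) + (n - A) ** 2) / (2.0 * sigma ** 2)
p_hat = np.mean((n > A) * np.exp(log_w))

# Reference value from the Gaussian tail probability Q(A/sigma).
p_exact = 0.5 * erfc(A / (sigma * sqrt(2.0)))
print(f"importance-sampling estimate: {p_hat:.3e}, exact: {p_exact:.3e}")
```

    With the biased density, a few thousand samples suffice for an event of probability around 3e-5, whereas plain Monte Carlo would need millions of trials for comparable relative accuracy.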

    Covariance matrix estimation with heterogeneous samples

    We consider the problem of estimating the covariance matrix Mp of an observation vector, using heterogeneous training samples, i.e., samples whose covariance matrices are not exactly Mp. More precisely, we assume that the training samples can be clustered into K groups, each one containing Lk snapshots sharing the same covariance matrix Mk. Furthermore, a Bayesian approach is proposed in which the matrices Mk are assumed to be random with some prior distribution. We consider two different assumptions for Mp. In a fully Bayesian framework, Mp is assumed to be random with a given prior distribution. Under this assumption, we derive the minimum mean-square error (MMSE) estimator of Mp, which is implemented using a Gibbs-sampling strategy. Moreover, a simpler scheme based on a weighted sample covariance matrix (SCM) is also considered. The weights minimizing the mean square error (MSE) of the estimated covariance matrix are derived. Furthermore, we consider estimators based on colored or diagonal loading of the weighted SCM, and we determine theoretically the optimal level of loading. Finally, in order to relax the a priori assumptions about the covariance matrix Mp, the second part of the paper assumes that this matrix is deterministic and derives its maximum-likelihood estimator. Numerical simulations are presented to illustrate the performance of the different estimation schemes.
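
    As a rough illustration of the weighted-SCM scheme described above (not the paper's derivation of the optimal weights or loading level), the following Python sketch forms a weighted combination of per-group sample covariance matrices and applies diagonal loading; the group weights and loading level are placeholders chosen by hand.

```python
# Sketch of a weighted sample covariance matrix (SCM) built from K groups of
# heterogeneous snapshots, followed by diagonal loading. The weights w_k and
# the loading level are placeholders here; in the paper they are derived to
# minimise the MSE of the estimate of Mp.
import numpy as np

def weighted_scm(groups, weights, loading=0.0):
    """groups: list of p x L_k snapshot matrices; weights: length-K array."""
    p = groups[0].shape[0]
    M = np.zeros((p, p), dtype=complex)
    for X_k, w_k in zip(groups, weights):
        M += w_k * (X_k @ X_k.conj().T) / X_k.shape[1]   # weighted per-group SCM
    return M / np.sum(weights) + loading * np.eye(p)     # diagonal loading

# Synthetic example: two groups whose covariance matrices differ in scale.
rng = np.random.default_rng(1)
p = 4
X1 = rng.standard_normal((p, 50))          # group 1: 50 snapshots
X2 = 1.5 * rng.standard_normal((p, 30))    # group 2: 30 snapshots, different Mk
M_hat = weighted_scm([X1, X2], weights=np.array([1.0, 0.5]), loading=0.1)
print(np.round(M_hat.real, 2))
```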

    A new integral representation for quasiperiodic fields and its application to two-dimensional band structure calculations

    In this paper, we consider band-structure calculations governed by the Helmholtz or Maxwell equations in piecewise homogeneous periodic materials. Methods based on boundary integral equations are natural in this context, since they discretize the interface alone and can achieve high-order accuracy in complicated geometries. In order to handle the quasi-periodic conditions which are imposed on the unit cell, the free-space Green's function is typically replaced by its quasi-periodic cousin. Unfortunately, the quasi-periodic Green's function diverges for families of parameter values that correspond to resonances of the empty unit cell. Here, we bypass this problem by means of a new integral representation that relies on the free-space Green's function alone, adding auxiliary layer potentials on the boundary of the unit cell itself. An important aspect of our method is that by carefully including a few neighboring images, the densities may be kept smooth and convergence rapid. This framework results in an integral equation of the second kind, avoids spurious resonances, and achieves spectral accuracy. Because of our image structure, inclusions which intersect the unit cell walls may be handled easily and automatically. Our approach is compatible with fast-multipole acceleration, generalizes easily to three dimensions, and avoids the complication of divergent lattice sums. Comment: 25 pages, 6 figures, submitted to J. Comput. Phys.
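
    For orientation (schematic definitions in standard notation, not the paper's exact formulation): the quasi-periodic Green's function used in the conventional approach is the Bloch-phased lattice sum of the free-space Helmholtz kernel, and it is this sum that fails at empty-cell resonances; the representation proposed in the paper instead keeps only a few neighbouring images of the free-space kernel on the physical interface and adds auxiliary layer potentials on the unit-cell boundary.

```latex
% Schematic definitions (standard notation, not the paper's exact formulas).
% Free-space 2D Helmholtz Green's function and its quasi-periodic counterpart
% for period d and Bloch phase \alpha = e^{i\kappa d}:
\[
  G_0(\mathbf{x},\mathbf{y}) = \frac{i}{4}\, H^{(1)}_0\!\bigl(\omega\,|\mathbf{x}-\mathbf{y}|\bigr),
  \qquad
  G^{\mathrm{qp}}(\mathbf{x},\mathbf{y})
    = \sum_{n\in\mathbb{Z}} \alpha^{\,n}\, G_0\bigl(\mathbf{x},\,\mathbf{y}+n d\,\mathbf{e}_1\bigr).
\]
% The lattice sum defining G^{qp} breaks down at resonances of the empty unit
% cell; the new representation avoids it, using G_0 with a few neighbouring
% images plus auxiliary layer potentials on the unit-cell walls.
```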

    Empirical Bayes selection of wavelet thresholds

    This paper explores a class of empirical Bayes methods for level-dependent threshold selection in wavelet shrinkage. The prior considered for each wavelet coefficient is a mixture of an atom of probability at zero and a heavy-tailed density. The mixing weight, or sparsity parameter, for each level of the transform is chosen by marginal maximum likelihood. If estimation is carried out using the posterior median, this is a random thresholding procedure; the estimation can also be carried out using other thresholding rules with the same threshold. Details of the calculations needed for implementing the procedure are included. In practice, the estimates are quick to compute and there is software available. Simulations on the standard model functions show excellent performance, and applications to data drawn from various fields of application are used to explore the practical performance of the approach. By using a general result on the risk of the corresponding marginal maximum likelihood approach for a single sequence, overall bounds on the risk of the method are found subject to membership of the unknown function in one of a wide range of Besov classes, covering also the case of f of bounded variation. The rates obtained are optimal for any value of the parameter p in (0, \infty], simultaneously for a wide range of loss functions, each dominating the L_q norm of the \sigma-th derivative, with \sigma \ge 0 and 0 < q \le 2. Comment: Published at http://dx.doi.org/10.1214/009053605000000345 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
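
    In outline (standard empirical Bayes notation; the paper's precise conditions are more general), the level-dependent procedure can be written as follows: each wavelet coefficient is observed with Gaussian noise, its prior is a spike-and-slab mixture, and the mixing weight for level j is chosen to maximize the marginal likelihood of the coefficients at that level.

```latex
% Schematic form of the level-dependent empirical Bayes model (standard
% notation; not the paper's exact statement).
\[
  z_{jk} = \theta_{jk} + \epsilon_{jk}, \qquad \epsilon_{jk} \sim N(0,\sigma^2),
  \qquad
  \theta_{jk} \sim (1-w_j)\,\delta_0 + w_j\,\gamma ,
\]
% where \gamma is a fixed heavy-tailed density. The marginal density of an
% observed coefficient is m_w(z) = (1-w)\,\phi_\sigma(z) + w\,(\gamma \star \phi_\sigma)(z),
% and the sparsity parameter for level j is chosen by marginal maximum likelihood:
\[
  \hat{w}_j = \arg\max_{w\in[0,1]} \sum_{k} \log m_w(z_{jk}).
\]
% Each \theta_{jk} is then estimated by its posterior median (a thresholding
% rule), or by another thresholding rule using the same threshold.
```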

    A dynamically adaptive multigrid algorithm for the incompressible Navier-Stokes equations: Validation and model problems

    An algorithm is described for the solution of the laminar, incompressible Navier-Stokes equations. The basic algorithm is a multigrid method based on a robust, box-based smoothing step. Its most important feature is the incorporation of automatic, dynamic mesh refinement. The algorithm supports generalized simple domains. The program is based on a standard staggered-grid formulation of the Navier-Stokes equations for robustness and efficiency. Special grid transfer operators were introduced at grid interfaces in the multigrid algorithm to ensure discrete mass conservation. Results are presented for three model problems: the driven cavity, a backward-facing step, and a sudden expansion/contraction.
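
    The following self-contained Python sketch illustrates the smoothing / residual restriction / coarse-grid correction / prolongation structure that a multigrid solver of this kind is built on, using a 1D Poisson model problem rather than the paper's staggered-grid Navier-Stokes discretization; the Gauss-Seidel smoother and full-weighting / linear-interpolation transfer operators are standard textbook choices, not the paper's.

```python
# 1D Poisson multigrid V-cycle: a generic illustration of multigrid machinery
# (smooth, restrict residual, recurse, prolong correction, smooth), not the
# staggered-grid Navier-Stokes solver described in the abstract.
import numpy as np

def smooth(u, f, h, sweeps=3):
    # Gauss-Seidel relaxation for -u'' = f with homogeneous Dirichlet BCs.
    for _ in range(sweeps):
        for i in range(1, len(u) - 1):
            u[i] = 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i])
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2.0 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    return r

def restrict(r):
    # Full weighting onto a grid with half the resolution (sizes 2^k + 1).
    rc = np.zeros((len(r) - 1) // 2 + 1)
    rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
    return rc

def prolong(ec):
    # Linear interpolation of the coarse-grid correction back to the fine grid.
    e = np.zeros(2 * (len(ec) - 1) + 1)
    e[::2] = ec
    e[1::2] = 0.5 * (ec[:-1] + ec[1:])
    return e

def v_cycle(u, f, h):
    if len(u) == 3:                        # coarsest grid: solve the single unknown
        u[1] = 0.5 * h * h * f[1]
        return u
    u = smooth(u, f, h)                    # pre-smoothing
    ec = v_cycle(np.zeros((len(u) - 1) // 2 + 1),
                 restrict(residual(u, f, h)), 2.0 * h)
    u += prolong(ec)                       # coarse-grid correction
    return smooth(u, f, h)                 # post-smoothing

n = 129                                    # 2^7 + 1 grid points
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
f = np.pi ** 2 * np.sin(np.pi * x)         # manufactured problem, u_exact = sin(pi x)
u = np.zeros(n)
for _ in range(10):
    u = v_cycle(u, f, h)
print("max error vs exact solution:", float(np.max(np.abs(u - np.sin(np.pi * x)))))
```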

    Adaptive antenna array beamforming using a concatenation of recursive least square and least mean square algorithms

    In recent years, adaptive or smart antennas have become a key component of various wireless applications, such as radar, sonar and cellular mobile communications, including worldwide interoperability for microwave access (WiMAX). They increase the detection range of radar and sonar systems and the capacity of mobile radio communication systems. These antennas act as spatial filters, receiving the desired signals coming from a specific direction or directions while minimizing the reception of unwanted signals emanating from other directions.

    Because of its simplicity and robustness, the LMS algorithm has become one of the most popular adaptive signal processing techniques, adopted in many applications including antenna array beamforming. Over the last three decades, several improvements have been proposed to speed up the convergence of the LMS algorithm. These include the normalized LMS (NLMS) algorithm, the variable-length LMS algorithm, transform-domain algorithms, and more recently the constrained-stability LMS (CSLMS) algorithm and the modified robust variable step size LMS (MRVSS) algorithm. Yet another approach for speeding up the convergence of the LMS algorithm, without sacrificing too much of its error floor performance, is the use of a variable step size LMS (VSSLMS) algorithm. All published VSSLMS algorithms use an initially large adaptation step size to speed up convergence; upon approaching the steady state, smaller step sizes are introduced to decrease the level of adjustment, hence maintaining a lower error floor. This convergence improvement increases the complexity from 2N for the LMS algorithm to 9N for the MRVSS algorithm, where N is the number of array elements.

    An alternative to the LMS algorithm is the RLS algorithm. Although the RLS algorithm has a higher complexity than the LMS algorithm, it achieves faster convergence and, thus, better performance. Improvements have also been made to the RLS algorithm family to enhance its tracking ability and stability; examples are the adaptive forgetting factor RLS (AFF-RLS) algorithm, the variable forgetting factor RLS (VFFRLS) algorithm and the extended recursive least squares (EX-KRLS) algorithm. The multiplication complexities of the VFFRLS, AFF-RLS and EX-KRLS algorithms are 2.5N² + 3N + 20, 9N² + 7N, and 15N³ + 7N² + 2N + 4 respectively, while the RLS algorithm requires 2.5N² + 3N.

    All of the above well-known algorithms require an accurate reference signal for proper operation, and in some cases several additional operating parameters must be specified. For example, MRVSS needs twelve predefined parameters; as a result, its performance depends heavily on the input signal.

    In this study, two adaptive beamforming algorithms are proposed: the recursive least square - least mean square (RLMS) algorithm and the least mean square - least mean square (LLMS) algorithm. These algorithms are designed to meet future beamforming requirements, such as a very high convergence rate, robustness to noise and flexible modes of operation. The RLMS algorithm makes use of two individual algorithm stages, based on the RLS and LMS algorithms, connected in tandem via an array image vector. The LLMS algorithm is a simpler version of the RLMS algorithm: it makes use of two LMS algorithm stages instead of the RLS-LMS combination used in the RLMS algorithm. Unlike other adaptive beamforming algorithms, in both of these algorithms the error signal of the second algorithm stage is fed back and combined with the error signal of the first algorithm stage to form an overall error signal, which is used to update the tap weights of the first algorithm stage.

    Upon convergence, usually after a few iterations, the proposed algorithms can be switched to a self-referencing mode, in which the overall algorithm outputs replace their reference signals. In moving-target applications, the array image vector, F, should also be updated to the new position; this scenario is studied for both proposed algorithms, and a simple and effective method for calculating the required array image vector is proposed. Moreover, since the RLMS and LLMS algorithms employ the array image vector in their operation, they can be used to generate fixed beams by presetting the values of the array image vector to the specified direction.

    The convergence of the RLMS and LLMS algorithms is analyzed for two different modes of operation, namely with an external reference or with self-referencing. Array image vector calculations, the ranges of step size values for stable operation, fixed beam generation, and fixed-point arithmetic have also been studied in this thesis. All of these analyses have been confirmed by computer simulations under different signal conditions. The simulation results show that both proposed algorithms are superior in convergence performance to algorithms such as the CSLMS, MRVSS, LMS, VFFRLS and RLS algorithms, and are quite insensitive to variations in the input SNR and the actual step size values used. Furthermore, the RLMS and LLMS algorithms remain stable even when their reference signals are corrupted by additive white Gaussian noise (AWGN), and they are robust when operating in the presence of Rayleigh fading. Finally, the fidelity of the signal at the output of the proposed beamformers is demonstrated by means of the resulting error vector magnitude (EVM) values and scatter plots. It is also shown that an implementation of an eight-element uniform linear array using the proposed algorithms with a wordlength of nine bits is sufficient to achieve performance close to that provided by full precision.
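
    As a rough schematic of the two-stage structure with combined error feedback (not the thesis's exact update equations), the following Python sketch cascades two LMS stages through an assumed array image vector F, feeds the second-stage error back into the first-stage update, and uses normalised step sizes for numerical robustness; the scenario (one desired signal, one interferer, BPSK reference), the choice of F as the normalised steering vector of the desired direction, and all parameter values are illustrative assumptions.

```python
# Schematic sketch of a two-stage LMS-LMS (LLMS-type) beamformer with combined
# error feedback. The exact update equations of the thesis are not reproduced
# here; F, the normalised step sizes, and the error combination are assumptions
# made purely for illustration.
import numpy as np

rng = np.random.default_rng(0)
N = 8                                         # uniform linear array elements
spacing = 0.5                                 # element spacing in wavelengths

def steering(theta_deg):
    k = np.arange(N)
    return np.exp(2j * np.pi * spacing * k * np.sin(np.radians(theta_deg)))

a_des, a_int = steering(10.0), steering(-40.0)    # desired signal and interferer
F = a_des / N                                     # assumed array image vector

w1 = np.zeros(N, dtype=complex)                   # first-stage (LMS) weights
w2 = np.zeros(N, dtype=complex)                   # second-stage (LMS) weights
mu1, mu2, eps = 0.5, 0.5, 1e-9                    # assumed normalised step sizes
err2 = []

for _ in range(2000):
    s = np.sign(rng.standard_normal()) + 0j       # desired BPSK symbol (reference)
    i = (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)
    noise = 0.1 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
    x = s * a_des + i * a_int + noise             # array snapshot

    y1 = np.vdot(w1, x)                           # first-stage output
    x2 = F * y1                                   # second-stage input via F
    y2 = np.vdot(w2, x2)                          # overall (second-stage) output

    e2 = s - y2                                   # second-stage error
    e1 = (s - y1) + e2                            # combined error fed back to stage 1

    w1 += mu1 * np.conj(e1) * x / (np.vdot(x, x).real + eps)
    w2 += mu2 * np.conj(e2) * x2 / (np.vdot(x2, x2).real + eps)
    err2.append(abs(e2))

print("mean |e2| over first 200 snapshots:", round(float(np.mean(err2[:200])), 3))
print("mean |e2| over last 200 snapshots: ", round(float(np.mean(err2[-200:])), 3))
```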