955 research outputs found

    Semi-blind adaptive beamforming for high-throughput quadrature amplitude modulation systems

    A semi-blind adaptive beamforming scheme is proposed for wireless systems that employ high-throughput quadrature amplitude modulation signalling. A minimum number of training symbols, equal to the number of receiver antenna array elements, is first utilised to provide a rough initial least squares estimate of the beamformer's weight vector. A concurrent constant modulus algorithm and soft decision-directed scheme is then applied to adapt the beamformer further. As demonstrated in our simulation study, this semi-blind adaptive beamforming scheme converges quickly to the minimum mean-square-error beamforming solution.
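As a rough numerical sketch of the two stages described in this abstract (the array size, noise level, step size and diagonal loading term are all illustrative assumptions, and the blind stage is simplified to a plain CMA rather than the paper's concurrent CMA and soft decision-directed scheme):

```python
import numpy as np

rng = np.random.default_rng(0)
M = 4                          # receiver array elements (assumed)
K = M                          # minimum training length = number of elements

# Toy scenario: one desired user with a known steering vector and
# BPSK-like training symbols (the paper targets high-order QAM).
steer = np.exp(1j * np.pi * np.arange(M) * np.sin(0.3))
s = (rng.integers(0, 2, K) * 2 - 1).astype(complex)
X = np.outer(steer, s) + 0.05 * (rng.standard_normal((M, K))
                                 + 1j * rng.standard_normal((M, K)))

# Stage 1: rough least squares estimate of w from the K training snapshots,
# fitting w^H x_k ~= s_k; a small diagonal loading term (an added assumption)
# stabilises the exactly determined square system.
w = np.linalg.solve(X @ X.conj().T + 0.1 * np.eye(M), X @ s.conj())

# Stage 2: blind constant modulus adaptation (simplified single-CMA update).
mu, R2 = 0.01, 1.0             # step size and constant-modulus radius (assumed)
for k in range(200):
    x = steer * s[k % K] + 0.05 * (rng.standard_normal(M)
                                   + 1j * rng.standard_normal(M))
    y = w.conj() @ x                                    # beamformer output w^H x
    w = w - mu * (np.abs(y)**2 - R2) * np.conj(y) * x   # CMA gradient step
```

The training stage alone already places the response to the desired steering vector near unity; the blind stage then refines the weights without further training symbols.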

    Adaptive minimum symbol error rate beamforming assisted receiver for quadrature amplitude modulation systems

    An adaptive beamforming assisted receiver is proposed for multiple-antenna-aided multiuser systems that employ bandwidth-efficient quadrature amplitude modulation (QAM). A novel minimum symbol error rate (MSER) design is proposed for the beamforming assisted receiver, in which the system's symbol error rate is directly optimized. The MSER approach therefore provides a significant symbol error rate performance enhancement over the classic minimum mean square error design. A sample-by-sample adaptive algorithm, referred to as the least symbol error rate (LSER) technique, is derived to allow the adaptive implementation of the system to converge from its initial beamforming weight solution to the MSER beamforming solution.
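As an illustrative, heavily simplified sketch of such a sample-by-sample minimum-SER-driven update (BPSK symbols stand in for the paper's QAM constellations, and the kernel width rho, step size mu and array size are hypothetical values):

```python
import numpy as np

rng = np.random.default_rng(0)
M = 4                          # array elements (assumed)
rho, mu = 0.8, 0.1             # kernel width and step size (illustrative)

steer = np.exp(1j * np.pi * np.arange(M) * 0.25)   # toy steering vector
w = steer / M                  # rough initial weight vector (e.g. from LS)

for _ in range(100):
    s = float(rng.integers(0, 2) * 2 - 1)          # BPSK symbol, +1 or -1
    x = steer * s + 0.1 * (rng.standard_normal(M)
                           + 1j * rng.standard_normal(M))
    # Signed decision variable: a symbol error occurs when it is negative.
    # A Gaussian kernel of width rho smooths this error indicator so that
    # a stochastic gradient of the kernel-estimated SER can be taken.
    ys = s * np.real(np.conj(w) @ x)
    g = np.exp(-ys**2 / (2 * rho**2))
    w = w + mu * g * s * x     # step moving ys away from the decision boundary
```

The key design difference from an MMSE-driven update is visible in the kernel factor: samples far from the decision boundary contribute almost nothing, so the adaptation concentrates on the samples that actually determine the error rate.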

    A Low-Cost Robust Distributed Linearly Constrained Beamformer for Wireless Acoustic Sensor Networks with Arbitrary Topology

    We propose a new robust distributed linearly constrained beamformer that utilizes a set of linear equality constraints to reduce the cross power spectral density matrix to a block-diagonal form. The proposed beamformer has a convenient objective function for use in arbitrary distributed network topologies while having performance identical to a centralized implementation. Moreover, the new optimization problem is robust to relative acoustic transfer function (RATF) estimation errors and to target activity detection (TAD) errors. Two variants of the proposed beamformer are presented and evaluated in the context of multi-microphone speech enhancement in a wireless acoustic sensor network, and are compared with other state-of-the-art distributed beamformers in terms of communication costs and robustness to RATF estimation errors and TAD errors.

    A class of constant modulus algorithms for uniform linear arrays with a conjugate symmetric constraint

    A class of constant modulus algorithms (CMAs) subject to a conjugate symmetric constraint is proposed for blind beamforming based on the uniform linear array structure. The constraint is derived from the beamformer with an optimum output signal-to-interference-plus-noise ratio (SINR). The effect of the additional constraint is equivalent to adding a second step to the original adaptive algorithms. The proposed approach is general and can be applied both to the traditional CMA and to its many variants, such as the linearly constrained CMA (LCCMA) and the least squares CMA (LSCMA). With this constraint, the modified CMAs always generate a weight vector of the desired form at each update, and the number of adaptive variables is effectively halved, leading to much improved overall performance. (C) 2010 Elsevier B.V. All rights reserved.
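The "second step" can be written as a simple projection. For a uniform linear array with its phase reference at the array centre, the optimum-SINR weight vector satisfies w = J conj(w), where J is the exchange (reversal) matrix; the sketch below (array size is an arbitrary assumption) enforces that form after an arbitrary update:

```python
import numpy as np

M = 5                            # ULA elements (assumed)
J = np.fliplr(np.eye(M))         # exchange (reversal) matrix, J @ J = I

def enforce_conjugate_symmetry(w):
    # Averaging w with J conj(w) projects onto the set w = J conj(w):
    # applying J conj(.) to the result reproduces it, because J @ J = I.
    return 0.5 * (w + J @ np.conj(w))

rng = np.random.default_rng(1)
w = rng.standard_normal(M) + 1j * rng.standard_normal(M)  # e.g. after a CMA step
w = enforce_conjugate_symmetry(w)
```

Because the constrained vector satisfies w = J conj(w), its second half is determined by the first, which is why the number of free adaptive variables is roughly halved.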

    DNN-Based Multi-Frame MVDR Filtering for Single-Microphone Speech Enhancement

    Multi-frame approaches for single-microphone speech enhancement, e.g., the multi-frame minimum-variance-distortionless-response (MVDR) filter, are able to exploit speech correlations across neighboring time frames. In contrast to single-frame approaches such as the Wiener gain, it has been shown that multi-frame approaches achieve substantial noise reduction with hardly any speech distortion, provided that accurate estimates of the correlation matrices, and especially of the speech interframe correlation (IFC) vector, are available. Typical estimation procedures for the correlation matrices and the IFC vector require an estimate of the speech presence probability (SPP) in each time-frequency bin. In this paper, we propose to use a bi-directional long short-term memory deep neural network (DNN) to estimate a speech mask and a noise mask for each time-frequency bin, from which two different SPP estimates are derived. To achieve robust performance, the DNN is trained on various noise types and signal-to-noise ratios. Experimental results show that the multi-frame MVDR filter in combination with the proposed data-driven SPP estimator yields increased speech quality compared to a state-of-the-art model-based estimator.
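The multi-frame MVDR filter mentioned above has a closed form analogous to the spatial MVDR beamformer, with the speech IFC vector gamma playing the role of the steering vector: w = Rn^{-1} gamma / (gamma^H Rn^{-1} gamma). A minimal numerical sketch with made-up quantities (the frame count and all matrix and vector values are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 4                            # number of consecutive time frames (assumed)

# Toy Hermitian positive definite noise correlation matrix Rn and a toy
# speech interframe correlation (IFC) vector gamma, normalised so that
# the current frame has coefficient 1.
A = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
Rn = A @ A.conj().T + N * np.eye(N)
gamma = np.array([1.0, 0.7, 0.4, 0.2], dtype=complex)

# Multi-frame MVDR filter: w = Rn^{-1} gamma / (gamma^H Rn^{-1} gamma)
t = np.linalg.solve(Rn, gamma)
w = t / (gamma.conj() @ t)
```

The denominator gamma^H Rn^{-1} gamma is real and positive because Rn is Hermitian positive definite, and the resulting filter satisfies the distortionless constraint w^H gamma = 1 while minimising the residual noise power w^H Rn w, which is why an accurate IFC estimate is so critical.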