21,601 research outputs found

    Blind MultiChannel Identification and Equalization for Dereverberation and Noise Reduction based on Convolutive Transfer Function

    This paper addresses the problems of blind channel identification and multichannel equalization for speech dereverberation and noise reduction. The time-domain cross-relation method is not suitable for blind room impulse response identification, due to the near-common zeros of the long impulse responses. We extend the cross-relation method to the short-time Fourier transform (STFT) domain, in which the time-domain impulse responses are approximately represented by convolutive transfer functions (CTFs) with far fewer coefficients. The CTFs suffer from common zeros caused by the oversampled STFT. We propose to identify the CTFs based on the STFT with oversampled signals and critically sampled CTFs, which is a good compromise between the frequency aliasing of the signals and the common-zeros problem of the CTFs. In addition, a normalization of the CTFs is proposed to remove the gain ambiguity across sub-bands. In the STFT domain, the identified CTFs are used for multichannel equalization, in which the sparsity of speech signals is exploited. We propose to perform inverse filtering by minimizing the $\ell_1$-norm of the source signal, with the relaxed $\ell_2$-norm fitting error between the microphone signals and the convolution of the estimated source signal with the CTFs used as a constraint. This method is advantageous in that the noise can be reduced by relaxing the $\ell_2$-norm to a tolerance corresponding to the noise power, and this tolerance can be set automatically. Experiments confirm the efficiency of the proposed method even under conditions with high reverberation levels and intense noise.
    Comment: 13 pages, 5 figures, 5 tables
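    The constrained inverse-filtering step lends itself to a compact convex formulation. Below is a minimal sketch in Python with cvxpy, assuming a stacked CTF convolution matrix C, stacked microphone STFT coefficients x, and a noise tolerance eps; these names are illustrative, and this is a sketch of the optimization problem rather than the authors' implementation:

    ```python
    import cvxpy as cp

    def l1_inverse_filter(C, x, eps):
        """Recover sparse source STFT coefficients s by solving
            minimize ||s||_1  subject to  ||C s - x||_2 <= eps,
        where C stacks the CTF convolutions of all channels, x stacks the
        microphone STFT coefficients, and eps is set from the noise power."""
        s = cp.Variable(C.shape[1], complex=True)
        problem = cp.Problem(cp.Minimize(cp.norm1(s)),
                             [cp.norm(C @ s - x, 2) <= eps])
        problem.solve()
        return s.value
    ```

    Relaxing the constraint radius eps to match the noise power is what yields the noise reduction described in the abstract.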

    Scanning and Sequential Decision Making for Multidimensional Data -- Part II: The Noisy Case

    We consider the problem of sequential decision making for random fields corrupted by noise. In this scenario, the decision maker observes a noisy version of the data, yet is judged with respect to the clean data. In particular, we first consider the problem of scanning and sequentially filtering noisy random fields. In this case, the sequential filter is free to choose the path over which it traverses the random field (e.g., a noisy image or video sequence), so it is natural to ask what the best achievable performance is and how sensitive this performance is to the choice of the scan. We formally define the problem of scanning and filtering, derive a bound on the best achievable performance, and quantify the excess loss incurred when nonoptimal scanners are used, compared to optimal scanning and filtering. We then discuss the problem of scanning and prediction for noisy random fields. This setting is a natural model for applications such as restoration and coding of noisy images. We formally define the problem of scanning and prediction of a noisy multidimensional array and relate the optimal performance to the clean scandictability defined by Merhav and Weissman. Moreover, bounds on the excess loss due to suboptimal scans are derived, and a universal prediction algorithm is suggested. This paper is the second part of a two-part paper; the first part dealt with scanning and sequential decision making on noiseless data arrays.
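    To make the scan-then-predict setting concrete, here is a toy sketch in Python assuming a fixed raster scan and a naive causal-neighbor predictor; the scan order, predictor, and squared loss are illustrative choices, not the paper's universal algorithm:

    ```python
    import numpy as np

    def raster_scan_loss(noisy, clean):
        """Traverse a noisy 2-D field in raster order, predict each site
        from already-scanned noisy neighbors (left and up), and accumulate
        squared loss judged against the clean data."""
        rows, cols = noisy.shape
        loss = 0.0
        for i in range(rows):
            for j in range(cols):
                left = noisy[i, j - 1] if j > 0 else 0.0  # causal neighbor
                up = noisy[i - 1, j] if i > 0 else 0.0    # causal neighbor
                pred = 0.5 * (left + up)                  # naive predictor
                loss += (clean[i, j] - pred) ** 2
        return loss / noisy.size
    ```

    The paper's question is precisely how much a fixed scan like this loses relative to the best achievable scanning-and-prediction strategy.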

    Controlling instabilities along a 3DVar analysis cycle by assimilating in the unstable subspace: a comparison with the EnKF

    A hybrid scheme obtained by combining 3DVar with Assimilation in the Unstable Subspace (3DVar-AUS) is tested in a quasi-geostrophic (QG) model, under perfect-model conditions, with a fixed observational network, with and without observational noise. The AUS scheme, originally formulated to assimilate adaptive observations, is used here to assimilate the fixed observations that fall in the regions of local maxima of the BDAS vectors (Bred vectors subject to assimilation), while the remaining observations are assimilated by 3DVar. The performance of the hybrid scheme is compared with that of 3DVar and of an EnKF. The improvement gained by 3DVar-AUS and the EnKF with respect to 3DVar alone is similar in the present model and observational configuration, while 3DVar-AUS outperforms the EnKF during the forecast stage. The 3DVar-AUS algorithm is easy to implement, and the results obtained under the idealized conditions of this study encourage further investigation toward an implementation in more realistic contexts.
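    For reference, the core of any 3DVar-type scheme is the variational analysis update. Below is a minimal sketch in Python of one linear analysis step under textbook assumptions (background state xb, observation vector y, linear observation operator H, background- and observation-error covariances B and R; all names illustrative); the AUS partitioning of observations is not shown:

    ```python
    import numpy as np

    def analysis_3dvar(xb, y, H, B, R):
        """One linear 3DVar analysis step: the minimizer of
            J(x) = (x - xb)^T B^{-1} (x - xb) + (y - Hx)^T R^{-1} (y - Hx)
        for linear H is  xa = xb + B H^T (H B H^T + R)^{-1} (y - H xb)."""
        innovation = y - H @ xb        # observation-minus-background misfit
        S = H @ B @ H.T + R            # innovation covariance
        return xb + B @ H.T @ np.linalg.solve(S, innovation)
    ```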

    How well can we measure and understand foregrounds with 21 cm experiments?

    Before it becomes a sensitive probe of the Epoch of Reionization, the Dark Ages, and fundamental physics, 21 cm tomography must successfully contend with the issue of foreground contamination. Broadband foreground sources are expected to be roughly four orders of magnitude larger than any cosmological signals, so precise foreground models will be necessary. Such foreground models often contain a large number of parameters, reflecting the complicated physics that governs foreground sources. In this paper, we concentrate on spectral modeling (neglecting, for instance, bright point source removal from spatial maps) and show that 21 cm tomography experiments will likely not be able to measure these parameters without large degeneracies, simply because the foreground spectra are so featureless and generic. However, we show that this is also an advantage, because it means that the foregrounds can be characterized to a high degree of accuracy once a small number of parameters (likely three or four, depending on one's instrumental specifications) are measured. This provides a simple explanation for why 21 cm foreground subtraction schemes are able to remove most of the contaminants by suppressing just a small handful of simple spectral forms. In addition, this suggests that the foreground modeling process should be relatively simple and will likely not be an impediment to the foreground subtraction schemes that are necessary for a successful 21 cm tomography experiment.
    Comment: 15 pages, 9 figures, 2 tables; replaced with accepted MNRAS version (slight quantitative changes to plots and tables, no changes to any conclusions)
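    As an illustration of the kind of few-parameter spectral model the abstract alludes to, here is a minimal sketch in Python fitting a low-order polynomial in log-frequency vs. log-temperature; the pivot frequency, parameter count, and function names are illustrative assumptions, not the paper's specific parameterization:

    ```python
    import numpy as np

    def fit_foreground_spectrum(freqs, spectrum, n_params=3):
        """Fit a smooth, featureless foreground spectrum with
            ln T(nu) = a0 + a1 ln(nu/nu0) + a2 [ln(nu/nu0)]^2 + ...
        Three or four coefficients typically suffice for power-law-like
        foregrounds.  Assumes spectrum > 0 (brightness temperatures)."""
        nu0 = np.mean(freqs)                       # hypothetical pivot frequency
        x = np.log(freqs / nu0)
        coeffs = np.polyfit(x, np.log(spectrum), n_params - 1)
        model = np.exp(np.polyval(coeffs, x))
        return coeffs, spectrum - model            # parameters and residual
    ```

    The residual after subtracting such a smooth fit is what a 21 cm experiment would hope contains the cosmological signal.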

    A Gradient Descent Algorithm on the Grassman Manifold for Matrix Completion

    We consider the problem of reconstructing a low-rank matrix from a small subset of its entries. In this paper, we describe the implementation of an efficient algorithm called OptSpace, based on singular value decomposition followed by local manifold optimization, for solving the low-rank matrix completion problem. It has been shown that if the number of revealed entries is large enough, the output of the singular value decomposition gives a good estimate of the original matrix, so that local optimization reconstructs the correct matrix with high probability. We present numerical results showing that this algorithm can reconstruct the low-rank matrix exactly from a very small subset of its entries. We further study the robustness of the algorithm with respect to noise, and its performance on actual collaborative filtering datasets.
    Comment: 26 pages, 15 figures
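    A minimal sketch in Python of the SVD-based initialization stage in the spirit of OptSpace; the trimming of over-represented rows and columns and the subsequent manifold gradient descent are omitted, and variable names are illustrative:

    ```python
    import numpy as np

    def spectral_init(M_obs, mask, r):
        """Form the zero-filled revealed-entry matrix, rescale by the
        sampling rate, and take its rank-r SVD as the starting point
        for local (manifold) optimization."""
        n, m = M_obs.shape
        p = mask.sum() / (n * m)                   # fraction of revealed entries
        X = np.where(mask, M_obs, 0.0) / p         # rescaled zero-filled matrix
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        return U[:, :r] @ np.diag(s[:r]) @ Vt[:r]  # rank-r estimate
    ```

    When enough entries are revealed, this spectral estimate is already close to the true matrix, which is why the local optimization that follows converges to the exact completion with high probability.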