
    Blind MultiChannel Identification and Equalization for Dereverberation and Noise Reduction based on Convolutive Transfer Function

    This paper addresses the problems of blind channel identification and multichannel equalization for speech dereverberation and noise reduction. The time-domain cross-relation method is not suitable for blind room impulse response identification, due to the near-common zeros of the long impulse responses. We extend the cross-relation method to the short-time Fourier transform (STFT) domain, in which the time-domain impulse responses are approximately represented by convolutive transfer functions (CTFs) with many fewer coefficients. The CTFs suffer from common zeros caused by the oversampled STFT. We propose to identify the CTFs based on the STFT with oversampled signals and critically sampled CTFs, which is a good compromise between the frequency aliasing of the signals and the common-zeros problem of the CTFs. In addition, a normalization of the CTFs is proposed to remove the gain ambiguity across sub-bands. In the STFT domain, the identified CTFs are used for multichannel equalization, in which the sparsity of speech signals is exploited. We propose to perform inverse filtering by minimizing the $\ell_1$-norm of the source signal, subject to a relaxed $\ell_2$-norm constraint on the fitting error between the microphone signals and the convolution of the estimated source signal with the CTFs. This method is advantageous in that the noise can be reduced by relaxing the $\ell_2$-norm to a tolerance corresponding to the noise power, and the tolerance can be set automatically. The experiments confirm the efficiency of the proposed method even under conditions with high reverberation levels and intense noise.
    Comment: 13 pages, 5 figures, 5 tables
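    As a rough illustration of the inverse-filtering step described above, the sketch below solves the constrained problem (minimize the $\ell_1$-norm of the source subject to an $\ell_2$ fitting tolerance) with cvxpy. The matrix A is a stand-in for the stacked CTF convolution, and all names and sizes are illustrative, not taken from the paper.

```python
# Hypothetical sketch of the constrained l1 inverse-filtering step:
# minimize ||s||_1  subject to  ||A s - x||_2 <= eps,
# where A stands in for the stacked CTF convolution of all channels and
# eps is tied to the (assumed known) noise power.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n_obs, n_src = 200, 400                   # stacked mic samples, source frame length
A = rng.standard_normal((n_obs, n_src))   # stand-in for the CTF convolution matrix
x = rng.standard_normal(n_obs)            # stand-in for the stacked microphone signals
eps = 0.1 * np.linalg.norm(x)             # tolerance set from the noise power

s = cp.Variable(n_src)
problem = cp.Problem(cp.Minimize(cp.norm1(s)),
                     [cp.norm(A @ s - x, 2) <= eps])
problem.solve()
print("recovered frame, l1-norm =", np.linalg.norm(s.value, 1))
```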

    Multichannel sparse recovery of complex-valued signals using Huber's criterion

    In this paper, we generalize Huber's criterion to the multichannel sparse recovery problem with complex-valued measurements, where the objective is to recover jointly sparse unknown signal vectors from multiple measurement vectors that are different linear combinations of the same known elementary vectors. This requires careful characterization of robust complex-valued loss functions, as well as of Huber's criterion function for the multivariate sparse regression problem. We devise a greedy algorithm based on the simultaneous normalized iterative hard thresholding (SNIHT) algorithm. Unlike the conventional SNIHT method, our algorithm, referred to as HUB-SNIHT, is robust under heavy-tailed non-Gaussian noise conditions, yet has a negligible performance loss compared to SNIHT under Gaussian noise. The usefulness of the method is illustrated in a source localization application with sensor arrays.
    Comment: To appear in CoSeRa'15 (Pisa, Italy, June 16-19, 2015). arXiv admin note: text overlap with arXiv:1502.0244
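    For intuition about the robust loss involved, here is a minimal numpy sketch of a Huber-type rho-function applied to the modulus of complex residuals; the threshold and scale handling are simplified relative to the paper's criterion, and the function name is hypothetical.

```python
# Minimal sketch (not the paper's exact criterion): a Huber rho-function on
# the modulus of complex residuals. Quadratic for small |r|, linear for large
# |r|, which caps the influence of heavy-tailed outliers.
import numpy as np

def huber_rho(r: np.ndarray, c: float = 1.345) -> np.ndarray:
    """Elementwise Huber loss on complex residuals r."""
    a = np.abs(r)
    quad = 0.5 * a**2
    lin = c * a - 0.5 * c**2
    return np.where(a <= c, quad, lin)

r = np.array([0.1 + 0.2j, 3.0 - 4.0j])   # small residual vs. gross outlier
print(huber_rho(r))                       # outlier contributes linearly, not quadratically
```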

    Spherical deconvolution of multichannel diffusion MRI data with non-Gaussian noise models and spatial regularization

    Spherical deconvolution (SD) methods are widely used to estimate the intra-voxel white-matter fiber orientations from diffusion MRI data. However, while some of these methods assume a zero-mean Gaussian distribution for the underlying noise, its real distribution is known to be non-Gaussian and to depend on the methodology used to combine multichannel signals. Indeed, the two prevailing methods for multichannel signal combination lead to Rician and noncentral Chi noise distributions. Here we develop a Robust and Unbiased Model-BAsed Spherical Deconvolution (RUMBA-SD) technique, intended to deal with realistic MRI noise, based on a Richardson-Lucy (RL) algorithm adapted to Rician and noncentral Chi likelihood models. To quantify the benefits of using proper noise models, RUMBA-SD was compared with dRL-SD, a well-established method based on the RL algorithm for Gaussian noise. Another aim of the study was to quantify the impact of including a total variation (TV) spatial regularization term in the estimation framework. To do this, we developed TV spatially-regularized versions of both the RUMBA-SD and dRL-SD algorithms. The evaluation was performed by comparing various quality metrics on 132 three-dimensional synthetic phantoms involving different inter-fiber angles and volume fractions, contaminated with noise mimicking the patterns generated by data processing in multichannel scanners. The results demonstrate that the inclusion of proper likelihood models leads to an increased ability to resolve fiber crossings with smaller inter-fiber angles and to better detect non-dominant fibers. The inclusion of TV regularization dramatically improved the resolution power of both techniques. The above findings were also verified in brain data.
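    For orientation, here is a minimal sketch of the RL-type multiplicative update that Gaussian-noise SD methods such as dRL-SD build on; RUMBA-SD replaces this with updates derived from Rician and noncentral-Chi likelihoods, which are not reproduced here. The kernel matrix H and all sizes are stand-ins.

```python
# Sketch of a Gaussian-model RL-type multiplicative update for SD:
#   f <- f * (H^T s) / (H^T H f)
# H: (n_meas, n_dirs) nonnegative response kernel; s: (n_meas,) dwMRI signal.
# Every factor stays nonnegative, so f remains a valid (nonnegative) FOD.
import numpy as np

def rl_sd_update(H, s, n_iter=200, eps=1e-12):
    f = np.full(H.shape[1], 1.0 / H.shape[1])  # flat nonnegative init
    Ht_s = H.T @ s
    for _ in range(n_iter):
        f *= Ht_s / (H.T @ (H @ f) + eps)
    return f

rng = np.random.default_rng(0)
H = rng.random((30, 10))                   # stand-in nonnegative kernel
f_true = np.zeros(10); f_true[[2, 7]] = [0.7, 0.3]
f_hat = rl_sd_update(H, H @ f_true)
print(np.round(f_hat, 2))                  # mass concentrates near entries 2 and 7
```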

    On the Sample Complexity of Multichannel Frequency Estimation via Convex Optimization

    The use of multichannel data in line spectral estimation (or frequency estimation) is common for improving the estimation accuracy in array processing, structural health monitoring, wireless communications, and more. Recently proposed atomic norm methods have attracted considerable attention due to their provable superiority in accuracy, flexibility, and robustness compared with conventional approaches. In this paper, we analyze atomic norm minimization for multichannel frequency estimation from noiseless compressive data, showing that the sample size per channel that ensures exact estimation decreases as the number of channels increases, under mild conditions. In particular, given $L$ channels, on the order of $K\left(\log K\right)\left(1+\frac{1}{L}\log N\right)$ samples per channel, selected randomly from $N$ equispaced samples, suffice to ensure, with high probability, exact estimation of $K$ frequencies that are normalized and mutually separated by at least $\frac{4}{N}$. Numerical results are provided corroborating our analysis.
    Comment: 14 pages, double column, to appear in IEEE Trans. Information Theory
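    Below is a rough cvxpy sketch of the semidefinite characterization commonly used to compute multichannel atomic norm minimization from partially observed rows of the data matrix. This follows the standard SDP form from the atomic-norm literature, not necessarily the paper's exact program; dimensions, the test data, and the omitted Vandermonde post-processing are all illustrative.

```python
# Rough SDP sketch: given observed rows Omega of Y in C^{N x L}, minimize
#   (1/2) * ( tr(T(u))/N + tr(W) )  s.t.  [[T(u), Z], [Z^H, W]] >= 0, Z_Omega = Y_Omega,
# with T(u) Hermitian Toeplitz. All sizes/data are illustrative.
import numpy as np
import cvxpy as cp

N, L, K = 16, 3, 2
rng = np.random.default_rng(1)
freqs = np.array([0.10, 0.40])                       # separation > 4/N = 0.25
amps = rng.standard_normal((K, L)) + 1j * rng.standard_normal((K, L))
Y = np.exp(2j * np.pi * np.outer(np.arange(N), freqs)) @ amps   # (N, L)
Omega = np.sort(rng.choice(N, size=12, replace=False))          # observed rows

u = cp.Variable(N, complex=True)                     # first column of T(u)
M = cp.Variable((N + L, N + L), hermitian=True)
T = cp.vstack([cp.hstack([u[i - j] if i >= j else cp.conj(u[j - i])
                          for j in range(N)]) for i in range(N)])

cons = [M >> 0, M[:N, :N] == T]
cons += [M[int(i), N + l] == Y[i, l] for i in Omega for l in range(L)]
obj = cp.Minimize(0.5 * (cp.real(cp.trace(T)) / N
                         + cp.real(cp.trace(M[N:, N:]))))
cp.Problem(obj, cons).solve()
# The K frequencies are then read off from a Vandermonde decomposition of T(u).
```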

    Robust equalization of multichannel acoustic systems

    In most real-world acoustical scenarios, speech signals captured by distant microphones from a source are reverberated due to multipath propagation, and the reverberation may impair speech intelligibility. Speech dereverberation can be achieved by equalizing the channels from the source to the microphones. Equalization systems can be computed using estimates of multichannel acoustic impulse responses. However, the estimates obtained from system identification always include errors; the fact that an equalization system is able to equalize the estimated multichannel acoustic system does not mean that it is able to equalize the true system. The objective of this thesis is to propose and investigate robust equalization methods for multichannel acoustic systems in the presence of system identification errors. Equalization systems can be computed using the multiple-input/output inverse theorem or the multichannel least-squares method. However, equalization systems obtained from these methods are very sensitive to system identification errors. A study of the multichannel least-squares method with respect to two classes of characteristic channel zeros is conducted. Accordingly, a relaxed multichannel least-squares method is proposed. Channel shortening in connection with the multiple-input/output inverse theorem and the relaxed multichannel least-squares method is discussed. Two algorithms taking into account the system identification errors are developed. Firstly, an optimally-stopped weighted conjugate gradient algorithm is proposed. A conjugate gradient iterative method is employed to compute the equalization system. The iteration process is stopped optimally with respect to system identification errors. Secondly, a system-identification-error-robust equalization method exploring the use of error models is presented, which incorporates system identification error models in the weighted multichannel least-squares formulation.
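    As a concrete illustration of the multichannel least-squares design discussed above, the following numpy sketch stacks the channel convolution matrices and solves for inverse filters targeting a delayed unit impulse; the filter lengths, the delay, and the random impulse responses are illustrative choices, not the thesis's settings.

```python
# Minimal sketch of multichannel least-squares (MINT-style) equalizer design:
# stack the channel convolution matrices H_m side by side and solve for
# inverse filters g mapping the overall system to a delayed impulse d.
import numpy as np
from scipy.linalg import toeplitz

def conv_matrix(h, Lg):
    """(Lh+Lg-1) x Lg linear-convolution (Toeplitz) matrix of response h."""
    col = np.r_[h, np.zeros(Lg - 1)]
    row = np.r_[h[0], np.zeros(Lg - 1)]
    return toeplitz(col, row)

rng = np.random.default_rng(0)
M, Lh, Lg = 3, 64, 63                       # mics, RIR length, filter length
H = np.hstack([conv_matrix(rng.standard_normal(Lh), Lg) for _ in range(M)])
d = np.zeros(Lh + Lg - 1); d[20] = 1.0      # target: delayed unit impulse
g, *_ = np.linalg.lstsq(H, d, rcond=None)   # stacked equalization filters
print("equalization error:", np.linalg.norm(H @ g - d))
```

    With identification errors, the same least-squares fit is applied to erroneous convolution matrices, which is exactly why the thesis pursues regularized and error-model-aware variants.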

    Calibrating spectral estimation for the LISA Technology Package with multichannel synthetic noise generation

    The scientific objectives of the LISA Technology Package (LTP) experiment, on board the LISA Pathfinder mission, demand an accurate calibration and validation of the data-analysis tools in advance of the mission launch. The required levels of confidence in the mission outcomes can be reached only through intense activity on synthetically generated data. A flexible procedure allowing the generation of cross-correlated stationary noise time series was set up. Multichannel time series with the desired cross-correlation behavior can be generated once a model for a multichannel cross-spectral matrix is provided. The core of the procedure is the synthesis of a noise-coloring multichannel filter through a frequency-by-frequency eigendecomposition of the model cross-spectral matrix and a Z-domain fit. The common problem of initial transients in noise time series is solved with a proper initialization of the filter's recursive equations. The noise generator's performance was tested in a two-dimensional case study of the LTP dynamics along the two principal channels of the sensing interferometer.
    Comment: Accepted for publication in Physical Review D (http://prd.aps.org/
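    A simplified sketch of the eigendecomposition step follows: it shapes independent white noise with a per-frequency matrix square root of a toy model cross-spectral matrix. The paper's procedure goes further, fitting a recursive Z-domain filter and initializing it to suppress transients; neither of those steps is reproduced here, and the toy PSD model is an assumption.

```python
# Simplified sketch: generate cross-correlated stationary noise from a model
# cross-spectral matrix S(f) by taking a matrix square root of S at each
# frequency (via eigendecomposition) and shaping independent white noise.
import numpy as np

def colored_noise(S, n_samples, rng):
    """S: (n_freq, C, C) Hermitian PSD model on the rFFT grid of n_samples."""
    n_freq, C, _ = S.shape
    W = rng.standard_normal((n_freq, C)) + 1j * rng.standard_normal((n_freq, C))
    X = np.empty_like(W)
    for k in range(n_freq):
        lam, V = np.linalg.eigh(S[k])              # S = V diag(lam) V^H
        Lk = V * np.sqrt(np.clip(lam, 0, None))    # matrix square root of S[k]
        X[k] = Lk @ W[k]
    return np.fft.irfft(X, n=n_samples, axis=0)    # one time series per channel

rng = np.random.default_rng(0)
n = 4096
f = np.fft.rfftfreq(n, d=1.0)
S = np.zeros((f.size, 2, 2), complex)
S[:, 0, 0] = S[:, 1, 1] = 1.0 / (1.0 + (f / 0.05) ** 2)  # toy low-pass PSDs
S[:, 0, 1] = S[:, 1, 0] = 0.8 * S[:, 0, 0]               # strong cross-correlation
x = colored_noise(S, n, rng)
print(x.shape)   # (4096, 2): two correlated noise channels
```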

    Nonparametric Simultaneous Sparse Recovery: an Application to Source Localization

    We consider the multichannel sparse recovery problem, where the objective is to find a good recovery of jointly sparse unknown signal vectors from given multiple measurement vectors that are different linear combinations of the same known elementary vectors. Many popular greedy or convex algorithms perform poorly under non-Gaussian heavy-tailed noise conditions or in the face of outliers. In this paper, we propose the use of mixed $\ell_{p,q}$ norms on the data-fidelity (residual matrix) term and the conventional $\ell_{0,2}$-norm constraint on the signal matrix to promote row-sparsity. We devise a greedy pursuit algorithm based on the simultaneous normalized iterative hard thresholding (SNIHT) algorithm. Simulation studies highlight the effectiveness of the proposed approaches in coping with different noise environments (i.i.d., row i.i.d., etc.) and with outliers. The usefulness of the methods is illustrated in a source localization application with sensor arrays.
    Comment: Paper appears in Proc. European Signal Processing Conference (EUSIPCO'15), Nice, France, Aug 31 -- Sep 4, 2015
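    The row-sparsity projection at the core of SNIHT-type algorithms can be sketched in a few lines: keep the K rows of the signal matrix with the largest $\ell_2$ norms (the $\ell_{0,2}$ constraint) and zero the rest. Names and sizes below are illustrative.

```python
# Minimal sketch of the l_{0,2} projection used by SNIHT-type algorithms:
# retain the K rows of X with largest row-l2 norms, zero all other rows.
import numpy as np

def hard_threshold_rows(X: np.ndarray, K: int) -> np.ndarray:
    """Project X onto the set of matrices with at most K nonzero rows."""
    row_norms = np.linalg.norm(X, axis=1)
    keep = np.argsort(row_norms)[-K:]      # indices of the K strongest rows
    out = np.zeros_like(X)
    out[keep] = X[keep]
    return out

X = np.random.default_rng(0).standard_normal((8, 3))
print(np.nonzero(np.linalg.norm(hard_threshold_rows(X, 2), axis=1))[0])
```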

    Compressive Source Separation: Theory and Methods for Hyperspectral Imaging

    With the development of many high-resolution data acquisition systems and the global requirement to lower energy consumption, the development of efficient sensing techniques becomes critical. Recently, Compressed Sampling (CS) techniques, which exploit the sparsity of signals, have made it possible to reconstruct signals and images from fewer measurements than the traditional Nyquist sensing approach requires. However, multichannel signals like hyperspectral images (HSI) have additional structure, such as inter-channel correlations, that is not taken into account in the classical CS scheme. In this paper we exploit the linear mixture of sources model, that is, the assumption that the multichannel signal is composed of a linear combination of sources, each with its own spectral signature, and propose new sampling schemes exploiting this model to considerably decrease the number of measurements needed for acquisition and source separation. Moreover, we give theoretical lower bounds on the number of measurements required to reconstruct both the multichannel signal and its sources. We also propose optimization algorithms, report extensive experiments on our target application, HSI, and show that our approach recovers HSI with far fewer measurements and less computational effort than traditional CS approaches.
    Comment: 32 pages
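    A toy sketch of the assumed linear mixture model and a per-channel compressive acquisition follows; the shapes, mixing matrix, and measurement operator are illustrative stand-ins and do not reproduce the paper's sampling schemes.

```python
# Toy sketch of the linear mixture-of-sources model: the HSI data matrix is
# X = A @ S, with A holding spectral signatures and S per-pixel source
# abundances, so X has low "source" rank that joint recovery can exploit.
import numpy as np

rng = np.random.default_rng(0)
n_bands, n_src, n_pix = 64, 3, 1000
A = np.abs(rng.standard_normal((n_bands, n_src)))   # spectral signatures
S = np.abs(rng.standard_normal((n_src, n_pix)))     # per-pixel abundances
X = A @ S                                           # multichannel (HSI) signal

# Compressive acquisition: each band is sensed with m << n_pix random
# projections; the shared mixture ties the channels together, which is what
# allows recovery from fewer measurements than independent per-band CS.
m = 200
Phi = rng.standard_normal((m, n_pix)) / np.sqrt(m)
Y = X @ Phi.T                                       # (n_bands, m) measurements
print(Y.shape, "rank of X:", np.linalg.matrix_rank(X))  # rank <= n_src
```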

    Multichannel Sampling of Pulse Streams at the Rate of Innovation

    We consider minimal-rate sampling schemes for infinite streams of delayed and weighted versions of a known pulse shape. The minimal sampling rate for these parametric signals is referred to as the rate of innovation and is equal to the number of degrees of freedom per unit time. Although sampling of infinite pulse streams was treated in previous works, either the rate of innovation was not achieved, or the pulse shape was limited to Diracs. In this paper we propose a multichannel architecture for sampling pulse streams with arbitrary shape, operating at the rate of innovation. Our approach is based on modulating the input signal with a set of properly chosen waveforms, followed by a bank of integrators. This architecture is motivated by recent work on sub-Nyquist sampling of multiband signals. We show that the pulse stream can be recovered from the proposed minimal-rate samples, using standard tools from spectral estimation, in a stable way even at high rates of innovation. In addition, we address practical implementation issues, such as reduction of hardware complexity and immunity to failure in the sampling channels. The resulting scheme is flexible and exhibits better noise robustness than previous approaches.
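    The spectral-estimation recovery step can be illustrated on a Dirac stream: from 2K+1 Fourier coefficients, the roots of an annihilating filter encode the unknown delays (the classic Prony-style finite-rate-of-innovation recovery). The sketch below assumes a front end, such as the paper's modulate-and-integrate bank, has already produced such coefficients; all values are illustrative.

```python
# Annihilating-filter recovery of K=2 Dirac delays from 2K+1 Fourier
# coefficients X[m] = sum_k a_k * exp(-j 2 pi m t_k / tau).
import numpy as np

K, tau = 2, 1.0
t_true = np.array([0.21, 0.67])          # unknown delays
a_true = np.array([1.0, -0.5])           # unknown amplitudes
m = np.arange(2 * K + 1)                 # 2K+1 coefficients suffice
X = (a_true * np.exp(-2j * np.pi * np.outer(m, t_true) / tau)).sum(axis=1)

# Filter h (length K+1) annihilates X: its Toeplitz system has h in its
# nullspace, and the roots of h are exp(-j 2 pi t_k / tau).
T = np.array([[X[K + i - j] for j in range(K + 1)] for i in range(K)])
_, _, Vh = np.linalg.svd(T)
h = Vh[-1].conj()                        # nullspace vector of T
roots = np.roots(h)
t_est = np.sort((-np.angle(roots) * tau / (2 * np.pi)) % tau)
print(t_est)                             # ~ [0.21, 0.67]
```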

    Smart helmet: wearable multichannel ECG & EEG

    Modern wearable technologies have enabled continuous recording of vital signs; however, for activities such as cycling, motor racing, or military engagement, a helmet with embedded sensors would provide maximum convenience and the opportunity to monitor both the vital signs and the electroencephalogram (EEG) simultaneously. To this end, we investigate the feasibility of recording the electrocardiogram (ECG), respiration, and EEG from face-lead locations, by embedding multiple electrodes within a standard helmet. The electrode positions are at the lower jaw, mastoids, and forehead, while for validation purposes a respiration belt around the thorax and a reference ECG from the chest serve as ground truth to assess the performance. The within-helmet EEG is verified by exposing the subjects to periodic visual and auditory stimuli and screening the recordings for the steady-state evoked potentials in response to these stimuli. Cycling and walking are chosen as real-world activities to illustrate how to deal with the irregular motion artifacts so induced, which contaminate the recordings. We also propose a multivariate R-peak detection algorithm suitable for such noisy environments. Recordings in real-world scenarios support a proof of concept of the feasibility of recording vital signs and EEG from the proposed smart helmet.
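    As an illustrative baseline (not the paper's multivariate algorithm), the sketch below fuses several noisy ECG channels by bandpassing, squaring, and summing, then picks R peaks on the fused envelope with scipy; the band edges, refractory gap, and threshold are assumptions.

```python
# Illustrative multichannel R-peak baseline: bandpass each channel around the
# QRS energy band, square, sum across channels, and pick peaks on the result.
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def detect_r_peaks(ecg: np.ndarray, fs: float) -> np.ndarray:
    """ecg: (n_channels, n_samples) array. Returns R-peak sample indices."""
    b, a = butter(3, [5.0, 15.0], btype="bandpass", fs=fs)   # QRS energy band
    energy = (filtfilt(b, a, ecg, axis=1) ** 2).sum(axis=0)  # fuse channels
    min_gap = int(0.3 * fs)                                  # ~300 ms refractory period
    peaks, _ = find_peaks(energy, distance=min_gap,
                          height=0.3 * energy.max())
    return peaks
```

    Summing channel energies before peak picking is one simple way to gain robustness when any single face-lead channel is corrupted by motion artifacts.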