
    Calibrating spectral estimation for the LISA Technology Package with multichannel synthetic noise generation

    The scientific objectives of the LISA Technology Package (LTP) experiment, on board the LISA Pathfinder mission, demand an accurate calibration and validation of the data analysis tools in advance of the mission launch. The levels of confidence required of the mission outcomes can be reached only through intense activity on synthetically generated data. A flexible procedure allowing the generation of cross-correlated stationary noise time series was set up. Multichannel time series with the desired cross-correlation behavior can be generated once a model for the multichannel cross-spectral matrix is provided. The core of the procedure is the synthesis of a noise-coloring multichannel filter through a frequency-by-frequency eigendecomposition of the model cross-spectral matrix and a Z-domain fit. The common problem of initial transients in noise time series is solved with a proper initialization of the filter's recursive equations. The noise generator's performance was tested in a two-dimensional case study of the LTP dynamics along the two principal channels of the sensing interferometer. (Comment: Accepted for publication in Physical Review D, http://prd.aps.org/)
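
    A minimal sketch of the frequency-by-frequency eigendecomposition idea, in Python with NumPy (all names and the two-channel model below are illustrative, not from the paper). It shapes white noise directly in the frequency domain; the paper instead fits a Z-domain filter to the decomposed spectrum, which avoids the circular correlations implicit in FFT synthesis and permits the transient-free recursive initialization described above.

        import numpy as np

        def synth_multichannel_noise(csd_model, fs, n_samples, seed=None):
            # csd_model(f) returns an (n_ch, n_ch) Hermitian cross-spectral
            # matrix (one-sided PSD convention) at frequency f in Hz.
            rng = np.random.default_rng(seed)
            freqs = np.fft.rfftfreq(n_samples, 1.0 / fs)
            n_ch = csd_model(freqs[1]).shape[0]
            spec = np.zeros((n_ch, freqs.size), dtype=complex)
            for k, f in enumerate(freqs):
                if f == 0.0:
                    continue                            # leave the DC bin at zero
                lam, V = np.linalg.eigh(csd_model(f))   # S(f) = V diag(lam) V^H
                L = V @ np.diag(np.sqrt(np.clip(lam, 0.0, None)))
                w = (rng.normal(size=n_ch) + 1j * rng.normal(size=n_ch)) / np.sqrt(2)
                spec[:, k] = np.sqrt(fs * n_samples / 2.0) * (L @ w)
            return np.fft.irfft(spec, n=n_samples, axis=1)

        # Illustrative two-channel model: 1/f spectra with 50% coherence
        def csd_model(f):
            p = 1e-3 / f
            return np.array([[p, 0.5 * p], [0.5 * p, p]])

        x = synth_multichannel_noise(csd_model, fs=10.0, n_samples=2**14, seed=0)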

    Multichannel Speech Separation and Enhancement Using the Convolutive Transfer Function

    This paper addresses the problem of speech separation and enhancement from multichannel convolutive and noisy mixtures, assuming known mixing filters. We propose to perform the speech separation and enhancement task in the short-time Fourier transform domain, using the convolutive transfer function (CTF) approximation. Compared to time-domain filters, the CTF has far fewer taps; consequently it has fewer near-common zeros among channels and a lower computational complexity. The work proposes three speech-source recovery methods: (i) multichannel inverse filtering, i.e. the multiple-input/output inverse theorem (MINT), exploited in the CTF domain; (ii) for the multi-source case, a beamforming-like multichannel inverse filtering method applying single-source MINT and using power minimization, which is suitable whenever the source CTFs are not all known; and (iii) a constrained Lasso method, where the sources are recovered by minimizing the ℓ1-norm to impose their spectral sparsity, under the constraint that the ℓ2-norm fitting cost, between the microphone signals and the mixing model involving the unknown source signals, is less than a tolerance. Noise can be reduced by setting this tolerance according to the noise power. Experiments under various acoustic conditions are carried out to evaluate the three proposed methods; the comparison among them, as well as with baseline methods, is presented. (Comment: Submitted to IEEE/ACM Transactions on Audio, Speech and Language Processing)
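
    As a concrete illustration of method (i), here is a minimal single-bin MINT sketch in Python/NumPy (the variable names, tap counts, and channel count are illustrative, not from the paper): given the CTF taps of each channel in one STFT frequency bin, it solves a least-squares system so that the channel filters followed by their inverse filters sum to a unit impulse.

        import numpy as np
        from scipy.linalg import toeplitz

        def mint_inverse(ctf, n_g):
            # ctf: (n_ch, n_h) complex CTF taps per channel in one frequency bin.
            # n_g: length of each inverse filter. Returns g such that
            # sum_i h_i * g_i (convolution) approximates a unit impulse.
            n_ch, n_h = ctf.shape
            n_out = n_h + n_g - 1
            # Stack the per-channel convolution (Toeplitz) matrices side by side
            H = np.hstack([
                toeplitz(np.r_[ctf[i], np.zeros(n_g - 1)],
                         np.r_[ctf[i][0], np.zeros(n_g - 1)])
                for i in range(n_ch)
            ])
            d = np.zeros(n_out)
            d[0] = 1.0                                   # target: unit impulse
            g, *_ = np.linalg.lstsq(H, d, rcond=None)
            return g.reshape(n_ch, n_g)

        # Toy usage: 3 channels, 16 complex CTF taps each, 20-tap inverse filters
        rng = np.random.default_rng(0)
        h = rng.normal(size=(3, 16)) + 1j * rng.normal(size=(3, 16))
        g = mint_inverse(h, n_g=20)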

    Detection of signals by weighted integrate-and-dump filter

    A Weighted Integrate-and-Dump Filter (WIDF) is presented that reduces the losses in telemetry symbol signal-to-noise ratio (SNR) which occur in digital Integrate-and-Dump Filters (IDFs) when the samples are not phase-locked to the input data symbol clock. The Minimum Mean Square Error (MMSE) criterion is used to derive a set of weights for approximating the analog integrate-and-dump filter, which is the matched filter for detection of signals in additive white Gaussian noise. This new digital matched filter yields a considerable performance improvement over unweighted digital matched filters. An example is presented for a sampling rate of four times the symbol rate. As the sampling offset (or phase) varies with respect to the data symbol boundaries, the output SNR varies by 1 dB for an unweighted IDF, but only by 0.3 dB for the optimum WIDF, averaged over random data patterns. This improvement in performance relative to the unweighted IDF means that significantly lower sampling and processing rates can be used for given telemetry symbol rates, resulting in reduced system cost.
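
    The following toy sketch (Python/NumPy) conveys the idea; it is not the paper's MMSE derivation. When the sampling instants straddle the symbol boundaries, the edge samples should receive fractional weights so that the weighted sum better approximates the analog integral over one symbol; here each sample is simply weighted by the overlap of its nearest-neighbour hold interval with the symbol period.

        import numpy as np

        def widf_weights(n_sps, tau):
            # n_sps: samples per symbol (the paper's example uses 4)
            # tau:   sampling offset in [0, 1) sample intervals w.r.t. the
            #        symbol edge. Returns weights for n_sps + 1 samples.
            t = (np.arange(n_sps + 1) + tau) / n_sps   # sample instants in [0, 1+)
            half = 0.5 / n_sps                         # half a sample interval
            lo = np.clip(t - half, 0.0, 1.0)           # clip each hold interval
            hi = np.clip(t + half, 0.0, 1.0)           # to the symbol [0, 1)
            return (hi - lo) * n_sps                   # normalise: weights sum to n_sps

        # Offset of zero: the two edge samples each get half weight
        print(widf_weights(4, 0.0))    # [0.5, 1.0, 1.0, 1.0, 0.5]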

    Sub-Nyquist Channel Estimation over IEEE 802.11ad Link

    Nowadays, millimeter-wave communication centered on the 60 GHz radio-frequency band is increasingly the preferred technology for near-field communication, since it provides a transmission bandwidth that is several GHz wide. The IEEE 802.11ad standard has been developed for commercial wireless local area networks in the 60 GHz transmission environment. Receivers designed to process IEEE 802.11ad waveforms employ very high-rate analog-to-digital converters, and therefore reducing the receiver sampling rate can be useful. In this work, we study the problem of low-rate channel estimation over the IEEE 802.11ad 60 GHz communication link by harnessing sparsity in the channel impulse response. In particular, we focus on single-carrier modulation and exploit the special structure of the 802.11ad waveform embedded in the channel estimation field of its single-carrier physical-layer frame. We examine various sub-Nyquist sampling methods for this problem and recover the channel using compressed sensing techniques. Our numerical experiments show feasibility of our procedures down to one-seventh of the Nyquist rate with minimal performance deterioration. (Comment: 5 pages, 5 figures, SampTA 2017 conference)
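
    A hedged Python/NumPy sketch of the recovery step: the actual 802.11ad channel estimation field is built from Golay complementary sequences, so the random binary training sequence, the lengths, and the one-in-seven subsampling below are purely illustrative stand-ins; the sparse recovery is done with generic orthogonal matching pursuit.

        import numpy as np

        def omp(A, y, k):
            # Orthogonal matching pursuit: recover a k-sparse x from y ~ A x
            r, idx = y.copy(), []
            for _ in range(k):
                idx.append(int(np.argmax(np.abs(A.conj().T @ r))))
                x_s, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
                r = y - A[:, idx] @ x_s
            x = np.zeros(A.shape[1], dtype=complex)
            x[idx] = x_s
            return x

        rng = np.random.default_rng(0)
        n_tr, n_h, sparsity, sub = 512, 64, 4, 7
        train = rng.choice([-1.0, 1.0], n_tr)          # stand-in for the CEF sequence
        h = np.zeros(n_h, dtype=complex)               # sparse channel impulse response
        support = rng.choice(n_h, sparsity, replace=False)
        h[support] = rng.normal(size=sparsity) + 1j * rng.normal(size=sparsity)
        # Convolution matrix of the training sequence, then keep every 7th sample
        C = np.array([np.convolve(train, np.eye(n_h)[j])[:n_tr] for j in range(n_h)]).T
        A = C[::sub]
        y = A @ h + 0.01 * (rng.normal(size=A.shape[0]) + 1j * rng.normal(size=A.shape[0]))
        h_hat = omp(A, y, sparsity)
        print(np.linalg.norm(h - h_hat) / np.linalg.norm(h))   # relative error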

    The sensitivity of a very long baseline interferometer

    The theoretical sensitivities of various methods of acquiring and processing interferometer data are compared. It is shown that, for a fixed digital recording capacity, one-bit quantization of single-sideband data filtered with a rectangular bandpass and sampled at the Nyquist rate yields the optimum signal-to-noise ratio. The losses that result from an imperfect bandpass, poor image rejection, approximate methods of fringe rotation, fractional-bit correction, and loss of quadrature are discussed, as is the use of the complex delay function as a maximum-likelihood fringe estimator.
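
    The one-bit result can be illustrated numerically with a short Python/NumPy sketch (the 2/π sensitivity loss of clipped correlation and the Van Vleck relation are standard results; the sample size and correlation below are arbitrary): correlating the signs of two weakly correlated Gaussian streams and inverting the Van Vleck relation recovers the analog correlation coefficient.

        import numpy as np

        rng = np.random.default_rng(1)
        n, rho = 200_000, 0.1                    # samples, true correlation
        # Two zero-mean unit-variance Gaussian streams with correlation rho
        common = rng.normal(size=n)
        x = rho**0.5 * common + (1 - rho)**0.5 * rng.normal(size=n)
        y = rho**0.5 * common + (1 - rho)**0.5 * rng.normal(size=n)
        # One-bit (sign) quantization of each stream before correlation
        r1 = np.mean(np.sign(x) * np.sign(y))    # E[r1] = (2/pi) arcsin(rho)
        # Van Vleck correction recovers the analog correlation coefficient
        print(np.sin(np.pi / 2 * r1), "vs true", rho)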

    Validating Stereoscopic Volume Rendering

    The evaluation of stereoscopic displays for surface-based renderings is well established in terms of accurate depth perception and tasks that require an understanding of the spatial layout of the scene. In comparison, direct volume rendering (DVR), which typically produces images with a large number of low-opacity, overlapping features, is only beginning to be critically studied on stereoscopic displays. The properties of the resulting images and the choice of parameters for DVR algorithms make assessing the effectiveness of stereoscopic displays for DVR particularly challenging, and as a result the existing literature is sparse, with inconclusive results. In this thesis, stereoscopic volume rendering is analysed for tasks that require depth perception, including stereo-acuity tasks, spatial search tasks, and observer preference ratings. The evaluations focus on aspects of the DVR rendering pipeline and assess how the parameters of volume resolution, reconstruction filter, and transfer function may alter task performance and the perceived quality of the produced images. The results of the evaluations suggest that the transfer function and the choice of reconstruction filter can affect performance on tasks with stereoscopic displays when all other parameters are kept consistent; further, these were found to affect the sensitivity and bias response of the participants. The studies also show that properties of the reconstruction filters, such as post-aliasing and smoothing, do not correlate well with either task performance or quality ratings. Included in the contributions are guidelines and recommendations on the choice of parameters for increased task performance and quality scores, as well as image-based methods of analysing stereoscopic DVR images.

    Statistical Fourier Analysis: Clarifications and Interpretations

    This paper expounds some of the results of Fourier theory that are essential to the statistical analysis of time series. It employs the algebra of circulant matrices to expose the structure of the discrete Fourier transform and to elucidate the filtering operations that may be applied to finite data sequences. An ideal filter, with a gain of unity throughout the pass band and a gain of zero throughout the stop band, is commonly regarded as incapable of being realised in finite samples. It is shown here that, on the contrary, such a filter can be realised both in the time domain and in the frequency domain. The algebra of circulant matrices is also helpful in revealing the nature of statistical processes that are band limited in the frequency domain. In order to apply the conventional techniques of autoregressive moving-average modelling, the data generated by such processes must be subjected to anti-aliasing filtering and subsampling; these techniques are also described. It is argued that band-limited processes are more prevalent in statistical and econometric time series than is commonly recognised.
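
    A short Python/NumPy sketch of the frequency-domain realisation (the cutoff and the test series below are illustrative): multiplying the Fourier ordinates of a finite sample by a 0/1 gain is exactly premultiplication by a circulant matrix, i.e. circular convolution with the wrapped ideal impulse response, which is how the ideal filter is realised in finite samples.

        import numpy as np

        def ideal_lowpass(x, cutoff):
            # Ideal lowpass filter on a finite sample: unit gain in the pass
            # band, zero gain in the stop band, applied to the DFT ordinates.
            # cutoff is in cycles per sample (0 to 0.5).
            n = x.size
            f = np.fft.rfftfreq(n)                 # frequencies of the ordinates
            gain = (f <= cutoff).astype(float)     # the 0/1 ideal gain
            return np.fft.irfft(np.fft.rfft(x) * gain, n=n)

        # Example: remove everything above 0.1 cycles/sample from a noisy series
        rng = np.random.default_rng(2)
        t = np.arange(256)
        x = np.sin(2 * np.pi * 0.05 * t) + rng.normal(scale=0.5, size=t.size)
        y = ideal_lowpass(x, 0.1)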

    On planetary mass determination in the case of super-Earths orbiting active stars. The case of the CoRoT-7 system

    This investigation uses the excellent HARPS radial-velocity measurements of CoRoT-7 to re-determine the planet masses and to explore techniques able to determine the masses and elements of planets discovered around active stars, when the relative variation of the radial velocity due to stellar activity cannot be treated as mere noise and can exceed the variation due to the planets. The main technique used here is a self-consistent version of the high-pass filter used by Queloz et al. (2009) in the first mass determination of CoRoT-7b and CoRoT-7c. The results are compared to those given by two alternative techniques: (1) the approach proposed by Hatzes et al. (2010), using only those nights in which 2 or 3 observations were made; (2) a pure Fourier analysis. In all cases the eccentricities are taken equal to zero, as indicated by the study of the tidal evolution of the system; the periods are also kept fixed at the values given by Queloz et al. Only the observations made in the time interval BJD 2,454,847-873 are used, because they include many nights with multiple observations; otherwise it is not possible to separate the effects of the fourth harmonic of the rotation (5.91 d = Prot/4) from the alias of the orbital period of CoRoT-7b (0.853585 d). The results of the various approaches are combined to give planet masses of 8.0 ± 1.2 Earth masses for CoRoT-7b and 13.6 ± 1.4 Earth masses for CoRoT-7c. An estimate of the variation of the radial velocity of the star due to its activity is also given. The results obtained with the three different approaches agree in giving masses larger than those of previous determinations. Together with existing internal-structure models, they indicate that CoRoT-7b is a much denser super-Earth, with a bulk density of 11 ± 3.5 g cm^-3; CoRoT-7b may be rocky with a large iron core. (Comment: 12 pages, 11 figures)
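
    A generic Python/NumPy sketch of the Fourier-style decomposition (not the authors' pipeline): activity is modelled as sinusoids at the rotation period and its harmonics, and the planets as circular (e = 0) orbits at fixed periods, so the semi-amplitudes K follow from one linear least-squares fit. The rotation period of 23.64 d is implied by the Prot/4 = 5.91 d quoted above; the 3.698 d period of CoRoT-7c is the published value, not stated in this abstract.

        import numpy as np

        def fit_circular_orbits(t, rv, periods):
            # Least-squares fit of sinusoids at fixed periods to radial
            # velocities; returns the semi-amplitude K of each sinusoid.
            cols = [np.ones_like(t)]
            for p in periods:
                w = 2 * np.pi / p
                cols += [np.cos(w * t), np.sin(w * t)]
            A = np.column_stack(cols)
            coef, *_ = np.linalg.lstsq(A, rv, rcond=None)
            a, b = coef[1::2], coef[2::2]
            return np.hypot(a, b)                 # K = sqrt(a^2 + b^2) per period

        # Periods in days: rotation and its harmonics, then CoRoT-7b and CoRoT-7c
        prot = 23.64
        periods = [prot, prot / 2, prot / 3, prot / 4, 0.853585, 3.698]
        # K = fit_circular_orbits(t_obs, rv_obs, periods)  # with the HARPS data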

    Super-resolution Using Adaptive Wiener Filters

    The spatial sampling rate of an imaging system is determined by the spacing of the detectors in the focal plane array (FPA). The spatial frequencies present in the image on the focal plane are band-limited by the optics, owing to diffraction through a finite aperture. To guarantee that there will be no aliasing during image acquisition, the Nyquist criterion dictates that the sampling rate must be greater than twice the cut-off frequency of the optics. However, optical designs involve a number of trade-offs, and typical imaging systems are designed with some level of aliasing; we refer to such systems as detector-limited, as opposed to optically limited. Furthermore, with or without aliasing, imaging systems invariably suffer from diffraction blur, optical aberrations, and noise. Multiframe super-resolution (SR) processing has proven successful in reducing aliasing and enhancing the resolution of images from detector-limited imaging systems.
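
    A minimal single-frame Wiener restoration sketch in Python/NumPy; the multiframe adaptive Wiener filter approach additionally registers and fuses several sub-pixel-shifted frames onto a finer grid before a restoration step of this kind, and adapts the filter weights locally. The uniform PSF and scalar noise-to-signal ratio below are illustrative assumptions.

        import numpy as np

        def wiener_deconvolve(img, psf, nsr):
            # Frequency-domain Wiener filter: nsr is the assumed
            # noise-to-signal power ratio (a scalar in this sketch).
            H = np.fft.fft2(psf, s=img.shape)          # optical transfer function
            G = np.conj(H) / (np.abs(H) ** 2 + nsr)    # Wiener restoration filter
            return np.real(np.fft.ifft2(np.fft.fft2(img) * G))

        # Example: restore an image blurred (circularly) by a 5x5 uniform PSF
        rng = np.random.default_rng(3)
        img = rng.random((64, 64))
        psf = np.ones((5, 5)) / 25.0
        blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf, s=img.shape)))
        restored = wiener_deconvolve(blurred + 0.01 * rng.normal(size=img.shape), psf, 1e-2)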