
    The Synchronized Short-Time-Fourier-Transform: Properties and Definitions for Multichannel Source Separation.

    This paper proposes the use of a synchronized linear transform, the synchronized short-time-Fourier-transform (sSTFT), for time-frequency analysis of anechoic mixtures. We address the shortcomings of the commonly used time-frequency linear transform in multichannel settings, namely the classical short-time-Fourier-transform (cSTFT). We propose a series of desirable properties for the linear transform used in a multichannel source separation scenario: stationary invertibility, relative delay, relative attenuation, and finally delay-invariant relative windowed-disjoint orthogonality (DIRWDO). Multisensor source separation techniques that operate in the time-frequency domain have an inherent error unless consideration is given to the multichannel properties proposed in this paper. The sSTFT preserves these relationships for multichannel data. The crucial innovation of the sSTFT is to synchronize the analysis locally to the observations rather than to a global clock. Improvements in separation performance can be achieved because the assumed properties of the time-frequency transform are satisfied when it is appropriately synchronized. Numerical experiments show the sSTFT improves instantaneous subsample relative parameter estimation in low-noise conditions and achieves good synthesis.
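    The paper's sSTFT is not reproduced here, but the relative-delay property it protects can be illustrated with a classical STFT on a noise-free toy pair (all parameters below are our own choices): for a pure relative delay d, the cross-spectrum phase is approximately -2*pi*f*d, and a least-squares fit of that slope recovers d to subsample precision. The small residual bias, caused by analysis windows aligned to a global clock rather than to the observations, is the kind of error the sSTFT is designed to remove.

```python
import numpy as np
from scipy.signal import stft

# Two-channel anechoic toy mixture: channel 2 is channel 1 attenuated by a
# and delayed by a fraction of a sample (all values here are hypothetical).
fs = 16_000
rng = np.random.default_rng(0)
x1 = rng.standard_normal(fs)                       # 1 s of white noise
a, d = 0.8, 0.4 / fs                               # attenuation, 0.4-sample delay
F = np.fft.rfftfreq(x1.size, d=1 / fs)
x2 = a * np.fft.irfft(np.fft.rfft(x1) * np.exp(-2j * np.pi * F * d), n=x1.size)

# Classical STFT of both channels; for a pure relative delay the
# cross-spectrum phase is approximately -2*pi*f*d in every frame.
f, t, X1 = stft(x1, fs=fs, nperseg=512)
_, _, X2 = stft(x2, fs=fs, nperseg=512)
phi = np.angle((X2 * np.conj(X1)).mean(axis=1))    # frame-averaged phase
d_hat = -(f @ phi) / (2 * np.pi * (f @ f))         # least-squares phase slope
print(f"true: {d * fs:.2f} samples, estimated: {d_hat * fs:.2f} samples")
```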

    An Efficient Polyphase Filter Based Resampling Method for Unifying the PRFs in SAR Data

    Variable and higher pulse repetition frequencies (PRFs) are increasingly being used to meet the stricter requirements and complexities of current airborne and spaceborne synthetic aperture radar (SAR) systems associated with higher-resolution and wider-area products. POLYPHASE, the proposed resampling scheme, downsamples and unifies variable PRFs within a single look complex (SLC) SAR acquisition, and across a repeat-pass sequence of acquisitions, down to an effective lower PRF. A sparsity condition on the received SAR data ensures that the uniformly resampled data approximates the spectral properties of a decimated, densely sampled version of the received SAR data. Experiments conducted with both synthetically generated and real airborne SAR data show that POLYPHASE achieves image quality comparable to the state-of-the-art BLUI scheme, while its polyphase filter-based implementation offers significant computational savings for arbitrary (not necessarily periodic) input PRF variations, thus allowing a fully on-board, in-place, and real-time implementation.
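    As a point of reference for the resampling building block (not the paper's POLYPHASE scheme itself), SciPy's resample_poly performs rational-ratio polyphase resampling; the PRF values and toy signal below are made up.

```python
import numpy as np
from scipy.signal import resample_poly

prf_in, prf_out = 1500, 1250               # hypothetical PRFs (Hz): unify 1500 -> 1250
up, down = 5, 6                            # 1500 * 5 / 6 = 1250

n = np.arange(3000)
pulses = np.exp(1j * 1e-4 * n**2)          # toy complex azimuth chirp, one sample per pulse
unified = resample_poly(pulses, up, down)  # polyphase anti-alias filter + resample
print(pulses.size, "->", unified.size)     # 3000 -> 2500
```

    Because the polyphase decomposition evaluates only the filter branches that contribute to retained output samples, this structure stays cheap enough for the on-board, real-time use the abstract targets.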

    Proceedings of the Linux Audio Conference 2018

    These proceedings contain all papers presented at the Linux Audio Conference 2018. The conference took place at c-base, Berlin, from June 7th to 10th, 2018, and was organized in cooperation with the Electronic Music Studio at TU Berlin.

    PERFORMANCE IMPROVEMENT OF MULTICHANNEL AUDIO BY GRAPHICS PROCESSING UNITS

    Multichannel acoustic signal processing has undergone major development in recent years due to the increased complexity of current audio processing applications. People want to collaborate through communication with the feeling of being together and sharing the same environment, which is known as immersive audio. Several acoustic effects are involved in such schemes: 3D spatial sound, room compensation, crosstalk cancellation, and sound source localization, among others. However, achieving any of these effects in a real large-scale system requires high computing capacity, which represents a considerable limitation for real-time applications.

    The increase in computational capacity has historically been linked to the number of transistors on a chip. Nowadays, however, improvements in computational capacity come mainly from increasing the number of processing units, i.e., from expanding parallelism in computing. This is the case of Graphics Processing Units (GPUs), which now contain thousands of computing cores. GPUs were traditionally associated with graphics and image applications, but newer GPU programming environments, CUDA and OpenCL, have allowed applications in fields well beyond graphics to be computationally accelerated.

    This thesis aims to demonstrate that GPUs are fully valid tools for carrying out audio applications that require high computational resources. To this end, different applications in the field of audio processing are studied and implemented using GPUs. The manuscript also analyzes and solves possible limitations of each GPU-based implementation, from both the acoustic and the computational points of view.

    The following problems are addressed in this document. Most audio applications are based on massive filtering, so the first implementation undertaken is a fundamental operation in audio processing: the convolution. It was first developed as a computational kernel and afterwards used in an application that combines multiple convolutions concurrently: generalized crosstalk cancellation and equalization (a minimal sketch of such a frequency-domain kernel follows below). The proposed implementation successfully manages two different and common situations: buffers much larger than the filters, and buffers much smaller than the filters.

    Building on this massive multichannel filtering, two spatial audio applications that use the GPU as a co-processor were developed. The first deals with binaural audio; its main feature is the ability to synthesize sound sources at spatial positions not included in the HRTF database and to generate smooth movements of sound sources. Both features were designed after different objective and subjective tests. Performance, in terms of the number of sound sources that can be rendered in real time, was assessed on GPUs with different architectures. Similar performance is measured in a Wave Field Synthesis system (the second spatial audio application) composed of 96 loudspeakers, where the proposed GPU-based implementation is able to reduce room effects during sound source rendering.
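    The following sketch is only a point of reference for the massive-filtering kernel described above: block-wise frequency-domain multichannel convolution with overlap-add. All dimensions are hypothetical, and the thesis' CUDA kernels are not reproduced; recent CuPy versions expose the same array API, so the same code can in principle run on a GPU by swapping the import.

```python
import numpy as np  # with CuPy installed, "import cupy as np" runs this on a GPU

# Hypothetical sizes: S sources filtered into L outputs (as in generalized
# crosstalk cancellation), processed block by block with overlap-add.
S, L = 4, 8                              # sources, loudspeakers
B, K = 1024, 512                         # input buffer length, FIR filter length
nfft = 1 << (B + K - 2).bit_length()     # FFT size >= B + K - 1

rng = np.random.default_rng(1)
h = rng.standard_normal((L, S, K))       # FIR filter from source s to output l
H = np.fft.rfft(h, n=nfft)               # filter spectra, precomputed once

def process_block(x, tail):
    """x: (S, B) new input samples; tail: (L, K-1) overlap carried over."""
    X = np.fft.rfft(x, n=nfft)                     # (S, nfft//2 + 1)
    Y = np.einsum('lsf,sf->lf', H, X)              # filter and mix all sources
    y = np.fft.irfft(Y, n=nfft)[:, :B + K - 1]     # linear convolution per block
    y[:, :K - 1] += tail                           # overlap-add with history
    return y[:, :B], y[:, B:]                      # output block, new tail

tail = np.zeros((L, K - 1))
out, tail = process_block(rng.standard_normal((S, B)), tail)
print(out.shape)  # (8, 1024)
```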
    A well-known approach for sound source localization in noisy and reverberant environments, the Steered Response Power with Phase Transform (SRP-PHAT) algorithm, is also addressed, on a multi-GPU system. Since localization accuracy can be improved by using high-resolution spatial grids and a high number of microphones, accurate acoustic localization systems require high computational power. The solutions implemented in this thesis are evaluated from both the localization and the computational performance points of view, taking into account different acoustic environments, and always from a real-time implementation perspective; a didactic sketch of the algorithm appears at the end of this entry.

    Finally, this manuscript also addresses massive multichannel filtering when the filters present an Infinite Impulse Response (IIR). Two cases are analyzed: 1) IIR filters composed of multiple second-order sections, and 2) IIR filters that present an allpass response. Both cases are used to develop and accelerate two different applications: 1) running multiple equalizations in a WFS system, and 2) reducing the dynamic range of an audio signal.

    Belloch Rodríguez, J.A. (2014). Performance Improvement of Multichannel Audio by Graphics Processing Units [Doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/40651
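    As referenced above, here is a didactic sketch of an SRP-PHAT grid search over a 2-D plane; the microphone layout, grid, and frame length are hypothetical, and the thesis' multi-GPU implementation is not reproduced. The cost grows with the number of microphone pairs, grid points, and frequency bins, which is precisely why high-resolution localization benefits from GPUs.

```python
import numpy as np

c, fs = 343.0, 16_000                      # speed of sound (m/s), sample rate (Hz)
mics = np.array([[0.0, 0.0], [0.2, 0.0],
                 [0.0, 0.2], [0.2, 0.2]])  # hypothetical 4-mic array (metres)

def srp_phat(frames, grid):
    """frames: (M, N) one time frame per microphone; grid: (G, 2) candidates."""
    M, N = frames.shape
    X = np.fft.rfft(frames, axis=1)
    f = np.fft.rfftfreq(N, d=1 / fs)
    dists = np.linalg.norm(grid[:, None, :] - mics[None, :, :], axis=2)  # (G, M)
    power = np.zeros(len(grid))
    for i in range(M):
        for j in range(i + 1, M):
            cross = X[i] * np.conj(X[j])
            cross /= np.abs(cross) + 1e-12                # PHAT weighting
            tau = (dists[:, i] - dists[:, j]) / c         # candidate TDOAs (G,)
            # steer the pair's cross-spectrum to each candidate delay, accumulate
            power += (cross * np.exp(2j * np.pi * f * tau[:, None])).real.sum(axis=1)
    return grid[np.argmax(power)]                         # most powerful point

gx, gy = np.meshgrid(np.linspace(0, 3, 61), np.linspace(0, 3, 61))
grid = np.stack([gx.ravel(), gy.ravel()], axis=1)         # 61 x 61 candidate points
# Random frames below just exercise the call; real input would be
# synchronized microphone captures of one analysis frame.
est = srp_phat(np.random.default_rng(2).standard_normal((4, 512)), grid)
```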

    Wavelets and Subband Coding

    First published in 1995, Wavelets and Subband Coding offered a unified view of the exciting field of wavelets and their discrete-time cousins, filter banks, or subband coding. The book developed the theory in both continuous and discrete time and presented important applications. During the past decade it filled a useful need in explaining a new view of signal processing based on flexible time-frequency analysis and its applications. Since 2007 the authors have retained the copyright and allowed open access to the book.