
    Design of a Compact Cylindrical Loudspeaker Array for Spatial Sound Reproduction

    Building acoustic beamformers is a problem whose solution is hindered by the wide-band nature of audible sound. In order to achieve a consistent directional response over a wide range of frequencies, a conventional acoustic beamformer needs a high number of discrete loudspeakers and must be large enough to achieve the desired low-frequency performance. The acoustic beamformer design described in this paper uses measurement-based, optimized beamforming for loudspeakers mounted on a rigid cylindrical baffle. Super-directional beamforming achieves the desired directivity with multiple loudspeakers at low frequencies. High frequencies are reproduced with a single loudspeaker, whose highly directional reproduction---due to the cylindrical baffle---matches the design goals. In addition to the beamformer filter design procedure, it is shown how such a loudspeaker array can be used for spatial sound reproduction.
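
    As a rough sketch of measurement-based beamforming filter design (not the specific optimization used in the paper), one can compute regularized least-squares filter weights per frequency bin from measured loudspeaker responses; the matrix G, the target pattern d, and the regularization weight below are made-up placeholders.

    ```python
    import numpy as np

    def ls_beamforming_weights(G, d, lam=1e-3):
        """Regularized least-squares beamformer weights for one frequency bin.

        G   : (M, L) complex matrix of measured responses from L loudspeakers
              to M control directions/points.
        d   : (M,) complex target directivity sampled at the control directions.
        lam : Tikhonov regularization limiting the array effort.
        """
        L = G.shape[1]
        return np.linalg.solve(G.conj().T @ G + lam * np.eye(L), G.conj().T @ d)

    # Toy example: 8 loudspeakers, 36 control directions, placeholder data.
    rng = np.random.default_rng(0)
    G = rng.standard_normal((36, 8)) + 1j * rng.standard_normal((36, 8))
    d = np.exp(-1j * np.linspace(0, np.pi, 36))   # arbitrary target pattern
    w = ls_beamforming_weights(G, d)
    print(np.abs(G @ w - d).max())                # residual pattern error
    ```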

    Designing Practical Filters For Sound Field Reconstruction

    Multichannel sound field reproduction techniques, such as Wave Field Synthesis (WFS) and Sound Field Reconstruction (SFR), define loudspeaker filters in the frequency domain. However, in order to use these techniques in practical systems, one needs to convert these frequency-domain characteristics into practical and efficient time-domain digital filters. An additional limitation of SFR comes from the fact that it uses a numerical matrix pseudoinversion procedure, where the obtained filters are sensitive to numerical errors when the system matrix has a high condition number. This paper describes physically motivated modifications of the SFR approach that mitigate the conditioning problems, and a frequency-domain loudspeaker filter smoothing that allows for designing short time-domain filters while maintaining high sound field reproduction accuracy. It also compares the sound field reproduction accuracy of WFS and SFR using the obtained discrete-time filters.
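
    As a minimal illustration of turning a frequency-domain loudspeaker filter into a short time-domain FIR filter, the sketch below uses a generic frequency-sampling design (inverse FFT, circular shift, truncation, and windowing); the physically motivated smoothing described in the paper is not reproduced, and the prototype response H is a made-up placeholder.

    ```python
    import numpy as np

    def fir_from_frequency_response(H_half, n_taps):
        """Short FIR approximation of a frequency-domain filter.

        H_half : complex response on a uniform grid from 0 to Nyquist (inclusive).
        n_taps : desired FIR length (much shorter than the frequency grid).
        """
        # Inverse FFT of the Hermitian-symmetric spectrum gives a long impulse response.
        h_long = np.fft.irfft(H_half)
        # Make it roughly causal by a circular shift, then truncate and window.
        h_long = np.roll(h_long, n_taps // 2)
        return h_long[:n_taps] * np.hanning(n_taps)

    # Toy example: a smooth low-pass-like prototype on a 1025-bin half spectrum.
    f = np.linspace(0, 1, 1025)
    H = 1.0 / (1.0 + 0.5j * f)
    h = fir_from_frequency_response(H, n_taps=128)
    print(h.shape)
    ```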

    Reproducing Sound Fields Using MIMO Acoustic Channel Inversion

    Sound fields are essentially band-limited phenomena, both temporally and spatially. This implies that a spatially sampled sound field respecting the Nyquist criterion is effectively equivalent to its continuous original. We describe Sound Field Reconstruction (SFR)---a technique that uses this observation to express the reproduction of a continuous sound field as an inversion of the discrete acoustic channel from a loudspeaker array to a grid of control points. The acoustic channel is inverted using a truncated singular value decomposition (SVD) in order to provide optimal sound field reproduction subject to a limited-effort constraint. Additionally, a detailed procedure for obtaining loudspeaker driving signals is described, involving the selection of active loudspeakers, the coverage of the listening area with control points, and frequency-domain FIR filter design. Extensive simulations comparing SFR with Wave Field Synthesis show that, on average, SFR provides higher sound field reproduction accuracy.
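
    A minimal sketch of the truncated-SVD channel inversion described above, for a single frequency bin; the channel matrix, desired pressures, and truncation threshold are made-up placeholders.

    ```python
    import numpy as np

    def tsvd_driving_signals(H, p_desired, rel_threshold=1e-2):
        """Loudspeaker driving signals for one frequency bin via truncated SVD.

        H             : (M, L) acoustic channel from L loudspeakers to M control points.
        p_desired     : (M,) desired sound pressure at the control points.
        rel_threshold : singular values below rel_threshold * s_max are discarded,
                        which limits the reproduction effort.
        """
        U, s, Vh = np.linalg.svd(H, full_matrices=False)
        keep = s > rel_threshold * s[0]
        s_inv = np.zeros_like(s)
        s_inv[keep] = 1.0 / s[keep]
        return Vh.conj().T @ (s_inv * (U.conj().T @ p_desired))

    rng = np.random.default_rng(1)
    H = rng.standard_normal((50, 16)) + 1j * rng.standard_normal((50, 16))
    p = np.exp(1j * rng.uniform(0, 2 * np.pi, 50))
    d = tsvd_driving_signals(H, p)
    print(np.linalg.norm(H @ d - p) / np.linalg.norm(p))   # relative reproduction error
    ```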

    Sound Field Reconstruction: An Improved Approach For Wave Field Synthesis

    Wave field synthesis (WFS) is a prevalent approach to multiple-loudspeaker sound reproduction over an extended listening area. Although powerful as a theoretical concept, its deployment is hampered by practical limitations due to diffraction, aliasing, and the effects of the listening room. Reconstructing the desired sound field in the listening area while accounting for the propagation characteristics of the medium is another approach, termed sound field reconstruction (SFR). It is based on the essential band-limitedness of the sound field, which allows the reconstructed and the desired sound fields to be matched everywhere by matching them on a discrete set of points spaced below the Nyquist distance. We compare the two approaches in a common single-source, free-field setup, and show that SFR provides improved sound field reproduction compared to WFS in a wide listening area around a defined reference line.
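
    Assuming the usual half-wavelength spatial sampling criterion, the control-point spacing mentioned above can be computed as follows (the values are only illustrative).

    ```python
    # Spatial Nyquist spacing for control points: half the shortest wavelength.
    c = 343.0          # speed of sound in air, m/s
    f_max = 2000.0     # highest frequency to be reproduced accurately, Hz
    d_max = c / (2 * f_max)
    print(f"control points should be spaced below {d_max * 100:.1f} cm")
    ```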

    Multi-modal probabilistic indoor localization on a smartphone

    The satellite-based Global Positioning System (GPS) provides robust localization on smartphones outdoors. In indoor environments, however, no system comes close to achieving a similar level of ubiquity, with existing solutions offering different trade-offs in terms of accuracy, robustness, and cost. In this paper, we develop a multi-modal positioning system, targeted at smartphones, which aims to get the best out of each of its constituent modalities. More precisely, we combine Bluetooth Low Energy (BLE) beacons, round-trip-time (RTT) enabled WiFi access points, and the smartphone's inertial measurement unit (IMU) to provide a cheap, robust localization system that, unlike fingerprinting methods, requires no pre-training. To do this, we use a probabilistic algorithm based on a conditional random field (CRF). We also show how to incorporate sparse visual information, in the form of pose estimates from pre-scanned visual landmarks, to calibrate the system online and improve its accuracy. Our method achieves an accuracy of around 2 meters on two realistic datasets, outperforming other distance-based localization approaches. We also compare our approach with an ultra-wideband (UWB) system. While we do not match the performance of UWB, our system is cheap, smartphone-compatible, and provides satisfactory performance for many applications.
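
    The CRF-based fusion itself does not fit in a short snippet, but the kind of distance-based baseline the paper compares against can be sketched as least-squares multilateration from BLE/WiFi-RTT range estimates; the anchor positions and ranges below are made up.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    def multilaterate(anchors, ranges, x0=None):
        """Least-squares position estimate from ranges to known anchor positions."""
        anchors = np.asarray(anchors, dtype=float)
        ranges = np.asarray(ranges, dtype=float)
        if x0 is None:
            x0 = anchors.mean(axis=0)               # start from the anchor centroid
        residuals = lambda x: np.linalg.norm(anchors - x, axis=1) - ranges
        return least_squares(residuals, x0).x

    anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 8.0), (10.0, 8.0)]   # e.g. RTT-enabled APs
    true_pos = np.array([3.0, 5.0])
    ranges = np.linalg.norm(np.asarray(anchors) - true_pos, axis=1) + 0.3  # noisy ranges
    print(multilaterate(anchors, ranges))
    ```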

    Spatial Acoustic Signal Processing

    A sound field on a line or in a plane has an effectively limited spatial bandwidth determined by the temporal frequency. The same can be said for sound fields from far-field sources when analyzed on circular and spherical apertures: for a given frequency and aperture size, a sound field is effectively composed of a finite number of circular or spherical harmonic components. Based on these two observations, it follows that, if adequately sampled, sound fields can be represented and manipulated in the digital domain with negligible loss of information. The optimal sampling surface depends on the problem geometry, and the set of sampling points needs to satisfy the Nyquist criterion relative to the mentioned effective sound field bandwidth. In this thesis, we address the problems of sound field capture and reproduction from a practical perspective. More specifically, we present approaches that do not depend on acoustical models, but rely instead on obtaining an acoustic MIMO channel between transducers (microphones or loudspeakers) and a set of sampling (or control) points. Sound field capture and reproduction are then formulated as constrained optimization problems in a spatially discrete domain and solved using conventional numerical optimization tools.

    The first part of the thesis deals with spatial sound capture. We present a framework for analyzing and designing differential microphone arrays based on spatiotemporal sound field gradients. We also show how to record two- and three-dimensional sound fields with differential, circular, and spherical microphone arrays. Finally, we use the mentioned discrete optimization for computing filters for directional and sound field microphone arrays.

    In the second part of the thesis, we focus on spatial sound reproduction. We first present the design of a baffled loudspeaker array for reproducing sound with high directivity over a wide frequency range, which combines beamforming at low frequencies with scattering from a rigid baffle at high frequencies. We next present Sound Field Reconstruction (SFR), an approach for optimally reproducing a desired sound field in a wide listening area by inverting a discrete MIMO acoustic channel. Finally, we propose a single- and multi-channel low-frequency room equalization method, formulated as a discrete constrained optimization problem, with constraints designed to prevent excessive equalization filter gains, localization bias, and temporal distortions in the form of pre- and post-echoes.
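
    The effective spatial bandwidth mentioned above is often summarized by the kr rule of thumb (an assumption here, not a statement taken from the thesis): on a circular aperture of radius r, the field is well represented by circular harmonics up to order N ≈ ⌈kr⌉, which sets the minimum number of uniformly spaced samples at 2N + 1.

    ```python
    import numpy as np

    def circular_harmonic_order(freq, radius, c=343.0):
        """Approximate highest significant circular-harmonic order (kr rule of thumb)."""
        k = 2 * np.pi * freq / c
        return int(np.ceil(k * radius))

    freq, radius = 4000.0, 0.05            # 4 kHz, 5 cm array radius
    N = circular_harmonic_order(freq, radius)
    print(N, 2 * N + 1)                    # order and minimum number of uniform samples
    ```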

    Advanced B-Format Analysis

    Spatial sound rendering methods that use B-format have moved from static to signal-dependent processing, making B-format signal analysis a crucial part of B-format decoders. In the established B-format signal analysis methods, the acquired sound field is commonly modeled in terms of a single plane wave and diffuse sound, or in terms of two plane waves. We present a B-format analysis method that models the sound field with two direct sounds and diffuse sound, and computes the powers of the three components and the direct sound directions as a function of time and frequency. We show the effectiveness of the proposed method in experiments with artificial and realistic signals.
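
    For reference, one common way to estimate the direct sound direction under the single-plane-wave model mentioned above (not necessarily the analysis used in this paper) is the short-time active intensity vector computed from the B-format channels; sign and scaling conventions vary between B-format definitions, so the sketch below is only indicative.

    ```python
    import numpy as np

    def intensity_doa(W, X, Y, Z):
        """Direction-of-arrival estimate for one time-frequency tile of B-format spectra."""
        # The active intensity is proportional to Re{conj(W) * [X, Y, Z]} (convention-dependent).
        intensity = np.real(np.conj(W) * np.array([X, Y, Z]))
        direction = -intensity            # the DOA points opposite to the energy flow
        return direction / (np.linalg.norm(direction) + 1e-12)

    # Toy single-bin example with made-up STFT coefficients.
    print(intensity_doa(1.0 + 0.2j, -0.6 - 0.1j, 0.7 + 0.15j, 0.1 + 0.0j))
    ```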

    Spatiotemporal Gradient Analysis of Differential Microphone Arrays

    The literature on gradient and differential microphone arrays makes a distinction between the two types, yet it shows how both can be used to obtain the same directional responses. A more theoretically sound rationale for using delays in differential microphone arrays has, however, not yet been given. A gradient analysis of the sound field, viewed as a spatiotemporal phenomenon, is presented, giving a theoretical interpretation of the working principles of gradient and differential microphone arrays. It is shown that both types of microphone arrays can be viewed as devices for approximately measuring spatiotemporal derivatives of the sound field. Furthermore, the design of high-order differential microphone arrays using the aforementioned spatiotemporal gradient analysis is discussed.
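
    The delay-and-subtract principle analyzed in the paper can be checked numerically: at low frequencies, the response of two closely spaced omnidirectional capsules with an internal delay approaches a weighted sum of the temporal and spatial derivatives of the field, yielding the familiar first-order patterns; the spacing, delay, and frequency below are arbitrary.

    ```python
    import numpy as np

    c = 343.0
    d = 0.01                      # capsule spacing, m
    tau = d / c                   # internal delay; tau = d/c gives a cardioid
    f = 500.0                     # analysis frequency, Hz (well below spatial aliasing)
    w = 2 * np.pi * f

    theta = np.linspace(0, 2 * np.pi, 361)
    # Exact delay-and-subtract response to a plane wave arriving from angle theta ...
    exact = 1 - np.exp(-1j * w * (tau + (d / c) * np.cos(theta)))
    # ... and its small-argument approximation, a first-order spatiotemporal derivative
    # proportional to tau + (d/c) * cos(theta).
    approx = 1j * w * (tau + (d / c) * np.cos(theta))

    print(np.max(np.abs(exact - approx)) / np.max(np.abs(exact)))   # small at low frequencies
    ```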

    Sound Field Recording by Measuring Gradients

    Gradient-based microphone arrays, the plane wave decomposition of the horizontal sound field, and the corresponding circular harmonics decomposition are reviewed. Further, a general relation between the directivity patterns of the horizontal sound field gradients and the circular harmonics of any order is derived. Based on this relation, a number of example differential microphone arrays are analyzed, including arrays capable of approximating the sound pressure gradients necessary for obtaining the circular harmonics up to order three.
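
    The kind of relation derived in the paper can be verified numerically for order two: for a plane wave, the combination of second-order derivatives ∂²p/∂x² − ∂²p/∂y² has a cos 2φ directivity and 2 ∂²p/∂x∂y has sin 2φ, both up to a common factor of −k²; the sketch below evaluates these directivities analytically for a unit-amplitude plane wave.

    ```python
    import numpy as np

    k = 2 * np.pi * 1000.0 / 343.0                 # wavenumber at 1 kHz
    phi = np.linspace(0, 2 * np.pi, 361)           # plane-wave arrival angles

    # Second-order spatial derivatives of p(x, y) = exp(j k (x cos(phi) + y sin(phi)))
    # evaluated at the origin: d2p/dx2 = -k^2 cos^2(phi), and so on.
    dxx = -k**2 * np.cos(phi) ** 2
    dyy = -k**2 * np.sin(phi) ** 2
    dxy = -k**2 * np.cos(phi) * np.sin(phi)

    # Their combinations reproduce the second-order circular harmonics (up to -k^2):
    print(np.allclose(dxx - dyy, -k**2 * np.cos(2 * phi)))   # True
    print(np.allclose(2 * dxy, -k**2 * np.sin(2 * phi)))     # True
    ```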

    Multi-Channel Low-Frequency Room Equalization Using Perceptually Motivated Constrained Optimization

    We consider the problem of multiple-loudspeaker low-frequency room equalization over a wide listening area, where the equalized loudspeaker is assisted by the remaining ones. Using a spatial discretization of the listening area, we formulate the problem as a multipoint error minimization between desired and synthesized magnitude frequency responses. The desired response and the cost function are formulated with the goal of capturing the room's spectral power profile and penalizing strong resonances. Based on physical and psychoacoustical observations, we argue for the use of gain-limited, short, and well-localized equalization filters, with an additional delay for the loudspeakers that assist the equalized one. We propose a convex optimization framework for computing room equalization filters, in which the mentioned filter requirements are incorporated as convex constraints. We verify the effectiveness of our equalization approach through simulations.
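
    The full convex formulation with perceptual constraints requires a dedicated solver, but the multipoint error minimization at its core can be sketched as a regularized least-squares fit of a short FIR equalizer to measured room responses at several listening points; here the gain limitation is handled crudely through Tikhonov regularization rather than the explicit constraints used in the paper, and all signals below are made up.

    ```python
    import numpy as np

    def multipoint_eq_fir(room_irs, n_taps, fs, f_lo=20.0, f_hi=200.0, lam=1e-2):
        """Short FIR equalizer minimizing the multipoint low-frequency error.

        room_irs : (M, K) impulse responses from the loudspeaker to M listening points.
        n_taps   : equalizer length.
        """
        n_fft = 4096
        freqs = np.fft.rfftfreq(n_fft, 1.0 / fs)
        band = (freqs >= f_lo) & (freqs <= f_hi)
        R = np.fft.rfft(room_irs, n_fft, axis=1)[:, band]                          # room responses in band
        F = np.exp(-2j * np.pi * np.outer(freqs[band], np.arange(n_taps)) / fs)    # DFT of the FIR taps

        # Stack all points; the desired in-band response at every point is flat (1).
        A = np.vstack([R_m[:, None] * F for R_m in R])
        b = np.ones(A.shape[0], dtype=complex)

        # Real-valued regularized least squares on the stacked real/imaginary system.
        A_ri = np.vstack([A.real, A.imag])
        b_ri = np.concatenate([b.real, b.imag])
        return np.linalg.solve(A_ri.T @ A_ri + lam * np.eye(n_taps), A_ri.T @ b_ri)

    # Toy usage with a made-up two-point measurement (decaying modal-like responses).
    fs = 2000
    t = np.arange(int(0.5 * fs)) / fs
    irs = np.stack([np.exp(-5 * t) * np.sin(2 * np.pi * 55 * t),
                    np.exp(-4 * t) * np.sin(2 * np.pi * 70 * t)])
    h_eq = multipoint_eq_fir(irs, n_taps=64, fs=fs)
    print(h_eq.shape)
    ```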