8 research outputs found

    Spatial Multizone Soundfield Reproduction Design

    It is desirable for people sharing a physical space to access different multimedia streams simultaneously. For a good user experience, interference between the streams should be kept to a minimum. This is straightforward for the video component but currently difficult for the audio component. Spatial multizone soundfield reproduction, which aims to provide an individual sound environment to each of a set of listeners without the use of physical isolation or headphones, has drawn significant attention from researchers in recent years. Realizing multizone soundfield reproduction is a conceptually challenging problem, as most current soundfield reproduction techniques concentrate on a single zone. This thesis considers the theory and design of a multizone soundfield reproduction system using arrays of loudspeakers in given complex environments. We first introduce a novel method for spatial multizone soundfield reproduction based on describing the desired multizone soundfield as an orthogonal expansion of formulated basis functions over the desired reproduction region. This provides the theoretical basis of both 2-D (height-invariant) and 3-D soundfield reproduction for this work. We then extend the reproduction of the multizone soundfield over the desired region to reverberant environments, based on identifying the acoustic transfer function (ATF) from each loudspeaker over the desired reproduction region using sparse methods. The simulation results confirm that the method significantly reduces the number of microphones required for accurate multizone sound reproduction compared with the state of the art, while also facilitating reproduction over a wide frequency range. In addition, we focus on improvements to the proposed multizone reproduction system with regard to practical implementation. The so-called 2.5D multizone soundfield reproduction is considered to accurately reproduce the desired multizone soundfield over a selected 2-D plane, at a height approximately level with the listener's ears, using a single array of loudspeakers in 3-D reverberant settings. We then propose an adaptive reverberation cancellation method for multizone soundfield reproduction within the desired region that simplifies the prior soundfield measurement process. Simulation results suggest that the proposed method provides a faster convergence rate than comparable approaches under the same hardware provision. Finally, we conduct a real-world implementation based on the proposed theoretical work. The experimental results show that we can achieve a very noticeable acoustic energy contrast between the signals recorded in the bright zone and the quiet zone, especially for the system implementation with reverberation equalization.
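The orthogonal-expansion idea in the abstract above can be illustrated with a minimal 2-D sketch: a desired field (here a single plane wave rather than a multizone target) is written as a truncated cylindrical-harmonic series over the reproduction region, and a circular loudspeaker array is driven to match it by least squares. The frequency, array geometry, region size, and the (j/4)H0 Green's-function convention are all illustrative assumptions, not the thesis's actual design.

```python
import numpy as np
from scipy.special import jv, hankel1

# Hypothetical setup: reproduce a plane wave over a small 2-D region
# using a circular array of 32 loudspeakers of radius 2 m.
k = 2 * np.pi * 500 / 343.0            # wavenumber at 500 Hz
L = 32
phi_l = 2 * np.pi * np.arange(L) / L
spk = 2.0 * np.c_[np.cos(phi_l), np.sin(phi_l)]

# Random sample points inside a 0.3 m reproduction region
rng = np.random.default_rng(0)
pts = rng.uniform(-0.3, 0.3, size=(240, 2))
pts = pts[np.linalg.norm(pts, axis=1) <= 0.3]
r = np.linalg.norm(pts, axis=1)
phi = np.arctan2(pts[:, 1], pts[:, 0])

# Desired field: unit plane wave from direction phi0 via the truncated
# Jacobi-Anger expansion  sum_m  j^m  J_m(kr)  e^{jm(phi - phi0)}
phi0 = np.pi / 4
M = int(np.ceil(k * 0.3)) + 2          # truncation order ~ kr rule of thumb
m = np.arange(-M, M + 1)
p_des = (1j**m * jv(m, k * r[:, None])
         * np.exp(1j * m * (phi - phi0)[:, None])).sum(1)

# 2-D free-field Green's function (one common convention): (j/4) H0^(1)(kd)
d = np.linalg.norm(pts[:, None, :] - spk[None, :, :], axis=2)
G = 0.25j * hankel1(0, k * d)

# Least-squares loudspeaker weights and relative reproduction error
w = np.linalg.lstsq(G, p_des, rcond=None)[0]
err = np.linalg.norm(G @ w - p_des) / np.linalg.norm(p_des)
print(f"relative reproduction error: {err:.3e}")
```

With more loudspeakers than active modes (here 32 versus 2M+1 = 11), the pressure-matching system is overdetermined but consistent, so the residual is small across the whole region.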

    An approach to generating two zones of silence with application to personal sound systems

    An application of current interest in sound reproduction systems is the creation of multizone sound fields, which provide multiple independent sound fields for multiple listeners. The challenge in producing such sound fields is avoiding interference between sound zones, which depends on the geometry of the zones and the direction of arrival of the desired sound fields. This paper provides a theoretical basis for the generation of two zones based on the creation of sound fields with nulls and the positioning of those nulls at arbitrary positions. The nulls are created by suppressing low-order mode terms in the sound field expansion. Simulations are presented for the two-dimensional case which show that suppression of interference is possible across a broad audio frequency range.
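The null-creation mechanism described above can be sketched numerically: because J_m(kr) behaves like (kr)^|m| near r = 0, zeroing the low-order terms of a cylindrical-harmonic expansion leaves a region of near-silence around the expansion centre. This toy example places the null at the origin (placing it at an arbitrary position would additionally require translating the expansion); the frequency, truncation order, and probe point are assumptions.

```python
import numpy as np
from scipy.special import jv

k = 2 * np.pi * 1000 / 343.0          # wavenumber at 1 kHz
phi0 = 0.0                            # arrival direction of the desired wave
M = 30                                # generous truncation order
m = np.arange(-M, M + 1)
alpha = (1j**m) * np.exp(-1j * m * phi0)   # plane-wave mode coefficients

def field(coef, r, phi):
    # evaluate  sum_m  coef_m  J_m(kr)  e^{jm phi}
    return (coef * jv(m, k * r) * np.exp(1j * m * phi)).sum()

# Suppress the low-order modes |m| <= 2: since J_m(kr) ~ (kr)^|m| near
# r = 0, this carves out a finite-size quiet region around the origin.
alpha_null = alpha.copy()
alpha_null[np.abs(m) <= 2] = 0.0

r_probe, phi_probe = 0.02, 0.3        # probe 2 cm from the null centre
p_full = abs(field(alpha, r_probe, phi_probe))
p_null = abs(field(alpha_null, r_probe, phi_probe))
print(p_full, p_null)
```

The full expansion reproduces the unit-amplitude plane wave, while the mode-suppressed field is orders of magnitude weaker near the origin, which is exactly the zone-of-silence effect.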

    Analysis and control of multi-zone sound field reproduction using modal-domain approach

    Multi-zone sound control aims to reproduce multiple sound fields independently and simultaneously over different spatial regions within the same space. This paper investigates the multi-zone sound control problem formulated in the modal domain using a Lagrangian cost function and provides a modal-domain analysis of the problem. The Lagrangian cost function represents a quadratic objective of reproducing a desired sound field within the bright zone, with constraints on sound energy in the dark zone and the global region. A fundamental problem in multi-zone reproduction is inter-zone sound interference: the achievable reproduction performance is limited by the geometry of the sound zones and the desired sound field within the bright zone. The modal-domain Lagrangian solution demonstrates the intrinsic ill-posedness of the problem, based on which a parameter, the coefficient of realisability, is developed to evaluate the reproduction limitation. The proposed reproduction method is based on controlling the interference between sound zones and the sound leakage outside the sound zones, resulting in a suitable compromise between good bright-zone performance and satisfactory dark-zone performance. The performance of the proposed design is demonstrated through numerical simulations of two-zone reproduction in free-field and reverberant environments.
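The quadratic objective with energy constraints sketched above has a standard closed form when the Lagrange multipliers are fixed: minimising the bright-zone error plus weighted dark-zone and global energy penalties gives a regularised normal-equation solve. The following free-field sketch uses 3-D point-source Green's functions and hand-picked penalty weights; the geometry and weights are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

k = 2 * np.pi * 500 / 343.0                # wavenumber at 500 Hz
rng = np.random.default_rng(1)
L = 24                                     # loudspeakers on a 2 m circle
phi_l = 2 * np.pi * np.arange(L) / L
spk = np.c_[2 * np.cos(phi_l), 2 * np.sin(phi_l), np.zeros(L)]

def zone(center, n=60, rad=0.15):
    # random sample points in a small planar patch around the zone centre
    p = rng.uniform(-rad, rad, (n, 3))
    p[:, 2] = 0.0
    return center + p

def green(pts, src):
    # 3-D free-field point-source Green's function  e^{-jkd} / (4 pi d)
    d = np.linalg.norm(pts[:, None, :] - src[None, :, :], axis=2)
    return np.exp(-1j * k * d) / (4 * np.pi * d)

bright = zone(np.array([0.6, 0.0, 0.0]))
dark = zone(np.array([-0.6, 0.0, 0.0]))
Gb, Gd = green(bright, spk), green(dark, spk)
p_des = np.exp(-1j * k * bright[:, 0])     # desired: plane wave along +x

# Closed-form solution of the penalised quadratic objective
#   min ||Gb w - p_des||^2 + lam_d ||Gd w||^2 + lam_g ||w||^2
lam_d, lam_g = 1.0, 1e-4                   # hand-tuned penalty weights
A = Gb.conj().T @ Gb + lam_d * Gd.conj().T @ Gd + lam_g * np.eye(L)
w = np.linalg.solve(A, Gb.conj().T @ p_des)

contrast_db = 10 * np.log10(np.mean(np.abs(Gb @ w) ** 2)
                            / np.mean(np.abs(Gd @ w) ** 2))
print(f"bright/dark energy contrast: {contrast_db:.1f} dB")
```

In practice the multipliers are chosen (or solved for) to meet the energy constraints exactly; here fixed weights suffice to show the bright/dark trade-off.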

    A Measure Based on Beamforming Power for Evaluation of Sound Field Reproduction Performance

    This paper proposes a measure to evaluate sound field reproduction systems that use an array of loudspeakers. The spatially averaged squared error of the sound pressure between the desired and the reproduced fields, namely the spatial error, has been widely used, but it has considerable problems under two conditions. First, in non-anechoic conditions, room reflections substantially deteriorate the spatial error, although these reflections affect human localization to a lesser degree. Second, for 2.5-dimensional reproduction of spherical waves, the spatial error increases consistently due to the difference in the amplitude decay rate, whereas the degradation of human localization performance is limited. The measure proposed in this study is based on the beamforming powers of the desired and the reproduced fields. Simulation and experimental results show that the proposed measure is less sensitive to room reflections and amplitude decay than the spatial error, and is thus likely to agree better with human perception of source localization.
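The intuition behind a beamforming-based measure can be sketched with delay-and-sum beamforming on a virtual circular microphone array: adding a weak "reflection" to a reproduced field inflates the spatial error noticeably, yet barely moves the beamforming-power peak that encodes the perceived source direction. The array size, frequency, and reflection model here are assumptions for illustration, not the paper's actual measure.

```python
import numpy as np

k = 2 * np.pi * 1000 / 343.0                # wavenumber at 1 kHz
N = 32
th = 2 * np.pi * np.arange(N) / N
mics = 0.1 * np.c_[np.cos(th), np.sin(th)]  # virtual circular mic array

def plane_wave(phi, amp=1.0):
    d = np.array([np.cos(phi), np.sin(phi)])
    return amp * np.exp(-1j * k * mics @ d)

def beam_power(p, angles):
    # delay-and-sum beamforming power over candidate arrival directions
    dirs = np.c_[np.cos(angles), np.sin(angles)]
    steer = np.exp(-1j * k * mics @ dirs.T)
    return np.abs(steer.conj().T @ p) ** 2 / N ** 2

angles = np.linspace(0, 2 * np.pi, 360, endpoint=False)
p_des = plane_wave(0.5)                     # desired field
p_rep = p_des + plane_wave(2.5, amp=0.3)    # reproduced field + "reflection"

# Spatial error is inflated by the reflection ...
spatial_err = np.linalg.norm(p_rep - p_des) ** 2 / np.linalg.norm(p_des) ** 2
# ... while the beamforming-power peak direction hardly moves
B_des, B_rep = beam_power(p_des, angles), beam_power(p_rep, angles)
peak_shift = abs(angles[np.argmax(B_rep)] - angles[np.argmax(B_des)])
print(spatial_err, peak_shift)
```

The 0.3-amplitude reflection contributes 9% relative power error, yet the beam peak stays within a few degrees of the true direction, mirroring the paper's argument that beamforming power tracks localization more faithfully than pressure error.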

    Acoustic Room Compensation Using Local PCA-based Room Average Power Response Estimation

    Acoustic room compensation techniques, which allow a sound reproduction system to counteract undesired alterations to the sound scene caused by excessive room resonances, have been widely studied. Extensive efforts have been reported to enlarge the region over which room equalization is effective and to counteract variations of room transfer functions in space. The speaker-tuning technology "Trueplay" allows users to compensate for undesired room effects over an extended listening area based on a spatially averaged power response of the room, conventionally measured with microphones on portable devices as users move around the room. In this work, we propose a novel system that leverages measured speaker echo-path self-responses to predict the room average power response using a local PCA-based approach. Experimental results confirm the effectiveness of the proposed estimation method, which further leads to a room compensation filter design that achieves good sound similarity to the reference system with the ground-truth room average power response, while outperforming systems that do not leverage the proposed estimator. Comment: 5 pages, 7 figures, to appear in IWAENC 202
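One plausible reading of "local PCA-based estimation" is: find the training rooms whose self-responses are most similar to the new measurement, fit a low-rank linear model on that neighbourhood, and predict the power response from it. The sketch below is entirely synthetic (latent room factors, feature dimensions, neighbourhood size, and rank are all invented for illustration) and is not the paper's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic data: rooms vary along a few latent factors (size, absorption,
# ...); both the echo-path self-response features and the room average
# power response are taken to be linear in those factors plus noise.
n, d_lat, d_self, d_pow = 400, 4, 24, 16
T = rng.normal(size=(n, d_lat))            # latent room factors
A = rng.normal(size=(d_lat, d_self))
C = rng.normal(size=(d_lat, d_pow))
X = T @ A + 0.02 * rng.normal(size=(n, d_self))   # self-responses
Y = T @ C + 0.02 * rng.normal(size=(n, d_pow))    # room power responses

def local_pca_predict(x, X, Y, k=40, rank=4):
    # 1) k nearest training rooms in self-response space
    idx = np.argsort(np.linalg.norm(X - x, axis=1))[:k]
    Xl, Yl = X[idx], Y[idx]
    # 2) local PCA of the neighbours, then linear regression in the subspace
    Xc = Xl - Xl.mean(0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    P = Vt[:rank]                          # local principal directions
    B, *_ = np.linalg.lstsq(Xc @ P.T, Yl - Yl.mean(0), rcond=None)
    return Yl.mean(0) + ((x - Xl.mean(0)) @ P.T) @ B

t_new = rng.normal(size=d_lat)             # an unseen room
x_new = t_new @ A                          # its measured self-response
y_true = t_new @ C                         # its true average power response
y_hat = local_pca_predict(x_new, X, Y)
rel_err = np.linalg.norm(y_hat - y_true) / np.linalg.norm(y_true)
print(f"relative prediction error: {rel_err:.3f}")
```

The rank is set to the assumed latent dimension; in a real system it would be chosen by cross-validation on measured rooms.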

    Proceedings of the EAA Spatial Audio Signal Processing symposium: SASP 2019

    International audience

    Array signal processing algorithms for localization and equalization in complex acoustic channels

    The reproduction of realistic soundscapes in consumer electronic applications has been a driving force behind the development of spatial audio signal processing techniques. In order to accurately reproduce or decompose a particular spatial sound field, being able to exploit or estimate the effects of the acoustic environment becomes essential. This requires both an understanding of the source of the complexity in the acoustic channel (the acoustic path between a source and a receiver) and the ability to characterize its spatial attributes. In this thesis, we explore how to exploit or overcome the effects of the acoustic channel for sound source localization and sound field reproduction. The behaviour of a typical acoustic channel can be visualized as a transformation of its free-field behaviour, due to scattering and reflections off the measurement apparatus and the surfaces in a room. These spatial effects can be modelled using the solutions to the acoustic wave equation, yet the physical nature of these scatterers typically results in complex behaviour with frequency. The first half of this thesis explores how to exploit this diversity in the frequency domain for sound source localization, a concept that has not been considered previously. We first extract down-converted subband signals from the broadband audio signal and collate these signals such that the spatial diversity is retained. A signal model is then developed to exploit the channel's spatial information using a signal subspace approach. We show that this concept can be applied to multi-sensor arrays on complex-shaped rigid bodies as well as the special case of binaural localization. In both cases, an improvement in closely spaced source resolution is demonstrated over traditional techniques, through simulations and experiments using a KEMAR manikin. The binaural analysis further indicates that human localization performance in certain spatial regions is limited by the lack of spatial diversity, as suggested by perceptual experiments in the literature. Finally, the possibility of exploiting known inter-subband correlated sources (e.g., speech) for localization in under-determined systems is demonstrated. The second half of this thesis considers reverberation control, where reverberation is modelled as a superposition of sound fields created by a number of spatially distributed sources. We consider the mode/wave-domain description of the sound field and propose modelling the reverberant modes as linear transformations of the desired sound field modes. This is a novel concept, as we consider each mode transformation to be independent of the other modes. This model is then extended to sound field control and used to derive the compensation signals required at the loudspeakers to equalize the reverberation. We show that estimating the reverberant channel and controlling the sound field becomes a single adaptive filtering problem in the mode domain, where the modes can be adapted independently. The performance of the proposed method is compared with existing adaptive and non-adaptive sound field control techniques through simulations. Finally, it is shown that an order-of-magnitude reduction in computational complexity can be achieved while maintaining performance comparable to existing adaptive control techniques.
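The independent per-mode adaptation idea in the second half of the abstract can be illustrated with a toy model: suppose each mode passes through an unknown complex gain that lumps the reverberant transformation, so a scalar NLMS loop per mode can learn the compensating weight in isolation. The single-gain-per-mode simplification, mode count, and step size are illustrative assumptions, not the thesis's actual formulation.

```python
import numpy as np

rng = np.random.default_rng(3)
M = 9                                      # number of controlled modes
# Unknown per-mode "reverberant" gains, kept away from zero
mag = rng.uniform(0.5, 2.0, M)
ph = rng.uniform(-np.pi, np.pi, M)
h = mag * np.exp(1j * ph)

g = np.zeros(M, dtype=complex)             # per-mode compensation estimates
mu = 0.5                                   # NLMS step size

for _ in range(200):
    a = rng.normal(size=M) + 1j * rng.normal(size=M)  # desired mode coefficients
    y = h * a                              # modes actually produced in the room
    e = a - g * y                          # per-mode deviation from the target
    # Independent normalized-LMS update for every mode in parallel
    g += mu * np.conj(y) * e / (np.abs(y) ** 2 + 1e-8)

# Driving the array with g * a now yields h * g * a ~ a in every mode
residual = np.max(np.abs(g * h - 1.0))
print(f"worst-case per-mode residual: {residual:.2e}")
```

Because each mode adapts on its own scalar, there is no matrix inversion or cross-mode coupling in the update, which is the source of the computational savings the abstract mentions.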