Reviews on Technology and Standard of Spatial Audio Coding
Market demand for more immersive entertainment media has motivated the delivery of three-dimensional (3D) audio content to home consumers through Ultra High Definition TV (UHDTV), the next generation of TV broadcasting, in which spatial audio coding plays a fundamental role. This paper reviews the fundamental concepts of spatial audio coding, covering technology, standards, and applications. The basic principle of object-based audio reproduction is also elaborated and compared with the traditional channel-based approach, to provide a good understanding of this popular interactive reproduction system, which gives end users the flexibility to render their own preferred audio composition.
Keywords: spatial audio, audio coding, multi-channel audio signals, MPEG standard, object-based audio
Resynthesis of Acoustic Scenes Combining Sound Source Separation and WaveField Synthesis Techniques
Source separation has been a subject of intense research in many signal processing applications, ranging from speech processing to medical image analysis. Applied to spatial audio systems, it can be used to overcome a fundamental limitation in 3D scene resynthesis: the need for the independent signal of each source to be available. Wave Field Synthesis (WFS) is a spatial sound reproduction system that can synthesize an acoustic field by means of loudspeaker arrays and is also capable of positioning several sources in space. However, obtaining the individual signals corresponding to these sources is often a difficult problem. In this work, we propose to use sound source separation techniques to obtain the different tracks from stereo and mono mixtures. Several separation methods have been implemented and tested, one of them developed by the author. Although existing algorithms are far from hi-fi quality, subjective tests show that an optimal separation is not necessary to obtain acceptable results in 3D scene reproduction.
Cobos Serrano, M. (2007). Resynthesis of Acoustic Scenes Combining Sound Source Separation and WaveField Synthesis Techniques. http://hdl.handle.net/10251/12515
Modification of multichannel audio for non-standard loudspeaker configurations
In this thesis, analysis and decomposition methods for multichannel audio are studied. The objective of the work is to transform multichannel recordings for new reproduction systems so that the spatial properties of the sound are preserved. Spatial hearing of the human auditory system, signal-based similarity and localization measures, and source separation methods from information technology are described as background theory. Different multichannel audio transform methods from the literature are then reviewed.
The experimental part of the work starts with an analysis of DVD recordings, carried out to gain information about the production methods of such recordings for the further development of audio transform methods. The test reveals that the three frontal channels only rarely share common sound components with the two rear channels. The properties of compact loudspeaker systems are investigated in two listening tests. The first test studies the differences between three-channel loudspeaker layouts that exploit the reflections of sound waves from room boundaries. The second test applies three transform methods known from the literature to widen the spatial impression of a three-channel compact loudspeaker system in comparison with a reference stereo system. These methods are a stereo signal transform based on signal powers and interchannel cross-correlations, a primary-ambient signal decomposition based on principal component analysis (PCA), and directional audio coding (DirAC). The test subjects ranked the methods in this descending order of preference, and two of the methods were found to improve the studied spatial attributes.
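The PCA-based primary-ambient decomposition mentioned above can be illustrated with a minimal sketch. This is an assumption-laden toy, not the thesis's implementation: real systems apply the decomposition per time-frequency tile rather than broadband, and the function name is illustrative.

```python
import numpy as np

def pca_primary_ambient(left, right):
    """Split a stereo pair into primary (correlated) and ambient
    (uncorrelated) parts via PCA of the 2x2 channel covariance.

    Broadband toy sketch; practical systems work per time-frequency tile.
    """
    x = np.stack([left, right])             # shape (2, n_samples)
    cov = x @ x.T / x.shape[1]              # 2x2 covariance estimate
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    v = eigvecs[:, -1]                      # dominant direction = primary axis
    primary = np.outer(v, v @ x)            # projection onto primary axis
    ambient = x - primary                   # residual = ambient estimate
    return primary, ambient
```

By construction the two parts sum back to the input, and for a strongly correlated stereo pair most of the energy lands in the primary component.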
Proceedings of the EAA Spatial Audio Signal Processing symposium: SASP 2019
Application of sound source separation methods to advanced spatial audio systems
This thesis is related to the field of Sound Source Separation (SSS). It addresses the development
and evaluation of these techniques for their application in the resynthesis of high-realism sound scenes by
means of Wave Field Synthesis (WFS). Because the vast majority of audio recordings are preserved in
two-channel stereo format, special up-converters are required to use advanced spatial audio reproduction
formats such as WFS. This is because WFS needs the original source signals to be available in order to
accurately synthesize the acoustic field inside an extended listening area. Thus, object-based mixing is
required.
Source separation problems in digital signal processing are those in which several signals have been mixed
together and the objective is to recover the original signals. SSS algorithms can therefore be applied
to existing two-channel mixtures to extract the different objects that compose the stereo scene. Unfortunately,
most stereo mixtures are underdetermined, i.e., there are more sound sources than audio channels. This
condition makes the SSS problem especially difficult, and stronger assumptions have to be made, often related to
the sparsity of the sources under some signal transformation.
This thesis focuses on the application of SSS techniques to the spatial sound reproduction field, and its
contributions can be categorized within these two areas. First, two underdetermined SSS methods are
proposed to deal efficiently with the separation of stereo sound mixtures. These techniques are based on a
multi-level thresholding segmentation approach, which enables fast and unsupervised separation of
sound sources in the time-frequency domain. Although both techniques rely on the same type of clustering,
the features considered by each are related to different localization cues, enabling the separation
of either instantaneous or real mixtures. Additionally, two post-processing techniques aimed at
improving the isolation of the separated sources are proposed. The performance achieved by
several SSS methods in the resynthesis of WFS sound scenes is then evaluated by means of
listening tests, paying special attention to the change observed in the perceived spatial attributes.
Although the estimated sources are distorted versions of the original ones, the masking effects
involved in their spatial remixing make artifacts less perceptible, which improves the overall
assessed quality. Finally, some novel developments related to the application of time-frequency
processing to source localization and enhanced sound reproduction are presented.
Cobos Serrano, M. (2009). Application of sound source separation methods to advanced spatial audio systems [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/8969
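The underdetermined stereo separation described above relies on source sparsity in the time-frequency domain. The following is a minimal illustrative sketch of that idea, not the thesis's multi-level thresholding algorithm: it assigns each STFT bin to a source by a crude banding of the interchannel level ratio, which only works for instantaneous (panned) mixtures in which one source dominates each bin. The function name and the equal-band clustering are assumptions for illustration.

```python
import numpy as np
from scipy.signal import stft, istft

def separate_by_level_difference(left, right, n_sources=2, fs=44100):
    """Toy binary-mask separation for an instantaneous stereo mixture.

    Each time-frequency bin is assigned to the source whose panning
    (interchannel level ratio) it best matches -- valid only under the
    sparsity assumption that one source dominates each bin.
    """
    _, _, L = stft(left, fs=fs)
    _, _, R = stft(right, fs=fs)
    eps = 1e-12
    ratio = np.abs(R) / (np.abs(L) + np.abs(R) + eps)  # 0..1 panning estimate
    # Crude clustering: split the ratio range into n_sources equal bands.
    edges = np.linspace(0.0, 1.0, n_sources + 1)
    estimates = []
    for k in range(n_sources):
        mask = (ratio >= edges[k]) & (ratio <= edges[k + 1])
        _, est = istft(mask * (L + R), fs=fs)  # mono estimate of source k
        estimates.append(est)
    return estimates
```

With one source panned hard left and another hard right, the two returned estimates recover the sources approximately; overlapping bins and boundary effects are exactly the artifacts that the post-processing techniques mentioned above aim to reduce.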
Proceedings of the second "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST'14)
The implicit objective of the biennial "international - Traveling Workshop on
Interactions between Sparse models and Technology" (iTWIST) is to foster
collaboration between international scientific teams by disseminating ideas
through both specific oral/poster presentations and free discussions. For its
second edition, the iTWIST workshop took place in the medieval and picturesque
town of Namur in Belgium, from Wednesday August 27th till Friday August 29th,
2014. The workshop was conveniently located in "The Arsenal" building within
walking distance of both hotels and town center. iTWIST'14 gathered about
70 international participants and featured 9 invited talks, 10 oral
presentations, and 14 posters on the following themes, all related to the
theory, application and generalization of the "sparsity paradigm":
Sparsity-driven data sensing and processing; Union of low dimensional
subspaces; Beyond linear and convex inverse problems; Matrix/manifold/graph
sensing/processing; Blind inverse problems and dictionary learning; Sparsity
and computational neuroscience; Information theory, geometry and randomness;
Complexity/accuracy tradeoffs in numerical methods; Sparsity? What's next?;
Sparse machine learning and inference.
Comment: 69 pages, 24 extended abstracts, iTWIST'14 website: http://sites.google.com/site/itwist1
Microphone Array Speech Enhancement Via Beamforming Based Deep Learning Network
In-car speech enhancement is an application of microphone array speech enhancement to a particular acoustic environment. Speech enhancement inside moving cars remains an interesting topic, and researchers work to create modules that increase the quality and intelligibility of speech in cars. Passenger dialogue inside the car, the sound of other equipment, and a wide range of interference effects are the major challenges in the task of in-car speech separation. To address these challenges, a novel Beamforming-based Deep Learning Network (Bf-DLN) is proposed for speech enhancement. First, the captured microphone array signals are pre-processed using an adaptive beamforming technique, Linearly Constrained Minimum Variance (LCMV). Next, the proposed method uses a time-frequency representation to transform the pre-processed data into an image: the smoothed pseudo-Wigner-Ville distribution (SPWVD) converts the time-domain speech inputs into images. A convolutional deep belief network (CDBN) extracts the most pertinent features from these transformed images, and the Enhanced Elephant Heard Algorithm (EEHA) selects the desired source while eliminating the interfering sources. Experimental results demonstrate the effectiveness of the proposed strategy in removing background noise from the original speech signal. The proposed strategy outperforms existing methods in terms of PESQ, STOI, SSNRI, and SNR. The proposed Bf-DLN achieves a maximum PESQ of 1.98, whereas existing models such as the two-stage Bi-LSTM, DNN-C, and GCN reach 1.82, 1.75, and 1.68, respectively; the PESQ of the proposed method is 1.75%, 3.15%, and 4.22% better than the existing GCN, DNN-C, and Bi-LSTM techniques. The efficacy of the proposed method is then validated by experiments.
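The LCMV pre-processing step named above has a standard closed form: minimize the output power w^H R w subject to the linear constraints C^H w = f, giving w = R^{-1} C (C^H R^{-1} C)^{-1} f. A minimal sketch of that weight computation follows; the function name and test geometry are illustrative, and this is the generic textbook beamformer rather than the paper's full pipeline.

```python
import numpy as np

def lcmv_weights(R, C, f):
    """Linearly constrained minimum variance beamformer weights.

    Minimizes w^H R w subject to C^H w = f, via the closed form
    w = R^{-1} C (C^H R^{-1} C)^{-1} f.

    R : (M, M) spatial covariance of the microphone signals
    C : (M, K) constraint steering vectors (columns)
    f : (K,)   desired responses (e.g. 1 toward the talker, 0 toward noise)
    """
    Ri_C = np.linalg.solve(R, C)  # R^{-1} C without forming an explicit inverse
    return Ri_C @ np.linalg.solve(C.conj().T @ Ri_C, f)
```

With a single distortionless constraint (K = 1, f = [1]) this reduces to the familiar MVDR beamformer.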
Investigating the build-up of precedence effect using reflection masking
The auditory processing level involved in the buildup of precedence [Freyman et al., J. Acoust. Soc. Am. 90, 874-884 (1991)] has been investigated here by employing reflection masked threshold (RMT) techniques. Given that RMT techniques are generally assumed to address lower levels of auditory signal processing, such an approach represents a bottom-up approach to the buildup of precedence. Three conditioner configurations measuring a possible buildup of reflection suppression were compared to the baseline RMT for four reflection delays ranging from 2.5 to 15 ms. No buildup of reflection suppression was observed for any of the conditioner configurations. Buildup of template (a decrease in RMT for two of the conditioners), on the other hand, was found to be delay dependent. For five of six listeners, with reflection delays of 2.5 and 15 ms, RMT decreased relative to the baseline; for 5- and 10-ms delays, no change in threshold was observed. It is concluded that the low-level auditory processing involved in RMT is not sufficient to realize a buildup of reflection suppression. This confirms suggestions that higher-level processing is involved in precedence-effect buildup. The observed enhancement of reflection detection (RMT) may contribute to active suppression at higher processing levels.
Synthesis of Spatially Extended Sources in Virtual Reality Audio
This thesis details a real-time implementation of spatial extent synthesis for virtual sound source objects constructed from mono sound signals and source object geometries. Techniques for distributing sound components across basic and mesh-like geometry surfaces are discussed. A virtual-world audio environment supporting a listener avatar and various spatially extensive sound sources is described, and forms of source-to-listener distance attenuation are outlined along with their roles in the localization of spatially extensive sound sources. The implementation takes the form of an audio plug-in, whose behavior, usage details, and compatible host applications are described.