22 research outputs found

    Real-Time Perceptual Moving-Horizon Multiple-Description Audio Coding

    A novel scheme for perceptual coding of audio for robust and real-time communication is designed and analyzed. As an alternative to PCM, DPCM, and more general noise-shaping converters, we propose psychoacoustically optimized noise-shaping quantizers based on the moving-horizon principle. In moving-horizon quantization, the encoder is allowed a look-ahead of a few samples, which makes it possible to shape the quantization noise better and thereby reduce the resulting distortion below what is possible with conventional noise-shaping techniques. It is first shown that significant gains over linear PCM can be obtained without introducing a delay and without requiring postprocessing at the decoder; i.e., the encoded samples can be stored as, e.g., 16-bit linear PCM on CD-ROMs and played out on standards-compliant CD players. We then show that multiple-description coding can be combined with moving-horizon quantization in order to combat possible erasures on the wireless link without introducing additional delays.
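As a toy illustration of the moving-horizon principle (not the paper's optimized algorithm), the encoder below brute-forces every combination of quantizer levels over a short look-ahead window, scores each candidate by the energy of the error after an FIR shaping filter `w` (a stand-in for the psychoacoustic weighting), and commits only the first sample before sliding the window:

```python
import itertools
import numpy as np

def moving_horizon_quantize(x, levels, horizon, w):
    """Quantize x to the given levels using a brute-force search over
    `horizon` future samples; the error is scored after shaping with
    the FIR weights w (a stand-in for a psychoacoustic filter)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    q = np.zeros(n)
    for i in range(n):
        h = min(horizon, n - i)
        best_cost, best_first = None, None
        # Enumerate all level choices over the look-ahead window.
        for cand in itertools.product(levels, repeat=h):
            seg = np.concatenate([q[:i], cand])
            shaped = np.convolve(seg - x[:i + h], w)[:i + h]
            cost = np.sum(shaped[i:] ** 2)   # shaped-error energy ahead
            if best_cost is None or cost < best_cost:
                best_cost, best_first = cost, cand[0]
        q[i] = best_first  # commit only the first sample, then slide
    return q
```

With a trivial shaping filter `w = [1.0]` and a horizon of one sample, this degenerates to nearest-level rounding; longer horizons and non-trivial `w` let the search trade error between neighbouring samples.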

    An investigation into the real-time manipulation and control of three-dimensional sound fields

    This thesis describes a system that can be used for the decoding of a three-dimensional audio recording over headphones or two or more speakers. A literature review of psychoacoustics and a review (both historical and current) of surround sound systems is carried out. The need for a platform-independent system is discussed, and a system based on an amalgamation of the Ambisonics, binaural and transaural reproduction schemes is proposed. For this system to function optimally, each of the three schemes must provide the listener with the relevant psychoacoustic cues. The conversion from a five-speaker ITU array to a binaural decode is well documented, but pair-wise panning algorithms will not produce the correct lateralisation parameters at the ears of a centrally seated listener. Although Ambisonics has been well researched, no one has yet produced a psychoacoustically optimised decoder for the standard irregular five-speaker array specified by the ITU: the original theory and example solutions proposed by Gerzon and Barton (1992), known as a Vienna decoder, were produced before the standard had been decided on. In this work, the original work of Gerzon and Barton (1992) is analysed and shown to be suboptimal, exhibiting a high/low-frequency decoder mismatch due to the method of solving the set of non-linear simultaneous equations. A method based on the Tabu search algorithm is applied to the Vienna decoder problem, is shown to provide superior results to those of Gerzon and Barton (1992), and is capable of producing multiple solutions to the Vienna decoder problem. During the writing of this report, Craven (2003) showed how 4th-order circular harmonics (as used in Ambisonics) can be used to create a frequency-independent panning law for the five-speaker ITU array, and this report also shows how the Tabu search algorithm can be used to optimise these decoders further. A new method is then demonstrated using the Tabu search algorithm coupled with lateralisation parameters extracted from a binaural simulation of the Ambisonic system to be optimised (as these are the parameters that the Vienna system approximates). This method can then be altered to take head rotations directly into account, which has been shown to be an important psychoacoustic cue in the localisation of a sound source (Spikofski et al., 2001), and is also shown to be useful in differentiating between decoders optimised using the Tabu search form of the Vienna optimisations, as no objective measure had previously been suggested. Optimisations for both binaural and transaural reproduction are then discussed so as to maximise the performance of generic (i.e. not individualised) HRTF data using inverse filtering methods, and a technique is shown that minimises the amount of frequency-dependent regularisation needed when calculating cross-talk cancellation filters.
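The Tabu search used for the decoder optimisations follows the standard pattern: move greedily among neighbours, but forbid reversing recent moves so the search can escape local minima. A minimal sketch with a generic cost function (any callable stands in for the actual decoder objective, e.g. the localisation-vector mismatch; aspiration criteria and restarts are omitted):

```python
def tabu_search(cost, x0, step=0.05, tabu_len=10, iters=200):
    """Minimal Tabu search: perturb one coordinate per iteration,
    forbid undoing recent moves, and always keep the best-so-far."""
    def apply_move(x, move):
        i, s = move
        y = list(x)
        y[i] += s
        return y

    x = list(x0)
    best, best_cost = list(x0), cost(x0)
    tabu = []  # FIFO of recent moves; reversing them is forbidden
    for _ in range(iters):
        moves = [(i, s) for i in range(len(x)) for s in (step, -step)
                 if (i, -s) not in tabu]
        move = min(moves, key=lambda m: cost(apply_move(x, m)))
        x = apply_move(x, move)
        tabu.append(move)
        if len(tabu) > tabu_len:
            tabu.pop(0)
        c = cost(x)
        if c < best_cost:
            best, best_cost = list(x), c
    return best, best_cost
```

Because the least-bad move is always taken even when it worsens the current point, the search can leave a local minimum, while the best-so-far solution is retained.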

    Progressive Perceptual Audio Rendering of Complex Scenes

    Despite recent advances, including sound source clustering and perceptual auditory masking, high-quality rendering of complex virtual scenes with thousands of sound sources remains a challenge. Two major bottlenecks appear as the scene complexity increases: the cost of clustering itself, and the cost of pre-mixing source signals within each cluster. In this paper, we first propose an improved hierarchical clustering algorithm that remains efficient for large numbers of sources and clusters while providing progressive refinement capabilities. We then present a lossy pre-mixing method based on a progressive representation of the input audio signals and the perceptual importance of each sound source. Our quality-evaluation user tests indicate that the recently introduced audio saliency map is inappropriate for this task. Consequently, we propose a "pinnacle", loudness-based metric, which gives the best results for a variety of target computing budgets. We also performed a perceptual pilot study which indicates that in audio-visual environments it is better to allocate more clusters to visible sound sources, and we propose a new clustering metric using this result. Together, these three solutions allow our system to provide high-quality rendering of thousands of 3D sound sources on a "gamer-style" PC.
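The importance-driven budgeting idea can be illustrated with a toy allocation rule. The paper's actual "pinnacle" metric and clustering algorithm are more involved; here the per-source loudness values are simply taken as given and a fixed processing budget is split proportionally:

```python
def allocate_budget(loudness, total):
    """Split a fixed processing budget across sound sources in
    proportion to their loudness (a toy stand-in for the paper's
    importance-driven allocation)."""
    s = sum(loudness)
    return [int(total * l / s) for l in loudness]
```

For example, `allocate_budget([2.0, 1.0, 1.0], 400)` gives `[200, 100, 100]`, so the loudest source receives half of the budget.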

    Lossy Distortion as a Musical Effect

    Lossy audio compression is a digital process that uses models of human hearing to remove parts of the sound deemed less important, in order to compress audio to much smaller file sizes. The MP3 encoding process, one of the most famous lossy audio compression formats, can impart audio with a distinctive watery, muffled sound at higher levels of compression. This sound, which I call "lossy distortion," can be used as a musical effect to inspire nostalgia for early digital audio, or for a more abstract, ethereal sound. In analyzing creative uses of lossy distortion and existing plugins for lossy distortion, I identify some desirable features that are lacking from existing plugins. To fill these gaps, I built two lossy distortion plugins. One, called Empy, gives the user control over a wide variety of lossy distortion sounds. The other, Fish, emulates a particular sound of lossy distortion that other plugins struggle to achieve, by modifying a popular piece of MP3 encoding software. In their sound and user interface, these plugins explore new ground in the rapidly developing field of lossy distortion plugins.
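A crude imitation of the "watery" artefact (not the algorithm of Empy or Fish, and far simpler than a real MP3 encoder's psychoacoustic bit allocation) is to keep only the loudest FFT bins in each overlapped frame and zero the rest:

```python
import numpy as np

def lossy_distortion(x, frame=512, keep=24):
    """Toy 'lossy distortion': per overlapped Hann frame, keep only
    the `keep` largest-magnitude FFT bins, zero the rest, and
    overlap-add.  Hann at 50% overlap sums to ~1, so with all bins
    kept the output is close to the input."""
    x = np.asarray(x, dtype=float)
    out = np.zeros(len(x))
    win = np.hanning(frame)
    hop = frame // 2
    for start in range(0, len(x) - frame + 1, hop):
        spec = np.fft.rfft(x[start:start + frame] * win)
        spec[np.argsort(np.abs(spec))[:-keep]] = 0.0  # drop quiet bins
        out[start:start + frame] += np.fft.irfft(spec, frame)
    return out
```

Lowering `keep` strengthens the muffled, swirling character, loosely analogous to encoding at a lower bit rate.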

    Object coding of music using expressive MIDI

    Structured audio uses a high-level representation of a signal to produce audio output. When it was first introduced in 1998, creating a structured audio representation from an audio signal was beyond the state of the art. Inspired by object coding and structured audio, we present a system to reproduce audio using Expressive MIDI, with high-level parameters used to represent pitch expression from an audio signal. This allows a low bit-rate MIDI sketch of the original audio to be produced. We examine optimisation techniques which may be suitable for inferring Expressive MIDI parameters from estimated pitch trajectories, considering the effect of data codings on the difficulty of optimisation. We look at some less common Gray codes and examine their effect on algorithm performance on standard test problems. We build an Expressive MIDI system, estimating parameters from audio and synthesising output from those parameters. When the parameter estimation succeeds, we find that the system produces note pitch trajectories which match the source audio to within 10 pitch cents. We consider the quality of the system in terms of both parameter estimation and the final output, finding that improvements to core components (audio segmentation and pitch estimation, both active research fields) would produce a better system. We examine the current state of the art in pitch estimation, and find that some estimators produce high-precision estimates but are prone to harmonic errors, whilst other estimators produce fewer harmonic errors but are less precise. Inspired by this, we produce a novel pitch estimator combining the outputs of existing estimators.
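The combination strategy in the final sentence can be sketched as follows: a robust but coarse estimate resolves which harmonic or sub-harmonic the precise estimator locked onto, and the precise value is then corrected by that ratio. The ratio set here is an illustrative assumption, not the thesis's actual method:

```python
def combine_pitch(precise_hz, robust_hz, ratios=(0.5, 1.0, 2.0, 3.0)):
    """Fuse two pitch estimates: use the robust (but coarse) estimate
    to decide which harmonic/sub-harmonic the precise estimator
    locked onto, then correct the precise value by that ratio."""
    best = min(ratios, key=lambda r: abs(precise_hz / r - robust_hz))
    return precise_hz / best
```

For instance, if a precise estimator reports 880 Hz but a robust estimator reports roughly 445 Hz, the octave error is detected and the fused estimate becomes 440 Hz.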

    Automatic annotation of musical audio for interactive applications

    As machines become more and more portable and part of our everyday life, it becomes apparent that developing interactive and ubiquitous systems is an important aspect of new music applications created by the research community. We are interested in developing a robust layer for the automatic annotation of audio signals, to be used in various applications, from music search engines to interactive installations, and in various contexts, from embedded devices to audio content servers. We propose adaptations of existing signal processing techniques to a real-time context. Amongst these annotation techniques, we concentrate on low- and mid-level tasks such as onset detection, pitch tracking, tempo extraction and note modelling. We present a framework to extract these annotations and evaluate the performance of different algorithms. The first task is to detect onsets and offsets in audio streams within short latencies. The segmentation of audio streams into temporal objects enables various manipulations and analyses of metrical structure. The evaluation of different algorithms and their adaptation to real time are described. We then tackle the problem of fundamental frequency estimation, again trying to reduce both the delay and the computational cost. Different algorithms are implemented for real time and experimented on monophonic recordings and complex signals. Spectral analysis can be used to label the temporal segments, and the estimation of higher-level descriptions is approached. Techniques for the modelling of note objects and the localisation of beats are implemented and discussed. Applications of our framework include live and interactive music installations and, more generally, tools for composers and sound engineers. Speed optimisations may bring a significant improvement to various automated tasks, such as automatic classification and recommendation systems. We describe the design of our software solution, for our research purposes and in view of its integration within other systems. (EU-FP6-IST-507142 project SIMAC (Semantic Interaction with Music Audio Contents); EPSRC grants GR/R54620 and GR/S75802/01.)
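As one concrete example of the low-level annotation tasks mentioned (and only an example; the thesis evaluates several detection functions), a spectral-flux onset detection function sums the positive magnitude increases between successive short-time spectra:

```python
import numpy as np

def onset_flux(x, frame=1024, hop=512):
    """Spectral-flux onset detection function: one value per frame,
    the sum of positive magnitude increases since the previous frame.
    Peaks in the returned curve indicate likely note onsets."""
    win = np.hanning(frame)
    prev, flux = None, []
    for start in range(0, len(x) - frame + 1, hop):
        mag = np.abs(np.fft.rfft(x[start:start + frame] * win))
        if prev is not None:
            flux.append(np.sum(np.maximum(mag - prev, 0.0)))
        prev = mag
    return np.array(flux)
```

A real-time system would post-process this curve with an adaptive threshold and peak picking; the short frame and hop keep the detection latency low.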

    ECG compression for Holter monitoring

    Cardiologists can gain useful insight into a patient's condition when they are able to correlate the patient's symptoms and activities. For this purpose, a Holter monitor is often used: a portable electrocardiogram (ECG) recorder worn by the patient for a period of 24-72 hours. Preferably, the monitor is not cumbersome to the patient, and thus it should be designed to be as small and light as possible; however, the storage requirements for such a long signal are very large and can significantly increase the recorder's size and cost, so signal compression is often employed. At the same time, the decompressed signal must contain enough detail for the cardiologist to be able to identify irregularities. "Lossy" compressors may obscure such details, whereas a "lossless" compressor preserves the signal exactly as captured. The purpose of this thesis is to develop a platform upon which a Holter monitor can be built, including a hardware-assisted lossless compression method, in order to avoid the signal-quality penalties of a lossy algorithm. The objective is to develop and implement a low-complexity lossless ECG encoding algorithm capable of at least a 2:1 compression ratio in an embedded system for use in a Holter monitor. Different lossless compression techniques were evaluated in terms of coding efficiency as well as suitability for ECG waveforms, random access within the signal, and complexity of the decoding operation. To reduce the physical circuit size, a System On a Programmable Chip (SOPC) design was utilized. A coder based on a library of linear predictors and Rice coding was chosen, and was found to give a compression ratio of at least 2:1, and as high as 3:1, on the real-world signals tested, while having a low decoder complexity and fast random access to arbitrary parts of the signal. In the hardware-assisted implementation, encoding was a factor of four to five faster than a software encoder running on the same CPU, while allowing the CPU to perform other tasks during the encoding process.
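The chosen coder pairs a library of linear predictors with Rice coding. A minimal sketch, assuming a first-difference predictor (the simplest member of such a library) and zig-zag mapping of the signed residuals; the thesis's actual predictor selection and bitstream layout are more elaborate:

```python
def zigzag(r):
    # Map signed residuals to unsigned codes: 0,-1,1,-2,2 -> 0,1,2,3,4
    return 2 * r if r >= 0 else -2 * r - 1

def rice_encode(residuals, k):
    """Rice-code residuals as a bit string: unary quotient, a '0'
    terminator, then a k-bit binary remainder (k >= 1)."""
    out = []
    for r in residuals:
        u = zigzag(r)
        q, rem = u >> k, u & ((1 << k) - 1)
        out.append("1" * q + "0" + format(rem, "0{}b".format(k)))
    return "".join(out)

def encode_ecg(samples, k=2):
    """First-difference prediction, then Rice coding of the residuals.
    Small residuals (typical for slowly varying ECG) give short codes."""
    residuals = [samples[0]] + [b - a for a, b in zip(samples, samples[1:])]
    return rice_encode(residuals, k)
```

Rice coding is attractive here because both encoder and decoder need only shifts and masks, which maps well onto a small hardware implementation.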

    Audio Programming Interfaces in Real-time Context

    In this thesis, three popular, generally available audio programming interfaces, ALSA, Core Audio, and WASAPI, were compared. A modified real-time Karplus-Strong plucked-string model application was implemented using all three APIs. In order to compare performance, the wavetable of the plucked-string model was effectively replaced by the unknown delay of the system. In the tests, a short burst of white noise was written to the physical audio output of the sound card, which was hardwired with a short cable to the input port of the same device. The input stream was then acquired by the application, stored in an additional buffer for further analysis, and also sent through a low-pass filter back to the output device in order to create a loop. The noise burst in the loop acts similarly to a plucked string after its initial excitation. As the model runs in real time, the latency of the whole system appears as the length of the wavetable, and these latencies were compared. To guarantee a fair comparison, the applications and corresponding operating systems were installed and run natively on the same Apple hardware, without additional virtualization layers, and the measurement points were chosen so that the same ones could be used for all implementations. The runs were recorded and the latencies were determined by analyzing the recordings. By compensating for the known effects of buffer size and sample rate, the overhead latency characteristic of each implementation was extracted from the results; the overhead latencies were found to be within a few milliseconds of each other. The smallest overhead latencies were measured from the ALSA implementation at a 96 kHz sample rate. Overall, ALSA gave the best performance, with WASAPI nearly as good. The largest overhead latencies were measured from the Core Audio implementation at both the 44.1 kHz and 48 kHz sample rates. Additionally, the APIs were compared in terms of major existing API design recommendations, considering the number of methods, the ease of adoption, and the availability of documentation. The steepness of the learning curve of an API can be estimated by counting the number of methods the programmer is exposed to; compared with the other two, ALSA was found to expose a significantly larger number of methods.
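The latency measurement can be illustrated with a simplified offline version: rather than reading the latency off the period of the Karplus-Strong "string" as the thesis does, this sketch cross-correlates the emitted noise burst with the recorded input to find the round-trip delay:

```python
import numpy as np

def loop_latency(played, recorded, fs):
    """Estimate round-trip latency (seconds) by cross-correlating the
    emitted burst with the recorded input; the correlation peak gives
    the lag in samples."""
    corr = np.correlate(recorded, played, mode="full")
    lag = int(np.argmax(corr)) - (len(played) - 1)
    return lag / fs
```

A white-noise burst is a good probe for this because its autocorrelation is sharply peaked, so the correlation maximum localises the delay to a single sample.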

    Movements in Binaural Space: Issues in HRTF Interpolation and Reverberation, with applications to Computer Music

    This thesis deals broadly with the topic of binaural audio. After reviewing the literature, a reappraisal of the minimum-phase plus linear delay model for HRTF representation and interpolation is offered. A rigorous analysis of threshold-based phase unwrapping is also performed. The results and conclusions drawn from these analyses motivate the development of two novel methods for HRTF representation and interpolation. Empirical data is used directly in a Phase Truncation method. A Functional Model for phase, based on the psychoacoustical nature of Interaural Time Differences, is used in the second method. Both methods are validated; most significantly, both perform better than a minimum-phase method in subjective testing. The accurate, artefact-free dynamic source processing afforded by these methods is harnessed in a binaural reverberation model, based on an early-reflection image model and a Feedback Delay Network diffuse field, with accurate interaural coherence. In turn, these flexible environmental processing algorithms are used in the development of a multi-channel binaural application, which allows the audition of multi-channel setups in headphones. Both source and listener are dynamic in this paradigm, and a GUI is offered for intuitive use of the application. HRTF processing is thus re-evaluated and updated after a review of accepted practice, and novel solutions are presented and validated. Binaural reverberation is recognised as a crucial tool for convincing artificial spatialisation, and is developed on similar principles. Emphasis is placed on transparency of development practices, with the aim of wider dissemination and uptake of binaural technology.
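As an illustration of a functional ITD model of the kind mentioned (the thesis's actual phase model may differ), Woodworth's classic spherical-head formula gives the interaural time difference as a function of source azimuth; the head radius and speed of sound below are assumed values:

```python
import math

def woodworth_itd(azimuth_deg, head_radius=0.0875, c=343.0):
    """Woodworth spherical-head ITD (seconds) for a source at the
    given azimuth (0 = straight ahead, valid up to +/-90 degrees):
    ITD = (r / c) * (theta + sin(theta))."""
    theta = math.radians(azimuth_deg)
    return head_radius / c * (theta + math.sin(theta))
```

For a source directly to the side (90 degrees) this gives roughly 0.66 ms, consistent with commonly quoted maximum human ITDs.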