
    Algorithms and architectures for the multirate additive synthesis of musical tones

    In classical Additive Synthesis (AS), the output signal is the sum of a large number of independently controllable sinusoidal partials. The advantages of AS for music synthesis are well known, as is its high computational cost. This thesis is concerned with the computational optimisation of AS by multirate DSP techniques. In note-based music synthesis, the expected bounds of the frequency trajectory of each partial in a finite-lifecycle tone determine critical, time-invariant, partial-specific sample rates which are lower than the conventional rate (in excess of 40 kHz), resulting in computational savings. Scheduling and interpolation (to suppress quantisation noise) are required for many sample rates, leading to the concept of Multirate Additive Synthesis (MAS), where these overheads are minimised by synthesis filterbanks which quantise the set of available sample rates. Alternative AS optimisations are also appraised. It is shown that a hierarchical interpretation of the QMF filterbank preserves AS generality and permits efficient, context-specific adaptation of computation to the required note dynamics. Practical QMF implementation and the modifications necessary for MAS are discussed. QMF transition widths can be logically excluded from the MAS paradigm, at a cost; therefore a novel filterbank is evaluated in which transition widths are physically excluded. Benchmarking of a hypothetical orchestral synthesis application provides a tentative quantitative analysis of the performance improvement of MAS over AS. The mapping of MAS into VLSI is opened by a review of sine computation techniques. The functional specification and high-level design of a conceptual MAS Coprocessor (MASC) is then developed; it functions with high autonomy in a loosely coupled master-slave configuration with a host CPU which executes the filterbanks in software. Standard hardware optimisation techniques such as pipelining are used, based upon the principle of an application-specific memory hierarchy which maximises MASC throughput.
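    As background to the abstract above, the following is a minimal NumPy sketch (not the thesis's MAS architecture) contrasting classical additive synthesis, where every partial runs at the full output rate, with the multirate idea of generating a slowly varying partial at a reduced, partial-specific rate and interpolating it up to the output rate. All rates, envelopes and partial counts are illustrative assumptions.

```python
# Illustrative sketch (not the thesis's MAS pipeline): classical additive
# synthesis as a sum of independently controlled sinusoidal partials, plus a
# naive multirate variant in which one slowly varying partial is generated at
# a lower, partial-specific sample rate and interpolated up to the output
# rate.  All rates, envelopes and partial counts are assumptions.
import numpy as np
from scipy.signal import resample_poly   # polyphase interpolation

FS = 48000                 # conventional output sample rate (> 40 kHz)
DUR = 1.0
n = int(FS * DUR)
t = np.arange(n) / FS

def partial(freq_hz, amp_env):
    """One sinusoidal partial with a time-varying amplitude envelope."""
    return amp_env * np.sin(2 * np.pi * freq_hz * t)

# Classical AS: every partial is synthesised at the full output rate.
f0 = 220.0
env = np.linspace(1.0, 0.0, n)                       # simple decaying envelope
tone = sum(partial(f0 * k, env / k) for k in range(1, 9))

# Multirate idea: a 220 Hz partial fits comfortably below the Nyquist limit of
# FS/16, so it can be generated at that reduced rate and interpolated back up
# (the polyphase interpolator suppresses the imaging noise) before summation.
fs_low = FS // 16
t_low = np.arange(int(fs_low * DUR)) / fs_low
upsampled = resample_poly(np.sin(2 * np.pi * f0 * t_low), 16, 1)[:n]
```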

    Re-Sonification of Objects, Events, and Environments

    Digital sound synthesis allows the creation of a great variety of sounds. Focusing on interesting or ecologically valid sounds for music, simulation, aesthetics, or other purposes limits the otherwise vast digital audio palette. Tools for creating such sounds vary from arbitrary methods of altering recordings to precise simulations of vibrating objects. In this work, methods of sound synthesis by re-sonification are considered. Re-sonification, herein, refers to the general process of analyzing, possibly transforming, and resynthesizing or reusing recorded sounds in meaningful ways, to convey information. Applied to soundscapes, re-sonification is presented as a means of conveying activity within an environment. Applied to the sounds of objects, this work examines modeling the perception of objects as well as their physical properties and the ability to simulate interactive events with such objects. To create soundscapes to re-sonify geographic environments, a method of automated soundscape design is presented. Using recorded sounds that are classified based on acoustic, social, semantic, and geographic information, this method produces stochastically generated soundscapes to re-sonify selected geographic areas. Drawing on prior knowledge, local sounds and those deemed similar comprise a locale's soundscape. In the context of re-sonifying events, this work examines processes for modeling and estimating the excitations of sounding objects. These include plucking, striking, rubbing, and any interaction that imparts energy into a system, affecting the resultant sound. A method of estimating a linear system's input, constrained to a signal subspace, is presented and applied toward improving the estimation of percussive excitations for re-sonification. To work toward robust recording-based modeling and re-sonification of objects, new implementations of banded waveguide (BWG) models are proposed for object modeling and sound synthesis. Previous implementations of BWGs use arbitrary model parameters and may produce a range of simulations that do not match digital waveguide or modal models of the same design. Subject to linear excitations, some models proposed here behave identically to other equivalently designed physical models. Under nonlinear interactions, such as bowing, many of the proposed implementations exhibit improvements in the attack characteristics of synthesized sounds.
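    The dissertation's specific excitation-estimation method is not given here, but the general idea of estimating a linear system's input constrained to a signal subspace can be sketched as a least-squares problem: given an output y produced by a known response h, restrict the excitation to e = Uc for some basis U and solve for the coefficients c. The basis, toy signals and dimensions below are hypothetical.

```python
# Generic least-squares sketch of estimating a linear system's input when the
# excitation is constrained to a signal subspace (illustrative only; not the
# dissertation's algorithm).  The system h, basis U and toy signals below are
# hypothetical.
import numpy as np
from scipy.linalg import toeplitz

def convolution_matrix(h, n_in):
    """Matrix H such that H @ e equals np.convolve(h, e) for len(e) == n_in."""
    col = np.concatenate([h, np.zeros(n_in - 1)])
    row = np.zeros(n_in)
    row[0] = h[0]
    return toeplitz(col, row)

def estimate_excitation(y, h, U):
    """Estimate e = U @ c from the observed output y, response h, basis U."""
    H = convolution_matrix(h, U.shape[0])
    c, *_ = np.linalg.lstsq(H @ U, y, rcond=None)
    return U @ c

# Toy example: a decaying resonator response driven by a short "smooth pulse"
# excitation assumed to lie in a low-order polynomial subspace.
rng = np.random.default_rng(0)
n_e, order = 64, 6
k = np.arange(256)
h = np.exp(-k / 40.0) * np.cos(2 * np.pi * 0.1 * k)          # resonator IR
U = np.vander(np.linspace(0.0, 1.0, n_e), order, increasing=True)
e_true = U @ rng.standard_normal(order)
y = np.convolve(h, e_true) + 0.01 * rng.standard_normal(n_e + len(h) - 1)
e_hat = estimate_excitation(y, h, U)
```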

    Design and multiplier-less implementation of a class of two-channel PR FIR filterbanks and wavelets with low system delay

    In this paper, a new method for designing two-channel PR FIR filterbanks with low system delay is proposed. It is based on a generalization of the structure previously proposed by Phoong et al. Such structurally PR filterbanks are parameterized by two functions (β(z) and α(z)) that can be chosen as linear-phase FIR or allpass functions to construct FIR/IIR filterbanks with good frequency characteristics. The case of using identical β(z) and α(z) was considered by Phoong et al. with the delay parameter M chosen as 2N - 1. In this paper, the more general case of using different nonlinear-phase FIR functions for β(z) and α(z) is studied. As the linear-phase constraint is relaxed, the lengths of β(z) and α(z) are no longer restricted by the delay parameters of the filterbanks; hence, higher stopband attenuation can still be achieved at low system delay. The design of the proposed low-delay filterbanks is formulated as a complex polynomial approximation problem, which can be solved by the Remez exchange algorithm or by an analytic formula with very low complexity. In addition, the orders and delay parameters can be estimated from the given filter specifications using a simple empirical formula. Therefore, low-delay two-channel PR filterbanks with flexible stopband attenuation and cutoff frequencies can be designed using existing filter design algorithms. The generalization of the present approach to the design of a class of wavelet bases associated with these low-delay filterbanks, and its multiplier-less implementation using sum-of-powers-of-two coefficients, are also studied.
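    To illustrate what "structurally PR" means, the sketch below uses a generic lifting-style two-channel decomposition in which the synthesis side undoes the analysis side step by step, so perfect reconstruction holds for any FIR choice of α(z) and β(z). This shows the principle only, under that lifting assumption; it is not the exact structure of Phoong et al. or of this paper.

```python
# Generic lifting-style illustration of a *structurally* PR two-channel
# filterbank: the synthesis side undoes the analysis side step by step, so
# perfect reconstruction holds for any FIR choice of alpha and beta.  This
# shows the principle only; it is not the exact structure of Phoong et al.
import numpy as np

def analysis(x, alpha, beta):
    x0, x1 = x[0::2], x[1::2]                         # polyphase split
    x1 = x1 - np.convolve(alpha, x0)[: len(x1)]       # "prediction" step
    x0 = x0 + np.convolve(beta, x1)[: len(x0)]        # "update" step
    return x0, x1

def synthesis(x0, x1, alpha, beta):
    x0 = x0 - np.convolve(beta, x1)[: len(x0)]        # undo the update
    x1 = x1 + np.convolve(alpha, x0)[: len(x1)]       # undo the prediction
    y = np.empty(len(x0) + len(x1))
    y[0::2], y[1::2] = x0, x1                         # re-interleave
    return y

rng = np.random.default_rng(1)
alpha = rng.standard_normal(5)      # arbitrary nonlinear-phase FIR functions;
beta = rng.standard_normal(7)       # reconstruction is exact by construction
x = rng.standard_normal(256)
x_hat = synthesis(*analysis(x, alpha, beta), alpha, beta)
assert np.allclose(x_hat, x)
```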

    Reconfigurable FPGA-Based Channelization Using Polyphase Filter Banks for Quantum Computing Systems

    Recently proposed quantum systems use frequency-multiplexed qubit technology for the readout electronics rather than analog circuitry, to increase the cost effectiveness of the system. In order to restore the individual channels for further processing, these systems require a demultiplexing or channelization approach that can process high data rates with low latency while using few hardware resources. In this paper, a low-latency, adaptable, FPGA-based channelizer using the Polyphase Filter Bank (PFB) signal processing algorithm is presented. As only a single prototype lowpass filter needs to be designed to process all channels, PFBs can be easily adapted to different requirements and further allow for simplified filter design. By reusing the same filter for each channel, they also reduce hardware resource utilization compared to the traditional Digital Down Conversion approach. The realized system architecture is highly generic, allowing the user to select from different numbers of channels, sample bit widths and throughput specifications. For a test setup using a 28-coefficient transpose filter and 4 output channels, the proposed architecture yields a throughput of 12.8 Gb/s with a latency of 7 clock cycles.
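    A software model of the textbook critically sampled PFB channelizer (a single prototype lowpass decomposed into polyphase branches, followed by a DFT across the branches) is sketched below. It mirrors the figures quoted above (28-tap prototype, 4 channels) but is only a NumPy illustration of the algorithm, not the FPGA architecture; the prototype design and the channel-ordering convention are assumptions.

```python
# NumPy model of a critically sampled polyphase filterbank channelizer,
# mirroring the quoted test setup (28-tap prototype, 4 channels).  This is an
# algorithmic illustration only, not the FPGA architecture; the prototype
# design and channel-ordering convention are assumptions.
import numpy as np
from scipy.signal import firwin

M = 4                                   # number of output channels
taps = firwin(28, 1.0 / M)              # the single prototype lowpass filter
poly = taps.reshape(-1, M).T            # polyphase components E_0 .. E_{M-1}

def pfb_channelize(x):
    """Split x into M frequency channels, each decimated by M."""
    n_blocks = len(x) // M
    # Deinterleave into M branches (column reversal is one commutator choice).
    blocks = x[: n_blocks * M].reshape(n_blocks, M)[:, ::-1]
    # Each branch is filtered by its own polyphase component of the prototype.
    filtered = np.stack(
        [np.convolve(blocks[:, k], poly[k])[:n_blocks] for k in range(M)],
        axis=1,
    )
    # An M-point IDFT across the branches recombines them into the channels.
    return np.fft.ifft(filtered, axis=1)    # shape: (n_blocks, M)

# Example: a complex tone at fs/M ends up concentrated in one output channel.
t = np.arange(4096)
channels = pfb_channelize(np.exp(2j * np.pi * t / M))
```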

    A Low-Power Two-Digit Multi-dimensional Logarithmic Number System Filterbank Architecture for a Digital Hearing Aid

    This paper addresses the implementation of a filterbank for digital hearing aids using a multi-dimensional logarithmic number system (MDLNS). The MDLNS, which has similar properties to the classical logarithmic number system (LNS), provides more degrees of freedom than the LNS by virtue of having two or more orthogonal bases and the ability to use multiple MDLNS components, or digits. The logarithmic properties of the MDLNS also allow for reduced-complexity multiplication and a large dynamic range, and a multiple-digit MDLNS provides a considerable reduction in hardware complexity compared to a conventional LNS approach. We discuss an improved design for a two-digit 2D MDLNS filterbank implementation which reduces power and area by more than a factor of two compared to the original design.
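    The sketch below illustrates the representation behind a two-digit, two-dimensional MDLNS: a coefficient is approximated as a sum of two signed digits of the form s·2^a·3^b, so multiplication reduces to exponent additions (and shifts for the base-2 part). The choice of 3 as the second base and the exponent ranges are assumptions for illustration; practical designs often optimise the second base, and this brute-force search is not the paper's design method.

```python
# Brute-force illustration of a two-digit, two-dimensional MDLNS
# representation  x ~ s1*2^a1*3^b1 + s2*2^a2*3^b2.  The second base (3) and
# the exponent ranges are assumed for illustration only.
import itertools

def one_digit_terms(a_range, b_range):
    """All single-digit values s * 2**a * 3**b within the exponent ranges."""
    return [
        (s * (2.0 ** a) * (3.0 ** b), (s, a, b))
        for s, a, b in itertools.product((+1, -1), a_range, b_range)
    ]

def two_digit_mdlns(x, a_range=range(-8, 2), b_range=range(-3, 4)):
    """Best two-digit approximation of x by exhaustive search (illustration)."""
    terms = one_digit_terms(a_range, b_range)
    return min(
        (
            (v1 + v2, d1, d2)
            for (v1, d1), (v2, d2) in itertools.combinations(terms, 2)
        ),
        key=lambda cand: abs(cand[0] - x),
    )

# Example: map a filter coefficient to MDLNS form.  Multiplying a sample by
# this coefficient then needs only exponent additions (shifts for base 2)
# instead of a general-purpose multiplier.
approx, digit1, digit2 = two_digit_mdlns(0.3217)
```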

    Spatial sound reproduction with frequency band processing of B-format audio signals

    The increase of knowledge in the field of spatial hearing has given birth to various spatial audio reproduction technologies. These include efficient perceptual coding of multi-channel audio, channel conversion technologies and universal audio formats with no restrictions to any specific loudspeaker setup. Directional Audio Coding (DirAC) extends the scope of universal audio reproduction to real sound environments by utilizing existing microphones for analysis and arbitrary loudspeaker setups for synthesis of the perceptually relevant properties of the sound field. Human spatial hearing functions on the basis of a multitude of cues, ranging from the differences between the sound signals reaching the two ears to multimodal cues such as visual information. The goal of DirAC is to measure and synthesize those sound field properties from which the auditory cues arise, leaving only multimodality out of scope. The particle velocity and the sound pressure at a single measurement point enable the calculation of the sound field intensity and energy in frequency bands. From these, the direction of arrival and the sound field diffuseness can be derived. The fundamental assumption of DirAC is that the human auditory cues arise from these sound field properties, along with the monaural spectral and temporal properties. Therefore, a successful re-synthesis of these properties is assumed to produce a spatial hearing experience identical to that of the original measurement space. A real-time, linear-phase filterbank version of DirAC was implemented. The reproduction quality of DirAC was shown to be excellent in formal listening tests when the number of loudspeakers is adequate and the microphone is ideal. The reproduction quality with a standard 5.0 setup and a Soundfield ST350 microphone was good. Additional experiments showed that the directional properties of the ST350 microphone degrade severely at frequencies above 1.5-3 kHz.
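    The analysis step described above can be sketched as follows: from the B-format signals W, X, Y, Z, the active intensity and energy are estimated per time-frequency bin, from which the direction of arrival and diffuseness follow. The scaling conventions (for example the √2 factor accounting for the traditional B-format gain of W) and the averaging strategy are assumptions based on common DirAC formulations, not a description of the thesis's implementation.

```python
# Sketch of a DirAC-style analysis stage operating on B-format signals
# (w, x, y, z).  Conventions below (the sqrt(2) factor on W, hann STFT,
# whole-signal averaging for diffuseness) are assumptions for illustration,
# not the thesis's implementation.
import numpy as np
from scipy.signal import stft

def dirac_analysis(w, x, y, z, fs, nperseg=1024):
    """Per time-frequency bin direction of arrival and per-band diffuseness."""
    _, _, W = stft(w, fs, nperseg=nperseg)
    U = np.stack([stft(s, fs, nperseg=nperseg)[2] for s in (x, y, z)])  # (3,F,T)

    # Active intensity (up to a constant factor) and energy density per bin.
    intensity = np.real(np.conj(W) * U)                       # (3, F, T)
    energy = np.abs(W) ** 2 + 0.5 * np.sum(np.abs(U) ** 2, axis=0)

    # Direction of arrival points opposite to the net energy flow.
    azimuth = np.arctan2(-intensity[1], -intensity[0])
    elevation = np.arctan2(-intensity[2], np.hypot(intensity[0], intensity[1]))

    # Diffuseness in [0, 1]: ~0 for a single plane wave, ~1 for a diffuse field.
    # Averaged over all frames here for simplicity; a real-time system would
    # use a short sliding average instead.
    i_mean = np.linalg.norm(intensity.mean(axis=-1), axis=0)
    diffuseness = 1.0 - np.sqrt(2.0) * i_mean / (energy.mean(axis=-1) + 1e-12)
    return azimuth, elevation, np.clip(diffuseness, 0.0, 1.0)
```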

    Efficient Algorithms for Immersive Audio Rendering Enhancement

    Immersive audio rendering is the process of creating an engaging and realistic sound experience in 3D space. In immersive audio systems, head-related transfer functions (HRTFs) are used for binaural synthesis over headphones, since they express how humans localize a sound source. HRTF interpolation algorithms can be introduced to reduce the number of measurement points and to create believable sound movement. Binaural reproduction can also be performed over loudspeakers. However, the involvement of two or more loudspeakers causes the problem of crosstalk; in this case, crosstalk cancellation (CTC) algorithms are needed to remove the unwanted interference signals. In this thesis, starting from a comparative analysis of HRTF measurement techniques, a binaural rendering system based on HRTF interpolation is proposed and evaluated for real-time applications. The proposed method shows good performance in comparison with a reference technique. The interpolation algorithm is also applied to immersive audio rendering over loudspeakers by adding a fixed crosstalk cancellation algorithm, which assumes that the listener is in a fixed position. In addition, an adaptive crosstalk cancellation system, which includes tracking of the listener's head, is analyzed and a real-time implementation is presented. The adaptive CTC implements a subband structure, and experimental results show that a higher number of bands improves the performance in terms of total error and convergence rate. The reproduction system and the characteristics of the listening room may also affect the performance due to their non-ideal frequency responses. Audio equalization is used to adjust the balance of the different frequency regions of a signal in order to achieve the desired sound characteristics. Equalization can be manual, as in graphic equalization, where the gain of each frequency band is set by the user, or automatic, where the equalization curve is computed from a measured room impulse response. Room response equalization can also be applied to multichannel systems, which employ two or more loudspeakers, and the equalization zone can be enlarged by measuring impulse responses at several points of the listening area. In this thesis, efficient graphic equalizers (GEQs) and an adaptive room response equalization system are presented. In particular, three low-complexity linear-phase and quasi-linear-phase graphic equalizers are proposed and examined in depth. Experiments confirm the effectiveness of the proposed GEQs in terms of accuracy, computational complexity, and latency. Subsequently, a subband adaptive structure is introduced for the development of a multichannel, multiple-position room response equalizer. Experimental results verify the effectiveness of the subband approach in comparison with the single-band case. Finally, a linear-phase crossover network is presented for multichannel systems, showing very good results in terms of magnitude flatness, cutoff rates, polar diagram, and phase response.
    Active noise control (ANC) systems can be designed to reduce the effects of noise pollution and can be used simultaneously with an immersive audio system. ANC works by creating a sound wave with opposite phase to the unwanted noise; the additional sound wave produces destructive interference, which reduces the overall sound level. Finally, this thesis presents an ANC system used for noise reduction. The proposed approach implements online secondary path estimation and is based on cross-update adaptive filters applied to the primary path estimation, with the aim of improving the performance of the whole system. The proposed structure achieves a better convergence rate than a reference algorithm.
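    As background to the ANC discussion, a minimal single-channel filtered-x LMS (FxLMS) loop is sketched below. It assumes a fixed, known secondary-path estimate and a single fullband adaptive filter, so it omits the online secondary-path estimation and the cross-update subband structure of the proposed system; all paths, filter lengths and step sizes are illustrative.

```python
# Minimal single-channel filtered-x LMS (FxLMS) ANC loop, for background only.
# It assumes a fixed, known secondary-path estimate and a single fullband
# adaptive filter, so it omits the online secondary-path estimation and the
# cross-update subband structure of the proposed system.  All paths, filter
# lengths and step sizes are illustrative.
import numpy as np

rng = np.random.default_rng(2)
n = 20000
x = rng.standard_normal(n)               # reference noise signal
p = np.array([0.0, 0.6, 0.3, 0.1])       # primary acoustic path (assumed)
s = np.array([0.0, 0.8, 0.2])            # secondary path (assumed)
s_hat = s.copy()                         # perfect offline estimate is assumed
d = np.convolve(x, p)[:n]                # noise reaching the error microphone
xf = np.convolve(x, s_hat)[:n]           # reference filtered through s_hat

L, mu = 16, 0.01
w = np.zeros(L)                          # adaptive control filter
x_buf = np.zeros(L)                      # recent reference samples
xf_buf = np.zeros(L)                     # recent filtered-reference samples
y_buf = np.zeros(len(s))                 # recent loudspeaker samples
e = np.zeros(n)

for i in range(n):
    x_buf = np.roll(x_buf, 1)
    x_buf[0] = x[i]
    y = w @ x_buf                        # anti-noise sent to the loudspeaker
    y_buf = np.roll(y_buf, 1)
    y_buf[0] = y
    e[i] = d[i] - s @ y_buf              # residual at the error microphone
    xf_buf = np.roll(xf_buf, 1)
    xf_buf[0] = xf[i]
    w = w + mu * e[i] * xf_buf           # FxLMS weight update

# After convergence, e is much smaller than the uncontrolled noise d.
```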

    Spectrogram inversion and potential applications for hearing research
