
    Bandwidth extension in high-quality audio coding

    In mobile telecommunications the transmission speed is currently very limited. Advanced coding methods are used to pack transmitted information into a smaller memory space. The coding of audio signals has developed considerably in recent years, and one method enabling better coding efficiency has been bandwidth extension. In this thesis, current bandwidth extension methods are studied and analysed to find out whether they could be improved. Two new methods have been developed by the author. A method based on the modified discrete cosine transform (MDCT) has been used to examine how different parameters affect the result of bandwidth extension. The second method uses linear prediction (LPC) to model the properties of audio signals. The new methods were compared against each other and against one previous bandwidth extension method in listening tests. The results show that the new methods can be used to improve coding efficiency in high-quality audio coding.
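
    The LPC-based approach mentioned above models the spectral shape of an audio frame with a linear predictor. Below is a minimal sketch of the standard autocorrelation method; the toy signal, predictor order and function name are illustrative assumptions, not the thesis's actual configuration.

```python
import numpy as np

def lpc_coefficients(x, order):
    """Estimate forward linear-prediction coefficients with the
    autocorrelation method: solve the Toeplitz normal equations."""
    r = np.correlate(x, x, mode="full")[len(x) - 1:len(x) + order]
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1:])  # x[n] ~ sum_k a[k] * x[n - 1 - k]

# Toy frame: a damped sinusoid plus a little noise (illustrative only)
rng = np.random.default_rng(0)
n = np.arange(512)
x = np.sin(0.3 * np.pi * n) * np.exp(-n / 400) + 1e-3 * rng.standard_normal(512)

a = lpc_coefficients(x, order=8)

# A good spectral model leaves only a small prediction residual
pred = np.zeros_like(x)
for k, ak in enumerate(a):
    pred[k + 1:] += ak * x[:len(x) - k - 1]
resid_ratio = np.sum((x - pred) ** 2) / np.sum(x ** 2)
```

The small residual-to-signal energy ratio is what makes LPC useful here: only the predictor coefficients and a low-energy residual need to be conveyed.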

    An Application of Spectral Translation and Spectral Envelope Extrapolation for High-frequency Bandwidth Extension of Generic Audio Signals

    The scope of this work is to introduce a conceptually simple yet effective algorithm for blind high-frequency bandwidth extension of audio signals, a means of improving perceptual quality for sound which has previously been low-pass filtered or downsampled (typically due to storage considerations). The algorithm combines an application of the modulation theorem for the discrete Fourier transform, which regenerates the missing high-frequency end of the signal spectrum, with a linear-regression-driven approach to shaping the spectral envelope of the regenerated band. The results are compared graphically and acoustically to those obtained with existing audio restoration software for a variety of input signals. The source code and Windows binaries of the resulting algorithm implementation are also included.
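
    The two ingredients the abstract names, spectral translation of the preserved band and linear-regression shaping of the log spectral envelope, can be sketched roughly as below. The function name, band layout and band-limited noise test signal are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def extend_bandwidth(x, fs, cutoff):
    """Regenerate the band above `cutoff` by translating the top of the
    preserved spectrum upward, then shaping it with a straight-line fit
    of the log-magnitude envelope (linear regression in frequency)."""
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    kc = np.searchsorted(freqs, cutoff)            # first bin at/above cutoff
    src = X[kc // 2:kc]                            # band to translate up
    dst = slice(kc, min(kc + len(src), len(X)))    # destination band
    # Fit the log envelope of the preserved band, extrapolate it upward
    logmag = np.log(np.abs(X[1:kc]) + 1e-12)
    slope, intercept = np.polyfit(freqs[1:kc], logmag, 1)
    target = np.exp(slope * freqs[dst] + intercept)
    patch = src[:dst.stop - dst.start]
    X[dst] = patch / (np.abs(patch) + 1e-12) * target  # keep phase, set level
    return np.fft.irfft(X, len(x))

# Simulate a low-passed recording with band-limited noise
fs = 16000
rng = np.random.default_rng(0)
x = rng.standard_normal(2048)
X = np.fft.rfft(x)
freqs = np.fft.rfftfreq(len(x), 1 / fs)
X[freqs > 4000] = 0
x = np.fft.irfft(X, len(x))

y = extend_bandwidth(x, fs, cutoff=4000)
Y = np.abs(np.fft.rfft(y))
low_energy = np.sum(Y[(freqs > 0) & (freqs <= 4000)] ** 2)
high_energy = np.sum(Y[freqs > 4000] ** 2)
```

After extension, the band above the cutoff carries energy at a level extrapolated from the preserved band, instead of being empty.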

    Measurements in Perceptual Annoyance of Audio Coding Artifacts

    This thesis discusses the perceptual annoyance of several audio coding artifacts that have become of interest during the development of USAC, a new low-bitrate speech and audio coder. A total of four different coding-related phenomena, all of which are explained below, were investigated in this study. All artifacts were artificially generated using MATLAB(R) and evaluated in listening tests with approximately ten participants in each. This work was commissioned by Fraunhofer IIS, Germany, a leader in audio coding technology and the home of MP3. In audio coding, signals are usually processed in frames with a length of 20 to 50 milliseconds, which may cause rapid variations in artifacts. In our tests, the level of critical-bandwidth noise or single harmonics was altered at various speeds. The results suggest that moderate-speed variations are considered the most annoying. Harmonic bandwidth extension is a method that generates artificial harmonics by stretching spectra in frequency. It is useful in audio compression because upper harmonics need not be encoded explicitly, but can be approximately reconstructed in the decoding phase. However, the generated harmonic patch will inevitably be incomplete, which may cause a false additional pitch sensation. The perceived strength of this ghost pitch was examined with synthetic tones as a function of fundamental and crossover frequencies. The masking curve of a signal frame can be efficiently modelled with a spectral envelope, which can then be used for transferring the frame to the perceptual domain for quantization. The resulting quantization noise will be less audible if the smoothness of the envelope is properly adjusted by modifying the transfer function with a constant. A proposal for the optimal constant value is provided in this study. Strong parts of a signal spectrum can be boosted and weak parts diminished by multiplying the spectrum with its modified envelope. This technique, known as formant enhancement, enables better masking of quantization noise, but tends to render the overall tone unnatural. A model for selecting the optimal spectrum modification parameter values as a function of perceptual signal-to-noise ratio is proposed.
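
    Formant enhancement as described above (multiplying a spectrum by its own modified envelope) can be sketched as follows; cepstral smoothing stands in for whatever envelope model the thesis uses, and the exponent value is an arbitrary illustration rather than the proposed optimal constant.

```python
import numpy as np

def formant_enhance(spectrum, gamma=0.3, ncep=20):
    """Multiply a spectrum by its own smoothed envelope raised to a
    constant exponent, boosting peaks and attenuating valleys."""
    logmag = np.log(np.abs(spectrum) + 1e-12)
    cep = np.fft.irfft(logmag)                 # "cepstrum" of the log spectrum
    cep[ncep:len(cep) - ncep] = 0              # keep only slow variations
    envelope = np.exp(np.fft.rfft(cep).real)   # smoothed spectral envelope
    return spectrum * envelope ** gamma

# Toy magnitude spectrum: two formant-like bumps on a low floor
k = np.arange(257)
mag = 0.05 + np.exp(-((k - 60) ** 2) / 50) + np.exp(-((k - 180) ** 2) / 50)
enhanced = formant_enhance(mag.astype(complex))

contrast_before = mag[60] / mag[120]                           # formant vs valley
contrast_after = np.abs(enhanced[60]) / np.abs(enhanced[120])
```

The spectral contrast between a formant bin and a valley bin increases after enhancement, which is what pushes quantization noise further under the masking curve.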

    Étude de transformées temps-fréquence pour le codage audio faible retard en haute qualité

    In recent years there has been a phenomenal increase in the number of products and applications which make use of audio coding formats. Among the most successful audio coding schemes, MPEG-1 Layer III (mp3), MPEG-2 Advanced Audio Coding (AAC) and its evolution MPEG-4 High Efficiency-Advanced Audio Coding (HE-AAC) can be cited. More recently, perceptual audio coding has been adapted to achieve low-delay coding, making it suitable for conversational applications. Traditionally, the use of a filter bank such as the Modified Discrete Cosine Transform (MDCT) is a central component of perceptual audio coding, and its adaptation to low-delay audio coding has become an important research topic. Low-delay transforms have been developed in order to retain the performance of standard audio coding while dramatically reducing the associated algorithmic delay. This work presents some elements allowing the delay-reduction constraint to be better accommodated. Among the contributions is a low-delay block-switching tool which allows the direct transition between long and short transforms without the insertion of a transition window. The same principle has been extended to define new perfect reconstruction conditions for the MDCT with relaxed constraints compared to the original definition. As a consequence, a seamless reconstruction method has been derived to increase the flexibility of transform coding schemes, with the possibility to select a transform for a frame independently from its neighbouring frames. Finally, based on this new approach, a new low-delay window design procedure has been derived to obtain an analytic definition for a new family of transforms, permitting high quality with a substantial reduction in coding delay. The performance of the proposed transforms has been thoroughly evaluated, and an evaluation framework involving an objective measurement of the optimal transform sequence is proposed. It confirms the relevance of the proposed transforms for audio coding. In addition, the new approaches have been successfully applied to recent standardisation work items, such as the low-delay audio coding developed at MPEG (LD-AAC and ELD-AAC), and they have been evaluated in numerous subjective tests, showing a significant quality improvement for transient signals. The new low-delay window design has been adopted in G.718, a scalable speech and audio codec standardized in ITU-T, and has demonstrated its benefit in terms of delay reduction while maintaining the audio quality of a traditional MDCT. In short, low-delay audio coding is achieved through the definition of new windows for the MDCT transform and the introduction of a new window-switching scheme.
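
    For context, the classical perfect-reconstruction behaviour that this work relaxes can be demonstrated with a plain MDCT and the standard sine window, which satisfies the Princen-Bradley condition w[n]^2 + w[n+N]^2 = 1. This direct O(N^2) sketch is for illustration only and is not the thesis's low-delay design.

```python
import numpy as np

def mdct(frame, window):
    """Direct O(N^2) MDCT of one 2N-sample windowed frame."""
    N = len(frame) // 2
    n = np.arange(2 * N)
    k = np.arange(N)[:, None]
    basis = np.cos(np.pi / N * (n + 0.5 + N / 2) * (k + 0.5))
    return basis @ (frame * window)

def imdct(coeffs, window):
    """Inverse MDCT with synthesis windowing (scale 2/N)."""
    N = len(coeffs)
    n = np.arange(2 * N)
    k = np.arange(N)[:, None]
    basis = np.cos(np.pi / N * (n + 0.5 + N / 2) * (k + 0.5))
    return (2 / N) * window * (basis.T @ coeffs)

N = 64
window = np.sin(np.pi / (2 * N) * (np.arange(2 * N) + 0.5))  # sine window

rng = np.random.default_rng(0)
x = rng.standard_normal(4 * N)

# 50% overlap-add: time-domain aliasing cancels between adjacent frames
out = np.zeros(len(x))
for start in range(0, 2 * N + 1, N):
    frame = x[start:start + 2 * N]
    out[start:start + 2 * N] += imdct(mdct(frame, window), window)

# Fully overlapped (interior) samples are reconstructed exactly
err = np.max(np.abs(out[N:3 * N] - x[N:3 * N]))
```

Only the interior samples reconstruct exactly; the first and last N samples lack an overlapping partner frame, which is precisely the boundary problem that block switching and transition windows address.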

    High Quality Audio Coding with MDCTNet

    We propose a neural audio generative model, MDCTNet, operating in the perceptually weighted domain of an adaptive modified discrete cosine transform (MDCT). The architecture of the model captures correlations in both time and frequency directions with recurrent layers (RNNs). An audio coding system is obtained by training MDCTNet on a diverse set of fullband monophonic audio signals at 48 kHz sampling, conditioned by a perceptual audio encoder. In a subjective listening test with ten excerpts chosen to be balanced across content types, yet stressful for both codecs, the mean performance of the proposed system at 24 kb/s variable bitrate (VBR) is similar to that of Opus at twice the bitrate.

    Apprentissage automatique pour le codage cognitif de la parole

    Since the 80s, speech codecs have relied on short-term coding strategies that operate at the subframe or frame level (typically 5 to 20 ms). Researchers essentially adjusted and combined a limited number of available technologies (transforms, linear prediction, quantization) and strategies (waveform matching, noise shaping) to build increasingly complex coding architectures. In this thesis, rather than relying on short-term coding strategies, we develop an alternative framework for speech compression by encoding speech attributes, which are perceptually important characteristics of speech signals. In order to achieve this objective, we solve three problems of increasing complexity, namely classification, prediction and representation learning. Classification is a common element in modern codec designs. In a first step, we design a classifier to identify emotions, which are among the most complex long-term speech attributes. In a second step, we design a speech sample predictor, which is another common element in modern codec designs, to highlight the benefits of long-term and non-linear speech signal processing. Then, we explore latent variables, a space of speech representations, to encode both short-term and long-term speech attributes. Lastly, we propose a decoder network to synthesize speech signals from these representations, which constitutes our final step towards building a complete, end-to-end machine-learning-based speech compression method. Although each development step proposed in this thesis could form part of a codec on its own, each step also provides insights and a foundation for the next, until a fully machine-learning-based codec is reached.
    The first two steps, classification and prediction, provide new tools that could replace and improve elements of existing codecs. In the first step, we use a combination of a source-filter model and a liquid state machine (LSM) to demonstrate that features related to emotions can be easily extracted and classified using a simple classifier. In the second step, a single end-to-end network using long short-term memory (LSTM) is shown to produce speech frames with high subjective quality for packet loss concealment (PLC) applications. In the last steps, we build upon the results of the previous steps to design a fully machine-learning-based codec. An encoder network, formulated using a deep neural network (DNN) and trained on multiple public databases, extracts and encodes speech representations using prediction in a latent space. An unsupervised learning approach based on several principles of cognition is proposed to extract representations from both short and long frames of data using mutual information and a contrastive loss. The ability of these learned representations to capture various short- and long-term speech attributes is demonstrated. Finally, a decoder structure is proposed to synthesize speech signals from these representations. Adversarial training is used as an approximation to subjective speech quality measures in order to synthesize natural-sounding speech samples. The high perceptual quality of the synthesized speech thus achieved shows that the extracted representations are efficient at preserving all sorts of speech attributes, and therefore a complete compression method is demonstrated with the proposed approach.
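
    The contrastive objective mentioned in the abstract is commonly instantiated as an InfoNCE loss; the sketch below is a generic numpy version (batch construction and temperature are illustrative assumptions, not the thesis's formulation).

```python
import numpy as np

def info_nce(anchors, positives, temperature=0.1):
    """InfoNCE: each anchor should match its own positive against all
    other positives in the batch (cross-entropy over similarities)."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = (a @ p.T) / temperature                     # cosine similarities
    logits -= logits.max(axis=1, keepdims=True)          # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(0)
z = rng.standard_normal((8, 16))                     # toy representation batch
aligned = info_nce(z, z + 0.01 * rng.standard_normal((8, 16)))
mismatched = info_nce(z, np.roll(z, 1, axis=0))      # positives shifted by one
```

A representation trained to minimize this loss is driven to make matched pairs far more similar than mismatched ones, which is what lets it capture the targeted speech attributes.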

    Audio Inpainting

    (c) 2012 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works for resale or redistribution to servers or lists, or reuse of any copyrighted components of this work in other works. Published version: IEEE Transactions on Audio, Speech and Language Processing 20(3): 922-932, Mar 2012. DOI: 10.1109/TASL.2011.2168211

    Scalable and perceptual audio compression

    This thesis deals with scalable perceptual audio compression. Two scalable perceptual solutions as well as a scalable-to-lossless solution are proposed and investigated. One of the scalable perceptual solutions is built around sinusoidal modelling of the audio signal whilst the other is built on a transform coding paradigm. The scalable coders are shown to scale both in a waveform-matching manner as well as in a psychoacoustic manner. In order to measure the psychoacoustic scalability of the systems investigated in this thesis, the original signal's psychoacoustic parameters are compared with those of the synthesized signal. The psychoacoustic parameters used are loudness, sharpness, tonality and roughness. This analysis technique is a novel method used in this thesis and it provides insight into the perceptual distortion introduced by any coder analyzed in this manner.
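
    Comparing a psychoacoustic parameter between an original and a coded signal, as described above, can be illustrated with a deliberately crude loudness proxy; equal-width bands and a fixed compressive exponent stand in for a proper Zwicker-style loudness model and are assumptions of this sketch.

```python
import numpy as np

def crude_loudness(x, nbands=24):
    """Very crude loudness proxy: equal-width spectral bands, band
    energies summed with a compressive exponent (illustration only)."""
    power = np.abs(np.fft.rfft(x)) ** 2
    return sum(np.sum(b) ** 0.3 for b in np.array_split(power, nbands))

fs = 16000
t = np.arange(4096) / fs
original = np.sin(2 * np.pi * 440 * t)
coded = 0.25 * original  # stand-in for a heavily degraded coded version

l_orig = crude_loudness(original)
l_coded = crude_loudness(coded)
```

The closer the coded signal's parameter value tracks the original's across bitrates, the better the coder scales psychoacoustically, which is the comparison the thesis performs with proper loudness, sharpness, tonality and roughness measures.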

    A generic audio classification and segmentation approach for multimedia indexing and retrieval
