
    Parallel task in Subjective Audio Quality and Speech Intelligibility Assessments

    This thesis deals with the subjective testing of both speech quality and speech intelligibility: it investigates the existing methods, records their main principles and features, and compares their advantages and disadvantages. The work also compares the tests in terms of various parameters and provides modern solutions for existing subjective testing methods. The first part of the research deals with the repeatability of subjective speech quality tests conducted under ideal laboratory conditions. Such repeatability tasks are performed using Pearson correlation, pairwise comparison, and other mathematical analyses, and are meant to prove the correctness of the procedures of the subjective tests. For that reason, four subjective speech quality tests were conducted in three different laboratories. The obtained results confirmed that the tests were highly repeatable and that the test requirements were strictly followed. Further research was conducted to verify the significance of speech quality and speech intelligibility tests in communication systems. To this end, more than 16 million live call records over VoIP telecommunications networks were analyzed. The results confirmed the primary assumption that a better user experience leads to longer call durations. Alongside these main results, other valuable conclusions were drawn. The next step of the thesis was to investigate the parallel task technique, the existing approaches, and their advantages and disadvantages.
It turned out that the majority of parallel tasks used in tests were either physically or mentally oriented. As the subjects in most cases are not equally trained or intelligent, their performances during the tasks are not equal either, so the results could not be compared correctly. In this thesis, a novel approach is proposed in which the conditions are equal for all subjects. The approach presents a variety of tasks combining mental and physical loads (a laser-shooting simulator, a car driving simulator, object sorting, and others). Afterward, the methods were used in several subjective speech quality and speech intelligibility tests. The results indicate that tests with parallel tasks yield more realistic values than those conducted in laboratory conditions. Based on the research, experience, and achieved results, a new standard was submitted to the European Telecommunications Standards Institute with an overview, examples, and recommendations for conducting subjective speech quality and speech intelligibility tests. The standard was accepted and published as ETSI TR 103 503.
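The cross-laboratory repeatability analysis described above rests on correlating scores for the same test conditions across labs. A minimal sketch of that check, using hypothetical MOS values (the scores and the two labs are illustrative, not data from the thesis):

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical per-condition MOS from two laboratories running the same test.
lab_a = [4.2, 3.1, 2.5, 4.6, 1.8]
lab_b = [4.0, 3.3, 2.4, 4.5, 2.0]
r = pearson_r(lab_a, lab_b)  # close to 1.0 indicates high repeatability
```

A correlation near 1.0 across laboratories is the kind of evidence the thesis uses to argue the test procedures were followed consistently.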

    Assessing the quality of audio and video components in desktop multimedia conferencing

    This thesis seeks to address the HCI (Human-Computer Interaction) research problem of how to establish the level of audio and video quality that end users require to successfully perform tasks via networked desktop videoconferencing. There are currently no established HCI methods of assessing the perceived quality of audio and video delivered in desktop videoconferencing. The transport of real-time speech and video information across new digital networks causes degradations, problems, and issues that are novel and different from those common in the traditional telecommunications areas (telephone and television). Traditional assessment methods involve the use of very short test samples, are traditionally conducted outside a task-based environment, and focus on whether a degradation is noticed or not. These methods cannot help establish what audio-visual quality users require to perform tasks successfully, with the minimum of user cost, in interactive conferencing environments. This thesis addresses this research gap by investigating and developing a battery of assessment methods for networked videoconferencing, suitable for use in both field trials and laboratory-based studies. The development and use of these new methods helps identify the most critical variables (and levels of these variables) that affect perceived quality, and means by which network designers and HCI practitioners can address these problems are suggested. The output of the thesis therefore contributes both methodological (i.e. new rating scales and data-gathering methods) and substantive (i.e. explicit knowledge about quality requirements for certain tasks) knowledge to the HCI and networking research communities on the subjective quality requirements of real-time interaction in networked videoconferencing environments.
Exploratory research is carried out through an interleaved series of field trials and controlled studies, advancing substantive and methodological knowledge in an incremental fashion. Initial studies use the ITU-recommended assessment methods, but these are found to be unsuitable for assessing networked speech and video quality for a number of reasons. Therefore later studies investigate and establish a novel polar rating scale, which can be used both as a static rating scale and as a dynamic continuous slider. These and further developments of the methods in future lab-based and real conferencing environments will enable subjective quality requirements and guidelines for different videoconferencing tasks to be established.
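A dynamic continuous slider yields a time series of instantaneous ratings rather than a single score, which then needs summarizing. A minimal sketch of that summarization step; the polar scale range used here is an assumption for illustration, not the scale defined in the thesis:

```python
def summarize_slider_trace(samples, lo=-3.0, hi=3.0):
    """Summarize a continuous-slider trace: clamp samples to the scale
    endpoints, then report the mean and the worst instantaneous rating.
    The [-3, 3] polar range is an illustrative assumption."""
    clamped = [max(lo, min(hi, s)) for s in samples]
    return sum(clamped) / len(clamped), min(clamped)

trace = [1.2, 0.8, -2.5, 0.5, 1.0]   # hypothetical per-second ratings
mean_q, worst_q = summarize_slider_trace(trace)
```

Reporting both the mean and the worst momentary rating captures the difference between steady mediocre quality and brief severe degradations, which a single static rating would conflate.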

    New single-ended objective measure for non-intrusive speech quality evaluation

    This article proposes a new output-based method for non-intrusive assessment of the speech quality of voice communication systems and evaluates its performance. The method requires access to the processed (degraded) speech only, and is based on measuring perception-motivated objective auditory distances between the voiced parts of the output speech and appropriately matching references extracted from a pre-formulated codebook. The codebook is formed by optimally clustering a large number of parametric speech vectors extracted from a database of clean speech records. The auditory distances are then mapped into objective Mean Opinion Score (MOS) listening quality scores. An efficient data-mining tool known as the self-organizing map (SOM) achieves the required clustering and mapping/reference matching processes. In order to obtain a perception-based, speaker-independent parametric representation of the speech, three domain transformation techniques have been investigated. The first technique is based on a perceptual linear prediction (PLP) model, the second utilises a bark spectrum (BS) analysis, and the third utilises mel-frequency cepstrum coefficients (MFCC). Reported evaluation results show that the proposed method provides high correlation with subjective listening quality scores, yielding accuracy similar to that of ITU-T P.563 while maintaining a relatively low computational complexity. Results also demonstrate that the method outperforms PESQ in a number of distortion conditions, such as those of speech degraded by channel impairments.
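The core of the method above is a codebook lookup: each parametric vector from the degraded speech is matched to its nearest clean-speech reference, and the resulting distance is mapped to a quality score. A simplified sketch, in which toy 2-D vectors stand in for PLP/BS/MFCC features and the codebook entries and distance-to-MOS coefficients are illustrative, not trained values:

```python
import math

def nearest_codebook_distance(frame, codebook):
    """Distance from a parametric speech frame to its best-matching
    clean reference in the codebook (Euclidean here; the article uses
    perception-motivated auditory distances)."""
    return min(math.dist(frame, ref) for ref in codebook)

def distance_to_mos(d, a=4.5, b=1.2):
    """Toy monotone mapping: larger auditory distance -> lower MOS.
    Coefficients a and b are illustrative placeholders."""
    return max(1.0, a - b * d)

codebook = [[0.0, 0.0], [1.0, 1.0], [2.0, 0.5]]   # toy clean-speech clusters
degraded_frames = [[0.1, 0.2], [1.9, 0.4]]         # toy voiced-frame vectors
avg_d = sum(nearest_codebook_distance(f, codebook)
            for f in degraded_frames) / len(degraded_frames)
mos = distance_to_mos(avg_d)
```

In the article itself, a trained SOM performs both the clustering that builds the codebook and the reference matching; the sketch replaces that with an exhaustive nearest-neighbour search to show the data flow only.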

    Analytic Assessment of Telephone Transmission Impact on ASR Performance Using a Simulation Model

    This paper addresses the impact of telephone transmission channels on automatic speech recognition (ASR) performance. A real-time simulation model is described and implemented, which allows impairments that are encountered in traditional as well as modern (mobile, IP-based) networks to be flexibly and efficiently generated. The model is based on input parameters which are known to telephone network planners; thus, it can be applied without measuring specific network characteristics. It can be used for an analytic assessment of the impact of channel impairments on ASR performance, for producing training material with defined transmission characteristics, or for testing spoken dialogue systems in realistic network environments. The present paper investigates the first of these applications. Two speech recognizers which are integrated into a spoken dialogue system for information retrieval are assessed in relation to controlled amounts of transmission degradations. The measured ASR performance degradation is compared to speech quality degradation in human-human communication. It turns out that different behavior can be expected for some impairments. This fact has to be taken into account in both telephone network planning and speech and language technology development.
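The simulation model generates controlled amounts of impairments from planner-level parameters. As one minimal stand-in for such an impairment, the sketch below applies independent (Bernoulli) packet loss to a frame sequence at a configurable rate; the frame values, loss model, and silence substitution are illustrative simplifications, not the paper's model:

```python
import random

def apply_packet_loss(frames, loss_prob, seed=0):
    """Drop each frame independently with probability loss_prob,
    substituting silence (0.0) -- a simplified stand-in for one
    impairment a channel simulation can generate."""
    rng = random.Random(seed)  # fixed seed for a reproducible degradation
    return [f if rng.random() >= loss_prob else 0.0 for f in frames]

frames = [0.5] * 1000                 # toy constant-energy speech frames
degraded = apply_packet_loss(frames, loss_prob=0.1)
lost = degraded.count(0.0)            # roughly 10% of frames dropped
```

Running the same recognizer on `frames` and on `degraded` at several loss rates is the kind of controlled sweep the paper's analytic assessment performs, with word accuracy measured at each degradation level.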

    Adaptive video delivery using semantics

    The diffusion of network appliances such as cellular phones, personal digital assistants and hand-held computers has created the need to personalize the way media content is delivered to the end user. Moreover, recent devices, such as digital radio receivers with graphics displays, and new applications, such as intelligent visual surveillance, require novel forms of video analysis for content adaptation and summarization. To cope with these challenges, we propose an automatic method for the extraction of semantics from video, and we present a framework that exploits these semantics in order to provide adaptive video delivery. First, an algorithm that relies on motion information to extract multiple semantic video objects is proposed. The algorithm operates in two stages. In the first stage, a statistical change detector produces the segmentation of moving objects from the background. This process is robust with regard to camera noise and does not need manual tuning along a sequence or for different sequences. In the second stage, feedbacks between an object partition and a region partition are used to track individual objects along the frames. These interactions allow us to cope with multiple, deformable objects, occlusions, splitting, appearance and disappearance of objects, and complex motion. Subsequently, semantics are used to prioritize visual data in order to improve the performance of adaptive video delivery. The idea behind this approach is to organize the content so that a particular network or device does not inhibit the main content message. Specifically, we propose two new video adaptation strategies. The first strategy combines semantic analysis with a traditional frame-based video encoder. Background simplifications resulting from this approach do not penalize overall quality at low bitrates. The second strategy uses metadata to efficiently encode the main content message. 
The metadata-based representation of an object's shape and motion suffices to convey the meaning and action of a scene when the objects are familiar. The impact of different video adaptation strategies is then quantified with subjective experiments. We ask a panel of human observers to rate the quality of adapted video sequences on a normalized scale. From these results, we further derive an objective quality metric, the semantic peak signal-to-noise ratio (SPSNR), that accounts for different image areas and for their relevance to the observer in order to reflect the focus of attention of the human visual system. Finally, we determine the adaptation strategy that provides maximum value for the end user by maximizing the SPSNR for given client resources at the time of delivery. By combining semantic video analysis and adaptive delivery, the solution presented in this dissertation permits the distribution of video in complex media environments and supports a large variety of content-based applications.
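The SPSNR idea of weighting errors by the semantic relevance of the region each pixel belongs to can be sketched as a region-weighted PSNR. The 1-D "images", weight values, and peak level below are illustrative assumptions, not the dissertation's exact formulation:

```python
import math

def spsnr(ref, deg, weights, peak=255.0):
    """Semantic PSNR sketch: squared pixel errors are weighted by the
    semantic relevance of each pixel's region, then normalized by the
    total weight before the usual PSNR log mapping."""
    wse = sum(w * (r - d) ** 2 for r, d, w in zip(ref, deg, weights))
    wmse = wse / sum(weights)
    return 10.0 * math.log10(peak ** 2 / wmse)

ref = [100, 120, 130, 140]
deg = [101, 118, 131, 120]       # large error on the last pixel
fg_weights = [1, 1, 1, 4]        # hypothetical: last pixel is a semantic object
bg_weights = [1, 1, 1, 1]        # uniform weights reduce to ordinary PSNR
```

With uniform weights the metric reduces to plain PSNR; emphasizing the object region makes the score drop further when the degradation hits the foreground, reflecting the observer's focus of attention.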

    Towards improving ViSQOL (Virtual Speech Quality Objective Listener) Using Machine Learning Techniques

    Vast amounts of sound data are transmitted every second over digital networks. VoIP services and cellular networks transmit speech data in increasingly greater volumes. Objective sound quality models provide an essential function to measure the quality of this data in real-time. However, these models can suffer from a lack of accuracy under various network degradations. This research uses machine learning techniques to create one support vector regression and three neural network mapping models for use with ViSQOLAudio. Each of the mapping models (including ViSQOL and ViSQOLAudio) is tested against two separate speech datasets in order to comparatively study accuracy results. Despite a slight cost in positive linear correlation and a slight increase in error rate, the study finds that a neural network mapping model with ViSQOLAudio provides the highest levels of accuracy in objective speech quality measurement. In some cases, the accuracy levels can be more than double those of ViSQOL. The research demonstrates that ViSQOLAudio can be altered to provide an objective speech quality metric with greater accuracy than ViSQOL.
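A neural network mapping model of the kind described takes the raw objective score and regresses it onto subjective MOS. A dependency-free sketch of the forward pass of such a mapper; the architecture size and all weights are illustrative placeholders, not parameters trained in the study:

```python
import math

def mlp_map(x, w1, b1, w2, b2):
    """Tiny one-hidden-layer network mapping a raw objective score x
    (e.g. a similarity value in [0, 1]) to a MOS estimate.
    Weights are illustrative, not trained on any dataset."""
    hidden = [math.tanh(w * x + b) for w, b in zip(w1, b1)]
    return sum(w * h for w, h in zip(w2, hidden)) + b2

# Illustrative parameters producing a monotone score -> MOS curve.
w1, b1 = [3.0, 2.0], [-1.5, -0.5]
w2, b2 = [1.5, 1.0], 2.5
low = mlp_map(0.2, w1, b1, w2, b2)    # poor similarity -> low MOS estimate
high = mlp_map(0.9, w1, b1, w2, b2)   # good similarity -> high MOS estimate
```

In practice such a mapper is trained on pairs of objective scores and subjective MOS labels; the sketch only shows the shape of the learned score-to-MOS function that replaces a fixed polynomial mapping.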

    Predicting the Quality of Synthesized and Natural Speech Impaired by Packet Loss and Coding Using PESQ and P.563 Models

    This paper investigates the impact of independent and dependent losses and coding on speech quality predictions provided by the PESQ (also known as ITU-T P.862) and P.563 models, when both naturally-produced and synthesized speech are used. Two synthesized speech samples generated with two different Text-to-Speech systems and one naturally-produced sample are investigated. In addition, we assess the variability of PESQ's and P.563's predictions with respect to the type of speech used (naturally-produced or synthesized) and the loss conditions, as well as their accuracy, by comparing the predictions with subjective assessments. The results show that there is no difference between the impact of packet loss on naturally-produced speech and synthesized speech. On the other hand, the impact of coding is different for the two types of stimuli. In addition, synthesized speech seems to be insensitive to degradations introduced by most of the codecs investigated here. The reasons for these findings are discussed in detail. Finally, it is concluded that both models are capable of predicting the quality of transmitted synthesized speech under the investigated conditions to a certain degree. As expected, PESQ achieves the best performance over almost all of the investigated conditions.
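Accuracy comparisons of this kind boil down to measuring how far each model's per-condition predictions sit from the subjective scores. A minimal sketch using RMSE; all the MOS values below are hypothetical illustrations, not results from the paper:

```python
import math

def rmse(pred, subj):
    """Root-mean-square error between model predictions and subjective MOS."""
    n = len(pred)
    return math.sqrt(sum((p - s) ** 2 for p, s in zip(pred, subj)) / n)

# Hypothetical per-condition MOS: subjective scores vs. two model predictions.
subj = [4.1, 3.2, 2.6, 1.9]
pesq = [4.0, 3.4, 2.5, 2.1]   # closer to the subjective scores
p563 = [3.6, 3.5, 2.1, 2.4]   # non-intrusive model, larger deviations
```

A lower RMSE against the subjective assessments, usually reported alongside a correlation coefficient, is what supports a conclusion such as PESQ performing best across most conditions.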