
    Micro protocol engineering for unstructured carriers: On the embedding of steganographic control protocols into audio transmissions

    Network steganography conceals the transfer of sensitive information within unobtrusive data in computer networks. So-called micro protocols are communication protocols placed within the payload of a network steganographic transfer. They enrich this transfer with features such as reliability, dynamic overlay routing, or performance optimization, to mention just a few. We present different design approaches for embedding hidden channels carrying micro protocols in digitized audio signals, taking different requirements into consideration. On the basis of experimental results, the design approaches are compared and incorporated into a protocol engineering approach for micro protocols.
    Comment: 20 pages, 7 figures, 4 tables
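
    The paper compares several embedding designs rather than prescribing one. As a rough illustration of the underlying idea only, the sketch below hides a tiny, entirely hypothetical micro-protocol header (version, flags and a sequence number) in the least significant bits of 16-bit PCM samples; the field layout and all names are invented for this example and are not taken from the paper.

        # Illustrative only: a hypothetical 2-byte micro-protocol header
        # (4-bit version, 4-bit flags, 8-bit sequence number) hidden in the
        # least significant bit of consecutive 16-bit PCM samples.
        import struct

        def build_header(seq: int, flags: int = 0, version: int = 1) -> bytes:
            """Pack the hypothetical header: 4-bit version, 4-bit flags, 8-bit sequence."""
            return struct.pack(">BB", (version << 4) | (flags & 0x0F), seq & 0xFF)

        def embed_bits(samples: list[int], payload: bytes) -> list[int]:
            """Overwrite the LSB of each sample with one payload bit (plain LSB steganography)."""
            bits = [(byte >> i) & 1 for byte in payload for i in range(7, -1, -1)]
            if len(bits) > len(samples):
                raise ValueError("cover signal too short for payload")
            out = list(samples)
            for i, bit in enumerate(bits):
                out[i] = (out[i] & ~1) | bit
            return out

        def extract_bits(samples: list[int], n_bytes: int) -> bytes:
            """Recover n_bytes of hidden payload from the sample LSBs."""
            data = bytearray()
            for b in range(n_bytes):
                byte = 0
                for i in range(8):
                    byte = (byte << 1) | (samples[b * 8 + i] & 1)
                data.append(byte)
            return bytes(data)

        # Round-trip check on a dummy cover signal (32 fake 16-bit samples).
        cover = [100, -200, 3000, 42] * 8
        stego = embed_bits(cover, build_header(seq=7, flags=0b0010))
        assert extract_bits(stego, 2) == build_header(seq=7, flags=0b0010)

    A real carrier would also have to weigh robustness against transcoding and channel capacity, which is the kind of trade-off the paper's design approaches compare.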

    A low-power, high-performance speech recognition accelerator

    © 2019 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.

    Automatic Speech Recognition (ASR) is becoming increasingly ubiquitous, especially in the mobile segment. Fast and accurate ASR comes at a high energy cost that tiny, power-budgeted mobile devices cannot afford. Hardware acceleration reduces the energy consumption of ASR systems while delivering high performance. In this paper, we present an accelerator for large-vocabulary, speaker-independent, continuous speech recognition. It focuses on the Viterbi search algorithm, the main bottleneck in an ASR system. The proposed design consists of innovative techniques to improve the memory subsystem, since memory is the main bottleneck for performance and power in these accelerators. It includes a prefetching scheme tailored to the needs of ASR systems that hides main memory latency for a large fraction of the memory accesses with negligible impact on area. Additionally, we introduce a novel bandwidth-saving technique that reduces off-chip memory accesses by 20 percent. Finally, we present a power-saving technique that significantly reduces the leakage power of the accelerator's scratchpad memories, providing between 8.5 and 29.2 percent reduction in total power dissipation. Overall, the proposed design outperforms implementations running on the CPU by orders of magnitude, and achieves speedups between 1.7x and 5.9x for different speech decoders over a highly optimized CUDA implementation running on a GeForce GTX 980 GPU, while reducing energy by 123-454x.

    Peer reviewed. Postprint (author's final draft).
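
    The accelerator itself is hardware, but the Viterbi search it targets is easy to sketch in software. The toy decoder below runs the per-frame, per-state recurrence in log space over a small HMM; it only illustrates the kind of computation and memory-access pattern whose traffic the paper's prefetching and bandwidth-saving techniques address, and is in no way the authors' design.

        import numpy as np

        def viterbi(log_trans, log_emit, log_init, observations):
            """Toy Viterbi decoder in log space.

            log_trans[i, j]: log P(next state j | current state i)
            log_emit[j, o]:  log P(observation o | state j)
            log_init[j]:     log P(initial state j)
            """
            n_states = log_trans.shape[0]
            T = len(observations)
            score = np.full((T, n_states), -np.inf)
            backptr = np.zeros((T, n_states), dtype=int)

            score[0] = log_init + log_emit[:, observations[0]]
            for t in range(1, T):
                # For every target state, pick the best-scoring predecessor.
                cand = score[t - 1][:, None] + log_trans          # cand[i, j]: from i to j
                backptr[t] = np.argmax(cand, axis=0)
                score[t] = cand[backptr[t], np.arange(n_states)] + log_emit[:, observations[t]]

            # Backtrace the best state sequence.
            path = [int(np.argmax(score[-1]))]
            for t in range(T - 1, 0, -1):
                path.append(int(backptr[t, path[-1]]))
            return list(reversed(path))

        # Example: a 2-state HMM with 3 observation symbols.
        A = np.log([[0.7, 0.3], [0.4, 0.6]])
        B = np.log([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])
        pi = np.log([0.6, 0.4])
        print(viterbi(A, B, pi, [0, 1, 2, 2]))   # prints [0, 0, 1, 1]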

    Attention control comparisons with SLT for people with aphasia following stroke: methodological concerns raised following a systematic review

    Objective: Attention control comparisons in trials of stroke rehabilitation require care to minimize the risk of comparison choice bias. We compared the similarities and differences between speech and language therapy (SLT) and social support control interventions for people with aphasia. Data sources: Trial data from the 2016 Cochrane systematic review of SLT for aphasia after stroke. Methods: Direct and indirect comparisons between SLT, social support and no-therapy controls. We double-data-extracted intervention details using the Template for Intervention Description and Replication. Standardized mean differences and risk ratios (95% confidence intervals (CIs)) were calculated. Results: Seven trials compared SLT with social support (n = 447). Interventions were matched in format, frequency, intensity, duration and dose. Procedures and materials were often shared across interventions. Social support providers received specialist training and support. Targeted language rehabilitation was described only in the therapy interventions. Higher drop-out (P = 0.005, odds ratio (OR) 0.51, 95% CI 0.32–0.81) and non-adherence to social support interventions (P < 0.00001, OR 0.18, 95% CI 0.09–0.37) indicated an imbalance in completion rates, increasing the risk of control comparison bias. Conclusion: Distinctions between the social support and therapy interventions were eroded. Theoretically based language rehabilitation was the remaining difference in the therapy interventions. Social support is an important adjunct to formal language rehabilitation. Therapists should continue to enable those close to the person with aphasia to provide tailored communication support, functional language stimulation and opportunities to apply rehabilitation gains. Systematic group differences in completion rates are a design-related risk of bias in the outcomes observed.
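
    The completion-rate comparison above rests on standard odds ratios with 95% confidence intervals. As a reminder of how such figures are derived, the helper below computes an OR and its Wald-type interval from a 2x2 table; the counts in the example are invented for illustration and are not the review's data.

        import math

        def odds_ratio_ci(a, b, c, d, z=1.96):
            """Odds ratio and 95% CI from a 2x2 table:
            a = events in group 1, b = non-events in group 1,
            c = events in group 2, d = non-events in group 2.
            """
            odds_ratio = (a * d) / (b * c)
            se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # standard error of ln(OR)
            lower = math.exp(math.log(odds_ratio) - z * se_log_or)
            upper = math.exp(math.log(odds_ratio) + z * se_log_or)
            return odds_ratio, lower, upper

        # Hypothetical drop-out counts in two trial arms, for illustration only.
        print(odds_ratio_ci(a=20, b=180, c=35, d=165))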

    The role of avatars in e-government interfaces

    This paper investigates the use of avatars to communicate live messages in e-government interfaces. A comparative study is presented that evaluates the contribution of multimodal metaphors (including avatars) to the usability of e-government interfaces and to user trust. The communication metaphors evaluated included text, earcons, recorded speech and avatars. The experimental platform involved two interface versions and a sample of 30 users. The results demonstrated that the use of multimodal metaphors in an e-government interface can significantly enhance its usability and increase users' trust in it. A set of design guidelines for the use of multimodal metaphors in e-government interfaces was also produced.

    Broadband DOA estimation using Convolutional neural networks trained with noise signals

    A convolutional neural network (CNN) based classification method for broadband DOA estimation is proposed, in which the phase component of the short-time Fourier transform coefficients of the received microphone signals is fed directly into the CNN and the features required for DOA estimation are learnt during training. Since only the phase component of the input is used, the CNN can be trained with synthesized noise signals, making the preparation of the training data set easier than with speech signals. Through experimental evaluation, the ability of the proposed noise-trained CNN framework to generalize to speech sources is demonstrated. In addition, the robustness of the system to noise and to small perturbations in microphone positions, as well as its ability to adapt to different acoustic conditions, is investigated using experiments with simulated and real data.
    Comment: Published in Proceedings of the IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA) 201
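
    A rough sketch of the input pipeline described above, i.e. feeding the phase of the STFT coefficients of the multichannel signals into a CNN classifier. Layer sizes, the number of DOA classes and all hyperparameters are placeholders chosen for this example, not the authors' architecture.

        import numpy as np
        import torch
        import torch.nn as nn
        from scipy.signal import stft

        def phase_features(mics: np.ndarray, fs: int = 16000, nperseg: int = 256) -> torch.Tensor:
            """Phase of the STFT per microphone, stacked as CNN input channels.

            mics: array of shape (n_mics, n_samples).
            Returns a tensor of shape (1, n_mics, n_freq, n_frames).
            """
            _, _, Z = stft(mics, fs=fs, nperseg=nperseg)    # (n_mics, n_freq, n_frames)
            return torch.from_numpy(np.angle(Z)).float().unsqueeze(0)

        class PhaseCNN(nn.Module):
            """Minimal CNN mapping a phase map to DOA-class scores (placeholder sizes)."""
            def __init__(self, n_mics: int = 4, n_classes: int = 37):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(n_mics, 16, kernel_size=3, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1),
                )
                self.classifier = nn.Linear(32, n_classes)

            def forward(self, x):
                return self.classifier(self.features(x).flatten(1))

        # Training pairs such phase maps (computed from synthesized noise, as the
        # paper proposes) with known source directions; here we only run a forward pass.
        logits = PhaseCNN()(phase_features(np.random.randn(4, 16000)))   # shape (1, 37)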

    Automatic Response Assessment in Regions of Language Cortex in Epilepsy Patients Using ECoG-based Functional Mapping and Machine Learning

    Brain regions responsible for language and cognitive functions in epilepsy patients should be accurately localized prior to surgery. Electrocorticography (ECoG)-based Real-Time Functional Mapping (RTFM) has been shown to be a safer alternative to electrical cortical stimulation mapping (ESM), which is currently the clinical gold standard. Conventional methods for analyzing RTFM signals are based on statistical comparison of signal power in certain frequency bands, and compared to the gold standard (ESM) they have limited accuracy when assessing channel responses. In this study, we address the accuracy limitation of current RTFM signal estimation methods by analyzing the full frequency spectrum of the signal and by replacing signal power estimation with machine learning algorithms, specifically random forests (RF), as a proof of concept. We train the RF on the power spectral density of the time-series RTFM signal in a supervised learning framework in which ground-truth labels are obtained from the ESM. Results obtained from RTFM of six adult patients in a strictly controlled experimental setup reveal a state-of-the-art detection accuracy of approximately 78% for the language comprehension task, an improvement of 23% over the conventional RTFM estimation method. To the best of our knowledge, this is the first study exploring the use of machine learning approaches for determining RTFM signal characteristics and using the whole frequency band for better region localization. Our results demonstrate the feasibility of machine-learning-based RTFM signal analysis over the full spectrum becoming a clinical routine in the near future.
    Comment: This paper will appear in the Proceedings of the IEEE International Conference on Systems, Man and Cybernetics (SMC) 201
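
    A minimal sketch of the pipeline the abstract outlines: full-spectrum Welch power spectral densities of each RTFM trial as features, ESM-derived labels as ground truth, and a random forest classifier. Sampling rate, window length and the data below are placeholders, not the study's recordings.

        import numpy as np
        from scipy.signal import welch
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        def psd_features(trials: np.ndarray, fs: float = 1200.0, nperseg: int = 256) -> np.ndarray:
            """Welch PSD over the full spectrum; trials has shape (n_trials, n_samples)."""
            _, pxx = welch(trials, fs=fs, nperseg=nperseg)
            return np.log(pxx + 1e-12)    # log-PSD is a common, better-conditioned feature

        # Placeholder data: 200 single-channel trials, label 1 = ESM-positive (language) site.
        rng = np.random.default_rng(0)
        trials = rng.standard_normal((200, 4800))
        labels = rng.integers(0, 2, size=200)

        clf = RandomForestClassifier(n_estimators=200, random_state=0)
        scores = cross_val_score(clf, psd_features(trials), labels, cv=5)
        print(scores.mean())    # roughly chance level on this random placeholder data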

    Research on Architectures for Integrated Speech/Language Systems in Verbmobil

    The German joint research project Verbmobil (VM) aims at the development of a speech-to-speech translation system. This paper reports on research done in our group, which belongs to Verbmobil's subproject on system architectures (TP15). Our specific research areas are the construction of parsers for spontaneous speech, investigations into the parallelization of parsing, and contributions to the development of a flexible communication architecture with distributed control.
    Comment: 6 pages, 2 PostScript figures