6 research outputs found

    Audio Processing

    An audio processing arrangement (200) comprises a plurality of audio sources (101, 102) generating input audio signals, a processing circuit (110) for deriving processed audio signals from the input audio signals, a combining circuit (120) for deriving a combined audio signal from the processed audio signals, and a control circuit (130) for controlling the processing circuit in order to maximize a power measure of the combined audio signal and for limiting a function of gains of the processed audio signals to a predetermined value. In accordance with the present invention, the audio processing arrangement (200) comprises a pre-processing circuit (140) for deriving pre-processed audio signals from the input audio signals to minimize a cross-correlation of interferences comprised in the input audio signals. The pre-processed signals are provided to the processing circuit (110) instead of the input audio signals.
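    The scheme above can be sketched as follows: decorrelate the interference with a whitening transform (the pre-processing step), then pick combining gains that maximize output power under a unit-norm gain constraint. This is a minimal illustration only; the function name, the choice of whitening, and the specific norm constraint are assumptions, not details from the patent.

```python
import numpy as np

def preprocess_and_combine(x, noise_cov):
    """Hedged sketch of the arrangement: x is (n_sources, n_samples),
    noise_cov is an estimate of the interference covariance."""
    # Pre-processing: whitening transform decorrelates the interference
    # components, i.e. it drives their cross-correlation toward zero.
    d, V = np.linalg.eigh(noise_cov)
    W = V @ np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12))) @ V.T
    y = W @ x  # pre-processed audio signals

    # Combining: gains that maximize the power of the combined signal
    # subject to ||g|| = 1 are the principal eigenvector of the
    # pre-processed signal covariance.
    R = (y @ y.T) / y.shape[1]
    _, U = np.linalg.eigh(R)
    g = U[:, -1]
    return g @ y  # combined audio signal
```

    The eigenvector solution falls out of the constrained power maximization (a Rayleigh-quotient problem); other gain constraints would lead to different closed forms.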

    Earphone arrangement and method of operation therefor

    An earphone arrangement comprises a microphone (109) which generates a microphone signal and a sound transducer (101) which radiates a first sound component to a user's ear (103) in response to a drive signal. An acoustic channel (111) is further provided for channeling external sound so as to provide a second sound component to the user's ear (103). An acoustic valve (117) allows the attenuation of the acoustic channel (111) to be controlled in response to a valve control signal. A control circuit (105) generates the valve control signal in response to the microphone signal to provide a variable attenuation resulting in a mixed sound of the first sound component and the second sound component reaching the user's ear (103). The combined use of acoustic and e.g. electric signal paths allows improved performance and in particular allows a dynamic trade-off between open and closed earphone design characteristics with respect to external sound.
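    One simple control policy consistent with this description maps the microphone level to a valve position: leave the acoustic channel open in quiet surroundings (open-earphone behavior) and close it as external sound grows loud (closed-earphone behavior). The thresholds and the linear mapping below are illustrative assumptions, not values from the patent.

```python
def valve_control(mic_rms, low=0.01, high=0.1):
    """Map microphone RMS level to a valve control value.
    Returns 0.0 (acoustic channel fully open) in quiet conditions,
    1.0 (fully closed / maximally attenuating) in loud conditions,
    with a linear ramp in between. Thresholds are illustrative."""
    t = (mic_rms - low) / (high - low)
    return min(max(t, 0.0), 1.0)
```

    A real control circuit would add smoothing (attack/release time constants) so the valve does not chatter around the thresholds.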

    Robustness improvement of ultrasound-based sensor systems for speech communication

    In recent years, auxiliary sensors have been employed to improve the robustness of emerging hands-free speech communication systems based on air-conduction microphones, especially in low signal-to-noise-ratio environments. One such sensor, based on ultrasound, captures articulatory movement information during speech production; it has been used in a voice activity detector and has also been shown to improve the performance of speech recognizers. However, studies thus far have tested such sensors in ideal scenarios where only relevant articulatory information was assumed to be present. Therefore, in this paper the robustness of such sensors in realistic scenarios is investigated. Challenges arising from non-articulatory movements and other environmental influences captured by ultrasound sensors are discussed, and strategies for their detection are presented. Finally, the proposed strategies are evaluated in an ultrasound-based voice activity detector.
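    For context, the simplest form of the voice-activity-detection task mentioned above is a per-frame energy threshold on the sensor signal; an ultrasound-based VAD replaces or augments this decision with Doppler-energy features from articulatory motion. The framing and threshold below are illustrative baseline assumptions, not the paper's method.

```python
import numpy as np

def frame_vad(x, frame_len=256, thresh_db=-30.0):
    """Minimal energy-threshold VAD baseline: split the signal into
    frames and mark a frame active when its energy lies within
    thresh_db of the loudest frame. Parameters are illustrative."""
    n = len(x) // frame_len
    frames = x[: n * frame_len].reshape(n, frame_len)
    energy_db = 10 * np.log10(np.mean(frames**2, axis=1) + 1e-12)
    return energy_db > energy_db.max() + thresh_db
```

    Robustness problems of the kind the paper studies show up exactly here: non-articulatory movements also raise frame energy, so a pure energy criterion produces false positives.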

    Learning Doppler with deep neural networks and its application to intra-cardiac echography

    Cardiac ablation therapy is an effective treatment for atrial fibrillation and ventricular tachycardia that relies on the creation of electrically isolating scars, e.g. through heat. The ability to reliably visualize and assess the formation of these lesions during the procedure would greatly enhance the therapy's success rate and safety. Tissue Doppler echography enables measurement of tissue strain, and could therefore be used to monitor and quantify the stiffening of developing lesions. In tissue Doppler, the tradeoff between spatiotemporal resolution and estimation accuracy/precision is balanced by manually tweaking the fast- and slow-time range gates, with the optimal settings varying across measurements and desired clinical objectives. Convolutional neural networks have shown remarkable performance at learning to execute a large variety of signal and image processing tasks. In this work, we show how a deep neural network can be trained to robustly fulfil Doppler imaging functionality, which we term DopplerNet.
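    The conventional estimator that such a network would emulate or improve upon is the lag-one autocorrelation (Kasai) Doppler estimator, which averages over exactly the fast- and slow-time gates the abstract mentions. The sketch below is a standard textbook formulation, not code from the paper; the parameter values in the usage are illustrative.

```python
import numpy as np

def kasai_velocity(iq, prf, f0, c=1540.0):
    """Classical lag-1 autocorrelation Doppler velocity estimate.
    iq : complex IQ ensemble, shape (slow_time, fast_time), covering
         the slow-/fast-time range gates being averaged over.
    prf: pulse repetition frequency [Hz]; f0: center frequency [Hz];
    c  : speed of sound [m/s]."""
    # Mean lag-1 autocorrelation along slow time, averaged over the
    # fast-time gate; its phase encodes the mean Doppler shift.
    r1 = np.mean(iq[1:] * np.conj(iq[:-1]))
    return (c * prf / (4 * np.pi * f0)) * np.angle(r1)
```

    Widening the gates lowers variance but blurs spatiotemporal detail, which is precisely the manual tradeoff the abstract says a learned estimator could manage automatically.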

    Learning sub-sampling and signal recovery with applications in ultrasound imaging

    Limitations on bandwidth and power consumption impose strict bounds on data rates of diagnostic imaging systems. Consequently, the design of suitable (i.e. task- and data-aware) compression and reconstruction techniques has attracted considerable attention in recent years. Compressed sensing emerged as a popular framework for sparse signal reconstruction from a small set of compressed measurements. However, typical compressed sensing designs measure a (non)linearly weighted combination of all input signal elements, which poses practical challenges. These designs are also not necessarily task-optimal. In addition, real-time recovery is hampered by the iterative and time-consuming nature of sparse recovery algorithms. Recently, deep learning methods have shown promise for fast recovery from compressed measurements, but the design of adequate and practical sensing strategies remains a challenge. Here, we propose a deep learning solution termed Deep Probabilistic Sub-sampling (DPS) that learns a task-driven sub-sampling pattern while jointly training a subsequent task model. Once learned, the task-based sub-sampling patterns are fixed and straightforwardly implementable, e.g. by non-uniform analog-to-digital conversion, sparse array design, or slow-time ultrasound pulsing schemes. The effectiveness of our framework is demonstrated in-silico for sparse signal recovery from partial Fourier measurements, and in-vivo for both anatomical image and tissue-motion (Doppler) reconstruction from sub-sampled medical ultrasound imaging data.
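    The core ingredient of such probabilistic sub-sampling is drawing a binary keep/drop mask from trainable per-element logits, typically via a Gumbel-based relaxation so the sampling distribution can receive gradients. The sketch below shows only the forward sampling step, under assumed names and a simple top-k selection; it is a conceptual illustration in the spirit of DPS, not the paper's implementation.

```python
import numpy as np

def sample_pattern(logits, n_keep, temperature=0.5, rng=None):
    """Draw a binary sub-sampling mask from per-element logits using
    Gumbel-perturbed top-k selection. In a learned system the logits
    are trainable; here they are just an input. Names illustrative."""
    rng = np.random.default_rng() if rng is None else rng
    # Gumbel(0, 1) noise makes the selection stochastic while still
    # favoring high-logit elements.
    g = -np.log(-np.log(rng.uniform(size=logits.shape)))
    scores = (logits + g) / temperature
    idx = np.argsort(scores)[-n_keep:]
    mask = np.zeros(logits.shape[0])
    mask[idx] = 1.0
    return mask
```

    At deployment the stochastic draw is replaced by the fixed learned pattern, which is what makes hardware realizations such as non-uniform A/D conversion or sparse arrays straightforward.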

    Audio enhancement

    An audio enhancement device in accordance with the present invention comprises an extraction circuit (210) for extracting a first audio signal from a first input signal and a second audio signal from a second input signal, and for extracting inter-channel parameters from at least a part of the first audio signal and at least a part of the second audio signal, a processing circuit (230) for processing the first audio signal and the second audio signal into processed audio signals, and a re-instating circuit (240) for re-instating the inter-channel parameters into the processed audio signals, resulting in a first output audio signal and a second output audio signal.
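    As a concrete illustration of the extract/re-instate idea, take one common inter-channel parameter, the inter-channel level difference (ILD): measure it before processing, then rescale the processed channels so the original level difference is restored. The choice of parameter and the symmetric gain split are assumptions for this sketch, not details from the patent.

```python
import numpy as np

def extract_ild(left, right, eps=1e-12):
    """Inter-channel level difference in dB (one example of an
    inter-channel parameter the extraction circuit might compute)."""
    return 10 * np.log10((np.sum(left**2) + eps) / (np.sum(right**2) + eps))

def reinstate_ild(proc_left, proc_right, ild_db):
    """Rescale processed channels so their level difference matches
    the stored ild_db; the correction is split symmetrically."""
    cur = extract_ild(proc_left, proc_right)
    gain = 10 ** ((ild_db - cur) / 40)  # /40: half the dB gap per channel
    return proc_left * gain, proc_right / gain
```

    A full parametric-stereo re-instatement would also restore inter-channel time/phase differences and coherence; ILD alone keeps the sketch short.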