
    Can ultrasonic doppler help detecting nasality for silent speech interfaces?: An exploratory analysis based on alignment of the doppler signal with velum aperture information from real-time MRI

    This paper describes an exploratory analysis of the usefulness of Ultrasonic Doppler signal data, collected from a single speaker, for detecting the velum movement associated with European Portuguese nasal vowels. This is directly related to the unsolved problem of detecting nasality in silent speech interfaces. The applied procedure uses Real-Time Magnetic Resonance Imaging (RT-MRI), collected from the same speaker, as a method to interpret the reflected ultrasonic data. By ensuring compatible scenario conditions and proper time alignment between the Ultrasonic Doppler signal data and the RT-MRI data, we are able to accurately estimate when the velum moves and the type of movement under a nasal vowel occurrence. The combination of these two sources revealed a moderate relation in the average energy of frequency bands around the carrier, indicating the probable presence of velum information in the Ultrasonic Doppler signal.
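    As a rough illustration of the band-energy measure this abstract describes (a hypothetical sketch, not the authors' code — the carrier frequency, band width, and band count are assumed values), the average energy in frequency bands around an ultrasonic carrier can be computed from an FFT of the received signal:

    ```python
    import numpy as np

    def band_energies(signal, fs, carrier_hz, band_hz=50.0, n_bands=4):
        """Average energy in symmetric frequency bands around the carrier.

        signal:     1-D array of received ultrasonic samples
        fs:         sampling rate in Hz
        carrier_hz: emitted carrier frequency in Hz
        band_hz:    width of each analysis band in Hz
        n_bands:    number of bands on each side of the carrier
        """
        spectrum = np.abs(np.fft.rfft(signal)) ** 2
        freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
        energies = {}
        for k in range(-n_bands, n_bands + 1):
            lo = carrier_hz + k * band_hz - band_hz / 2
            hi = lo + band_hz
            mask = (freqs >= lo) & (freqs < hi)
            energies[k] = spectrum[mask].mean() if mask.any() else 0.0
        return energies
    ```

    Movement-induced Doppler components show up as energy in the side bands (k != 0), which is the kind of feature the paper relates to velum movement.
    
    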

    Gesture Detection Using Doppler Sonar

    While some computing devices include specialized hardware for gesture detection that enables hands-free operation, many commodity devices such as laptops do not include such hardware. This disclosure describes techniques, implemented with user permission, to automatically detect gestures performed in proximity of commodity devices at no additional cost by employing the on-device speakers and microphone as a Doppler sonar (sound navigation and ranging). The on-device speaker(s) generate an ultrasonic ping signal inaudible to the human ear and the device microphone(s) capture reflections from the user’s body parts. The type of Doppler shift in the reflected signal can indicate the direction of motion of the body part. The signal is provided as input to a trained classifier which can map the detected signal to the type of gesture the user is making. The described techniques can identify gestures quickly, thus enabling a smooth user experience for gesture-based interaction.
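    The Doppler-shift step this disclosure outlines can be sketched as follows (an illustrative approximation only; the ping frequency, search band, and thresholds are assumptions, not values from the disclosure — a reflection shifted above the carrier indicates approach, below it recession):

    ```python
    import numpy as np

    PING_HZ = 20000.0  # inaudible carrier emitted by the speaker (assumed value)

    def motion_direction(mic_frame, fs, threshold_hz=20.0):
        """Classify motion from the Doppler shift of the reflected ping.

        mic_frame: 1-D array of microphone samples
        fs:        sampling rate in Hz
        Returns "approaching", "receding", or "still".
        """
        spectrum = np.abs(np.fft.rfft(mic_frame * np.hanning(len(mic_frame))))
        freqs = np.fft.rfftfreq(len(mic_frame), d=1.0 / fs)
        # Search near the carrier, excluding the carrier bin itself.
        near = (np.abs(freqs - PING_HZ) < 500.0) & \
               (np.abs(freqs - PING_HZ) > threshold_hz / 2)
        peak_freq = freqs[near][np.argmax(spectrum[near])]
        shift = peak_freq - PING_HZ
        if shift > threshold_hz:
            return "approaching"
        if shift < -threshold_hz:
            return "receding"
        return "still"
    ```

    In the disclosed system the per-frame shift features would feed a trained classifier rather than a fixed threshold; the threshold here only stands in for that model.
    
    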

    Multimodal corpora for silent speech interaction

    A Silent Speech Interface (SSI) allows speech communication to take place in the absence of an acoustic signal. This type of interface is an alternative to conventional Automatic Speech Recognition, which is not adequate for users with some speech impairments or in the presence of environmental noise. The work presented here produces the conditions to explore and analyze complex combinations of input modalities applicable in SSI research. By exploring non-invasive and promising modalities, we have selected the following sensing technologies used in human-computer interaction: Video and Depth input, Ultrasonic Doppler sensing, and Surface Electromyography. This paper describes a novel data collection methodology in which these independent streams of information are synchronously acquired with the aim of supporting research and development of a multimodal SSI. The reported recordings were divided into two rounds: a first one in which the prompts were silently uttered, and a second in which speakers pronounced the scripted prompts in an audible, normal tone. In the first round of recordings, a total of 53.94 minutes were captured, of which 30.25% was estimated to be silent speech. In the second round of recordings, a total of 30.45 minutes were obtained, of which 30.05% was audible speech.

    Machine Learning and Signal Processing Design for Edge Acoustic Applications



    Techniques for Detecting and Classifying User Behavior Through the Fusion of Ultrasonic Proximity Data and Doppler-Shift Velocity Data

    This publication describes techniques relating to the detection and classification of user behavior through the fusion of ultrasonic proximity data and Doppler-shift velocity data on a computing device. Through the use of an ultrasonic proximity sensor and, in aspects, a radar sensor, user position and velocity can be combined and analyzed by the computing device to provide accurate user behavior predictions. These user behavior predictions may be used by the computing device to identify unique user behavior features and perform actions based on user behavior identification.
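    The fusion idea can be pictured as combining the two sensor readings into one feature record per frame. The sketch below is a toy rule-based stand-in for the trained fusion model the publication refers to; the field names and thresholds are invented for illustration:

    ```python
    from dataclasses import dataclass

    @dataclass
    class SensorFrame:
        proximity_cm: float    # from the ultrasonic proximity sensor
        velocity_cm_s: float   # from the Doppler/radar shift (negative = approaching)

    def classify_behavior(frame: SensorFrame) -> str:
        """Toy rule-based stand-in for a trained classifier over fused features."""
        if frame.proximity_cm < 50 and frame.velocity_cm_s < -10:
            return "approaching"
        if frame.proximity_cm < 50:
            return "present"
        return "absent"
    ```

    In the described system, sequences of such fused frames would feed a learned model instead of hand-written rules, allowing richer behavior categories than these three.
    
    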

    Towards a Multimodal Silent Speech Interface for European Portuguese

    Automatic Speech Recognition (ASR) in the presence of environmental noise is still a hard problem to tackle in speech science (Ng et al., 2000). Another problem well described in the literature is the one concerned with elderly speech production. Studies (Helfrich, 1979) have shown evidence of a slower speech rate, more breaks, more speech errors, and a reduced speech volume when comparing elderly speech with that of teenagers or adults on an acoustic level. This fact makes elderly speech hard to recognize using currently available stochastic-based ASR technology. To tackle these two problems in the context of ASR for Human-Computer Interaction, a novel Silent Speech Interface (SSI) in European Portuguese (EP) is envisioned.

    Personal Identification Using Ultrawideband Radar Measurement of Walking and Sitting Motions and a Convolutional Neural Network

    This study proposes a personal identification technique that applies machine learning with a two-layered convolutional neural network to spectrogram images obtained from radar echoes of a target person in motion. The walking and sitting motions of six participants were measured using an ultrawideband radar system. Time-frequency analysis was applied to the radar signal to generate spectrogram images containing the micro-Doppler components associated with limb movements. A convolutional neural network was trained using the spectrogram images with personal labels to achieve radar-based personal identification. The personal identification accuracies were evaluated experimentally to demonstrate the effectiveness of the proposed technique. Comment: 9 pages, 7 figures, and 3 tables.
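    A micro-Doppler spectrogram of the kind used here as CNN input is typically produced with a short-time Fourier transform. The sketch below (window length and hop size are assumed values, not taken from the paper) turns a complex radar baseband signal into a time-frequency image:

    ```python
    import numpy as np

    def micro_doppler_spectrogram(iq, window=256, hop=64):
        """STFT magnitude (in dB) of a complex radar baseband signal.

        iq:     1-D complex array of radar echo samples
        window: samples per STFT frame
        hop:    samples between consecutive frame starts
        Returns a (window, n_frames) image with Doppler frequency on one
        axis and slow time on the other.
        """
        win = np.hanning(window)
        frames = []
        for start in range(0, len(iq) - window + 1, hop):
            seg = iq[start:start + window] * win
            # Full complex FFT keeps positive and negative Doppler shifts.
            frames.append(np.fft.fftshift(np.fft.fft(seg)))
        img = 20 * np.log10(np.abs(np.array(frames).T) + 1e-12)
        return img
    ```

    Each column of the image is one time step; limb movements trace the oscillating micro-Doppler signatures that the CNN learns to associate with individual people.
    
    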

    In-Suit Doppler Technology Assessment

    The objective of this program was to perform a technology assessment survey of non-invasive air embolism detection utilizing Doppler ultrasound methodologies. The primary application of this technology will be a continuous monitor for astronauts while performing extravehicular activities (EVAs). The technology assessment was to include: (1) development of a full understanding of all relevant background research; and (2) a survey of the medical ultrasound marketplace for expertise, information, and technical capability relevant to this development. Upon completion of the assessment, LSR was to provide an overview of technological approaches and R&D/manufacturing organizations.