
Real-time implementation of a non-invasive tongue-based human-robot interface

By M. Mace, K. Mamun, R. Vaidyanathan, S. Wang and L. Gupta

Abstract

Real-time implementation of an assistive human-machine interface system based on tongue-movement ear pressure (TMEP) signals is presented, alongside results from a series of simulated control tasks. Moving this system to an online setting involved short-term energy calculation, detection, segmentation and subsequent signal classification, all of which had to be reformulated from previous off-line testing. This included the formulation of a new classification and feature extraction method. The scheme utilises the discrete cosine transform to extract frequency features from the time-domain information, a univariate Gaussian maximum-likelihood classifier, and a two-phase cross-validation procedure for feature selection and extraction. The performance of this classifier is presented alongside a real-time implementation of the decision-fusion classification algorithm, with the two achieving accuracies of 96.28% and 93.12% respectively. The system testing takes into consideration potential segmentation of false-positive signals. A simulation mapping commands to a planar wheelchair demonstrates the capacity of the system for assistive robotic control. These are the first real-time results published for a tongue-based human-machine interface that does not require a transducer to be placed within the vicinity of the oral cavity.
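The pipeline described in the abstract (frame-wise short-term energy for signal detection, DCT features from the segmented time-domain signal, and a univariate Gaussian maximum-likelihood classifier) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the frame length, thresholding strategy, and class structure are assumptions, and the per-feature independence in the classifier reflects the "univariate" modelling the abstract names.

```python
import numpy as np

def short_term_energy(x, frame_len=64):
    """Energy of non-overlapping frames; frames above a threshold
    would mark candidate TMEP segments (frame_len is an assumption)."""
    n = len(x) // frame_len
    frames = x[:n * frame_len].reshape(n, frame_len)
    return (frames ** 2).sum(axis=1)

def dct2(x):
    """Orthonormal DCT-II: frequency features from a time-domain segment."""
    N = len(x)
    k = np.arange(N)
    # basis[k, n] = cos(pi * (2n + 1) * k / (2N))
    basis = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * N))
    scale = np.full(N, np.sqrt(2.0 / N))
    scale[0] = np.sqrt(1.0 / N)
    return scale * (basis @ x)

class UnivariateGaussianML:
    """Each feature modelled as an independent Gaussian per class;
    classification by maximum summed log-likelihood."""
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.mu = np.array([X[y == c].mean(axis=0) for c in self.classes])
        self.var = np.array([X[y == c].var(axis=0) + 1e-9 for c in self.classes])
        return self

    def predict(self, X):
        # (samples, classes, features) broadcast of per-feature log-likelihoods
        d2 = (X[:, None, :] - self.mu[None]) ** 2 / self.var[None]
        ll = -0.5 * (np.log(2 * np.pi * self.var[None]) + d2).sum(axis=2)
        return self.classes[ll.argmax(axis=1)]
```

In use, each detected segment would be passed through `dct2`, a subset of coefficients retained (the paper's two-phase cross-validation performs this selection), and the resulting feature vector classified.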

Topics: QC, R1, TK
Publisher: IEEE
Year: 2010
DOI identifier: 10.1109/iros.2010.5648834
OAI identifier: oai:eprints.soton.ac.uk:178257
Provided by: e-Prints Soton
