
    Myoelectric Human Computer Interaction Using Reliable Temporal Sequence-based Myoelectric Classification for Dynamic Hand Gestures

    To put a computerized device under human control, various interface techniques have been studied in the realm of Human Computer Interaction (HCI) design. This dissertation focuses on the myoelectric interface, which controls a device via neuromuscular electrical signals. Myoelectric interfaces have advanced by recognizing repeated patterns of the signal (pattern recognition-based myoelectric classification). However, when myoelectric classification is used to extract multiple discrete states from limited muscle sites, robustness issues arise due to external conditions: limb position changes, electrode shifts, and skin condition changes. This dissertation examines the robustness issue, i.e., the drop in the performance of myoelectric classification when the limb position differs from the position in which the system was trained. The two research goals outlined in this dissertation are to increase the reliability of the myoelectric system and to build a myoelectric HCI to manipulate a 6-DOF robot arm with a 1-DOF gripper. To tackle the robustness issue, the proposed method uses dynamic motions, which change their pose and configuration over time. The method assumes that using dynamic motions is more reliable, vis-a-vis the robustness issues, than using static motions. The robustness of the method is evaluated by choosing the training sets and validation sets at different limb positions. Next, an HCI system manipulating a 6-DOF robot arm with a 1-DOF gripper is introduced. The HCI system includes an inertial measurement unit to measure the limb orientation, as well as EMG sensors to acquire muscle force and to classify dynamic motions. Muscle force and the orientation of the forearm are used to generate velocity commands, and the classified dynamic motions are used to change the manipulation modes. The performance of the myoelectric interface is measured in terms of real-time classification accuracy, path efficiency, and time-related measures. In conclusion, this dissertation proposes a reliable myoelectric classification method and develops a myoelectric interface using it for an HCI application. The robustness of the proposed myoelectric classification is verified against previous myoelectric classification approaches, and the usability of the developed myoelectric interface is compared to a well-known interface.
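    The abstract above describes mapping EMG-derived muscle force and IMU-measured forearm orientation to velocity commands, with classified dynamic gestures switching manipulation modes. The sketch below is only a minimal illustration of that control idea, not the dissertation's actual controller; the gain, deadband, and gesture names are hypothetical placeholders.

```python
import numpy as np

def velocity_command(emg_envelope, forearm_rotation, gain=0.15, deadband=0.05):
    """Map muscle activation and forearm orientation to a Cartesian velocity.

    emg_envelope     : rectified, low-pass-filtered EMG, normalized to [0, 1]
    forearm_rotation : 3x3 rotation matrix of the forearm IMU in the base frame
    Returns a 3-vector end-effector velocity.
    """
    speed = max(0.0, emg_envelope - deadband) * gain               # muscle force sets the speed
    pointing_dir = forearm_rotation @ np.array([1.0, 0.0, 0.0])    # forearm x-axis sets direction
    return speed * pointing_dir

# Mode switching driven by the classified dynamic gesture (labels are placeholders).
MODES = ["translate", "rotate", "gripper"]

def next_mode(current_mode, gesture_label):
    if gesture_label == "wrist_flick":           # hypothetical gesture name
        return MODES[(MODES.index(current_mode) + 1) % len(MODES)]
    return current_mode
```

    In this toy mapping, muscle activation sets the speed, the forearm axis sets the direction, and a recognized dynamic gesture cycles between translation, rotation, and gripper modes.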

    Subject-independent modeling of sEMG signals for the motion of a single robot joint through GMM Modelization

    This thesis evaluates the use of a probabilistic model, the Gaussian Mixture Model (GMM), trained on electromyography (EMG) signals to estimate the bending angle of a single human joint. The GMM is built from EMG signals collected from several people, the goal being a general, subject-independent model. The model is then tested on new, unseen data, and the quality of the estimates is evaluated by means of the Normalized Mean Square Error.
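    As an illustration of the GMM/GMR pipeline described above, the following sketch fits a joint Gaussian Mixture Model over pooled EMG features and joint angles, conditions on new EMG features to estimate the angle, and scores the result with a normalized mean square error. It is a generic sketch under assumed feature shapes, not the thesis's exact implementation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from scipy.stats import multivariate_normal

def fit_joint_gmm(emg_features, joint_angle, n_components=8):
    """Fit a GMM on the joint space [EMG features, angle] pooled over subjects."""
    data = np.hstack([emg_features, joint_angle.reshape(-1, 1)])
    return GaussianMixture(n_components=n_components, covariance_type="full",
                           random_state=0).fit(data)

def gmr_predict(gmm, emg_features):
    """Gaussian Mixture Regression: E[angle | EMG features] under the fitted GMM."""
    d = emg_features.shape[1]                      # input dimensionality
    preds = np.zeros(len(emg_features))
    for i, x in enumerate(emg_features):
        h = np.array([w * multivariate_normal.pdf(x, m[:d], C[:d, :d])
                      for w, m, C in zip(gmm.weights_, gmm.means_, gmm.covariances_)])
        h /= h.sum()
        cond_means = [m[d] + C[d, :d] @ np.linalg.solve(C[:d, :d], x - m[:d])
                      for m, C in zip(gmm.means_, gmm.covariances_)]
        preds[i] = h @ np.array(cond_means)
    return preds

def nmse(y_true, y_pred):
    """Normalized mean square error used to score the subject-independent model."""
    return np.mean((y_true - y_pred) ** 2) / np.var(y_true)
```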

    Challenges and Trends of Machine Learning in the Myoelectric Control System for Upper Limb Exoskeletons and Exosuits

    Myoelectric control systems, as emerging control strategies for upper limb wearable robots, have shown their efficacy and applicability in providing motion assistance and/or restoring motor functions in people with impairments or disabilities, as well as in augmenting the physical performance of able-bodied individuals. In myoelectric control, electromyographic (EMG) signals from muscles are utilized, improving adaptability and human-robot interaction during various motion tasks. Machine learning has been widely applied in myoelectric control systems due to its advantages in detecting and classifying various human motions and motion intentions. This chapter illustrates the challenges and trends in recent machine learning algorithms implemented in myoelectric control systems designed for upper limb wearable robots, and highlights the key focus areas for future research. Different modalities of recent machine learning-based myoelectric control systems are described in detail, and their advantages and disadvantages are summarized. Furthermore, key design aspects and the types of experiments conducted to validate the efficacy of the proposed myoelectric controllers are explained. Finally, the challenges and limitations of current myoelectric control systems using machine learning algorithms are analyzed, from which future research directions are suggested.

    From Unimodal to Multimodal: Improving sEMG-Based Pattern Recognition via Deep Generative Models

    Multimodal hand gesture recognition (HGR) systems can achieve higher recognition accuracy. However, acquiring multimodal gesture data typically requires users to wear additional sensors, thereby increasing hardware costs. This paper proposes a novel generative approach to improve surface electromyography (sEMG)-based HGR accuracy via virtual Inertial Measurement Unit (IMU) signals. Specifically, we first trained a deep generative model, based on the intrinsic correlation between forearm sEMG and forearm IMU signals, to generate virtual forearm IMU signals from the input forearm sEMG signals. The sEMG signals and virtual IMU signals were then fed into a multimodal Convolutional Neural Network (CNN) model for gesture recognition. To evaluate the performance of the proposed approach, we conducted experiments on six databases: five publicly available databases and our own collected database of 28 subjects performing 38 gestures, containing both sEMG and IMU data. The results show that the proposed approach outperforms the sEMG-based unimodal HGR method, with accuracy increases of 2.15%-13.10%, demonstrating that incorporating virtual IMU signals generated by deep generative models can significantly enhance the accuracy of sEMG-based HGR. The proposed approach represents a successful attempt to transition from unimodal HGR to multimodal HGR without additional sensor hardware.
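    A toy two-stage sketch of the idea described above, an sEMG-to-virtual-IMU generator followed by a two-branch fusion CNN, is given below in PyTorch. The layer sizes, channel counts, and window length are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class VirtualIMUGenerator(nn.Module):
    """Toy 1-D conv encoder-decoder: a window of sEMG channels -> pseudo-IMU channels."""
    def __init__(self, emg_ch=8, imu_ch=6):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(emg_ch, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, imu_ch, kernel_size=5, padding=2),
        )
    def forward(self, emg):              # emg: (batch, emg_ch, time)
        return self.net(emg)             # virtual IMU: (batch, imu_ch, time)

class FusionHGR(nn.Module):
    """Two-branch CNN: real sEMG branch + virtual-IMU branch, fused into gesture logits."""
    def __init__(self, emg_ch=8, imu_ch=6, n_gestures=38):
        super().__init__()
        def branch(c_in):
            return nn.Sequential(nn.Conv1d(c_in, 32, 5, padding=2), nn.ReLU(),
                                 nn.AdaptiveAvgPool1d(1), nn.Flatten())
        self.emg_branch, self.imu_branch = branch(emg_ch), branch(imu_ch)
        self.head = nn.Linear(64, n_gestures)
    def forward(self, emg, virtual_imu):
        feats = torch.cat([self.emg_branch(emg), self.imu_branch(virtual_imu)], dim=1)
        return self.head(feats)

# Usage on a dummy batch: generate virtual IMU signals, then classify the gesture window.
gen, clf = VirtualIMUGenerator(), FusionHGR()
emg = torch.randn(4, 8, 200)             # 4 windows, 8 sEMG channels, 200 samples
logits = clf(emg, gen(emg))
```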

    Human skill capturing and modelling using wearable devices

    Industrial robots are delivering more and more manipulation services in manufacturing. However, when the task is complex, it is difficult to programme a robot to fulfil all the requirements, because even a relatively simple task such as a peg-in-hole insertion contains many uncertainties, e.g. clearance, initial grasping position and insertion path. Humans, on the other hand, can deal with these variations using their vision and haptic feedback. Although humans can adapt to uncertainties easily, the skill-based performances that relate to their tacit knowledge cannot, most of the time, be easily articulated. Even though an automation solution need not fully imitate human motion, since some of it is unnecessary, it would be useful if the skill-based performance of a human could first be interpreted and modelled, allowing it to be transferred to the robot. This thesis aims to reduce robot programming efforts significantly by developing a methodology to capture, model and transfer manual manufacturing skills from a human demonstrator to the robot. Recently, Learning from Demonstration (LfD) has been gaining interest as a framework for transferring skills from a human teacher to a robot, using probabilistic encoding approaches to model observations and state transition uncertainties. In close or actual contact manipulation tasks, it is difficult to reliably record state-action examples without interfering with the human senses and activities. Therefore, wearable sensors are investigated as a promising means of recording state-action examples without restricting the human experts during the skilled execution of their tasks. Firstly, to track human motions accurately and reliably in a defined 3-dimensional workspace, a hybrid system of Vicon and IMUs is proposed to compensate for the known limitations of each individual system. The data fusion method was able to overcome the occlusion and frame-flipping problems in the two-camera Vicon setup and the drifting problem associated with the IMUs. The results indicated that the occlusion and frame-flipping problems associated with Vicon can be mitigated by using the IMU measurements. Furthermore, the proposed method improves the Mean Square Error (MSE) tracking accuracy range from 0.8˚ to 6.4˚ compared with the IMU-only method. Secondly, to record haptic feedback from a teacher without physically obstructing their interactions with the workpiece, wearable surface electromyography (sEMG) armbands were used as an indirect method of indicating contact feedback during manual manipulations. A muscle-force model using a Time Delayed Neural Network (TDNN) was built to map the sEMG signals to the known contact force. The results indicated that the model was capable of estimating the force from the sEMG armbands in the applications of interest, namely peg-in-hole and beater winding tasks, with MSEs of 2.75 N and 0.18 N respectively. Finally, given the force estimation and the motion trajectories, a Hidden Markov Model (HMM) based approach was utilised as a state recognition method to encode and generalise the spatial and temporal information of the skilled executions, allowing a more representative control policy to be derived. A modified Gaussian Mixture Regression (GMR) method was then applied to enable motion reproduction using the learned state-action policy.
To simplify the validation procedure, instead of using the robot, additional demonstrations from the teacher were used to verify the reproduction performance of the policy, under the assumption that the human teacher and the robot learner are physically identical systems. The results confirmed the generalisation capability of the HMM model across a number of demonstrations from different subjects, and the motions reproduced by GMR were acceptable in these additional tests. The proposed methodology provides a framework for producing a state-action model from skilled demonstrations that can be translated into robot kinematics and joint states for the robot to execute. The implication for industry is reduced effort and time in programming robots for applications where skilled human performance is required to cope robustly with various uncertainties during task execution.
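    To illustrate the muscle-force modelling step described above, the sketch below stacks a short history of sEMG samples into tapped-delay inputs and regresses contact force with a small feedforward network, approximating the time-delayed mapping idea; the tap count, layer sizes, and synthetic data are assumptions rather than the thesis's TDNN configuration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def tapped_delay_features(emg, n_taps=10):
    """Stack the current and previous n_taps-1 sEMG samples into one input vector.

    emg : (n_samples, n_channels) array of rectified, filtered sEMG.
    Returns an array of shape (n_samples - n_taps + 1, n_taps * n_channels).
    """
    windows = [emg[i:i + n_taps].ravel() for i in range(len(emg) - n_taps + 1)]
    return np.array(windows)

def fit_muscle_force_model(emg, force, n_taps=10):
    """Train a feedforward net on delayed sEMG inputs to predict the measured contact force."""
    X = tapped_delay_features(emg, n_taps)
    y = force[n_taps - 1:]                        # align targets with the last tap
    model = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0)
    model.fit(X, y)
    return model

# Example with synthetic data: an 8-channel sEMG envelope and a scalar force signal.
rng = np.random.default_rng(0)
emg = rng.random((5000, 8))
force = emg.sum(axis=1) + 0.1 * rng.standard_normal(5000)
model = fit_muscle_force_model(emg, force)
```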

    Prototypical Arm Motions from Human Demonstration for Upper-Limb Prosthetic Device Control

    Controlling a complex upper limb prosthesis, akin to a healthy arm, is still an open challenge due to the inadequate number of inputs available to amputees. Designs have therefore largely focused on a limited number of controllable degrees of freedom, developing complex hand and grasp functionality rather than the wrist. This thesis investigates joint coordination based on human demonstrations, with the aim of vastly simplifying the control of wrist, elbow-wrist, and shoulder-elbow-wrist devices. The wide range of motions performed by the human arm during daily tasks makes it desirable to find representative subsets that reduce the dimensionality of these movements for a variety of applications, including the design and control of robotic and prosthetic devices. Here I present the results of an extensive human subjects study and two methods that were used to obtain representative categories of arm use spanning naturalistic motions during activities of daily living. First, I sought to identify sets of prototypical upper-limb motions that are functions of a single variable, allowing, for instance, an entire prosthetic or robotic arm to be controlled with a single input from a user, along with a means to select between motions for different tasks. Second, I decoupled the orientation from the location of the hand and analyzed the hand location in three ways and its orientation in three reference frames. Both of these analyses are applications of data-driven approaches that reduce the wide range of hand and arm use to a smaller representative set. Together they provide insight into arm usage in daily life and inform an implementation in prosthetic or robotic devices without the need for additional hardware. To demonstrate the control efficacy of prototypical arm motions in upper-limb prosthetic devices, I developed an immersive virtual reality environment in which able-bodied participants tested different devices and controls. I coined the term trajectory control for this prototypical arm motion control, and I found that, as device complexity increased from a 3-DOF wrist to a 4-DOF elbow-wrist and a 7-DOF shoulder-elbow-wrist, it enabled users to complete tasks faster with a more intuitive interface and without additional body compensation, while featuring better movement cosmesis when compared to standard controls.
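    One plausible way to realize the "single input drives the whole arm" idea described above is to replay a stored prototypical joint trajectory indexed by a scalar phase variable. The sketch below is illustrative only: the prototype trajectory and the notion of driving the phase from a single user input (for example, one EMG channel) are assumptions, not the thesis's specific method.

```python
import numpy as np

class TrajectoryController:
    """Drive all arm joints from one scalar input by replaying a prototypical motion.

    prototype : (n_phases, n_joints) array of joint angles for one representative
                motion, resampled on a uniform phase grid from 0 to 1.
    """
    def __init__(self, prototype):
        self.prototype = np.asarray(prototype)
        self.phase_grid = np.linspace(0.0, 1.0, len(self.prototype))

    def joint_angles(self, phase):
        """Interpolate the stored motion at a phase in [0, 1] (e.g. from one EMG input)."""
        phase = np.clip(phase, 0.0, 1.0)
        return np.array([np.interp(phase, self.phase_grid, self.prototype[:, j])
                         for j in range(self.prototype.shape[1])])

# A synthetic 7-DOF reach prototype; a single user input sweeps the arm through it.
prototype = np.linspace([0, 0, 0, 0, 0, 0, 0],
                        [0.5, 0.2, -0.3, 1.2, 0.1, 0.4, -0.2], 50)
ctrl = TrajectoryController(prototype)
print(ctrl.joint_angles(0.5))
```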

    Subject-Independent Frameworks for Robotic Devices: Applying Robot Learning to EMG Signals

    The prospect of humans and robots cooperating has increased interest in controlling robotic devices by means of physiological human signals. To achieve this goal, it is crucial to capture the human intention of movement and to translate it into a coherent robot action. Until now, the classical approach when considering physiological signals, and in particular EMG signals, has been to focus on the specific subject performing the task, owing to the great complexity of these signals. This thesis aims to expand the state of the art by proposing a general subject-independent framework, able to extract the common constraints of human movement by looking at several demonstrations from many different subjects. The variability introduced into the system by multiple demonstrations from many different subjects allows the construction of a robust model of human movement, able to cope with small variations and signal deterioration. Furthermore, the obtained framework could be used by any subject with no need for long training sessions. The signals undergo an accurate preprocessing phase to remove noise and artefacts, after which significant information can be extracted for use in online processes. The human movement can be estimated using well-established statistical methods from Robot Programming by Demonstration: in particular, the input can be modelled using a Gaussian Mixture Model (GMM). The performed movement can be continuously estimated with a Gaussian Mixture Regression (GMR) technique, or it can be identified among a set of possible movements with a Gaussian Mixture Classification (GMC) approach. We improved the results by incorporating prior information into the model, in order to enrich the knowledge of the system; in particular, we considered the hierarchical information provided by a quantitative taxonomy of hand grasps. To this end, we developed the first quantitative taxonomy of hand grasps considering both muscular and kinematic information from 40 subjects. The results proved the feasibility of a subject-independent framework, even when considering physiological signals, such as EMG, from a large number of participants. The proposed solution has been used in two different kinds of applications: (I) the control of prosthetic devices, and (II) an Industry 4.0 facility, in which humans and robots work alongside each other or cooperate. Indeed, a crucial aspect of making humans and robots work together is their mutual knowledge and anticipation of each other's task, and physiological signals can provide information even before the movement starts. In this thesis we also propose an application of Robot Programming by Demonstration in a real industrial facility, in order to optimize the production of electric motor coils. The task was part of the European Robotic Challenge (EuRoC), and the goal was divided into phases of increasing complexity. This solution exploits machine learning algorithms, such as GMM, and its robustness was ensured by considering demonstrations of the task from many subjects. We have been able to apply an advanced research topic to a real factory, achieving promising results.
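    As an illustration of the Gaussian Mixture Classification (GMC) step mentioned above, the sketch below fits one GMM per movement class on pooled multi-subject EMG features and assigns new samples to the class with the highest likelihood. It is a generic sketch; the number of components and the feature definitions are assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

class GMMClassifier:
    """Gaussian Mixture Classification: one GMM per movement class, argmax likelihood."""
    def __init__(self, n_components=4):
        self.n_components = n_components
        self.models, self.classes = {}, []

    def fit(self, X, y):
        """X: (n_samples, n_features) pooled multi-subject EMG features; y: class labels."""
        X, y = np.asarray(X), np.asarray(y)
        self.classes = sorted(set(y))
        for c in self.classes:
            self.models[c] = GaussianMixture(self.n_components, covariance_type="full",
                                             random_state=0).fit(X[y == c])
        return self

    def predict(self, X):
        """Pick, for each sample, the movement class whose GMM gives the highest log-likelihood."""
        scores = np.column_stack([self.models[c].score_samples(X) for c in self.classes])
        return [self.classes[i] for i in scores.argmax(axis=1)]
```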

    Development of a Wearable Sensor-Based Framework for the Classification and Quantification of High Knee Flexion Exposures in Childcare

    Repetitive cyclic and prolonged joint loading in high knee flexion postures has been associated with the progression of degenerative knee joint diseases and knee osteoarthritis (OA). Despite this association, high flexion postures, in which the knee angle exceeds 120°, are commonly performed in occupational settings. While work-related musculoskeletal disorders have been studied across many occupations, the risk of OA development associated with the adoption of high knee flexion postures by childcare workers has until recently been unexplored; consequently, occupational childcare has not appeared in any systematic reviews seeking to establish a causal relationship between occupational exposures and the risk of knee OA development. The overarching goal of this thesis was therefore to explore the adoption of high flexion postures in childcare settings and to develop a means by which these could be measured using non-laboratory-based technologies. The global objectives of this thesis were to (i) identify the postural demands of occupational childcare as they relate to high flexion exposures at the knee, (ii) apply, extend, and validate sensor-to-segment alignment algorithms through which lower limb flexion-extension kinematics could be measured in multiple high knee flexion postures using inertial measurement units (IMUs), and (iii) develop a machine learning based classification model capable of identifying each childcare-inspired high knee flexion posture. In line with these global objectives, four independent studies were conducted.

    Study I – Characterization of Postures of High Knee Flexion and Lifting Tasks Associated with Occupational Childcare

    Background: High knee flexion postures, despite their association with increased incidences of osteoarthritis, are frequently adopted in occupational childcare. High flexion exposure thresholds (based on exposure frequency or cumulative daily exposure) that relate to increased incidences of OA have previously been proposed, yet our understanding of how the specific postural requirements of childcare compare to these thresholds remains limited. Objectives: This study sought to define and quantify the high flexion postures typically adopted in childcare in order to evaluate any increased likelihood of knee osteoarthritis development. Methods: Video data of eighteen childcare workers caring for infant, toddler, and preschool-aged children over a period of approximately 3.25 hours were obtained for this investigation from a larger cohort study conducted across five daycares in Kingston, Ontario, Canada. Each video was segmented to identify the start and end of potential high knee flexion exposures, and each identified posture was quantified by duration and frequency. An analysis of postural adoption by occupational task was subsequently performed to determine which tasks might pose the greatest risk for cumulative joint trauma. Results: A total of ten postures involving varying degrees of knee flexion were identified, of which eight involved high knee flexion. Childcare workers caring for children of all ages were found to adopt high knee flexion postures for durations of 1.45±0.15 hours and at frequencies of 128.67±21.45 over the 3.25-hour observation period, exceeding proposed thresholds for incidences of knee osteoarthritis development. Structured activities, playing, and feeding tasks were found to demand the greatest adoption of high flexion postures.
Conclusions: Based on the findings of this study, it is likely that childcare workers caring for children of all ages exceed the cumulative exposure- and frequency-based thresholds associated with increased incidences of knee OA development within a typical working day.

    Study II – Evaluating the Robustness of Automatic IMU Calibration for Lower Extremity Motion Analysis in High Knee Flexion Postures

    Background: While inertial measurement units promise an out-of-the-box, minimally intrusive means of objectively measuring body segment kinematics in any setting, in practice their implementation requires complex calculations to align each sensor with the coordinate system of the segment to which it is attached. Objectives: This study sought to apply and extend previously proposed alignment algorithms to align inertial sensors with the segments to which they are attached, in order to calculate flexion-extension angles for the ankle, knee, and hip during multiple childcare-inspired postures. Methods: The Seel joint axis algorithm and the Constrained Seel Knee Axis (CSKA) algorithm were implemented for the sensor-to-segment calibration of acceleration and angular velocity data from IMUs mounted on the lower limbs and pelvis, based on a series of calibration movements about each joint. Further, the Iterative Seel spherical axis (ISSA) extension of this implementation was proposed for the calibration of sensors about the ankle and hip. The performance of these algorithms was validated across fifty participants during ten childcare-inspired movements by comparing IMU-based and gold-standard optical-based flexion-extension angle estimates. Results: Strong correlations between the IMU- and optical-based angle estimates were reported for all joints during each high flexion motion, with the exception of a moderate correlation for the ankle angle estimate during child chair sitting. Mean RMSE between protocols was found to be 6.61° ± 2.96° for the ankle, 7.55° ± 5.82° for the knee, and 14.64° ± 6.73° for the hip. Conclusions: The estimation of joint kinematics through the IMU-based CSKA and ISSA algorithms presents an effective solution for the sensor-to-segment calibration of inertial sensors, allowing the calculation of lower limb flexion-extension kinematics in multiple childcare-inspired high knee flexion postures.

    Study III – A Multi-Dimensional Dynamic Time Warping Distance-Based Framework for the Recognition of High Knee Flexion Postures in Inertial Sensor Data

    Background: The interpretation of inertial measures as they relate to occupational exposures is non-trivial. In order to relate the continuously collected data to the activities or postures performed by the sensor wearer, pattern recognition and machine learning based algorithms can be applied. One difficulty in applying these techniques to real-world data lies in the temporal and scale variability of human movements, which must be overcome when seeking to classify data in the time domain. Objectives: The objective of this study was to develop a sensor-based framework for the detection and measurement of isolated childcare-specific postures (identified in Study I). As a secondary objective, the classification accuracies of movements performed under loaded and unloaded conditions were compared in order to assess the sensitivity of the developed model to potential postural variabilities accompanying the presence of a load.
Methods: IMU-based joint angle estimates for the ankle, knee, and hip were time- and scale-normalized prior to being input to a multi-dimensional Dynamic Time Warping (DTW) distance-based nearest neighbour algorithm for the identification of twelve childcare-inspired postures (a minimal sketch of this DTW nearest-neighbour idea is given after this abstract). Fifty participants performed each posture, when possible, under unloaded and loaded conditions. Angle estimates from thirty-five participants were divided into development and testing data, such that 80% of the trials were segmented into movement templates and the remaining 20% were left as continuous movement sequences. These data were then included in the model building and testing phases, while the accuracy of the model was validated on novel data from fifteen participants. Results: Overall accuracies of 82.3% and 55.6% were reached when classifying postures in the testing and validation data, respectively. When adjusting for the imbalances between classification groups, the mean balanced accuracies increased to 86% and 74.6% for the testing and validation data, respectively. Sensitivity and specificity values revealed that the highest rates of misclassification occurred between flatfoot squatting, heels-up squatting, and stooping. It was also found that the model was not capable of identifying sequences of walking data based on a single step motion template. No significant differences were found between the classification of loaded and unloaded motion trials. Conclusions: A combination of DTW distances calculated between motion templates and continuous movement sequences of lower limb flexion-extension angles was found to be effective in identifying isolated postures frequently performed in childcare. The developed model was successful at classifying data from participants both included in and excluded from the algorithm-building dataset, and was insensitive to the postural variability that might be caused by the presence of a load.

    Study IV – Evaluating the Feasibility of Applying the Developed Multi-Dimensional Dynamic Time Warping Distance-Based Framework to the Measurement and Recognition of High Knee Flexion Postures in a Simulated Childcare Environment

    Background: While the simulation of high knee flexion postures in isolation (in Study III) provided a basis for the development of a multi-dimensional Dynamic Time Warping based nearest neighbour algorithm for the identification of childcare-inspired postures, it is unlikely that the postures adopted in childcare settings would be performed in isolation. Objectives: This study sought to explore the feasibility of extending the developed classification algorithm to identify and measure postures frequently adopted when performing childcare-specific tasks within a simulated childcare environment. Methods: Lower limb inertial motion data were recorded from twelve participants as they interacted with their child during a series of tasks inspired by those identified in Study I as frequently occurring in childcare settings. In order to reduce the error associated with gyroscopic drift over time, joint angles for each trial were calculated over 60-second increments and concatenated across the duration of each trial. Angle estimates from ten participants were time-windowed in order to create the inputs for the development and testing of two model designs wherein (A) the model development data included all templates generated in Study III as well as the continuous motion windows collected here, or (B) the model development data included only windows of continuous motion data.
The division of data into the development and testing datasets for each 5-fold cross-validated classification model was performed in one of two ways, wherein the data were divided (a) through stratified randomized partitioning of windows, such that 80% were assigned to model development and the remaining 20% were reserved for testing, or (b) by partitioning all windows from a single trial of a single participant for testing while all remaining windows were assigned to the model development dataset. When the classification of continuously collected windows was tested (using division strategy b), a logic-based correction module was introduced to eliminate erroneous predictions. Each model design (A and B) was developed and tested using both data division strategies (a and b), and their performance was subsequently evaluated on the classification of all data windows from the two subjects reserved for validation. Results: Classification accuracies of 42.2% and 42.5% were achieved when classifying the testing data separated through stratified random partitioning (division strategy a) using models that included (model A, 159 classes) or excluded (model B, 149 classes) the templates generated in Study III, respectively. This classification accuracy was found to decrease to 35.4% when classifying a test partition comprising all windows of a single trial (division strategy b) using model A, whose development dataset included templates from Study III; however, the same trial was classified with an accuracy of 80.8% when using model B, whose development dataset included only windows of continuous motion data. This accuracy was, however, found to be highly dependent on the motions performed in a given trial, and the logic-based corrections were not found to improve classification accuracies. When validating each model by identifying postures performed by novel subjects, classification accuracies of 24.0% and 26.6% were obtained using development data which included (model A) and excluded (model B) templates from Study III, respectively. Across all novel data, the highest classification accuracies were observed when identifying static postures, which is unsurprising given that windows of these postures were the most prevalent in the model development datasets. Conclusions: While classification accuracies above those achievable by chance were achieved, the classification models evaluated in this study could not identify the postures adopted during simulated childcare tasks at a level sufficient to report accurately on the postures assumed in a childcare environment. The success of the classifier was highly dependent on the number of transitions occurring between postures while in high flexion; therefore, more classifier development data are needed to create templates for these novel transition movements. Given the high variability in postural adoption when caring for and interacting with children, additional movement templates based on continuously collected data would be required for the successful identification of postures in occupational settings.

    Global Conclusions

    Within a typical working day, childcare workers exceed previously reported cumulative-exposure and frequency-of-adoption thresholds for high knee flexion postures that have been associated with increased incidences of knee OA development.
Inertial measurement units provide a unique means of objectively measuring postures frequently adopted when caring for children, which may ultimately permit the quantification of high knee flexion exposures in childcare settings and further study of the relationship between these postures and the risk of OA development in occupational childcare. While the results of this thesis demonstrate that IMU-based measures of lower limb kinematics can be used to identify these postures in isolation, further work is required to expand the classification model and enable the identification of such postures from continuously collected data.
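    The multi-dimensional DTW nearest-neighbour idea used in Studies III and IV can be sketched as follows. This is a minimal, illustrative implementation over ankle, knee, and hip flexion-extension angles with synthetic templates; the framework's actual templates, normalization, and distance handling are not reproduced here.

```python
import numpy as np

def dtw_distance(seq_a, seq_b):
    """Multi-dimensional DTW between seq_a (Ta, d) and seq_b (Tb, d) of joint angles."""
    ta, tb = len(seq_a), len(seq_b)
    cost = np.full((ta + 1, tb + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, ta + 1):
        for j in range(1, tb + 1):
            d = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])   # Euclidean over ankle/knee/hip
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[ta, tb]

def classify_posture(query, templates):
    """1-nearest-neighbour over DTW distance. templates: list of (label, (T, d) array)."""
    dists = [(dtw_distance(query, tmpl), label) for label, tmpl in templates]
    return min(dists)[1]

# Example: two synthetic posture templates (ankle, knee, hip angles over time) and a noisy query.
t = np.linspace(0, 1, 100)[:, None]
templates = [("squat", np.hstack([20 * t, 130 * t, 90 * t])),
             ("kneel", np.hstack([5 * t, 110 * t, 40 * t]))]
query = templates[0][1][::2] + np.random.default_rng(1).normal(0, 2, (50, 3))
print(classify_posture(query, templates))
```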

    Multimodal silent speech interfaces for European Portuguese based on articulation

    Joint MAPi Doctoral Programme in Informatics. The concept of silent speech, when applied to Human-Computer Interaction (HCI), describes a system which allows for speech communication in the absence of an acoustic signal. By analyzing data gathered during different parts of the human speech production process, Silent Speech Interfaces (SSI) allow users with speech impairments to communicate with a system. SSI can also be used in the presence of environmental noise, and in situations in which privacy, confidentiality, or non-disturbance are important. Nonetheless, despite recent advances, the performance and usability of silent speech systems still have much room for improvement. Better performance of such systems would enable their application in relevant areas, such as Ambient Assisted Living. It is therefore necessary to extend our understanding of the capabilities and limitations of silent speech modalities and to enhance their joint exploration. Thus, in this thesis, we have established several goals: (1) expand SSI language support to European Portuguese (EP); (2) overcome identified limitations of current SSI techniques in detecting EP nasality; (3) develop a multimodal HCI approach for SSI based on non-invasive modalities; and (4) explore more direct measures in the multimodal SSI for EP, acquired from more invasive/obtrusive modalities, to be used as ground truth for articulation processes, enhancing our comprehension of other modalities. In order to achieve these goals and to support our research in this area, we have created a multimodal SSI framework that fosters leveraging modalities and combining information, supporting research in multimodal SSI. The proposed framework goes beyond the data acquisition process itself, including methods for online and offline synchronization, multimodal data processing, feature extraction, feature selection, analysis, classification and prototyping. Examples of applicability are provided for each stage of the framework. These include articulatory studies for HCI, the development of a multimodal SSI based on less invasive modalities, and the use of ground truth information coming from more invasive/obtrusive modalities to overcome the limitations of other modalities. In the work presented here, we also apply existing SSI methods to EP for the first time, noting that nasal sounds may cause inferior performance in some modalities. In this context, we propose a non-invasive solution for the detection of nasality based on a single surface electromyography sensor, which could conceivably be included in a multimodal SSI.
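    The nasality-detection idea mentioned above, a single surface EMG sensor feeding a classifier, could be framed in a purely illustrative way as windowed amplitude features plus a binary classifier, as sketched below. The features, window length, and classifier choice are assumptions, not the thesis's actual detector.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def window_features(emg, fs=1000, win_ms=50):
    """Split a single-channel sEMG signal into windows of RMS and waveform-length features."""
    win = int(fs * win_ms / 1000)
    frames = [emg[i:i + win] for i in range(0, len(emg) - win + 1, win)]
    return np.array([[np.sqrt(np.mean(f ** 2)),      # RMS amplitude
                      np.sum(np.abs(np.diff(f)))]    # waveform length
                     for f in frames])

def train_nasality_detector(emg_segments, labels):
    """emg_segments: list of 1-D sEMG arrays; labels: 1 for nasal, 0 for oral productions."""
    X = np.vstack([window_features(s).mean(axis=0) for s in emg_segments])
    return LogisticRegression().fit(X, labels)
```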