102 research outputs found
On the Utility of Representation Learning Algorithms for Myoelectric Interfacing
Electrical activity produced by muscles during voluntary movement is a reflection of the firing patterns of relevant motor neurons and, by extension, the latent motor intent driving the movement. Once transduced via electromyography (EMG) and converted into digital form, this activity can be processed to provide an estimate of the original motor intent and is as such a feasible basis for non-invasive efferent neural interfacing. EMG-based motor intent decoding has so far received the most attention in the field of upper-limb prosthetics, where alternative means of interfacing are scarce and the utility of better control apparent. Whereas myoelectric prostheses have been available since the 1960s, available EMG control interfaces still lag behind the mechanical capabilities of the artificial limbs they are intended to steer, a gap at least partially due to limitations in current methods for translating EMG into appropriate motion commands. As the relationship between EMG signals and concurrent effector kinematics is highly non-linear and apparently stochastic, finding ways to accurately extract and combine relevant information from across electrode sites is still an active area of inquiry.
This dissertation comprises an introduction and eight papers that explore issues afflicting the status quo of myoelectric decoding and possible solutions, all related through their use of learning algorithms and deep Artificial Neural Network (ANN) models. Paper I presents a Convolutional Neural Network (CNN) for multi-label movement decoding of high-density surface EMG (HD-sEMG) signals. Inspired by the successful use of CNNs in Paper I and the work of others, Paper II presents a method for automatic design of CNN architectures for use in myocontrol. Paper III introduces an ANN architecture with an accompanying training framework from which simultaneous and proportional control emerges. Paper IV introduces a dataset of HD-sEMG signals for use with learning algorithms.
Paper V applies a Recurrent Neural Network (RNN) model to decode finger forces from intramuscular EMG. Paper VI introduces a Transformer model for myoelectric interfacing that does not need additional training data to function with previously unseen users. Paper VII compares the performance of a Long Short-Term Memory (LSTM) network to that of classical pattern recognition algorithms. Lastly, Paper VIII describes a framework for synthesizing EMG from multi-articulate gestures intended to reduce training burden.
Evaluating EEG–EMG Fusion-Based Classification as a Method for Improving Control of Wearable Robotic Devices for Upper-Limb Rehabilitation
Musculoskeletal disorders are the biggest cause of disability worldwide, and wearable mechatronic rehabilitation devices have been proposed for treatment. However, before widespread adoption, improvements in user control and system adaptability are required. User intention should be detected intuitively, and user-induced changes in system dynamics should be unobtrusively identified and corrected. Developments often focus on model-dependent nonlinear control theory, which is challenging to implement for wearable devices.
One alternative is to incorporate bioelectrical signal-based machine learning into the system, allowing for simpler controller designs to be augmented by supplemental brain (electroencephalography/EEG) and muscle (electromyography/EMG) information. To extract user intention better, sensor fusion techniques have been proposed to combine EEG and EMG; however, further development is required to enhance the capabilities of EEG–EMG fusion beyond basic motion classification. To this end, the goals of this thesis were to investigate expanded methods of EEG–EMG fusion and to develop a novel control system based on the incorporation of EEG–EMG fusion classifiers.
A dataset of EEG and EMG signals was collected during dynamic elbow flexion–extension motions and used to develop EEG–EMG fusion models to classify task weight as well as motion intention. A variety of fusion methods were investigated, such as Weighted Average decision-level fusion (83.01 ± 6.04% accuracy) and Convolutional Neural Network-based input-level fusion (81.57 ± 7.11% accuracy), demonstrating that EEG–EMG fusion can classify more indirect tasks.
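Decision-level fusion of the kind described above can be sketched as a weighted average of the per-class probabilities emitted by separate EEG and EMG classifiers. This is a minimal illustration only; the weight value and class probabilities below are hypothetical, not the parameters used in the thesis.

```python
import numpy as np

def weighted_average_fusion(p_eeg, p_emg, w_eeg=0.5):
    """Decision-level fusion: weighted average of per-class probabilities
    from an EEG classifier and an EMG classifier, renormalised to sum to 1."""
    p_eeg = np.asarray(p_eeg, dtype=float)
    p_emg = np.asarray(p_emg, dtype=float)
    fused = w_eeg * p_eeg + (1.0 - w_eeg) * p_emg
    return fused / fused.sum()

# Hypothetical two-class example: EEG favours class 0, EMG favours class 1
p = weighted_average_fusion([0.7, 0.3], [0.4, 0.6], w_eeg=0.6)
predicted = int(np.argmax(p))  # class index after fusion
```

The weight lets the designer bias the decision toward whichever modality is more reliable for a given task.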
A novel control system, referred to as a Task Weight Selective Controller (TWSC), was implemented using a Gain Scheduling-based approach, dictated by external load estimations from an EEG–EMG fusion classifier. To improve system stability, classifier prediction debouncing was also proposed to reduce misclassifications through filtering. Performance of the TWSC was evaluated using a developed upper-limb brace simulator. Due to simulator limitations, no significant difference in error was observed between the TWSC and PID control. However, the results did demonstrate the feasibility of prediction debouncing, showing that it provided smoother device motion. Continued development of the TWSC and EEG–EMG fusion techniques will ultimately result in wearable devices that are able to adapt to changing loads more effectively, serving to improve the user experience during operation.
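One common way to realise prediction debouncing is a sliding majority vote over the most recent classifier outputs, so that a single misclassification cannot flip the controller's state. The following is a minimal sketch under that assumption; the window length and task-weight labels are hypothetical, not the thesis's actual configuration.

```python
from collections import deque, Counter

class PredictionDebouncer:
    """Smooths a stream of classifier labels with a sliding majority vote,
    suppressing isolated misclassifications before they reach the controller."""

    def __init__(self, window=5):
        self.buffer = deque(maxlen=window)
        self.state = None  # last debounced decision

    def update(self, label):
        self.buffer.append(label)
        majority, count = Counter(self.buffer).most_common(1)[0]
        # Only switch state once a strict majority of the window agrees
        if count > len(self.buffer) // 2:
            self.state = majority
        return self.state

deb = PredictionDebouncer(window=5)
# The isolated "heavy" misclassification is filtered out of the output
stream = ["light", "light", "heavy", "light", "light", "light"]
smoothed = [deb.update(s) for s in stream]
```

The filter trades a short decision delay for smoother gain-schedule switching, which matches the smoother device motion reported above.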
Imagining & Sensing: Understanding and Extending the Vocalist-Voice Relationship Through Biosignal Feedback
The voice is body and instrument. Third-person interpretation of the voice by listeners, vocal teachers, and digital agents is centred largely around audio feedback. For a vocalist, physical feedback from within the body provides an additional interaction. The vocalist’s understanding of their multi-sensory experiences is through tacit knowledge of the body. This knowledge is difficult to articulate, yet awareness and control of the body are innate. Amid the ever-increasing emergence of technology which quantifies or interprets physiological processes, we must also remain conscious of embodiment and human perception of these processes. Focusing on the vocalist-voice relationship, this thesis expands knowledge of human interaction and how technology influences our perception of our bodies. To unite these different perspectives in the vocal context, I draw on mixed methods from cognitive science, psychology, music information retrieval, and interactive system design. Objective methods such as vocal audio analysis provide a third-person observation. Subjective practices such as micro-phenomenology capture the experiential, first-person perspectives of the vocalists themselves. This quantitative-qualitative blend provides details not only on novel interaction, but also an understanding of how technology influences existing understanding of the body. I worked with vocalists to understand how they use their voice through abstract representations, use mental imagery to adapt to altered auditory feedback, and teach fundamental practice to others. Vocalists use multi-modal imagery, for instance understanding physical sensations through auditory sensations. The understanding of the voice exists in a pre-linguistic representation which draws on embodied knowledge and lived experience from outside contexts. I developed a novel vocal interaction method which uses measurement of laryngeal muscular activations through surface electromyography.
Biofeedback was presented to vocalists through sonification. Acting as an indicator of vocal activity for both conscious and unconscious gestures, this feedback allowed vocalists to explore their movement through sound. This formed new perceptions but also questioned existing understanding of the body. The thesis also uncovers ways in which vocalists are in control of and controlled by their bodies, work with and against them, and feel as a single entity at times and as totally separate entities at others. I conclude this thesis by demonstrating a nuanced account of human interaction and perception of the body through vocal practice, as an example of how technological intervention enables exploration of and influence over embodied understanding. This further highlights the need to understand the human experience in embodied interaction, rather than rely solely on digital interpretation, when introducing technology into these relationships.
Proceedings XXIII Congresso SIAMOC 2023
The annual congress of the Società Italiana di Analisi del Movimento in Clinica (SIAMOC), now in its twenty-third edition, returns to Rome.
As every year, the SIAMOC congress is an opportunity for all professionals working in movement analysis to meet, present the results of their research, and stay up to date on the most recent innovations in procedures and technologies for movement analysis in clinical practice.
The SIAMOC 2023 congress in Rome aims to give further impulse to the already excellent Italian research activity in the field of movement analysis and to lend it greater international scope and impact.
Beyond the traditional themes of basic and applied research in clinical and sports settings, SIAMOC 2023 intends to explore further topics of particular scientific interest and societal impact. These include the employment integration of people with disabilities, aided by the exponential spread of collaborative robotic technologies in clinical-occupational settings, and innovative prosthetics to support people with amputations. Finally, the congress will address new artificial intelligence algorithms for optimizing the real-time classification of motor patterns across the various fields of application.
Variational Autoencoder and Sensor Fusion for Robust Myoelectric Controls
Myoelectric control schemes aim to utilize surface electromyography (EMG) signals, the electric potentials measured directly from skeletal muscles, to control wearable robots such as exoskeletons and prostheses. The main challenge of myoelectric control is to increase and preserve signal quality by minimizing the effect of confounding factors such as muscle fatigue or electrode shift. Current myoelectric control schemes are developed to work in ideal laboratory conditions, but there is a persistent need to make these control schemes more robust so that they work in real-world environments. Following the manifold hypothesis, complexity in the world can be broken down from a high-dimensional space to a lower-dimensional form or representation that can explain how the higher-dimensional real world operates. From this premise, biological actions and their relevant multimodal signals can be compressed into a representation that remains pertinent in both laboratory and non-laboratory settings once the learned representation or manifold is discovered. This thesis outlines a method that incorporates a contrastive variational autoencoder with an integrated classifier on multimodal sensor data to create a compressed latent-space representation that can be used in future myoelectric control schemes.
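The variational part of such an autoencoder is typically built on the reparameterization trick and a KL-divergence regularizer that pulls the latent posterior toward a standard normal prior. The sketch below shows only those two ingredients in NumPy; the 8-dimensional latent size is an arbitrary assumption, and the encoder/decoder networks and the contrastive term from the thesis are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var):
    """Reparameterization trick: z = mu + sigma * eps with eps ~ N(0, I),
    keeping the latent sample differentiable w.r.t. the encoder outputs."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_divergence(mu, log_var):
    """KL(q(z|x) || N(0, I)) for a diagonal Gaussian posterior."""
    return -0.5 * np.sum(1.0 + log_var - mu**2 - np.exp(log_var))

mu = np.zeros(8)       # hypothetical 8-dimensional latent mean
log_var = np.zeros(8)  # unit variance (log 1 = 0)
z = reparameterize(mu, log_var)
kl = kl_divergence(mu, log_var)  # zero when the posterior equals the prior
```

In a full VAE, the KL term is added to the reconstruction (and here, contrastive and classification) losses during training.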
Accessibility of Health Data Representations for Older Adults: Challenges and Opportunities for Design
Health data from consumer off-the-shelf wearable devices are often conveyed to users through visual data representations and analyses. However, these are not always accessible to people with disabilities or older people due to low vision, cognitive impairments, or literacy issues. Due to trade-offs between aesthetic predominance and information overload, real-time user feedback may not be conveyed easily from sensor devices through visual cues like graphs and texts. These difficulties may hinder critical data understanding. Additional auditory and tactile feedback can also provide immediate and accessible cues from these wearable devices, but it is necessary to first understand the limitations of existing data representations. To avoid higher cognitive and visual overload, auditory and haptic cues can be designed to complement, replace, or reinforce visual cues. In this paper, we outline the challenges in existing data representation and the evidence necessary to enhance the accessibility of health information from personal sensing devices used to monitor health parameters such as blood pressure, sleep, activity, heart rate, and more. By creating innovative and inclusive user feedback, users will be more likely to engage and interact with new devices and their own data.
Instance-based Learning with Prototype Reduction for Real-Time Proportional Myocontrol: A Randomized User Study Demonstrating Accuracy-preserving Data Reduction for Prosthetic Embedded Systems
This work presents the design, implementation and validation of learning
techniques based on the kNN scheme for gesture detection in prosthetic control.
To cope with high computational demands in instance-based prediction, methods
of dataset reduction are evaluated considering real-time determinism to allow
for the reliable integration into battery-powered portable devices. The
influence of parameterization and varying proportionality schemes is analyzed,
utilizing an eight-channel-sEMG armband. Besides offline cross-validation
accuracy, success rates in real-time pilot experiments (online target
achievement tests) are determined. Based on the assessment of specific dataset
reduction techniques' adequacy for embedded control applications regarding
accuracy and timing behaviour, Decision Surface Mapping (DSM) proves itself
promising when applying kNN on the reduced set. A randomized, double-blind user
study was conducted to evaluate the respective methods (kNN and kNN with
DSM-reduction) against Ridge Regression (RR) and RR with Random Fourier
Features (RR-RFF). The kNN-based methods performed significantly better
(p<0.0005) than the regression techniques. Between DSM-kNN and kNN, there was
no statistically significant difference (significance level 0.05). This is
remarkable in consideration of only one sample per class in the reduced set,
thus yielding a reduction rate of over 99% while preserving success rate. The
same behaviour could be confirmed in an extended user study. With k=1, which
turned out to be an excellent choice, the runtime complexity of both kNN (in
every prediction step) and DSM-kNN (in the training phase) becomes linear in
the number of original samples, favouring dependable wearable prosthesis
applications.
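After DSM-style reduction to one prototype per class, prediction reduces to a single nearest-neighbour pass whose runtime is linear in the (tiny) prototype set, which is what makes the approach attractive for embedded control. The sketch below shows only that 1-NN prediction step under illustrative assumptions; the feature vectors, gesture labels, and the DSM reduction itself are hypothetical placeholders.

```python
import numpy as np

def predict_1nn(prototypes, labels, x):
    """1-NN over a reduced prototype set: compute the Euclidean distance to
    each prototype and return the label of the closest one. Runtime is
    linear in the number of prototypes (one per class after reduction)."""
    d = np.linalg.norm(prototypes - x, axis=1)
    return labels[int(np.argmin(d))]

# Hypothetical reduced set: one 8-channel sEMG feature vector per gesture class
prototypes = np.array([[0.1] * 8,   # "rest"
                       [0.9] * 8,   # "fist"
                       [0.5] * 8])  # "open"
labels = ["rest", "fist", "open"]

gesture = predict_1nn(prototypes, labels, np.full(8, 0.85))
```

With a fixed, small prototype count, every prediction takes the same bounded time, which supports the real-time determinism the study requires of battery-powered devices.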
Novel Bidirectional Body-Machine Interface to Control Upper Limb Prosthesis
Objective. The journey of a bionic prosthetic user is characterized by the opportunities and limitations involved in adopting a device (the prosthesis) that should enable activities of daily living (ADL). Within this context, experiencing a bionic hand as a functional (and, possibly, embodied) limb constitutes the premise for mitigating the risk of its abandonment through the continuous use of the device. To achieve such a result, different aspects must be considered for making the artificial limb an effective support for carrying out ADLs. Among them, intuitive and robust control is fundamental to improving amputees’ quality of life using upper limb prostheses. Still, as artificial proprioception is essential to perceive the prosthesis movement without constant visual attention, a good control framework may not be enough to restore practical functionality to the limb. To overcome this, bidirectional communication between the user and the prosthesis has been recently introduced and is a requirement of utmost importance in developing prosthetic hands. Indeed, closing the control loop between the user and a prosthesis by providing artificial sensory feedback is a fundamental step towards the complete restoration of the lost sensory-motor functions. Within my PhD work, I proposed the development of a more controllable and sensitive human-like hand prosthesis, i.e., the Hannes prosthetic hand, to improve its usability and effectiveness.
Approach. To achieve the objectives of this thesis work, I developed a modular and scalable software and firmware architecture to control the Hannes prosthetic multi-Degree of Freedom (DoF) system and to fit all users’ needs (hand aperture, wrist rotation, and wrist flexion in different combinations). On top of this, I developed several Pattern Recognition (PR) algorithms to translate electromyographic (EMG) activity into complex movements. However, stability and repeatability were still unmet requirements in multi-DoF upper limb systems; hence, I started by investigating different strategies to produce a more robust control. To do this, EMG signals were collected from trans-radial amputees using an array of up to six sensors placed over the skin. Secondly, I developed a vibrotactile system to implement haptic feedback, restoring proprioception and creating a bidirectional connection between the user and the prosthesis. Similarly, I implemented object stiffness detection to restore a tactile sensation connecting the user with the external world. This closed-loop control between EMG and vibration feedback is essential to implementing a Bidirectional Body-Machine Interface that strongly impacts amputees’ daily life. For each of these three activities: (i) implementation of robust pattern recognition control algorithms, (ii) restoration of proprioception, and (iii) restoration of the feeling of the grasped object's stiffness, I performed a study in which data from healthy subjects and amputees were collected in order to demonstrate the efficacy and usability of my implementations. In each study, I evaluated both the algorithms and the subjects’ ability to use the prosthesis by means of the F1-score (offline) and the Target Achievement Control (TAC) test (online). With this test, I analyzed the error rate, path efficiency, and time efficiency in completing different tasks.
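The offline metric used throughout these studies, the F1-score, is the harmonic mean of precision and recall. A minimal sketch of the computation, with hypothetical confusion counts for a single gesture class:

```python
def f1_score(tp, fp, fn):
    """F1 = harmonic mean of precision and recall, computed from
    true-positive, false-positive and false-negative counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical counts: 95 correct detections, 5 false alarms, 5 misses
score = f1_score(tp=95, fp=5, fn=5)  # 0.95
```

Unlike raw accuracy, the F1-score penalises both false activations and missed gestures, which is why it is a common robustness metric for prosthetic pattern recognition.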
Main results. Among the several tested methods for Pattern Recognition, Non-Linear Logistic Regression (NLR) proved to be the best algorithm in terms of F1-score (99%, robustness), whereas the minimum number of electrodes needed for its functioning was determined to be 4 in the conducted offline analyses. Further, I demonstrated that its low computational burden allowed its implementation and integration on a microcontroller running at a sampling frequency of 300 Hz (efficiency). Finally, the online implementation allowed the subject to simultaneously control the Hannes prosthesis DoFs in a bioinspired and human-like way. In addition, I performed further tests with the same NLR-based control by endowing it with closed-loop proprioceptive feedback. In this scenario, the TAC test yielded an error rate of 15% and a path efficiency of 60% in experiments where no other sources of information were available (no visual and no audio feedback). Such results demonstrated an improvement in the controllability of the system with an impact on user experience.
Significance. The obtained results confirmed the hypothesis that the robustness and efficiency of prosthetic control improve thanks to the implemented closed-loop approach. The bidirectional communication between the user and the prosthesis is capable of restoring lost sensory functionality, with promising implications for direct translation into clinical practice.
Retainer-Free Optopalatographic Device Design and Evaluation as a Feedback Tool in Post-Stroke Speech and Swallowing Therapy
Stroke is one of the leading causes of long-term motor disability, including oro-facial impairments which affect speech and swallowing. Over the last decades, rehabilitation programs have evolved from utilizing mainly compensatory measures to focusing on recovering lost function. In the continuing effort to improve recovery, the concept of biofeedback has increasingly been leveraged to enhance self-efficacy, motivation and engagement during training. Although both speech and swallowing disturbances resulting from oro-facial impairments are frequent sequelae of stroke, efforts to develop sensing technologies that provide comprehensive and quantitative feedback on articulator kinematics and kinetics, especially those of the tongue, and specifically during post-stroke speech and swallowing therapy have been sparse. To that end, such a sensing device needs to accurately capture intraoral tongue motion and contact with the hard palate, which can then be translated into an appropriate form of feedback, without affecting tongue motion itself and while still being light-weight and portable. This dissertation proposes the use of an intraoral sensing principle known as optopalatography to provide such feedback while also exploring the design of optopalatographic devices itself for use in dysphagia and dysarthria therapy. Additionally, it presents an alternative means of holding the device in place inside the oral cavity with a newly developed palatal adhesive instead of relying on dental retainers, which previously limited device usage to a single person. The evaluation was performed on the task of automatically classifying different functional tongue exercises from one another with application in dysphagia therapy, whereas a phoneme recognition task was conducted with application in dysarthria therapy. 
Results on the palatal adhesive suggest that it is indeed a valid alternative to dental retainers when device residence time inside the oral cavity is limited to several tens of minutes per session, which is the case for dysphagia and dysarthria therapy. Functional tongue exercises were classified with approximately 61% accuracy across subjects, whereas for the phoneme recognition task, tense vowels had the highest recognition rate, followed by lax vowels and consonants. In summary, retainer-free optopalatography has the potential to become a viable method for providing real-time feedback on tongue movements inside the oral cavity, but still requires further improvements as outlined in the remarks on future development.

1 Introduction
1.1 Motivation
1.2 Problem statement
1.3 Goals and contributions
1.4 Scope and limitations
2 Basics of post-stroke speech and swallowing therapy
2.1 Dysarthria
2.2 Dysphagia
2.3 Treatment rationale and potential of biofeedback
2.4 Summary and conclusion
3 Tongue motion sensing
3.1 Contact-based methods
3.1.1 Electropalatography
3.1.2 Manometry
3.1.3 Capacitive
3.2 Non-contact based methods
3.2.1 Electromagnetic articulography
3.2.2 Permanent magnetic articulography
3.2.3 Optopalatography (related work)
3.3 Electro-optical stomatography
3.4 Extraoral sensing techniques
3.5 Summary, comparison and conclusion
4 Fundamentals of optopalatography
4.1 Important radiometric quantities
4.1.1 Solid angle
4.1.2 Radiant flux and radiant intensity
4.1.3 Irradiance
4.1.4 Radiance
4.2 Sensing principle
4.2.1 Analytical models
4.2.2 Monte Carlo ray tracing methods
4.2.3 Data-driven models
4.2.4 Model comparison
4.3 A priori device design consideration
4.3.1 Optoelectronic components
4.3.2 Additional electrical components and requirements
4.3.3 Intraoral sensor layout
5 Intraoral device anchorage
5.1 Introduction
5.1.1 Mucoadhesion
5.1.2 Considerations for the palatal adhesive
5.2 Methods
5.2.1 Polymer selection
5.2.2 Fabrication method
5.2.3 Formulations
5.2.4 PEO tablets
5.2.5 Connection to the intraoral sensor’s encapsulation
5.2.6 Formulation evaluation
5.3 Results
5.3.1 Initial formulation evaluation
5.3.2 Final OPG adhesive formulation
5.4 Discussion
6 Initial device design with application in dysphagia therapy
6.1 Introduction
6.2 Optode and optical sensor selection
6.2.1 Optode and optical sensor evaluation procedure
6.2.2 Selected optical sensor characterization
6.2.3 Mapping from counts to millimeter
6.2.4 Results and discussion
6.3 Device design and hardware implementation
6.3.1 Block diagram
6.3.2 Optode placement and circuit board dimensions
6.3.3 Firmware description and measurement cycle
6.3.4 Encapsulation
6.3.5 Fully assembled OPG device
6.4 Evaluation on the gesture recognition task
6.4.1 Exercise selection, setup and recording
6.4.2 Data corpus
6.4.3 Sequence pre-processing
6.4.4 Choice of classifier
6.4.5 Training and evaluation
6.4.6 Results
6.5 Discussion
7 Improved device design with application in dysarthria therapy
7.1 Device design
7.1.1 Design considerations
7.1.2 General system overview
7.1.3 Intraoral sensor
7.1.4 Receiver and controller
7.1.5 Multiplexer
7.2 Hardware implementation
7.2.1 Optode placement and circuit board layout
7.2.2 Encapsulation
7.3 Device characterization
7.3.1 Photodiode transient response
7.3.2 Current source and rise time
7.3.3 Multiplexer switching speed
7.3.4 Measurement cycle and firmware implementation
7.3.5 In vitro measurement accuracy
7.3.6 Optode measurement stability
7.4 Evaluation on the phoneme recognition task
7.4.1 Corpus selection and recording setup
7.4.2 Annotation and sensor data post-processing
7.4.3 Mapping from counts to millimeter
7.4.4 Classifier and feature selection
7.4.5 Evaluation paradigms
7.5 Results
7.5.1 Tongue distance curve prediction
7.5.2 Tongue contact patterns and contours
7.5.3 Phoneme recognition
7.6 Discussion
8 Conclusion and future work
9 Appendix
9.1 Analytical light transport models
9.2 Meshed Monte Carlo method
9.3 Laser safety
9.4 Current source modulation voltage
9.5 Transimpedance amplifier’s frequency responses
9.6 Initial OPG device’s PCB layout and circuit diagrams
9.7 Improved OPG device’s PCB layout and circuit diagrams
9.8 Test station layout drawing
Bibliography
Stroke is one of the most common causes of long-term motor disability, including impairments of the mouth and face whose consequences include speech and swallowing problems, manifesting in the two symptoms dysarthria and dysphagia.
In recent decades, rehabilitation programs for the treatment of motor stroke symptoms have advanced substantially. The focus is no longer on merely compensating for lost motor function, but on actively restoring it. Among other approaches, so-called biofeedback has increasingly entered therapy to promote patients' motivation, engagement, and self-awareness of otherwise unconscious movement patterns. Yet although speech and swallowing disorders are among the most common consequences of a stroke, this fact is not reflected in the current development of new devices and measurement methods for quantitative and comprehensive biofeedback, in particular not for the explicit capture of intraoral tongue kinematics and kinetics in the context of stroke therapy. One possible reason lies in the very strict requirements placed on such a measurement method: besides being portable, it should ideally capture both the contact between the tongue and the palate and the three-dimensional movement of the tongue in the oral cavity, without itself influencing articulation. To meet these requirements, this dissertation investigates the measurement principle of optopalatography, with an emphasis on its application in dysarthria and dysphagia therapy. This includes the development of a corresponding device as well as its method of attachment in the oral cavity via a dedicated oral mucosal adhesive.
The latter circumvents the previous problem of having to adapt such an intraoral device to each individual user. For the application in dysphagia therapy, the evaluation was based on automatic recognition of tongue mobilization exercises that are routinely performed in functional dysphagia therapy. For the application in dysarthria therapy, a phone recognition task was carried out. The results regarding the oral mucosal adhesive suggest that it is indeed a valid alternative to the techniques previously used to attach intraoral devices in the oral cavity. Tongue mobilization exercises were recognized across subjects at a rate of 61 %, whereas in the phone recognition task long vowels achieved the highest recognition rate, followed by short vowels and consonants. In summary, the principle of optopalatography is a serious option for the intraoral capture of tongue movements, although further development steps are necessary, which are summarized in the outlook.
1 Introduction
1.1 Motivation
1.2 Problem statement
1.3 Goals and contributions
1.4 Scope and limitations
2 Basics of post-stroke speech and swallowing therapy
2.1 Dysarthria
2.2 Dysphagia
2.3 Treatment rationale and potential of biofeedback
2.4 Summary and conclusion
3 Tongue motion sensing
3.1 Contact-based methods
3.1.1 Electropalatography
3.1.2 Manometry
3.1.3 Capacitive
3.2 Non-contact based methods
3.2.1 Electromagnetic articulography
3.2.2 Permanent magnetic articulography
3.2.3 Optopalatography (related work)
3.3 Electro-optical stomatography
3.4 Extraoral sensing techniques
3.5 Summary, comparison and conclusion
4 Fundamentals of optopalatography
4.1 Important radiometric quantities
4.1.1 Solid angle
4.1.2 Radiant flux and radiant intensity
4.1.3 Irradiance
4.1.4 Radiance
4.2 Sensing principle
4.2.1 Analytical models
4.2.2 Monte Carlo ray tracing methods
4.2.3 Data-driven models
4.2.4 Model comparison
4.3 A priori device design considerations
4.3.1 Optoelectronic components
4.3.2 Additional electrical components and requirements
4.3.3 Intraoral sensor layout
5 Intraoral device anchorage
5.1 Introduction
5.1.1 Mucoadhesion
5.1.2 Considerations for the palatal adhesive
5.2 Methods
5.2.1 Polymer selection
5.2.2 Fabrication method
5.2.3 Formulations
5.2.4 PEO tablets
5.2.5 Connection to the intraoral sensor’s encapsulation
5.2.6 Formulation evaluation
5.3 Results
5.3.1 Initial formulation evaluation
5.3.2 Final OPG adhesive formulation
5.4 Discussion
6 Initial device design with application in dysphagia therapy
6.1 Introduction
6.2 Optode and optical sensor selection
6.2.1 Optode and optical sensor evaluation procedure
6.2.2 Selected optical sensor characterization
6.2.3 Mapping from counts to millimeters
6.2.4 Results and discussion
6.3 Device design and hardware implementation
6.3.1 Block diagram
6.3.2 Optode placement and circuit board dimensions
6.3.3 Firmware description and measurement cycle
6.3.4 Encapsulation
6.3.5 Fully assembled OPG device
6.4 Evaluation on the gesture recognition task
6.4.1 Exercise selection, setup and recording
6.4.2 Data corpus
6.4.3 Sequence pre-processing
6.4.4 Choice of classifier
6.4.5 Training and evaluation
6.4.6 Results
6.5 Discussion
7 Improved device design with application in dysarthria therapy
7.1 Device design
7.1.1 Design considerations
7.1.2 General system overview
7.1.3 Intraoral sensor
7.1.4 Receiver and controller
7.1.5 Multiplexer
7.2 Hardware implementation
Ferroelectrets: from material science to energy harvesting and sensor applications
The purpose of this thesis is to develop innovative ferroelectrets for use in energy harvesting devices as well as mechanical sensors. In the first stage, the focus lies on the application of ferroelectrets as energy harvesters. Since the environment in which the harvesters will operate cannot be controlled, materials are required that withstand harsh conditions such as high temperature or humidity. Therefore, new ferroelectrets are developed based on polymers with excellent electret properties, such as fluoroethylene propylene (FEP). Two types of ferroelectrets are considered: one optimized for the longitudinal piezoelectric effect and one for the transverse piezoelectric effect. New void structures are obtained either by thermally fusing such films so that parallel tunnels form between them (parallel-tunnel ferroelectrets) or by fusing round-section FEP tubes together into a band or membrane. The FEP tube configuration is optimized with a finite element model, which shows that a single-tube structure (25 mm × 1.5 mm) yields the largest output power. By building the energy harvester and modeling it analytically, it is demonstrated that the generated power depends strongly on parameters such as wall thickness, load resistance, and seismic mass. With a seismic mass of 80 g, resonance frequencies around 80 Hz, and an input acceleration of 1 g (9.81 m s−2), output powers up to 300 μW are reached for a transducer with 25 μm thick walls. The parallel-tunnel ferroelectrets (40 mm × 10 mm) are characterized and used in an energy harvesting device based on the transverse piezoelectric effect. The device is an air-spaced cantilever arrangement produced by additive manufacturing (3D printing) and is tested by exposing it to sinusoidal vibrations with acceleration a, generated by a shaker.
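The reported dependence of harvested power on seismic mass, damping, and excitation can be illustrated with the standard linear resonant-harvester model of Williams and Yates. This is a generic sketch, not the thesis's own analytical model, and the damping ratios used below are assumed values chosen for illustration.

```python
import math

def harvester_power_at_resonance(m, a, f_res, zeta_m, zeta_e):
    """Average power extracted by a linear seismic-mass harvester driven
    at its resonance frequency (Williams-Yates model).
    m: seismic mass [kg]; a: acceleration amplitude [m/s^2];
    f_res: resonance frequency [Hz]; zeta_m / zeta_e: mechanical and
    electrically induced damping ratios (dimensionless)."""
    omega = 2 * math.pi * f_res
    return m * zeta_e * a**2 / (4 * omega * (zeta_m + zeta_e)**2)

# Numbers loosely matching the abstract: 80 g mass, ~80 Hz, 1 g excitation.
# The damping ratios model a weakly coupled transducer and are assumed,
# not taken from the thesis.
p = harvester_power_at_resonance(m=0.080, a=9.81, f_res=80.0,
                                 zeta_m=0.3, zeta_e=0.007)
print(f"{p * 1e6:.0f} uW")  # ≈ 284 uW, the order of magnitude reported
```

The model makes the trade-off explicit: power scales linearly with seismic mass and with the square of acceleration, but falls off with frequency and with total damping.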
By placing the ferroelectret at a defined distance from the neutral axis of the cantilever beam and applying an appropriate pre-stress to the ferroelectret, an output power exceeding 1000 μW is reached at the resonance frequency of approximately 35 Hz. This is a significant improvement in air-spaced vibrational energy harvesting with ferroelectrets and greatly exceeds the previous best reported value of 230 μW for ferroelectret energy harvesters. In the second stage of the dissertation, the focus shifts to developing ferroelectrets for selected applications: force myography, ultrasonic transducers, and smart insoles. New arrangements and manufacturing methods are investigated to build the ferroelectret sensors. Furthermore, following the recent demand for more eco-friendly sensors, ferroelectrets based on polylactic acid (PLA) are investigated. PLA is a biodegradable and bioabsorbable material derived from renewable plant sources such as corn or potato starch, tapioca roots, and sugar cane. This work presents a promising new technique for the fabrication of ferroelectrets. The novel structure is achieved by sandwiching a 3D-printed grid of periodically spaced thermoplastic polyurethane (TPU) spacers and air channels between two 12.5 μm thick FEP films. Owing to the ultra-soft TPU sections, very high quasistatic (22,000 pC N−1) and dynamic (7500 pC N−1) d33-coefficients are achieved. The isothermal stability of the d33-coefficients shows a strong dependence on poling temperature. Furthermore, thermally stimulated discharge currents reveal the well-known instability of positive charge carriers in FEP, offering the possibility of stabilization by high-temperature poling. A similar approach is taken by replacing the environmentally harmful FEP with PLA.
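The benefit of mounting the film away from the neutral axis follows from Euler-Bernoulli beam bending, where axial strain grows linearly with the distance from that axis. A minimal sketch with assumed, illustrative numbers (not taken from the thesis):

```python
def bending_strain(z, curvature):
    """Axial strain at distance z [m] from the neutral axis of a beam
    bent with the given curvature [1/m] (Euler-Bernoulli assumption:
    strain = z * kappa, with kappa = 1/R for bend radius R)."""
    return z * curvature

# Assumed illustrative numbers: film mounted 2 mm from the neutral axis,
# beam bent to a radius of 0.5 m.
eps = bending_strain(z=2e-3, curvature=1 / 0.5)
print(f"strain: {eps * 100:.1f} %")  # 0.4 %
```

Doubling the stand-off distance doubles the strain seen by the film, which is why the air-spaced arrangement outperforms a film bonded directly onto the beam surface.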
Large piezoelectric d33-coefficients of up to 2850 pC N−1 are recorded directly after charging; these stabilize at about 1500 pC N−1 after approximately 50 days under ambient conditions. When used for force myography to detect the slight muscle movements involved in moving a finger, these ferroelectrets produce signal shapes and magnitudes that can be clearly distinguished from one another by a simple machine learning algorithm, the Support Vector Machine (SVM), with a classification accuracy of 89.5 %. Following the new 3D-printing manufacturing route, an insole is printed from pure polypropylene filament; it consists of eight independent sensors, each with a piezoelectric d33-coefficient of approximately 2000 pC N−1. The active part of the insole is protected by a 3D-printed PLA cover featuring eight defined embossments on its bottom side, which focus the force on the sensors and act as overload protection against excessive stress. In addition to capturing the gait pattern, an accelerometer is integrated to measure kinematic parameters and validate the sensor output signals. The combination of the sensors' high sensitivity with the kinematic movement of the foot opens new diagnostic perspectives through gait analysis. By 3D-printing a PLA backplate and combining it with a bulk PLA film, a new way to build ultrasonic transducers is presented. The ultrasonic transducer consists of three main components, all made from PLA: the film serving as the vibrating plate, the printed backplate with well-defined grooves, and the printed holder. The PLA film and the printed backplate together form a ferroelectret with artificial air voids. The printed holder clamps the film onto the backplate and holds the assembly together. The resulting sound pressure is measured with a calibrated microphone (Type 4138, Bruel & Kjaer) at a distance of 30 cm.
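The SVM classification step can be sketched with scikit-learn on synthetic stand-in data; the pulse shapes, noise level, and class count below are invented for illustration and do not reproduce the thesis's force-myography recordings.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def make_pulse(center, n=64):
    """Synthetic sensor trace: a Gaussian pressure pulse in a unit window."""
    t = np.linspace(0.0, 1.0, n)
    return np.exp(-((t - center) ** 2) / 0.01)

# Three hypothetical finger movements, each producing a pulse at a
# characteristic position, plus additive sensor noise.
X, y = [], []
for label, center in enumerate([0.3, 0.5, 0.7]):
    for _ in range(60):
        X.append(make_pulse(center) + 0.2 * rng.standard_normal(64))
        y.append(label)
X, y = np.asarray(X), np.asarray(y)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print(f"test accuracy: {clf.score(X_te, y_te):.2f}")
```

As in the thesis, the classifier only needs the raw signal window as a feature vector; the kernel SVM separates the classes because each movement leaves a distinct shape and magnitude in the trace.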
The biodegradable ultrasonic transducer exhibits a large bandwidth of approximately 45 kHz and a fractional bandwidth of 70 %. The sound pressure at the resonance frequency increases from 98 dB to 106 dB for driving voltages from 30 to 70 V, respectively. The theoretical and experimental results obtained here are an excellent basis for further optimizing ferroelectrets for adoption in energy harvesting and mechanical sensing, where flexibility and high sensitivity are mandatory.
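As a consistency check, the fractional bandwidth relates the absolute bandwidth to the transducer's center frequency (FBW = BW / f_c), so the two reported figures imply a center frequency near 64 kHz:

```python
# Fractional bandwidth: FBW = BW / f_center, hence f_center = BW / FBW.
bw_hz = 45e3   # reported absolute bandwidth
fbw = 0.70     # reported fractional bandwidth
f_center = bw_hz / fbw
print(f"implied center frequency: {f_center / 1e3:.0f} kHz")  # 64 kHz
```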