
    ANALYSIS OF VOCAL FOLD KINEMATICS USING HIGH SPEED VIDEO

    Vocal folds are the twin in-foldings of mucous membrane stretched horizontally across the larynx. They vibrate, modulating the constant airflow initiated from the lungs, and the resulting pulsating pressure wave through the glottis is the source of voiced speech production. Study of vocal fold dynamics during voicing is critical for the treatment of voice pathologies. Since the vocal folds vibrate at 100 - 350 cycles per second, their visual inspection is currently done by stroboscopy, which merges information from multiple cycles to present an apparent motion. High Speed Digital Laryngeal Imaging (HSDLI), with a temporal resolution of up to 10,000 frames per second, has been established as better suited for assessing the vocal fold vibratory function through direct recording. But the widespread use of HSDLI is limited by a lack of consensus on modalities such as the features to be examined. Image processing techniques that circumvent the tedious and time-consuming effort of examining large volumes of recordings still have room for improvement. Fundamental questions such as the required frame rate or resolution of the recordings are still not adequately answered, and HSDLI alone cannot provide absolute physical measurements of the anatomical features and vocal fold displacement. This work addresses these challenges through improved signal processing. First, a vocal fold edge extraction technique with subpixel accuracy, suited even for the hard-to-record pediatric population, is developed; the algorithm, equally applicable to pediatric and adult subjects, is implemented to facilitate user inspection and intervention. Objective features describing the fold dynamics, extracted from the edge displacement waveform, are proposed and analyzed on a diverse dataset of healthy males, females and children. The sampling and quantization noise present in the recordings is analyzed, and methods to mitigate it are investigated. A customized Kalman smoothing and spline interpolation on the displacement waveform is found to improve the stability of feature estimation. The relationship between frame rate, spatial resolution and vibration required for efficient capture of information is derived. Finally, to address the lack of absolute physical measurement, a structured light projection calibrated with respect to the endoscope is prototyped.
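
    The customized Kalman smoothing and spline interpolation mentioned above can be illustrated with a minimal sketch; this is not the thesis's implementation, and the constant-velocity motion model, noise variances and frame rate below are illustrative assumptions.

    import numpy as np
    from scipy.interpolate import CubicSpline

    def kalman_rts_smooth(z, dt, q=1e2, r=1e-2):
        """Smooth a 1-D displacement sequence z sampled every dt seconds.

        Constant-velocity Kalman filter followed by a Rauch-Tung-Striebel backward pass."""
        F = np.array([[1.0, dt], [0.0, 1.0]])            # constant-velocity state transition
        H = np.array([[1.0, 0.0]])                       # only position is observed
        Q = q * np.array([[dt**4 / 4, dt**3 / 2],
                          [dt**3 / 2, dt**2]])           # process noise (assumed)
        R = np.array([[r]])                              # measurement/quantization noise (assumed)
        xs, Ps, xps, Pps = [], [], [], []
        x, P = np.array([z[0], 0.0]), np.eye(2)
        for zk in z:                                     # forward filtering pass
            xp, Pp = F @ x, F @ P @ F.T + Q
            K = Pp @ H.T @ np.linalg.inv(H @ Pp @ H.T + R)
            x = xp + (K @ (zk - H @ xp)).ravel()
            P = (np.eye(2) - K @ H) @ Pp
            xs.append(x); Ps.append(P); xps.append(xp); Pps.append(Pp)
        for k in range(len(z) - 2, -1, -1):              # RTS smoothing pass
            C = Ps[k] @ F.T @ np.linalg.inv(Pps[k + 1])
            xs[k] = xs[k] + C @ (xs[k + 1] - xps[k + 1])
        return np.array([s[0] for s in xs])              # smoothed position component only

    fps = 4000                                           # assumed HSDLI frame rate
    t = np.arange(0, 0.02, 1 / fps)                      # 20 ms of a ~200 Hz oscillation
    z = np.sin(2 * np.pi * 200 * t) + 0.05 * np.random.randn(t.size)
    smoothed = kalman_rts_smooth(z, 1 / fps)
    t_fine = np.linspace(t[0], t[-1], 10 * t.size)       # spline interpolation between frames
    displacement_fine = CubicSpline(t, smoothed)(t_fine)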

    Models and Analysis of Vocal Emissions for Biomedical Applications

    The Models and Analysis of Vocal Emissions for Biomedical Applications (MAVEBA) workshop came into being in 1999 from the strongly felt need to share know-how, objectives and results between areas that until then had seemed quite distinct, such as bioengineering, medicine and singing. MAVEBA deals with all aspects of the study of the human voice, with applications ranging from the neonate to the adult and elderly. Over the years the initial issues have grown and spread into other areas of research such as occupational voice disorders, neurology, rehabilitation, and image and video analysis. MAVEBA takes place every two years, always in Firenze, Italy.

    Cable-driven parallel robot for transoral laser phonosurgery

    Transoral laser phonosurgery (TLP) is a common surgical procedure in otolaryngology. Currently, two techniques are commonly used: free-beam and fibre delivery. For free-beam delivery, in combination with laser scanning techniques, accurate laser pattern scanning can be achieved; however, a line of sight to the target is required. A suspension laryngoscope is adopted to create a straight working channel for the scanning laser beam, which can introduce lesions to the patient, and the manipulability and ergonomics are poor. For the fibre delivery approach, a flexible fibre is used to transmit the laser beam, and the distal tip of the laser fibre can be manipulated by a flexible robotic tool. The line-of-sight limitation is thereby avoided; however, the laser scanning function is currently lost in this approach, and its performance is inferior to that of the laser scanning technique used in the free-beam approach. A novel cable-driven parallel robot (CDPR), LaryngoTORS, has been developed for TLP. By using a curved laryngeal blade, a straight suspension laryngoscope is no longer necessary, which is expected to be less traumatic to the patient. Semi-autonomous free-path scanning can be executed with high precision and high repeatability, as verified in various bench and ex vivo tests. The technical feasibility of the LaryngoTORS robot for TLP was considered and evaluated in this thesis, and the robot has demonstrated the potential to offer an acceptable and feasible solution for real-world clinical applications of TLP. Furthermore, the LaryngoTORS robot can be combined with fibre-based optical biopsy techniques: experiments with probe-based confocal laser endomicroscopy (pCLE) and hyperspectral fibre-optic sensing were performed, demonstrating its potential for fibre-based optical biopsy of the larynx.
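
    As a rough illustration of the cable-driven parallel mechanism underlying such a system, the sketch below computes cable lengths for a generic planar CDPR following a scan path; the anchor geometry, path and dimensions are made-up assumptions and do not reflect the LaryngoTORS design.

    import numpy as np

    # Cable exit points on the frame, in metres; purely illustrative values.
    base_anchors = np.array([[0.00, 0.00],
                             [0.06, 0.00],
                             [0.03, 0.05]])

    def cable_lengths(target_xy):
        """Cable lengths that place the end-effector (laser tip) at target_xy."""
        return np.linalg.norm(base_anchors - np.asarray(target_xy), axis=1)

    # Stream cable-length set-points along a short straight scan path.
    path = np.linspace([0.02, 0.02], [0.04, 0.03], num=50)
    setpoints = np.array([cable_lengths(p) for p in path])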

    Augmented Reality

    Augmented Reality (AR) is a natural development from virtual reality (VR), which was developed several decades earlier, and it complements VR in many ways. Because the user can see both real and virtual objects simultaneously, AR is far more intuitive, although it is not free from human factors and other restrictions. AR also demands less time and effort per application, because it does not require the entire virtual scene and environment to be constructed. In this book, several new and emerging application areas of AR are presented and divided into three sections. The first section contains applications in outdoor and mobile AR, such as construction, restoration, security and surveillance. The second section deals with AR in medical and biological applications and the human body. The third and final section contains a number of new and useful applications in daily living and learning.

    Retainer-Free Optopalatographic Device Design and Evaluation as a Feedback Tool in Post-Stroke Speech and Swallowing Therapy

    Stroke is one of the leading causes of long-term motor disability, including oro-facial impairments which affect speech and swallowing. Over the last decades, rehabilitation programs have evolved from utilizing mainly compensatory measures to focusing on recovering lost function. In the continuing effort to improve recovery, the concept of biofeedback has increasingly been leveraged to enhance self-efficacy, motivation and engagement during training. Although both speech and swallowing disturbances resulting from oro-facial impairments are frequent sequelae of stroke, efforts to develop sensing technologies that provide comprehensive and quantitative feedback on articulator kinematics and kinetics, especially those of the tongue, specifically during post-stroke speech and swallowing therapy, have been sparse. Such a sensing device needs to accurately capture intraoral tongue motion and contact with the hard palate, which can then be translated into an appropriate form of feedback, without affecting tongue motion itself and while still being lightweight and portable. This dissertation proposes the use of an intraoral sensing principle known as optopalatography to provide such feedback, and explores the design of optopalatographic devices for use in dysphagia and dysarthria therapy. Additionally, it presents an alternative means of holding the device in place inside the oral cavity with a newly developed palatal adhesive instead of relying on dental retainers, which previously limited device usage to a single person. The device was evaluated on the task of automatically classifying different functional tongue exercises from one another, with application in dysphagia therapy, and on a phoneme recognition task, with application in dysarthria therapy. Results on the palatal adhesive suggest that it is indeed a valid alternative to dental retainers when device residence time inside the oral cavity is limited to several tens of minutes per session, which is the case for dysphagia and dysarthria therapy. Functional tongue exercises were classified with approximately 61 % accuracy across subjects, whereas for the phoneme recognition task, tense vowels had the highest recognition rate, followed by lax vowels and consonants. In summary, retainer-free optopalatography has the potential to become a viable method for providing real-time feedback on tongue movements inside the oral cavity, but still requires further improvements as outlined in the remarks on future development.
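
    The cross-subject classification of functional tongue exercises reported above can be illustrated with a minimal sketch; the channel count, summary features and k-nearest-neighbour classifier are assumptions for illustration, not the dissertation's actual pipeline, and random data is used, so the printed score is at chance level.

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier

    rng = np.random.default_rng(0)
    n_sequences, n_frames, n_channels = 120, 200, 8       # assumed recording layout
    X_raw = rng.normal(size=(n_sequences, n_frames, n_channels))
    y = rng.integers(0, 4, size=n_sequences)              # four mock exercise labels

    def summarize(seq):
        """Per-channel mean, standard deviation and range as a fixed-length feature vector."""
        return np.concatenate([seq.mean(0), seq.std(0), seq.max(0) - seq.min(0)])

    X = np.array([summarize(s) for s in X_raw])
    clf = KNeighborsClassifier(n_neighbors=5)
    print(cross_val_score(clf, X, y, cv=5).mean())        # chance level on random data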

    Final Report to NSF of the Standards for Facial Animation Workshop

    The human face is an important and complex communication channel, and a very familiar and sensitive object of human perception. The facial animation field has grown greatly in the past few years as fast computer graphics workstations have made the modeling and real-time animation of hundreds of thousands of polygons affordable and almost commonplace. Many applications have been developed, such as teleconferencing, surgery, information assistance systems, games, and entertainment. To solve these different problems, different approaches to both animation control and modeling have been developed.

    Glottis Detection and Evaluation in High-Speed Video Recording

    This work summarizes the results of a study of vocal cord evaluation based on data extracted from recordings taken by a laryngoscopic system, specifically Laryngeal High-Speed Videoendoscopy (LHSV). The main goal of this work is to process the images contained in the recorded LHSV sequences, find and detect the glottal gap (glottis) using chosen image segmentation methods, and evaluate vocal cord quality by analytical and statistical methods using a defined set of parameters. The first part of the thesis focuses on the nature and structure of the information obtained with the LHSV system: the anatomy of the vocal cords and the physiology of voice production are described in relation to the information contained in the frames of an LHSV recording. The basic types of vocal cord diseases are also listed, and the data acquisition process, data structure, and disturbances affecting the quality of an LHSV recording are described. Furthermore, the image segmentation applied to the laryngoscopic image data from LHSV examinations is delineated, together with the methods developed for glottis localization (finding the region of interest, ROI), the segmentation itself, and the selection of parameters based mainly on the geometry and symmetry of the vocal cords. The process is demonstrated in several case studies. An important part of the work describes new methods dealing with the computed parameters and their relationships using correlation analysis; an approach based on expected and unexpected correlations resulting from the detailed analysis can provide a basic evaluation of the vocal cords' behavior. Other methods then provide a numerical evaluation of the development of the glottal shape based on statistical analysis and expert rating. The results are illustrated and explained.
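
    The segmentation stage described above (locating a region of interest, segmenting the glottal gap, and deriving per-frame parameters) can be illustrated with a minimal sketch; the midpoint-threshold rule, the fixed ROI and the synthetic frames are illustrative assumptions rather than the methods developed in the thesis.

    import numpy as np
    from scipy import ndimage

    def glottal_area(frames, roi):
        """Per-frame glottal area (in pixels) from grayscale frames.

        frames: array (n_frames, H, W); roi: (row_slice, col_slice) enclosing the glottis.
        The glottal gap is taken to be the largest dark connected component in the ROI."""
        areas = []
        for frame in frames:
            patch = frame[roi]
            thr = 0.5 * (patch.min() + patch.max())       # naive midpoint threshold
            labels, n = ndimage.label(patch < thr)
            if n == 0:
                areas.append(0)
                continue
            sizes = np.bincount(labels.ravel())[1:]       # connected-component sizes
            areas.append(int(sizes.max()))
        return np.asarray(areas)

    # Synthetic frames: a dark elliptical gap whose width oscillates over the cycle.
    H, W, n_frames = 64, 64, 100
    yy, xx = np.mgrid[0:H, 0:W]
    frames = np.ones((n_frames, H, W))
    for k in range(n_frames):
        half_width = 3 + 2 * np.sin(2 * np.pi * k / 25)
        frames[k][((yy - 32) / 20) ** 2 + ((xx - 32) / half_width) ** 2 < 1] = 0.05
    waveform = glottal_area(frames, (slice(10, 54), slice(20, 44)))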

    Models and Analysis of Vocal Emissions for Biomedical Applications

    The MAVEBA Workshop, held on a biennial basis, collects in its proceedings the scientific papers presented as oral and poster contributions during the conference. The main subjects are: the development of theoretical and mechanical models as an aid to the study of the main phonatory dysfunctions, as well as biomedical engineering methods for the analysis of voice signals and images, as a support to clinical diagnosis and the classification of vocal pathologies.

    Book of Abstracts 15th International Symposium on Computer Methods in Biomechanics and Biomedical Engineering and 3rd Conference on Imaging and Visualization

    In this edition, the two events will run together as a single conference, highlighting the strong connection with the Taylor & Francis journals: Computer Methods in Biomechanics and Biomedical Engineering (John Middleton and Christopher Jacobs, Eds.) and Computer Methods in Biomechanics and Biomedical Engineering: Imaging and Visualization (João Manuel R.S. Tavares, Ed.). The conference has become a major international meeting on computational biomechanics, imaging and visualization. In this edition, the main program includes 212 presentations. In addition, sixteen renowned researchers will give plenary keynotes, addressing current challenges in computational biomechanics and biomedical imaging. In Lisbon, for the first time, a session dedicated to awarding the winner of the Best Paper in the CMBBE Journal will take place. We believe that CMBBE2018 will have a strong impact on the development of computational biomechanics and biomedical imaging and visualization, identifying emerging areas of research and promoting collaboration and networking between participants. This impact is evidenced by the well-known research groups, commercial companies and scientific organizations who continue to support and sponsor the CMBBE meeting series. In fact, the conference is enriched with five workshops on specific scientific topics and commercial software.