161 research outputs found

    SVM Based Indoor/Mixed/Outdoor Classification for Digital Photo Annotation in a Ubiquitous Computing Environment

    This paper extends our previous framework for digital photo annotation by adding a novel approach to indoor/mixed/outdoor image classification. We propose feature vectors best suited for support vector machine (SVM) based indoor/mixed/outdoor image classification. While previous research classifies photographs into indoor and outdoor only, this study extends the task to three classes: indoor, mixed, and outdoor. The three-class scheme improves the performance of outdoor classification, showing 5-10% higher accuracy than previous research. The method is one of the components of digital image annotation: a digital camera or an annotation server connected to a ubiquitous computing network can automatically annotate captured photos using the proposed method.
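    As a rough illustration of the kind of pipeline described above, the sketch below trains a three-class (indoor/mixed/outdoor) SVM on simple global colour-histogram features. The feature choice, parameters and file handling are assumptions made for illustration; the paper's actual feature vectors are not reproduced here.

    # A minimal, hypothetical sketch: three-class indoor/mixed/outdoor SVM on
    # colour-histogram features (a stand-in for the paper's unspecified features).
    import numpy as np
    from PIL import Image
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import classification_report

    CLASSES = ["indoor", "mixed", "outdoor"]

    def colour_histogram(path, bins=8):
        """Flattened, normalised RGB histogram as a simple global feature vector."""
        img = np.asarray(Image.open(path).convert("RGB"))
        hist = [np.histogram(img[..., c], bins=bins, range=(0, 255))[0] for c in range(3)]
        feat = np.concatenate(hist).astype(float)
        return feat / feat.sum()

    def train_classifier(image_paths, labels):
        """Fit an RBF-kernel SVM and print a three-class evaluation report."""
        X = np.stack([colour_histogram(p) for p in image_paths])
        y = np.array([CLASSES.index(label) for label in labels])
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
        clf = SVC(kernel="rbf", C=10.0, gamma="scale")
        clf.fit(X_tr, y_tr)
        print(classification_report(y_te, clf.predict(X_te),
                                    labels=list(range(len(CLASSES))),
                                    target_names=CLASSES, zero_division=0))
        return clf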

    CHORUS Deliverable 2.1: State of the Art on Multimedia Search Engines

    Based on information provided by European projects and national initiatives related to multimedia search, as well as by domain experts who participated in the CHORUS think-tanks and workshops, this document reports on the state of the art in multimedia content search from a technical and socio-economic perspective. The technical perspective includes an up-to-date view of content-based indexing and retrieval technologies, multimedia search in the context of mobile devices and peer-to-peer networks, and an overview of current evaluation and benchmark initiatives for measuring the performance of multimedia search engines. From a socio-economic perspective, we take stock of the impact and legal consequences of these technical advances and point out future directions of research.

    AN OBJECT-BASED MULTIMEDIA FORENSIC ANALYSIS TOOL

    With the enormous increase in the use and volume of photographs and videos, multimedia-based digital evidence now plays an increasingly fundamental role in criminal investigations. However, with this increase, it is becoming time-consuming and costly for investigators to analyse content manually. Within the research community, work on multimedia content has tended to focus on highly specialised scenarios such as tattoo identification, number plate recognition, and child exploitation. An investigator's ability to search multimedia data by keywords (an approach that already exists within forensic tools for character-based evidence) could provide a simple and effective way of identifying relevant imagery. This thesis proposes and demonstrates the value of a multi-algorithmic approach based on fusion to achieve the best image annotation performance. The results show that, among existing systems, the highest average recall was achieved by Imagga with 53%, while the proposed multi-algorithmic system achieved 77% across the selected datasets. Subsequently, a novel Object-based Multimedia Forensic Analysis Tool (OM-FAT) architecture was proposed. The OM-FAT automates the identification and extraction of annotation-based evidence from multimedia content. Besides making multimedia data searchable, the OM-FAT system lets investigators perform various forensic analyses (search using annotations, metadata, object matching, text similarity and geo-tracking) to understand the relationships between artefacts, thus reducing the time taken to perform an investigation and the investigator's cognitive load. It enables investigators to ask higher-level and more abstract questions of the data, and then find answers to the essential questions of an investigation: what, who, why, how, when, and where. The research includes a detailed illustration of the architectural requirements, engines, and complete design of the system workflow, which represents a full case management system. To highlight the ease of use and demonstrate the system's ability to correlate multimedia artefacts, a prototype was developed. The prototype integrates the functionalities of the OM-FAT tool and demonstrates how the system would help digital investigators find pieces of evidence among a large number of images, starting from the acquisition stage and ending at the reporting stage, with less effort and in less time. The Higher Committee for Education Development in Iraq (HCED)
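    The fusion idea can be sketched as a simple vote over the tag sets returned by several annotation engines, scored by average recall. The engine outputs, vote threshold and ground-truth format below are illustrative assumptions, not the thesis's actual OM-FAT design.

    # A hypothetical late-fusion sketch: keep a tag when enough engines agree,
    # then score the result by mean per-image recall against ground-truth tags.
    from collections import Counter
    from typing import Dict, List, Set

    def fuse_tags(per_engine_tags: List[Set[str]], min_votes: int = 2) -> Set[str]:
        """Majority-style fusion over the tag sets returned by several engines."""
        votes = Counter(tag for tags in per_engine_tags for tag in tags)
        return {tag for tag, n in votes.items() if n >= min_votes}

    def average_recall(predicted: Dict[str, Set[str]], truth: Dict[str, Set[str]]) -> float:
        """Mean fraction of ground-truth tags recovered per image."""
        recalls = [len(predicted.get(i, set()) & t) / len(t) for i, t in truth.items() if t]
        return sum(recalls) / len(recalls) if recalls else 0.0

    # Three hypothetical engines annotating one image.
    engines = [{"car", "road", "tree"}, {"car", "person"}, {"car", "road"}]
    fused = fuse_tags(engines)                                    # {"car", "road"}
    print(average_recall({"img1": fused}, {"img1": {"car", "road", "sky"}}))  # ~0.67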

    Methodology and Algorithms for Pedestrian Network Construction

    With the advanced capabilities of mobile devices and the success of car navigation systems, interest in pedestrian navigation systems is on the rise. A critical component of any navigation system is a map database which represents a network (e.g., road networks in car navigation systems) and supports key functionality such as map display, geocoding, and routing. Road networks, mainly due to the popularity of car navigation systems, are well defined and publicly available. However, in pedestrian navigation systems, as well as in other applications including urban planning and physical activity studies, road networks do not adequately represent the paths that pedestrians usually travel. Currently, there are no techniques to automatically construct pedestrian networks, impeding research and development of applications requiring pedestrian data. This, coupled with the increased demand for pedestrian networks, is the prime motivation for this dissertation, which focuses on the development of a methodology and algorithms that can construct pedestrian networks automatically. The methodology involves three independent approaches: network buffering (using existing road networks), collaborative mapping (using GPS traces collected by volunteers), and image processing (using high-resolution satellite and laser imagery). Experiments were conducted to evaluate the pedestrian networks constructed by these approaches against a baseline pedestrian network used as ground truth. The results indicate that the three approaches, while differing in complexity and outcome, are all viable for automatically constructing pedestrian networks.
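    The network-buffering approach can be illustrated by offsetting road centrelines to both sides to obtain candidate sidewalk lines, as in the minimal sketch below; the offset distance and the use of Shapely are assumptions made only for illustration.

    # A hypothetical sketch of network buffering: offset a road centreline to the
    # left and right to obtain candidate sidewalk centrelines (Shapely, metres).
    from shapely.geometry import LineString

    def sidewalk_candidates(road: LineString, offset_m: float = 5.0):
        """Return left/right candidate sidewalk lines parallel to a road centreline."""
        left = road.parallel_offset(offset_m, side="left")
        right = road.parallel_offset(offset_m, side="right")
        return left, right

    # Toy road segment in a projected (metre-based) coordinate system.
    road = LineString([(0, 0), (100, 0), (150, 40)])
    left, right = sidewalk_candidates(road)
    print(round(left.length, 1), round(right.length, 1))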

    Building and exploiting context on the web

    [no abstract]

    Retainer-Free Optopalatographic Device Design and Evaluation as a Feedback Tool in Post-Stroke Speech and Swallowing Therapy

    Stroke is one of the leading causes of long-term motor disability, including oro-facial impairments which affect speech and swallowing. Over the last decades, rehabilitation programs have evolved from utilizing mainly compensatory measures to focusing on recovering lost function. In the continuing effort to improve recovery, the concept of biofeedback has increasingly been leveraged to enhance self-efficacy, motivation and engagement during training. Although both speech and swallowing disturbances resulting from oro-facial impairments are frequent sequelae of stroke, efforts to develop sensing technologies that provide comprehensive and quantitative feedback on articulator kinematics and kinetics, especially those of the tongue, and specifically during post-stroke speech and swallowing therapy, have been sparse. To that end, such a sensing device needs to accurately capture intraoral tongue motion and contact with the hard palate, which can then be translated into an appropriate form of feedback, without affecting tongue motion itself and while still being lightweight and portable. This dissertation proposes the use of an intraoral sensing principle known as optopalatography to provide such feedback, while also exploring the design of optopalatographic devices themselves for use in dysphagia and dysarthria therapy. Additionally, it presents an alternative means of holding the device in place inside the oral cavity with a newly developed palatal adhesive instead of relying on dental retainers, which previously limited device usage to a single person. The evaluation was performed on the task of automatically classifying different functional tongue exercises from one another, with application in dysphagia therapy, whereas a phoneme recognition task was conducted with application in dysarthria therapy. Results on the palatal adhesive suggest that it is indeed a valid alternative to dental retainers when device residence time inside the oral cavity is limited to several tens of minutes per session, which is the case for dysphagia and dysarthria therapy. Functional tongue exercises were classified with approximately 61% accuracy across subjects, whereas for the phoneme recognition task, tense vowels had the highest recognition rate, followed by lax vowels and consonants. In summary, retainer-free optopalatography has the potential to become a viable method for providing real-time feedback on tongue movements inside the oral cavity, but still requires further improvements as outlined in the remarks on future development.
    Contents: 1 Introduction; 2 Basics of post-stroke speech and swallowing therapy; 3 Tongue motion sensing; 4 Fundamentals of optopalatography; 5 Intraoral device anchorage; 6 Initial device design with application in dysphagia therapy; 7 Improved device design with application in dysarthria therapy; 8 Conclusion and future work; 9 Appendix; Bibliography
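    As a rough sketch of how optopalatographic sensor sequences might be turned into an exercise classifier, the snippet below reduces multi-optode time series to simple per-optode statistics and fits a stand-in classifier; the dissertation's actual features and classifier differ, so everything here is an illustrative assumption.

    # A hypothetical sketch: reduce (time, n_optodes) reflectance sequences to
    # per-optode statistics and fit a stand-in classifier for tongue exercises.
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    def sequence_features(seq: np.ndarray) -> np.ndarray:
        """seq has shape (time, n_optodes); return simple per-optode statistics."""
        return np.concatenate([seq.mean(axis=0), seq.std(axis=0), np.ptp(seq, axis=0)])

    def train_exercise_classifier(sequences, labels):
        """Fit a placeholder nearest-neighbour classifier on per-sequence features."""
        X = np.stack([sequence_features(s) for s in sequences])
        clf = KNeighborsClassifier(n_neighbors=3)
        clf.fit(X, labels)
        return clf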

    Multimedia interaction and access based on emotions:automating video elicited emotions recognition and visualization

    Doctoral thesis, Informatics (Informatics Engineering), Universidade de Lisboa, Faculdade de Ciências, 2013. Films are an excellent form of art that exploits our affective, perceptual and intellectual abilities. Technological developments and the trend towards media convergence are turning video into a dominant and pervasive medium, and online video is becoming a growing entertainment activity on the web. At the same time, physiological measures are making it possible to study additional ways to identify and use emotions in human-machine interaction, multimedia retrieval and information visualization. The work described in this thesis has two main objectives: to develop an emotion recognition and classification mechanism for video-induced emotions, and to enable emotional movie access and exploration. Regarding the first objective, we explore recognition and classification mechanisms that allow video classification based on emotions and identify each user's emotional states, providing different access mechanisms. We aim to provide video classification and indexing based on the emotions felt by users while watching movies. Regarding the second objective, we focus on emotional movie access and exploration mechanisms, finding ways to access and visualize videos based on their emotional properties and on users' emotions and profiles. In this context, we designed a set of methods to access and watch movies, both at the level of the whole movie collection and at the level of individual movies. The automatic recognition mechanism developed in this work allows for the detection of physiological patterns, providing valid individual information about users' emotions while watching a specific movie; in addition, the user interface representations and exploration mechanisms proposed and evaluated in this thesis show that more perceptive, satisfying and useful visual representations positively influenced the exploration of emotional information in movies. Fundação para a Ciência e a Tecnologia (FCT): PROTEC SFRH/BD/49475/2009, LASIGE Multiannual Funding, and the VIRUS project (PTDC/EIAEIA/101012/2008)
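    A minimal sketch of the recognition side, assuming per-clip physiological signals (e.g. electrodermal activity and heart rate) summarised into features and fed to a generic classifier; the label set, features and classifier below are illustrative assumptions, not the thesis's actual recogniser.

    # A hypothetical sketch: summarise per-clip physiological signals into a few
    # statistics and train a generic classifier on felt-emotion labels.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    EMOTIONS = ["calm", "happy", "sad", "fear"]  # illustrative label set

    def physio_features(eda: np.ndarray, heart_rate: np.ndarray) -> np.ndarray:
        """Summary statistics of electrodermal activity and heart rate for one clip."""
        return np.array([eda.mean(), eda.std(), np.ptp(eda),
                         heart_rate.mean(), heart_rate.std(), np.ptp(heart_rate)])

    def train_emotion_model(feature_rows, labels):
        """Fit a stand-in random forest on per-clip physiological feature rows."""
        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        clf.fit(np.stack(feature_rows), labels)
        return clf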

    Multimedia

    The now ubiquitous and effortless digital data capture and processing capabilities offered by the majority of devices lead to an unprecedented penetration of multimedia content into our everyday life. To make the most of this phenomenon, the rapidly increasing volume and usage of digitised content require constant re-evaluation and adaptation of multimedia methodologies in order to meet the relentlessly changing requirements from both the user and system perspectives. Advances in Multimedia provides readers with an overview of the ever-growing field of multimedia by bringing together various research studies and surveys from different subfields that point out such important aspects. Some of the main topics this book deals with include: multimedia management in peer-to-peer structures and wireless networks, security characteristics in multimedia, semantic gap bridging for multimedia content, and novel multimedia applications.