77 research outputs found

    Multimodal interaction with mobile devices : fusing a broad spectrum of modality combinations

    This dissertation presents a multimodal architecture for use in mobile scenarios such as shopping and navigation. It also analyses a wide range of feasible modality input combinations for these contexts. For this purpose, two interlinked demonstrators were designed for stand-alone use on mobile devices. Of particular importance was the design and implementation of a modality fusion module capable of combining input from a range of communication modes such as speech, handwriting, and gesture. The implementation is able to account for confidence value biases arising within and between modalities and also provides a method for resolving semantically overlapped input. Tangible interaction with real-world objects and symmetric multimodality are two further themes addressed in this work. The work concludes with the results of two usability field studies that provide insight into user preference and modality intuition for different modality combinations, as well as user acceptance of anthropomorphized objects.
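
    As a rough illustration of the kind of confidence-weighted fusion the abstract describes, the sketch below combines hypotheses from several modalities after correcting per-modality confidence biases. The bias table, score scale, and hypothesis format are assumptions made for the example, not the dissertation's actual implementation.

        # Minimal sketch of confidence-weighted modality fusion (illustrative only).
        from collections import defaultdict

        # Hypothetical per-modality bias corrections: e.g. a speech recogniser that
        # reports over-confident scores, a gesture recogniser that under-reports.
        MODALITY_BIAS = {"speech": 0.9, "handwriting": 1.0, "gesture": 1.1}

        def fuse(hypotheses):
            """hypotheses: list of (modality, interpretation, confidence) tuples.
            Returns interpretations ranked by bias-corrected, accumulated confidence."""
            scores = defaultdict(float)
            for modality, interpretation, confidence in hypotheses:
                # Correct within-modality bias before comparing across modalities.
                scores[interpretation] += confidence * MODALITY_BIAS.get(modality, 1.0)
            # Semantically overlapping inputs (the same interpretation arriving from
            # several modalities) are merged simply by accumulating their scores.
            return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

        ranked = fuse([
            ("speech", "select:product_42", 0.80),
            ("gesture", "select:product_42", 0.55),   # pointing at the same item
            ("handwriting", "select:product_17", 0.70),
        ])
        print(ranked[0])  # most plausible joint interpretation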

    Semantic multimedia modelling & interpretation for annotation

    The emergence of multimedia-enabled devices, particularly the incorporation of cameras in mobile phones, together with the rapid fall in the cost of storage, has drastically increased the rate at which multimedia data are produced. Faced with such ubiquity of digital images and videos, the research community has turned its attention to their effective utilization and management. Stored in monumental multimedia corpora, digital data need to be retrieved and organized in an intelligent way that exploits the rich semantics involved. Making use of these image and video collections demands proficient image and video annotation and retrieval techniques. Recently, the multimedia research community has progressively shifted its emphasis to the personalization of these media. The main impediment in image and video analysis is the semantic gap: the discrepancy between a user's high-level interpretation of an image or video and its low-level computational interpretation. Content-based image and video annotation systems are particularly susceptible to the semantic gap because they rely on low-level visual features to describe semantically rich image and video content. Visual similarity, however, is not semantic similarity, so this dilemma must be tackled in another way. The semantic gap can be narrowed by incorporating high-level and user-generated information into the annotation. High-level descriptions of images and videos are better at capturing the semantic meaning of multimedia content, but such information cannot always be collected. It is commonly agreed that the problem of high-level semantic annotation of multimedia is still far from solved. This dissertation puts forward approaches for intelligent multimedia semantic extraction for high-level annotation, and thereby aims to bridge the gap between visual features and semantics. It proposes a framework for annotation enhancement and refinement for object/concept-annotated image and video datasets. The overall idea is to first purge noisy keywords from the datasets and then expand the concepts lexically and with common-sense knowledge, filling the vocabulary and lexical gap to achieve high-level semantics for the corpus. The dissertation also explores a novel approach for propagating high-level semantics (HLS) through an image corpus. HLS propagation exploits semantic intensity (SI), the factor expressing how dominant a concept is within an image, together with annotation-based semantic similarity between images. An image is a combination of several concepts, some more dominant than others; the semantic similarity of a pair of images is therefore computed from their SI values and the semantic similarity of their concepts. Moreover, HLS propagation uses clustering techniques to group similar images, so that a single effort by a human expert to assign high-level semantics to a randomly selected image is propagated to the other images through clustering. The investigation was carried out on the LabelMe image and LabelMe video datasets. Experiments show that the proposed approaches make a noticeable improvement towards bridging the semantic gap and that the proposed system outperforms traditional systems.
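
    The SI-weighted similarity and cluster-based propagation can be sketched very roughly as follows. The similarity formula, the threshold-based stand-in for clustering, and the data layout are assumptions for illustration, not the dissertation's actual method.

        # Illustrative sketch: annotation-based image similarity weighted by semantic
        # intensity (SI), and propagation of one expert-assigned high-level semantic
        # (HLS) label to sufficiently similar images.
        def si_similarity(ann_a, ann_b):
            """Each argument maps concept -> SI (how dominant the concept is in the
            image). Shared concepts contribute the smaller of the two SI values."""
            shared = set(ann_a) & set(ann_b)
            overlap = sum(min(ann_a[c], ann_b[c]) for c in shared)
            total = sum(ann_a.values()) + sum(ann_b.values())
            return 2.0 * overlap / total if total else 0.0

        def propagate_hls(images, labelled_id, hls_label, threshold=0.5):
            """One expert labels a single image; the label spreads to images whose
            SI-weighted annotation similarity exceeds the threshold (a crude
            stand-in for the clustering step described in the abstract)."""
            seed = images[labelled_id]
            return {img_id: hls_label
                    for img_id, ann in images.items()
                    if si_similarity(seed, ann) >= threshold}

        images = {
            "img1": {"beach": 0.7, "sea": 0.2, "person": 0.1},
            "img2": {"beach": 0.6, "sea": 0.3, "dog": 0.1},
            "img3": {"street": 0.8, "car": 0.2},
        }
        print(propagate_hls(images, "img1", "seaside holiday"))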

    Visual Speech and Speaker Recognition

    This thesis presents a learning-based approach to speech recognition and person recognition from image sequences. An appearance-based model of the articulators is learned from example images and is used to locate, track, and recover visual speech features. A major difficulty in model-based approaches is to develop a scheme which is general enough to account for the large appearance variability of objects but which does not lack specificity. The method described here decomposes the lip shape and the intensities in the mouth region into weighted sums of basis shapes and basis intensities, respectively, using a Karhunen-Loève expansion. The intensities deform with the shape model to provide shape-independent intensity information. This information is used in image search, which is based on a similarity measure between the model and the image. Visual speech features can be recovered from the tracking results and represent shape and intensity information. A speechreading (lip-reading) system is presented which models these features by Gaussian distributions and their temporal dependencies by hidden Markov models. The models are trained using the EM algorithm, and speech recognition is performed based on maximum posterior probability classification. It is shown that, besides speech information, the recovered model parameters also contain person-dependent information, and a novel method for person recognition is presented which is based on these features. Talking persons are represented by spatio-temporal models which describe the appearance of the articulators and their temporal changes during speech production. Two different topologies for speaker models are described: Gaussian mixture models and hidden Markov models. The proposed methods were evaluated for lip localisation, lip tracking, speech recognition, and speaker recognition on an isolated-digit database of 12 subjects, and on a continuous-digit database of 37 subjects. The techniques were found to achieve good performance for all tasks listed above. For an isolated digit recognition task, the speechreading system outperformed previously reported systems and performed slightly better than untrained human speechreaders.
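
    A minimal sketch of the Karhunen-Loève (principal component) decomposition of lip-shape vectors mentioned above is given below. The array sizes, the number of basis shapes, and the random stand-in data are assumptions, and the per-word hidden Markov modelling of the resulting feature sequences is only indicated in a comment.

        # Learn basis shapes from aligned lip contours and encode a new shape as
        # a small weight vector (the per-frame visual speech feature).
        import numpy as np

        def learn_basis(shapes, n_basis=8):
            """shapes: (n_examples, n_points) matrix of aligned lip-contour
            coordinates. Returns the mean shape and the leading eigen-shapes."""
            mean = shapes.mean(axis=0)
            # Eigen-decomposition of the covariance via SVD of the centred data.
            _, _, vt = np.linalg.svd(shapes - mean, full_matrices=False)
            return mean, vt[:n_basis]

        def encode(shape, mean, basis):
            """Project one tracked shape onto the basis; sequences of these weight
            vectors would then be modelled by per-word hidden Markov models."""
            return basis @ (shape - mean)

        def decode(weights, mean, basis):
            """Approximately reconstruct the shape from its weights."""
            return mean + weights @ basis

        rng = np.random.default_rng(0)
        training_shapes = rng.normal(size=(200, 40))  # stand-in for tracked contours
        mean, basis = learn_basis(training_shapes)
        w = encode(training_shapes[0], mean, basis)
        print(w.shape, float(np.linalg.norm(decode(w, mean, basis) - training_shapes[0])))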

    Intensional Cyberforensics

    This work focuses on the application of intensional logic to cyberforensic analysis; its benefits and difficulties are compared with those of the finite-state-automata approach. The work extends the use of the intensional programming paradigm to the modeling and implementation of a cyberforensics investigation process with backtracing of event reconstruction, in which evidence is modeled by multidimensional hierarchical contexts, and proofs or disproofs of claims are undertaken in an eductive manner of evaluation. This approach is a practical, context-aware improvement over the finite-state-automata (FSA) approach seen in previous work. As a base implementation language model, we use a new dialect of the Lucid programming language, called Forensic Lucid, and we focus on defining hierarchical contexts based on intensional logic for the distributed evaluation of cyberforensic expressions. We also augment the work with credibility factors surrounding digital evidence and witness accounts, which have not been previously modeled. The Forensic Lucid programming language, used for this intensional cyberforensic analysis, is formally presented through its syntax and operational semantics. In large part, the language is based on its predecessor and codecessor Lucid dialects, such as GIPL, Indexical Lucid, Lucx, Objective Lucid, and JOOIP, bound by the underlying intensional programming paradigm.
    Comment: 412 pages, 94 figures, 18 tables, 19 algorithms and listings; PhD thesis; v2 corrects some typos and refs; also available on Spectrum at http://spectrum.library.concordia.ca/977460
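
    As a loose, toy illustration of context-indexed evaluation with credibility factors (not Forensic Lucid's actual syntax or semantics), the sketch below evaluates a claim against an evaluation context and combines the credibilities of the supporting evidential statements by simple multiplication; the dimension names, the example case, and the combination rule are all assumptions.

        # Toy sketch: a claim holds at a context if every evidential predicate holds
        # there; its overall credibility is the product of the credibility factors.
        from functools import reduce

        # Hierarchical evaluation context: dimension -> tag.
        context = {"case": "printer_incident", "time": "t3", "source": "syslog"}

        # Evidential statements: (predicate over a context, credibility in [0, 1]).
        evidence = [
            (lambda ctx: ctx["time"] == "t3", 0.9),               # log timestamp
            (lambda ctx: ctx["source"] == "syslog", 0.8),          # machine-generated
            (lambda ctx: ctx["case"] == "printer_incident", 0.6),  # witness account
        ]

        def evaluate(claim, ctx):
            if not all(pred(ctx) for pred, _ in claim):
                return 0.0
            return reduce(lambda acc, cf: acc * cf, (cf for _, cf in claim), 1.0)

        print(evaluate(evidence, context))  # 0.9 * 0.8 * 0.6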

    Semantic Decision Support for Information Fusion Applications

    This thesis falls within the domain of knowledge representation and the modeling of uncertainty in a context of information fusion. The main idea is to use semantic tools, and more specifically ontologies, not only to represent general domain knowledge and observations, but also to represent the uncertainty that sources may introduce into their own observations. We propose to represent these uncertainties and semantic imprecisions through a meta-ontology (called DS-Ontology) based on the theory of belief functions. The contribution of this work focuses first on the definition of semantic inclusion and intersection operators for ontologies, on which the implementation of the theory of belief functions relies, and secondly on the development of a tool called FusionLab for merging semantic information within ontologies, built on the preceding theoretical development. This work has been applied within a European maritime surveillance project.
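
    For readers unfamiliar with belief functions, the sketch below applies Dempster's rule of combination to two mass functions whose focal elements are sets of ontology concepts, which is roughly the kind of fusion a DS-Ontology-based tool performs. The concept names and mass values are invented, and the semantic intersection operator is approximated here by a plain set intersection.

        # Dempster's rule over focal sets of ontology concepts (illustrative only).
        from itertools import product

        def combine(m1, m2):
            """m1, m2: dict mapping frozenset of concepts -> mass. Returns their
            Dempster combination, renormalised to exclude conflicting (empty) joins."""
            joint, conflict = {}, 0.0
            for (a, ma), (b, mb) in product(m1.items(), m2.items()):
                inter = a & b  # stand-in for the semantic intersection operator
                if inter:
                    joint[inter] = joint.get(inter, 0.0) + ma * mb
                else:
                    conflict += ma * mb
            k = 1.0 - conflict
            return {focal: mass / k for focal, mass in joint.items()}

        source_1 = {frozenset({"CargoShip"}): 0.7,
                    frozenset({"CargoShip", "Tanker"}): 0.3}
        source_2 = {frozenset({"Tanker"}): 0.4,
                    frozenset({"CargoShip", "Tanker"}): 0.6}
        print(combine(source_1, source_2))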

    Analyse multimodale de situations conflictuelles en contexte véhicule (Multimodal analysis of conflict situations in a vehicle context)

    In this thesis we study human interactions in order to identify conflict situations in the vehicle cabin. Humans most commonly use sight and hearing to analyze interactions. This task seems trivial, yet it is complex for an artificial intelligence model, which must capture video and audio information and analyze it to predict a conflict situation. Our approach is new compared to previous research on this topic, since the passengers are constrained in their movements in the cabin and the on-board computing power available for this task is limited. To our knowledge, no prior work has addressed the analysis of human interactions for conflict situation detection in this context and under these constraints. Our investigations are first based on a public sentiment analysis corpus in order to compare with the literature. We implement a model capable of ingesting video, audio, and text data (audio transcriptions), fusing them, and making a decision. In our application context, we then record a multimodal dataset of human interactions simulating more or less conflictual situations in a vehicle cabin. This database is used to implement end-to-end and parametric classification models. The results obtained are consistent with the literature on the impact of each modality on system performance: text is more informative than audio, which is in turn more informative than video. The different fusion approaches implemented show significant gains over single-modality classification performance.
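
    A rough sketch of the late-fusion setup suggested by these results is shown below: each modality produces class scores independently and a weighted sum makes the final decision. The weights (reflecting the reported text > audio > video ranking), class names, and score values are assumptions for the example, not the thesis' actual models.

        # Weighted late fusion of per-modality class scores (illustrative only).
        import numpy as np

        CLASSES = ["no_conflict", "conflict"]
        WEIGHTS = {"text": 0.5, "audio": 0.3, "video": 0.2}

        def late_fusion(scores):
            """scores: dict modality -> per-class probability vector (sums to 1).
            Returns the predicted class and the fused probability vector."""
            fused = sum(WEIGHTS[m] * np.asarray(p) for m, p in scores.items())
            fused /= fused.sum()  # renormalise in case a modality is missing
            return CLASSES[int(np.argmax(fused))], fused

        label, probs = late_fusion({
            "text":  [0.2, 0.8],   # transcript suggests raised voices / insults
            "audio": [0.4, 0.6],
            "video": [0.7, 0.3],
        })
        print(label, probs)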
    • …