
    Multimodal interaction with mobile devices : fusing a broad spectrum of modality combinations

    This dissertation presents a multimodal architecture for use in mobile scenarios such as shopping and navigation. It also analyses a wide range of feasible modality input combinations for these contexts. For this purpose, two interlinked demonstrators were designed for stand-alone use on mobile devices. Of particular importance was the design and implementation of a modality fusion module capable of combining input from a range of communication modes such as speech, handwriting, and gesture. The implementation is able to account for confidence value biases arising within and between modalities and also provides a method for resolving semantically overlapped input. Tangible interaction with real-world objects and symmetric multimodality are two further themes addressed in this work. The work concludes with the results of two usability field studies that provide insight into user preference and modality intuition for different modality combinations, as well as user acceptance of anthropomorphized objects.
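    The fusion module described above weights hypotheses from speech, handwriting, and gesture, corrects for confidence biases within and between modalities, and resolves semantically overlapping input. The sketch below is a minimal, hypothetical illustration of that late-fusion idea; the ModalityHypothesis class, the per-modality bias factors, and the highest-score overlap rule are assumptions for the example, not the thesis implementation.

```python
from dataclasses import dataclass

# Hypothetical per-modality bias factors: recognizers tend to over- or
# under-report confidence, so raw scores are rescaled before comparison.
BIAS = {"speech": 0.9, "handwriting": 1.0, "gesture": 1.1}

@dataclass
class ModalityHypothesis:
    modality: str       # "speech", "handwriting", or "gesture"
    slots: dict         # semantic slots, e.g. {"action": "buy", "item": "milk"}
    confidence: float   # raw recognizer confidence in [0, 1]

def fuse(hypotheses):
    """Merge hypotheses from several modalities into one slot dictionary.

    Semantically overlapping slots (the same key filled by more than one
    modality) are resolved in favour of the highest bias-corrected score.
    """
    fused, support = {}, {}
    for h in hypotheses:
        score = h.confidence * BIAS.get(h.modality, 1.0)
        for key, value in h.slots.items():
            if key not in fused or score > support[key]:
                fused[key], support[key] = value, score
    return fused

# Example: speech supplies the action, a pointing gesture also fills "item";
# after bias correction the gesture hypothesis wins the overlapping slot.
print(fuse([
    ModalityHypothesis("speech", {"action": "buy", "item": "juice"}, 0.70),
    ModalityHypothesis("gesture", {"item": "milk"}, 0.65),
]))
```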

    Designing multimodal interaction for the visually impaired

    Although multimodal computer input is believed to have advantages over unimodal input, little has been done to understand how to design a multimodal input mechanism to facilitate visually impaired users' information access. This research investigates sighted and visually impaired users' multimodal interaction choices when given an interaction grammar that supports speech and touch input modalities. It investigates whether task type, working memory load, or the prevalence of errors in a given modality impacts a user's choice. Theories of human memory and attention are used to explain users' coordination of speech and touch input. Among the abundant findings from this research, the following are the most important in guiding system design: (1) Multimodal input is likely to be used when it is available. (2) Users select input modalities based on the type of task undertaken: they prefer touch input for navigation operations but speech input for non-navigation operations. (3) When errors occur, users prefer to stay in the failing modality instead of switching to another modality for error correction. (4) Despite the common multimodal usage patterns, there is still a high degree of individual difference in modality choices. Additional findings include: (1) Modality switching becomes more prevalent when lower working memory and attentional resources are required for the performance of other concurrent tasks. (2) Higher error rates increase modality switching, but only under duress. (3) Training order affects modality usage: teaching a modality first rather than second increases its use in users' task performance. In addition to uncovering the multimodal interaction patterns above, this research contributes to the field of human-computer interaction design by (1) presenting the design of an eyes-free multimodal information browser and (2) presenting a Wizard of Oz method for working with visually impaired users in order to observe their multimodal interaction. The overall contribution of this work is that it is one of the early investigations into how speech and touch might be combined into a non-visual multimodal system that can effectively be used for eyes-free tasks.
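    The second finding above (touch preferred for navigation, speech for other operations) is a design guideline rather than an algorithm; the short sketch below shows one hypothetical way an eyes-free browser could turn it into a default behaviour. The operation names are invented for the example, and a real system would still accept either modality at any time.

```python
# Hypothetical operation categories reflecting the reported preference:
# touch is favoured for navigation, speech for non-navigation operations.
NAVIGATION_OPS = {"next_item", "previous_item", "open", "back"}

def preferred_modality(operation: str) -> str:
    """Suggest a default input modality for an operation; the user can override it."""
    return "touch" if operation in NAVIGATION_OPS else "speech"

for op in ("next_item", "search", "read_heading", "back"):
    print(op, "->", preferred_modality(op))
```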

    Contributions of the medial entorhinal and postrhinal cortices to spatial and contextual processing

    Thesis (Ph.D.)--Boston University. The medial entorhinal and the postrhinal cortices are regions thought to critically mediate spatial and contextual processing in the medial temporal lobe. However, the manner in which these cortices contribute to spatial processing is not unequivocally established. In the first study, the medial entorhinal cortex was periodically disrupted using the optogenetic inhibitory opsin archaerhodopsin (ArchT) while the spatial firing of hippocampal neurons was recorded in adult rats. If the medial entorhinal cortex is an essential driver of spatial responses in the hippocampus, changes in hippocampal neuron firing would be expected to be specifically time-locked to when the inactivating laser is on. Instead, entorhinal disruption causes a subset of cells to remap only once during the repeated inactivations; once altered, the remapped cells maintain their new firing patterns irrespective of whether the laser is on. This remapping, however, does not lead to a net change in the spatial information coded across the hippocampal neuron population. This suggests that disrupting medial entorhinal inputs does not change the resolution of the spatial representation, but instead changes which hippocampal ensemble represents the environment. The participation of the postrhinal cortex in processing spatial contexts was examined in two experiments. The first experiment examined whether the spatial context of an object influences its perceived familiarity, and whether lesions of the postrhinal cortex diminish this effect. The second experiment investigated whether animals with postrhinal ablations can use spatial context to conditionally discriminate which item contains a reward. No deficit was observed in either experiment, suggesting that the postrhinal cortex is not critical for processing spatial contexts. Though these results suggest that spatial processing is not the essential function of these cortices, they do not eliminate the possibility that spatial information is one of several sources contributing to their computations. Overall, these studies suggest that the entorhinal and postrhinal cortices use multidimensional information to bias which ensembles are active in the medial temporal lobe, thereby dictating how objects and their relationships are processed.
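    The claim that disruption produced no net change in the spatial information coded by the population presupposes a quantitative measure of spatial information. A measure commonly used for hippocampal place cells (whether the thesis uses exactly this metric is an assumption) is the Skaggs information per spike:

```latex
% Skaggs spatial information per spike (a standard measure in this literature).
% p_i       : occupancy probability of spatial bin i
% \lambda_i : the cell's mean firing rate in bin i
% \bar{\lambda} = \sum_i p_i \lambda_i : overall mean firing rate
I = \sum_i p_i \, \frac{\lambda_i}{\bar{\lambda}} \log_2 \frac{\lambda_i}{\bar{\lambda}}
    \quad \text{(bits per spike)}
```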

    SiAM-dp : an open development platform for massively multimodal dialogue systems in cyber-physical environments

    Cyber-physical environments enhance natural environments of daily life such as homes, factories, offices, and cars by connecting the cybernetic world of computers and communication with the real physical world. Under the banner of Industrie 4.0, cyber-physical environments will play a significant role in the next industrial revolution, and they will also appear in homes, offices, workshops, and numerous other areas. In this new world, classical interaction concepts, in which users exclusively interact with a single stationary device, PC, or smartphone, become less dominant and make room for new forms of interaction between humans and the environment itself. Furthermore, new technologies and a growing spectrum of applicable modalities broaden the possibilities for interaction designers to include more natural and intuitive non-verbal and verbal communication. The dynamic character of a cyber-physical environment and the mobility of its users confront developers with the challenge of building systems that are flexible with respect to the connected and used devices and modalities. This also opens up new opportunities for cross-modal interaction that go beyond the dual-modality interaction common today. This thesis addresses the support of application developers with a platform for the declarative, model-based development of multimodal dialogue applications, with a focus on distributed input and output devices in cyber-physical environments. The main contributions can be divided into three parts:
    - Design of models and strategies for the specification of dialogue applications in a declarative development approach. This includes models for the definition of project resources, dialogue behaviour, speech recognition grammars, graphical user interfaces, and mapping rules that convert the device-specific representation of input and output descriptions into a common representation language (a minimal illustrative sketch follows this list).
    - The implementation of a runtime platform that provides a flexible and extendable architecture for the easy integration of new devices and components. The platform realises concepts and strategies of multimodal human-computer interaction and is the basis for full-fledged multimodal dialogue applications for arbitrary device setups, domains, and scenarios.
    - A software development toolkit, integrated into the Eclipse Rich Client Platform, that provides wizards and editors for creating and editing multimodal dialogue applications.
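    The first contribution above hinges on mapping rules that translate device-specific input into a common representation language. The sketch below illustrates that idea only in outline; the event fields, rule names, and CommonInput structure are invented for the example and are not the SiAM-dp schema.

```python
from dataclasses import dataclass

@dataclass
class CommonInput:
    intent: str     # e.g. "select" or "show"
    target: str     # referenced entity in the environment
    modality: str   # originating modality, kept for later fusion or debugging

def map_touch_event(event: dict) -> CommonInput:
    """Map a touchscreen event (assumed shape) to the common representation."""
    return CommonInput(intent="select", target=event["widget_id"], modality="touch")

def map_speech_event(event: dict) -> CommonInput:
    """Map a speech-recognizer result (assumed shape) to the common representation."""
    return CommonInput(intent=event["intent"], target=event["entity"], modality="speech")

# One mapping rule per device type keeps the dialogue model declarative:
# supporting a new device means registering a new rule, not changing the dialogue.
MAPPING_RULES = {"touchscreen": map_touch_event, "asr": map_speech_event}

print(MAPPING_RULES["asr"]({"intent": "show", "entity": "lamp_3"}))
```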

    Designing Embodied Interactive Software Agents for E-Learning: Principles, Components, and Roles

    Embodied interactive software agents are complex autonomous, adaptive, and social software systems with a digital embodiment that enables them to act on and react to other entities (users, objects, and other agents) in their environment through bodily actions, which include the use of verbal and non-verbal communicative behaviors in face-to-face interactions with the user. These agents have been developed for various roles in different application domains, in which they perform tasks that have been assigned to them by their developers or delegated to them by their users or by other agents. In computer-assisted learning, embodied interactive pedagogical software agents have the general task of promoting human learning by working with students (and other agents) in computer-based learning environments, among them e-learning platforms based on Internet technologies, such as the Virtual Linguistics Campus (www.linguistics-online.com). In these environments, pedagogical agents provide contextualized, qualified, personalized, and timely assistance, cooperation, instruction, motivation, and services for both individual learners and groups of learners. This thesis develops a comprehensive, multidisciplinary, and user-oriented view of the design of embodied interactive pedagogical software agents, which integrates theoretical and practical insights from various academic and other fields. The research aims to contribute to the scientific understanding of issues, methods, theories, and technologies that are involved in the design, implementation, and evaluation of embodied interactive software agents for different roles in e-learning and other areas. For developers, the thesis provides sixteen basic principles (Added Value, Perceptible Qualities, Balanced Design, Coherence, Consistency, Completeness, Comprehensibility, Individuality, Variability, Communicative Ability, Modularity, Teamwork, Participatory Design, Role Awareness, Cultural Awareness, and Relationship Building) plus a large number of specific guidelines for the design of embodied interactive software agents and their components. Furthermore, it offers critical reviews of theories, concepts, approaches, and technologies from different areas and disciplines that are relevant to agent design. Finally, it discusses three pedagogical agent roles (virtual native speaker, coach, and peer) in the scenario of the linguistic fieldwork classes on the Virtual Linguistics Campus and presents detailed considerations for the design of an agent for one of these roles (the virtual native speaker).

    The attentive robot companion: learning spatial information from observation and verbal interaction

    Ziegler L. The attentive robot companion: learning spatial information from observation and verbal interaction. Bielefeld: Universität Bielefeld; 2015. This doctoral thesis investigates how a robot companion can gain a certain degree of situational awareness through observation of and interaction with its surroundings. The focus lies on the representation of spatial knowledge gathered continuously over time in an indoor environment. Against the background of research on an interactive service robot, methods for applying this knowledge in inference and verbal communication tasks are presented. The design and application of the models are guided by the requirements of referential communication. The approach involves analysing the dynamic properties of structures in the robot's field of view, allowing it to distinguish objects of interest from other agents and background structures. The use of multiple persistent models representing these dynamic properties enables the robot to track changes in multiple scenes over time and to establish spatial and temporal references. This work includes building a coherent representation that considers allocentric and egocentric aspects of spatial knowledge for these models. The spatial analysis is extended with a semantic interpretation of objects and regions; this top-down approach to generating additional context information enhances the grounding process in communication. A holistic, boosting-based classification approach using a wide range of 2D and 3D visual features anchored in the spatial representation allows the system to identify room types. The process of grounding referential descriptions from a human interlocutor in the spatial representation is evaluated through references to furniture; this method uses a probabilistic network to handle ambiguities in the descriptions and employs a strategy for resolving conflicts. To demonstrate the real-world applicability of these approaches, the system was deployed on the mobile robot BIRON in a realistic apartment scenario involving observation and verbal interaction with an interlocutor.
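    The room-type identification step above is described as a boosting-based classifier over 2D and 3D visual features. The snippet below is a generic stand-in for that idea using scikit-learn's AdaBoost on toy feature vectors; the features, labels, and choice of library are assumptions for illustration, not the classifier built in the thesis.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

# Toy per-region feature vectors: [table-like planes, chair detections, clutter score]
X = np.array([
    [2, 4, 0.8],   # kitchen-like
    [2, 5, 0.7],   # kitchen-like
    [0, 1, 0.1],   # hallway-like
    [0, 0, 0.2],   # hallway-like
    [1, 2, 0.4],   # office-like
    [1, 1, 0.5],   # office-like
])
y = np.array(["kitchen", "kitchen", "hallway", "hallway", "office", "office"])

# Boosted decision stumps over the feature vector, in the spirit of the
# boosting-based room-type classification described above.
clf = AdaBoostClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.predict([[1, 4, 0.6]]))   # predict a room type for a new region
```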

    Answering questions about archived, annotated meetings

    Retrieving information from archived meetings is a new domain of information retrieval that has received increasing attention in the past few years. Search in spontaneous spoken conversations has been recognized as more difficult than text-based document retrieval because meeting discussions contain two levels of information: the content itself, i.e. what topics are discussed, but also the argumentation process, i.e. what conflicts are resolved and what decisions are made. To capture the richness of information in meetings, current research focuses on recording meetings in Smart-Rooms, transcribing meeting discussions into text, and annotating the discussions with higher-level semantic structures to allow efficient access to the data. However, it is not yet clear what type of user interface is best suited for searching and browsing such archived, annotated meetings. Content-based retrieval with keyword search is too naive and does not take into account the semantic annotations on the data. The objective of this thesis is to assess the feasibility and usefulness of a natural language interface to meeting archives that allows users to ask complex questions about meetings and retrieve episodes of meeting discussions based on semantic annotations. The particular issues that we address are: the need for argumentative annotation to answer questions about meetings; the linguistic and domain-specific natural language understanding techniques required to interpret such questions; and the use of visual overviews of meeting annotations to guide users in formulating questions. To meet these objectives, we have annotated meetings with argumentative structure and built a prototype of a natural language understanding engine that interprets questions based on those annotations. Further, we have performed two sets of user experiments to study what questions users ask when faced with a natural language interface to annotated meeting archives. For this, we used a Wizard of Oz simulation method to enable users to express questions in their own terms without being influenced by limitations in speech recognition technology. Our experimental results show that it is technically feasible to annotate meetings and implement a deep-linguistic NLU engine for questions about meetings, but in practice users do not consistently take advantage of these features; instead they often search for keywords in meetings. When visual overviews of the available annotations are provided, users refer to those annotations in their questions, but the questions remain simple. Users search with a breadth-first approach, asking questions in sequence instead of posing a single complex question. We conclude that natural language interfaces to meeting archives are useful, but that more experimental work is needed to find ways to encourage users to take advantage of the expressive power of natural language when asking questions about meetings.
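    The contrast drawn above between keyword search and annotation-based retrieval can be made concrete with a small sketch: episodes carry argumentative labels in addition to transcript text, so a question about decisions can filter on the label rather than on words. The labels, fields, and matching logic are invented for the example and are not the thesis's NLU engine.

```python
from typing import Optional

# Toy meeting episodes with transcript text and argumentative annotations.
episodes = [
    {"time": "00:12", "text": "we could use a plastic case", "acts": {"proposal"}},
    {"time": "00:18", "text": "plastic feels too fragile", "acts": {"objection"}},
    {"time": "00:25", "text": "agreed, we go with rubber", "acts": {"decision"}},
]

def find_episodes(act: Optional[str] = None, keyword: Optional[str] = None):
    """Return episodes matching an argumentative act, a keyword, or both."""
    return [
        e for e in episodes
        if (act is None or act in e["acts"])
        and (keyword is None or keyword in e["text"])
    ]

# "What was decided about the case?" -> filter on the 'decision' annotation.
print(find_episodes(act="decision"))
# Plain keyword search, the strategy users often fell back on in the studies.
print(find_episodes(keyword="plastic"))
```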

    Crossmodal displays : coordinated crossmodal cues for information provision in public spaces

    PhD Thesis. This thesis explores the design of Crossmodal Displays, a new kind of display-based interface that aims to help prevent information overload and to support information presentation for multiple simultaneous users who share a physical space or situated interface but have different information needs and privacy concerns. By exploiting human multimodal perception and utilizing the synergy of existing public displays and personal displays, crossmodal displays avoid numerous drawbacks of previous approaches, including a reliance on tracking technologies, weak protection of users' privacy, small user capacity, and high cognitive load demands. The review of human multimodal perception in this thesis, especially multimodal integration and crossmodal interaction, has many implications for the design of crossmodal displays and constitutes the foundation for the proposed conceptual model. Two types of crossmodal display prototype applications were developed: CROSSFLOW for indoor navigation and CROSSBOARD for information retrieval on high-density information displays; both utilize coordinated crossmodal cues to guide multiple simultaneous users' attention, in a timely manner, to publicly visible information relevant to each user. Most of the results of the single-user and multi-user lab studies on the prototype systems developed in this research demonstrate the effectiveness and efficiency of crossmodal displays and validate several significant advantages over previous solutions. However, the results also reveal that the usability and user experience of crossmodal displays, as well as human perception of crossmodal cues, need more detailed investigation and improvement. This thesis is the first exploration of the design of crossmodal displays. A set of design suggestions and a lifecycle model of crossmodal display development have been produced, and can be used by designers or other researchers who wish to develop crossmodal displays for their applications or to integrate crossmodal cues into their interfaces.
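    One way to read the "coordinated crossmodal cues" idea above is as a timing protocol: public items are highlighted in a fixed rotation, and each user's personal device delivers a private cue exactly when that user's item is highlighted, so nothing user-specific appears on the shared screen. The sketch below illustrates this reading only; the slot length, item list, and vibrate() stub are assumptions, not the CROSSFLOW or CROSSBOARD implementation.

```python
import time

SLOT_SECONDS = 1.0
public_items = ["exit A", "exit B", "exit C", "exit D"]   # visible to everyone
user_targets = {"alice": "exit C", "bob": "exit A"}       # private, per user

def vibrate(user: str) -> None:
    # Stand-in for a haptic cue on the user's personal device.
    print(f"[{user}'s device] vibrate now")

def run_one_rotation() -> None:
    for item in public_items:
        print(f"[public display] highlighting: {item}")
        for user, target in user_targets.items():
            if target == item:
                vibrate(user)   # private cue coincides with the public highlight
        time.sleep(SLOT_SECONDS)

run_one_rotation()
```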