
    Design and evaluation of adaptive multimodal systems

    Doctoral thesis in Informatics (Informatics Engineering), presented to the Universidade de Lisboa through the Faculdade de Ciências, 2008. This thesis focuses on the design and evaluation of adaptive multimodal systems. The design of such systems is approached from an integrated perspective, with the goal of obtaining a solution in which aspects related to both adaptive and multimodal systems are considered. The result is FAME, a model-based framework for the design and development of adaptive multimodal systems, where adaptive capabilities directly influence the multimodal fusion and fission operations. FAME covers the design of systems capable of adapting to a diversified context, including variations in users, execution platform, and environment. FAME represents an evolution from previous frameworks by incorporating aspects specific to multimodal interfaces directly into the development of an adaptive platform. One of FAME's components is the Behavioral Matrix, a multi-purpose instrument used during the design phase to represent the adaptation rules. In addition, the Behavioral Matrix is also the component responsible for bridging the gap between the design and evaluation stages. Departing from an analogy between transition networks for representing interaction with a system and behavioral spaces, the Behavioral Matrix makes possible the application of behavioral complexity metrics to general adaptive systems. Moreover, this evaluation is possible during the design stages, which translates into a reduction of the resources required for the evaluation of adaptive systems. The Behavioral Matrix allows a designer to emulate the behavior of a non-adaptive version of the adaptive system, allowing for comparison of the versions, one of the most used approaches to adaptive systems evaluation. In addition, the designer may also emulate the behavior of different user profiles and compare their complexity measures.
    The feasibility of FAME was demonstrated with the development of an adaptive multimodal Digital Book Player. The process was successful, as demonstrated by usability evaluations. Besides these evaluations, behavioral complexity metrics, computed in accordance with the proposed methodology, were able to discern between adaptive and non-adaptive versions of the player. When applied to user profiles of different perceived complexity, the metrics were also able to detect the different interaction complexity. FCT - IPSOM (POSI/PLP/34252/2000) and RiCoBA (POSC/EIA/61042/2004).
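    The analogy with transition networks suggests why such metrics are computable at design time: a transition network is a small graph, and a complexity measure over it is cheap to evaluate before any system is built. The sketch below is purely illustrative, using a toy metric (average out-degree) and invented player states, not the behavioral complexity metrics actually defined in the thesis:

```python
# Hypothetical sketch: comparing a simple behavioral complexity
# measure across non-adaptive and adaptive versions of a system.
# The networks and the metric are illustrative assumptions.

def complexity(stn: dict) -> float:
    """Average out-degree of a state transition network."""
    n_states = len(stn)
    n_transitions = sum(len(targets) for targets in stn.values())
    return n_transitions / n_states

# Non-adaptive player: one fixed path through the interaction.
static_player = {
    "idle": ["playing"],
    "playing": ["paused"],
    "paused": ["playing"],
}

# Adaptive player: adaptation rules open extra transitions per state.
adaptive_player = {
    "idle": ["playing", "suggesting"],
    "playing": ["paused", "annotating"],
    "paused": ["playing", "idle"],
    "suggesting": ["playing"],
    "annotating": ["playing"],
}

print(complexity(static_player))    # 1.0
print(complexity(adaptive_player))  # 1.6
```

    On a measure like this, the adaptive version scores higher, which is the kind of design-time comparison between adaptive and non-adaptive versions that the abstract describes.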

    SiAM-dp: an open development platform for massively multimodal dialogue systems in cyber-physical environments

    Cyber-physical environments enhance natural environments of daily life, such as homes, factories, offices, and cars, by connecting the cybernetic world of computers and communication with the real physical world. While cyber-physical environments will, under the keyword Industrie 4.0, play a significant role in the next industrial revolution, they will also appear in homes, offices, workshops, and numerous other areas. In this new world, classical interaction concepts, in which users exclusively interact with a single stationary device, PC, or smartphone, become less dominant and make room for new forms of interaction between humans and the environment itself. Furthermore, new technologies and a growing spectrum of applicable modalities broaden the possibilities for interaction designers to include more natural and intuitive non-verbal and verbal communication. The dynamic character of a cyber-physical environment and the mobility of users confront developers with the challenge of building systems that are flexible with respect to the connected and used devices and modalities. This also implies new opportunities for cross-modal interaction that go beyond the dual-modality interaction common today. This thesis addresses the support of application developers with a platform for the declarative, model-based development of multimodal dialogue applications, with a focus on distributed input and output devices in cyber-physical environments. The main contributions can be divided into three parts:
    - Design of models and strategies for the specification of dialogue applications in a declarative development approach. This includes models for the definition of project resources, dialogue behaviour, speech recognition grammars, and graphical user interfaces, as well as mapping rules that convert device-specific input and output descriptions into a common representation language.
    - The implementation of a runtime platform that provides a flexible and extendable architecture for the easy integration of new devices and components. The platform realises concepts and strategies of multimodal human-computer interaction and is the basis for full-fledged multimodal dialogue applications for arbitrary device setups, domains, and scenarios.
    - A software development toolkit, integrated into the Eclipse rich client platform, that provides wizards and editors for creating and editing multimodal dialogue applications.
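    The mapping rules described above can be pictured as small translation functions from a device-specific event to a device-independent representation. The sketch below is purely illustrative: the names (`CommonInput`, `map_gesture_event`, the event fields) are invented for the example and are not SiAM-dp's actual model language or API:

```python
# Hypothetical sketch of a mapping rule: converting a device-specific
# input event into a common, device-independent representation.
from dataclasses import dataclass

@dataclass
class CommonInput:
    modality: str      # e.g. "speech", "gesture", "touch"
    intent: str        # device-independent meaning
    confidence: float  # recognizer score, 0.0 - 1.0

def map_gesture_event(event: dict) -> CommonInput:
    """Mapping rule for one hypothetical gesture-tracking device."""
    intents = {"swipe_left": "previous", "swipe_right": "next"}
    return CommonInput(
        modality="gesture",
        intent=intents.get(event["gesture"], "unknown"),
        confidence=event["score"],
    )

msg = map_gesture_event({"gesture": "swipe_right", "score": 0.92})
print(msg.intent)  # next
```

    With all devices mapped into one representation like this, the dialogue logic can stay independent of the concrete device setup, which is the flexibility the abstract argues cyber-physical environments require.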

    Multi-modal usability evaluation

    Research into the usability of multi-modal systems has tended to be device-led, with a resulting lack of theory about multi-modal interaction and how it might differ from more conventional interaction. This is compounded by confusion over the precise definition of modality between the various disciplines within the HCI community, over how modalities can be effectively classified, and over their usability properties. There is a consequent lack of appropriate methodologies and notations to model such interactions and assess the usability implications of these interfaces. The role of expertise and craft skill in using HCI techniques is also poorly understood. This thesis proposes a new definition of modality and goes on to identify issues of importance to multi-modal usability, culminating in the development of a new methodology to support the identification of such usability issues. It additionally explores the role of expertise and craft skill in using usability modelling techniques to assess usability issues. By analysing the problems inherent in current definitions and approaches, as well as issues relevant to cognitive science, a clear understanding of both the requirements for a suitable definition of modality and the salient usability issues is obtained. A novel definition of modality, based on the three elements of sense, information form, and temporal nature, is proposed. Further, an associated taxonomy is produced, which categorises modalities within the sensory dimension as visual, acoustic, or haptic; within the information-form dimension as lexical, symbolic, or concrete; and within the temporal-form dimension as discrete, continuous, or dynamic. This results in a twenty-seven-cell taxonomy, with each cell representing one taxon, indicating one particular type of modality.
    This is a faceted classification system: a modality is named after the intersection of the categories, building the category names into a compound modality name. The issues surrounding modality are examined and refined into the concepts of modality types, properties, and clashes. Modalities are identified as belonging to either the system or the user, and as being expressive or receptive in type. Various properties are described, based on issues of granularity and redundancy, and five different types of clash are described. Problems relating to the modelling of multi-modal interaction are examined by means of a motivating case study based on a portion of an interface for a robotic arm. The effectiveness of five modelling techniques (STN, CW, CPM-GOMS, PUM, and Z) in representing multi-modal issues is assessed. From this, and using the collated definition, taxonomy, and theory, a new methodology, Evaluating Multi-modal Usability (EMU), is developed. This is applied to the earlier case study of the robotic arm to assess its application and coverage. Both the definition and EMU are used by students in a case study to test the definition's and methodology's effectiveness, and to examine the leverage such an approach may give. The results show that modalities can be successfully identified within an interactive context and that usability issues can be described. Empirical video data of the robotic arm in use is used to confirm the issues identified by the previous analyses and to identify new issues. A rational re-analysis of the six approaches (STN, CW, CPM-GOMS, PUM, Z, and EMU) is conducted in order to distinguish between issues identified through craft skill, based on general HCI expertise and familiarity with the problem, and issues identified by the core of the method for each approach. This is done to gain a realistic understanding of the validity of the claims made by each method, to identify how else issues might be identified, and to draw out the consequent implications.
    Craft skill is found to have a wider role than anticipated, and the importance of expertise in using such approaches is emphasised. From the case study and the re-analyses, the implications for EMU are examined, and suggestions are made for future refinement. The main contributions of this thesis are the new definition, taxonomy, and theory, which significantly contribute to the theoretical understanding of multi-modal usability, helping to resolve existing confusion in this area. The new methodology, EMU, is a useful technique for examining interfaces for multi-modal usability issues, although some refinement is required. The importance of craft skill in the identification of usability issues has been explicitly explored, with implications for future work on usability modelling and on the training of practitioners in such techniques.
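    The faceted naming scheme makes the taxonomy easy to reproduce: compounding one category per dimension yields the full twenty-seven-cell space. A minimal sketch, in which the dimension and category names are taken from the abstract but the compound-name separator and list representation are assumptions:

```python
# Enumerate the 3 x 3 x 3 = 27 taxa of the proposed modality
# taxonomy by compounding one category from each dimension.
from itertools import product

SENSES = ["visual", "acoustic", "haptic"]          # sensory dimension
FORMS = ["lexical", "symbolic", "concrete"]        # information form
TEMPORAL = ["discrete", "continuous", "dynamic"]   # temporal nature

# Faceted classification: each cell is named by joining its
# category on each dimension into a compound modality name.
taxonomy = ["-".join(cell) for cell in product(SENSES, FORMS, TEMPORAL)]

print(len(taxonomy))  # 27
print(taxonomy[0])    # visual-lexical-discrete
```

    A compound name such as "acoustic-lexical-dynamic" then denotes exactly one taxon, e.g. continuously streamed spoken text.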

    Interactive ubiquitous displays based on steerable projection

    The ongoing miniaturization of computers and their embedding into the physical environment require new means of visual output. In the area of Ubiquitous Computing, flexible and adaptable display options are needed in order to enable the presentation of visual content in the physical environment. In this dissertation, we introduce the concepts of the Display Continuum and Virtual Displays as new means of human-computer interaction. In this context, we present a realization of a Display Continuum based on steerable projection, and we describe a number of different interaction methods for manipulating the Display Continuum and the Virtual Displays placed on it.

    Interim research assessment 2003-2005 - Computer Science

    This report primarily serves as a source of information for the 2007 Interim Research Assessment Committee for Computer Science at the three technical universities in the Netherlands. The report also provides information for others interested in our research activities.