14 research outputs found

    Human Handheld-Device Interaction: An Adaptive User Interface

    No full text
    The move to smaller, lighter and more powerful (mobile) handheld devices, whether PDAs or smartphones, looks like a trend that is gathering speed. With numerous embedded technologies and wireless connectivity, this shift opens up opportunities for daily activities that are both more efficient and more exciting. Despite these advancing possibilities, the shrinking size and mobile use impose challenges on both the technical and usability aspects of the devices and their applications. An adaptive user interface that is able to autonomously adjust its display and available actions to the current goals, context and emotions of its user offers a solution to the limited input options, the constraints on output presentation, and the user requirements arising from mobility and attention shifting in human handheld-device interaction. The present work takes preliminary steps towards a framework for the rapid construction of adaptive user interfaces on handheld devices that are multimodal, context-aware and affective. The framework consists of predefined modules that can work in isolation but can also be connected in an ad hoc way as part of the framework. The modules deal with human handheld-device interaction, the interpretation of the user's actions, knowledge structure and management, the selection of appropriate responses, and the presentation of feedback. Models of human language and visual perception have been studied to formulate concepts and ideas as both text-based and visual language-based messages. An adaptive circular on-screen keyboard and visual language-based interfaces have been proposed as alternative input options for fast interaction. In particular, sentences in the visual language can be constructed using spatial arrangements of visual symbols, such as icons, lines, arrows and ellipses. As icons have the potential to cross language barriers, interaction using the visual language is suitable for language-independent contexts. Personalized predictive and language-based features have also been added to accelerate both input methods. An ontology has been chosen to represent knowledge of the user, the task and the world. The modeling and structure of the knowledge representation have been designed to share common semantics, integrate inter-module communication, and fulfill the context-awareness requirement. This enables the framework to be developed into applications for different domains. Context awareness is approached by interpreting both the verbal and non-verbal aspects of user inputs to update the system's beliefs about the user, the task and the world. Methods and techniques have been developed to fuse multiple input modalities, for multiple messages from multiple users, into a coherent and context-dependent interpretation. A simple approach to emotion analysis has been proposed to interpret the non-verbal aspect of the inputs. It is based on keyword spotting and categorizes the emotional state into a valence orientation with an intensity, which makes it suitable for input recognition under high uncertainty. Template-based interaction management and output generation methods have been developed. The templates have a direct link to concepts in the ontology-based knowledge representation. This approach supports common semantics with the other modules within the framework and allows the development of a larger-scale system with consistent and easily verifiable knowledge repositories.
A multimodal, multi-user, and multi-device communication system in the field of crisis management, built on the framework, has been developed as a proof of the proposed concepts. The system consists of a comprehensive selection of modules for reporting and collaborating on observations using handheld devices over mobile ad hoc network-based communication. It supports communication using a combination of text, visual language and graphics. The system is able to interpret user messages, construct knowledge of the user, the task and the world, and develop a crisis scenario. User tests assessed whether or not users are capable of expressing their messages using the provided modalities; the tests also addressed usability issues in interacting with an adaptive user interface on handheld devices. The experimental results indicated that the adaptive user interface is able to support communication between users and between users and their handheld devices. Moreover, an explorative study within this research has also generated knowledge regarding user requirements (technical, social and usability aspects) in adaptive user interfaces and, more generally, human handheld-device interaction. The rationale behind our approaches, designs, empirical evaluations and implications for research on the framework for an adaptive user interface on handheld devices is also described in this thesis.
    Man Machine Interaction, Mediamatics, Electrical Engineering, Mathematics and Computer Science
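The visual language mentioned above arranges icons and connectors spatially to form sentences. The following minimal sketch, with invented names and structures that are not taken from the thesis, illustrates how such an icon message could be represented and read out as simple statements:

```python
# Hypothetical sketch of an icon-based visual-language message: symbols placed on
# the screen, connected by arrows, read out as crude (subject, relation, object)
# triples. Names and structure are illustrative, not the thesis implementation.
from dataclasses import dataclass, field

@dataclass
class Symbol:
    concept: str     # ontology concept the icon stands for, e.g. "fire"
    x: float         # position on the handheld screen
    y: float

@dataclass
class Arrow:
    source: Symbol   # spatial relation: source points to target
    target: Symbol

@dataclass
class VisualMessage:
    symbols: list = field(default_factory=list)
    arrows: list = field(default_factory=list)

    def statements(self):
        """Turn each arrow into a crude (subject, relation, object) triple."""
        return [(a.source.concept, "related_to", a.target.concept) for a in self.arrows]

# Example: an observer draws "fire -> hospital" as an icon message.
fire = Symbol("fire", 40, 80)
hospital = Symbol("hospital", 200, 80)
msg = VisualMessage(symbols=[fire, hospital], arrows=[Arrow(fire, hospital)])
print(msg.statements())   # [('fire', 'related_to', 'hospital')]
```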

    Bayesian Classification of Disaster Events on the Basis of Icon Messages of Observers.

    No full text
    During major disaster events, human operators in a crisis center will be overloaded with a flood of phone calls. As an increasing number of people in and around big cities do not master the native language, the need for automated systems that process the context and content of information about disaster situations from the communicated messages becomes apparent. To support language-independent communication and to reduce ambiguity and the multitude of possible interpretations, we developed an icon-based observation reporting system. In contrast to previous approaches to such systems, we link icon messages to disaster events without using Natural Language Processing. We developed a dedicated set of icons related to the context and characteristic features of disaster events. The developed system is able to compute the probability of the occurrence of possible disaster events using Bayesian reasoning. In this paper, we present the reporting system, the developed icons, the Bayesian model, and the results of a user test.
    Intelligent Interaction, Electrical Engineering, Mathematics and Computer Science
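As a rough illustration of this kind of Bayesian reasoning, the sketch below scores candidate disaster events from a set of reported icons. The priors, likelihoods and icon names are invented placeholders, not the paper's actual model or icon set:

```python
# Minimal sketch: posterior probability of disaster events given observed icons,
# assuming icons are conditionally independent given the event (naive-Bayes style).
# All numbers below are made up for illustration.

PRIOR = {"fire": 0.2, "flood": 0.1, "explosion": 0.05, "none": 0.65}

LIKELIHOOD = {  # P(icon reported | event)
    "fire":      {"smoke": 0.8,  "flames": 0.9,  "water": 0.10},
    "flood":     {"smoke": 0.05, "flames": 0.02, "water": 0.95},
    "explosion": {"smoke": 0.7,  "flames": 0.6,  "water": 0.05},
    "none":      {"smoke": 0.01, "flames": 0.01, "water": 0.05},
}

def posterior(observed_icons):
    """Return P(event | observed icons) for every candidate disaster event."""
    unnormalised = {}
    for event, prior in PRIOR.items():
        p = prior
        for icon in observed_icons:
            p *= LIKELIHOOD[event].get(icon, 0.01)  # small default for unknown icons
        unnormalised[event] = p
    total = sum(unnormalised.values())
    return {event: p / total for event, p in unnormalised.items()}

print(posterior(["smoke", "flames"]))  # fire and explosion dominate
```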

    An automated text-based synthetic face with emotions for web lectures

    No full text
    Web lectures have many positive aspects, e.g. they enable learners to easily control their learning experience. Developing high-quality online learning materials, however, takes a lot of time and human effort [2]. An alternative is to develop a digital teacher. We have developed a prototype of a synthetic 3D face that shows emotions associated with text-based speech in an automated way. As a first step, we studied how humans express emotions in face-to-face communication. Based on this study, we developed a 2D affective lexicon and a set of rules that describe dependencies between linguistic content and emotions.
    Intelligent Systems, Electrical Engineering, Mathematics and Computer Science
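A minimal sketch of the idea behind a 2D affective lexicon plus mapping rules is given below; the lexicon entries, thresholds and emotion labels are placeholders for illustration, not the prototype's actual data or rules:

```python
# Illustrative sketch: a 2D affective lexicon (valence, arousal) and a simple rule
# that maps a sentence to a coarse emotion label for a talking face.

AFFECTIVE_LEXICON = {          # word -> (valence, arousal), both in [-1, 1]
    "excellent": (0.9, 0.6),
    "fail":      (-0.8, 0.5),
    "boring":    (-0.5, -0.6),
    "calm":      (0.3, -0.7),
}

def sentence_emotion(sentence):
    """Average (valence, arousal) over known words and pick a coarse emotion."""
    hits = [AFFECTIVE_LEXICON[w] for w in sentence.lower().split() if w in AFFECTIVE_LEXICON]
    if not hits:
        return "neutral"
    valence = sum(v for v, _ in hits) / len(hits)
    arousal = sum(a for _, a in hits) / len(hits)
    if valence > 0.2:
        return "joy" if arousal > 0 else "content"
    if valence < -0.2:
        return "anger" if arousal > 0 else "sadness"
    return "neutral"

print(sentence_emotion("the results were excellent"))  # joy
```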

    Computed Ontology-based Situation Awareness of Multi-User Observations

    No full text
    In recent years, we have developed a framework for human-computer interaction that offers recognition of various communication modalities, including speech, lip movement, facial expression, handwriting/drawing, gesture, text and visual symbols. The framework allows the rapid construction of a multimodal, multi-device, and multi-user communication system for crisis management. This paper reports on the approaches used in the multi-user information integration (input fusion) and multimodal presentation (output fission) modules, which can be used in isolation but also as part of the framework. The latter is able to specify and produce context-sensitive and user-tailored output combining language, speech, visual language and graphics. These modules provide a communication channel between the system and users with different communication devices. By employing an ontology, the system's view of the world is constructed from multi-user observations, and appropriate multimodal responses are generated.
    Intelligent Systems, Electrical Engineering, Mathematics and Computer Science
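As a hedged sketch of input fusion, the snippet below merges observations from several users into a single world view keyed by concept and location, with a toy confidence score that grows with independent confirmations. The data model and scoring rule are illustrative assumptions, far simpler than the ontology-based fusion described in the paper:

```python
# Sketch: fuse multi-user observations into one world model. Confidence grows with
# the number of independent reporters; the scoring rule is an invented placeholder.
from collections import defaultdict

class WorldModel:
    def __init__(self):
        # (concept, location) -> set of users who reported it
        self._reports = defaultdict(set)

    def add_observation(self, user, concept, location):
        self._reports[(concept, location)].add(user)

    def beliefs(self):
        """Toy confidence: 1 report -> 0.5, 2 -> 0.75, 3 -> 0.875, ..."""
        return {key: 1 - 0.5 ** len(users) for key, users in self._reports.items()}

world = WorldModel()
world.add_observation("user_a", "fire", "station_square")
world.add_observation("user_b", "fire", "station_square")
world.add_observation("user_a", "smoke", "harbour")
print(world.beliefs())
```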

    Dynamic Routing during Disaster Events

    No full text
    Innovations in mobile technology allow people to request route information on their smartphones to reach safe areas during emergency and disaster evacuations. In return, the affected people in the field can send their observation reports, e.g. using a dedicated icon-based disaster language. However, given the turbulent nature of disaster situations, the people and systems at the crisis center are subjected to information overload, which can obstruct timely and accurate information sharing. A dynamic and automated evacuation plan that is able to predict the future development of the disaster can be used to guide the affected people to safety in times of crisis. In this paper, we present a dynamic version of Dijkstra's shortest-path algorithm. The algorithm is able to compute the shortest path from the GPS-based location of the user (sent by the smartphone) to a safe area, taking into account areas that may be affected in the future. In the case of a toxic cloud, the “plume” model has been used to predict the path of the “plume” and to compute the areas affected in the future. The algorithm has been tested in a simulation study and used in an experiment during a simulated crisis event in the city of Rotterdam.
    Intelligent Interaction, Electrical Engineering, Mathematics and Computer Science
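The sketch below shows the core idea of such a time-aware Dijkstra variant: a node is skipped if the hazard prediction says it will be unsafe at the arrival time. The graph, travel times and the is_affected predicate are toy assumptions, not the paper's road network or plume model:

```python
# Simplified time-aware Dijkstra: a node cannot be entered if the predicted hazard
# covers it at the arrival time. All data below is a toy example.
import heapq

GRAPH = {   # node -> list of (neighbour, travel_time_minutes)
    "A": [("B", 4), ("C", 2)],
    "B": [("D", 5)],
    "C": [("B", 1), ("D", 8)],
    "D": [],
}

def is_affected(node, t):
    """Toy plume prediction: node B becomes unsafe from minute 3 onwards."""
    return node == "B" and t >= 3

def dynamic_dijkstra(source, target):
    best = {source: 0}
    queue = [(0, source, [source])]
    while queue:
        t, node, path = heapq.heappop(queue)
        if node == target:
            return t, path
        for neighbour, travel in GRAPH.get(node, []):
            arrival = t + travel
            if is_affected(neighbour, arrival):
                continue                      # skip nodes predicted to be unsafe
            if arrival < best.get(neighbour, float("inf")):
                best[neighbour] = arrival
                heapq.heappush(queue, (arrival, neighbour, path + [neighbour]))
    return None

# With this toy hazard, B is avoided and the route becomes A -> C -> D.
print(dynamic_dijkstra("A", "D"))
```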

    Utilizing the Potential of the Affected Population and Prevalent Mobile Technology during Disaster Response: Propositions for a Literature Survey

    No full text
    Despite the growing awareness of the untapped potential of the affected population in a disaster situation, their inclusion in disaster management is extremely limited. This study surveys the literature to see whether the affected population and prevalent mobile technology can be utilized during disaster response. The idea is to provide the affected people with a way to lead themselves to safety and to empower them to serve as distributed, active sources of information. This way, those people will reach safety by themselves, while at the same time helping to construct a clear picture of the disaster situation without burdening the already overwhelmed emergency services. This study examines knowledge derived from disaster sociology, draws on experience from recent disasters, and extrapolates from current technological solutions. By establishing that such a solution is feasible, it offers a basis for empirical studies on mobile technology that can be used during disaster response.
    Intelligent Systems, Electrical Engineering, Mathematics and Computer Science

    Questionnaire Items for Evaluating Artificial Social Agents - Expert Generated, Content Validated and Reliability Analysed

    No full text
    In this paper, we report on the multi-year Intelligent Virtual Agents (IVA) community effort, involving more than 90 researchers worldwide, investigating the IVA community's interests and practices in evaluating human interaction with an artificial social agent (ASA). The joint efforts have previously generated a unified set of 19 constructs that capture more than 80% of the constructs used in empirical studies published at the IVA conference between 2013 and 2018. In this paper, we present 131 expert-content-validated questionnaire items for the constructs and their dimensions, and investigate their level of reliability. We establish this in three phases. First, eight experts generated 431 potential construct items. Second, 20 experts rated whether items measure (only) their intended construct, resulting in 207 content-validated items. Next, a reliability analysis was conducted, involving 192 crowd-workers who were asked to rate a human interaction with an ASA, which resulted in 131 items (about 5 items per construct, with Cronbach's alpha ranging from .60 to .87). These items are the starting point for the questionnaire instrument for human-ASA interaction.
    Interactive Intelligence
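For reference, Cronbach's alpha for a single construct can be computed from per-respondent item ratings as sketched below; the ratings are invented example data, not results from the study:

```python
# Sketch of the reliability measure referred to above: Cronbach's alpha,
# alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores)).

def cronbach_alpha(item_scores):
    """item_scores: list of items, each a list of ratings from the same respondents."""
    k = len(item_scores)
    n = len(item_scores[0])

    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)   # sample variance

    item_var = sum(variance(item) for item in item_scores)
    totals = [sum(item[i] for item in item_scores) for i in range(n)]
    return (k / (k - 1)) * (1 - item_var / variance(totals))

# Five respondents rating a three-item construct on a 1-7 scale (invented data).
ratings = [
    [5, 6, 4, 7, 5],
    [4, 6, 5, 6, 5],
    [5, 7, 4, 6, 4],
]
print(round(cronbach_alpha(ratings), 2))  # 0.86
```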

    The 19 Unifying Questionnaire Constructs of Artificial Social Agents: An IVA Community Analysis

    No full text
    In this paper, we report on the multi-year Intelligent Virtual Agents (IVA) community effort, involving more than 80 researchers worldwide, investigating the IVA community's interests and practices in evaluating human interaction with an artificial social agent (ASA). The effort is driven by previous IVA workshops and plenary IVA discussions related to the methodological crisis in the evaluation of ASAs. A previous literature review showed a continuing practice of creating new questionnaires instead of reusing validated questionnaires. We address this issue by examining the questionnaire measurement constructs used in empirical studies published at the IVA conference between 2013 and 2018. We identified 189 constructs used in 89 questionnaires reported across 81 studies. Although these constructs have different names, they often measure the same thing. In this paper, we therefore present a unifying set of 19 constructs that captures more than 80% of the 189 constructs initially identified. We established this set in two steps. First, 49 researchers classified the constructs into broad theoretically based categories. Next, 23 researchers grouped the constructs in each category by their similarity. The resulting 19 groups form a unifying set of constructs, which will be the basis for the future questionnaire instrument of human-ASA interaction.
    Interactive Intelligence

    What are we measuring anyway? A literature survey of questionnaires used in studies reported in the intelligent virtual agent conferences

    Get PDF
    Research into artificial social agents aims at constructing these agents and at establishing an empirically grounded understanding of them, their interaction with humans, and how they can ultimately deliver certain outcomes in areas such as health, entertainment, and education. Key to establishing such understanding is the community's ability to describe and replicate its observations on how users perceive and interact with their agents. In this paper, we address this ability by examining the questionnaires and their constructs used in empirical studies reported in the intelligent virtual agent conference proceedings from 2013 to 2018. The literature survey identified 189 constructs used in 89 questionnaires that were reported across 81 papers. We found unexpectedly little repeated use of questionnaires, as the vast majority of questionnaires (more than 76%) were reported in only a single paper. We expect that this finding will motivate a joint effort by the IVA community towards creating a unified measurement instrument.
    Interactive Intelligence

    The artificial-social-agent questionnaire: Establishing the long and short questionnaire versions

    No full text
    We present the ASA Questionnaire, an instrument for evaluating human interaction with an artificial social agent (ASA), resulting from multi-year efforts involving more than 100 Intelligent Virtual Agent (IVA) researchers worldwide. It has 19 measurement constructs constituted by 90 items, which capture more than 80% of the constructs identified in empirical studies published at the IVA conference between 2013 and 2018. This paper reports on a construct validity analysis, specifically the convergent and discriminant validity of the initial 131 instrument items, involving 532 crowd-workers who were asked to rate human interaction with 14 different ASAs. The analysis included several factor analysis models and resulted in the selection of 90 items for inclusion in the long version of the ASA questionnaire. In addition, a representative item of each construct or dimension was selected to create a 24-item short version of the ASA questionnaire. Whereas the long version is suitable for a comprehensive evaluation of human-ASA interaction, the short version allows a quick analysis and description of the interaction with the ASA. To support reporting ASA questionnaire results, we also put forward an ASA chart, which provides a quick overview of the agent profile.
    Interactive Intelligence
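To illustrate how short-version ratings could be summarised into such an agent profile, the sketch below averages the item ratings per construct; the construct names, item counts and data are placeholders, not the questionnaire's actual items or constructs:

```python
# Hypothetical sketch: summarise questionnaire responses into a per-construct
# profile (mean item rating per construct), which could then be shown in a chart.

def construct_profile(responses):
    """responses: construct name -> list of item ratings for one agent (e.g. 1-7 scale)."""
    return {construct: round(sum(items) / len(items), 2)
            for construct, items in responses.items()}

# Invented example data for three placeholder constructs.
example = {
    "Human-Likeness": [4, 5],
    "Usability":      [6, 6],
    "Enjoyability":   [5, 7],
}
print(construct_profile(example))
# {'Human-Likeness': 4.5, 'Usability': 6.0, 'Enjoyability': 6.5}
```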