7,057 research outputs found

    Optimizing The Design Of Multimodal User Interfaces

    Get PDF
    Due to a current lack of principle-driven multimodal user interface design guidelines, designers may encounter difficulties when choosing the most appropriate display modality for given users or specific tasks (e.g., verbal versus spatial tasks). The development of multimodal display guidelines from both a user and task domain perspective is thus critical to the achievement of successful human-system interaction. Specifically, there is a need to determine how to design task information presentation (e.g., via which modalities) to capitalize on an individual operator's information processing capabilities and the inherent efficiencies associated with redundant sensory information, thereby alleviating information overload. The present effort addresses this issue by proposing a theoretical framework (Architecture for Multi-Modal Optimization, AMMO) from which multimodal display design guidelines and adaptive automation strategies may be derived. The foundation of the proposed framework is based on extending, at a functional working memory (WM) level, existing information processing theories and models with the latest findings in cognitive psychology, neuroscience, and other allied sciences. The utility of AMMO lies in its ability to provide designers with strategies for directing system design, as well as dynamic adaptation strategies (i.e., multimodal mitigation strategies) in support of real-time operations. In an effort to validate specific components of AMMO, a subset of AMMO-derived multimodal design guidelines was evaluated with a simulated weapons control system multitasking environment. The results of this study demonstrated significant performance improvements in user response time and accuracy when multimodal display cues were used (i.e., auditory and tactile, individually and in combination) to augment the visual display of information, thereby distributing human information processing resources across multiple sensory and WM resources. These results provide initial empirical support for validation of the overall AMMO model and a subset of the principle-driven multimodal design guidelines derived from it. The empirically validated multimodal design guidelines may be applicable to a wide range of information-intensive computer-based multitasking environments.
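
    The mitigation logic described above can be illustrated with a small sketch. This is not code from the paper (AMMO is a theoretical framework, not an implementation); the names Modality and choose_cue_modalities, and the load thresholds, are hypothetical, standing in for one guideline: augment the visual channel with redundant auditory and tactile cues as visual load grows.

        # Hypothetical sketch of one AMMO-style mitigation rule; not from the paper.
        from enum import Enum, auto

        class Modality(Enum):
            VISUAL = auto()
            AUDITORY = auto()
            TACTILE = auto()

        def choose_cue_modalities(visual_load: float) -> list[Modality]:
            """Augment the visual display with redundant cross-sensory cues as
            visual working-memory load grows, distributing processing across
            multiple sensory and WM resources (thresholds are illustrative)."""
            cues = [Modality.VISUAL]
            if visual_load > 0.7:
                cues.append(Modality.AUDITORY)  # first redundant channel
            if visual_load > 0.9:
                cues.append(Modality.TACTILE)   # second redundant channel
            return cues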

    WaitSuite: Productive Use of Diverse Waiting Moments

    Get PDF
    The busyness of daily life makes it difficult to find time for informal learning. Yet, learning requires significant time and effort, with repeated exposures to educational content on a recurring basis. Despite the struggle to find time, there are numerous moments in a day that are typically wasted due to waiting, such as while waiting for the elevator to arrive, wifi to connect, or an instant message to arrive. We introduce the concept of wait-learning: automatically detecting wait time and inviting people to learn while waiting. Our approach is to design seamless interactions that augment existing wait time with productive opportunities. Combining wait time with productive work opens up a new class of software systems that overcome the problem of limited time. In this article, we establish a design space for wait-learning and explore this design space by creating WaitSuite, a suite of five different wait-learning apps that each uses a different kind of waiting. For one of these apps, we conducted a feasibility study to evaluate learning and to understand how exercises should be timed during waiting periods. Subsequently, we evaluated multiple kinds of wait-learning in a two-week field study of WaitSuite with 25 people. We present design implications for wait-learning, and a theoretical framework that describes how wait time, ease of accessing the learning task, and competing demands impact the effectiveness of wait-learning in different waiting scenarios. These findings provide insight into how wait-learning can be designed to minimize interruption to ongoing tasks and maximize engagement with learning.
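
    The wait-learning pattern lends itself to a short sketch: a wait detector estimates how long a wait will last and surfaces a micro-exercise only when the gap is long enough to be useful. Everything here (on_wait_start, show_flashcard, the 5-second threshold) is an assumed illustration, not WaitSuite's actual implementation.

        # Illustrative wait-learning hook; names and threshold are assumptions.
        MIN_USEFUL_WAIT_S = 5.0

        def on_wait_start(expected_wait_s: float) -> None:
            """Called by a wait detector (e.g. a wifi-reconnect or elevator hook).
            Surface a micro-exercise only if the wait can fit one."""
            if expected_wait_s >= MIN_USEFUL_WAIT_S:
                show_flashcard()

        def show_flashcard() -> None:
            # Placeholder: a real app would render a vocabulary prompt in the UI
            # and dismiss it when the wait ends, to minimize interruption.
            print("Translate: 'la manzana' -> ?")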

    The BURCHAK corpus: a Challenge Data Set for Interactive Learning of Visually Grounded Word Meanings

    Full text link
    We motivate and describe a new freely available human-human dialogue dataset for interactive learning of visually grounded word meanings through ostensive definition by a tutor to a learner. The data has been collected using a novel, character-by-character variant of the DiET chat tool (Healey et al., 2003; Mills and Healey, submitted) with a novel task, where a Learner needs to learn invented visual attribute words (such as "burchak" for square) from a tutor. As such, the text-based interactions closely resemble face-to-face conversation and thus contain many of the linguistic phenomena encountered in natural, spontaneous dialogue. These include self- and other-correction, mid-sentence continuations, interruptions, overlaps, fillers, and hedges. We also present a generic n-gram framework for building user (i.e. tutor) simulations from this type of incremental data, which is freely available to researchers. We show that the simulations produce outputs that are similar to the original data (e.g. 78% turn match similarity). Finally, we train and evaluate a Reinforcement Learning dialogue control agent for learning visually grounded word meanings, trained from the BURCHAK corpus. The learned policy shows comparable performance to a rule-based system built previously.
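
    As a rough illustration of the n-gram user-simulation idea, the sketch below trains a bigram model over tutor turns and samples new ones. The paper's framework operates over incremental, character-by-character data; this token-level bigram version, and the names train_bigram and simulate_turn, are simplifying assumptions.

        # Toy bigram user simulation in the spirit of the framework described above.
        import random
        from collections import defaultdict

        def train_bigram(turns: list[str]) -> dict:
            """Collect successor lists for each token across tutor turns."""
            model = defaultdict(list)
            for turn in turns:
                tokens = ["<s>"] + turn.split() + ["</s>"]
                for prev, nxt in zip(tokens, tokens[1:]):
                    model[prev].append(nxt)
            return model

        def simulate_turn(model: dict, max_len: int = 20) -> str:
            """Sample a simulated tutor turn token by token."""
            token, out = "<s>", []
            while len(out) < max_len:
                token = random.choice(model[token])
                if token == "</s>":
                    break
                out.append(token)
            return " ".join(out)

        # e.g. train on a couple of (invented) tutor turns and sample one:
        sim = train_bigram(["this is a burchak", "yes a burchak is square"])
        print(simulate_turn(sim))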

    Wireless internet architecture and testbed for wineglass

    Get PDF
    One of the most challenging issues in the area of mobile communication is the deployment of IP-based wireless multimedia networks in public and business environments. The public branch may involve public mobile networks, like UMTS as a 3G system, while the business branch introduces local radio access networks by means of W-LANs. Conventional mobile networks realise mobile-specific functionality, e.g. mobility management or authentication and accounting, by implementing appropriate mechanisms in specific switching nodes (e.g. the SGSN in GPRS). In order to exploit the full potential of IP networking solutions, a replacement of these mechanisms by IP-based solutions might be appropriate. In addition, current and innovative future services in mobile environments require at least soft-guaranteed, differentiated QoS. Therefore the WINE GLASS project investigates and implements enhanced IP-based techniques supporting mobility and QoS in a wireless Internet architecture. As a means to verify the applicability of the implemented solutions, location-aware services deploying both IP-mobility and QoS mechanisms will be implemented and demonstrated.

    Nutrapp

    Get PDF
    Work carried out within the framework of the 'European Project Semester' programme. One group from the European Project Semester based in Vilanova i la Geltrú was tasked with a brief supplied by Nutrapp, a Spanish-based company. The company offers support and guidance in the form of nutritional advice for those who seek it, mainly people who suffer from weight issues or have restricted diets, among other things. Throughout the duration of the European Project Semester, the team worked on designing and programming an application that will enable Nutrapp to prescribe advice to their clients. The initial stages of the project focused on research and learning. Several different research methods were used; it was found that there are a considerable number of similar applications on the market, so developing something innovative is difficult. At the start of the programming phase, it was decided that Android Studio would be the tool for the project. The process of the project is clearly outlined, from details on how the project was managed, including time-management charts, to layout designs and prototyping. The group, inexperienced in this field, had to learn about application design and programming, all of which is included.

    Challenges in Transcribing Multimodal Data: A Case Study

    Get PDF
    Computer-mediated communication (CMC) once meant principally text-based communication mediated by computers, but rapid technological advances in recent years have heralded an era of multimodal communication with a growing emphasis on audio and video synchronous interaction. As CMC, in all its variants (text chats, video chats, forums, blogs, SMS, etc.), has become normalized practice in personal and professional lives, educational initiatives, particularly language teaching and learning, are following suit. For researchers interested in exploring learner interactions in complex technology-supported learning environments, new challenges inevitably emerge. This article looks at the challenges of transcribing and representing multimodal data (visual, oral, and textual) when engaging in computer-assisted language learning research. When transcribing and representing such data, the choices made depend very much on the specific research questions addressed; hence, in this paper we explore these challenges through discussion of a specific case study where the researchers were seeking to explore the emergence of identity through interaction in an online, multimodal situated space. Given the limited amount of literature addressing the transcription of online multimodal communication, it is felt that this article is a timely contribution to researchers interested in exploring interaction in CMC language and intercultural learning environments.