
    Interface Design for Mobile Applications

    Interface design is arguably one of the most important issues in the development of mobile applications. Mobile users often suffer from poor interface design that seriously hinders the usability of mobile applications. The major challenge in the interface design of mobile applications stems from the unique features of mobile devices, such as small screen size, low resolution, and inefficient data entry methods. There is therefore a pressing need for theoretical frameworks or guidelines for designing effective and user-friendly interfaces for mobile applications. Based on a comprehensive literature review, this paper proposes a novel framework for the design of effective mobile interfaces. The framework consists of four major components: information presentation, data entry methods, mobile users, and context. We also provide a set of practical interface design guidelines and some insights into the factors that should be taken into consideration when designing interfaces for mobile applications.

    WAYFINDING AID FOR THE ELDERLY WITH MEMORY DISTURBANCES

    A global increase in the aging population, combined with a growing number of people with dementia, creates new challenges for developing technology that guides people with memory disturbances through their daily activities. In this study we tested a prototype wayfinding aid on predefined routes. Orientation advice was given through three modalities, visual, audio and tactile signals, two of which were used at a time. Nine subjects, aged 59–90 years (median age 84 years), participated in the user study at a rehabilitation unit in Pyhäjärvi, Finland. Their severity of dementia ranged from mild to severe, and their walking abilities ranged from "frail to hobby skier". In addition, two elderly persons were recruited as control subjects. In most cases, orientation with the wayfinding aid on the predefined routes succeeded, with a few misinterpretations. The most common difficulties were straying from the defined route, finding the right door, and distractions from the real-life context, such as other people. The severity of dementia did not seem to predict success in orientation with the wayfinding aid. Using landmarks as the wayfinding advice was not as successful as using "left", "right" and "go straight on" commands.
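
    The study does not describe an implementation, but the delivery scheme it reports, turn commands for a predefined route rendered on two of three output modalities at a time, can be sketched as below. All names (Waypoint, render, the modality strings) are illustrative assumptions, not the study's software.

```python
# A hypothetical sketch, assuming a predefined route of waypoints, each
# carrying one turn command, delivered on exactly two modalities at a time.
from dataclasses import dataclass

@dataclass
class Waypoint:
    name: str       # e.g. "corridor junction"
    command: str    # "left", "right", or "go straight on"

def render(command: str, modalities: tuple[str, str]) -> list[str]:
    """Render one turn command on exactly two of the three modalities."""
    renderers = {
        "visual": lambda c: f"[screen] arrow: {c}",
        "audio": lambda c: f"[speaker] says: '{c}'",
        "tactile": lambda c: f"[vibration] pattern for: {c}",
    }
    return [renderers[m](command) for m in modalities]

route = [Waypoint("entrance hall", "go straight on"),
         Waypoint("corridor junction", "left"),
         Waypoint("day-room door", "right")]

for wp in route:
    for cue in render(wp.command, ("audio", "tactile")):
        print(f"{wp.name}: {cue}")
```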

    Cognitive Principles in Robust Multimodal Interpretation

    Multimodal conversational interfaces provide a natural means for users to communicate with computer systems through multiple modalities such as speech and gesture. To build effective multimodal interfaces, automated interpretation of user multimodal inputs is important. Inspired by previous investigations of cognitive status in multimodal human-machine interaction, we have developed a greedy algorithm for interpreting user referring expressions (i.e., multimodal reference resolution). This algorithm incorporates the cognitive principles of Conversational Implicature and the Givenness Hierarchy and applies constraints from various sources (e.g., temporal, semantic, and contextual) to resolve references. Our empirical results show the advantage of this algorithm in efficiently resolving a variety of user references. Because of its simplicity and generality, this approach has the potential to improve the robustness of multimodal input interpretation.
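
    As a rough illustration of the kind of greedy resolution the abstract describes, the hypothetical sketch below ranks candidate referents by a Givenness Hierarchy status and filters them with simple semantic and temporal constraints before binding the most salient survivor. The statuses, threshold, and data model are assumptions for illustration, not the paper's actual algorithm.

```python
# Minimal sketch: greedily bind a referring expression to the most salient
# candidate that passes semantic and temporal constraints.
import time
from dataclasses import dataclass

# Givenness Hierarchy statuses, most salient first (illustrative subset).
GIVENNESS_RANK = {"in_focus": 0, "activated": 1, "familiar": 2,
                  "uniquely_identifiable": 3}

@dataclass
class Candidate:
    name: str
    semantic_type: str      # e.g. "house", "street"
    status: str             # a Givenness Hierarchy status
    last_mentioned: float   # timestamp of last mention or gesture

def resolve(expression_type: str, now: float,
            candidates: list[Candidate]) -> Candidate | None:
    """Greedy step: pick the highest-givenness candidate that survives
    the semantic and temporal constraints."""
    viable = [c for c in candidates
              if c.semantic_type == expression_type     # semantic constraint
              and now - c.last_mentioned < 30.0]        # temporal constraint
    if not viable:
        return None
    return min(viable, key=lambda c: GIVENNESS_RANK[c.status])

now = time.time()
context = [Candidate("house_12", "house", "activated", now - 5),
           Candidate("house_7", "house", "familiar", now - 8),
           Candidate("main_st", "street", "in_focus", now - 2)]

print(resolve("house", now, context).name)  # -> house_12
```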

    Ambient Multimodality: an Asset for Developing Universal Access to the Information Society

    The paper points out the benefits that can be derived from research advances in the implementation of concepts such as ambient intelligence (AmI) and ubiquitous or pervasive computing for promoting Universal Access (UA) to the Information Society, that is, for enabling everybody, especially Physically Disabled (PD) people, to have easy access to all the computing resources and information services that the coming worldwide Information Society will soon make available to the general public. Following definitions of basic concepts relating to multimodal interaction, the significant contribution of multimodality to developing UA is briefly argued. Then, a short state of the art in AmI research is presented. In the last section we bring out the potential contribution of advances in AmI research and technology to the improvement of computer access for PD people. This claim is supported by the following observations: (i) most projects aiming at implementing AmI focus on the design of new interaction modalities and flexible multimodal user interfaces, which may facilitate PD users' computer access; (ii) targeted applications will support users in a wide range of daily activities performed simultaneously with supporting computing tasks, placing users in contexts where they will be confronted with difficulties similar to those encountered by PD users; (iii) since AmI applications are intended for the general public, a wide range of new interaction devices and flexible processing software will be available, making it possible to provide PD users with human-computer facilities tailored to their specific needs at reasonable expense.

    A framework for multi-modal input in a pervasive computing environment

    Thesis (M.Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2002. Includes bibliographical references (leaves 51-53). This electronic version was submitted by the student author; the certified thesis is available in the Institute Archives and Special Collections.
    In this thesis, we propose a framework that uses multi-domain and multi-modal techniques to disambiguate a variety of natural human input modes. The system is based on the input needs of pervasive computing users. The work extends the Galaxy architecture developed by the Spoken Language Systems group at MIT. Just as speech recognition disambiguates an input waveform by using a grammar to find the best matching phrase, we use the same mechanism to disambiguate other input forms, T9 in particular. A skeleton version of the framework was implemented to show that the framework is feasible and to explore some of the issues that might arise. The system currently works for both T9 and speech modes. The framework can also accommodate any other type of input for which a recognizer can be built, such as Graffiti input.
    by Shalini Agarwal, M.Eng.
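
    The core disambiguation idea, matching an ambiguous T9 digit sequence against a vocabulary the way a speech recognizer matches a waveform against a grammar, can be illustrated with a small hypothetical sketch. The toy vocabulary and frequency scoring below are assumptions; the thesis itself builds on MIT's Galaxy architecture rather than standalone code like this.

```python
# Sketch: disambiguate a T9 digit sequence against a toy unigram vocabulary.
T9_KEYS = {"2": "abc", "3": "def", "4": "ghi", "5": "jkl",
           "6": "mno", "7": "pqrs", "8": "tuv", "9": "wxyz"}

def encode(word: str) -> str:
    """Map a word to the digit sequence a user would type for it."""
    digit_of = {ch: d for d, letters in T9_KEYS.items() for ch in letters}
    return "".join(digit_of[ch] for ch in word.lower())

def disambiguate(digits: str, vocabulary: dict[str, float]) -> str | None:
    """Return the most frequent vocabulary word matching the digit sequence."""
    matches = [w for w in vocabulary if encode(w) == digits]
    return max(matches, key=vocabulary.get) if matches else None

# Toy "grammar": words with illustrative relative frequencies.
vocab = {"good": 0.6, "home": 0.3, "gone": 0.4, "hood": 0.1}

# "4663" is ambiguous between good/home/gone/hood; the scores pick "good".
print(disambiguate("4663", vocab))
```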

    Second Workshop on Modelling of Objects, Components and Agents

    This report contains the proceedings of the workshop on Modelling of Objects, Components, and Agents (MOCA'02), held August 26-27, 2002. The workshop was organized by the 'Coloured Petri Net' Group at the University of Aarhus, Denmark, and the 'Theoretical Foundations of Computer Science' Group at the University of Hamburg, Germany. The homepage of the workshop is: http://www.daimi.au.dk/CPnets/workshop02

    Smart Assistive Technology for People with Visual Field Loss

    Visual field loss results in an inability to see objects clearly in parts of the surrounding environment, which affects the ability to recognise potential hazards. In visual field loss, parts of the visual field are impaired to varying degrees, while other parts may remain healthy. This defect can be debilitating, making daily life activities very stressful. Unlike blind people, people with visual field loss retain some functional vision. It would be beneficial to intelligently augment this vision by adding computer-generated information that increases the user's awareness of possible hazards through early notifications. This thesis introduces a smart hazard attention system that helps people with visual field impairments navigate, using smart glasses and a real-time hazard classification system. It takes the form of a novel, customised, machine learning-based hazard classification system that can be integrated into wearable assistive technology such as smart glasses. The proposed solution provides early notifications based on (1) the visual status of the user and (2) the motion status of the detected object. The presented technology can detect multiple objects at the same time and classify them into different hazard types. The system design consists of four modules: (1) a deep learning-based object detector that recognises static and moving objects in real time, (2) a Kalman filter-based multi-object tracker that tracks the detected objects over time to determine their motion model, (3) a neural network-based classifier that determines the level of danger for each hazard using motion features extracted while the object is in the user's field of vision, and (4) a feedback generation module that translates the hazard level into a smart notification, using the healthy vision within the visual field to increase the user's awareness. For qualitative system testing, normal and personalised defective-vision models were implemented. The personalised model synthesises the visual function of people with visual field defects: actual central and full-field test results were used to build a personalised model that is used in the feedback generation stage, where visual notifications are displayed in the user's healthy visual area. The proposed solution will enhance the quality of life of people suffering from visual field loss. This non-intrusive, wearable hazard detection technology can provide an obstacle avoidance solution and help prevent falls and collisions early, with minimal information.
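
    A structural sketch of the four-module pipeline the abstract describes (detector, tracker, classifier, feedback) is given below. It is an illustrative assumption of how such a pipeline could be wired, not the thesis implementation: the detector is stubbed with canned output, a naive frame-difference velocity estimate stands in for the Kalman filter, and a speed threshold stands in for the learned hazard classifier.

```python
# Hypothetical wiring of the four modules: detect -> track -> classify -> notify.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str
    center: tuple[float, float]   # object centre in image coordinates

@dataclass
class Track:
    label: str
    center: tuple[float, float]
    velocity: tuple[float, float]

def detect(frame_index: int) -> list[Detection]:
    """Module 1 stub: a real system would run a deep-learning detector."""
    return [Detection("bicycle", (10.0 + 2.0 * frame_index, 5.0))]

def track(prev: dict[str, Detection], dets: list[Detection]) -> list[Track]:
    """Module 2 stand-in: frame-difference velocity instead of a Kalman filter."""
    tracks = []
    for d in dets:
        if d.label in prev:
            px, py = prev[d.label].center
            tracks.append(Track(d.label, d.center,
                                (d.center[0] - px, d.center[1] - py)))
        prev[d.label] = d
    return tracks

def hazard_level(t: Track) -> str:
    """Module 3 stand-in: a speed threshold instead of the learned classifier."""
    speed = (t.velocity[0] ** 2 + t.velocity[1] ** 2) ** 0.5
    return "high" if speed > 1.0 else "low"

def notify(t: Track, level: str, healthy_region: str) -> str:
    """Module 4: render the alert inside the user's healthy visual field."""
    return f"[{healthy_region}] {level.upper()} hazard: {t.label}"

prev: dict[str, Detection] = {}
for frame in range(3):
    for trk in track(prev, detect(frame)):
        print(notify(trk, hazard_level(trk), "upper-left"))
```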

    Socially aware conversational agents
