
    Effects of See Through Interfaces on User Acceptance of Small Screen Information Systems

    Small-screen devices such as mobile phones are increasingly pervasive. Reduced screen areas compromise the ease of use of such devices, and consequently a key concern for system designers becomes maximizing the available screen space. On large-screen displays, menus can overlap and obscure other elements while remaining displayed to the user simultaneously. This is generally not the case with small screens: when a user selects from an on-screen menu, that menu must ‘vacate’ the screen before another appears. Menu translucency, where a user can see through an on-screen menu to the displayed elements beneath, is a possible solution to small-screen display maximization. Based on experimental evidence from 70 participants, and using an extended Technology Acceptance Model (TAM), this research examines the effect of on-screen translucent menus on perceptions of ease-of-use, usefulness, and enjoyment for a third-generation mobile phone prototype user interface. We offer explanations for our findings and discuss implications for practitioners and researchers.

    Oral messages improve visual search

    Input multimodality combining speech and hand gestures has motivated numerous usability studies. By contrast, issues relating to the design and ergonomic evaluation of multimodal output messages combining speech with visual modalities have not yet been addressed extensively. The experimental study presented here addresses one of these issues. Its aim is to assess the actual efficiency and usability of oral system messages including brief spatial information for helping users locate objects on crowded displays rapidly. Target presentation mode, scene spatial structure, and task difficulty were chosen as independent variables. Two conditions were defined: the visual target presentation mode (VP condition) and the multimodal target presentation mode (MP condition). Each participant carried out two blocks of visual search tasks (120 tasks per block, one block per condition). Target presentation mode, scene structure, and task difficulty were found to be significant factors. Multimodal target presentation proved more efficient than visual target presentation. In addition, participants expressed very positive judgments of multimodal target presentations, which were preferred to visual presentations by a majority of participants. Moreover, the contribution of spatial messages to visual search speed and accuracy was influenced by scene spatial structure and task difficulty: (i) messages improved search efficiency to a lesser extent for 2D array layouts than for some other symmetrical layouts, although the use of 2D arrays for displaying pictures currently prevails; (ii) message usefulness increased with task difficulty. Most of these results are statistically significant.

    PainDroid: An android-based virtual reality application for pain assessment

    Earlier studies in the field of pain research suggest that few efficient interventions currently exist in response to the exponential increase in the prevalence of pain. In this paper, we present an Android application (PainDroid) with multimodal functionality that could be enhanced with Virtual Reality (VR) technology, designed for the purpose of improving the assessment of this notoriously difficult medical concern. PainDroid has been evaluated for its usability and acceptability with a pilot group of potential users and clinicians, with initial results suggesting that it can be an effective and usable tool for improving the assessment of pain. Participant experiences indicated that the application was easy to use, and its potential was similarly appreciated by the clinicians involved in the evaluation. Our findings may be of considerable interest to healthcare providers, policy makers, and other parties actively involved in the area of pain and VR research.

    User-centered design of a dynamic-autonomy remote interaction concept for manipulation-capable robots to assist elderly people in the home

    In this article, we describe the development of a human-robot interaction concept for service robots that assist elderly people with physical tasks in the home. Our approach is based on the insight that robots are not yet able to handle all tasks autonomously with sufficient reliability in the complex and heterogeneous environments of private homes. We therefore employ remote human operators to assist with tasks a robot cannot handle completely autonomously. Our development methodology was user-centric and iterative, with six user studies carried out at various stages involving a total of 241 participants. The concept is under implementation on the Care-O-bot 3 robotic platform. The main contributions of this article are (1) the results of a survey, in the form of a ranking of the demands of elderly people and informal caregivers for a range of 25 robot services, (2) the results of an ethnography investigating the suitability of emergency teleassistance and telemedical centers for incorporating robotic teleassistance, and (3) a user-validated human-robot interaction concept with three user roles and three corresponding user interfaces, designed as a solution to the problem of engineering reliable service robots for home environments.

    Bringing tabletop technologies to kindergarten children

    Taking computer technology away from the desktop and into a more physical, manipulative space is known to provide many benefits and is generally considered to result in a system that is easier to learn and more natural to use. This paper describes a design solution that allows kindergarten children to reap the benefits of the new pedagogical possibilities that tangible interaction and tabletop technologies offer for manipulative learning. After analyzing children's cognitive and psychomotor skills, we designed and tuned a prototype game suitable for children aged 3 to 4 years. Our prototype uniquely combines low-cost tangible interaction and tabletop technology with tutored learning. The design was based on observation of children using the technology, letting them play freely with the application during three play sessions. These observational sessions informed the design decisions for the game while also confirming the children's enjoyment of the prototype.

    Integration of an adaptive infotainment system in a vehicle and validation in real driving scenarios

    An increasing number of services, functionalities, and interfaces are being incorporated into current vehicles, and these may overload the driver's capacity to perform primary driving tasks adequately. For this reason, a strategy for easing driver interaction with the infotainment system must be defined, and a good balance between road safety and driver experience must be achieved. An adaptive Human Machine Interface (HMI) that manages the presentation of information and restricts drivers' interaction in accordance with driving complexity was designed and evaluated. For this purpose, the driving complexity value employed as a reference was computed by a predictive model, and the adaptive interface was designed following a set of proposed HMI principles. The system was validated through acceptance and usability tests in real driving scenarios. Results showed that the system performs well in real driving scenarios. Positive feedback was also received from participants, endorsing the benefits of integrating this kind of system with regard to driving experience and road safety.