
    Evaluating the development of wearable devices, personal data assistants and the use of other mobile devices in further and higher education institutions

    This report presents a technical evaluation and case studies of the use of wearable and mobile computing devices in further and higher education. The first section provides a technical evaluation of the current state of the art in wearable and mobile technologies and reviews several innovative wearable products developed in recent years. The second section examines three scenarios for further and higher education in which wearable and mobile devices are currently being used: (i) the delivery of lectures over mobile devices, (ii) the augmentation of the physical campus with a virtual and mobile component, and (iii) the use of PDAs and mobile devices in field studies. The first scenario explores the use of web lectures, including an evaluation of IBM's Web Lecture Services and 3Com's learning assistant. The second scenario explores models for a campus without walls, evaluating the Handsprings to Learning projects at East Carolina University and ActiveCampus at the University of California San Diego. The third scenario explores the use of wearable and mobile devices for field trips, examining the San Francisco Exploratorium's tool for capturing museum visits and the Cybertracker field computer. The third section of the report explores the uses and purposes of wearable and mobile devices in tertiary education, identifying key trends and issues to be considered when piloting these devices in educational contexts.

    Mixed reality participants in smart meeting rooms and smart home environments

    Human–computer interaction requires modeling of the user. A user profile typically contains preferences, interests, characteristics, and interaction behavior. However, in multimodal interaction with a smart environment, the user also displays characteristics that show how he or she, not necessarily consciously, provides the environment with useful verbal and nonverbal input and feedback. Especially in ambient intelligence settings we encounter situations where the environment supports interaction between itself, smart objects (e.g., mobile robots, smart furniture), and the human participants. It is therefore useful for the profile to contain a physical representation of the user obtained by multimodal capturing techniques. We discuss the modeling and simulation of interacting participants in a virtual meeting room, show how remote meeting participants can take part in meeting activities, and offer some observations on translating these research results to smart home environments.
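    A minimal sketch of how such a profile might be represented, extending the usual preference and interest fields with a physical state obtained from multimodal capture; all class and field names below are illustrative assumptions, not taken from the paper.

    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class PhysicalState:
        """Physical representation of the user captured multimodally."""
        position: tuple                       # (x, y, z) location in the environment
        gaze_target: Optional[str] = None     # object or participant being looked at
        gesture: Optional[str] = None         # e.g. "pointing", "nodding"

    @dataclass
    class UserProfile:
        """User model combining classic profile fields with a physical representation."""
        preferences: dict = field(default_factory=dict)         # e.g. {"lighting": "dim"}
        interests: list = field(default_factory=list)
        characteristics: dict = field(default_factory=dict)     # e.g. {"language": "en"}
        interaction_history: list = field(default_factory=list)  # verbal/nonverbal events
        physical: Optional[PhysicalState] = None                 # from multimodal capture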

    Supporting medical ward rounds through mobile task and process management

    In a hospital, ward rounds are crucial for task coordination and decision-making. In the course of knowledge-intensive patient treatment processes, it should be possible to quickly define tasks and to assign them to clinicians in a flexible manner. In current practice, however, task management is not properly supported. During a ward round, emerging tasks are jotted down with pen and paper, and their processing is prone to errors. In particular, staff members must manually keep track of the status of their tasks. To relieve staff from this manual task management, we introduce the MedicalDo (MEDo) approach. It transforms the pen-and-paper worksheet into a digital user interface on a mobile device, integrating process support, task management, and access to the patient record. Interviews with medical staff members revealed a strong demand for mobile process and task support, which was further confirmed in a case study we conducted in four different wards. Finally, user experiments demonstrated that MEDo puts task acquisition on a level comparable to that of pen and paper. Overall, MEDo enables users to create, monitor, and share medical tasks on a mobile, user-friendly platform.
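    As a rough illustration of the kind of digital task object that replaces the pen-and-paper note, the sketch below models a ward-round task with an assignee and an automatically tracked status; the names and fields are assumptions for illustration, not MEDo's actual data model.

    from dataclasses import dataclass, field
    from datetime import datetime
    from enum import Enum

    class TaskStatus(Enum):
        OPEN = "open"
        IN_PROGRESS = "in progress"
        DONE = "done"

    @dataclass
    class WardTask:
        patient_id: str
        description: str                      # e.g. "order chest X-ray"
        assignee: str                         # clinician the task is assigned to
        status: TaskStatus = TaskStatus.OPEN
        created_at: datetime = field(default_factory=datetime.now)

        def reassign(self, clinician: str) -> None:
            self.assignee = clinician         # flexible reassignment during the round

        def complete(self) -> None:
            self.status = TaskStatus.DONE     # status tracked by the system, not by hand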

    Personalization framework for adaptive robotic feeding assistance

    The deployment of robots at home must involve robots with pre-defined skills and the capability of having their behavior personalized by non-expert users. A framework to tackle this personalization is presented and applied to an automatic feeding task. The personalization involves the caregiver providing several examples of feeding using Learning-by-Demonstration, and a ProMP formalism to compute an overall trajectory and the variance along the path. Experiments show the validity of the approach in generating different feeding motions that adapt to the user's preferences, automatically extracting the relevant task parameters. The importance of the nature of the demonstrations is also assessed, and two training strategies are compared.
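    As a rough sketch of the ProMP idea used here, each demonstration is projected onto basis-function weights, and the mean and covariance of those weights yield the overall trajectory and the variance along the path. This is a minimal illustration assuming one-dimensional, time-aligned demonstrations and a Gaussian RBF basis; the function names are illustrative, not taken from the paper's implementation.

    import numpy as np

    def rbf_basis(T, n_basis=10, width=0.05):
        """Normalized Gaussian radial basis functions over time in [0, 1]."""
        t = np.linspace(0, 1, T)[:, None]             # (T, 1)
        centers = np.linspace(0, 1, n_basis)[None]    # (1, n_basis)
        phi = np.exp(-(t - centers) ** 2 / (2 * width))
        return phi / phi.sum(axis=1, keepdims=True)   # (T, n_basis)

    def fit_promp(demos, n_basis=10, ridge=1e-6):
        """Fit per-demonstration weights by ridge regression, then pool them."""
        n_demos, T = demos.shape
        phi = rbf_basis(T, n_basis)
        A = phi.T @ phi + ridge * np.eye(n_basis)
        W = np.linalg.solve(A, phi.T @ demos.T).T     # (n_demos, n_basis)
        mu_w = W.mean(axis=0)                         # mean weights
        sigma_w = np.cov(W, rowvar=False)             # weight covariance
        mean_traj = phi @ mu_w                        # overall trajectory
        var_traj = np.einsum('tb,bc,tc->t', phi, sigma_w, phi)  # variance along path
        return mean_traj, var_traj

    # Toy usage: three noisy demonstrations of a one-dimensional feeding motion profile.
    t = np.linspace(0, 1, 100)
    demos = np.stack([np.sin(np.pi * t) + 0.05 * np.random.randn(100) for _ in range(3)])
    mean_traj, var_traj = fit_promp(demos)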

    Understanding Interactions for Smart Wheelchair Navigation in Crowds


    Evaluation of a context-aware voice interface for Ambient Assisted Living: qualitative user study vs. quantitative system evaluation

    This paper presents an experiment with seniors and people with visual impairment in a voice-controlled smart home using the SWEET-HOME system. The experiment shows some weaknesses in automatic speech recognition that must be addressed, as well as the need for better adaptation to the user and the environment. Indeed, users were disturbed by the rigid structure of the grammar and were eager to adapt it to their own preferences. Surprisingly, although no humanoid aspect was introduced in the system, the senior participants were inclined to personify it. Despite these areas for improvement, the system was assessed favourably as diminishing most participants' fears related to the loss of autonomy.
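    To make the grammar issue concrete, below is a purely illustrative sketch of a fixed command set of the kind participants found too rigid, with a per-user synonym table as one simple way to let users adapt the phrasing; the commands and names are assumptions, not the actual SWEET-HOME grammar.

    from typing import Optional

    # Fixed command grammar (illustrative): only these exact phrases are accepted.
    COMMANDS = {
        "turn on the light",
        "turn off the light",
        "close the blinds",
        "call for help",
    }

    # Per-user adaptation: map the user's preferred wording onto canonical commands.
    USER_SYNONYMS = {
        "lights on": "turn on the light",
        "help me": "call for help",
    }

    def interpret(utterance: str) -> Optional[str]:
        """Map a recognized utterance to a canonical command, honouring user phrasing."""
        phrase = utterance.lower().strip()
        phrase = USER_SYNONYMS.get(phrase, phrase)   # user-preferred wording first
        return phrase if phrase in COMMANDS else None

    print(interpret("Lights on"))   # -> "turn on the light"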