
    Pickup usability dominates: a brief history of mobile text entry research and adoption

    Text entry on mobile devices (e.g. phones and PDAs) has been a research challenge since devices shrank below laptop size: mobile devices are simply too small to carry a traditional full-size keyboard. There has been a profusion of research into text entry techniques for smaller keyboards and touch screens; some of these techniques have become mainstream, while others have not lived up to early expectations. As the mobile phone industry moves to mainstream touch screen interaction, we review the range of input techniques for mobiles, together with the evaluations that have been conducted to assess their validity: from theoretical modelling through to formal usability experiments. We also report initial results on iPhone text entry speed.
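    Text entry studies of the kind surveyed here conventionally report speed in words per minute, where one "word" is standardized to five characters, spaces included. A minimal sketch of that metric (the function name is ours):

```python
def words_per_minute(transcribed_text: str, seconds: float) -> float:
    """Standard text-entry speed metric: one 'word' = 5 characters,
    spaces and punctuation included."""
    words = len(transcribed_text) / 5.0
    return words / (seconds / 60.0)

# A 55-character phrase transcribed in 30 seconds -> 22.0 WPM
print(words_per_minute("the quick brown fox jumps over the lazy dog once again.", 30.0))
```

    Corrected error rates are usually reported alongside WPM, since raw speed alone rewards careless typing.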

    Overcoming barriers and increasing independence: service robots for elderly and disabled people

    This paper discusses the potential for service robots to overcome barriers and increase the independence of elderly and disabled people. It includes a brief overview of existing uses of service robots by disabled and elderly people, surveys advances in technology that will make new uses possible, and suggests some of these new applications. The paper also considers the design and other conditions to be met for user acceptance, discusses the complementarity of assistive service robots and personal assistance, and considers the types of applications and users for which service robots are and are not suitable.

    Interactive form creation: exploring the creation and manipulation of free form through the use of interactive multiple input interface

    Most current CAD systems support only the two most common input devices: a mouse and a keyboard, which limit the degree of interaction a user can have with the system. However, it is not uncommon for users to work together on the same computer during a collaborative task. Besides that, people tend to use both hands to manipulate 3D objects: one hand orients the object while the other performs some operation on it. The same approach can be applied to computer modelling in the conceptual phase of the design process: a designer can rotate and position an object with one hand, and manipulate the shape (deform it) with the other. Accordingly, the 3D object can be changed easily and intuitively through interactive manipulation with both hands. The research investigates the manipulation and creation of free form geometries through the use of interactive interfaces with multiple input devices. First, the creation of the 3D model is discussed and several different types of models are illustrated. Furthermore, different tools that allow the user to control the 3D model interactively are presented. Three experiments were conducted using different interactive interfaces; two bi-manual techniques were compared with the conventional one-handed approach. Finally, it is demonstrated that the use of new and multiple input devices can offer many opportunities for form creation. The problem is that few, if any, systems make it easy for the user or the programmer to use new input devices.
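    The bimanual division of labour described above — one hand orienting the model while the other deforms it — can be sketched as two independent per-frame transforms composed over the model's vertices. The axis choices and the stretch deformation below are illustrative, not the thesis's actual operations:

```python
import math

def rotate_y(vertex, angle):
    """Orientation input (e.g. non-dominant hand): rotate about the Y axis."""
    x, y, z = vertex
    c, s = math.cos(angle), math.sin(angle)
    return (c * x + s * z, y, -s * x + c * z)

def stretch_x(vertex, factor):
    """Deformation input (e.g. dominant hand): scale along the X axis."""
    x, y, z = vertex
    return (x * factor, y, z)

def apply_bimanual(vertices, angle, factor):
    # Each hand drives its own transform; both compose every frame,
    # so orientation and deformation can proceed simultaneously.
    return [stretch_x(rotate_y(v, angle), factor) for v in vertices]
```

    The key property is that the two transforms are independent, so neither hand's input blocks the other — the parallelism that the one-handed mouse-and-keyboard workflow lacks.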

    Gesture Based Control of Semi-Autonomous Vehicles

    The objective of this investigation is to explore the use of hand gestures to control semi-autonomous vehicles, such as quadcopters, using realistic, physics-based simulations. This involves identifying natural gestures to control basic functions of a vehicle, such as maneuvering and onboard equipment operation, and building simulations in the Unity game engine to investigate preferred use of those gestures. In addition to creating a realistic operating experience, human factors associated with limitations on physical hand motion and information management are also considered in the simulation development process. Testing with external participants using a recreational quadcopter simulation built in Unity was conducted to assess the suitability of the simulation and preferences between a joystick approach and the gesture-based approach. Initial feedback indicated that the simulation represented the actual vehicle performance well and that the joystick was preferred over the gesture-based approach. Improvements in the gesture-based control are documented as additional features are added to the simulation, such as basic maneuver training and additional vehicle positioning information, to help the user learn the gesture-based interface, along with active control concepts to interpret and apply vehicle forces and torques. Tests were also conducted with an actual ground vehicle to investigate whether knowledge and skill from the simulated environment transfer to a real-life scenario. To assess this, an immersive virtual reality (VR) simulation was built in Unity as a training environment for learning to control a remote control car using gestures, followed by control of the actual ground vehicle. Observations and participant feedback indicated that the range of hand movement and hand positions transferred well to the actual demonstration.
    This illustrated that the VR simulation environment provides a suitable learning experience and an environment from which to assess human performance, thus also validating the observations from earlier tests. Overall results indicate that the gesture-based approach holds promise given the emergence of new technology, but additional work needs to be pursued. This includes algorithms to process gesture data into more stable and precise vehicle commands, and training environments to familiarize users with this new interface concept.
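    The abstract does not give its gesture-to-command mapping, but a common scheme — hypothetical here, not the authors' algorithm — maps hand displacement from a neutral pose to normalized vehicle commands, with a dead zone to suppress the tracking noise that makes raw gesture commands unstable:

```python
def hand_to_command(dx: float, dy: float, dz: float,
                    dead_zone: float = 0.05, gain: float = 2.0) -> dict:
    """Map hand displacement (metres from a neutral pose) to normalized
    quadcopter commands in [-1, 1]. Axis assignments, dead-zone width and
    gain are illustrative values only."""
    def axis(v: float) -> float:
        if abs(v) < dead_zone:          # ignore small tremor near neutral
            return 0.0
        v = (abs(v) - dead_zone) * gain * (1 if v > 0 else -1)
        return max(-1.0, min(1.0, v))   # clamp to the command range
    return {"roll": axis(dx), "thrust": axis(dy), "pitch": axis(dz)}
```

    Smoothing the displacement over a few frames before this mapping is a typical further step toward the "more stable and precise vehicle commands" the authors call for.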

    Multimodality with Eye tracking and Haptics: A New Horizon for Serious Games?

    The goal of this review is to illustrate the emerging use of multimodal virtual reality that can benefit learning-based games. The review begins with an introduction to multimodal virtual reality in serious games, followed by a brief discussion of why the cognitive processes involved in learning and training are enhanced in immersive virtual environments. We first outline studies that have used eye tracking and haptic feedback independently in serious games, and then review some innovative applications that have combined eye tracking and haptic devices to provide applicable multimodal frameworks for learning-based games. Finally, some general conclusions are identified and clarified in order to advance current understanding of multimodal serious game production and to explore possible areas for new applications.

    Usability and Feasibility of PIERS on the Move: An mHealth App for Pre-Eclampsia Triage.

    BACKGROUND: Pre-eclampsia is one of the leading causes of maternal death and morbidity in low-resource countries, due to delays in case identification and a shortage of health workers trained to manage the disorder. Pre-eclampsia Integrated Estimate of RiSk (PIERS) on the Move (PotM) is a low-cost, easy-to-use, mobile health (mHealth) platform created to aid health workers in making decisions around the management of hypertensive pregnant women. PotM combines two previously successful innovations into a single mHealth app: the miniPIERS risk assessment model and the Phone Oximeter. OBJECTIVE: The aim of this study was to assess the usability of PotM with mid-level health workers in order to iteratively refine the system. METHODS: Development of the PotM user interface involved usability testing with target end-users in South Africa. Users were asked to complete clinical scenario tasks, speaking aloud to give feedback on the interface, and then to complete a questionnaire. The tool was then evaluated in a pilot clinical evaluation at Tygerberg Hospital, Cape Town. RESULTS: After ethical approval and informed consent, 37 nurses and midwives evaluated the tool. During Study 1, major issues in the functionality of the touch-screen keyboard and date scroll wheels were identified (total errors n=212); during Study 2, major improvements in navigation of the app were suggested (total errors n=144). Overall, users rated the app as usable on the Computer Systems Usability Questionnaire; median (range) values were 2 (1-6) for Study 1 and 1 (1-7) for Study 2. To demonstrate feasibility, PotM was used by one research nurse for the pilot clinical study. In total, more than 500 evaluations were performed on more than 200 patients. The median (interquartile range) time to complete an evaluation was 4 min 55 sec (3 min 25 sec to 6 min 56 sec).
    CONCLUSIONS: By including target end-users in the design and evaluation of PotM, we have developed an app that can be easily integrated into health care settings in low- and middle-income countries. Usability problems were often related to mobile phone features (e.g., scroll wheels, touch screen use). A larger scale evaluation of the clinical impact of this tool is underway.
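    Risk models of the miniPIERS kind are logistic regressions: a weighted sum of clinical predictors passed through the logistic function to give a probability. The sketch below shows only this general form — the predictor names, coefficients and intercept are placeholders, not the published miniPIERS model:

```python
import math

def logistic_risk(features: dict, coefficients: dict, intercept: float) -> float:
    """Generic logistic-regression risk score in (0, 1).
    All weights here are illustrative, NOT the published miniPIERS values."""
    z = intercept + sum(coefficients[name] * value
                        for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

# Made-up predictors and weights, purely to show the shape of the computation.
risk = logistic_risk({"systolic_bp": 150, "sp_o2": 95},
                     {"systolic_bp": 0.02, "sp_o2": -0.05},
                     intercept=-2.0)
```

    Embedding such a model in an app, as PotM does, reduces the clinical decision to entering the predictors and reading a thresholded probability.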

    A survey of haptics in serious gaming

    Serious gaming often requires a high level of realism for training and learning purposes. Haptic technology has proven useful in many applications, adding a perception modality complementary to audio and vision, and provides a novel user experience that enhances the immersion of virtual reality with a physical control layer. This survey focuses on haptic technology and its applications in serious gaming. Several categories of related applications are listed and discussed in detail, primarily where haptics acts as a cognitive aid or as a main component of serious game design. We categorize haptic devices into tactile, force-feedback and hybrid ones to suit different haptic interfaces, followed by a description of common haptic gadgets in gaming. Haptic modelling methods, in particular the SDKs and libraries available for commercial or academic use, are summarized. We also analyze existing research difficulties and technology bottlenecks with haptics, and anticipate future research directions.

    Towards a multimodal interaction space: Categorisation and applications

    Based on the authors' extensive experience developing interactive systems, a framework for the description and analysis of interaction has been developed. The dimensions of this multimodal interaction space have been identified as sensory modalities, modes and levels of interaction. To illustrate and validate this framework, development of multimodal interaction styles is carried out and interactions in the real world are studied, going from theory to practice and back again. The paper describes the framework and two recent projects, one in the field of interactive architecture and another in the field of multimodal HCI research. Both projects use multiple modalities for interaction, particularly movement-based interaction styles. © Springer-Verlag London Limited 2007

    SimLex-999: Evaluating Semantic Models with (Genuine) Similarity Estimation

    We present SimLex-999, a gold standard resource for evaluating distributional semantic models that improves on existing resources in several important ways. First, in contrast to gold standards such as WordSim-353 and MEN, it explicitly quantifies similarity rather than association or relatedness, so that pairs of entities that are associated but not actually similar (Freud, psychology) have a low rating. We show that, via this focus on similarity, SimLex-999 incentivizes the development of models with a different, and arguably wider, range of applications than those which reflect conceptual association. Second, SimLex-999 contains a range of concrete and abstract adjective, noun and verb pairs, together with an independent rating of concreteness and (free) association strength for each pair. This diversity enables fine-grained analyses of the performance of models on concepts of different types, and consequently greater insight into how architectures can be improved. Further, unlike existing gold standard evaluations, for which automatic approaches have reached or surpassed the inter-annotator agreement ceiling, state-of-the-art models perform well below this ceiling on SimLex-999. There is therefore plenty of scope for SimLex-999 to quantify future improvements to distributional semantic models, guiding the development of the next generation of representation-learning architectures.
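    Models are scored on resources like SimLex-999 by computing a model similarity for each word pair and correlating that ranking with the gold ratings, conventionally via Spearman's rho. A stdlib-only sketch of the statistic itself (loading the pairs and the model's similarity function are assumed to exist elsewhere):

```python
def rankdata(values):
    """Average ranks (1-based); tied values share their mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1                      # extend the run of tied values
        mean_rank = (i + j) / 2.0 + 1.0
        for k in range(i, j + 1):
            ranks[order[k]] = mean_rank
        i = j + 1
    return ranks

def spearman(xs, ys):
    """Spearman's rho: Pearson correlation of the two rank vectors."""
    rx, ry = rankdata(xs), rankdata(ys)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```

    Rank correlation is used rather than Pearson because only the ordering of pairs matters, not the scale of the model's similarity scores.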

    Multimodal interface for an intelligent wheelchair

    Integrated master's thesis. Electrical and Computer Engineering (Automation major). Faculdade de Engenharia. Universidade do Porto. 200