    The design-by-adaptation approach to universal access: learning from videogame technology

    This paper proposes an alternative approach to the design of universally accessible interfaces to that provided by formal design frameworks applied ab initio to the development of new software. This approach, design-by-adaptation, involves the transfer of interface technology and/or design principles from one application domain to another, in situations where the recipient domain is similar to the host domain in terms of modelled systems, tasks and users. Using the example of interaction in 3D virtual environments, the paper explores how principles underlying the design of videogame interfaces may be applied to a broad family of visualization and analysis software which handles geographical data (virtual geographic environments, or VGEs). One of the motivations behind the current study is that VGE technology lags some way behind videogame technology in the modelling of 3D environments, and has a less-developed track record in providing the variety of interaction methods needed by users with varied levels of experience to undertake varied tasks in 3D virtual worlds. The current analysis extracted a set of interaction principles from videogames, which were used to devise a set of 3D task interfaces that have been implemented in a prototype VGE for formal evaluation.

    Interactive form creation: exploring the creation and manipulation of free form through the use of interactive multiple input interface

    Most current CAD systems support only the two most common input devices, a mouse and a keyboard, which impose a limit on the degree of interaction a user can have with the system. However, it is not uncommon for users to work together on the same computer during a collaborative task. Besides that, people tend to use both hands to manipulate 3D objects: one hand is used to orient the object while the other hand is used to perform some operation on it. The same approach could be applied to computer modelling in the conceptual phase of the design process. A designer can rotate and position an object with one hand, and manipulate the shape (deform it) with the other hand. Accordingly, the 3D object can be easily and intuitively changed through interactive manipulation with both hands. The research investigates the manipulation and creation of free-form geometries through the use of interactive interfaces with multiple input devices. First the creation of the 3D model will be discussed and several different types of models will be illustrated. Furthermore, different tools that allow the user to control the 3D model interactively will be presented. Three experiments were conducted using different interactive interfaces; two bi-manual techniques were compared with the conventional one-handed approach. Finally, it will be demonstrated that the use of new and multiple input devices can offer many opportunities for form creation. The problem is that few, if any, systems make it easy for the user or the programmer to use new input devices.
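
    As a rough illustration of the bi-manual division of labour described above, the following sketch routes two hypothetical input streams to a single model: the non-dominant hand's device orients the object while the dominant hand's device applies a free-form deformation. The class names, event formats and the deform_at operation are assumptions made for illustration, not the system described in the abstract.

        # Minimal sketch: two input devices acting on one 3D model (hypothetical API).
        from dataclasses import dataclass, field

        @dataclass
        class Model3D:
            rotation: tuple = (0.0, 0.0, 0.0)             # Euler angles (radians)
            offsets: dict = field(default_factory=dict)   # control vertex id -> displacement

            def rotate(self, dx, dy, dz):
                rx, ry, rz = self.rotation
                self.rotation = (rx + dx, ry + dy, rz + dz)

            def deform_at(self, vertex_id, displacement):
                # Accumulate a free-form displacement at one control vertex.
                self.offsets[vertex_id] = self.offsets.get(vertex_id, 0.0) + displacement

        def process_frame(model, orient_event, deform_event):
            """Route one frame of events: non-dominant hand orients, dominant hand deforms."""
            if orient_event is not None:
                model.rotate(*orient_event)       # e.g. (d_pitch, d_yaw, d_roll)
            if deform_event is not None:
                model.deform_at(*deform_event)    # e.g. (vertex_id, displacement)

        model = Model3D()
        process_frame(model, orient_event=(0.0, 0.1, 0.0), deform_event=(42, 0.02))
        print(model.rotation, model.offsets)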

    The orienting mouse: An input device with attitude

    This paper presents a modified computer mouse, the Orienting Mouse, which delivers orientation as an additional dimension of input; when the mouse is moved on a flat surface it reports, in addition to the conventional x, y translation, angular rotation of the device in the x, y plane. The orienting mouse preserves important properties of the standard mouse: all measurements are relative, and movement is tracked only while the mouse is on its flat surface. If the user lets go of the mouse, leaving it on the surface, its position and orientation do not change until it is touched again. Picking the mouse up and putting it down in a different orientation leaves the reported angle and position unchanged. While the concept of sensing mouse rotation is not new, our work focuses on movement and navigation in 3D, rather than on precision positioning tasks. We describe a number of sample applications developed to test its effectiveness in this context. Specific features exploited and described include (i) an algorithm for calculating the mouse angle which cancels drift between the two sensors, and (ii) the use of angular gearing, which avoids unnatural and uncomfortable hand positions when moving through large angles; informal user testing validates this idea.
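
    The abstract mentions computing the mouse angle from two sensors and applying angular gearing. The sketch below is a generic rigid-body estimate of that idea, assuming two displacement sensors mounted a fixed distance apart along the device's forward axis; the baseline and gearing values are assumed, and this is not the authors' drift-cancelling algorithm.

        import math

        SENSOR_BASELINE = 0.06   # assumed distance between the two sensors (metres)
        ANGULAR_GEARING = 2.5    # assumed gain: device rotation -> reported rotation

        def integrate_motion(x, y, theta, d_front, d_rear, gearing=ANGULAR_GEARING):
            """Update pose from one frame of (dx, dy) displacements of the two sensors.

            Displacements are measured in the device frame. For small per-frame
            motions, the rotation is approximately the differential sideways
            displacement divided by the sensor baseline.
            """
            # Translation: average of the two sensor displacements.
            dx = (d_front[0] + d_rear[0]) / 2.0
            dy = (d_front[1] + d_rear[1]) / 2.0
            # Rotation: differential lateral displacement over the baseline.
            d_theta = (d_front[1] - d_rear[1]) / SENSOR_BASELINE
            # Angular gearing so large turns do not require uncomfortable wrist angles.
            theta += gearing * d_theta
            # Rotate the device-frame translation into the surface (world) frame.
            x += dx * math.cos(theta) - dy * math.sin(theta)
            y += dx * math.sin(theta) + dy * math.cos(theta)
            return x, y, theta

        pose = (0.0, 0.0, 0.0)
        pose = integrate_motion(*pose, d_front=(0.001, 0.0004), d_rear=(0.001, -0.0004))
        print(pose)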

    Development of system supervision and control software for a micromanipulation system

    This paper presents the realization of a modular software architecture capable of handling the complex supervision structure of a multi-degree-of-freedom, open-architecture, reconfigurable micro assembly workstation. This software architecture, initially developed for a micro assembly workstation, is later structured to form a framework and design guidelines for precise motion control and system supervision tasks, explained subsequently through an application on a micro assembly workstation. The software is separated by design into two layers, one for real-time and the other for non-real-time operation. These two layers are composed of functional modules that form the building blocks for the precise motion control and system supervision of complex mechatronic systems.
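
    To make the layered, modular structure concrete, here is a minimal sketch of how functional modules might be registered into a real-time layer and a non-real-time (supervision) layer. The class names, example modules and execution periods are assumptions for illustration; the abstract does not describe the workstation software at this level of detail.

        from abc import ABC, abstractmethod

        class FunctionalModule(ABC):
            """Building block shared by both layers of the architecture."""
            @abstractmethod
            def step(self) -> None: ...

        class Layer:
            def __init__(self, name: str, period_s: float):
                self.name = name
                self.period_s = period_s      # intended execution period of the layer
                self.modules: list[FunctionalModule] = []

            def register(self, module: FunctionalModule) -> None:
                self.modules.append(module)

            def run_once(self) -> None:
                for module in self.modules:   # execute every module of this layer once
                    module.step()

        # Hypothetical example modules.
        class AxisController(FunctionalModule):
            def step(self) -> None:
                pass  # closed-loop position control of one axis would go here

        class SupervisionGUI(FunctionalModule):
            def step(self) -> None:
                pass  # status display and operator commands would go here

        realtime = Layer("real-time", period_s=0.001)        # e.g. 1 kHz motion control
        supervision = Layer("non-real-time", period_s=0.05)  # e.g. 20 Hz supervision
        realtime.register(AxisController())
        supervision.register(SupervisionGUI())
        realtime.run_once(); supervision.run_once()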

    In-home and remote use of robotic body surrogates by people with profound motor deficits

    By controlling robots comparable to the human body, people with profound motor deficits could potentially perform a variety of physical tasks for themselves, improving their quality of life. The extent to which this is achievable has been unclear due to the lack of suitable interfaces by which to control robotic body surrogates and a dearth of studies involving substantial numbers of people with profound motor deficits. We developed a novel, web-based augmented reality interface that enables people with profound motor deficits to remotely control a PR2 mobile manipulator from Willow Garage, a human-scale, wheeled robot with two arms. We then conducted two studies to investigate the use of robotic body surrogates. In the first study, 15 novice users with profound motor deficits from across the United States controlled a PR2 in Atlanta, GA to perform a modified Action Research Arm Test (ARAT) and a simulated self-care task. Participants achieved clinically meaningful improvements on the ARAT, and 12 of 15 participants (80%) successfully completed the simulated self-care task. Participants agreed that the robotic system was easy to use, was useful, and would provide a meaningful improvement in their lives. In the second study, one expert user with profound motor deficits had free use of a PR2 in his home for seven days. He performed a variety of self-care and household tasks, and also used the robot in novel ways. Taking both studies together, our results suggest that people with profound motor deficits can improve their quality of life using robotic body surrogates, and that they can gain benefit with only low-level robot autonomy and without invasive interfaces. However, methods to reduce the rate of errors and increase operational speed merit further investigation.

    Mapping Tasks to Interactions for Graph Exploration and Graph Editing on Interactive Surfaces

    Graph exploration and editing are still mostly considered independently, and existing systems are not designed for today's interactive surfaces such as smartphones, tablets or tabletops. When developing a system for these modern devices that supports both graph exploration and graph editing, it is necessary to 1) identify which basic tasks need to be supported, 2) determine what interactions can be used, and 3) decide how to map these tasks and interactions. This technical report provides a list of basic interaction tasks for graph exploration and editing as the result of an extensive system review. Moreover, different interaction modalities of interactive surfaces are reviewed according to their interaction vocabulary, and further degrees of freedom that can be used to make interactions distinguishable are discussed. Beyond the scope of graph exploration and editing, we provide an approach for finding and evaluating a mapping from tasks to interactions that is generally applicable. Thus, this work acts as a guideline for developing a system for graph exploration and editing that is specifically designed for interactive surfaces.
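
    As a purely illustrative sketch of the kind of task-to-interaction mapping the report discusses, the snippet below pairs a few common graph tasks with touch interactions and checks that no two tasks in the same mode share an interaction, i.e. that the mapping stays distinguishable. The task names, interactions and mode split are assumptions, not the report's actual mapping.

        from collections import defaultdict

        # Hypothetical mapping: (mode, task) -> touch interaction.
        MAPPING = {
            ("explore", "pan view"):       "one-finger drag on background",
            ("explore", "zoom view"):      "two-finger pinch",
            ("explore", "select node"):    "tap on node",
            ("edit",    "create node"):    "double tap on background",
            ("edit",    "create edge"):    "drag from node to node",
            ("edit",    "delete element"): "long press, then tap delete",
        }

        def conflicting_assignments(mapping):
            """Return interactions assigned to more than one task within the same mode."""
            by_key = defaultdict(list)
            for (mode, task), interaction in mapping.items():
                by_key[(mode, interaction)].append(task)
            return {key: tasks for key, tasks in by_key.items() if len(tasks) > 1}

        conflicts = conflicting_assignments(MAPPING)
        print("mapping is distinguishable" if not conflicts else conflicts)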

    An Introduction to 3D User Interface Design

    3D user interface design is a critical component of any virtual environment (VE) application. In this paper, we present a broad overview of three-dimensional (3D) interaction and user interfaces. We discuss the effect of common VE hardware devices on user interaction, as well as interaction techniques for generic 3D tasks and the use of traditional two-dimensional interaction styles in 3D environments. We divide most user interaction tasks into three categories: navigation, selection/manipulation, and system control. Throughout the paper, our focus is on presenting not only the available techniques and practical guidelines for 3D interaction design, but also widely held myths. Finally, we briefly discuss two approaches to 3D interaction design, and some example applications with complex 3D interaction requirements. We also present an annotated online bibliography as a reference companion to this article.
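
    Selection/manipulation is one of the three task categories listed above, and a classic generic technique for it is ray-casting from the user's pointer into the scene. The sketch below implements a simple ray-sphere test to illustrate that family of techniques; it is a textbook example under assumed object representations, not code from the paper.

        import math

        def ray_hits_sphere(origin, direction, center, radius):
            """Return the distance along the ray to the first hit, or None on a miss.

            origin, direction and center are 3-tuples; direction must be normalised.
            """
            oc = tuple(c - o for o, c in zip(origin, center))
            proj = sum(a * b for a, b in zip(oc, direction))   # distance to closest approach
            dist2 = sum(a * a for a in oc) - proj * proj       # squared miss distance
            if dist2 > radius * radius or proj < 0:
                return None                                    # ray misses or points away
            return proj - math.sqrt(radius * radius - dist2)   # entry point along the ray

        def pick(origin, direction, objects):
            """Select the nearest object (id, center, radius) hit by the pointing ray."""
            hits = [(ray_hits_sphere(origin, direction, c, r), obj_id)
                    for obj_id, c, r in objects]
            hits = [(t, obj_id) for t, obj_id in hits if t is not None]
            return min(hits)[1] if hits else None

        objects = [("lamp", (0.0, 0.0, -5.0), 1.0), ("chair", (2.0, 0.0, -8.0), 1.0)]
        print(pick((0.0, 0.0, 0.0), (0.0, 0.0, -1.0), objects))   # -> "lamp"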

    Comparing two haptic interfaces for multimodal graph rendering

    This paper describes the evaluation of two multimodal interfaces designed to provide visually impaired people with access to various types of graphs. The interfaces combine audio with haptics rendered on commercially available force feedback devices. This study compares the usability of two force feedback devices, the SensAble PHANToM and the Logitech WingMan force feedback mouse, in representing graphical data. The type of graph used in the experiment is the bar chart, under two experimental conditions: single mode and multimodal. The results show that the PHANToM provides better performance in the haptic-only condition. However, no significant difference has been found between the two devices in the multimodal condition. This confirms the advantages of using a multimodal approach in our research and shows that low-cost haptic devices can be successful. This paper introduces our evaluation approach and discusses the findings of the experiment.
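
    As a generic illustration of how a bar chart can drive more than one output channel, the sketch below maps each bar value to an audio pitch and a normalised haptic stiffness. The value ranges and the linear mappings are assumptions for illustration and do not reproduce the rendering used in the paper.

        # Illustrative mapping of bar-chart values to audio and haptic parameters.
        MIN_FREQ_HZ, MAX_FREQ_HZ = 220.0, 880.0   # assumed audio pitch range (A3 to A5)
        MIN_STIFFNESS, MAX_STIFFNESS = 0.2, 1.0   # assumed normalised surface stiffness

        def render_parameters(values):
            """Map each bar value to a (frequency, stiffness) pair for multimodal output."""
            lo, hi = min(values), max(values)
            span = (hi - lo) or 1.0               # avoid division by zero for flat data
            params = []
            for v in values:
                t = (v - lo) / span               # normalise the value to [0, 1]
                freq = MIN_FREQ_HZ + t * (MAX_FREQ_HZ - MIN_FREQ_HZ)
                stiffness = MIN_STIFFNESS + t * (MAX_STIFFNESS - MIN_STIFFNESS)
                params.append((round(freq, 1), round(stiffness, 2)))
            return params

        print(render_parameters([3.0, 7.5, 5.2, 9.1]))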