
    Learning to Navigate Cloth using Haptics

    We present a controller that allows an arm-like manipulator to navigate deformable cloth garments in simulation through the use of haptic information. The main challenge for such a controller is to avoid getting tangled in, tearing, or punching through the deforming cloth. Our controller aggregates force information from a number of haptic-sensing spheres placed along the manipulator for guidance. Based on haptic forces, each individual sphere updates its target location, and the conflicts that arise among this set of desired positions are resolved by solving an inverse kinematics problem with constraints. Reinforcement learning is used to train the controller for a single haptic-sensing sphere, where a training run is terminated (and thus penalized) when large forces are detected due to contact between the sphere and a simplified model of the cloth. In simulation, we demonstrate successful navigation of a robotic arm through a variety of garments, including an isolated sleeve, a jacket, a shirt, and shorts. Our controller outperforms two baseline controllers: one without haptics and another that was trained on large forces between the sphere and cloth but without early termination. Comment: Supplementary video available at https://youtu.be/iHqwZPKVd4A. Related publications: http://www.cc.gatech.edu/~karenliu/Robotic_dressing.htm
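    A minimal Python sketch of the per-sphere update described above, assuming a simple goal-seeking rule deflected by sensed contact: the force threshold, gain, and all names are illustrative assumptions rather than the paper's implementation, and the resulting desired positions would still need to be reconciled by the constrained inverse-kinematics solve (not shown).

```python
import numpy as np

# Hypothetical sketch of the per-sphere haptic update described above.
# FORCE_LIMIT and STEP_GAIN are assumed values, not taken from the paper.
FORCE_LIMIT = 5.0   # contact force (N) that triggers early termination
STEP_GAIN = 0.01    # gain mapping the haptic signal to target motion

def update_sphere_targets(targets, haptic_forces, goal_dir):
    """Advance each sphere's target toward the goal, deflected away
    from sensed contact; raise to terminate (and penalize) the episode."""
    new_targets = []
    for target, force in zip(targets, haptic_forces):
        force = np.asarray(force, float)
        if np.linalg.norm(force) > FORCE_LIMIT:
            raise RuntimeError("large contact force: episode terminated")
        # Step toward the goal direction, biased away from the contact force.
        new_targets.append(np.asarray(target, float) + STEP_GAIN * (goal_dir - force))
    return new_targets
```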

    Evaluating rules of interaction for object manipulation in cluttered virtual environments

    A set of rules is presented for the design of interfaces that allow virtual objects to be manipulated in 3D virtual environments (VEs). The rules differ from other interaction techniques because they focus on the problems of manipulating objects in cluttered spaces rather than open spaces. Two experiments are described that were used to evaluate the effect of different interaction rules on participants' performance when they performed a task known as "the piano mover's problem." This task required participants to move a virtual human through parts of a virtual building while simultaneously manipulating a large virtual object held in the virtual human's hands, resembling the simulation of manual materials handling in a VE for ergonomic design. Throughout, participants viewed the VE on a large monitor from an "over-the-shoulder" perspective. In the most cluttered VEs, the time that participants took to complete the task varied by up to 76% with different combinations of rules, thus indicating the need for flexible forms of interaction in such environments.
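    As a hypothetical illustration of what an interaction rule for cluttered spaces might look like (the paper's actual rules may differ), the sketch below contrasts a rule that blocks motion entirely on contact with one that lets the object slide along whichever axes remain collision-free.

```python
import numpy as np

# Illustrative contrast of two assumed interaction rules for moving an
# object in a cluttered scene; not the rules evaluated in the paper.
def apply_move(pos, delta, collides, rule="slide"):
    """Attempt to move an object by `delta`, resolving contact per `rule`."""
    candidate = pos + delta
    if not collides(candidate):
        return candidate
    if rule == "stop":
        return pos  # rule A: any contact blocks the whole movement
    if rule == "slide":
        # rule B: keep the axis-aligned components that stay collision-free
        out = pos.copy()
        for axis in range(len(pos)):
            trial = out.copy()
            trial[axis] += delta[axis]
            if not collides(trial):
                out = trial
        return out
    raise ValueError(f"unknown rule: {rule}")
```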

    An Introduction to 3D User Interface Design

    3D user interface design is a critical component of any virtual environment (VE) application. In this paper, we present a broad overview of three-dimensional (3D) interaction and user interfaces. We discuss the effect of common VE hardware devices on user interaction, as well as interaction techniques for generic 3D tasks and the use of traditional two-dimensional interaction styles in 3D environments. We divide most user-interaction tasks into three categories: navigation, selection/manipulation, and system control. Throughout the paper, our focus is on presenting not only the available techniques but also practical guidelines for 3D interaction design, as well as widely held myths. Finally, we briefly discuss two approaches to 3D interaction design and some example applications with complex 3D interaction requirements. We also present an annotated online bibliography as a reference companion to this article.
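    By way of example, ray-casting is one widely used technique for the selection/manipulation category named above; the sketch below is a minimal, assumed implementation using bounding-sphere hit tests, not code from the article.

```python
import numpy as np

# Minimal sketch of ray-casting selection; object representation and the
# sphere-based hit test are illustrative assumptions.
def ray_cast_select(origin, direction, objects):
    """Return the nearest object whose bounding sphere the pointing ray hits."""
    direction = direction / np.linalg.norm(direction)
    best, best_t = None, np.inf
    for obj in objects:  # obj: dict with 'center' (vec3) and 'radius'
        oc = obj["center"] - origin
        t = oc @ direction                # distance to closest approach
        miss2 = oc @ oc - t * t           # squared distance from ray to center
        if t > 0 and miss2 <= obj["radius"] ** 2 and t < best_t:
            best, best_t = obj, t
    return best
```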

    Symmetric and asymmetric action integration during cooperative object manipulation in virtual environments

    Cooperation between multiple users in a virtual environment (VE) can take place at one of three levels. These are defined as where users can perceive each other (Level 1), individually change the scene (Level 2), or simultaneously act on and manipulate the same object (Level 3). Despite representing the highest level of cooperation, multi-user object manipulation has rarely been studied. This paper describes a behavioral experiment in which the piano movers' problem (maneuvering a large object through a restricted space) was used to investigate object manipulation by pairs of participants in a VE. Participants' interactions with the object were integrated together either symmetrically or asymmetrically. The former allowed only the common component of participants' actions to take place, whereas the latter used the mean of the two. Symmetric action integration was superior for sections of the task where both participants had to perform similar actions, but if participants had to move in different ways (e.g., one maneuvering through a narrow opening while the other traveled down a wide corridor), then asymmetric integration was superior. With both forms of integration, the extent to which participants coordinated their actions was poor, and this led to a substantial cooperation overhead (the reduction in performance caused by having to cooperate with another person).
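    Reading the abstract literally, the two integration schemes can be sketched as follows; the per-axis sign-and-minimum rule for extracting the "common component" is one interpretation, not the paper's stated formula.

```python
import numpy as np

# Hedged sketch of the two action-integration schemes described above:
# "symmetric" passes only the component both users agree on, "asymmetric"
# averages the two inputs. The min/sign rule is an assumption.
def integrate(action_a, action_b, mode="symmetric"):
    a, b = np.asarray(action_a, float), np.asarray(action_b, float)
    if mode == "asymmetric":
        return (a + b) / 2.0  # mean of both users' actions
    # Common component: smaller magnitude where users push the same way,
    # zero where they conflict.
    same_sign = np.sign(a) == np.sign(b)
    return np.where(same_sign, np.sign(a) * np.minimum(abs(a), abs(b)), 0.0)
```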

    Haptic interfaces: Hardware, software and human performance

    Virtual environments are computer-generated synthetic environments with which a human user can interact to perform a wide variety of perceptual and motor tasks. At present, most virtual environment systems engage only the visual and auditory senses, and not the haptic sensorimotor system that conveys the sense of touch and feel of objects in the environment. Computer keyboards, mice, and trackballs constitute relatively simple haptic interfaces. Gloves and exoskeletons that track hand postures have more interaction capabilities and are available on the market. Although desktop and wearable force-reflecting devices have been built and implemented in research laboratories, the current capabilities of such devices are quite limited. To realize the full promise of virtual environments and teleoperation of remote systems, further development of haptic interfaces is critical. In this paper, the status of and research needs in human haptics, haptic technology development, and the interactions between the two are described. In particular, the excellent performance characteristics of Phantom, a haptic interface recently developed at MIT, are highlighted. Realistic sensations of single-point-of-contact interactions with objects of variable geometry (e.g., smooth, textured, polyhedral) and material properties (e.g., friction, impedance), achieved through this device in the context of a variety of tasks (e.g., needle biopsy, switch panels), are described, and the associated issues in haptic rendering are discussed.
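    A common minimal approach to single-point haptic rendering of the kind discussed here is a penalty (spring) force proportional to penetration depth; the sketch below assumes a planar surface and an illustrative stiffness value, and is not the Phantom's actual rendering code.

```python
import numpy as np

# Minimal penalty-based haptic rendering for a single point of contact:
# the reflected force is a spring pushing the probe out of the surface.
STIFFNESS = 800.0  # N/m, a plausible desktop-device value (assumed)

def contact_force(probe_pos, plane_normal, plane_offset):
    """Force on a haptic probe penetrating the plane n . x = offset."""
    n = plane_normal / np.linalg.norm(plane_normal)
    penetration = plane_offset - probe_pos @ n   # > 0 means inside the surface
    if penetration <= 0.0:
        return np.zeros(3)                       # no contact, no force
    return STIFFNESS * penetration * n           # Hooke's-law restoring force
```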

    Grounding action in visuo-haptic space using experience networks

    Traditional approaches to the use of machine learning algorithms do not provide a method to learn multiple tasks in one shot on an embodied robot. It is proposed that grounding actions within the sensory space leads to the development of action-state relationships which can be re-used despite a change in task. A novel approach called an Experience Network is developed and assessed on a real-world robot required to perform three separate tasks. After grounded representations were developed in the initial task, only minimal further learning was required to perform the second and third tasks.
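    The abstract gives no implementation details, so the following is a speculative sketch of the grounding idea only: sensory states are associated with actions once, and the stored action for the nearest previously experienced state is reused in later tasks. The data structure and nearest-neighbour lookup are assumptions, not the paper's Experience Network.

```python
import numpy as np

# Speculative sketch: associate sensory states with actions, then reuse
# the action grounded in the most similar past state.
class ExperienceStore:
    def __init__(self):
        self.states, self.actions = [], []

    def record(self, state, action):
        self.states.append(np.asarray(state, float))
        self.actions.append(action)

    def recall(self, state):
        """Return the action associated with the nearest stored state."""
        if not self.states:
            return None
        dists = [np.linalg.norm(s - state) for s in self.states]
        return self.actions[int(np.argmin(dists))]
```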

    Interactive form creation: exploring the creation and manipulation of free form through the use of interactive multiple input interface

    Most current CAD systems support only the two most common input devices: a mouse and a keyboard, which limits the degree of interaction that a user can have with the system. However, it is not uncommon for users to work together on the same computer during a collaborative task. Besides that, people tend to use both hands to manipulate 3D objects: one hand is used to orient the object while the other hand is used to perform some operation on it. The same approach could be applied to computer modelling in the conceptual phase of the design process. A designer can rotate and position an object with one hand, and manipulate the shape (deform it) with the other. Accordingly, the 3D object can be easily and intuitively changed through interactive manipulation with both hands. The research investigates the manipulation and creation of free-form geometries through the use of interactive interfaces with multiple input devices. First, the creation of the 3D model will be discussed and several different types of models will be illustrated. Furthermore, different tools that allow the user to control the 3D model interactively will be presented. Three experiments were conducted using different interactive interfaces; two bi-manual techniques were compared with the conventional one-handed approach. Finally, it will be demonstrated that the use of new and multiple input devices can offer many opportunities for form creation. The problem is that few, if any, systems make it easy for the user or the programmer to use new input devices.
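    The bimanual split described above can be sketched as follows, with device input abstracted into plain vectors; all names here are illustrative assumptions, with one stream orienting the model while the other drags a handle vertex.

```python
import numpy as np

# Illustrative bimanual update: the non-dominant hand rotates the whole
# model, the dominant hand deforms it by dragging one vertex.
def bimanual_update(vertices, orient_delta, deform_delta, handle_index):
    """Rotate an (N, 3) vertex array about Z, then displace one vertex."""
    yaw = orient_delta[0]                      # non-dominant hand: rotation angle
    c, s = np.cos(yaw), np.sin(yaw)
    rot = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    vertices = vertices @ rot.T                # orient the whole model
    vertices[handle_index] += deform_delta     # dominant hand: deform locally
    return vertices
```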