105 research outputs found

    Interactive form creation: exploring the creation and manipulation of free form through the use of interactive multiple input interface

    Get PDF
    Most current CAD systems support only the two most common input devices, a mouse and a keyboard, which limits the degree of interaction a user can have with the system. However, it is not uncommon for users to work together on the same computer during a collaborative task. Moreover, people tend to use both hands to manipulate 3D objects: one hand orients the object while the other performs some operation on it. The same applies to computer modelling in the conceptual phase of the design process: a designer can rotate and position an object with one hand and manipulate (deform) its shape with the other, so the 3D object can be changed easily and intuitively through interactive two-handed manipulation. This research investigates the creation and manipulation of free-form geometries through interactive interfaces with multiple input devices. First, the creation of the 3D model is discussed and several different types of models are illustrated. Then, different tools that allow the user to control the 3D model interactively are presented. Three experiments were conducted using different interactive interfaces; two bi-manual techniques were compared with the conventional one-handed approach. Finally, it is demonstrated that the use of new and multiple input devices can offer many opportunities for form creation; the problem is that few, if any, systems make it easy for the user or the programmer to use new input devices.
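
    The bimanual workflow described above can be sketched as a per-frame event loop in which one input stream orients the model while the other deforms it. The following Python sketch only illustrates that split, assuming hypothetical event tuples, a toy control-point model and a quaternion_from_axis_angle helper; none of these names come from the thesis.

```python
import math

def quaternion_from_axis_angle(axis, angle):
    """Unit quaternion (w, x, y, z) for a rotation of `angle` radians about `axis`."""
    ax, ay, az = axis
    norm = math.sqrt(ax * ax + ay * ay + az * az) or 1.0
    s = math.sin(angle / 2.0)
    return (math.cos(angle / 2.0), ax / norm * s, ay / norm * s, az / norm * s)

class Model:
    """Toy stand-in for a free-form model: an orientation plus control points."""
    def __init__(self, control_points):
        self.orientation = (1.0, 0.0, 0.0, 0.0)   # identity quaternion
        self.control_points = list(control_points)

    def rotate(self, q):
        # Real code would compose quaternions; here we simply record the latest input.
        self.orientation = q

    def deform(self, index, offset):
        # Displace one control point, as the "shaping" hand would.
        x, y, z = self.control_points[index]
        dx, dy, dz = offset
        self.control_points[index] = (x + dx, y + dy, z + dz)

def process_frame(model, orient_event, deform_event):
    """One frame of bimanual input: one hand orients, the other deforms."""
    if orient_event is not None:
        axis, angle = orient_event
        model.rotate(quaternion_from_axis_angle(axis, angle))
    if deform_event is not None:
        index, offset = deform_event
        model.deform(index, offset)

# Usage: one hand rotates 30 degrees about Z, the other pulls control point 2 upward.
m = Model([(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)])
process_frame(m, orient_event=((0, 0, 1), math.radians(30)), deform_event=(2, (0.0, 0.0, 0.5)))
print(m.orientation, m.control_points[2])
```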

    Development of an intelligent object for grasp and manipulation research

    Get PDF
    Kõiva R, Haschke R, Ritter H. Development of an intelligent object for grasp and manipulation research. Presented at ICAR 2011, Tallinn, Estonia. In this paper we introduce a novel device, called iObject, which is equipped with tactile and motion tracking sensors that allow for the evaluation of human and robot grasping and manipulation actions. Contact location and contact force, object acceleration in space (6D) and orientation relative to the earth (3D magnetometer) are measured and transmitted wirelessly over a Bluetooth connection. By allowing human-human, human-robot and robot-robot comparisons to be made, iObject is a versatile tool for studying manual interaction. To demonstrate the efficiency and flexibility of iObject for the study of bimanual interactions, we report on a physiological experiment and evaluate the main parameters of the considered dual-handed manipulation task.
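
    The abstract does not specify the wireless data format, so the sketch below assumes a hypothetical fixed-size packet (timestamp, 6-axis inertial data, 3-axis magnetometer, a contact-force value) purely to illustrate decoding a sensor stream received over a Bluetooth serial link. The field names, sizes and little-endian layout are assumptions, not the published iObject protocol.

```python
import struct
from dataclasses import dataclass

# Hypothetical packet: uint32 timestamp (ms), 3x float accel (m/s^2),
# 3x float gyro (rad/s), 3x float magnetometer (uT), 1x float contact force (N).
PACKET_FORMAT = "<I10f"          # little-endian, 44 bytes total (assumed layout)
PACKET_SIZE = struct.calcsize(PACKET_FORMAT)

@dataclass
class SensorSample:
    timestamp_ms: int
    accel: tuple        # (ax, ay, az)
    gyro: tuple         # (gx, gy, gz)
    mag: tuple          # (mx, my, mz)
    force: float        # summed contact force

def decode_packet(payload: bytes) -> SensorSample:
    """Decode one assumed iObject-style packet into a structured sample."""
    if len(payload) != PACKET_SIZE:
        raise ValueError(f"expected {PACKET_SIZE} bytes, got {len(payload)}")
    t, ax, ay, az, gx, gy, gz, mx, my, mz, force = struct.unpack(PACKET_FORMAT, payload)
    return SensorSample(t, (ax, ay, az), (gx, gy, gz), (mx, my, mz), force)

# Usage with a synthetic packet (in practice the bytes would come from the Bluetooth link).
raw = struct.pack(PACKET_FORMAT, 1234, 0.0, 0.0, 9.81, 0.0, 0.0, 0.1, 20.0, 0.0, 40.0, 2.5)
print(decode_packet(raw))
```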

    Robot Autonomy for Surgery

    Full text link
    Autonomous surgery involves having surgical tasks performed by a robot operating under its own will, with partial or no human involvement. There are several important advantages of automation in surgery, including increased precision of care due to sub-millimeter robot control, real-time use of biosignals for interventional care, improvements to surgical efficiency and execution, and computer-aided guidance under various medical imaging and sensing modalities. While these methods may displace some tasks of surgical teams and individual surgeons, they also enable interventions that are too difficult for, or beyond the skills of, a human. In this chapter, we provide an overview of robot autonomy in commercial use and in research, and present some of the challenges faced in developing autonomous surgical robots.

    deForm: An interactive malleable surface for capturing 2.5D arbitrary objects, tools and touch

    Get PDF
    We introduce a novel input device, deForm, that supports 2.5D touch gestures, tangible tools, and arbitrary objects through real-time structured light scanning of a malleable surface of interaction. DeForm captures high-resolution surface deformations and 2D grey-scale textures of a gel surface through a three-phase structured light 3D scanner. This technique can be combined with IR projection to allow for invisible capture, providing the opportunity for co-located visual feedback on the deformable surface. We describe methods for tracking fingers, whole-hand gestures, and arbitrary tangible tools. We outline a method for physically encoding fiducial marker information in the height map of tangible tools. In addition, we describe a novel method for distinguishing between human touch and tangible tools through capacitive sensing on top of the input surface. Finally, we motivate our device through a number of sample applications.
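
    The height capture relies on three-phase structured light: three sinusoidal fringe patterns shifted by 120 degrees are projected, and the wrapped phase at each pixel is recovered from the three intensity images. The sketch below shows the standard three-step phase-shifting computation as a generic illustration of the technique, not the deForm implementation; the image arrays are assumed to be aligned grayscale captures.

```python
import numpy as np

def wrapped_phase(i1: np.ndarray, i2: np.ndarray, i3: np.ndarray) -> np.ndarray:
    """Per-pixel wrapped phase from three fringe images shifted by -120/0/+120 degrees.

    Uses the standard three-step phase-shifting relation:
        phi = atan2(sqrt(3) * (I1 - I3), 2*I2 - I1 - I3)
    The result lies in (-pi, pi] and still needs phase unwrapping before it can be
    converted to a height map.
    """
    i1, i2, i3 = (np.asarray(i, dtype=np.float64) for i in (i1, i2, i3))
    return np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)

def data_modulation(i1, i2, i3):
    """Fringe-contrast (modulation) map, useful for masking unreliable pixels."""
    i1, i2, i3 = (np.asarray(i, dtype=np.float64) for i in (i1, i2, i3))
    avg = (i1 + i2 + i3) / 3.0
    amp = np.sqrt(3.0 * (i1 - i3) ** 2 + (2.0 * i2 - i1 - i3) ** 2) / 3.0
    return np.divide(amp, avg, out=np.zeros_like(amp), where=avg > 0)

# Usage with synthetic fringes over a tilted plane (stand-in for the gel surface).
h, w = 64, 64
true_phase = np.linspace(0, 2 * np.pi, w)[None, :].repeat(h, axis=0)
imgs = [128 + 100 * np.cos(true_phase + s) for s in (-2 * np.pi / 3, 0.0, 2 * np.pi / 3)]
phi = wrapped_phase(*imgs)
print(phi.shape, float(phi.min()), float(phi.max()))
```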

    User-based gesture vocabulary for form creation during a product design process

    Get PDF
    There are inconsistencies between the nature of conceptual design and the functionalities of the computational systems supporting it, which disrupt the designers’ process by focusing on technology rather than designers’ needs. A need was identified for elicitation of hand gestures appropriate to the requirements of conceptual design, rather than gestures chosen arbitrarily or for ease of implementation. The aim of this thesis is to identify natural and intuitive hand gestures for conceptual design, performed by designers (3rd- and 4th-year product design engineering students and recent graduates) working on their own, without instruction and without limitations imposed by the facilitating technology. This was done via a user-centred study including 44 participants, from which 1785 gestures were collected. Gestures were explored as the sole means of shape creation and manipulation in virtual 3D space. Gestures were identified, described in writing, sketched, coded according to the taxonomy used, categorised by hand form and path travelled, and variants identified. They were then statistically analysed to ascertain agreement rates between participants, the significance of that agreement, and the likelihood of the number of repetitions in each category occurring by chance. The most frequently used and statistically significant gestures formed the consensus vocabulary for conceptual design. The effect of the shape of the manipulated object on the gestures performed was also observed, as was whether the sequence of gestures participants proposed differed from established CAD solid modelling practices. The vocabulary was evaluated by non-designer participants, both theoretically and in a VR environment, and the outcomes showed that the majority of gestures were appropriate and easy to perform. Participants selected their preferred gestures for each activity, and a variant of the vocabulary for conceptual design was created as an outcome that aims to ensure extensive training is not required, extending the ability to design beyond trained designers only.
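
    The agreement analysis in gesture elicitation studies is usually expressed as an agreement rate per referent, i.e. how often participants proposed the same gesture for the same design task. The thesis's exact formula is not reproduced in this abstract, so the sketch below assumes the widely used Wobbrock-style agreement rate A(r) = sum over groups of (|P_i|/|P|)^2, purely as an illustration of the kind of analysis described; the gesture labels are invented.

```python
from collections import Counter

def agreement_rate(proposals):
    """Agreement rate for one referent (design task).

    `proposals` is a list of gesture labels, one per participant, e.g.
    ["pinch", "pinch", "grab", "pinch"]. Returns sum((|P_i|/|P|)^2) over the
    groups of identical proposals (Wobbrock-style agreement; assumed here,
    not taken verbatim from the thesis).
    """
    total = len(proposals)
    if total == 0:
        return 0.0
    return sum((count / total) ** 2 for count in Counter(proposals).values())

# Usage: agreement per design activity (labels are illustrative only).
elicited = {
    "create sphere":  ["cup hands", "cup hands", "cup hands", "draw circle"],
    "scale object":   ["pinch", "spread", "pinch", "pinch"],
    "rotate object":  ["twist", "twist", "grab and turn", "draw arc"],
}
for referent, gestures in elicited.items():
    print(f"{referent:15s} A = {agreement_rate(gestures):.3f}")
```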

    Six-degree of freedom device for natural model creation

    Get PDF
    Thesis (S.M.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2007. Includes bibliographical references (p. 75-78). This thesis presents a novel input device, called SP3X, for the creation of digital models in a semi-immersive environment. The goal of SP3X is to enable novice users to construct geometrically complex three-dimensional objects without extensive training or difficulty. SP3X extends the ideas of mixed reality and partial physical instantiation while building on the foundation of tangible interfaces. The design of the device reflects attention to human physiologic capabilities in manual precision, binocular vision, and reach. The design also considers cost and manufacturability. This thesis presents prior and contributing research from industry, biology, and interfaces in academia. A study investigates the usability of the device and finds that it is functional and easily learned, and identifies several areas for improvement. Finally, a Future Work section is provided to guide researchers pursuing this or similar interfaces. The SP3X project is a result of extensive collaboration with Mahoro Anabuki, a visiting scientist from Canon Development Americas, and could not have been completed without his software or his insight. Richard Henry Whitney, III. S.M.

    Interactions gestuelles multi-point et géométrie déformable pour l'édition 3D sur écran tactile (Multi-touch gestural interactions and deformable geometry for 3D editing on touch screens)

    Get PDF
    Despite advances in the capture of existing objects and in procedural generation, content for virtual worlds cannot be created without human interaction. This thesis proposes to exploit new touch technologies ("multi-touch" screens) to offer simple, intuitive 2D interaction for navigating a virtual environment and for manipulating, positioning and deforming 3D objects. First, we study the possibilities and limitations of hand and finger gestures during interaction on a touch screen, in order to discover which gestures are best suited to editing 3D scenes and environments. In particular, we evaluate the effective number of degrees of freedom of the human hand when its gesture is constrained to a planar surface. We also develop a new phase-based gesture analysis method that identifies the key motions of the hand and fingers in real time. These results, combined with several dedicated user studies, lead to a design pattern for basic gestural interactions covering not only navigation (camera positioning) but also object positioning, rotation and global scaling. This pattern is then extended to complex deformations (adding and removing material, bending or twisting parts of objects, with control over locality). Building on these results, we propose and evaluate a 3D world editing interface supporting natural touch interaction, in which the choice of mode (navigation, object positioning or object deformation) and of the corresponding task is handled automatically by the system from the gesture and its context (without any menu or button). Finally, we extend this interface to integrate more complex deformations through garment transfer from one character to another, extended to allow interactive deformation of the garment while the character wearing it is deformed by touch interaction.
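
    One building block of such a touch interface is deriving an object transform from two finger contacts: the change in distance between the fingers gives a uniform scale, the change in the angle of the segment joining them gives a rotation, and the motion of their midpoint gives a translation. The sketch below shows this standard two-finger rotate-scale-translate decomposition as an assumed illustration; it is not the thesis's phase-based gesture analysis, and the point format is a placeholder.

```python
import math

def two_finger_transform(p1_old, p2_old, p1_new, p2_new):
    """Rotation (radians), uniform scale and translation implied by two moving contacts.

    Each point is an (x, y) tuple in screen coordinates. This is the generic
    two-finger rotate-scale-translate decomposition, shown only as an assumed
    illustration of mapping gestures to transforms.
    """
    def vec(a, b):
        return (b[0] - a[0], b[1] - a[1])

    def mid(a, b):
        return ((a[0] + b[0]) / 2.0, (a[1] + b[1]) / 2.0)

    v_old, v_new = vec(p1_old, p2_old), vec(p1_new, p2_new)
    len_old, len_new = math.hypot(*v_old), math.hypot(*v_new)
    scale = len_new / len_old if len_old > 0 else 1.0
    rotation = math.atan2(v_new[1], v_new[0]) - math.atan2(v_old[1], v_old[0])
    m_old, m_new = mid(p1_old, p2_old), mid(p1_new, p2_new)
    translation = (m_new[0] - m_old[0], m_new[1] - m_old[1])
    return rotation, scale, translation

# Usage: the fingers spread apart and the pair rotates slightly while drifting right.
rot, s, t = two_finger_transform((100, 100), (200, 100), (105, 95), (215, 120))
print(f"rotation={math.degrees(rot):.1f} deg  scale={s:.2f}  translation={t}")
```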