6 research outputs found

    Constraint-based technique for haptic volume exploration

    Get PDF
    We present a haptic rendering technique that uses directional constraints to facilitate enhanced exploration modes for volumetric datasets. The algorithm restricts user motion in certain directions by incrementally moving a proxy point along the axes of a local reference frame. Reaction forces are generated by a spring coupler between the proxy and the data probe, which can be tuned to the capabilities of the haptic interface. Secondary haptic effects, including field forces, friction, and texture, can be easily incorporated to convey information about additional characteristics of the data. We illustrate the technique with two examples: displaying fiber orientation in heart muscle layers and exploring diffusion tensor fiber tracts in brain white matter tissue. Initial evaluation of the approach indicates that haptic constraints provide an intuitive means for displaying directional information in volume data.
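
    A minimal sketch of the core haptic loop described here, in Python: the proxy is advanced toward the probe only along the axes of the local frame, and a spring coupler produces the reaction force. The function names, the per-axis gain scheme, and the stiffness value are assumptions, not the paper's implementation.

        import numpy as np

        def constrained_proxy_step(proxy, probe, frame_axes, gains):
            # Move the proxy toward the probe, but only along the axes of the
            # local reference frame; a gain near 0 constrains that direction.
            delta = probe - proxy
            for axis, gain in zip(frame_axes, gains):
                proxy = proxy + gain * np.dot(delta, axis) * axis
            return proxy

        def coupler_force(proxy, probe, k=200.0):
            # Spring coupler between proxy and data probe; the stiffness k
            # would be tuned to the force range of the haptic interface.
            # The force pulls the probe toward the proxy.
            return k * (proxy - probe)

    For a fiber-following mode, frame_axes could hold the local fiber direction and two transverse axes, with a high gain along the fiber and low gains across it, so the probe glides along the tract while resisting lateral motion.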

    Haptically assisted connection procedure for the reconstruction of dendritic spines

    Get PDF
    Dendritic spines are thin protrusions that cover the dendritic surface of numerous neurons in the brain, and their function appears to play a key role in neural circuits. Correct segmentation of these structures is difficult because of their small size, and the resulting spines can appear incomplete. This paper presents a four-step procedure for the complete reconstruction of dendritic spines. The haptically driven procedure is intended to work as an image-processing stage before the automatic segmentation step that gives the final representation of the dendritic spines. The procedure is designed so that both navigation and volume image editing can be carried out with a haptic device. A use case employing our procedure together with a commercial software package for the segmentation stage is illustrated. Finally, the haptic editing is evaluated in two experiments: the first concerns the benefits of force feedback, and the second checks the suitability of a haptic device as an input device. In both cases, the results show that the procedure improves editing accuracy.
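
    The abstract leaves the editing operations unspecified; as a rough illustration, the volume-editing step can be pictured as a haptic brush writing into the image stack. The Python below is a generic stand-in under that assumption; spherical_brush_edit and its parameters are hypothetical, not from the paper.

        import numpy as np

        def spherical_brush_edit(volume, center, radius, value):
            # Overwrite voxels within `radius` of the stylus tip `center`
            # (z, y, x indices), e.g. to restore a faint spine neck before
            # the automatic segmentation stage.
            zz, yy, xx = np.indices(volume.shape, sparse=True)
            mask = ((zz - center[0]) ** 2 + (yy - center[1]) ** 2
                    + (xx - center[2]) ** 2) <= radius ** 2
            volume[mask] = value
            return volume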

    Combining 3-D geovisualization with force feedback driven user interaction

    Full text link
    We describe a prototype software system for investigating novel human-computer interaction techniques for 3-D geospatial data. This system, M4-Geo (Multi-Modal Mesh Manipulation of Geospatial data), aims to provide a more intuitive interface for directly manipulating 3-D surface data, such as digital terrain models (DTM). The M4-Geo system operates within a 3-D environment and uses a Phantom haptic force feedback device to enhance 3-D computer graphics with touch-based interactions. The Phantom uses a 3-D force feedback stylus, which acts as a virtual “fingertip” that allows the user to feel the shape (morphology) of the terrain’s surface in great detail. In addition, it acts as a touch-sensitive tool for different GIS tasks, such as digitizing (draping) lines and polygons directly onto a 3-D surface and directly deforming surfaces (by pushing or pulling the stylus in or out). The user may adjust the properties of the surface deformation (e.g., soft or hard) locally by painting it with a special “material color.” The overlap of visual and force representations of 3-D data aids hand-eye coordination for these tasks and helps the user perceive the 3-D spatial data in a more holistic, multi-sensory way. The use of such a 3-D force feedback device for direct interaction may thus provide a more intuitive and efficient alternative to the mouse- and keyboard-driven interactions common today, in particular in areas related to digital landscape design, surface hydrology, and geotechnical engineering.
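
    As a sketch of the deformation interaction described here, the following Python pushes or pulls a heightfield around the stylus contact cell, attenuated by a painted per-cell stiffness map. This is a minimal illustration assuming a Gaussian brush footprint; deform_dtm, the falloff shape, and the stiffness blending are assumptions, not the M4-Geo implementation.

        import numpy as np

        def deform_dtm(heights, stiffness, cx, cy, push, sigma=5.0):
            # Gaussian footprint around the stylus contact cell (cx, cy).
            # `heights` is a float DTM array; `stiffness` is the per-cell map
            # painted by the user (the "material color"), so harder material
            # deforms less under the same stylus force.
            ys, xs = np.indices(heights.shape)
            footprint = np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2)
                               / (2.0 * sigma ** 2))
            heights += push * footprint / (1.0 + stiffness)
            return heights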

    An augmented haptic interface as applied to flow visualization

    Get PDF
    A novel 3D computer interface is proposed in which a physical handle, containing force sensors and capable of simulating virtual touch through force feedback, is coupled to a variety of virtual tools in a 3D virtual environment. The visual appearance of each tool reflects its capabilities: at one moment a user might feel they are holding a virtual grabber, activated by squeezing, and at another a virtual turntable, activated by physically moving a virtual wheel. In this way form and function are combined so that users rapidly learn the functional capabilities of the tools and retain this learning. The tools are also intended to be easy to use because of intuitive mappings of forces to actions. A virtual environment is constructed to test this concept, and an evaluation of the interface is conducted.
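
    A minimal sketch of the tool abstraction this suggests: each virtual tool interprets the same raw handle input in its own way. The class names and the squeeze threshold below are illustrative assumptions, not the paper's design.

        class VirtualTool:
            # Each tool maps raw handle input (grip force, handle motion)
            # to an action, keeping form and function coupled.
            def update(self, grip_force, velocity):
                raise NotImplementedError

        class Grabber(VirtualTool):
            GRIP_ON = 2.0  # newtons; assumed squeeze threshold

            def update(self, grip_force, velocity):
                return "grab" if grip_force > self.GRIP_ON else "release"

        class Turntable(VirtualTool):
            def update(self, grip_force, velocity):
                # Rotation rate follows the motion of the virtual wheel.
                return ("rotate", velocity)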

    Expressive cutting, deforming, and painting of three-dimensional digital shapes through asymmetric bimanual haptic manipulation

    Get PDF
    Practitioners of the geosciences, design, and engineering disciplines communicate complex ideas about shape by manipulating three-dimensional digital objects to match their conceptual model. However, the two-dimensional control interfaces common in software applications create a disconnect from three-dimensional manipulation. This research examines cutting, deforming, and painting manipulations for expressive three-dimensional interaction. It presents a cutting algorithm specialized for planning cuts on a triangle mesh, the extension of a deformation algorithm to inhomogeneous meshes, and the definition of inhomogeneous meshes by painting into a deformation property map. The thesis explores two-handed interactions with haptic force feedback in which each hand fulfills a distinct role in an asymmetric bimanual arrangement. These digital shape manipulations demonstrate a step toward expressive three-dimensional interaction.
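
    A short sketch of the painting idea, assuming a per-vertex weight array as the "deformation property map"; the function names and the linear weighting are assumptions rather than the thesis's actual algorithm.

        import numpy as np

        def paint_deformation_property(vertices, weights, brush_pos,
                                       brush_radius, target, rate=0.5):
            # Blend per-vertex deformation weights toward `target` inside
            # the brush, building an inhomogeneous property map on the mesh.
            dist = np.linalg.norm(vertices - brush_pos, axis=1)
            inside = dist < brush_radius
            weights[inside] += rate * (target - weights[inside])
            return weights

        def apply_weighted_displacement(vertices, weights, displacement):
            # Scale a handle displacement by each vertex's painted weight,
            # so low-weight (stiff) regions move less than compliant ones.
            return vertices + weights[:, None] * displacement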

    Intuitive, iterative and assisted virtual guides programming for human-robot comanipulation

    Get PDF
    For a very long time, automation relied on traditional industrial robots placed in cages and programmed to repeat more or less complex tasks at maximum speed and accuracy. This robot-oriented solution depends heavily on hard automation, which requires pre-specified fixtures and time-consuming programming, hindering robots from becoming flexible and versatile tools. Robots have since evolved towards a new generation of small, inexpensive, inherently safe, and flexible systems that work hand in hand with humans. In these new collaborative workspaces the human can be included in the loop as an active agent: as a teacher and as a co-worker, they can influence the robot's decision-making process. In this context, virtual guides are an important tool for assisting the human worker, reducing physical effort and cognitive overload during task accomplishment. However, constructing virtual guides often requires expert knowledge and precise modeling of the task, which restricts their usefulness to scenarios with unchanging constraints. To overcome these limitations and make virtual guide programming more flexible, this thesis presents a novel approach that lets the worker create virtual guides by demonstration, through an iterative method based on kinesthetic teaching and displacement splines. With this approach, the worker can iteratively modify the guides while still being assisted by them, making the process more intuitive and natural and less strenuous. The approach allows local refinement of virtual guiding trajectories through physical interaction with the robot: the user can move a specific Cartesian keypoint of the guide or re-demonstrate a portion of it. We also extend the approach to 6D virtual guides, where displacement splines are defined via Akima interpolation (for translation) and quadratic interpolation of quaternions (for orientation). The worker can first define a virtual guiding trajectory and then rely on the translational assistance to concentrate solely on demonstrating the orientation along the path. We demonstrate, in two industrial scenarios with a collaborative robot (cobot), that these innovations improve the operator's comfort during human-robot comanipulation.
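
    The 6D guide construction lends itself to a compact sketch with NumPy/SciPy: Akima interpolation for the translational spline, as named in the abstract, plus quaternion interpolation for orientation. SciPy provides only spherical linear interpolation (Slerp), which stands in here for the thesis's quadratic quaternion scheme; the keypoint values and the spring stiffness are made up for illustration.

        import numpy as np
        from scipy.interpolate import Akima1DInterpolator
        from scipy.spatial.transform import Rotation, Slerp

        # Keypoints recorded during kinesthetic teaching (times and poses).
        t = np.array([0.0, 1.0, 2.0, 3.0])
        pos = np.array([[0.0, 0.0, 0.0], [0.1, 0.2, 0.0],
                        [0.3, 0.2, 0.1], [0.4, 0.0, 0.2]])
        rot = Rotation.from_euler("xyz", [[0, 0, 0], [0, 10, 0],
                                          [0, 20, 10], [0, 30, 20]],
                                  degrees=True)

        pos_guide = Akima1DInterpolator(t, pos, axis=0)  # translation spline
        ori_guide = Slerp(t, rot)  # slerp standing in for quadratic
                                   # quaternion interpolation

        def fixture_force(tool_pos, s, k=150.0):
            # Virtual-fixture spring pulling the tool toward the guide at
            # path parameter s; k is an assumed stiffness.
            return k * (pos_guide(s) - tool_pos)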
