113 research outputs found

    Continuous Surface Rendering, Passing from CAD to Physical Representation

    This paper describes a desktop mechatronic interface conceived to support designers in the evaluation of aesthetic virtual shapes. The device allows continuous, smooth free-hand contact with a real, developable plastic tape actuated by a servo-controlled mechanism. The design objective is to reproduce a virtual surface with a consistent physical rendering well adapted to designers' needs. The interface consists of a servo-actuated plastic strip devised and implemented using seven interpolation points: using the MEC (Minimal Energy Curve) spline approach, a developable real surface is rendered that takes into account the CAD geometry of the virtual shapes. We describe the working principles of the interface, using both absolute and relative approaches to control the position of each control point on the MEC spline. We then describe the implemented methodology, which passes from the CAD geometry to VisualNastran in order to maintain the parametric properties of the virtual shape, and present the co-simulation between VisualNastran and MATLAB/Simulink used to control the system. Finally, we present the results of a testing session carried out to evaluate the accuracy and effectiveness of the mechatronic device.
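    To illustrate the kind of interpolation involved, the sketch below fits a minimum-bending-energy curve through seven control heights. It uses a natural cubic spline (the classical minimal-energy C2 interpolant) as a stand-in for the paper's MEC spline formulation; the control-point spacing and heights are made-up placeholders, not values from the paper.
```python
# Minimal sketch (not the paper's implementation): interpolate a strip profile
# through seven control points with a natural cubic spline, the classical
# minimum-bending-energy interpolant.
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical layout: seven equally spaced actuated points along the strip (mm)
x_ctrl = np.linspace(0.0, 300.0, 7)
# Hypothetical target heights sampled from the CAD cross-section (mm)
z_ctrl = np.array([0.0, 8.0, 14.0, 16.0, 12.0, 6.0, 1.0])

# 'natural' boundary conditions (zero second derivative at the ends) yield the
# minimum-energy cubic interpolant through the control points.
spline = CubicSpline(x_ctrl, z_ctrl, bc_type="natural")

# Dense evaluation of the rendered profile, e.g. to check curvature limits of
# the physical strip before commanding the servos.
x_dense = np.linspace(x_ctrl[0], x_ctrl[-1], 200)
z_dense = spline(x_dense)
curvature = spline(x_dense, 2) / (1.0 + spline(x_dense, 1) ** 2) ** 1.5
print(z_dense[:5], float(curvature.max()))
```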

    Image-Based Force Estimation and Haptic Rendering For Robot-Assisted Cardiovascular Intervention

    Clinical studies have indicated that the loss of haptic perception is the prime limitation of robot-assisted cardiovascular intervention technology, hindering its global adoption. It compromises the surgeon's situational awareness during the intervention and may lead to health risks for the patient. This doctoral research aimed to develop technology addressing this limitation in the provision of haptic feedback. The literature review showed that sensor-free force estimation (the haptic cue) on endovascular devices, intuitive surgeon interface design, and haptic rendering within the surgeon interface were the major knowledge gaps. For sensor-free force estimation, first, an image-based force estimation method based on inverse finite-element methods (iFEM) was developed and validated. Next, to address the limitation of the iFEM method in real-time performance, an inverse Cosserat rod model (iCORD) with a computationally efficient solution for endovascular devices was developed and validated. The iCORD was then adopted for analytical tip force estimation on steerable catheters, and the experimental studies confirmed its accuracy and real-time performance for sensor-free force estimation. Afterward, a wearable drift-free rotation measurement device (MiCarp) was developed to facilitate the design of an intuitive surgeon interface by decoupling the rotation measurement from the insertion measurement. Validation studies showed that MiCarp had superior performance for spatial rotation measurement compared to other modalities. Finally, a novel haptic feedback system based on smart magnetoelastic elastomers was developed, analytically modeled, and experimentally validated. The proposed haptics-enabled surgeon module had an unbounded workspace for interventional tasks and provided an intuitive interface. Experimental validation, at the component and system levels, confirmed the usability of the proposed methods for robot-assisted intervention systems.
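    For context on what an inverse Cosserat rod model works against, the textbook static equilibrium equations of a Cosserat rod are given below. This is the standard continuum form, not the thesis's specific iCORD discretization or solution method; an inverse model estimates the unknown external loads (e.g. a tip force) from an observed centerline shape, typically obtained here from imaging.
```latex
% Classical static Cosserat rod equilibrium (textbook form):
%   \mathbf{n}(s): internal force, \mathbf{m}(s): internal moment,
%   \mathbf{r}(s): centerline, \mathbf{f}(s), \mathbf{l}(s): distributed
%   external force and moment per unit arc length s.
\frac{d\mathbf{n}}{ds} + \mathbf{f}(s) = \mathbf{0},
\qquad
\frac{d\mathbf{m}}{ds} + \frac{d\mathbf{r}}{ds} \times \mathbf{n}(s) + \mathbf{l}(s) = \mathbf{0}
```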

    Curve and surface framing for scientific visualization and domain dependent navigation

    Thesis (Ph.D.) - Indiana University, Computer Science, 1996. Curves and surfaces are two of the most fundamental types of objects in computer graphics. Most existing systems use only the 3D positions of the curves and surfaces, and the 3D normal directions of the surfaces, in the visualization process. In this dissertation, we attach moving coordinate frames to curves and surfaces, and explore several applications of these frames in computer graphics and scientific visualization. Curves in space are difficult to perceive and analyze, especially when they are densely clustered, as is typical in computational fluid dynamics and volume deformation applications. Coordinate frames are useful for exposing the similarities and differences between curves. They are also useful for constructing ribbons, tubes and smooth camera orientations along curves. In many 3D systems, users interactively move the camera around the objects with a mouse or other device. But all the camera control is done independently of the properties of the objects being viewed, as if the user were flying freely in space. This type of domain-independent navigation is frequently inappropriate in visualization applications and is sometimes quite difficult for the user to control. Another productive approach is to look at domain-specific constraints and thus to create a new class of navigation strategies. Based on frames attached to surfaces, we can constrain the camera gaze direction to be always parallel (or at a fixed angle) to the surface normal. Users then get the feeling of driving on the object instead of flying through space. The user's mental model of the environment being visualized can be greatly enhanced by the use of these constraints in the interactive interface. Many of our research ideas have been implemented in Mesh View, an interactive system for viewing and manipulating geometric objects. It contains a general-purpose C++ library for nD geometry and supports a winged-edge based data structure. Dozens of examples of scientifically interesting surfaces have been constructed and included with the system.
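    One standard way to attach twist-minimizing frames to a sampled curve, suitable for sweeping ribbons or tubes and for smooth camera orientation, is parallel transport: rotate the previous normal by the rotation that carries the previous tangent onto the current one. The sketch below illustrates that general construction on a polyline; it is not code from the dissertation, and the helix example is only a placeholder.
```python
# Sketch: twist-minimizing (parallel-transport) frames along a sampled 3D curve,
# given as an (N, 3) array of points. Rodrigues' formula rotates the running
# normal by the rotation mapping the previous tangent onto the current one.
import numpy as np

def parallel_transport_frames(points, initial_normal=(0.0, 0.0, 1.0)):
    pts = np.asarray(points, dtype=float)
    tangents = np.diff(pts, axis=0)
    tangents /= np.linalg.norm(tangents, axis=1, keepdims=True)

    # Start from any vector not parallel to the first tangent, projected to be
    # perpendicular to it.
    n = np.asarray(initial_normal, dtype=float)
    n = n - np.dot(n, tangents[0]) * tangents[0]
    n /= np.linalg.norm(n)

    frames = []
    for i in range(len(tangents)):
        if i > 0:
            axis = np.cross(tangents[i - 1], tangents[i])
            s = np.linalg.norm(axis)
            c = np.dot(tangents[i - 1], tangents[i])
            if s > 1e-12:
                axis /= s
                angle = np.arctan2(s, c)
                # Rodrigues rotation of the normal about `axis` by `angle`.
                n = (n * np.cos(angle)
                     + np.cross(axis, n) * np.sin(angle)
                     + axis * np.dot(axis, n) * (1.0 - np.cos(angle)))
        b = np.cross(tangents[i], n)  # binormal completes the frame
        frames.append((tangents[i], n.copy(), b))
    return frames

# Example: frames along a helix, usable for sweeping a ribbon or tube.
t = np.linspace(0, 4 * np.pi, 200)
helix = np.stack([np.cos(t), np.sin(t), 0.1 * t], axis=1)
frames = parallel_transport_frames(helix)
```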

    Modeling and Force Estimation of Cardiac Catheters for Haptics-enabled Tele-intervention

    Robot-assisted cardiovascular intervention (RCI) systems have shown success in reducing the x-ray exposure of surgeons and patients during cardiovascular interventional procedures. RCI systems are typically teleoperated systems with a leader-follower architecture: the surgeon is placed outside the x-ray exposure zone and uses a console to control the robot remotely. Despite this success in reducing x-ray exposure, clinicians have identified the lack of force feedback as the technology's main limitation, one that can lead to perforation of the patient's vessels and even death. The objective of this thesis was to develop, verify, and validate mechatronics technology for real-time, accurate, and robust haptic feedback rendering for RCI systems. To attain this objective, a thorough review of the state-of-the-art clinical requirements, modeling approaches and methods, and current knowledge gaps for the provision of force feedback in RCI systems was first performed. Afterward, a real-time tip force estimation method based on image-based shape sensing and learning-from-simulation was developed and validated. The learning-based model was fairly accurate but required a large training database that was computationally expensive to generate. Next, a new mechanistic model for soft robots, the finite arc method (FAM), was proposed, formulated, solved, and validated, allowing fast and accurate modeling of catheter deformation. With FAM, the training database required by the proposed learning-from-simulation method could be generated with high speed and accuracy. Finally, to robustly relay the forces estimated from real-time imaging at the follower robot to the leader haptic device, a novel impedance-based force feedback rendering modality was proposed and implemented on a representative teleoperated RCI system for experimental validation. The proposed method was compared with the classical direct force reflection method and showed enhanced stability, robustness, and accuracy in the presence of communication disruption. The results of this thesis show that the performance of the proposed integrated force feedback rendering system was in fair compliance with the clinical requirements and had superior robustness compared to the classical direct force reflection method.
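    As a rough illustration of the two rendering strategies being compared, the one-degree-of-freedom sketch below contrasts direct force reflection with a generic impedance-type virtual coupling, in which the estimated environment force drives a virtual proxy and the user feels the spring-damper connecting the proxy to the handle. This is a common textbook construction with placeholder gains, not the thesis's specific modality.
```python
# Generic sketch: direct force reflection vs. an impedance-type virtual coupling
# for haptic rendering in teleoperation. Placeholder gains; 1-DoF only.

def direct_force_reflection(f_env_estimate, scale=1.0):
    """Send the (possibly noisy or delayed) estimated tool force straight to the device."""
    return scale * f_env_estimate

class ImpedanceCoupling:
    """Couple the leader handle to a virtual proxy through a spring-damper.

    The estimated environment force pushes the proxy, so disruptions in the
    force estimate perturb the proxy state rather than the displayed force
    directly, which tends to degrade more gracefully under communication dropouts.
    """

    def __init__(self, k=200.0, b=5.0, m=0.05, dt=0.001):
        self.k, self.b, self.m, self.dt = k, b, m, dt
        self.x_proxy = 0.0
        self.v_proxy = 0.0

    def step(self, x_handle, v_handle, f_env_estimate):
        # Spring-damper force between handle and proxy (what the user feels).
        f_display = (self.k * (self.x_proxy - x_handle)
                     + self.b * (self.v_proxy - v_handle))
        # Proxy dynamics driven by the coupling and the estimated environment force.
        a_proxy = (-f_display + f_env_estimate) / self.m
        self.v_proxy += a_proxy * self.dt
        self.x_proxy += self.v_proxy * self.dt
        return f_display
```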

    Interactive freeform editing techniques for large-scale, multiresolution level set models

    Level set methods provide a volumetric implicit surface representation with automatic smooth blending properties and no self-intersections. They can handle arbitrary topology changes easily, and the volumetric implicit representation does not require the surface to be re-adjusted after extreme deformations. Even though they have found some use in movie productions and in some medical applications, level set models are not highly utilized in either the special effects industry or medical science: the lack of interactive modeling tools makes working with level set models difficult for people in these application areas. This dissertation describes techniques and algorithms for interactive freeform editing of large-scale, multiresolution level set models. Algorithms are developed to map intuitive user interactions into level set speed functions producing specific, desired surface movements. Data structures for efficient representation of very high resolution volume datasets, and associated algorithms for rapid access and processing of the information within them, are explained. A hierarchical, multiresolution representation of level set models that allows rapid decomposition and reconstruction of the complete full-resolution model is created for an editing framework that allows level-of-detail editing. We have also developed a framework that identifies surface details prior to editing and reintroduces them afterwards. Combining these two features provides a detail-preserving level set editing capability that may be used for multiresolution modeling and texture transfer. Given the complex data structures required to represent large-scale, multiresolution level set models and the compute-intensive numerical methods needed to evaluate them, optimization techniques and algorithms have been developed to evaluate and display the dynamic isosurface embedded in the volumetric data. Ph.D., Computer Science -- Drexel University, 201
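    The editing idea of mapping an interaction into a speed function rests on the level set evolution equation, roughly phi_t + F |grad(phi)| = 0, where a user-defined speed F moves the zero isosurface along its normal. The toy sketch below shows one explicit update step with a "brush" speed field on a small 2D grid; it deliberately omits upwinding, narrow-band evaluation, and the dissertation's multiresolution data structures, and the brush shape is made up.
```python
# Toy illustration of a level set editing step: phi_t + F * |grad(phi)| = 0.
# A user-painted speed field F moves the zero isocontour along its normal
# (positive F pushes the surface outward). Central differences, no upwinding.
import numpy as np

def level_set_step(phi, speed, dt=0.1, h=1.0):
    gy, gx = np.gradient(phi, h)
    grad_mag = np.sqrt(gx**2 + gy**2)
    return phi - dt * speed * grad_mag

# Signed distance to a circle as the initial level set model (negative inside).
y, x = np.mgrid[-32:32, -32:32].astype(float)
phi = np.sqrt(x**2 + y**2) - 20.0

# Hypothetical "brush": a Gaussian region of influence around the cursor that
# pushes the surface outward where it is painted.
brush = np.exp(-((x - 15.0)**2 + y**2) / (2 * 6.0**2))
for _ in range(50):
    phi = level_set_step(phi, speed=0.5 * brush)
# The zero isocontour of phi now bulges toward the brush location.
```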

    Fusing Multimedia Data Into Dynamic Virtual Environments

    In spite of the dramatic growth of virtual and augmented reality (VR and AR) technology, content creation for immersive and dynamic virtual environments remains a significant challenge. In this dissertation, we present our research in fusing multimedia data, including text, photos, panoramas, and multi-view videos, to create rich and compelling virtual environments. First, we present Social Street View, which renders geo-tagged social media in its natural geo-spatial context provided by 360° panoramas. Our system takes into account visual saliency and uses maximal Poisson-disc placement with spatiotemporal filters to render social multimedia in an immersive setting. We also present a novel GPU-driven pipeline for saliency computation in 360° panoramas using spherical harmonics (SH). Our spherical residual model can be applied to virtual cinematography in 360° videos. We further present Geollery, a mixed-reality platform to render an interactive mirrored world in real time with three-dimensional (3D) buildings, user-generated content, and geo-tagged social media. Our user study has identified several use cases for these systems, including immersive social storytelling, experiencing the culture, and crowd-sourced tourism. We next present Video Fields, a web-based interactive system to create, calibrate, and render dynamic videos overlaid on 3D scenes. Our system renders dynamic entities from multiple videos, using early and deferred texture sampling. Video Fields can be used for immersive surveillance in virtual environments. Furthermore, we present the VRSurus and ARCrypt projects to explore the applications of gesture recognition, haptic feedback, and visual cryptography for virtual and augmented reality. Finally, we present our work on Montage4D, a real-time system for seamlessly fusing multi-view video textures with dynamic meshes. We use geodesics on meshes with view-dependent rendering to mitigate spatial occlusion seams while maintaining temporal consistency. Our experiments show significant enhancement in rendering quality, especially for salient regions such as faces. We believe that Social Street View, Geollery, Video Fields, and Montage4D will greatly facilitate several applications such as virtual tourism, immersive telepresence, and remote education.
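    To make the "spherical residual" idea concrete, the CPU sketch below projects an equirectangular luminance map onto low-order spherical harmonics and subtracts the smooth reconstruction, leaving a residual that highlights locally salient structure. It is an illustrative stand-in for the dissertation's GPU-driven pipeline; the function name, resolution, and SH order are assumptions, and the input panorama is a random placeholder.
```python
# CPU sketch: low-order spherical-harmonic reconstruction of an equirectangular
# panorama's luminance and the corresponding "spherical residual".
import numpy as np
from scipy.special import sph_harm

def sh_residual(lum, order=3):
    """lum: (H, W) equirectangular luminance, rows = polar angle, cols = azimuth."""
    H, W = lum.shape
    theta = np.linspace(0.0, 2.0 * np.pi, W, endpoint=False)   # azimuth
    phi = (np.arange(H) + 0.5) * np.pi / H                     # polar angle
    theta_g, phi_g = np.meshgrid(theta, phi)
    # Solid-angle weight of each equirectangular pixel: sin(phi) * dphi * dtheta.
    weights = np.sin(phi_g) * (np.pi / H) * (2.0 * np.pi / W)

    recon = np.zeros_like(lum, dtype=float)
    for n in range(order + 1):
        for m in range(-n, n + 1):
            Y = sph_harm(m, n, theta_g, phi_g)
            coeff = np.sum(lum * np.conj(Y) * weights)   # projection coefficient
            recon += np.real(coeff * Y)                  # accumulate reconstruction
    return lum - recon, recon

# Usage with a random placeholder panorama (stand-in for a real 360 image).
pano = np.random.rand(64, 128)
residual, smooth = sh_residual(pano, order=3)
```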

    Accurate geometry reconstruction of vascular structures using implicit splines

    3-D visualization of blood vessels from standard medical datasets (e.g. CT or MRI) plays an important role in many clinical situations, including the diagnosis of vessel stenosis, virtual angioscopy, vascular surgery planning and computer-aided vascular surgery. However, unlike other human organs, the vasculature is a very complex network of vessels, which makes its 3-D visualization a very challenging task. Conventional techniques of medical volume data visualization are in general not well suited for the above-mentioned tasks. This problem can be addressed by reconstructing vascular geometry. Although various methods have been proposed for reconstructing vascular structures, most of these approaches are model-based and are usually too idealized to correctly represent the actual variation presented by the cross-sections of a vascular structure. In addition, the underlying shape is usually expressed as polygonal meshes or in parametric form, which is very inconvenient for representing ramification at branchings. As a result, the reconstructed geometries are not suitable for computer-aided diagnosis and computer-guided minimally invasive vascular surgery. In this research, we develop a set of techniques for the geometry reconstruction of vasculatures, including segmentation, modelling, reconstruction, exploration and rendering of vascular structures. The reconstructed geometry not only greatly enhances the visual quality of 3-D vascular structures, but also provides an actual geometric representation of the vasculature, which brings various benefits. The key findings of this research are as follows: 1. A localized hybrid level-set segmentation method has been developed to extract vascular structures from 3-D medical datasets. 2. A skeleton-based implicit modelling technique has been proposed and applied to the reconstruction of vasculatures, achieving an accurate geometric reconstruction of the vascular structures as implicit surfaces in an analytical form. 3. An accelerating technique using modern GPUs (Graphics Processing Units) is devised and applied to rendering the implicitly represented vasculatures. 4. The implicitly modelled vasculature is investigated for application to virtual angioscopy.
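    To give a feel for skeleton-based implicit modelling, the sketch below builds a simple implicit field for a bifurcating vessel as (distance to the nearest skeleton segment) minus a local radius, blending branches with a smooth minimum so they merge without creases. This is a common illustrative construction, not the thesis's implicit-spline formulation; the skeleton segments and radii are made up.
```python
# Sketch of a skeleton-based implicit vessel model: negative inside the vessel,
# zero on the wall, positive outside. Branches are blended with a smooth minimum.
import numpy as np

def seg_dist(p, a, b):
    """Distance from point p to the line segment a-b."""
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

def smooth_min(d1, d2, k=2.0):
    """Polynomial smooth minimum, so merging branches blend without a crease."""
    h = np.clip(0.5 + 0.5 * (d2 - d1) / k, 0.0, 1.0)
    return d2 * (1 - h) + d1 * h - k * h * (1 - h)

# A parent vessel that bifurcates into two daughters: (segment, radius) pairs.
skeleton = [
    ((np.array([0.0, 0.0, 0.0]), np.array([10.0, 0.0, 0.0])), 2.0),
    ((np.array([10.0, 0.0, 0.0]), np.array([17.0, 5.0, 0.0])), 1.4),
    ((np.array([10.0, 0.0, 0.0]), np.array([17.0, -5.0, 0.0])), 1.2),
]

def vessel_field(p):
    d = None
    for (a, b), r in skeleton:
        di = seg_dist(p, a, b) - r
        d = di if d is None else smooth_min(d, di)
    return d

print(vessel_field(np.array([5.0, 0.0, 0.0])))   # inside the parent branch (< 0)
print(vessel_field(np.array([5.0, 6.0, 0.0])))   # outside the vessel (> 0)
```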

    Intuitive, iterative and assisted virtual guides programming for human-robot comanipulation

    For a very long time, automation was driven by the use of traditional industrial robots placed in cages, programmed to repeat more or less complex tasks at their highest speed and with maximum accuracy. This robot-oriented solution is heavily dependent on hard automation, which requires pre-specified fixtures and time-consuming programming, hindering robots from becoming flexible and versatile tools. These robots have since evolved towards a new generation of small, inexpensive, inherently safe and flexible systems that work hand in hand with humans. In these new collaborative workspaces the human can be included in the loop as an active decision-making agent: as a teacher and as a co-worker, they can influence the decision-making process of such collaborative robots, or cobots. In this context, virtual guides are an important tool used to assist the human worker by reducing physical effort and cognitive load during task accomplishment. However, constructing virtual guides often requires expert knowledge and precise modeling of the task, which restricts their usefulness to scenarios with unchanging constraints. To overcome these challenges and enhance the flexibility of virtual guide programming, this thesis presents a novel approach that allows the worker to create virtual guides by demonstration, through an iterative method based on kinesthetic teaching and displacement splines. With this approach, the worker can iteratively modify the guides while being assisted by them, making the process more intuitive and natural while reducing its strenuousness. Our approach allows local refinement of virtual guiding trajectories through physical interaction with the robot: the worker can move a specific Cartesian keypoint of the guide or re-demonstrate a portion of it with a new partial demonstration. We also extended our approach to 6D virtual guides, where displacement splines are defined via Akima interpolation (for translation) and quadratic interpolation of quaternions (for orientation). The worker can initially define a virtual guiding trajectory and then use the assistance in translation to concentrate only on defining the orientation along the path. We applied our approach in two industrial scenarios using a collaborative robot (cobot), demonstrating that these innovations provide a novel and intuitive solution that increases the worker's comfort during human-robot comanipulation.
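    As a rough illustration of building a 6D guide from a few demonstrated keypoints, the sketch below interpolates positions with Akima splines (as named in the abstract) and orientations with SLERP, used here as a simple stand-in for the thesis's quadratic quaternion interpolation. The keypoints, timestamps, and orientations are made-up placeholders.
```python
# Sketch: interpolate a 6D virtual-guide path from demonstrated keypoints.
# Translation: Akima interpolation. Orientation: SLERP (stand-in for the
# thesis's quadratic quaternion interpolation). Placeholder demonstration data.
import numpy as np
from scipy.interpolate import Akima1DInterpolator
from scipy.spatial.transform import Rotation, Slerp

t_key = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
pos_key = np.array([[0.0, 0.00, 0.20],
                    [0.1, 0.05, 0.25],
                    [0.2, 0.05, 0.30],
                    [0.3, 0.00, 0.30],
                    [0.4, -0.05, 0.25]])
rot_key = Rotation.from_euler("xyz", [[0, 0, 0],
                                      [0, 10, 0],
                                      [0, 20, 5],
                                      [0, 25, 10],
                                      [0, 20, 15]], degrees=True)

pos_spline = Akima1DInterpolator(t_key, pos_key, axis=0)
rot_interp = Slerp(t_key, rot_key)

# Query the guide at an arbitrary progress value along the demonstration.
t = 2.4
p = pos_spline(t)                  # guide position (x, y, z)
q = rot_interp([t]).as_quat()[0]   # guide orientation as a quaternion
```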
    • 

    corecore