32 research outputs found

    Organic shape modeling through haptic devices

    This paper presents a sketching system for 3D organic shape modeling and animation using virtual reality devices. On the hardware side, it is based on the Haptic Workstation™, which conveys force feedback to the user's arms, and a head-mounted display that presents the generated 3D images. On the software side, we use implicit surface modeling techniques such as metaballs. Designers are comfortable with this kind of primitive because of its suitability for creating organic shapes such as virtual humans. The proposed system provides an efficient alternative for sketching advanced 3D shapes.
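    Since the modeling relies on metaballs, the Python sketch below illustrates how such an implicit field is typically evaluated: each ball contributes a smooth falloff and the surface is an iso-contour of the summed field. This is a generic illustration under an assumed falloff and threshold, not the paper's implementation.

    import numpy as np

    # Generic metaball field (illustrative; not the paper's code).
    def metaball_field(p, centers, radii):
        """Summed smooth falloff at point p for balls given by (centers, radii)."""
        d2 = np.sum((centers - p) ** 2, axis=1) / (radii ** 2)   # normalized squared distances
        d2 = np.clip(d2, 0.0, 1.0)
        return np.sum((1.0 - d2) ** 2)                           # 1 at each center, 0 at its radius

    def inside_surface(p, centers, radii, threshold=0.5):
        """True if p lies inside the blended organic shape (field above the iso-value)."""
        return metaball_field(p, centers, radii) >= threshold

    # Two overlapping metaballs blend into a single organic blob
    centers = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
    radii = np.array([1.2, 1.0])
    print(inside_surface(np.array([0.5, 0.0, 0.0]), centers, radii))  # True: point is in the blend region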

    Augmented reality-based visual-haptic modeling for thoracoscopic surgery training systems

    Background: Compared with traditional thoracotomy, video-assisted thoracoscopic surgery (VATS) causes less trauma and allows faster recovery and higher patient compliance, but it places higher demands on surgeons. Virtual surgery training simulation systems are important and have been widely used in Europe and America. Augmented reality (AR) in surgical training simulation systems significantly improves the training effect of virtual surgical training, although AR technology is still at an early stage. Mixed reality has gained increased attention in technology-driven modern medicine but has yet to be used in everyday practice. Methods: This study proposed an immersive AR lobectomy within a thoracoscopic surgery training system, using visual and haptic modeling to study the potential benefits of this critical technology. The content included immersive AR visual rendering based on a cluster-based extended position-based dynamics algorithm for soft tissue physical modeling. Furthermore, we designed an AR haptic rendering system whose model architecture consists of multi-touch interaction points, including kinesthetic and pressure-sensitive points. Finally, based on the above theoretical research, we developed an AR interactive VATS surgical training platform. Results: Twenty-four volunteers were recruited from the First People's Hospital of Yunnan Province to evaluate the VATS training system. Face, content, and construct validation methods were used to assess the tactile sense, visual sense, scene authenticity, and simulator performance. Conclusions: The results of our construct validation demonstrate that the simulator helps novices improve surgical skills that are retained after a certain period of time. The AR-based video-assisted thoracoscopic system developed in this study is effective and can be used as a training device to help novices develop thoracoscopic skills.
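    The deformation model named above belongs to the position-based dynamics (PBD) family. As a point of reference, a minimal generic PBD step with distance constraints is sketched below in Python; the cluster-based extended PBD used in the paper adds shape clusters and other terms that are not shown, and all names and values here are illustrative.

    import numpy as np

    def pbd_step(x, v, inv_mass, edges, rest_len, dt=0.016, iters=10, gravity=(0.0, -9.8, 0.0)):
        """One generic PBD step: predict positions, project distance constraints, update velocities."""
        x_pred = x + dt * (v + dt * np.array(gravity))     # explicit prediction under gravity
        x_pred[inv_mass == 0] = x[inv_mass == 0]           # pinned vertices do not move
        for _ in range(iters):                             # Gauss-Seidel constraint projection
            for (i, j), L in zip(edges, rest_len):
                d = x_pred[j] - x_pred[i]
                dist = np.linalg.norm(d)
                w = inv_mass[i] + inv_mass[j]
                if dist < 1e-9 or w == 0:
                    continue
                corr = (dist - L) / (dist * w) * d         # move endpoints to restore the rest length
                x_pred[i] += inv_mass[i] * corr
                x_pred[j] -= inv_mass[j] * corr
        v_new = (x_pred - x) / dt                          # velocities follow from positions
        return x_pred, v_new

    # Example: a two-particle rod with the first particle pinned
    x = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
    v = np.zeros_like(x)
    x, v = pbd_step(x, v, inv_mass=np.array([0.0, 1.0]), edges=[(0, 1)], rest_len=[1.0])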

    Virtual reality for assembly methods prototyping: a review

    Assembly planning and evaluation is an important component of the product design process in which details about how parts of a new product will be put together are formalized. A well-designed assembly process should take into account various factors such as optimum assembly time and sequence, tooling and fixture requirements, ergonomics, operator safety, and accessibility, among others. Existing computer-based tools to support virtual assembly either concentrate solely on representation of the geometry of parts and fixtures and evaluation of clearances and tolerances, or use simulated human mannequins to approximate human interaction in the assembly process. Virtual reality technology has the potential to support the integration of natural human motions into the computer-aided assembly planning environment (Ritchie et al. in Proc I MECH E Part B J Eng 213(5):461–474, 1999). This would allow evaluation of an assembler’s ability to manipulate and assemble parts and result in reduced time and cost for product design. This paper provides a review of the research in virtual assembly and categorizes the different approaches. Finally, critical requirements and directions for future research are presented.

    Biomechanical Soft Tissue Modeling - Techniques, Implementation and Application

    The reaction of soft tissue to applied forces can be calculated with biomechanical simulation algorithms. Several modeling approaches exist. A scheme is suggested which allows the classification of arbitrary modeling approaches with respect to the degree of physical realism contained in the model (physical and descriptive models). Besides well-known approaches such as mass-spring, finite element and particle models, the ChainMail algorithm is investigated. While ChainMail in its original formulation lacked the capability to model inhomogeneous material, it is exceptionally stable and converges in one step to a valid configuration. In this thesis, ChainMail is generalized to the Enhanced ChainMail algorithm, which can model inhomogeneous, volumetric objects and is fast enough for real-time simulation. While it is now possible in principle to simulate and visualize an object in real time, a software architecture is required to couple simulation and visualization. As visualization and simulation have so far evolved independently, they work with different data structures. This multiplicity of data representations leads to problems of data consistency and high memory consumption. A software architecture is developed which provides a universal data structure for several simulation and visualization approaches. The versatility of the developed architecture is demonstrated by two medical simulations. The first is the simulation of an intra-ocular surgery, which makes heavy use of virtual reality techniques. Designed as a training and educational tool, the simulator EyeSi relies on descriptive real-time tissue simulation and visualization. The second deals with the simulation of decompressive craniotomy. The medical problem requires a physical model, as the project's goal is to provide exact predictions of tissue behavior to support surgeons in surgery planning.
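    To make the ChainMail idea concrete, here is a hedged one-dimensional Python sketch: when one element is moved, each neighbour is adjusted only if the spacing leaves an allowed band, and the correction propagates outward in a single pass. Per-element bounds (rather than the global ones assumed here) are what Enhanced ChainMail uses to model inhomogeneous material.

    def chainmail_1d(x, moved_idx, new_pos, min_gap=0.5, max_gap=1.5):
        """Move element moved_idx to new_pos and propagate spacing corrections outward."""
        x = list(x)
        x[moved_idx] = new_pos
        for i in range(moved_idx + 1, len(x)):        # propagate to the right
            gap = x[i] - x[i - 1]
            if gap > max_gap:
                x[i] = x[i - 1] + max_gap
            elif gap < min_gap:
                x[i] = x[i - 1] + min_gap
            else:
                break                                  # spacing already valid, propagation stops
        for i in range(moved_idx - 1, -1, -1):         # propagate to the left
            gap = x[i + 1] - x[i]
            if gap > max_gap:
                x[i] = x[i + 1] - max_gap
            elif gap < min_gap:
                x[i] = x[i + 1] - min_gap
            else:
                break
        return x

    # Dragging the first element pulls the chain along in one pass
    print(chainmail_1d([0.0, 1.0, 2.0, 3.0], moved_idx=0, new_pos=2.0))  # [2.0, 2.5, 3.0, 3.5]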

    Essential techniques for laparoscopic surgery simulation

    Laparoscopic surgery is a complex, minimally invasive operation with a long learning curve before new trainees gain adequate experience to become qualified surgeons. With the development of virtual reality technology, virtual reality-based surgery simulation is playing an increasingly important role in surgical training. The simulation of laparoscopic surgery is challenging because it involves large non-linear soft tissue deformation, frequent surgical tool interaction and a complex anatomical environment. Current research mostly focuses on very specific topics (such as deformation and collision detection) rather than a consistent and efficient framework. Directly applying existing methods cannot achieve high visual/haptic quality and a satisfactory refresh rate at the same time, especially for complex surgery simulation. In this paper, we propose a set of tailored key technologies for laparoscopic surgery simulation, ranging from the simulation of soft tissues with different properties, to the interactions between surgical tools and soft tissues, to the rendering of a complex anatomical environment. Compared with current methods, our tailored algorithms aim to improve performance in terms of accuracy, stability and efficiency. We also abstract and design a set of intuitive parameters that give developers high flexibility to build their own simulators.
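    As one example of the kind of building block involved in tool-tissue collision detection, the Python sketch below shows a uniform spatial hash for broad-phase queries between a tool point and tissue vertices. This is a generic illustration of a common technique, not the specific algorithms proposed in the paper.

    from collections import defaultdict

    def build_hash(points, cell):
        """Bucket point indices by integer grid cell of size `cell`."""
        grid = defaultdict(list)
        for idx, (x, y, z) in enumerate(points):
            grid[(int(x // cell), int(y // cell), int(z // cell))].append(idx)
        return grid

    def query_near(grid, p, cell):
        """Return candidate point indices in the 27 cells around query point p."""
        cx, cy, cz = int(p[0] // cell), int(p[1] // cell), int(p[2] // cell)
        hits = []
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for dz in (-1, 0, 1):
                    hits.extend(grid.get((cx + dx, cy + dy, cz + dz), []))
        return hits  # narrow-phase tests (e.g. exact distance checks) follow on these candidates

    tissue = [(0.0, 0.0, 0.0), (0.1, 0.0, 0.0), (5.0, 5.0, 5.0)]
    grid = build_hash(tissue, cell=0.5)
    print(query_near(grid, (0.05, 0.0, 0.0), cell=0.5))  # [0, 1]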

    Collision Detection and Merging of Deformable B-Spline Surfaces in Virtual Reality Environment

    This thesis presents a computational framework for representing, manipulating and merging rigid and deformable freeform objects in a virtual reality (VR) environment. The core algorithms for collision detection, merging, and physics-based modeling used within this framework assume that all 3D deformable objects are B-spline surfaces. The interactive design tool can be represented as a B-spline surface, an implicit surface or a point, allowing the user a variety of rigid or deformable tools. The collision detection system exploits the fact that the blending matrices used to discretize the B-spline surface are independent of the position of the control points and can therefore be pre-calculated. Complex B-spline surfaces can be generated by merging various B-spline surface patches using the merging algorithm presented in this thesis. Finally, the physics-based modeling system uses a mass-spring representation to determine the deformation and the reaction force values provided to the user. This helps to simulate realistic material behaviour of the model and assists the user in validating the design before performing extensive product detailing or finite element analysis using commercially available CAD software. The novelty of the proposed method stems from the pre-calculated blending matrices used to generate the points for graphical rendering, collision detection, merging of B-spline patches, and the nodes for the mass-spring system. This approach reduces computational time by avoiding the need to solve complex equations for the blending functions of B-splines and to invert large matrices. This alternative approach to mechanical concept design also helps to do away with the need to build prototypes for conceptualization and preliminary validation of the idea, thereby reducing the time, cost and wastage of resources in the concept design phase.
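    The role of the pre-calculated blending matrices can be illustrated with a short Python sketch for a B-spline curve (a surface uses the tensor product of two such matrices): the basis values depend only on the knot vector and the fixed parameter samples, so the matrix is built once and reused as the control points move. The knot vector, sample count and control points below are assumptions for illustration, not the thesis' data.

    import numpy as np

    def bspline_basis(i, k, t, u):
        """Cox-de Boor recursion: value of basis function i of degree k at parameter u."""
        if k == 0:
            return 1.0 if t[i] <= u < t[i + 1] else 0.0
        left = 0.0 if t[i + k] == t[i] else (u - t[i]) / (t[i + k] - t[i]) * bspline_basis(i, k - 1, t, u)
        right = 0.0 if t[i + k + 1] == t[i + 1] else (t[i + k + 1] - u) / (t[i + k + 1] - t[i + 1]) * bspline_basis(i + 1, k - 1, t, u)
        return left + right

    degree = 3
    ctrl = np.array([[0, 0], [1, 2], [2, -1], [3, 3], [4, 0]], dtype=float)
    n = len(ctrl)
    knots = np.concatenate(([0.0] * degree, np.linspace(0, 1, n - degree + 1), [1.0] * degree))

    u_samples = np.linspace(0, 1, 50, endpoint=False)        # fixed parameter samples
    B = np.array([[bspline_basis(i, degree, knots, u) for i in range(n)] for u in u_samples])

    points = B @ ctrl            # curve points for rendering / collision / mass-spring nodes
    ctrl[2] += [0.0, 1.0]        # deform: move one control point
    points_deformed = B @ ctrl   # no basis re-computation needed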

    Volumetric cloud generation using a Chinese brush calligraphy style

    Clouds are an important feature of any real or simulated environment in which the sky is visible. Their amorphous, ever-changing and illuminated features make the sky vivid and beautiful. However, these features increase the complexity of both real-time rendering and modelling. It is difficult to design and build volumetric clouds in an easy and intuitive way, particularly if the interface is intended for artists rather than programmers. We propose a novel modelling system motivated by an ancient painting style, Chinese Landscape Painting, to address this problem. With the use of only one brush and one colour, an artist can paint a vivid and detailed landscape efficiently. In this research, we develop three emulations of a Chinese brush: a skeleton-based brush, a 2D texture footprint and a dynamic 3D footprint, all driven by the motion and pressure of a stylus pen. We propose a hybrid mapping to generate both the body and surface of volumetric clouds from the brush footprints. Our interface integrates these components along with 3D canvas control and GPU-based volumetric rendering into an interactive cloud modelling system. Our cloud modelling system is able to create various types of clouds occurring in nature. User tests indicate that our brush calligraphy approach is preferred to conventional volumetric cloud modelling and that it produces convincing 3D cloud formations in an intuitive and interactive fashion. While traditional modelling systems focus on surface generation of 3D objects, our brush calligraphy technique constructs the interior structure. This forms the basis of a new modelling style for objects with amorphous shape.
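    As a rough illustration of how a pressure-driven 3D brush footprint can build up a cloud volume, the Python sketch below splats a Gaussian footprint into a density grid at each stylus sample. The grid size, falloff and pressure values are assumptions; the thesis' skeleton-based brush, hybrid mapping and GPU rendering are not reproduced here.

    import numpy as np

    def splat_footprint(volume, center, radius, pressure):
        """Add a pressure-scaled Gaussian blob centred at `center` to the density volume."""
        zi, yi, xi = np.indices(volume.shape)
        d2 = (xi - center[0]) ** 2 + (yi - center[1]) ** 2 + (zi - center[2]) ** 2
        volume += pressure * np.exp(-d2 / (2.0 * radius ** 2))
        np.clip(volume, 0.0, 1.0, out=volume)                  # keep densities in [0, 1]

    volume = np.zeros((64, 64, 64), dtype=np.float32)
    stroke = [((20 + i, 32, 32), 0.3 + 0.05 * i) for i in range(10)]   # (position, pressure) stylus samples
    for pos, pressure in stroke:
        splat_footprint(volume, pos, radius=4.0, pressure=pressure)    # the volume is then rendered volumetrically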

    A biomechanics-based articulation model for medical applications

    Computer graphics entered the medical world especially after the arrival of 3D medical imaging. Computer graphics techniques are already integrated in the diagnostic procedure through the visual three-dimensional analysis of computed tomography, magnetic resonance and even ultrasound data. The representations they provide, nevertheless, are static pictures of the patient's body, lacking functional information. We believe that the next step in computer-assisted diagnosis and surgery planning depends on the development of functional 3D models of the human body. It is in this context that we propose a model of articulations based on biomechanics. Such a model is able to simulate joint functionality so as to allow for a number of medical applications. It was developed with the following requirements in mind: it must be simple enough to be implemented on a computer yet realistic enough to allow for medical applications, and it must be visual so that applications can explore the joint in a 3D simulation environment. We therefore propose to combine kinematical motion for the parts that can be considered rigid, such as bones, with physical simulation of the soft tissues. We also deal with the interaction between the different elements of the joint, for which we propose a specific contact management model. Our kinematical skeleton is based on anatomy. Special care has been taken to include anatomical features such as axis displacements, range-of-motion control, and joint coupling. Once a 3D model of the skeleton is built, it can be driven by data coming from motion capture or specified by a specialist, a clinician for instance. Our deformation model is an extension of classical mass-spring systems. A spherical volume is considered around each mass point, and mechanical properties of real materials can be used to parameterize the model. Viscoelasticity, anisotropy and non-linearity of the tissues are simulated. In particular, we propose a method to configure the mass-spring matrix such that objects behave according to a predefined Young's modulus. A contact management model is also proposed to deal with the geometric interactions between the elements inside the joint. After having tested several approaches, we propose a new method for collision detection which measures in constant time the signed distance to the closest point for each point of two meshes subject to collision. We also propose a method for collision response which acts directly on the surface geometry, such that the physical behavior relies on the propagation of reaction forces produced inside the tissue. Finally, we propose a 3D model of a joint combining the three elements: anatomical skeleton motion, biomechanical soft tissue deformation, and contact management. On top of that, we built a virtual hip joint and implemented a set of medical application prototypes. These applications allow for assessment of stress distribution on the articular surfaces, range-of-motion estimation based on ligament constraints, ligament elasticity estimation from clinically measured range of motion, and pre- and post-operative evaluation of stress distribution. Although our model provides physicians with a number of useful variables for diagnosis and surgery planning, it should be improved for effective clinical use. Validation has been done partially; a global clinical validation is still necessary. Patient-specific data are still difficult to obtain, especially individualized mechanical properties of tissues. The characterization of material properties in our soft tissue model could also be improved by including control over the shear modulus.
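    A much-simplified Python sketch of the idea of tying spring stiffness to a target Young's modulus is given below, using the axial-bar analogy k = E*A/L on a regular lattice. The numerical values are assumptions and the thesis configures the mass-spring matrix in a more principled way; this only shows the kind of relation involved.

    def spring_stiffness(E, rest_length, cross_section_area):
        """Stiffness (N/m) of a spring standing in for a bar of Young's modulus E (Pa)."""
        return E * cross_section_area / rest_length

    E_soft_tissue = 50e3      # Pa, assumed order of magnitude for soft tissue
    L = 0.005                 # m, lattice spacing
    A = L * L                 # m^2, cross-section share per spring on a cubic lattice
    k = spring_stiffness(E_soft_tissue, L, A)
    force = k * 0.001         # restoring force for 1 mm of stretch
    print(k, force)           # ~250 N/m of stiffness, ~0.25 N of force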

    Shared control for natural motion and safety in hands-on robotic surgery

    Hands-on robotic surgery is where the surgeon controls the tool's motion by applying forces and torques to the robot holding the tool, allowing the robot-environment interaction to be felt through the tool itself. To further improve results, shared control strategies are used to combine the strengths of the surgeon with those of the robot. One such strategy is active constraints, which prevent motion into regions deemed unsafe or unnecessary. While research on active constraints for rigid anatomy is well established, limited work has been done on dynamic active constraints (DACs) for deformable soft tissue, particularly on strategies that handle multiple sensing modalities. In addition, attaching the tool to the robot imposes the end-effector dynamics onto the surgeon, reducing dexterity and increasing fatigue. Current control policies on these systems only compensate for gravity, ignoring other dynamic effects. This thesis presents several research contributions to shared control in hands-on robotic surgery, which create a more natural motion for the surgeon and extend the use of DACs to point clouds. A novel null-space based optimization technique has been developed which minimizes the end-effector friction, mass, and inertia of redundant robots, creating a more natural motion, one that is closer to the feel of the tool unattached to the robot. By operating in the null space, the surgeon is left in full control of the procedure. A novel DACs approach has also been developed which operates on point clouds. This allows its application to various sensing technologies, such as 3D cameras or CT scans, and therefore to various surgeries. Experimental validation in point-to-point motion trials and a virtual reality ultrasound scenario demonstrates a reduction in work when maneuvering the tool and improvements in accuracy and speed when performing virtual ultrasound scans. Overall, the results suggest that these techniques could increase ease of use for the surgeon and improve patient safety.
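    The null-space idea can be illustrated with a short generic Python sketch: a secondary joint-velocity objective is projected into the null space of the task Jacobian, so it cannot disturb the end-effector motion commanded by the surgeon. The thesis' friction/mass/inertia minimization is a specific and more elaborate choice of the secondary term; the Jacobian and velocities below are placeholders.

    import numpy as np

    def nullspace_velocity(J, x_dot_task, q_dot_secondary):
        """Joint velocities achieving the task twist, plus a secondary motion in the null space."""
        J_pinv = np.linalg.pinv(J)                   # damped least squares would be more robust near singularities
        N = np.eye(J.shape[1]) - J_pinv @ J          # null-space projector of the task Jacobian
        return J_pinv @ x_dot_task + N @ q_dot_secondary

    # 7-DoF arm with a 6-DoF task: one degree of redundancy is left for the secondary goal
    J = np.random.default_rng(0).standard_normal((6, 7))   # placeholder Jacobian
    x_dot = np.array([0.01, 0.0, 0.0, 0.0, 0.0, 0.0])      # desired end-effector twist
    q_dot = nullspace_velocity(J, x_dot, q_dot_secondary=np.full(7, 0.1))
    print(np.allclose(J @ q_dot, x_dot))                   # True: the task motion is unaffected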

    Simulation Method for the Physical Deformation of a Three-Dimensional Soft Body in Augmented Reality-Based External Ventricular Drainage

    Objectives Intraoperative navigation reduces the risk of major complications and increases the likelihood of optimal surgical outcomes. This paper presents an augmented reality (AR)-based simulation technique for ventriculostomy that visualizes brain deformations caused by the movements of a surgical instrument in a three-dimensional brain model. This is achieved by utilizing a position-based dynamics (PBD) physical deformation method on a preoperative brain image. Methods An infrared camera-based AR surgical environment aligns the real-world space with a virtual space and tracks the surgical instruments. For a realistic representation and a reduced simulation computation load, a hybrid geometric model is employed, which combines a high-resolution mesh model and a multiresolution tetrahedron model. Collision handling is executed when a collision between the brain and the surgical instrument is detected. Constraints are used to preserve the properties of the soft body and ensure stable deformation. Results The experiment was conducted once in a phantom environment and once in an actual surgical environment. The tasks of inserting the surgical instrument into the ventricle using only the navigation information presented through the smart glasses and verifying the drainage of cerebrospinal fluid were evaluated. These tasks were successfully completed, as indicated by the drainage, and the deformation simulation speed averaged 18.78 fps. Conclusions This experiment confirmed that the AR-based method for external ventricular drain surgery was beneficial to clinicians.
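    For illustration, the Python sketch below shows the kind of collision handling used in PBD-style simulators: vertices of the deformable model that penetrate a capsule-shaped instrument are projected back onto its surface before the next constraint iteration. The capsule shape, radius and vertex data are assumptions, not the paper's configuration.

    import numpy as np

    def resolve_capsule_collisions(verts, seg_a, seg_b, radius):
        """Project vertices lying inside the capsule (segment a-b, given radius) onto its surface."""
        ab = seg_b - seg_a
        t = np.clip(((verts - seg_a) @ ab) / (ab @ ab), 0.0, 1.0)   # closest-point parameter on the shaft
        closest = seg_a + t[:, None] * ab
        d = verts - closest
        dist = np.linalg.norm(d, axis=1)
        inside = dist < radius                                       # penetrating vertices
        safe = np.where(dist[inside] > 1e-9, dist[inside], 1e-9)
        verts[inside] = closest[inside] + d[inside] / safe[:, None] * radius
        return verts

    verts = np.array([[0.0, 0.05, 0.0], [0.0, 0.5, 0.0]])            # one vertex inside, one outside
    tip, base = np.array([0.0, 0.0, -1.0]), np.array([0.0, 0.0, 1.0])
    print(resolve_capsule_collisions(verts, tip, base, radius=0.1))  # first vertex pushed to the capsule surface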