
    Appearance Modelling and Reconstruction for Navigation in Minimally Invasive Surgery

    Minimally invasive surgery is playing an increasingly important role in patient care. Whilst its direct patient benefits in terms of reduced trauma, improved recovery and shortened hospitalisation have been well established, there is a sustained need for improved training in the existing procedures and for the development of new smart instruments that tackle the issues of visualisation, ergonomic control, and haptic and tactile feedback. For endoscopic intervention, the small field of view in the presence of complex anatomy can easily disorient the operator, as the tortuous access pathway is not always easy to predict and control with standard endoscopes. Effective training through simulation devices, based on either virtual-reality or mixed-reality simulators, can help to improve the spatial awareness, consistency and safety of these procedures. This thesis examines the use of endoscopic videos for both simulation and navigation purposes. More specifically, it addresses the challenging problem of how to build high-fidelity, subject-specific simulation environments for improved training and skills assessment. Issues related to mesh parameterisation and texture blending are investigated. With the maturity of computer vision in both 3D shape reconstruction and simultaneous localisation and mapping, vision-based techniques have attracted significant interest in recent years for surgical navigation. The thesis also tackles the problem of how to use vision-based techniques to provide a detailed 3D map and a dynamically expanded field of view, improving spatial awareness and avoiding operator disorientation. The key advantage of this approach is that it does not require additional hardware, and thus introduces minimal interference to the existing surgical workflow. The derived 3D map can be effectively integrated with pre-operative data, allowing both global and local 3D navigation that takes into account structural and appearance changes of the tissue. Both simulation and laboratory-based experiments are conducted throughout this research to assess the practical value of the proposed methods.
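    As a rough illustration of the dynamic view-expansion idea described above, the sketch below stitches each new endoscopic frame into a growing mosaic using feature matching and a RANSAC homography. This is a minimal 2D approximation under assumed inputs, not the thesis's actual pipeline: it assumes OpenCV and NumPy, and a single homography is only valid for near-planar tissue or small camera motion, whereas the thesis builds a full 3D map.

```python
import cv2
import numpy as np

def expand_view(mosaic, frame):
    """Warp `frame` into the mosaic's coordinate system via an ORB/RANSAC
    homography and fill in previously unseen pixels (a crude stand-in for
    vision-based field-of-view expansion)."""
    orb = cv2.ORB_create(1000)
    g_m = cv2.cvtColor(mosaic, cv2.COLOR_BGR2GRAY)
    g_f = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    kp_m, des_m = orb.detectAndCompute(g_m, None)
    kp_f, des_f = orb.detectAndCompute(g_f, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_f, des_m), key=lambda m: m.distance)[:200]
    src = np.float32([kp_f[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_m[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    warped = cv2.warpPerspective(frame, H, (mosaic.shape[1], mosaic.shape[0]))
    unseen = mosaic.sum(axis=2) == 0   # mosaic pixels not yet covered
    mosaic[unseen] = warped[unseen]
    return mosaic
```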

    Real-Time Virtual Viewpoint Generation on the GPU for Scene Navigation


    Image-based procedural texture matching and transformation

    In this thesis, we present an approach to finding a procedural representation of a texture that replicates a given texture image, which we call image-based procedural texture matching. Procedural representations are frequently used for many aspects of computer-generated imagery; however, the ability to use procedural textures is limited by the difficulty inherent in finding a suitable procedural representation to match a desired texture. More importantly, determining an appropriate set of parameters to approximate the sample texture is a difficult task for a graphic artist.

    The textural characteristics of many real-world objects change over time, so we are also interested in how textured objects in a graphical animation could be made to change automatically. We would like this automatic texture transformation to be based on different texture samples in a time-dependent manner. This notion, which is a natural extension of procedural texture matching, involves the creation of a smoothly varying sequence of texture images, while allowing the graphic artist to control various characteristics of the texture sequence.

    Given a library of procedural textures, our approach uses a perceptually motivated texture similarity measure to identify which procedural textures in the library may produce a suitable match. Our work assumes that at least one procedural texture in the library is capable of approximating the desired texture. Because exhaustive search of all parameter combinations for each procedural texture is not computationally feasible, we perform a two-stage search on the candidate procedural textures. First, a global search is performed over pre-computed samples from the given procedural texture to locate promising parameter settings. Second, these parameter settings are optimised using a local search method to refine the match to the desired texture.

    The characteristics of a procedural texture generally do not vary uniformly for uniform parameter changes. That is, in some areas of the parameter domain of a procedural texture (the set of all valid parameter settings for that procedural texture) small changes may produce large variations in the resulting texture, while in other areas the same changes may produce no variation at all. In this thesis, we present an adaptive random sampling algorithm which captures the texture range of a procedural texture (the set of all images it can produce) by maintaining a sampling density consistent with the amount of change occurring in that region of the parameter domain.

    Texture transformations may not always be confined to a single procedural texture, and we therefore describe an approach to finding transitional points from one procedural texture to another. We present an algorithm for finding a path through the texture space formed by combining the texture ranges of the relevant procedural textures and their transitional points.

    Several examples of image-based texture matching and texture transformations are shown. Finally, potential limitations of this work, as well as future directions, are discussed.
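    A minimal sketch of the two-stage search described above might look like the following. Here `render` (the procedural texture evaluated at a parameter vector) is a hypothetical stand-in, and the crude statistics-based descriptor stands in for the thesis's perceptually motivated similarity measure.

```python
import numpy as np
from scipy.optimize import minimize

def features(img):
    """Crude texture descriptor: first-order statistics of the image
    and of its gradients (a stand-in for a perceptual measure)."""
    gy, gx = np.gradient(img.astype(float))
    return np.array([img.mean(), img.std(),
                     np.abs(gx).mean(), np.abs(gy).mean()])

def dissimilarity(a, b):
    return np.linalg.norm(features(a) - features(b))

def match(render, samples, target):
    """samples: list of (params, image) pre-computed from the procedural
    texture.  Stage 1: global search over the pre-computed samples.
    Stage 2: derivative-free local refinement around the best sample."""
    best_params, _ = min(samples, key=lambda s: dissimilarity(s[1], target))
    res = minimize(lambda p: dissimilarity(render(p), target),
                   x0=np.asarray(best_params, dtype=float),
                   method="Nelder-Mead")
    return res.x
```

    A derivative-free method such as Nelder-Mead is a natural choice for the second stage, since procedural textures are generally not differentiable with respect to their parameters.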

    Patient-specific anatomical illustration via model-guided texture synthesis

    Medical illustrations can make powerful use of textures to attractively, effectively, and efficiently visualize the appearance of the surface or cut surface of anatomic structures. They can do this by suggesting the anatomic structure's physical composition and clarifying its identity and 3D shape. Current visualization methods are only capable of conveying detailed information about the orientation, internal structure, and other local properties of anatomical objects for a typical individual, not for a particular patient. Since the shape of an individual patient's anatomy can be derived from CT or MRI, it is important to apply these illustrative techniques to those particular shapes. In this research, patient-specific anatomical illustrations are created by model-guided texture synthesis (MGTS). Given 2D exemplar textures and model-based guidance information as input, MGTS uses exemplar-based texture synthesis techniques to create patient-specific surface and solid textures. It consists of three main components. The first component is a novel texture metamorphosis approach for creating interpolated exemplar textures from two given exemplar textures; it uses an energy optimization scheme derived from optimal control principles that exploits intensity and structure information in obtaining the transformation. The second component creates the model-based guidance information, such as directions and layers, for the specific model, using coordinates implied by discrete medial 3D anatomical models (m-reps). The last component accomplishes exemplar-based synthesis of textures whose characteristics vary spatially on and inside the 3D models; it combines the exemplar textures from the first component with the guidance information from the second component to synthesize high-quality, high-resolution solid and surface textures. Patient-specific illustrations with a variety of textures for different anatomical models, such as muscles and bones, are shown to be useful for clinicians in comprehending the shapes of the models under radiation dose and in distinguishing the models from one another.
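    The guided-synthesis idea can be sketched as per-pixel exemplar matching in the style of Efros-Leung. The following is a deliberately naive, unoptimized illustration under assumed inputs, not the actual MGTS method: `ex_guide` and `out_guide` are hypothetical scalar guidance channels (for example, a layer coordinate derived from the m-rep), aligned with the exemplar and the output respectively.

```python
import numpy as np

def synthesize(exemplar, ex_guide, out_guide, n=7, w_guide=2.0, seed=0):
    """Fill the output in raster order, copying each pixel from the exemplar
    position whose surrounding window (plus guidance value) best matches the
    partially built output.  Random initialisation stands in for the causal
    neighbourhood handling of a full implementation, and the brute-force
    candidate scan makes this quadratic and slow -- it is a sketch only."""
    rng = np.random.default_rng(seed)
    h, w = exemplar.shape
    H, W = out_guide.shape
    half = n // 2
    out = exemplar[rng.integers(0, h, (H, W)),
                   rng.integers(0, w, (H, W))].astype(float)

    # Candidate source centres whose full window stays inside the exemplar.
    cand = [(cy, cx) for cy in range(half, h - half)
                     for cx in range(half, w - half)]

    for y in range(half, H - half):
        for x in range(half, W - half):
            patch = out[y - half:y + half + 1, x - half:x + half + 1]
            best, best_cost = None, np.inf
            for cy, cx in cand:
                win = exemplar[cy - half:cy + half + 1,
                               cx - half:cx + half + 1]
                cost = np.sum((patch - win) ** 2)
                # Guidance term steers the choice of source region,
                # e.g. matching the m-rep layer of output and exemplar.
                cost += w_guide * (out_guide[y, x] - ex_guide[cy, cx]) ** 2
                if cost < best_cost:
                    best_cost, best = cost, (cy, cx)
            out[y, x] = exemplar[best]
    return out
```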