
    Enhanced Reality Visualization in a Surgical Environment

    Enhanced reality visualization is the process of enhancing an image by adding information that is not present in the original image. A wide variety of information can be added, ranging from hidden lines or surfaces to textual or iconic data about a particular part of the image. Enhanced reality visualization is particularly well suited to neurosurgery. By rendering brain structures that are not visible, at the correct location in an image of a patient's head, it essentially provides the surgeon with X-ray vision: the spatial relationships between brain structures can be visualized before the craniotomy, and during surgery the surgeon can see what lies beneath the next layer before cutting through it. Given a video image of the patient and a three-dimensional model of the patient's brain, the problem enhanced reality visualization faces is to render the model from the correct viewpoint and overlay it on the original image. The relationship between the coordinate frames of the patient, the patient's internal anatomy scans, and the image plane of the camera observing the patient must be established. This problem is closely related to the camera calibration problem. This report presents a new approach to finding this relationship and develops a system for performing enhanced reality visualization in a surgical environment. Immediately prior to surgery, a few circular fiducials are placed near the surgical site. An initial registration of video and internal data is performed using a laser scanner. Following this, our method is fully automatic, runs in near real time, is accurate to within a pixel, allows both patient and camera motion, automatically corrects for changes to the internal camera parameters (focal length, focus, aperture, etc.), and requires only a single image.
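
    The overlay step described above, rendering the internal model from the camera's viewpoint once the coordinate-frame relationship is known, reduces to a standard pinhole projection. The sketch below is a minimal illustration under that assumption, not the report's implementation; the matrices K, R, t and all function names are invented for the example.

```python
import numpy as np

def project_points(points_3d, K, R, t):
    """Project patient-frame 3D model points into the image plane.

    points_3d -- (N, 3) points in the patient coordinate frame
    K         -- (3, 3) camera intrinsic matrix
    R, t      -- rotation (3, 3) and translation (3,) taking patient
                 coordinates into the camera frame
    """
    cam = points_3d @ R.T + t        # patient frame -> camera frame
    uvw = cam @ K.T                  # apply the intrinsics
    return uvw[:, :2] / uvw[:, 2:3]  # perspective divide -> pixel coords

def overlay(frame, pixels, color=(0, 255, 0)):
    """Splat projected model points onto the video frame (nearest pixel)."""
    h, w = frame.shape[:2]
    for u, v in np.round(pixels).astype(int):
        if 0 <= u < w and 0 <= v < h:
            frame[v, u] = color
    return frame
```

    A full system would rasterize the model's surfaces rather than splat points, but the coordinate chain (patient frame to camera frame to pixels) is the same.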

    Automatically Recovering Geometry and Texture from Large Sets of Calibrated Images

    Three-dimensional models that contain both geometry and texture have numerous applications, such as urban planning, physical simulation, and virtual environments. A major focus of computer vision (and recently graphics) research is the automatic recovery of three-dimensional models from two-dimensional images. After many years of research this goal is yet to be achieved; most practical modeling systems require substantial human input and, unlike automatic systems, are not scalable. This thesis presents a novel method for automatically recovering dense surface patches using large sets (thousands) of calibrated images taken from arbitrary positions within the scene. Physical instruments, such as the Global Positioning System (GPS), inertial sensors, and inclinometers, are used to estimate the position and orientation of each image. Essentially, the problem is to find corresponding points in each of the images; once a correspondence has been established, calculating its three-dimensional position is simply a matter of geometry. Long-baseline images improve the accuracy, while short-baseline images and the sheer number of images greatly simplify the correspondence problem. The initial stage of the algorithm is completely local and scales linearly with the number of images. Subsequent stages are global in nature, exploit geometric constraints, and scale quadratically with the complexity of the underlying scene. We describe techniques for: 1) detecting and localizing surface patches; 2) refining camera calibration estimates and rejecting false-positive surfels; and 3) grouping surface patches into surfaces and growing the surface along a two-dimensional manifold. We also discuss a method for producing high-quality, textured three-dimensional models from these surfaces. Some of the most important characteristics of this approach are that it: 1) uses and refines noisy calibration estimates; 2) compensates for large variations in illumination; 3) tolerates significant soft occlusion (e.g. tree branches); and 4) associates, at a fundamental level, an estimated normal (i.e. no frontal-planar assumption) and texture with each surface patch.
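
    The "simply a matter of geometry" step, turning an established correspondence into a three-dimensional position, is conventionally done by linear triangulation. Below is a minimal two-view sketch using the standard direct linear transform; the projection matrices P1, P2 and the function name are assumptions for illustration, not the thesis's code.

```python
import numpy as np

def triangulate(p1, p2, P1, P2):
    """Linear (DLT) triangulation of one correspondence.

    p1, p2 -- (u, v) pixel coordinates of the same scene point in two images
    P1, P2 -- (3, 4) camera projection matrices for those images
    Returns the 3D point in world coordinates.
    """
    u1, v1 = p1
    u2, v2 = p2
    # Each view contributes two linear constraints on the homogeneous point X.
    A = np.array([
        u1 * P1[2] - P1[0],
        v1 * P1[2] - P1[1],
        u2 * P2[2] - P2[0],
        v2 * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)   # least-squares null vector of A
    X = Vt[-1]
    return X[:3] / X[3]           # de-homogenise
```

    With more than two calibrated views, one pair of rows per image is appended to A and the same least-squares solve applies, which is how a large image set tightens the estimate.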

    Dense Depth Maps from Epipolar Images

    Recovering three-dimensional information from two-dimensional images is the fundamental goal of stereo techniques. The problem of recovering depth (three-dimensional information) from a set of images is essentially the correspondence problem: given a point in one image, find the corresponding point in each of the other images. Finding potential correspondences usually involves matching some image property. If the images are taken from nearby positions, they will vary only slightly, simplifying the matching process. Once a correspondence is known, solving for the depth is simply a matter of geometry. Real images are composed of noisy, discrete samples, so the calculated depth will contain error. This error is a function of the baseline, the distance between the images: longer baselines yield more precise depths. This leads to a conflict: short baselines simplify the matching process but produce imprecise results; long baselines produce precise results but complicate the matching process. In this paper, we present a method for generating dense depth maps from large sets (thousands) of images taken from arbitrary positions. Long-baseline images improve the accuracy, while short-baseline images and the sheer number of images greatly simplify the correspondence problem, removing nearly all ambiguity. The algorithm presented is completely local and, for each pixel, generates a distribution of evidence over depth and surface normal. In many cases the distribution contains a clear and distinct global maximum; the location of this peak determines the depth, and its shape can be used to estimate the error. The distribution can also be used to perform a maximum-likelihood fit of models directly to the images. We anticipate that the ability to perform maximum-likelihood estimation from purely local calculations will prove extremely useful in constructing three-dimensional models from large sets of images.
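
    The per-pixel "evidence versus depth" idea can be sketched as a depth sweep: hypothesize a depth for a reference pixel, reproject that hypothesis into the other images, and accumulate a photometric vote. The code below is a deliberately simplified illustration (it fixes the surface normal and matches single intensities, whereas the paper sweeps over normal as well); all names and the Gaussian vote are assumptions.

```python
import numpy as np

def depth_evidence(ref_img, ref_uv, images, poses, K, depths):
    """Accumulate matching evidence for one reference pixel over candidate
    depths.  Returns a 1D evidence array; argmax gives the depth estimate
    and the peak's shape indicates its error.

    ref_img -- reference image, (H, W) grayscale
    ref_uv  -- integer (u, v) pixel in the reference image
    images  -- list of other grayscale images
    poses   -- list of (R, t) taking reference-camera coords into each
               image's camera frame
    K       -- shared (3, 3) intrinsic matrix
    depths  -- 1D array of candidate depths
    """
    ray = np.linalg.inv(K) @ np.array([ref_uv[0], ref_uv[1], 1.0])
    ref_val = float(ref_img[ref_uv[1], ref_uv[0]])
    evidence = np.zeros(len(depths))
    for i, d in enumerate(depths):
        X = ray * d                          # 3D hypothesis along the ray
        for img, (R, t) in zip(images, poses):
            uvw = K @ (R @ X + t)            # reproject into the other view
            if uvw[2] <= 0:
                continue                     # behind that camera
            u = int(round(uvw[0] / uvw[2]))
            v = int(round(uvw[1] / uvw[2]))
            if 0 <= v < img.shape[0] and 0 <= u < img.shape[1]:
                # Vote is large when the intensities agree.
                evidence[i] += np.exp(-(float(img[v, u]) - ref_val) ** 2 / 100.0)
    return evidence
```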

    Do Clinical Guidelines for Whole Body Computerised Tomography in Trauma Improve Diagnostic Accuracy and Reduce Unnecessary Investigations? A Systematic Review and Narrative Synthesis.

    Introduction: Whole-body computerised tomography has become a standard of care for the investigation of major trauma patients. However, its use varies widely, and current clinical guidelines are not universally accepted. We undertook a systematic review of the literature to determine whether clinical guidelines for whole-body computerised tomography in trauma increase its diagnostic accuracy.
    Materials and methods: A systematic review of Medline, CINAHL and the Cochrane database, supplemented by a manual search of relevant papers, was undertaken, with narrative synthesis. Studies comparing clinical guidelines to physician gestalt for the use of whole-body computerised tomography in adult trauma were included.
    Results: A total of 887 papers were identified from the electronic databases, and one from manual searches. Of these, seven papers fulfilled the inclusion criteria. Two papers compared clinical guidelines with routine practice: one found increased diagnostic accuracy while the other did not. Two papers investigated the performance of established clinical guidelines and demonstrated moderate sensitivity and low specificity. Two papers compared different components of established triage tools in trauma. One paper devised a de novo clinical decision rule and demonstrated good diagnostic accuracy with the tool. The outcome criteria used to define a ‘positive’ scan varied widely, making direct comparisons between studies impossible.
    Conclusions: Current clinical guidelines for whole-body computerised tomography in trauma may increase the sensitivity of the investigation, but the evidence to support this is limited. There is a need to standardise the definition of a ‘clinically significant’ finding on CT to allow better comparison of diagnostic studies.
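
    For readers unfamiliar with the diagnostic-accuracy terms used here, the arithmetic behind "moderate sensitivity and low specificity" is a simple 2x2 tabulation; the counts below are hypothetical, not drawn from any included study.

```python
# Hypothetical 2x2 counts for a WBCT triage rule vs. a reference standard
tp, fn = 170, 30    # rule-positive / rule-negative among injured patients
fp, tn = 400, 400   # rule-positive / rule-negative among uninjured patients

sensitivity = tp / (tp + fn)  # 0.85 -- moderate: some injuries still missed
specificity = tn / (tn + fp)  # 0.50 -- low: half of uninjured patients scanned

print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
```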

    Two-phonon scattering of magnetorotons in fractional quantum Hall liquids

    We study the phonon-assisted dissociation of a magnetoroton, in a fractional quantum Hall liquid, into an unbound pair of quasiparticles. Whilst the dissociation is forbidden to first order in the electron-phonon interaction, it can occur as a two-phonon process. Depending on the final separation between the quasiparticles, the dissociation is either a single event involving the absorption of one phonon and the emission of another phonon of similar energy, or a two-phonon diffusion of a quasiexciton in momentum space. The dependence of the magnetoroton dissociation time on the filling factor of the incompressible liquid is found.
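
    The statement that the process is forbidden to first order but allowed at second order corresponds to the standard second-order golden-rule rate, with one phonon absorbed and one emitted via virtual intermediate states; the generic form below uses conventional notation and is not taken from the paper.

```latex
\Gamma_{i \to f} \;=\; \frac{2\pi}{\hbar}
\left| \sum_{m} \frac{\langle f | H_{e\text{-}\mathrm{ph}} | m \rangle
                      \langle m | H_{e\text{-}\mathrm{ph}} | i \rangle}
                     {E_i - E_m} \right|^{2}
\delta\!\left(E_f - E_i\right)
```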

    Realtime Camera Calibration for Enhanced Reality Visualization

    The problem which must be solved to make realtime enhanced reality visualization possible is basically the camera calibration problem: the relationship between the coordinate frames of the patient, the patient's internal anatomy scans, and the image plane of the camera observing the patient must be established. This paper presents a new approach to finding this relationship and develops a system for performing enhanced reality visualization. Given the locations of a few fiducials, our method is fully automatic, runs in near real time, is accurate to a fraction of a pixel, allows both patient and camera motion, automatically corrects for changes to the internal camera parameters (focal length, focus, aperture, etc.), and requires only a single video image.
    1 Introduction
    Enhanced reality visualization is the process of adding information to a real image. We take a model of the patient obtained from MR or CT data and overlay it on a video image of the patient. The enhanced imag..
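
    One common way to recover the camera-to-patient relationship from a few known fiducials in a single image is a perspective-n-point solve. The sketch below uses OpenCV's solvePnP as a stand-in; it is not the paper's method (which also re-estimates the internal camera parameters), and every coordinate value is fabricated for illustration.

```python
import numpy as np
import cv2

# Known 3D fiducial positions in the patient/scan coordinate frame (metres).
# These six values are made up for the example.
object_points = np.array([
    [0.00, 0.00, 0.00],
    [0.05, 0.00, 0.00],
    [0.00, 0.05, 0.00],
    [0.05, 0.05, 0.01],
    [0.02, 0.03, 0.04],
    [0.06, 0.02, 0.03],
], dtype=np.float64)

# Their detected pixel locations in one video frame (also illustrative).
image_points = np.array([
    [320.0, 240.0],
    [420.0, 238.0],
    [322.0, 340.0],
    [424.0, 342.0],
    [368.0, 300.0],
    [440.0, 280.0],
], dtype=np.float64)

K = np.array([[800.0,   0.0, 320.0],   # assumed camera intrinsics
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, None)
R, _ = cv2.Rodrigues(rvec)             # rotation: patient frame -> camera
# R and tvec now let us render the MR/CT model from the camera's viewpoint.
```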

    Geometry and Texture from Thousands of Images
