
    Fusion and visualization of intraoperative cortical images with preoperative models for epilepsy surgical planning and guidance.

    OBJECTIVE: During epilepsy surgery it is important for the surgeon to correlate the preoperative cortical morphology (from preoperative images) with the intraoperative environment. Augmented Reality (AR) provides a solution for combining the real environment with virtual models. However, AR usually requires specialized displays, and its effectiveness in surgery still needs to be evaluated. The objective of this research was to develop an alternative approach that provides enhanced visualization by fusing a direct (photographic) view of the surgical field with the 3D patient model during image-guided epilepsy surgery. MATERIALS AND METHODS: We correlated the preoperative plan with the intraoperative surgical scene, first by a manual landmark-based registration and then by an intensity-based perspective 3D-2D registration for camera pose estimation. The 2D photographic image was then texture-mapped onto the 3D preoperative model using the solved camera pose. In the proposed method, we employ direct volume rendering with GPU-accelerated ray-casting to obtain a perspective view of the brain image. The algorithm was validated in a phantom study and in the clinical environment with a neuronavigation system. RESULTS: In the phantom experiment, the 3D Mean Registration Error (MRE) was 2.43 ± 0.32 mm with a success rate of 100%. In the clinical experiment, the 3D MRE was 5.15 ± 0.49 mm with a 2D in-plane error of 3.30 ± 1.41 mm. A clinical application of our fusion method for enhanced and augmented visualization for integrated image and functional guidance during neurosurgery is also presented. CONCLUSIONS: This paper presents an alternative to a sophisticated AR environment for assisting in epilepsy surgery, whereby a real intraoperative scene is mapped onto the surface model of the brain. In contrast to the AR approach, this method needs no specialized display equipment. Moreover, it requires minimal changes to existing systems and workflow, and is therefore well suited to the OR environment. In the phantom and in vivo clinical experiments, we demonstrate that the fusion method achieves a level of accuracy sufficient for the requirements of epilepsy surgery.
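As a minimal illustration of the 3D MRE metric reported above, the error is simply the mean Euclidean distance between corresponding measured and reference landmark points; the coordinates below are hypothetical, not taken from the study:

```python
from math import dist

def mean_registration_error(measured, reference):
    """3D MRE: mean Euclidean distance between corresponding landmark points."""
    return sum(dist(m, r) for m, r in zip(measured, reference)) / len(measured)

# Hypothetical landmark coordinates in mm (not values from the study).
measured  = [(10.0, 20.0, 30.0), (15.0, 25.0, 35.0)]
reference = [(10.0, 20.0, 33.0), (15.0, 29.0, 35.0)]
print(mean_registration_error(measured, reference))  # 3.5
```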

    Development of a calibration pipeline for a monocular-view structured illumination 3D sensor utilizing an array projector

    Commercial off-the-shelf digital projection systems are commonly used in active structured illumination photogrammetry of macro-scale surfaces due to their relatively low cost, accessibility, and ease of use; they can be described by an inverse pinhole model. The calibration pipeline for a 3D sensor that uses pinhole devices in a projector-camera configuration is already well established. Recently, there have been advances in projection systems offering projection speeds greater than those available from conventional off-the-shelf digital projectors. However, these systems are chip-less and have no projection lens, so they cannot be calibrated with well-established techniques based on the pinhole assumption. This work utilizes such unconventional projection systems, known as array projectors, which contain not one but multiple projection channels that project a temporal sequence of illumination patterns; none of the channels implements a digital projection chip or a projection lens. To work around the calibration problem, previous realizations of a 3D sensor based on an array projector required a stereo-camera setup, with triangulation taking place between the two pinhole-modelled cameras. However, a monocular setup is desired, as a single-camera configuration results in decreased cost, weight, and form factor. This study presents a novel calibration pipeline that realizes a single-camera setup. A generalized intrinsic calibration process without model assumptions was developed that directly samples the illumination frustum of each array projection channel. An extrinsic calibration process was then created that determines the pose of the single camera through a downhill-simplex optimization initialized by particle swarm. Lastly, a method to store the intrinsic calibration with the aid of an easily realizable calibration jig was developed for re-use in arbitrary measurement camera positions, so that intrinsic calibration does not have to be repeated.
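The extrinsic step described above (downhill-simplex optimization initialized by particle swarm) can be sketched as follows. This is a toy illustration of the particle-swarm seeding stage only, with a stand-in quadratic cost in place of the real reprojection cost; the pose bounds and parameter values are assumptions, not taken from the paper:

```python
import random

def pso_seed(cost, bounds, n_particles=30, iters=60, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Particle-swarm search whose best position would seed a downhill-simplex
    (Nelder-Mead) refinement of the camera pose."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_cost = [cost(p) for p in pos]
    g = min(range(n_particles), key=pbest_cost.__getitem__)
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            c = cost(pos[i])
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i][:], c
                if c < gbest_cost:
                    gbest, gbest_cost = pos[i][:], c
    return gbest

# Toy stand-in for the reprojection cost of a 3-DOF camera position;
# the true pose and search bounds are illustrative.
TRUE_POSE = (1.0, -2.0, 3.0)
reproj_cost = lambda p: sum((a - b) ** 2 for a, b in zip(p, TRUE_POSE))
seed_pose = pso_seed(reproj_cost, [(-10.0, 10.0)] * 3)
```

The swarm narrows the search to a basin of attraction; the subsequent simplex refinement (not shown) then polishes the pose locally.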

    Underwater 3D measurements with advanced camera modelling

    A novel concept of camera modelling for underwater 3D measurements based on stereo cameras is introduced. The geometrical description of the ray course subject to refraction in underwater cameras is presented under the assumption of conditions that are typically satisfied or can be achieved approximately. Possible simplifications are shown that allow an approximation of the ray course by classical pinhole modelling. It is shown how the expected measurement errors can be estimated, as well as their influence on the expected 3D measurement result. Final processing of the 3D measurement data according to the accuracy requirements is performed using several kinds of refinement. For example, calibration parameters can be refined, or systematic errors can be decreased by subsequent compensation with suitable error-correction functions. Experimental data from simulations and real measurements obtained by two different underwater 3D scanners are presented and discussed. If the inverse image magnification is larger than about one hundred, the remaining errors caused by refraction effects can usually be neglected and the classical pinhole model can be used for stereo-camera-based underwater 3D measurement systems.
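The refracted ray course mentioned above follows Snell's law at each flat interface. A minimal sketch of the vector form of refraction, assuming a flat port with a known unit normal (a simplification of the full underwater camera model):

```python
from math import sqrt

def refract(d, n, n1, n2):
    """Refract unit direction d at a flat interface with unit normal n
    (pointing against the incoming ray), going from refractive index n1
    to n2 (vector form of Snell's law). Returns None on total internal
    reflection."""
    r = n1 / n2
    cos_i = -sum(a * b for a, b in zip(d, n))
    k = 1.0 - r * r * (1.0 - cos_i * cos_i)
    if k < 0.0:
        return None  # total internal reflection
    return tuple(r * a + (r * cos_i - sqrt(k)) * b for a, b in zip(d, n))

# Ray hitting a flat air/water port at 30 degrees incidence (illustrative values).
d = (0.5, 0.0, sqrt(3.0) / 2.0)    # unit direction, travelling in +z
n = (0.0, 0.0, -1.0)               # port normal, facing the incoming ray
t = refract(d, n, 1.0, 4.0 / 3.0)  # air -> water
```

Tracing each pixel's ray through such interfaces, rather than assuming a single pinhole, is what the advanced camera model replaces when the inverse image magnification is large enough.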

    Improving architectural 3D reconstruction by constrained modelling

    Institute of Perception, Action and Behaviour. This doctoral thesis presents new techniques for improving the structural quality of automatically acquired architectural 3D models. Common architectural properties such as parallelism and orthogonality of walls and linear structures are exploited. The locations of features such as planes and 3D lines are extracted from the model using a probabilistic technique (RANSAC). The relationships between the planes and lines are inferred automatically using a knowledge-based architectural model. A numerical algorithm is then used to optimise the positions and orientations of the features, taking the constraints into account. Small irregularities in the model are removed by projecting them onto the features. Planes and lines in the resulting model are therefore properly aligned to each other, which improves the appearance of the resulting model. Our approach is demonstrated using noisy data from both synthetic and real scenes.
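The RANSAC-based plane extraction mentioned above can be sketched as follows; the distance threshold and iteration count are illustrative assumptions, not values from the thesis:

```python
import random

def fit_plane_ransac(points, threshold=0.05, iterations=200, seed=0):
    """RANSAC plane fit: repeatedly pick 3 random points, form the plane
    through them, and keep the plane with the most inliers (points whose
    distance to the plane is below `threshold`)."""
    rng = random.Random(seed)

    def cross(u, v):
        return (u[1] * v[2] - u[2] * v[1],
                u[2] * v[0] - u[0] * v[2],
                u[0] * v[1] - u[1] * v[0])

    best_plane, best_inliers = None, []
    for _ in range(iterations):
        a, b, c = rng.sample(points, 3)
        n = cross([bi - ai for ai, bi in zip(a, b)],
                  [ci - ai for ai, ci in zip(a, c)])
        norm = sum(x * x for x in n) ** 0.5
        if norm < 1e-12:
            continue  # degenerate (collinear) sample
        n = tuple(x / norm for x in n)
        d = -sum(ni * ai for ni, ai in zip(n, a))
        inliers = [p for p in points
                   if abs(sum(ni * pi for ni, pi in zip(n, p)) + d) < threshold]
        if len(inliers) > len(best_inliers):
            best_plane, best_inliers = (n, d), inliers
    return best_plane, best_inliers

# Synthetic wall: a 5x5 grid on the plane z = 0, plus two off-plane outliers.
pts = [(x * 0.1, y * 0.1, 0.0) for x in range(5) for y in range(5)]
pts += [(0.2, 0.2, 1.0), (0.4, 0.1, -0.8)]
plane, inliers = fit_plane_ransac(pts)
```

In the thesis pipeline, the inlier sets recovered this way would then be fed to the constrained optimisation that enforces parallelism and orthogonality.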

    Detecting stars, galaxies, and asteroids with Gaia

    (Abridged) Gaia aims to make a 3-dimensional map of 1,000 million stars in our Milky Way to unravel its kinematical, dynamical, and chemical structure and evolution. Gaia's on-board detection software discriminates stars from spurious objects such as cosmic rays and Solar protons using parametrised point-spread-function-shape criteria. This study aims to provide an optimum set of parameters for these filters. We developed an emulation of the on-board detection software, which has 20 free, so-called rejection parameters that govern the boundaries between stars on the one hand and sharp or extended events on the other. We evaluate the detection and rejection performance of the algorithm using catalogues of simulated single stars, double stars, cosmic rays, Solar protons, unresolved galaxies, and asteroids. We optimised the rejection parameters, improving - with respect to the functional baseline - the detection performance for single and double stars while, at the same time, improving the rejection performance for cosmic rays and Solar protons. We find that the minimum separation to resolve a close, equal-brightness double star is 0.23 arcsec in the along-scan and 0.70 arcsec in the across-scan direction, independent of the brightness of the primary. Whereas the optimised rejection parameters have no significant impact on the detectability of de Vaucouleurs profiles, they do significantly improve the detection of exponential-disk profiles. The optimised rejection parameters also provide detection gains for asteroids fainter than 20 mag and for fast-moving near-Earth objects fainter than 18 mag, although this gain comes at the expense of a modest detection-probability loss for bright, fast-moving near-Earth objects. The major side effect of the optimised parameters is that spurious ghosts in the wings of bright stars essentially pass unfiltered. Comment: Accepted for publication in A&
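The rejection parameters described above draw boundaries between point-like and non-point-like detections. A deliberately simplified sketch, with two hypothetical thresholds standing in for Gaia's 20 actual rejection parameters:

```python
def classify(detection, sharp_max=0.8, extended_min=3.0):
    """Toy rejection filter: flag a detection as spurious if its PSF is too
    sharp (cosmic ray / Solar proton) or too extended (galaxy), otherwise
    keep it as a star. Both thresholds and the `psf_width` statistic are
    purely illustrative, not Gaia's actual parametrisation."""
    width = detection["psf_width"]  # hypothetical PSF-shape statistic, pixels
    if width < sharp_max:
        return "reject: sharp (cosmic ray / Solar proton)"
    if width > extended_min:
        return "reject: extended (galaxy)"
    return "detect: star"
```

Optimising the real parameters amounts to tuning such boundaries against simulated catalogues so that the star-detection rate rises while the spurious-event rate falls.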

    Vision-assisted modeling for model-based video representations

    Thesis (Ph.D.) -- Massachusetts Institute of Technology, Program in Media Arts & Sciences, 1997. Includes bibliographical references (leaves 134-145). By Shawn C. Becker.