    Markerless monocular tracking system for guided external eye surgery

    This paper presents a novel markerless monocular tracking system aimed at guiding ophthalmologists during external eye surgery. The system performs very accurate tracking of the eye by detecting invariant points using only the textures present in the sclera, i.e., without relying on traditional features such as the pupil and/or corneal reflections, which remain partially or totally occluded in most surgeries. Two well-known algorithms that compute invariant points and correspondences between pairs of images were implemented in the system: the Scale-Invariant Feature Transform (SIFT) and Speeded-Up Robust Features (SURF). Experiments performed on phantom eyes show that, with either algorithm, the system tracks a sphere through a 360° rotation with an error lower than 0.5%. Experiments on images of real eyes also show promising behavior of the system in the presence of blood or surgical instruments during real eye surgery. © 2014 Elsevier Ltd. All rights reserved.
    Monserrat Aranda, C.; Rupérez Moreno, MJ.; Alcañiz Raya, ML.; Mataix, J. (2014). Markerless monocular tracking system for guided external eye surgery. Computerized Medical Imaging and Graphics, 38(8), 785-792. doi:10.1016/j.compmedimag.2014.08.001
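
    As a rough sketch of the kind of feature-based tracking the abstract describes, the snippet below uses OpenCV's SIFT detector to match scleral texture keypoints between two consecutive frames; the file names, the ratio-test threshold, and the use of SIFT alone (SURF requires an opencv-contrib build) are assumptions for illustration, not details taken from the paper.

        # Hedged sketch, not the authors' implementation: match scleral texture
        # keypoints between two frames so eye rotation can be tracked without
        # pupil or corneal landmarks. File names and the ratio threshold are assumed.
        import cv2

        def match_sclera_features(prev_gray, curr_gray, ratio=0.75):
            """Detect SIFT keypoints in both frames and keep ratio-test matches."""
            sift = cv2.SIFT_create()
            kp1, des1 = sift.detectAndCompute(prev_gray, None)
            kp2, des2 = sift.detectAndCompute(curr_gray, None)
            matcher = cv2.BFMatcher(cv2.NORM_L2)
            # Lowe's ratio test keeps only distinctive correspondences.
            good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
                    if m.distance < ratio * n.distance]
            src = [kp1[m.queryIdx].pt for m in good]
            dst = [kp2[m.trainIdx].pt for m in good]
            return src, dst

        prev_frame = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)  # assumed file names
        curr_frame = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)
        pts_prev, pts_curr = match_sclera_features(prev_frame, curr_frame)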

    Molecular genetic identification of skeletal remains from the Second World War Konfin I mass grave in Slovenia

    This paper describes the molecular genetic identification of one third of the skeletal remains of the 88 victims of postwar (June 1945) killings found in the Konfin I mass grave in Slovenia. Living relatives were traced for 36 victims. We analyzed 84 right femurs and compared their genetic profiles to the genetic material of living relatives. We cleaned the bones, removed surface contamination, and ground the bones into powder. Prior to DNA isolation using the Biorobot EZ1 (Qiagen), the powder was decalcified. The nuclear DNA of the samples was quantified using the real-time polymerase chain reaction method. We extracted 0.8 to 100 ng DNA/g of bone powder from 82 bones. Autosomal genetic profiles and Y-chromosome haplotypes were obtained from 98% of the bones, and mitochondrial DNA (mtDNA) haplotypes from 95% of the bones for the HVI region and from 98% of the bones for the HVII region. Genetic profiles of the nuclear and mtDNA were determined for reference persons. For traceability in the event of contamination, we created an elimination database including the nuclear and mtDNA genetic profiles of all persons who had been in contact with the skeletal remains. When comparing genetic profiles, we matched 28 of the 84 bones analyzed with living relatives (brothers, sisters, sons, daughters, nephews, or cousins). The statistical analyses showed a high confidence of correct identification for all 28 victims in the Konfin I mass grave (posterior probability ranged from 99.9% to more than 99.999999%).
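
    The reported posterior probabilities of correct identification come from combining kinship likelihood ratios with a prior via Bayes' theorem; the minimal sketch below shows that computation under an assumed two-hypothesis model, with an illustrative likelihood ratio and a flat 1/88 prior that are not values taken from the study.

        # Minimal sketch of converting a kinship likelihood ratio (LR) into a
        # posterior probability of identity; the prior and the example LR below
        # are illustrative assumptions, not figures reported in the paper.
        def posterior_probability(likelihood_ratio, prior):
            """Bayes' theorem for a two-hypothesis identification problem."""
            numerator = likelihood_ratio * prior
            return numerator / (numerator + (1.0 - prior))

        # e.g. a flat prior of 1/88 (one of the 88 victims) and an LR of 1e6
        print(posterior_probability(1e6, 1.0 / 88))  # about 0.99991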

    Topological SLAM Using Fast Vision Techniques

    In this paper we propose a method for vision-only topological simultaneous localisation and mapping (SLAM). Our approach does not use motion or odometric information but only a sequence of colour histograms from visited places. In particular, we address the perceptual aliasing problem, which arises in topological navigation when only external observations are used. We propose a Bayesian inference method to incrementally build a topological map by inferring spatial relations from the sequence of observations while simultaneously estimating the robot's location. The algorithm aims to build a small map that is consistent with local adjacency information extracted from the sequence of measurements. Local adjacency information is incorporated to disambiguate places which would otherwise appear to be the same. Experiments in an indoor environment show that the proposed technique is capable of dealing with perceptual aliasing using visual observations only and successfully performs topological SLAM.
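
    As an assumed illustration of the appearance model (not the authors' code), the sketch below builds normalised colour histograms as compact place signatures and scores how well a new observation matches each mapped place, which is the kind of likelihood a Bayesian topological filter would use to update its belief over places.

        # Hedged sketch: colour-histogram place signatures and an appearance
        # likelihood over mapped places; the bin counts and the Bhattacharyya
        # similarity are assumptions, not details from the paper.
        import cv2
        import numpy as np

        def colour_histogram(bgr_image, bins=(8, 8, 8)):
            """Normalised 3-D BGR histogram used as a compact place signature."""
            hist = cv2.calcHist([bgr_image], [0, 1, 2], None, list(bins),
                                [0, 256, 0, 256, 0, 256])
            return cv2.normalize(hist, hist).flatten()

        def place_likelihoods(observation_hist, map_hists):
            """Similarity of the observation to each mapped place (higher = more alike)."""
            sims = np.array([1.0 - cv2.compareHist(observation_hist, h,
                                                   cv2.HISTCMP_BHATTACHARYYA)
                             for h in map_hists])
            return sims / sims.sum()  # crude normalisation into a likelihood vector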

    On the use of optical flow for scene change detection and description

    We propose the use of optical flow information as a method for detecting and describing changes in the environment from the perspective of a mobile camera. We analyze the characteristics of the optical flow signal and demonstrate how robust flow vectors can be generated and used to detect depth discontinuities and appearance changes at key locations. To achieve this, we present a full discussion of camera positioning, distortion compensation, noise filtering, and parameter estimation. We then extract statistical attributes from the flow signal to describe the location of the scene changes, and we employ clustering and the dominant shape of the flow vectors to increase descriptiveness. Once a database of nodes (where a node is a detected scene change) and their corresponding flow features has been created, matching can be performed whenever nodes are encountered, so that topological localization can be achieved. We retrieve the most likely node according to the Mahalanobis and chi-square distances between the current frame and the database. The results illustrate the applicability of the technique for detecting and describing scene changes under diverse lighting conditions, considering indoor and outdoor environments and different robot platforms.
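
    The sketch below is an assumed illustration of the pipeline just described: dense optical flow between consecutive frames, a small statistical descriptor of the flow field, and Mahalanobis-distance retrieval of the closest stored node; the descriptor components and the Farneback parameters are choices made here, not the paper's.

        # Hedged sketch: flow-field descriptor plus Mahalanobis retrieval of the
        # most similar node. Descriptor choice and parameters are assumptions.
        import cv2
        import numpy as np
        from scipy.spatial.distance import mahalanobis

        def flow_descriptor(prev_gray, curr_gray):
            """Mean/std of flow magnitude plus the median flow direction."""
            flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                                0.5, 3, 15, 3, 5, 1.2, 0)
            mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
            return np.array([mag.mean(), mag.std(), np.median(ang)])

        def best_matching_node(descriptor, node_descriptors):
            """Index of the stored node closest to the query (Mahalanobis distance)."""
            stacked = np.vstack(node_descriptors)
            cov_inv = np.linalg.pinv(np.cov(stacked, rowvar=False))
            dists = [mahalanobis(descriptor, d, cov_inv) for d in stacked]
            return int(np.argmin(dists))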