
    Optic nerve head segmentation

    Reliable and efficient optic disk localization and segmentation are important tasks in automated retinal screening. General-purpose edge detection algorithms often fail to segment the optic disk due to fuzzy boundaries, inconsistent image contrast, or missing edge features. This paper presents an algorithm for the localization and segmentation of the optic nerve head boundary in low-resolution images (about 20 ÎŒm/pixel). Optic disk localization is achieved using specialized template matching, and segmentation by a deformable contour model. The latter uses a global elliptical model and a local deformable model with variable, edge-strength-dependent stiffness. The algorithm is evaluated against a randomly selected database of 100 images from a diabetic screening programme. Ten images were classified as unusable; the others were of variable quality. The localization algorithm succeeded on all but one usable image; the contour estimation algorithm was qualitatively assessed by an ophthalmologist as having Excellent-Fair performance in 83% of cases, and performs well even on blurred images.
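
    The abstract describes the localization step only at a high level. As a rough illustration of template-based localization, the sketch below uses OpenCV's normalized cross-correlation as a generic stand-in for the paper's specialized template matching; the file names, grayscale preprocessing, and score handling are assumptions, not the authors' implementation.

```python
# Minimal template-matching sketch for optic disk localization (illustrative
# stand-in for the paper's specialized template matching).
import cv2

def localize_optic_disk(fundus_path: str, template_path: str):
    """Return the (x, y) centre of the best template match in a fundus image."""
    image = cv2.imread(fundus_path, cv2.IMREAD_GRAYSCALE)
    template = cv2.imread(template_path, cv2.IMREAD_GRAYSCALE)
    if image is None or template is None:
        raise FileNotFoundError("could not read the fundus image or the template")

    # Normalized cross-correlation is tolerant of global brightness changes.
    response = cv2.matchTemplate(image, template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(response)

    th, tw = template.shape
    centre = (max_loc[0] + tw // 2, max_loc[1] + th // 2)
    return centre, max_val  # a low max_val can be used to reject poor localizations

if __name__ == "__main__":
    centre, score = localize_optic_disk("fundus.png", "disk_template.png")
    print(f"optic disk centre estimate: {centre}, match score: {score:.2f}")
```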

    Failure Detection for Laser-based SLAM in Urban and Peri-Urban Environments

    Simultaneous Localization And Mapping (SLAM) is considered one of the key solutions for making mobile robots truly autonomous. Based mainly on perceptive information, the SLAM concept is assumed to solve localization and simultaneously provide a map of the surrounding environment. In this paper, we study the limitations of SLAM and propose an approach to detect, a priori, potential failure scenarios for 2D laser-based SLAM methods. Our approach makes use of raw sensor data, which makes it independent of the underlying SLAM implementation, to extract a vector of relevant descriptors. This descriptor vector is then used together with a decision-making algorithm to detect failure scenarios. Our approach is evaluated with different decision algorithms in three realistic experiments.
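
    As a hedged sketch of the idea of extracting a descriptor vector from raw laser data and passing it to a decision-making algorithm, the code below computes a few simple scan statistics and feeds them to a generic classifier; the specific descriptors and the choice of a random forest are illustrative assumptions, not the descriptors or decision algorithms used in the paper.

```python
# Illustrative only: simple descriptors from a raw 2D laser scan, classified
# by a generic decision-making algorithm to flag likely SLAM failure scenarios.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def scan_descriptors(ranges: np.ndarray, max_range: float = 30.0) -> np.ndarray:
    """Summarize one scan as a small feature vector, independent of the SLAM backend."""
    valid = ranges[np.isfinite(ranges) & (ranges < max_range)]
    if valid.size < 2:
        return np.array([max_range, 0.0, 1.0, 0.0])
    return np.array([
        valid.mean(),                      # average obstacle distance
        valid.std(),                       # geometric variability of the scene
        1.0 - valid.size / ranges.size,    # fraction of out-of-range beams
        np.abs(np.diff(valid)).mean(),     # local roughness of the range profile
    ])

# Typical use (training scans and failure labels are placeholders):
#   X = np.stack([scan_descriptors(s) for s in training_scans])
#   clf = RandomForestClassifier().fit(X, failure_labels)
#   risk = clf.predict(scan_descriptors(new_scan).reshape(1, -1))
```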

    Partial shape matching using CCP map and weighted graph transformation matching

    Matching and detecting similarity or dissimilarity between images is a fundamental problem in image processing. Different matching algorithms have been proposed in the literature to solve this problem. Despite their novelty, these algorithms are mostly inefficient and cannot perform properly in noisy situations. In this thesis, we solve most of the problems of previous methods by using a reliable algorithm for segmenting the image contour map, called the CCP Map, and a new matching method. In our algorithm, we use a local shape descriptor that is fast to compute, invariant to affine transforms, and robust for dealing with non-rigid objects and occlusion. After finding the best match for each contour, we need to verify whether the contours are correctly matched. For this, we use the Weighted Graph Transformation Matching (WGTM) approach, which is capable of removing outliers based on their adjacency and geometric relationships. WGTM works properly for both rigid and non-rigid objects and is robust to high-order distortions. To evaluate our method, the ETHZ dataset, which includes five diverse classes of objects (bottles, swans, mugs, giraffes, Apple logos), is used. Finally, our method is compared to several well-known methods proposed by other researchers in the literature. While our method shows results comparable to the benchmarks in terms of recall and the precision of boundary localization, it significantly improves the average precision for all categories of the ETHZ dataset.
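
    As a simplified sketch of the two-stage pipeline (local descriptor matching followed by geometric outlier removal), the code below matches descriptors by nearest neighbour and then prunes matches whose pairwise distance ratios disagree with the dominant scale; this consistency test is a stand-in for illustration only, not the thesis's WGTM algorithm or CCP Map construction.

```python
# Simplified two-stage matching sketch: nearest-neighbour descriptor matching,
# then a pairwise-distance consistency filter as a stand-in for WGTM.
import numpy as np

def match_descriptors(desc_a: np.ndarray, desc_b: np.ndarray):
    """Greedy nearest-neighbour matching between two descriptor sets."""
    dists = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    return [(i, int(np.argmin(dists[i]))) for i in range(len(desc_a))]

def prune_outliers(pts_a, pts_b, matches, tol=0.2):
    """Keep matches whose pairwise distances agree with the median scale ratio."""
    ia = np.array([m[0] for m in matches])
    ib = np.array([m[1] for m in matches])
    da = np.linalg.norm(pts_a[ia][:, None] - pts_a[ia][None, :], axis=2)
    db = np.linalg.norm(pts_b[ib][:, None] - pts_b[ib][None, :], axis=2)
    ratio = db / (da + 1e-9)
    scale = np.median(ratio[da > 0])                      # dominant scale change
    consistent = np.abs(ratio - scale) < tol * scale
    keep = consistent.sum(axis=1) >= 0.5 * len(matches)   # majority agreement
    return [m for m, k in zip(matches, keep) if k]
```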

    Capacity of shrinking condensers in the plane

    We show that the capacity of a class of plane condensers is comparable to the capacity of corresponding "dyadic condensers". As an application, we show that for plane condensers in that class the capacity blows up as the distance between the plates shrinks, but that there can be no asymptotic estimate of the blow-up.
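
    For context (this is the standard definition, not a formula quoted from the paper), the capacity of a plane condenser with disjoint compact plates E and F is the infimum of the Dirichlet energy over admissible functions that are at least 1 on E and at most 0 on F:

```latex
% Standard Dirichlet-energy definition of condenser capacity (for context only).
\[
  \operatorname{cap}(E,F)
    \;=\;
  \inf_{\substack{u \ge 1 \text{ on } E \\ u \le 0 \text{ on } F}}
  \int_{\mathbb{R}^2} \lvert \nabla u \rvert^{2} \, dA .
\]
```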

    Development of a Computer Vision-Based Three-Dimensional Reconstruction Method for Volume-Change Measurement of Unsaturated Soils during Triaxial Testing

    Problems associated with unsaturated soils are ubiquitous in the U.S., where expansive and collapsible soils are some of the most widely distributed and costly geologic hazards. Solving these widespread geohazards requires a fundamental understanding of the constitutive behavior of unsaturated soils. In the past six decades, the suction-controlled triaxial test has been established as a standard approach to characterizing the constitutive behavior of unsaturated soils. However, this type of test requires costly equipment and time-consuming testing procedures. To overcome these limitations, a photogrammetry-based method has recently been developed to measure the global and localized volume changes of unsaturated soils during triaxial testing. However, this method relies on software to detect coded targets, which often requires tedious manual correction of incorrect target detections. To address this limitation, this study developed a photogrammetric computer vision-based approach for automatic target recognition and 3D reconstruction for volume-change measurement of unsaturated soils in triaxial tests. A deep learning method was used to improve the accuracy and efficiency of coded target recognition. A photogrammetric computer vision method and a ray tracing technique were then developed and validated to reconstruct three-dimensional models of the soil specimen.
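
    The ray tracing step is only summarized above. As a hedged sketch of the underlying geometry, the code below back-projects a detected coded target from two calibrated cameras and triangulates its 3D position as the midpoint of the shortest segment between the two rays; the camera model and variable names are illustrative assumptions, and effects such as refraction through the triaxial cell, which the study's ray tracing technique presumably handles, are omitted here.

```python
# Illustrative two-view triangulation of a coded target by ray intersection.
# Camera parameters (K, R, t) and pixel coordinates are placeholders.
import numpy as np

def pixel_to_ray(pixel, K, R, t):
    """Back-project a pixel to a world-frame ray (origin, unit direction)."""
    d_cam = np.linalg.inv(K) @ np.array([pixel[0], pixel[1], 1.0])
    origin = -R.T @ t                    # camera centre in world coordinates
    direction = R.T @ d_cam
    return origin, direction / np.linalg.norm(direction)

def triangulate(o1, d1, o2, d2):
    """Midpoint of the shortest segment between two (generally skew) rays."""
    # Minimize |(o1 + s*d1) - (o2 + u*d2)|^2 over the ray parameters s and u.
    A = np.array([[d1 @ d1, -d1 @ d2],
                  [d1 @ d2, -d2 @ d2]])
    b = np.array([(o2 - o1) @ d1, (o2 - o1) @ d2])
    s, u = np.linalg.solve(A, b)
    return 0.5 * ((o1 + s * d1) + (o2 + u * d2))
```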

    Asymptotically efficient estimators for geometric shape fitting and source localization

    Solving a nonlinear estimation problem is known to be a challenging task because of the implicit relationship between the measurement data and the unknown parameters to be estimated. Iterative methods such as the Taylor-series expansion based ML estimator are presented in this thesis to solve the nonlinear estimation problem; however, they may suffer from initialization and convergence problems. Beyond the iterative methods, this thesis aims to provide computationally effective, asymptotically efficient, and closed-form solutions to the nonlinear estimation problem. Two classic kinds of nonlinear estimation problems are considered: the geometric shape fitting problem and the source localization problem. For geometric shape fitting, the research in this thesis focuses on circle and ellipse fitting. Three iterative methods for fitting a single circle, the ML method, the FLS method, and the SDP method, are provided and their performances are analyzed. To overcome the limitations of the iterative methods, asymptotically efficient, closed-form solutions for both circle and ellipse fitting are derived. The good performance of the proposed solutions is supported by simulations using synthetic data as well as experiments on real images. The localization of a source via a group of sensors is another important nonlinear estimation problem studied in this thesis. Based on TOA measurements, the CRLB and MSE results for a source location in the presence of sensor position errors are derived and compared to show the estimation performance loss due to the sensor position errors. A closed-form estimator that takes the sensor position errors into account is then proposed. To further improve the sensor position and source location estimates, an algebraic solution that jointly estimates the source and sensor positions is provided; it gives better sensor position estimates at higher noise levels than the sequential estimation-refinement technique. The TOA-based CRLB and MSE studies are further extended to the TDOA and AOA cases. Through the analysis, one interesting result has been found: there exist situations where taking the sensor position errors into account when estimating the source location will not improve the estimation accuracy. In such cases, a calibration emitter with known position is needed to limit the estimation damage caused by the sensor position uncertainties. An investigation was carried out to find the optimum position at which to place the calibration emitter. Since the optimum calibration source position may be of theoretical interest only, a practical suboptimum criterion is developed that yields a better calibration emitter position than the closest-to-the-unknown-source criterion.
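
    As a concrete, well-known example of a closed-form solution to a nonlinear estimation problem of this kind, the sketch below implements the classic Kasa algebraic circle fit, which linearizes the circle equation and solves a least-squares problem in one step; it is shown for illustration only and is not the asymptotically efficient estimator derived in the thesis.

```python
# Classic Kasa algebraic circle fit: a closed-form least-squares estimator,
# shown as a generic example of avoiding iterative optimization.
import numpy as np

def kasa_circle_fit(x: np.ndarray, y: np.ndarray):
    """Fit a circle (centre a, b and radius r) to noisy points in closed form."""
    # (x-a)^2 + (y-b)^2 = r^2  <=>  x^2 + y^2 = 2a*x + 2b*y + (r^2 - a^2 - b^2),
    # which is linear in the parameters (2a, 2b, r^2 - a^2 - b^2).
    A = np.column_stack([x, y, np.ones_like(x)])
    z = x**2 + y**2
    theta, *_ = np.linalg.lstsq(A, z, rcond=None)
    a, b = theta[0] / 2.0, theta[1] / 2.0
    r = np.sqrt(theta[2] + a**2 + b**2)
    return a, b, r

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    phi = rng.uniform(0, 2 * np.pi, 200)
    x = 3.0 + 5.0 * np.cos(phi) + rng.normal(0, 0.05, phi.size)
    y = -1.0 + 5.0 * np.sin(phi) + rng.normal(0, 0.05, phi.size)
    print(kasa_circle_fit(x, y))   # expected roughly (3.0, -1.0, 5.0)
```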

    Temperature Diffusivity Measurement and Nondestructive Testing Requiring No Extensive Sample Preparation and Using Stepwise Point Heating and IR Thermography

    This chapter describes a modification to the laser flash method that allows determining temperature diffusivity and performing nondestructive testing of materials and constructions without cutting samples of predefined geometry. Stepwise local heating of the studied object's surface at a small spot of around 0.1 mm radius, with simultaneous high temporal-spatial resolution infrared (IR) filming of the evolution of the transient temperature distribution with a thermal camera, provides a wide range of possibilities for material characterization and sample testing. In the case of isotropic and macroscopically homogeneous materials, the resulting transient temperature distribution is radially symmetric, which makes it possible to improve temperature measurement accuracy by averaging the many pixels of the IR images located at the same distance from the heating spot center. The temperature diffusivity measurement can be conducted either on thin plates or on massive samples. The developed emissivity-independent, in-plane IR thermographic method and mathematical algorithms enable thermal diffusivity measurement in both cases with an accuracy of around a few percent for a wide range of materials, from refractory ceramics to well-conducting metals. To detect defects, a differential algorithm is used: subtracting the averaged, radially symmetric temperature distribution from the original one for each frame makes local inhomogeneities in the sample under study clearly discernible. When applied to crack detection in plates, the technique demonstrates good sensitivity to part-through cracks located on both the visible and invisible sides of the studied object.
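
    As a rough sketch of the two processing steps described above (azimuthal averaging of pixels equidistant from the heating spot, then subtraction of the rebuilt radially symmetric field), the code below operates on a single IR frame; the binning scheme and array names are illustrative assumptions, not the chapter's exact algorithm.

```python
# Illustrative radial averaging and differential (defect-enhancing) image for
# one IR frame; frame data and heating-spot coordinates are placeholders.
import numpy as np

def radial_profile(frame: np.ndarray, cx: float, cy: float, n_bins: int = 200):
    """Average temperature versus distance from the heating spot centre."""
    yy, xx = np.indices(frame.shape)
    r = np.hypot(xx - cx, yy - cy)
    bins = np.linspace(0.0, r.max(), n_bins + 1)
    idx = np.clip(np.digitize(r.ravel(), bins) - 1, 0, n_bins - 1)
    sums = np.bincount(idx, weights=frame.ravel(), minlength=n_bins)
    counts = np.bincount(idx, minlength=n_bins)
    profile = sums / np.maximum(counts, 1)
    centres = 0.5 * (bins[:-1] + bins[1:])
    return centres, profile, r

def differential_image(frame: np.ndarray, cx: float, cy: float) -> np.ndarray:
    """Subtract the radially symmetric part; local inhomogeneities stand out."""
    centres, profile, r = radial_profile(frame, cx, cy)
    symmetric = np.interp(r, centres, profile)
    return frame - symmetric
```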
