
    A fully automatic CAD-CTC system based on curvature analysis for standard and low-dose CT data

    Computed tomography colonography (CTC) is a rapidly evolving noninvasive medical investigation that is viewed by radiologists as a potential screening technique for the detection of colorectal polyps. Due to technical advances in CT system design, the volume of data to be processed by radiologists has increased significantly; as a consequence, the manual analysis of this information has become an increasingly time-consuming process whose results can be affected by inter- and intra-user variability. The aim of this paper is to detail the implementation of a fully integrated CAD-CTC system that is able to robustly identify the clinically significant polyps in the CT data. The CAD-CTC system described in this paper is a multistage implementation whose main components are: 1) automatic colon segmentation; 2) candidate surface extraction; 3) feature extraction; and 4) classification. Our CAD-CTC system performs at 100% sensitivity for polyps larger than 10 mm, 92% sensitivity for polyps in the 5 to 10 mm range, and 57.14% sensitivity for polyps smaller than 5 mm, with an average of 3.38 false positives per dataset. The developed system has been evaluated on synthetic and real patient CT data acquired at standard and low-dose radiation levels.
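    The abstract names the four stages but gives no implementation details, so the Python sketch below only illustrates how such a pipeline might chain together. The air threshold, the largest-component heuristic, the gradient-magnitude stand-in for curvature features, and all function names are our assumptions, not the authors' method.

```python
import numpy as np
from scipy import ndimage

def segment_colon(volume_hu, air_threshold=-800):
    """Stage 1 (illustrative): threshold air voxels in Hounsfield units
    and keep the largest connected air component as the colon lumen."""
    air = volume_hu < air_threshold
    labels, n = ndimage.label(air)
    if n == 0:
        return air
    sizes = ndimage.sum(air, labels, index=range(1, n + 1))
    return labels == (1 + int(np.argmax(sizes)))

def candidate_surface(lumen):
    """Stage 2: the colonic wall, i.e. the boundary voxels of the lumen."""
    return lumen & ~ndimage.binary_erosion(lumen)

def surface_features(volume_hu, surface):
    """Stage 3 (placeholder): one scalar per surface voxel; the real system
    computes curvature statistics that characterise polyp-like protrusions."""
    gz, gy, gx = np.gradient(volume_hu.astype(float))
    return np.sqrt(gx**2 + gy**2 + gz**2)[surface]

def classify(features, threshold=100.0):
    """Stage 4 (placeholder rule): flag candidates with a high response."""
    return features > threshold
```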

    Tailor: understanding 3D shapes using curvature

    Tools for the automatic decomposition of a surface into shape features will facilitate the editing, matching, texturing, morphing, compression, and simplification of 3D shapes. Different features, such as flats, limbs, tips, pits, and the various blending shapes that transition between them, may be characterized in terms of local curvature and other differential properties of the surface, or in terms of a global skeletal organization of the volume it encloses. Unfortunately, both solutions are extremely sensitive to small perturbations in surface smoothness and to quantization effects when they operate on triangulated surfaces. Thus, we propose a multi-resolution approach, which not only estimates the curvature of a vertex over neighborhoods of variable size, but also takes into account the topology of the surface in that neighborhood. Our approach is based on blowing a spherical bubble at each vertex and studying how the intersection of that bubble with the surface evolves. For example, for a thin limb, that intersection will start simply connected and will rapidly split into two components. For a point on the tip of a limb, that intersection will usually remain simply connected, but the ratio of its length to the radius of the bubble will be decreasing. For a point on a blend, that ratio will exceed 2π. We describe an efficient approach for computing these characteristics for a sampled set of bubble radii and for using them to identify features, based on easily formulated filters, that may capture the needs of a particular application.
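    A minimal sketch of the bubble test, under our own naming, for one vertex and one radius: it returns the two quantities the abstract tracks, the number of connected components of the sphere-surface intersection curve and the ratio of its length to the radius (about 2π on a flat patch). Linear interpolation of the signed distance along crossing mesh edges is our simplifying assumption.

```python
import numpy as np
from collections import defaultdict

def bubble_signature(verts, tris, center, r):
    """Intersect the sphere of radius r at `center` with a triangle mesh
    (verts: Nx3 floats, tris: Mx3 vertex indices); return the number of
    connected components of the intersection curve and its length/r ratio."""
    d = np.linalg.norm(verts - center, axis=1) - r   # signed distance to sphere
    inside = d < 0
    segments = {}                                    # straddling tri -> 2 points
    crossing = defaultdict(list)                     # crossing edge -> tris
    for ti, (a, b, c) in enumerate(tris):
        pts = []
        for u, v in ((a, b), (b, c), (c, a)):
            if inside[u] != inside[v]:               # edge crosses the sphere
                t = d[u] / (d[u] - d[v])             # linear interpolation
                pts.append(verts[u] + t * (verts[v] - verts[u]))
                crossing[frozenset((u, v))].append(ti)
        if len(pts) == 2:
            segments[ti] = pts
    # Union-find: segments meeting at a shared crossing edge form one curve.
    parent = {ti: ti for ti in segments}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for tlist in crossing.values():
        for t2 in tlist[1:]:
            if tlist[0] in parent and t2 in parent:
                parent[find(t2)] = find(tlist[0])
    n_components = len({find(t) for t in segments})
    length = sum(np.linalg.norm(p - q) for p, q in segments.values())
    return n_components, length / r                  # ~2*pi on a flat region
```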

    Automatic Image Registration in Infrared-Visible Videos using Polygon Vertices

    In this paper, an automatic method is proposed to perform image registration between visible and infrared pairs of video sequences containing multiple targets. In multimodal image analysis, such as image fusion systems, color and IR sensors are placed close to each other and capture the same scene simultaneously, but the videos are not properly aligned by default because of differing fields of view, image capture parameters, working principles, and other camera specifications. Because the scenes are usually not planar, alignment must be performed continuously by extracting relevant common information. In this paper, we approximate the shapes of the targets by polygons and use an affine transformation to align the two video sequences. After background subtraction, keypoints on the contours of the foreground blobs are detected using the DCE (Discrete Curve Evolution) technique. These keypoints are then described by the local shape at each point of the obtained polygon. The keypoints are matched based on the convexity of the polygon's vertices and the Euclidean distance between them. Only good matches for each local shape polygon in a frame are kept. To achieve a global affine transformation that maximises the overlap of infrared and visible foreground pixels, the matched keypoints of each local shape polygon are stored temporally in a buffer for a few frames. The transformation matrix is evaluated at each frame using the temporal buffer, and the best matrix is selected based on an overlapping-ratio criterion. Our experimental results demonstrate that this method provides highly accurate registered images and that it outperforms a previous related method.
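    The sketch below shows, under our own naming, the two numerical pieces the final step rests on: a least-squares fit of the 2D affine matrix from buffered keypoint matches, and the foreground-overlap ratio used to select the best matrix. The `warp` helper is an assumption (e.g. built on cv2.warpAffine); DCE keypoint extraction and matching are omitted.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2x3 affine transform mapping src points to dst
    (both Nx2 arrays); needs at least 3 non-collinear matches."""
    A = np.hstack([src, np.ones((len(src), 1))])     # rows [x, y, 1]
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)      # 3x2 parameters
    return M.T                                       # conventional 2x3 form

def overlap_ratio(ir_fg, vis_fg_warped):
    """Overlap criterion: shared foreground over combined foreground."""
    union = np.logical_or(ir_fg, vis_fg_warped).sum()
    if union == 0:
        return 0.0
    return np.logical_and(ir_fg, vis_fg_warped).sum() / union

def candidate_matrix(match_buffer, ir_fg, vis_fg, warp):
    """Fit one candidate matrix from all (visible, infrared) point pairs
    buffered over recent frames and score it; the caller keeps the
    highest-scoring matrix across frames."""
    vis_pts = np.vstack([v for v, _ in match_buffer])
    ir_pts = np.vstack([i for _, i in match_buffer])
    M = fit_affine(vis_pts, ir_pts)
    return M, overlap_ratio(ir_fg, warp(vis_fg, M))
```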

    Segmentation of Range Images as the Search for the Best Description of the Scene in Terms of Geometric Primitives

    Segmentation of range images has long been considered in computer vision as an important but extremely difficult problem. In this paper we present a new paradigm for the segmentation of range images into piecewise continuous patches. Data aggregation is performed via model recovery in terms of variable-order bivariate polynomials using iterative regression. All the recovered models are potential candidates for the final description of the data. Selection among the models is achieved by maximizing a quadratic Boolean problem. The procedure can be adapted to prefer certain kinds of descriptions (those that describe more data points, have smaller error, or use lower-order models). We have developed a fast optimization procedure for model selection. The major novelty of the approach lies in combining model extraction and model selection in a dynamic way: partial recovery of the models is followed by the optimization (selection) procedure, in which only the best models are allowed to develop further. The results obtained in this way are comparable with those obtained when the selection module is invoked only after all the models are fully recovered, while the computational complexity is significantly reduced. We test the procedure on several real range images.
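    As a concrete illustration of the model-recovery step, the sketch below grows a bivariate polynomial fit one order at a time and stops as soon as the residual falls below a tolerance, mirroring the stated preference for lower-order models. The tolerance value and names are assumptions, and the quadratic Boolean selection stage is not shown.

```python
import numpy as np
from itertools import combinations_with_replacement

def poly_design(xy, order):
    """Design matrix of bivariate monomials x^i * y^j with i + j <= order,
    for sample positions xy (an Nx2 array)."""
    cols = [np.ones(len(xy))]
    for o in range(1, order + 1):
        for combo in combinations_with_replacement((0, 1), o):
            cols.append(np.prod(xy[:, combo], axis=1))
    return np.column_stack(cols)

def recover_model(xy, z, max_order=4, tol=1.0):
    """Variable-order recovery by iterative regression: raise the order
    only while the RMS residual of the current fit exceeds `tol`."""
    for order in range(1, max_order + 1):
        A = poly_design(xy, order)
        coef, *_ = np.linalg.lstsq(A, z, rcond=None)
        rms = float(np.sqrt(np.mean((A @ coef - z) ** 2)))
        if rms <= tol:
            break
    return order, coef, rms
```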

    Automated Extraction of Flow Features

    Computational Fluid Dynamics (CFD) simulations are routinely performed as part of the design process of most fluid-handling devices. In order to use the results of a CFD simulation efficiently and effectively, visualization tools are often employed. These tools are used in all stages of the CFD simulation, including pre-processing, interim-processing, and post-processing, to interpret the results. Each of these stages requires visualization tools that allow one to examine the geometry of the device as well as the partial or final results of the simulation. An engineer will typically generate a series of contour and vector plots to better understand the physics of how the fluid is interacting with the physical device. Of particular interest is the detection of features such as shocks, re-circulation zones, and vortices (which highlight areas of stress and loss). As the demand for CFD analyses continues to increase, the need for automated feature-extraction capabilities has become vital. In the past, feature extraction and identification were interesting concepts but were not required for understanding the physics of a steady flow field, because the more traditional tools, such as iso-surfaces, cuts, and streamlines, were interactive and easily abstracted, so their output could be presented to the investigator. These tools worked and properly conveyed the collected information, but at the expense of a great deal of interaction. For unsteady flow fields, the investigator does not have the luxury of spending time scanning only one "snapshot" of the simulation. Automated assistance is required to point out areas of potential interest within the flow, and it must not impose a heavy compute burden (the visualization should not significantly slow down the solution procedure in co-processing environments). Methods must be developed to abstract the feature of interest and display it in a manner that physically makes sense.
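    The abstract names vortices as a target feature without specifying a detector, so the example below uses the standard Q-criterion, one widely used automated vortex indicator, on a 2D structured grid. It illustrates the kind of lightweight filter such a system can run alongside the solver; it is not the paper's own method.

```python
import numpy as np

def q_criterion(u, v, dx=1.0, dy=1.0):
    """Q-criterion on a 2D structured grid (u[y, x], v[y, x] velocity
    components): Q = 0.5 * (|Omega|^2 - |S|^2), positive where rotation
    dominates strain, i.e. inside candidate vortex cores."""
    du_dy, du_dx = np.gradient(u, dy, dx)
    dv_dy, dv_dx = np.gradient(v, dy, dx)
    S12 = 0.5 * (du_dy + dv_dx)              # symmetric strain-rate part
    W12 = 0.5 * (du_dy - dv_dx)              # antisymmetric rotation part
    S2 = du_dx**2 + dv_dy**2 + 2 * S12**2    # |S|^2
    W2 = 2 * W12**2                          # |Omega|^2
    return 0.5 * (W2 - S2)                   # flag cells where Q > 0
```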

    Handwriting style classification

    This paper describes an independent handwriting style classifier that has been designed to select the best recognizer for a given style of writing. For this purpose, handwriting legibility has been defined and a method implemented that can predict this legibility. The technique consists of two phases. In the feature-extraction phase, a set of 36 features is extracted from the image contour. In the classification phase, two nonparametric classification techniques are applied to the extracted features in order to compare their effectiveness in classifying words into legible, illegible, and middle classes. In the first method, multiple discriminant analysis (MDA) is used to transform the space of extracted features (36 dimensions) into an optimal discriminant space for a nearest-mean classifier. In the second method, a probabilistic neural network (PNN) based on the Bayes strategy and nonparametric estimation of the probability density function is used. The experimental results show that the PNN method gives superior classification results when compared with the MDA method. For two-class legibility classification the method achieves 86.5% (legible/illegible), 65.5% (legible/middle), and 90.5% (middle/illegible) correct classification. For the three-class legibility classification, the rate of correct classification is 67.33% using the PNN classifier.
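    A PNN in its textbook form is small enough to show in full: a Gaussian Parzen-window density estimate per class followed by a Bayes decision on the largest response, which is the classifier type the second method uses. The smoothing parameter value below is an assumption; in practice it is tuned on validation data.

```python
import numpy as np

def pnn_classify(x, train_X, train_y, sigma=1.0):
    """Classify feature vector x (here 36-dimensional) with a probabilistic
    neural network: per class, average Gaussian kernel responses over that
    class's training patterns, then pick the class with the largest score."""
    scores = {}
    for label in np.unique(train_y):
        diffs = train_X[train_y == label] - x              # pattern layer
        d2 = np.sum(diffs**2, axis=1)
        scores[label] = np.mean(np.exp(-d2 / (2 * sigma**2)))  # summation layer
    return max(scores, key=scores.get)                     # decision layer
```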

    Edge-Sharpener: A geometric filter for recovering sharp features in uniform triangulations

    3D scanners, iso-surface extraction procedures, and several recent geometric compression schemes sample the surfaces of 3D shapes in a regular fashion, without any attempt to align the samples with the sharp edges and corners of the original shape. Consequently, the interpolating triangle meshes chamfer these sharp features and thus exhibit significant errors. The new Edge-Sharpener filter introduced here identifies the chamfer edges and subdivides them and their incident triangles by inserting new vertices and by forcing these vertices to lie on intersections of planes that locally approximate the smooth surfaces that meet at these sharp features. This post-processing significantly reduces the error produced by the initial sampling process. For example, we have observed that the L2 error introduced by the SwingWrapper [9] remeshing-based compressor can be reduced to a fifth by executing Edge-Sharpener after decompression, with no additional information.
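    The geometric heart of the filter is snapping an inserted vertex onto the intersection line of the two planes that locally approximate the smooth surfaces meeting at the sharp edge. Below is a sketch of just that step, assuming the planes have already been fitted; chamfer-edge identification and triangle subdivision are omitted, and the function name is ours.

```python
import numpy as np

def snap_to_sharp_edge(p, n1, d1, n2, d2):
    """Project point p onto the line where the planes n1 . x = d1 and
    n2 . x = d2 intersect, keeping p's position along the edge direction;
    assumes the planes are not parallel."""
    direction = np.cross(n1, n2)           # direction of the sharp edge
    A = np.vstack([n1, n2, direction])     # three linear constraints on x
    b = np.array([d1, d2, direction @ p])  # keep p's coordinate along the edge
    return np.linalg.solve(A, b)
```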