
    Region-based segmentation of images using syntactic visual features

    This paper presents a robust and efficient method for segmenting images into large regions that reflect the real-world objects present in the scene. We propose an extension to the well-known Recursive Shortest Spanning Tree (RSST) algorithm based on a new color model and so-called syntactic features [1]. We introduce practical solutions, integrated within the RSST framework, to structure analysis based on the shape and spatial configuration of image regions. We demonstrate that syntactic features provide a reliable basis for region merging criteria that prevent the formation of regions spanning more than one semantic object, thereby significantly improving the perceptual quality of the output segmentation. Experiments indicate that the proposed features are generic in nature and allow satisfactory segmentation of real-world images from various sources without adjustment of algorithm parameters.
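    The heart of the RSST framework is a loop that repeatedly merges the pair of neighbouring regions joined by the cheapest link. The Python sketch below illustrates that loop under simplifying assumptions: plain RGB means stand in for the paper's color model, and the hypothetical `allow_merge` hook only marks where the syntactic shape and spatial-configuration criteria would veto a merge; it is not the authors' implementation.

```python
import heapq
import numpy as np

def rsst_segment(image, target_regions=50, allow_merge=lambda a, b: True):
    """Greedy RSST-style merging of pixels into regions by ascending colour distance."""
    h, w, _ = image.shape
    n = h * w
    mean = image.reshape(n, 3).astype(float)   # running mean colour per region
    size = np.ones(n)
    parent = np.arange(n)

    def find(x):                               # union-find with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    # links between 4-connected neighbours, weighted by initial colour distance
    links = []
    for y in range(h):
        for x in range(w):
            i = y * w + x
            if x + 1 < w:
                links.append((float(np.linalg.norm(mean[i] - mean[i + 1])), i, i + 1))
            if y + 1 < h:
                links.append((float(np.linalg.norm(mean[i] - mean[i + w])), i, i + w))
    heapq.heapify(links)

    regions = n
    while links and regions > target_regions:
        cost, a, b = heapq.heappop(links)
        ra, rb = find(a), find(b)
        if ra == rb or not allow_merge(ra, rb):
            continue
        # re-check the cost against the current region means; requeue stale entries
        current = float(np.linalg.norm(mean[ra] - mean[rb]))
        if current > cost + 1e-9:
            heapq.heappush(links, (current, ra, rb))
            continue
        # merge rb into ra and update the running mean colour
        total = size[ra] + size[rb]
        mean[ra] = (size[ra] * mean[ra] + size[rb] * mean[rb]) / total
        size[ra] = total
        parent[rb] = ra
        regions -= 1

    return np.array([find(i) for i in range(n)]).reshape(h, w)
```

    Supplying a stricter `allow_merge` predicate, based on region shape and adjacency, is where the syntactic criteria described in the abstract would plug in.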

    Edge Potential Functions (EPF) and Genetic Algorithms (GA) for Edge-Based Matching of Visual Objects

    Edges are known to be a semantically rich representation of the contents of a digital image. Nevertheless, their use in practical applications is sometimes limited by computation and complexity constraints. In this paper, a new approach is presented that addresses the problem of matching visual objects in digital images by combining the concept of Edge Potential Functions (EPF) with a powerful matching tool based on Genetic Algorithms (GA). EPFs can be easily calculated starting from an edge map and provide a kind of attractive pattern for a matching contour, which is conveniently exploited by GAs. Several tests were performed in the framework of different image matching applications. The results achieved clearly outline the potential of the proposed method as compared to state-of-the-art methodologies.
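    As an illustration of the overall idea rather than the paper's formulation, the sketch below builds a smooth attraction field from an edge map with a distance transform (a surrogate for the EPF) and lets a bare-bones genetic search over translation, rotation, and scale maximize the field sampled along a template contour. The population size, mutation scale, decay constant, and parameter bounds are all assumptions.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def edge_potential(edge_map, tau=5.0):
    """Smooth attraction field: high near edge pixels, decaying with distance."""
    d = distance_transform_edt(~edge_map.astype(bool))   # distance to nearest edge pixel
    return np.exp(-d / tau)

def fitness(params, template, potential):
    """Average potential sampled along the transformed template contour."""
    dx, dy, angle, scale = params
    c, s = np.cos(angle), np.sin(angle)
    pts = scale * (template @ np.array([[c, -s], [s, c]]).T) + [dx, dy]
    xs = np.clip(pts[:, 0].round().astype(int), 0, potential.shape[1] - 1)
    ys = np.clip(pts[:, 1].round().astype(int), 0, potential.shape[0] - 1)
    return potential[ys, xs].mean()

def ga_match(template, potential, pop_size=60, generations=100, rng=None):
    """Crude GA over (dx, dy, angle, scale): keep the best half, mutate it."""
    if rng is None:
        rng = np.random.default_rng(0)
    h, w = potential.shape
    lo = np.array([0.0, 0.0, -np.pi, 0.5])
    hi = np.array([float(w), float(h), np.pi, 2.0])
    pop = rng.uniform(lo, hi, size=(pop_size, 4))
    for _ in range(generations):
        scores = np.array([fitness(p, template, potential) for p in pop])
        elite = pop[np.argsort(scores)[-pop_size // 2:]]          # best half survives
        noise = rng.normal(scale=(hi - lo) * 0.02, size=elite.shape)
        children = np.clip(elite + noise, lo, hi)                 # mutated copies
        pop = np.vstack([elite, children])
    scores = np.array([fitness(p, template, potential) for p in pop])
    return pop[scores.argmax()]
```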

    Reverse Engineering Trimmed NURB Surfaces From Laser Scanned Data

    A common reverse engineering problem is to convert several hundred thousand points, collected from the surface of an object via a digitizing process, into a coherent geometric model that is easily transferred to CAD software such as a solid modeler for design improvement, manufacturing, or analysis. These data are very dense and make data-set manipulation difficult and tedious. Many commercial solutions exist but involve time-consuming interaction to go from points to surfaces such as BSplines or NURBS (Non-Uniform Rational BSplines). Our approach differs from current industry practice in that we produce a mesh with little or no interaction from the user. The user can produce degree 2 and higher BSpline surfaces and can choose the degree and number of segments as parameters to the system. The BSpline surface is both compact and curvature continuous. The former property reduces the large storage overhead, and the latter implies that a smooth surface can be created from noisy data. In addition, the nature of the BSpline allows one to easily and smoothly alter the surface, making re-engineering practical. The BSpline surface is created using the principle of higher-order least squares with smoothing functions at the edges. Both linear and cylindrical data sets are handled using an automated parameterization method. Also, because of the BSpline's continuous nature, a multi-resolution triangulated mesh can quickly be produced, which means that an STL file is simple to generate. STL files can also be easily used as input to the system.
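    As a rough stand-in for the fitting step described above (not the authors' pipeline), the sketch below uses SciPy's smoothing bivariate spline to perform a least-squares fit of scattered scan points with a user-selectable degree and smoothing factor, assuming the scan behaves as a height field z(x, y); the second helper shows how a regular resampling of the fitted surface can be split into triangles on the way to an STL file.

```python
import numpy as np
from scipy.interpolate import SmoothBivariateSpline

def fit_surface(points, degree=3, smoothing=None, grid=100):
    """Least-squares spline fit of scattered (x, y, z) points, resampled on a grid."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    spline = SmoothBivariateSpline(x, y, z, kx=degree, ky=degree, s=smoothing)
    gx = np.linspace(x.min(), x.max(), grid)
    gy = np.linspace(y.min(), y.max(), grid)
    gz = spline(gx, gy)                 # dense, smooth resampling of the surface
    return gx, gy, gz

def grid_to_triangles(gx, gy, gz):
    """Split each grid cell into two triangles, ready to write as STL facets."""
    tris = []
    for i in range(len(gx) - 1):
        for j in range(len(gy) - 1):
            p = lambda a, b: (gx[a], gy[b], gz[a, b])
            tris.append((p(i, j), p(i + 1, j), p(i + 1, j + 1)))
            tris.append((p(i, j), p(i + 1, j + 1), p(i, j + 1)))
    return tris
```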

    Efficient contour-based shape representation and matching

    This paper presents an efficient method for calculating the similarity between 2D closed shape contours. The proposed algorithm is invariant to translation, scale change, and rotation. It can be used for database retrieval or for detecting regions with a particular shape in video sequences, and it is suitable for real-time applications. In the first stage of the algorithm, an ordered sequence of contour points approximating the shapes is extracted from the input binary images. The contours are translation- and scale-normalized, and small sets of the most likely starting points for both shapes are extracted. In the second stage, the starting points from both shapes are assigned into pairs and rotation alignment is performed. The dissimilarity measure is based on the geometrical distances between corresponding contour points. A fast sub-optimal method for solving the correspondence problem between contour points from two shapes is proposed. The dissimilarity measure is calculated for each pair of starting points, and the lowest value is taken as the final dissimilarity between the two shapes. Three different experiments are carried out using the proposed approach: letter recognition using a web camera, our own simulation of Part B of the MPEG-7 core experiment “CE-Shape-1”, and detection of characters in cartoon video sequences. Results indicate that the proposed dissimilarity measure is aligned with human intuition.
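    The sketch below illustrates the matching idea under simplifying assumptions: both contours are resampled to a fixed number of points, every cyclic starting offset is tried (whereas the paper restricts this to a few likely starting points), and rotation alignment uses an orthogonal-Procrustes step rather than the paper's starting-point pairing. The dissimilarity is the mean distance between corresponding points, as in the paper.

```python
import numpy as np

def normalize(contour, n_points=128):
    """Resample a closed contour to n_points, remove translation, normalise scale."""
    contour = np.asarray(contour, dtype=float)
    # arc-length resampling (assumes no duplicate consecutive points)
    seg = np.linalg.norm(np.diff(contour, axis=0, append=contour[:1]), axis=1)
    cum = np.concatenate([[0.0], np.cumsum(seg)])
    closed = np.vstack([contour, contour[:1]])            # repeat first point to close the loop
    u = np.linspace(0.0, cum[-1], n_points, endpoint=False)
    resampled = np.column_stack([np.interp(u, cum, closed[:, k]) for k in range(2)])
    resampled -= resampled.mean(axis=0)                   # translation invariance
    return resampled / np.linalg.norm(resampled)          # scale invariance

def dissimilarity(shape_a, shape_b, n_points=128):
    """Lowest mean point-to-point distance over all starting-point alignments."""
    a = normalize(shape_a, n_points)
    b = normalize(shape_b, n_points)
    best = np.inf
    for shift in range(n_points):                         # candidate starting points
        b_shift = np.roll(b, shift, axis=0)
        # optimal orthogonal alignment of b_shift onto a (may include a reflection)
        u, _, vt = np.linalg.svd(b_shift.T @ a)
        rotated = b_shift @ (u @ vt)
        best = min(best, np.linalg.norm(a - rotated, axis=1).mean())
    return best
```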