7 research outputs found

    Recognition of plants using a stochastic L-system model

    Recognition of natural shapes such as leaves, plants, and trees has proven to be a challenging problem in computer vision. The members of a class of natural objects are not identical to each other: they share similar features but are not exactly the same. Most existing techniques have not succeeded in effectively recognizing such objects, largely because the models used to represent them are themselves inadequate. In this research we use a fractal model, which has been very effective in modeling natural shapes, to represent and then guide the recognition of one class of natural objects, namely plants. Variation among plants is accommodated by using stochastic L-systems. A learning system is then used to generate a decision tree for classification. Results show that the approach is successful for a large class of synthetic plants and provides the basis for further research into the recognition of natural plants.
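    The abstract does not give the grammar used, so the sketch below is only a minimal illustration, with made-up production rules and probabilities, of how a stochastic L-system rewrites an axiom by sampling one of several alternative productions per symbol, so that repeated derivations yield similar but non-identical plant strings.

```python
import random

# Hypothetical stochastic L-system: each symbol maps to a list of
# (probability, replacement) alternatives whose probabilities sum to 1.
# The rules and weights here are made up for illustration only.
RULES = {
    "F": [(0.5, "F[+F]F[-F]F"),   # branch to both sides
          (0.3, "F[+F]F"),        # branch to one side
          (0.2, "F[-F]F")],       # branch to the other side
}

def rewrite(axiom, depth, seed=None):
    """Apply the stochastic productions `depth` times to the axiom string."""
    rng = random.Random(seed)
    s = axiom
    for _ in range(depth):
        out = []
        for ch in s:
            alternatives = RULES.get(ch)
            if alternatives is None:       # constants (+, -, [, ]) pass through
                out.append(ch)
                continue
            r, acc = rng.random(), 0.0
            for prob, replacement in alternatives:
                acc += prob
                if r <= acc:
                    out.append(replacement)
                    break
        s = "".join(out)
    return s

if __name__ == "__main__":
    # Two derivations with different seeds: similar but non-identical "plants".
    print(rewrite("F", depth=2, seed=1))
    print(rewrite("F", depth=2, seed=2))
```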

    A Fractal Shape Signature


    Modeling, Estimation, and Pattern Analysis of Random Texture on 3-D Surfaces

    To recover 3-D structure from a shaded, textured surface image, neither shape-from-shading nor shape-from-texture analysis alone is sufficient, because radiance and texture information coexist on the scene surface. A new 3-D texture model is developed by treating the scene image as the superposition of a smooth shaded image and a random texture image. To describe the random part, the orthographic projection is adapted to account for the non-isotropic intensity distribution caused by the slant and tilt of a 3-D textured surface, and the Fractional Differencing Periodic (FDP) model is chosen to describe the random texture, because this model can simultaneously represent the coarseness and the pattern of the 3-D textured surface and is flexible enough to synthesize both long-term and short-term correlation structures of random texture. Since the object is described by a model with several free parameters whose values are determined directly from its projected image, 3-D information and the texture pattern can be extracted directly from the image without any preprocessing, so the cumulative error introduced by each preprocessing step is minimized. For parameter estimation, a hybrid method combining least-squares and maximum-likelihood estimates is applied, and both estimation and synthesis are carried out in the frequency domain. Among the texture features obtainable from a single surface image, the fractal scaling parameter plays a major role in classifying and/or segmenting texture patterns tilted and slanted by 3-D rotation, because of its rotational and scaling invariance. Moreover, since the fractal scaling factor represents the coarseness of the surface, each texture pattern has its own fractal scale value, and at the boundary between different textures this value is relatively higher than within a single texture. Based on these facts, a new classification method and a segmentation scheme for 3-D rotated texture patterns are developed.
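    As a rough illustration of estimating a fractal scaling quantity from a single texture image in the frequency domain, the sketch below fits the slope of the radially averaged power spectrum on a log-log scale. This is a generic spectral-slope method, not the FDP model or the hybrid least-squares / maximum-likelihood estimator described above; the function name and test texture are illustrative assumptions.

```python
import numpy as np

def spectral_scaling_exponent(image):
    """Estimate a fractal-type scaling exponent as the slope of the radially
    averaged power spectrum on a log-log scale (a generic spectral method,
    not the paper's hybrid least-squares / maximum-likelihood FDP estimator)."""
    img = image - image.mean()
    power2d = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2

    h, w = img.shape
    cy, cx = h // 2, w // 2
    yy, xx = np.indices((h, w))
    radius = np.hypot(yy - cy, xx - cx).astype(int)

    # Radially average the power spectrum, skipping the DC bin (r = 0).
    max_r = min(cy, cx)
    freqs = np.arange(1, max_r)
    power = np.array([power2d[radius == r].mean() for r in freqs])

    # Slope of log-power versus log-frequency; more negative means smoother.
    slope, _ = np.polyfit(np.log(freqs), np.log(power), 1)
    return slope

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    texture = rng.standard_normal((128, 128))   # white noise: slope near 0
    print(spectral_scaling_exponent(texture))
```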

    Methods for Estimation of Intrinsic Dimensionality

    Dimension reduction is an important tool for describing the structure of complex data (explicitly or implicitly) through a small but sufficient number of variables, thereby making data analysis more efficient; it is also useful for visualization. Dimension reduction helps statisticians to overcome the 'curse of dimensionality'. However, most dimension reduction techniques require the intrinsic dimension of the low-dimensional subspace to be fixed in advance, so the availability of reliable intrinsic dimension (ID) estimation techniques is of major importance. The main goal of this thesis is to develop algorithms for determining the intrinsic dimension of recorded data sets in a nonlinear context. While this is a well-researched topic for linear subspaces, based mainly on principal components analysis, relatively little attention has been paid to estimating this number for non-linear variable interrelationships. The algorithms proposed here build on existing concepts that can be categorized into local methods, relying on randomly selected subsets of a recorded variable set, and global methods, utilizing the entire data set. The thesis provides an overview of ID estimation techniques, with special consideration given to recent developments in non-linear techniques, such as manifold charting and fractal-based methods. Although such techniques exist in principle, their practical implementation is far from straightforward. The intrinsic dimension is estimated via Brand's algorithm by examining the growth of a point process that counts the number of points in hyper-spheres; this estimation requires a starting point for each hyper-sphere, and in this thesis we provide settings for selecting starting points that work well for most data sets. Additionally we propose approaches for estimating dimensionality via Brand's algorithm, the Dip method, and the Regression method. Other approaches estimate the intrinsic dimension by fractal dimension estimation methods, which exploit the intrinsic geometry of a data set. The most popular concept in this family is the correlation dimension, which requires estimating the correlation integral for a ball of radius tending to 0. In this thesis we propose new approaches to approximate the correlation integral in this limit: the Intercept method, the Slope method, and the Polynomial method. In addition we propose a localized global method, which can be viewed as a local version of global ID methods; its objective is to improve on algorithms based on a local ID method and thereby significantly reduce the negative bias. Experimental results on real-world and simulated data are used to demonstrate the algorithms and compare them with other methodology, and a simulation study verifies the effectiveness of the proposed methods. Finally, the algorithms are contrasted using a recorded data set from an industrial melter process.
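    For context, the sketch below computes the standard correlation-dimension estimate, the slope of log C(r) against log r over small radii, which is the quantity the Intercept, Slope, and Polynomial methods mentioned above refine; the refinements themselves are not reproduced, and the radius percentiles and sample data are illustrative assumptions.

```python
import numpy as np
from scipy.spatial.distance import pdist

def correlation_dimension(points, n_radii=20):
    """Standard correlation-dimension estimate: the slope of log C(r) versus
    log r over small radii, where C(r) is the fraction of point pairs closer
    than r.  The thesis' Intercept / Slope / Polynomial refinements of the
    small-radius limit are not reproduced here."""
    dists = pdist(points)                          # all pairwise distances
    r_lo, r_hi = np.percentile(dists, [1.0, 25.0]) # fit only over small radii
    radii = np.logspace(np.log10(r_lo), np.log10(r_hi), n_radii)
    c = np.array([(dists < r).mean() for r in radii])
    slope, _ = np.polyfit(np.log(radii), np.log(c), 1)
    return slope

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # A 2-D plane embedded linearly in 5-D: the estimate should be close to 2.
    latent = rng.uniform(size=(2000, 2))
    data = latent @ rng.standard_normal((2, 5))
    print(correlation_dimension(data))
```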

    Numerical Linear Algebra applications in Archaeology: the seriation and the photometric stereo problems

    The aim of this thesis is to explore the application of Numerical Linear Algebra to Archaeology. An ordering problem called the seriation problem, used for dating finds and/or artifact deposits, is analysed in terms of graph theory. In particular, a Matlab implementation of an algorithm for spectral seriation, based on the Fiedler vector of the Laplacian matrix associated with the problem, is presented. We consider bipartite graphs for describing the seriation problem, since the interrelationships between the units to be reordered (i.e. archaeological sites) can be described in terms of such graphs. In our archaeological metaphor of seriation, the two disjoint node sets into which the vertices of a bipartite graph can be divided represent the excavation sites and the artifacts found inside them. Since determining the closest bipartite network to a given one is a difficult task, we describe how a starting network can be approximated by a bipartite one by solving a sequence of fairly simple optimization problems. Another numerical problem related to Archaeology is the 3D reconstruction of the shape of an object from a set of digital pictures; in particular, the Photometric Stereo (PS) photographic technique is considered.
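    The thesis refers to a Matlab implementation; the minimal Python sketch below only illustrates the underlying idea of spectral seriation, ordering units by the Fiedler vector of a graph Laplacian built from a hypothetical site-by-artifact incidence matrix. The bipartite-approximation step described above is not reproduced.

```python
import numpy as np

def spectral_seriation_order(similarity):
    """Order units by the Fiedler vector of the graph Laplacian built from a
    symmetric similarity matrix.  A minimal sketch of spectral seriation; the
    thesis' Matlab code and its bipartite-network approximation step are not
    reproduced here."""
    degree = np.diag(similarity.sum(axis=1))
    laplacian = degree - similarity
    eigvals, eigvecs = np.linalg.eigh(laplacian)   # eigenvalues in ascending order
    fiedler = eigvecs[:, 1]                        # vector of 2nd smallest eigenvalue
    return np.argsort(fiedler)                     # sorting by it gives the seriation

if __name__ == "__main__":
    # Hypothetical site-by-artifact incidence matrix (rows = excavation sites,
    # columns = artifact types); sites sharing artifact types are similar.
    incidence = np.array([[1, 1, 0, 0],
                          [0, 1, 1, 0],
                          [1, 1, 1, 0],
                          [0, 0, 1, 1]], dtype=float)
    similarity = incidence @ incidence.T
    np.fill_diagonal(similarity, 0.0)              # drop self-similarity
    print(spectral_seriation_order(similarity))
```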

    Shape classification: towards a mathematical description of the face

    Recent advances in biostereometric techniques have made it quick and easy to acquire 3D data for facial and other biological surfaces. This has led facial surgeons to express dissatisfaction with landmark-based methods for analysing the shape of the face, which use only a small part of the available data, and to seek a method of analysis that maximizes the use of this extensive data set. Scientists working in computer vision have developed a variety of methods for the analysis and description of 2D and 3D shape. These methods are reviewed, and an approach based on differential geometry is selected for describing facial shape. For each data point, the Gaussian and mean curvatures of the surface are calculated. The performance of three algorithms for computing these curvatures is evaluated on mathematically generated standard 3D objects and on 3D data obtained from an optical surface scanner. Using the signs of these curvatures, the face is classified into eight 'fundamental surface types', each of which has an intuitive perceptual meaning. The robustness of the resulting surface-type description to errors in the data is determined, together with its repeatability. Three methods for comparing two surface-type descriptions are presented and illustrated for average male and average female faces, giving a quantitative description of facial change, or of differences between individuals' faces. The possible application of artificial intelligence techniques to automate this comparison is discussed. The sensitivity of the description to global and local changes to the data, made by mathematical functions, is investigated. Examples are given of the application of this method to describing facial changes produced by reconstructive surgery, and implications for defining a basis for facial aesthetics using shape are discussed. The method is also applied to investigate the role played by the shape of the surface in facial recognition.
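    As an illustration of the curvature-sign classification described above, the sketch below computes Gaussian and mean curvature of a height map by finite differences and assigns each point to one of the eight HK surface types. It is a simplification on a regular grid with assumed unit spacing, not one of the three algorithms evaluated in the thesis.

```python
import numpy as np

def hk_surface_types(z, eps=1e-8):
    """Classify each point of a height map z(y, x) into one of the eight
    'fundamental surface types' from the signs of the Gaussian (K) and mean
    (H) curvatures.  A finite-difference sketch assuming unit grid spacing;
    not one of the three algorithms evaluated in the thesis."""
    zy, zx = np.gradient(z)          # first derivatives (axis 0 = y, axis 1 = x)
    zxy, zxx = np.gradient(zx)       # second derivatives
    zyy, _ = np.gradient(zy)

    g = 1.0 + zx**2 + zy**2
    K = (zxx * zyy - zxy**2) / g**2                              # Gaussian curvature
    H = ((1 + zx**2) * zyy - 2 * zx * zy * zxy
         + (1 + zy**2) * zxx) / (2 * g**1.5)                     # mean curvature

    sk = np.where(K > eps, 1, np.where(K < -eps, -1, 0))
    sh = np.where(H > eps, 1, np.where(H < -eps, -1, 0))

    # (sign H, sign K) -> label; H = 0 with K > 0 cannot occur for a smooth surface.
    labels = {(-1, 1): "peak", (1, 1): "pit",
              (-1, 0): "ridge", (1, 0): "valley",
              (0, 0): "flat", (0, -1): "minimal",
              (-1, -1): "saddle ridge", (1, -1): "saddle valley"}
    out = np.empty(z.shape, dtype=object)
    for (h_s, k_s), name in labels.items():
        out[(sh == h_s) & (sk == k_s)] = name
    return out

if __name__ == "__main__":
    x, y = np.meshgrid(np.linspace(-1, 1, 64), np.linspace(-1, 1, 64))
    dome = -(x**2 + y**2)                  # centre of a dome should be a "peak"
    print(hk_surface_types(dome)[32, 32])
```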