6,344 research outputs found

    Non-homogeneous polygonal Markov fields in the plane: graphical representations and geometry of higher order correlations

    We consider polygonal Markov fields, originally introduced by Arak and Surgailis (1989). Our attention is focused on fields with nodes of order two, which can be regarded as continuum ensembles of non-intersecting contours in the plane, sharing a number of features with the two-dimensional Ising model. We introduce a non-homogeneous version of polygonal fields in an anisotropic environment. For these fields we provide a class of new graphical constructions and random dynamics, including a generalised dynamic representation, generalised and defective disagreement loop dynamics, and a generalised contour birth and death dynamics. We then use these constructions as tools to obtain new exact results on the geometry of higher-order correlations of polygonal Markov fields in their consistent regime.
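
    The generalised contour birth and death dynamics belong to the family of spatial birth-and-death chains. As a rough illustration only, here is a minimal Python sketch of a generic birth-and-death Metropolis step for a planar particle ensemble with Gibbs weight exp(-energy); the paper's actual dynamics act on polygonal contours and are considerably more structured, so everything below (the uniform birth proposal, the `energy` callback, the unit box) is an assumption for illustration.

```python
import math
import random

def birth_death_step(config, birth_rate, energy, box=1.0):
    """One step of a generic spatial birth-and-death chain (illustrative
    stand-in; not the paper's generalised contour dynamics).

    config: list of 2D points standing in for contour elements.
    energy: user-supplied Hamiltonian, config -> float (assumed).
    """
    if random.random() < birth_rate / (birth_rate + len(config)):
        # Birth: propose a new element uniformly in the box.
        proposal = config + [(random.uniform(0, box), random.uniform(0, box))]
    else:
        # Death: propose removing a uniformly chosen element.
        proposal = list(config)
        proposal.pop(random.randrange(len(proposal)))
    # Metropolis acceptance for the Gibbs weight exp(-energy); the min()
    # clamp avoids overflow in exp() for large energy drops.
    if random.random() < math.exp(min(0.0, energy(config) - energy(proposal))):
        return proposal
    return config
```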

    Machine learned boundary definitions for an expert's tracing assistant in image processing

    Most image processing work addressing boundary definition tasks embeds the assumption that an edge in an image corresponds to the boundary of interest in the world. In straightforward imagery this is true; however, it is not always the case. There are images in which edges are indistinct or obscured, and these images can only be segmented by a human expert. The work in this dissertation addresses the range of imagery between the two extremes of straightforward images and those requiring human guidance to segment appropriately. By freeing systems of a priori edge definitions and building in a mechanism to learn the boundary definitions needed, systems can perform better and be more broadly applicable. This dissertation presents the construction of such a boundary-learning system and demonstrates the validity of this premise on real data. A framework was created in which expert-provided boundary exemplars are used to create training data, which in turn are used by a neural network to learn the task and replicate the expert's boundary-tracing behavior. This is the framework for the Expert's Tracing Assistant (ETA) system. For a representative set of nine structures in the Visible Human imagery, ETA was compared and contrasted with two state-of-the-art, user-guided methods: Intelligent Scissors (IS) and Active Contour Models (ACM). Each method was used to define a boundary, and the distances between these boundaries and an expert's ground truth were compared. Across independent trials there is natural variation in an expert's boundary tracing, and this degree of variation served as the benchmark against which the three methods were compared. For simple structural boundaries, all the methods were equivalent. However, in more difficult cases, ETA replicated the expert's boundary significantly better than either IS or ACM. In these cases, where the expert's judgement was most called into play to bound the structure, ACM and IS could not adapt to the boundary character used by the expert, while ETA could.
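
    The abstract does not specify the exact boundary-distance measure; a common choice for scoring a traced boundary against an expert's ground truth is the mean symmetric point-to-boundary distance. A minimal NumPy sketch, assuming both boundaries are sampled as (N, 2) point arrays:

```python
import numpy as np

def mean_boundary_distance(traced, truth):
    """Mean symmetric distance between two sampled boundaries.

    traced, truth: (N, 2) and (M, 2) arrays of boundary points. This is
    one plausible way to score a method's boundary against the expert
    ground truth; the dissertation's exact measure may differ.
    """
    # Pairwise Euclidean distances between all point pairs.
    d = np.linalg.norm(traced[:, None, :] - truth[None, :, :], axis=-1)
    forward = d.min(axis=1).mean()   # traced -> nearest truth point
    backward = d.min(axis=0).mean()  # truth -> nearest traced point
    return 0.5 * (forward + backward)

# Usage: compare a method's boundary against the expert trace.
expert = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0]])
method = np.array([[0.1, 0.0], [1.0, 0.1], [0.9, 1.0]])
print(mean_boundary_distance(method, expert))
```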

    Automatic Retrieval of Skeletal Structures of Trees from Terrestrial Laser Scanner Data

    Research on forest ecosystems receives high attention, especially nowadays with regard to the sustainable management of renewable resources and climate change. In particular, accurate information on the 3D structure of a tree is important for forest science and bioclimatology, but also in the scope of commercial applications. Conventional methods to measure geometric plant features are labor- and time-intensive. For detailed analysis, trees have to be cut down, which is often undesirable. Here, Terrestrial Laser Scanning (TLS) provides a particularly attractive tool because of its contactless measurement technique. The object geometry is reproduced as a 3D point cloud. The objective of this thesis is the automatic retrieval of the spatial structure of trees from TLS data. We focus on forest scenes with comparably high stand density and the many occlusions resulting from it. The varying level of detail of TLS data poses a big challenge. We present two fully automatic methods to obtain skeletal structures from scanned trees that have complementary properties. First, we explain a method that retrieves the entire tree skeleton from 3D data of co-registered scans. The branching structure is obtained from a voxel-space representation by searching paths from branch tips to the trunk; the trunk is determined in advance from the 3D points. The skeleton of a tree is generated as a 3D line graph. Besides 3D coordinates and range, a scan provides 2D indices from the intensity image for each measurement. This is exploited in the second method, which processes individual scans. Furthermore, we introduce a novel concept to manage TLS data that facilitated the research work. Initially, the range image is segmented into connected components. We describe a procedure to retrieve the boundary of a component that is capable of tracing inner depth discontinuities. A 2D skeleton is generated from the boundary information and used to decompose the component into sub-components. A Principal Curve is computed from the 3D point set associated with a sub-component. The skeletal structure of a connected component is summarized as a set of polylines. Objective evaluation of the results remains an open problem because the task itself is ill-defined: there exists no clear definition of what the true skeleton should be with respect to a given point set. Consequently, we are not able to assess the correctness of the methods quantitatively, but have to rely on visual assessment of the results and provide a thorough discussion of the particularities of both methods. We present experimental results for both methods. The first method efficiently retrieves full skeletons of trees, which approximate the branching structure. The level of detail is mainly governed by the voxel space and, therefore, smaller branches are reproduced inadequately. The second method retrieves partial skeletons of a tree with high reproduction accuracy. The method is sensitive to noise in the boundary, but the results are very promising. There are plenty of possibilities to enhance the method's robustness. The combination of the strengths of both presented methods needs further investigation and may lead to a robust way to obtain complete tree skeletons from TLS data automatically.
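
    As an illustration of the first method's core idea (paths traced from branch tips back to the trunk in a voxel space), here is a minimal Python sketch. The voxel set, 26-neighbourhood, and plain BFS are assumptions; the thesis's path search and trunk detection are more elaborate.

```python
from collections import deque

def voxel_paths_to_trunk(occupied, trunk, tips):
    """Trace skeleton polylines from branch tips back to the trunk.

    occupied: set of integer (x, y, z) voxels filled by scan points.
    trunk:    seed voxel on the stem (assumed already detected).
    tips:     candidate branch-tip voxels.
    A BFS from the trunk assigns each reachable voxel a predecessor;
    following predecessors from a tip yields one skeleton path.
    """
    # 26-connected neighbourhood offsets.
    nbrs = [(dx, dy, dz) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            for dz in (-1, 0, 1) if (dx, dy, dz) != (0, 0, 0)]
    parent = {trunk: None}
    queue = deque([trunk])
    while queue:
        v = queue.popleft()
        for d in nbrs:
            w = (v[0] + d[0], v[1] + d[1], v[2] + d[2])
            if w in occupied and w not in parent:
                parent[w] = v
                queue.append(w)
    paths = []
    for tip in tips:
        if tip not in parent:
            continue  # occluded or disconnected branch
        path, v = [], tip
        while v is not None:
            path.append(v)
            v = parent[v]
        paths.append(path)  # tip -> ... -> trunk
    return paths
```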

    Doctor of Philosophy

    The medial axis of an object is a shape descriptor that intuitively presents the morphology or structure of the object as well as intrinsic geometric properties of the object's shape. These properties have made the medial axis a vital ingredient for shape analysis applications, and its computation is therefore a fundamental problem in computational geometry. This dissertation presents new methods for accurately computing the 2D medial axis of planar objects bounded by B-spline curves, and the 3D medial axis of objects bounded by B-spline surfaces. The proposed methods for the 3D case are the first techniques that automatically compute the complete medial axis along with its topological structure directly from smooth boundary representations. Our approach is based on the eikonal (grassfire) flow, where the boundary is offset along the inward normal direction. As the boundary deforms, different regions start intersecting with each other to create the medial axis. In the generic situation, the (self-)intersection set is born at certain creation-type transition points, then grows and undergoes intermediate transitions at special isolated points, and finally ends at annihilation-type transition points. The intersection set evolves smoothly between transition points. Our approach first computes and classifies all types of transition points. The medial axis is then computed as a time trace of the evolving intersection set of the boundary using theoretically derived evolution vector fields. This dynamic approach enables accurate tracking of elements of the medial axis as they evolve and thus also enables computation of the topological structure of the solution. Accurate computation of the geometry and topology of 3D medial axes enables a new graph-theoretic method for shape analysis of objects represented with B-spline surfaces. Structural components are computed via the cycle basis of the graph representing the 1-complex of a 3D medial axis. This enables medial axis based surface segmentation, and structure-based surface region selection and modification. We also present a new approach for structural analysis of 3D objects based on scalar functions defined on their surfaces. This approach is enabled by accurate computation of the geometry and structure of 2D medial axes of level sets of the scalar functions. Edge curves of the 3D medial axis correspond to a subset of ridges on the bounding surfaces. Ridges are extremal curves of principal curvatures on a surface, indicating salient intrinsic features of its shape, and hence are of particular interest as tools for shape analysis. This dissertation presents a new algorithm for accurately extracting all ridges directly from B-spline surfaces. The proposed technique is also extended to accurately extract ridges from isosurfaces of volumetric data using smooth implicit B-spline representations. Accurate ridge curves enable new higher-order methods for surface analysis. We present a new definition of salient regions in order to capture geometrically significant surface regions in the neighborhood of ridges, as well as to identify salient segments of ridges.
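
    The graph-theoretic step can be illustrated with networkx: represent the 1-complex of the medial axis as a graph whose nodes are junction and end points of edge curves, then take a cycle basis to obtain structural components. The tiny graph below is hypothetical; real input would come from the computed medial axis.

```python
import networkx as nx

# A toy 1-complex of a medial axis: nodes are junction/end points of
# edge curves, edges are the curves between them (hypothetical names).
skeleton = nx.Graph()
skeleton.add_edges_from([
    ("a", "b"), ("b", "c"), ("c", "a"),   # one loop: a structural cycle
    ("c", "d"), ("d", "e"),               # a dangling branch, no cycle
])

# Structural components via the cycle basis of the graph, echoing the
# dissertation's graph-theoretic analysis of the medial-axis 1-complex.
for cycle in nx.cycle_basis(skeleton):
    print("structural cycle:", cycle)   # prints ['a', 'b', 'c'] (order may vary)
```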

    Reconstruction of industrial piping installations from laser point clouds using profiling techniques

    As-built models of industrial piping installations are essential for planning applications in industry. Laser scanning has emerged as the preferred method for acquiring as-built information for creating these three-dimensional (3D) models. The product of the scanning process is a cloud of points representing scanned surfaces, from which 3D models of the surfaces are reconstructed. Most surfaces belong to piping elements, e.g. straight pipes, t-junctions, elbows, and spheres. The automatic detection of these piping elements in point clouds has the greatest impact on the reconstructed model. Various algorithms have been proposed for detecting piping elements in point clouds. However, most algorithms detect only cylinders (straight pipes) and planes, which make up a small percentage of the piping elements found in industrial installations. In addition, these algorithms do not allow for deformation detection in pipes. Therefore, this research is aimed at the detection of piping elements (straight pipes, elbows, t-junctions and flanges) in point clouds, including deformation detection.
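
    A minimal sketch of one profiling idea consistent with the title: slice the cloud perpendicular to a candidate pipe axis and fit a circle to each 2D profile; systematic residuals from the fitted circle then indicate deformation. The algebraic (Kasa) least-squares fit below is an assumption, not necessarily the fitting method used in the thesis.

```python
import numpy as np

def fit_circle(xy):
    """Algebraic (Kasa) least-squares circle fit to one profile slice.

    xy: (N, 2) array of points from a thin cross-sectional slice of the
    cloud. From x^2 + y^2 = 2*cx*x + 2*cy*y + (r^2 - cx^2 - cy^2), the
    centre and radius follow from one linear least-squares solve.
    """
    x, y = xy[:, 0], xy[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones(len(x))])
    b = x ** 2 + y ** 2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = np.sqrt(c + cx ** 2 + cy ** 2)
    return (cx, cy), r

# Usage: noisy points on a circle of radius 0.3 centred at (1, 2).
t = np.linspace(0, 2 * np.pi, 60)
pts = np.column_stack([1 + 0.3 * np.cos(t), 2 + 0.3 * np.sin(t)])
pts += np.random.normal(scale=0.005, size=pts.shape)
print(fit_circle(pts))  # centre near (1, 2), radius near 0.3
```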

    Coronal loop detection from solar images and extraction of salient contour groups from cluttered images

    This dissertation addresses two different problems: 1) coronal loop detection from solar images; and 2) salient contour group extraction from cluttered images. In the first part, we propose two different solutions to the coronal loop detection problem. The first solution is a block-based coronal loop mining method that detects coronal loops from solar images by dividing the solar image into fixed-size blocks, labeling the blocks as "Loop" or "Non-Loop", extracting features from the labeled blocks, and finally training classifiers to generate learning models that can classify new image blocks. The block-based approach achieves 64% accuracy in 10-fold cross-validation experiments. To improve accuracy and scalability, we propose a contour-based coronal loop detection method that extracts contours from cluttered regions, labels the contours as "Loop" or "Non-Loop", and extracts geometric features from the labeled contours. The contour-based approach achieves 85% accuracy in 10-fold cross-validation experiments, an increase of 21 percentage points over the block-based approach. In the second part, we propose a method to extract semi-elliptical open curves from cluttered regions. Our method consists of the following steps: obtaining individual smooth contours along with their saliency measures; starting from the most salient contour, searching for possible grouping options for each contour; and continuing the grouping until an optimum solution is reached. Our work involved the design and development of a complete system for coronal loop mining in solar images, which required the formulation of new Gestalt perceptual rules and a systematic methodology to select and combine them in a fully automated, judicious manner using machine learning techniques that eliminate the need to manually set the various weight and threshold values defining an effective cost function. After finding salient contour groups, we close the gaps within the contours in each group and perform B-spline fitting to obtain smooth curves. Our methods were successfully applied to cluttered solar images from TRACE and STEREO/SECCHI to discern coronal loops. Aerial road images were also used to demonstrate the applicability of our grouping techniques to other contour types in other real applications.
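
    The final smoothing step (gap closing followed by B-spline fitting) can be sketched with SciPy's parametric smoothing spline; the sample contour points below are synthetic stand-ins for a gap-closed contour group.

```python
import numpy as np
from scipy.interpolate import splprep, splev

# Synthetic stand-in for an ordered, gap-closed contour group: a noisy
# semi-elliptical arc, roughly the loop shape sought in the solar images.
t = np.linspace(0, np.pi, 25)
x = np.cos(t) + np.random.normal(scale=0.02, size=t.size)
y = np.sin(t) + np.random.normal(scale=0.02, size=t.size)

# Fit a parametric smoothing B-spline; s trades fidelity for smoothness
# (larger s yields a smoother curve).
tck, u = splprep([x, y], s=0.05)

# Evaluate the smooth curve densely along its parameter range.
xs, ys = splev(np.linspace(0, 1, 200), tck)
```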

    Similarity Measurement of Breast Cancer Mammographic Images Using Combination of Mesh Distance Fourier Transform and Global Features

    Similarity measurement in breast cancer is an important aspect of determining the vulnerability of detected masses based on previous cases. It is used to retrieve the most similar image for a given mammographic query image from a collection of previously archived images. By analyzing these results, doctors and radiologists can more accurately diagnose early-stage breast cancer and determine the best treatment. The direct result is better prognoses for breast cancer patients. Similarity measurement in images has always been a challenging task in the field of pattern recognition. A widely adopted strategy in Content-Based Image Retrieval (CBIR) is the comparison of local shape-based features of images. Contours summarize the orientations and sizes of shapes in images, allowing for a heuristic approach to measuring similarity between images. Similarly, the global features of an image can characterize the entire object with a single vector, which is also an important aspect of CBIR. The main objective of this paper is to enhance the similarity measurement between query images and database images so that the best match is chosen from the database for a particular query image, thus decreasing the chance of false positives. In this paper, a method is proposed that compares both local and global features of images to determine their similarity. Three image filters are applied to make this comparison. First, we filter using the mesh distance Fourier descriptor (MDFD), which is based on the calculation of local features of the mammographic image. After this filter is applied, we retrieve the five most similar images from the database. Two additional filters are applied to the resulting image set to determine the best match. Experiments show that the proposed method overcomes shortcomings of existing methods, increasing the accuracy of matches from 68% to 88%.
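
    The MDFD itself is specific to the paper, but the general shape of the first filter can be sketched with an ordinary contour Fourier descriptor followed by a nearest-neighbour retrieval of the five most similar images. The descriptor below (FFT of the complex boundary signal, translation- and scale-normalised) is a generic stand-in, not the MDFD.

```python
import numpy as np

def fourier_descriptor(contour, n_coeffs=16):
    """Translation/scale-normalised Fourier descriptor of a closed contour.

    contour: (N, 2) array of boundary points of a segmented mass.
    Generic stand-in for the paper's mesh distance Fourier descriptor.
    """
    z = contour[:, 0] + 1j * contour[:, 1]   # complex boundary signal
    coeffs = np.fft.fft(z)
    coeffs[0] = 0                            # drop DC term: translation invariance
    mags = np.abs(coeffs[1:n_coeffs + 1])    # magnitudes: rotation invariance
    return mags / (mags[0] + 1e-12)          # scale normalisation

def top_matches(query_desc, database_descs, k=5):
    """Indices of the k most similar descriptors (the paper's first filter
    retrieves the five most similar images before further filtering)."""
    dists = [np.linalg.norm(query_desc - d) for d in database_descs]
    return np.argsort(dists)[:k]
```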