
    Eliciting Perceptual Ground Truth for Image Segmentation

    In this paper, we investigate human visual perception and establish a body of ground truth data elicited from human visual studies. We aim to build on the formative work of Ren, Eakins and Briggs, who produced an initial ground truth database. Human subjects were asked to draw and rank their perceptions of the parts of a series of figurative images. These rankings were then used to score the perceptions, identify the preferred human breakdowns and thus allow us to induce perceptual rules for the human decomposition of figurative images. The results suggest that the human breakdowns follow well-known perceptual principles, in particular the Gestalt laws.

    Skeleton computation of an image using a geometric approach

    In this work we develop two algorithms to compute the skeleton of a binary 2D image. Both algorithms follow a geometric approach and work directly with the boundary of the image, which is an orthogonal polygon (OP). One of these algorithms processes the edges of the polygon, while the other uses its vertices. Compared to a thinning method, the presented algorithms show good performance.
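
    The abstract compares the geometric algorithms against a thinning method. As a rough point of reference only (this is not the boundary-based approach of the paper, and scikit-image is an assumption of this sketch), a thinning-style baseline can be obtained as follows:

```python
# Minimal thinning-style baseline for comparison purposes only; this is NOT
# the geometric, boundary-based skeleton algorithm described in the abstract.
import numpy as np
from skimage.morphology import skeletonize  # assumed dependency

# A small binary image whose foreground is an orthogonal (rectilinear) shape.
image = np.zeros((40, 60), dtype=bool)
image[5:35, 5:25] = True      # vertical bar
image[15:25, 5:55] = True     # horizontal bar; the union is an orthogonal polygon

# Thinning-based skeleton: iteratively peels boundary pixels.
skeleton = skeletonize(image)

print("foreground pixels:", int(image.sum()))
print("skeleton pixels:  ", int(skeleton.sum()))
```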

    Fast Approximate Convex Decomposition

    Approximate convex decomposition (ACD) is a technique that partitions an input object into "approximately convex" components. Decomposition into approximately convex pieces is both more efficient to compute than exact convex decomposition and generates a more manageable number of components. It can be used as the basis of divide-and-conquer algorithms for applications such as collision detection, skeleton extraction and mesh generation. In this paper, we propose a new method called Fast Approximate Convex Decomposition (FACD) that improves the quality of the decomposition and reduces the cost of computing it for both 2D and 3D models. In particular, we propose a new strategy for evaluating potential cuts that aims to reduce the relative concavity rather than the absolute concavity. As shown in our results, this leads to more natural and smaller decompositions that include components for small but important features, such as toes or fingers, while not decomposing larger components, such as a torso that may have concavities due to surface texture. Second, instead of decomposing a component into two pieces at each step, as in the original ACD, we propose a new strategy that uses a dynamic programming approach to select a set of n_c non-crossing (independent) cuts that can be applied simultaneously to decompose the component into n_c + 1 components. This reduces the depth of recursion and, together with a more efficient method for computing the concavity measure, leads to significant gains in efficiency. We provide comparative results for 2D and 3D models illustrating the improvements obtained by FACD over ACD, and we compare with the segmentation methods given in the Princeton Shape Benchmark.
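
    A minimal sketch of the kind of concavity test that drives ACD-style decomposition is given below. The measure used here (largest distance from a boundary vertex to the convex hull boundary) and the shapely dependency are simplifying assumptions of this sketch; FACD's relative-concavity strategy and its dynamic-programming cut selection are not reproduced.

```python
# Sketch of a 2D concavity measure in the spirit of approximate convex
# decomposition; illustrative only, not the FACD algorithm itself.
from shapely.geometry import Polygon, Point  # assumed dependency

def concavity(poly: Polygon) -> float:
    # Largest distance from a boundary vertex to the convex hull boundary.
    hull_boundary = poly.convex_hull.exterior
    return max(Point(p).distance(hull_boundary) for p in poly.exterior.coords)

# An L-shaped component: the reflex corner creates a pocket relative to the hull.
component = Polygon([(0, 0), (4, 0), (4, 1), (1, 1), (1, 3), (0, 3)])

tolerance = 0.5  # components with concavity below this count as "approximately convex"
if concavity(component) > tolerance:
    print(f"concavity {concavity(component):.2f} > {tolerance}: cut this component further")
else:
    print("component is approximately convex; stop")
```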

    VLSI Routing for Advanced Technology

    Routing is a major step in VLSI design, the design process of complex integrated circuits (commonly known as chips). The basic task in routing is to connect predetermined locations on a chip (pins) with wires which serve as electrical connections. One main challenge in routing for advanced chip technology is the increasing complexity of design rules, which reflect manufacturing requirements. In this thesis we investigate various aspects of this challenge. First, we consider polygon decomposition problems in the context of VLSI design rules. We introduce different width notions for polygons which are important for width-dependent design rules in VLSI routing, and we present efficient algorithms for computing width-preserving decompositions of rectilinear polygons into rectangles. Such decompositions are used in routing to allow for fast design rule checking. A main contribution of this thesis is an O(n) time algorithm for computing a decomposition of a simple rectilinear polygon with n vertices into O(n) rectangles, preserving two-dimensional width. Here the two-dimensional width at a point of the polygon is defined as the edge length of a largest square that contains the point and is contained in the polygon. In order to obtain these results we establish a connection between such decompositions and Voronoi diagrams. Furthermore, we consider the implications of multiple patterning and other advanced design rules for VLSI routing. The main contribution in this context is the detailed description of a routing approach which is able to manage such advanced design rules. As the main algorithmic concept we use multi-label shortest paths, where certain path properties (which model design rules) can be enforced by defining labels assigned to path vertices and allowing only certain label transitions. The described approach has been implemented in BonnRoute, a VLSI routing tool developed at the Research Institute for Discrete Mathematics, University of Bonn, in cooperation with IBM. We present experimental results confirming that a flow combining BonnRoute and an external cleanup step produces far superior results compared to an industry-standard router. In particular, our proposed flow runs more than twice as fast, reduces the via count by more than 20%, the wiring length by more than 10%, and the number of remaining design rule errors by more than 60%. These results, obtained by applying our multiple patterning approach to real-world chip instances provided by IBM, are another main contribution of this thesis. We note that IBM uses our proposed combined BonnRoute flow as the default tool for signal routing.
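
    The multi-label shortest-path idea can be illustrated with a small sketch: run Dijkstra over (vertex, label) states and allow only whitelisted label transitions. The graph, the labels and the transition table below are invented for illustration; they stand in for design-rule state and are not BonnRoute's actual data model.

```python
# Multi-label shortest-path sketch: Dijkstra over (vertex, label) states,
# where only whitelisted label transitions are allowed. Labels model
# design-rule state in a purely illustrative way.
import heapq
from collections import defaultdict

def multi_label_shortest_path(edges, allowed, source, target, start_label):
    # edges:   {u: [(v, edge_label, length), ...]}
    # allowed: set of (current_label, edge_label) pairs permitted by the "rules"
    dist = defaultdict(lambda: float("inf"))
    dist[(source, start_label)] = 0.0
    heap = [(0.0, source, start_label)]
    while heap:
        d, u, lab = heapq.heappop(heap)
        if d > dist[(u, lab)]:
            continue
        if u == target:
            return d
        for v, edge_lab, length in edges.get(u, []):
            if (lab, edge_lab) not in allowed:
                continue  # this transition would violate a (modelled) design rule
            state = (v, edge_lab)
            if d + length < dist[state]:
                dist[state] = d + length
                heapq.heappush(heap, (dist[state], v, edge_lab))
    return None  # target unreachable under the given rules

# Toy instance: wire segments labelled by routing layer; a via ("V") must
# separate segments on different layers, so the direct M1 -> M2 edge is unusable.
edges = {
    "s": [("a", "M1", 1.0)],
    "a": [("b", "V", 0.5), ("t", "M2", 1.0)],
    "b": [("t", "M2", 1.0)],
}
allowed = {("start", "M1"), ("M1", "V"), ("V", "M2")}
print(multi_label_shortest_path(edges, allowed, "s", "t", "start"))  # -> 2.5
```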

    Decomposing and packing polygons / Dania el-Khechen.

    In this thesis, we study three different problems in the field of computational geometry: the partitioning of a simple polygon into two congruent components, the partitioning of squares and rectangles into equal-area components while minimizing the perimeter of the cuts, and the packing of the maximum number of squares in an orthogonal polygon. To solve the first problem, we present polynomial time algorithms which, given a simple polygon P, partition it, if possible, into two congruent and possibly nonsimple components P1 and P2: an O(n^2 log n) time algorithm for properly congruent components and an O(n^3) time algorithm for mirror congruent components. In our analysis of the second problem, we experimentally find new bounds on the optimal partitions of squares and rectangles into equal-area components. The visualization of the best determined solutions allows us to conjecture some characteristics of a class of optimal solutions. Finally, for the third problem, we present three linear time algorithms for packing the maximum number of unit squares in three subclasses of orthogonal polygons: staircase polygons, pyramids and Manhattan skyline polygons. We also study a special case of the problem where the given orthogonal polygon has vertices with integer coordinates and the squares to pack are 2 × 2 squares. We model the latter problem with a binary integer program and we develop a system that produces and visualizes optimal solutions. The observation of such solutions aided us in proving some characteristics of a class of optimal solutions.
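
    The binary integer program for the 2 × 2 case can be sketched as follows: one binary variable per feasible square placement and one covering constraint per unit cell. The staircase-shaped toy region and the use of PuLP are assumptions of this sketch, not part of the thesis.

```python
# Sketch of a binary integer program for packing 2x2 squares into an
# orthogonal polygon with integer vertices, represented as a set of unit
# cells. PuLP is used only for illustration.
from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpBinary, value

# Toy staircase region given as unit cells (column x, row y).
cells = {(x, y) for y in range(4) for x in range(4) if x >= y}

# x[i, j] = 1 iff a 2x2 square is placed with lower-left cell (i, j);
# a placement is feasible only if all four covered cells lie in the region.
placements = [(i, j) for (i, j) in cells
              if {(i, j), (i + 1, j), (i, j + 1), (i + 1, j + 1)} <= cells]
x = {p: LpVariable(f"x_{p[0]}_{p[1]}", cat=LpBinary) for p in placements}

prob = LpProblem("pack_2x2_squares", LpMaximize)
prob += lpSum(x.values())  # maximise the number of packed squares

# Each unit cell may be covered by at most one chosen square.
for (cx, cy) in cells:
    covering = [x[(i, j)] for (i, j) in placements
                if i <= cx <= i + 1 and j <= cy <= j + 1]
    if covering:
        prob += lpSum(covering) <= 1

prob.solve()
print("packed squares:", int(value(prob.objective)))
```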

    Computational Topology Methods for Shape Modelling Applications

    This thesis deals with computational topology, a recent branch of research that involves both mathematics and computer science, and tackles the problem of discretizing Morse theory for functions defined on a triangle mesh. The application context of Morse theory in general, and Reeb graphs in particular, is the analysis of geometric shapes and the extraction of skeletal structures that synthetically represent a shape, preserving its topological properties and main morphological characteristics. In computer graphics, shapes, that is, one-, two- or higher-dimensional connected, compact spaces having a visual appearance, are typically approximated by digital models. Since topology focuses on the qualitative properties of spaces, such as connectedness and how many and what type of holes a space has, topology is the best tool to describe the shape of a mathematical model at a high level of abstraction. Geometry, conversely, is mainly related to the quantitative characteristics of a shape. Thus, the combination of topology and geometry creates a new generation of tools that provide a computational description of the most representative features of the shape along with their relationships. Extracting qualitative information, that is, information related to the semantics of the shape and its morphological structure, from discrete models is a central goal in shape modeling. In this thesis a conceptual model is proposed which represents a given surface based on a topological coding that defines a sketch of the surface, discarding irrelevant details and classifying its topological type. The approach is based on Morse theory and Reeb graphs, which provide a very useful shape abstraction method for the analysis and structuring of the information contained in the geometry of the discrete shape model. To fully develop the method, both theoretical and computational aspects have been considered, related to the definition and the extension of the Reeb graph to the discrete domain. For the definition and automatic construction of the conceptual model, a new method has been developed that analyzes and characterizes a triangle mesh with respect to the behavior of a real-valued, at least continuous function defined on the mesh. The proposed solution also handles degenerate critical points, such as non-isolated critical points. To do so, the surface model is characterized using a contour-based strategy, recognizing critical areas instead of critical points and coding the evolution of the contour levels in a graph-like structure, named the Extended Reeb Graph (ERG), which is a high-level abstract model suitable for representing and manipulating piecewise linear surfaces. The descriptive power of the ERG has also been augmented with geometric information in addition to the topological information, and the relation between the extracted topological and morphological features and the real characteristics of the surface has been studied, giving an evaluation of the size of the discarded details. Finally, the effectiveness of our description framework has been evaluated in several application contexts.
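
    The critical-point bookkeeping that underlies Reeb-graph-style analysis can be illustrated with a standard discrete criterion: classify a mesh vertex by counting sign changes of the function around its one-ring. This is only a generic sketch (it assumes distinct function values and an interior vertex) and not the Extended Reeb Graph construction of the thesis, which works with critical areas and contour levels.

```python
# Classify an interior mesh vertex as regular, extremum or saddle by counting
# sign changes of f around its one-ring neighbourhood (assumes f takes
# pairwise distinct values). Illustrative sketch only.

def classify_vertex(v, f, one_ring):
    # one_ring: neighbours of v listed in cyclic order around v.
    signs = [1 if f[u] > f[v] else -1 for u in one_ring]
    changes = sum(1 for a, b in zip(signs, signs[1:] + signs[:1]) if a != b)
    if changes == 0:
        return "maximum" if signs[0] < 0 else "minimum"
    if changes == 2:
        return "regular"
    return "saddle"  # 4 or more sign changes, e.g. a monkey saddle

# Toy example: a vertex at height 0 whose ring alternates above/below it.
f = {0: 0.0, 1: 1.0, 2: -1.0, 3: 1.0, 4: -1.0, 5: 1.0, 6: -1.0}
ring = [1, 2, 3, 4, 5, 6]
print(classify_vertex(0, f, ring))  # -> saddle
```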

    The Semantic Building Modeler - A System for the Procedural Generation of 3D Building Models

    Computer-generated 3D models of buildings, cities and whole landscapes are constantly gaining importance throughout different fields of application. Beyond obvious domains like computer games or movies there are many other areas, e.g. reconstructions of historic cities both for educational purposes and for further research. The most widely used method for producing city models is manual creation: a 3D artist uses modeling software to design every single component by hand. Especially for city models consisting of hundreds or thousands of buildings, this is a very time-consuming and thus expensive method. Procedural modeling offers an alternative to this manual approach by using a computer to generate models automatically. The history of procedural modeling algorithms goes back to the 1980s, when the first implementations of automatic texture synthesis were developed and published by Ken Perlin. Other important applications are the generation of plants based on formalisms like L-systems, proposed by Aristid Lindenmayer, and particle systems, first proposed by William Reeves and widely used within computer graphics. Research concerning the applicability of the developed formalisms and techniques led to systems dedicated to the automatic computation of building and city models. These systems are often divided into rule-based and procedural systems. Rule-based systems use formalisms like text replacement systems, whereas procedural systems implement every step of the construction process within the program code. The Semantic Building Modeler is a procedural system which is configured by user-provided XML parameters. The semantic meaning of these parameters is fixed through a tight coupling with their usage within the program code. In this respect, the Semantic Building Modeler differs from other systems on today's market. It also eases the introduction for novice users making their first experiences with procedural modeling. On the algorithmic side, the system proposes two new algorithms for the automatic creation and variation of building footprints. These enable the software to automatically create varied building structures. Additionally, the prototype implementation can be seen as an extendable framework: it offers a wide range of algorithms and methods which can be used for future extensions of the current system. The prototype also contains an implementation of the Weighted Straight Skeleton algorithm, techniques for the distributed storage of configuration fragments, the procedural construction of building components like cornices, and many more. The prototypical realization of the developed algorithms is a proof-of-concept implementation. It demonstrates that semantically based parameters and the procedural creation of complex and visually appealing geometry can go hand in hand. This opens the powerful algorithmic construction of building and city models to a large group of users who have experience neither in programming nor in the manual design of 3D models.
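
    The general idea of XML-configured procedural generation can be sketched in a few lines; the element and attribute names below are invented for illustration and are not the actual configuration schema of the Semantic Building Modeler.

```python
# Toy sketch of XML-parameter-driven footprint variation; the schema shown
# here is invented and does not reflect the Semantic Building Modeler.
import random
import xml.etree.ElementTree as ET

CONFIG = """
<building>
  <footprint baseWidth="10" baseDepth="8" jitter="0.2"/>
  <storeys count="3" height="3.2"/>
</building>
"""

def generate_footprint(xml_text, seed=0):
    random.seed(seed)
    fp = ET.fromstring(xml_text).find("footprint")
    w = float(fp.get("baseWidth"))
    d = float(fp.get("baseDepth"))
    jitter = float(fp.get("jitter"))
    # Vary the base rectangle slightly to obtain distinct but similar buildings.
    w *= 1.0 + random.uniform(-jitter, jitter)
    d *= 1.0 + random.uniform(-jitter, jitter)
    return [(0.0, 0.0), (w, 0.0), (w, d), (0.0, d)]  # CCW rectangular footprint

print(generate_footprint(CONFIG, seed=1))
```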