
    Solid Modeling

    To appear in the Encyclopedia of Electrical and Electronics Engineering, Ed. J. Webster, John Wiley & Sons, 1999.
    A solid model is a digital representation of the geometry of an existing or envisioned physical object. Solid models are used in many industries, from entertainment to health care. They play a major role in the discrete-part manufacturing industries, where precise models of parts and assemblies are created using solid modeling software or more general computer-aided design (CAD) systems. Solid modeling is an interdisciplinary field that involves a growing number of areas. Its objectives evolved from a deep understanding of the practices and requirements of the targeted application domains. Its formulation and rigor are based on mathematical foundations derived from general and algebraic topology, and from Euclidean, differential, and algebraic geometry. The computational aspects of solid modeling deal with efficient data structures and algorithms, and benefit from recent developments in the field of computational geometry. Efficient processing is essential, because the complexity of industrial models is growing faster than the performance of commercial workstations. Techniques for modeling and analyzing surfaces and for computing their intersections are important in solid modeling. This area of research, sometimes called computer-aided geometric design, has strong ties with numerical analysis and differential geometry. Graphical user-interface (GUI) techniques also play a crucial role in solid modeling, since they determine the overall usability of the modeler and impact the user's productivity. There have always been strong symbiotic links and overlaps between the solid modeling community and the computer graphics community. Solid modeling interfaces are based on efficient three-dimensional (3D) graphics techniques, whereas research in 3D graphics focuses on fast or photo-realistic rendering of complex scenes, often composed of solid models, and on realistic or artistic animations of non-rigid objects. A similar symbiotic relation with computer vision is regaining popularity, as many research efforts in vision are model-based and attempt to extract 3D models from images or video sequences of existing parts or scenes. These efforts are particularly important for solid modeling, because the cost of manually designing solid models of existing objects or scenes far exceeds the other costs (hardware, software, maintenance, and training) associated with solid modeling. Finally, the growing complexity of solid models and the growing need for collaboration, reusability of design, and interoperability of software require expertise in distributed databases, constraint management systems, optimization techniques, object linking standards, and internet protocols. This report provides a brief overview of the solid modeling field, its fundamental technologies, and some important applications.
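
    As a concrete illustration of such a digital representation, the following minimal sketch shows one classical scheme, a CSG (constructive solid geometry) tree queried by point-membership classification; all names are illustrative rather than taken from any particular modeler.

```python
# Minimal sketch of a CSG tree: primitives combined by Boolean operations,
# queried by point-membership classification. Illustrative names only.
from dataclasses import dataclass

@dataclass
class Sphere:
    center: tuple
    radius: float
    def contains(self, p):
        return sum((a - b) ** 2 for a, b in zip(p, self.center)) <= self.radius ** 2

@dataclass
class Difference:
    left: object
    right: object
    def contains(self, p):
        return self.left.contains(p) and not self.right.contains(p)

# A hollow shell: an outer ball minus an inner ball. Point-membership
# classification answers the most basic solid-modeling query:
# is this point inside the represented solid?
shell = Difference(Sphere((0, 0, 0), 2.0), Sphere((0, 0, 0), 1.0))
print(shell.contains((1.5, 0.0, 0.0)))  # True: inside the shell wall
print(shell.contains((0.5, 0.0, 0.0)))  # False: inside the cavity
```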

    Degree-Driven Design of Geometric Algorithms for Point Location, Proximity, and Volume Calculation

    Correct implementation of published geometric algorithms is surprisingly difficult. Geometric algorithms are often designed for the Real-RAM, a computational model that provides arbitrary-precision arithmetic operations at unit cost. Actual commodity hardware provides only finite precision, which may result in arithmetic errors. While the errors may seem small, if ignored they may cause incorrect branching, which may cause an implementation to reach an undefined state, produce erroneous output, or crash. In 1999, Liotta, Preparata and Tamassia proposed that in addition to considering the resources of time and space, an algorithm designer should also consider the arithmetic precision necessary to guarantee a correct implementation. They called this design technique degree-driven algorithm design. Designers who consider the time, space, and precision for a problem up-front arrive at new solutions, gain further insight, and find simpler representations. In this thesis, I show that degree-driven design supports the development of new and robust geometric algorithms. I demonstrate this claim via several new algorithms. For n point sites on a U×U grid I consider three problems. First, I show how to compute the nearest neighbor transform in O(U^2) expected time, O(U^2) space, and double precision. Second, I show how to create a data structure in O(n log(Un)) expected time, O(n) expected space, and triple precision that supports O(log n) time and double precision post-office queries. Third, I show how to compute the Gabriel graph in O(n^2) time, O(n^2) space and double precision. For computing volumes of CSG models, I describe a framework that uses a minimal set of predicates of at most five-fold precision. The framework is over 500x faster and two orders of magnitude more accurate than a Monte Carlo volume calculation algorithm.
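
    To make the notion of precision "degree" concrete: the 2D orientation test is the sign of a degree-2 polynomial in the input coordinates, so integer grid input evaluated with roughly doubled precision yields the exact sign. The sketch below is illustrative, not code from the thesis; it contrasts exact integer evaluation with a double-precision evaluation that misclassifies.

```python
# The orientation test is the sign of a degree-2 polynomial in the inputs.
# Python ints are arbitrary precision, so the "exact" version is always
# right for grid input; the double-precision version can misclassify.

def orient2d_exact(ax, ay, bx, by, cx, cy):
    """Sign of (b-a) x (c-a), a degree-2 polynomial; exact for int input."""
    det = (bx - ax) * (cy - ay) - (by - ay) * (cx - ax)
    return (det > 0) - (det < 0)

def orient2d_double(ax, ay, bx, by, cx, cy):
    ax, ay, bx, by, cx, cy = map(float, (ax, ay, bx, by, cx, cy))
    det = (bx - ax) * (cy - ay) - (by - ay) * (cx - ax)
    return (det > 0) - (det < 0)

# b and c differ in a low-order bit that rounding to double throws away:
a, b, c = (0, 0), (2**53 + 1, 1), (2**53, 1)
print(orient2d_exact(*a, *b, *c))   # 1: the points form a genuine turn
print(orient2d_double(*a, *b, *c))  # 0: rounding makes them look collinear
```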

    High-Quality Simplification and Repair of Polygonal Models

    Because of the rapid evolution of 3D acquisition and modelling methods, highly complex and detailed polygonal models with constantly increasing polygon counts are used as three-dimensional geometric representations of objects in computer graphics and engineering applications. That this particular representation is arguably the most widespread one is due to its simplicity, flexibility and rendering support by 3D graphics hardware. Polygonal models are used for rendering of objects in a broad range of disciplines such as medical imaging, scientific visualization, computer-aided design and the film industry. The handling of huge scenes composed of these high-resolution models rapidly approaches the computational capabilities of any graphics accelerator. In order to cope with this complexity and to build level-of-detail representations, concentrated efforts have been dedicated in recent years to the development of new mesh simplification methods that produce high-quality approximations of complex models by reducing the number of polygons used in the surface while preserving the overall shape, volume and boundaries as much as possible. Many well-established methods and applications require "well-behaved" models as input. Degenerate or incorrectly oriented faces, T-joints, cracks and holes are just a few of the possible degeneracies that are often disallowed by various algorithms. Unfortunately, it is all too common to find polygonal models that contain such artefacts due to incorrect modelling or acquisition. Applications that may require "clean" models include finite element analysis, surface smoothing, model simplification and stereolithography. Mesh repair is the task of removing artefacts from a polygonal model in order to produce an output model that is suitable for further processing by methods and applications that have certain quality requirements on their input. This thesis introduces a set of new algorithms that address several particular aspects of mesh repair and mesh simplification. One of the two mesh repair methods deals with inconsistent normal orientation, while the other removes inconsistencies in vertex connectivity. Of the three mesh simplification approaches presented here, the first attempts to simplify polygonal models with the highest possible quality, the second applies the developed technique to out-of-core simplification, and the third prevents self-intersections of the model surface that can occur during mesh simplification.
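
    As a hedged sketch of one repair task named above, the following code makes triangle windings consistent by breadth-first propagation across shared edges; this is the textbook strategy, not necessarily the exact algorithm developed in the thesis.

```python
# Normal-orientation repair sketch: pick a seed triangle per connected
# component and propagate its winding; a neighbour is consistent when it
# traverses the shared edge in the opposite direction.
from collections import defaultdict, deque

def orient_consistently(faces):
    """faces: list of vertex-index triples; flips windings in place."""
    edge_to_faces = defaultdict(list)
    for fi, (a, b, c) in enumerate(faces):
        for u, v in ((a, b), (b, c), (c, a)):
            edge_to_faces[frozenset((u, v))].append(fi)

    def directed_edges(f):
        a, b, c = f
        return [(a, b), (b, c), (c, a)]

    seen = set()
    for start in range(len(faces)):
        if start in seen:
            continue
        seen.add(start)
        queue = deque([start])
        while queue:
            fi = queue.popleft()
            for u, v in directed_edges(faces[fi]):
                for nj in edge_to_faces[frozenset((u, v))]:
                    if nj == fi or nj in seen:
                        continue
                    # Consistent neighbours contain the shared edge as (v, u);
                    # if (u, v) appears in the same direction, flip the face.
                    if (u, v) in directed_edges(faces[nj]):
                        a, b, c = faces[nj]
                        faces[nj] = (a, c, b)
                    seen.add(nj)
                    queue.append(nj)
    return faces

# Two triangles sharing edge (1, 2); the second has inconsistent winding.
print(orient_consistently([(0, 1, 2), (1, 2, 3)]))  # [(0, 1, 2), (1, 3, 2)]
```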

    Boolean operations on 3D selective Nef complexes: data structure, algorithms, optimized implementation, experiments and applications

    Nef polyhedra in d-dimensional space are the closure of half-spaces under Boolean set operations. Consequently, they can represent non-manifold situations, open and closed sets, and mixed-dimensional complexes, and they are closed under all Boolean and topological operations, such as complement and boundary. The generality of Nef complexes is essential for some applications. In this thesis, we present a new data structure for the boundary representation of three-dimensional Nef polyhedra and efficient algorithms for Boolean operations. We use exact arithmetic to avoid the well-known problems with floating-point arithmetic and handle all degeneracies. Furthermore, we present important optimizations for the algorithms and evaluate the optimized implementation with extensive experiments. The experiments supplement the theoretical runtime analysis and compare our implementation with the commercial CAD kernel ACIS. ACIS is mostly up to six times faster, but there are also examples on which ACIS fails. Nef polyhedra can be used in a wide variety of applications. We present simple implementations of two applications: the visual hull and the Minkowski sum of two closed Nef polyhedra.
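
    To make the closure property concrete, here is a hedged one-dimensional illustration, far simpler than the thesis's 3D data structure: a 1D "Nef" set is stored as breakpoints plus a membership mark for every vertex and every open interval, so complements, Boolean combinations, and open/closed distinctions all stay inside the representation.

```python
# 1D illustration of selective Nef sets (not the thesis's data structure):
# n sorted breakpoints carry 2n+1 Boolean marks, one per vertex and one per
# open interval. Complement flips marks; binary ops overlay subdivisions.
from bisect import bisect_left

class Nef1D:
    def __init__(self, points, marks):
        assert len(marks) == 2 * len(points) + 1
        self.points, self.marks = points, marks

    @classmethod
    def halfline(cls, x, closed=True):
        """{t : t >= x} if closed, else {t : t > x}."""
        return cls([x], [False, closed, True])

    def complement(self):
        # Closure under complement: flip every mark, so open, closed and
        # mixed sets all remain representable.
        return Nef1D(self.points, [not m for m in self.marks])

    def mark_at(self, x):
        i = bisect_left(self.points, x)
        if i < len(self.points) and self.points[i] == x:
            return self.marks[2 * i + 1]  # vertex mark
        return self.marks[2 * i]          # open-interval mark

    def combine(self, other, op):
        """Binary Boolean operation via an overlay of both subdivisions."""
        pts = sorted(set(self.points) | set(other.points))
        marks = [op(self.mark_at(pts[0] - 1.0), other.mark_at(pts[0] - 1.0))]
        for i, p in enumerate(pts):
            marks.append(op(self.mark_at(p), other.mark_at(p)))
            probe = (p + pts[i + 1]) / 2 if i + 1 < len(pts) else p + 1.0
            marks.append(op(self.mark_at(probe), other.mark_at(probe)))
        return Nef1D(pts, marks)

# Half-open interval [0, 1) = {t >= 0} minus {t >= 1}: a mixed set that a
# purely closed (or purely open) representation could not express.
interval = Nef1D.halfline(0.0).combine(Nef1D.halfline(1.0),
                                       lambda p, q: p and not q)
print(interval.mark_at(0.0), interval.mark_at(0.5), interval.mark_at(1.0))
# True True False
```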

    Fast and Accurate Visibility Preprocessing

    Visibility culling is a means of accelerating the graphical rendering of geometric models. Invisible objects are efficiently culled to prevent their submission to the standard graphics pipeline. It is advantageous to preprocess scenes in order to determine invisible objects from all possible camera views. This information is typically saved to disk and may then be reused until the model geometry changes. Such preprocessing algorithms are therefore used for scenes that are primarily static. Currently, the standard approach to visibility preprocessing is to use a form of approximate solution known as conservative culling. Such algorithms over-estimate the set of visible polygons. This compromise has been considered necessary in order to perform visibility preprocessing quickly. These algorithms attempt to satisfy the goals of both rapid preprocessing and rapid run-time rendering. We observe, however, that there is a need for algorithms with superior performance in preprocessing, as well as for algorithms that are more accurate. For most applications these features are not required simultaneously. In this thesis we present two novel visibility preprocessing algorithms, each of which is strongly biased toward one of these requirements. The first algorithm has the advantage of performance. It executes quickly by exploiting graphics hardware. The algorithm is also output-sensitive (in what is visible) and has a logarithmic dependency on the size of the camera-space partition. These advantages come at the cost of image error. We present a heuristic-guided adaptive sampling methodology that minimises this error. We further show how this algorithm may be parallelised, and also present a natural extension of the algorithm to five dimensions for accelerating generalised ray shooting. The second algorithm has the advantage of accuracy. No over-estimation is performed, nor are any sacrifices made in terms of image quality. The cost is primarily that of time. Despite the relatively long computation, the algorithm is still tractable and on average scales slightly superlinearly with the input size. This algorithm is also output-sensitive. It is the first known tractable exact solution to the general 3D from-region visibility problem. In order to solve the exact from-region visibility problem, we first had to solve a more general form of the standard stabbing problem. An efficient solution to this problem is presented independently.
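
    The following CPU-only sketch shows the sampling skeleton behind such preprocessing (assumed structure, not the thesis's hardware-based implementation): rays are cast from sample viewpoints in a view cell and the nearest-hit object IDs are unioned into a potentially visible set (PVS). Undersampling can miss rarely visible objects, which is exactly the image error that adaptive sampling must minimise.

```python
# From-region visibility by sampling: a sphere scene, random viewpoints in
# a view cell, random ray directions; the union of nearest hits is the PVS.
import math, random

def nearest_sphere(origin, direction, spheres):
    """Index of the first sphere hit by the unit-direction ray, or None."""
    best, hit = float("inf"), None
    for i, (cx, cy, cz, r) in enumerate(spheres):
        ox, oy, oz = origin[0] - cx, origin[1] - cy, origin[2] - cz
        b = 2 * (ox * direction[0] + oy * direction[1] + oz * direction[2])
        c = ox * ox + oy * oy + oz * oz - r * r
        disc = b * b - 4 * c
        if disc < 0:
            continue
        t = (-b - math.sqrt(disc)) / 2  # nearest root (viewpoint outside)
        if 1e-9 < t < best:
            best, hit = t, i
    return hit

def sample_pvs(cell_min, cell_max, spheres, n_samples=2000, seed=0):
    rng, pvs = random.Random(seed), set()
    for _ in range(n_samples):
        o = [rng.uniform(lo, hi) for lo, hi in zip(cell_min, cell_max)]
        d = [rng.gauss(0, 1) for _ in range(3)]
        n = math.sqrt(sum(x * x for x in d))
        hit = nearest_sphere(o, [x / n for x in d], spheres)
        if hit is not None:
            pvs.add(hit)
    return pvs

# Sphere 1 is hidden behind sphere 0 from everywhere in the view cell.
spheres = [(5, 0, 0, 2.0), (9, 0, 0, 1.0), (0, 5, 0, 1.0)]
print(sample_pvs((-1, -1, -1), (1, 1, 1), spheres))  # expected: {0, 2}
```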

    AMP-CAD: Automatic Assembly Motion Planning Using CAD Models of Parts

    Assembly with robots involves two kinds of motions: those that are point-to-point and those that are force/torque-guided, the former being faster and more amenable to automatic planning, the latter being necessary for dealing with tight clearances. In this paper, we describe an assembly motion planning system that uses descriptions of assemblies and CAD models of parts to automatically determine which motions should be point-to-point and which should be force/torque-guided. Our planner uses graph search over a potential-field representation of parts to calculate candidate assembly paths. Given the tolerances of the parts and other uncertainties, these paths are then analyzed for the likelihood of collisions. Those path segments that are prone to collisions are then marked for execution under force/torque control. The calculation of the various motions is facilitated by an object-oriented and feature-based assembly representation. A highlight of this representation is the manner in which tolerance information is taken into account: the representation of, say, a part contains a pointer to the boundary representation of the part in its most-material-condition form. As first defined by Requicha, the most-material-condition form of a geometric entity is obtained by expanding all the convexities and shrinking all the concavities by the relevant tolerances. An integral part of the assembly motion planner is the execution unit, in which resides knowledge of the different types of automatic EDR (error detection and recovery) strategies. During the execution of force/torque-guided motions, this unit invokes the EDR strategies appropriate to the geometric constraints relevant to the motion. This system, called AMP-CAD, has been experimentally verified using a Cincinnati Milacron T3-726 robot and a Puma 762 robot on a variety of assemblies.
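
    The sketch below is a two-dimensional grid stand-in for the planning idea (the actual AMP-CAD planner operates on CAD part models): a clearance-based potential field combined with graph search that penalises low-clearance cells, so the returned path keeps its distance from obstacles where possible.

```python
# Potential field + graph search sketch: clearance from obstacles via
# multi-source BFS, then Dijkstra with a penalty for low-clearance cells.
import heapq
from collections import deque

def clearance(grid):
    """Grid distance to the nearest obstacle cell (assumes one exists)."""
    R, C = len(grid), len(grid[0])
    d = [[0 if grid[r][c] else None for c in range(C)] for r in range(R)]
    q = deque((r, c) for r in range(R) for c in range(C) if grid[r][c])
    while q:
        r, c = q.popleft()
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < R and 0 <= nc < C and d[nr][nc] is None:
                d[nr][nc] = d[r][c] + 1
                q.append((nr, nc))
    return d

def plan(grid, start, goal):
    R, C = len(grid), len(grid[0])
    d = clearance(grid)
    best, prev, pq = {start: 0.0}, {}, [(0.0, start)]
    while pq:
        cost, cell = heapq.heappop(pq)
        if cell == goal:  # reconstruct the path
            path = [cell]
            while cell in prev:
                cell = prev[cell]
                path.append(cell)
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < R and 0 <= nc < C and not grid[nr][nc]:
                # Unit step cost plus a penalty that grows near obstacles.
                step = cost + 1.0 + 4.0 / (1 + d[nr][nc])
                if step < best.get((nr, nc), float("inf")):
                    best[(nr, nc)] = step
                    prev[(nr, nc)] = cell
                    heapq.heappush(pq, (step, (nr, nc)))
    return None

grid = [[0, 0, 0, 0, 0],
        [0, 1, 1, 1, 0],
        [0, 0, 0, 0, 0]]
print(plan(grid, (0, 0), (2, 4)))  # skirts the obstacle block
```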

    From 3D Models to 3D Prints: an Overview of the Processing Pipeline

    Due to the wide diffusion of 3D printing technologies, geometric algorithms for Additive Manufacturing are being invented at an impressive speed. Each single step, in particular along the Process Planning pipeline, can now count on dozens of methods that prepare the 3D model for fabrication, while analysing and optimizing geometry and machine instructions for various objectives. This report provides a classification of this vast state of the art, and makes explicit the relation between each single algorithm and a list of desirable objectives during Process Planning. The objectives themselves are listed and discussed, along with possible needs for tradeoffs. Additive Manufacturing technologies are broadly categorized to explicitly relate classes of devices and supported features. Finally, this report offers an analysis of the state of the art while discussing open and challenging problems from both an academic and an industrial perspective.
    Comment: European Union (EU); Horizon 2020; H2020-FoF-2015; RIA - Research and Innovation action; Grant agreement N. 68044
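
    As a small illustration of the most basic Process Planning stage, the following sketch (illustrative only, not taken from the report) slices a triangle mesh with a horizontal plane and collects contour segments; contour ordering, support generation and toolpath computation are separate pipeline stages.

```python
# Slicing sketch: intersect each triangle with the plane z = h and keep the
# resulting 2D contour segments. Vertices lying exactly on the plane are
# ignored in this simplified version.
def slice_mesh(triangles, h):
    """triangles: list of three (x, y, z) vertices. Returns 2D segments."""
    segments = []
    for tri in triangles:
        pts = []
        for i in range(3):
            (x0, y0, z0), (x1, y1, z1) = tri[i], tri[(i + 1) % 3]
            if (z0 - h) * (z1 - h) < 0:  # edge strictly crosses the plane
                t = (h - z0) / (z1 - z0)
                pts.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
        if len(pts) == 2:
            segments.append(tuple(pts))
    return segments

# A single triangle spanning the plane z = 0.5 yields one contour segment.
tri = [((0, 0, 0), (1, 0, 0), (0, 0, 1))]
print(slice_mesh(tri, 0.5))  # [((0.5, 0.0), (0.0, 0.0))]
```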

    Contribution to structural parameters computation: volume models and methods

    Bio-CAD and in-silico experimentation are attracting growing interest in biomedical applications, where scientific data coming from real samples are used to compute structural parameters that allow physical properties to be evaluated. Non-invasive imaging acquisition technologies such as CT, µCT or MRI, together with the constant growth of computer capabilities, allow the acquisition, processing and visualization of scientific data of increasing complexity. Structural parameter computation is based on the existence of two phases (or spaces) in the sample: the solid phase, which may correspond to the bone or material, and the empty or porous phase; samples are therefore represented as binary volumes. The most common representation model for these datasets is the voxel model, the natural extension to 3D of 2D bitmaps. In this thesis, the Extreme Vertices Model (EVM) and a newly proposed model, the Compact Union of Disjoint Boxes (CUDB), are used to represent binary volumes in a much more compact way. EVM stores only a sorted subset of the vertices of the object's boundary, whereas CUDB keeps a compact list of boxes. Methods are proposed to compute the following structural parameters: pore-size distribution, connectivity, orientation, sphericity and roundness. The pore-size distribution helps to interpret the characteristics of porous samples by allowing users to observe the most common pore-diameter ranges as peaks in a graph. Connectivity is a topological property related to the genus of the solid space; it measures the level of interconnectivity among elements and is an indicator of the biomechanical characteristics of bone or other materials. The orientation of a shape can be defined by rotation angles around a set of orthogonal axes. Sphericity is a measure of how spherical a particle is, whereas roundness is a measure of the sharpness of a particle's edges and corners. The study of these parameters requires dealing with real samples scanned at high resolution, which usually produces huge datasets that require a lot of memory and long processing times to analyze. For this reason, a new method to simplify binary volumes in a progressive and lossless way is presented. This method generates a level-of-detail sequence of objects, where each object is a bounding volume of the previous objects. Besides supporting the structural parameter computation, this method is practical for tasks such as progressive transmission, collision detection and volume-of-interest computation. As part of multidisciplinary research, two practical applications have been developed to compute structural parameters of real samples: a software tool for automatic detection of characteristic viscosity points of basalt rock and glass samples, and another to compute the sphericity and roundness of complex forms in a silica dataset.
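
    To make one of these parameters concrete, the sketch below computes Wadell sphericity, psi = pi^(1/3) * (6V)^(2/3) / A, directly on a binary voxel volume, using the voxel count for the volume V and the number of exposed voxel faces for the surface area A. This is a coarse voxel-based illustration; the thesis computes such parameters on the compact EVM/CUDB representations.

```python
# Sphericity of a binary voxel volume: V = number of solid voxels,
# A = number of solid-voxel faces exposed to the empty phase (unit spacing).
import math

def sphericity(vol):
    """vol: 3D nested list of 0/1 voxels."""
    X, Y, Z = len(vol), len(vol[0]), len(vol[0][0])
    solid = lambda x, y, z: (0 <= x < X and 0 <= y < Y and 0 <= z < Z
                             and vol[x][y][z])
    V, A = 0, 0
    for x in range(X):
        for y in range(Y):
            for z in range(Z):
                if not vol[x][y][z]:
                    continue
                V += 1
                for dx, dy, dz in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                                   (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                    if not solid(x + dx, y + dy, z + dz):
                        A += 1  # each exposed face contributes unit area
    return math.pi ** (1 / 3) * (6 * V) ** (2 / 3) / A

# A 4x4x4 solid cube: V = 64, A = 96, so psi = (pi/6)^(1/3) ~ 0.806,
# matching the analytic sphericity of a cube.
cube = [[[1] * 4 for _ in range(4)] for _ in range(4)]
print(sphericity(cube))
```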

    Planar Nef polyhedra and generic higher-dimensional geometry

    We present two generic software projects that are part of the software library CGAL. The first part describes the design of a geometry kernel for higher-dimensional Euclidean geometry and its interaction with application programs. We describe software structures, interface concepts, and models of these concepts, based on coordinate representation, number types, and memory layout. In the higher-dimensional kernel, the interaction between linear algebra and the geometric objects and primitives is an important facet. In the actual design, our users can replace the number types, the representation types, and the traits classes that feed kernel functionality into our current application programs: higher-dimensional convex hulls and Delaunay tetrahedralisations. In the second part we present the realization of planar Nef polyhedra. The concept of Nef polyhedra subsumes all kinds of rectilinear polyhedral subdivisions and is therefore of general applicability within a geometric software library. We describe a software module that provides comprehensive functionality for planar Nef polyhedra. The software is based on a theory of extended points and segments that allows us to reuse classical algorithmic solutions, such as plane sweep, to realize binary operations on Nef polyhedra.
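
    The kernel design point, replaceable number types behind a fixed predicate interface, can be illustrated in a few lines (a Python analogy; CGAL itself achieves this with C++ templates and traits classes):

```python
# A predicate written once and parameterised by the number type NT:
# swapping in an exact type changes robustness without touching the logic.
from fractions import Fraction

def orient2d(a, b, c, NT=float):
    """Sign of the 2D orientation determinant, evaluated in number type NT."""
    ax, ay = map(NT, a)
    bx, by = map(NT, b)
    cx, cy = map(NT, c)
    det = (bx - ax) * (cy - ay) - (by - ay) * (cx - ax)
    return (det > 0) - (det < 0)

# a, b, c are exactly collinear (c is b scaled by 1/10 through the origin).
a, b, c = ("0", "0"), ("3", "1"), ("0.3", "0.1")
print(orient2d(a, b, c, NT=float))     # 1: spurious turn, 3*0.1 != 0.3 in binary
print(orient2d(a, b, c, NT=Fraction))  # 0: exact arithmetic sees collinearity
```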