A Probabilistic Analysis of the Power of Arithmetic Filters
The assumption of real-number arithmetic, which underlies conventional geometric algorithms, has been seriously challenged in recent years, since digital computers do not actually provide such a capability.
A geometric predicate usually consists of evaluating the sign of some algebraic expression. In most cases rounded computations yield a reliable result, but sometimes rounded arithmetic introduces errors that may invalidate the algorithms. Rounded arithmetic may produce an incorrect result only if the exact absolute value of the algebraic expression is smaller than some (small) threshold ε, which represents the largest error that may arise in the evaluation of the expression. The threshold ε depends on the structure of the expression and on the adopted computer arithmetic, assuming that the input operands are error-free.
A pair (arithmetic engine, threshold) is an "arithmetic filter". In this paper we develop a general technique for assessing the efficacy of an arithmetic filter. The analysis consists of evaluating both the threshold and the probability of failure of the filter.
To exemplify the approach, under the assumption that the input points are chosen uniformly at random in a unit ball or unit cube, we analyze the two important predicates "which-side" and "insphere". We show that the probability that the absolute value of the corresponding determinant is no larger than some positive value V, with emphasis on small V, is Θ(V) for the which-side predicate, while for the insphere predicate it is Θ(V^(2/3)) in dimension 1, O(√V) in dimension 2, and O(√V ln(1/V)) in higher dimensions. Constants are small and are given in the paper.
Comment: 22 pages, 7 figures. Results for the insphere test improved in cs.CG/990702
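As a concrete illustration of such a filter (a hypothetical sketch, not the paper's construction): evaluate the which-side determinant in double precision, trust the rounded sign when its magnitude exceeds an error threshold ε, and fall back to exact rational arithmetic otherwise. The constant 8 in the threshold is an illustrative placeholder, not the bound derived in the paper.

```python
from fractions import Fraction

# Unit roundoff for IEEE-754 double precision.
EPS = 2.0 ** -53

def orient2d_filtered(ax, ay, bx, by, cx, cy):
    """Sign of the 2x2 which-side determinant det(b - a, c - a),
    computed through a simple floating-point arithmetic filter.
    Returns +1, -1, or 0."""
    det = (bx - ax) * (cy - ay) - (by - ay) * (cx - ax)
    # Coarse forward-error threshold: an illustrative constant times
    # the magnitudes of the two products entering the determinant.
    errbound = 8.0 * EPS * (abs((bx - ax) * (cy - ay)) + abs((by - ay) * (cx - ax)))
    if abs(det) > errbound:       # filter succeeds: rounded sign is reliable
        return 1 if det > 0 else -1
    # Filter fails: re-evaluate exactly with rational arithmetic.
    ax, ay, bx, by, cx, cy = map(Fraction, (ax, ay, bx, by, cx, cy))
    det = (bx - ax) * (cy - ay) - (by - ay) * (cx - ax)
    return (det > 0) - (det < 0)
```

The filter is cheap on the overwhelming majority of inputs; the paper's analysis quantifies exactly how rarely the expensive exact fallback is taken.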
Recent progress in exact geometric computation
Abstract: Computational geometry has produced an impressive wealth of efficient algorithms. The robust implementation of these algorithms remains a major issue. Among the many proposed approaches for solving numerical non-robustness, Exact Geometric Computation (EGC) has emerged as one of the most successful. This survey describes recent progress in EGC research in three key areas: constructive zero bounds, approximate expression evaluation and numerical filters.
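A minimal illustration (not taken from the survey) of the non-robustness that EGC addresses: rounded double-precision arithmetic can report a nonzero sign for an expression whose exact value is zero, while exact rational arithmetic recovers the true sign.

```python
from fractions import Fraction

# Rounded double-precision evaluation yields a spurious positive sign ...
x = 0.1 + 0.2 - 0.3
print(x > 0)            # rounding error masquerades as a positive value

# ... while exact rational arithmetic recovers the true sign.
exact = Fraction('0.1') + Fraction('0.2') - Fraction('0.3')
print(exact == 0)       # the exact value is zero
```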
Smoothing the gap between NP and ER
We study algorithmic problems that belong to the complexity class of the
existential theory of the reals (ER). A problem is ER-complete if it is as hard
as the problem ETR and if it can be written as an ETR formula. Traditionally,
these problems are studied in the real RAM, a model of computation that assumes
that the storage and comparison of real-valued numbers can be done in constant
space and time, with infinite precision. The complexity class ER is often
called a real RAM analogue of NP, since the problem ETR can be viewed as the
real-valued variant of SAT.
In this paper we prove a real RAM analogue of the Cook-Levin theorem which
shows that ER membership is equivalent to having a verification algorithm that
runs in polynomial time on a real RAM. This gives an easy proof of
ER-membership, as verification algorithms on a real RAM are much more versatile
than ETR formulas. We use this result to construct a framework to study
ER-complete problems under smoothed analysis. We show that for a wide class of
ER-complete problems, their witnesses can be represented with logarithmic
input-precision by using smoothed analysis on their real RAM verification
algorithms. This shows in a formal way that the boundary between NP and ER
(formed by inputs whose solution witness needs high input-precision) consists
of contrived inputs. We apply our framework to well-studied ER-complete
recognition problems which exhibit the exponential-bit phenomenon, such as the
recognition of realizable order types or the Steinitz problem in fixed
dimension.
Comment: 31 pages, 11 figures, FOCS 2020, SICOMP 202
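As an illustration not taken from the paper: an ETR instance asks whether an existentially quantified sentence over the reals, built from polynomial equalities and inequalities, is true, for example

```latex
\exists x \, \exists y \;\bigl( x^2 + y^2 = 1 \;\wedge\; x \cdot y > \tfrac{1}{3} \bigr)
```

This particular sentence is true, witnessed by $x = y = 1/\sqrt{2}$; deciding the truth of such sentences is the canonical ER-complete problem ETR, in the same way that SAT is the canonical NP-complete problem.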
Exact computation in oriented projective geometry and the handling of degeneracies
Advisor: Pedro J. de Rezende. Master's thesis - Universidade Estadual de Campinas, Instituto de Computação. Portuguese abstract: not provided.
Abstract: One of the greatest challenges in computational geometry today is to build the bridge between theory and practice, which requires tools for the robust implementation of the algorithms that populate the literature. We attempt here to contribute a step in this direction. In the first part of this thesis, we present an extension of the technique of symbolic perturbation to oriented projective geometry. We describe the implementation of a library based on this technique, which consists of geometric primitives sufficient for programming a large class of robust geometric algorithms using exact arithmetic. In the second part, we describe the design of GeoPrO: a distributed programming environment for geometric visualization. We present an overview of its classes, which allow for easy extensibility and portability. Due to a client-server architecture, comprised of a kernel with multiple contexts, applications and visualizers, GeoPrO supports distributed execution over a heterogeneous network. Visualizers are currently available for the planar and spherical models of oriented projective geometry, running on Silicon Graphics workstations, while another is being implemented in Java for multi-platform support.
Master's degree in Computer Science
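The thesis's library is not reproduced here; the sketch below only illustrates the general idea of symbolic perturbation (in the style of Simulation of Simplicity, and in affine rather than oriented projective geometry): when an exact predicate evaluates to zero, the sign of a lower-order coefficient of a symbolic perturbation polynomial decides instead, so degenerate inputs are resolved consistently and never report "zero". The tie-breaking coefficients used below are illustrative placeholders; a real implementation derives them from minors of the predicate's determinant in a fixed index order.

```python
from fractions import Fraction

def orient2d_perturbed(a, b, c):
    """Sign of the 2D orientation predicate with a sketch of symbolic
    perturbation: if the exact determinant vanishes (collinear points),
    the sign of the first nonzero perturbation coefficient decides.
    Simplified illustration, not the thesis's library."""
    ax, ay = map(Fraction, a)
    bx, by = map(Fraction, b)
    cx, cy = map(Fraction, c)
    det = (bx - ax) * (cy - ay) - (by - ay) * (cx - ax)
    if det != 0:
        return 1 if det > 0 else -1
    # Degenerate case: inspect coefficients of successive powers of the
    # symbolic epsilon (placeholder coefficients for illustration).
    for term in (cx - bx, by - cy, ax - cx):
        if term != 0:
            return 1 if term > 0 else -1
    return 1  # fully coincident points: the perturbation's constant sign
```

The payoff is that client algorithms never see a degenerate answer, so the case analysis in the geometric algorithm itself stays simple.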
Incompressible Lagrangian fluid flow with thermal coupling
This monograph presents a method for the solution of incompressible viscous fluid flow with heat transfer and solidification, using a fully Lagrangian description of the motion. The originality of this method lies in assembling various concepts and techniques which arise naturally from the Lagrangian formulation.
High-Quality Simplification and Repair of Polygonal Models
Because of the rapid evolution of 3D acquisition and modelling methods, highly complex and detailed polygonal models with constantly increasing polygon count are used as three-dimensional geometric representations of objects in computer graphics and engineering applications. The fact that this particular representation is arguably the most widespread one is due to its simplicity, flexibility and rendering support by 3D graphics hardware. Polygonal models are used for rendering of objects in a broad range of disciplines like medical imaging, scientific visualization, computer aided design, the film industry, etc. The handling of huge scenes composed of these high-resolution models rapidly approaches the computational capabilities of any graphics accelerator. In order to cope with this complexity and to build level-of-detail representations, concentrated efforts have been dedicated in recent years to the development of new mesh simplification methods that produce high-quality approximations of complex models by reducing the number of polygons used in the surface while keeping the overall shape, volume and boundaries preserved as much as possible. Many well-established methods and applications require "well-behaved" models as input. Degenerate or incorrectly oriented faces, T-joints, cracks and holes are just a few of the possible degeneracies that are often disallowed by various algorithms. Unfortunately, it is all too common to find polygonal models that contain such artefacts due to incorrect modelling or acquisition. Applications that may require "clean" models include finite element analysis, surface smoothing, model simplification and stereolithography. Mesh repair is the task of removing artefacts from a polygonal model in order to produce an output model that is suitable for further processing by methods and applications that have certain quality requirements on their input.
This thesis introduces a set of new algorithms that address several particular aspects of mesh repair and mesh simplification. One of the two mesh repair methods deals with inconsistent normal orientation, while the other removes inconsistencies of vertex connectivity. Of the three mesh simplification approaches presented here, the first attempts to simplify polygonal models with the highest possible quality, the second applies the developed technique to out-of-core simplification, and the third prevents self-intersections of the model surface that can occur during mesh simplification.
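The normal-orientation repair task mentioned above is commonly solved by propagating a reference orientation across the face-adjacency graph; the following is a minimal sketch of that standard approach (not the thesis's specific algorithm), assuming an orientable manifold mesh given as a list of vertex-index triangles:

```python
from collections import defaultdict, deque

def orient_consistently(triangles):
    """Flip triangles so every shared edge is traversed in opposite
    directions by its two incident triangles (the usual consistency
    criterion for orientable 2-manifold meshes)."""
    # Map each undirected edge to the triangles using it.
    edge_tris = defaultdict(list)
    for t, (a, b, c) in enumerate(triangles):
        for u, v in ((a, b), (b, c), (c, a)):
            edge_tris[frozenset((u, v))].append(t)

    tris = [tuple(t) for t in triangles]
    seen = {0}
    queue = deque([0])                 # seed: keep triangle 0 as-is
    while queue:
        t = queue.popleft()
        a, b, c = tris[t]
        for u, v in ((a, b), (b, c), (c, a)):
            for n in edge_tris[frozenset((u, v))]:
                if n in seen or n == t:
                    continue
                # The neighbour is consistent iff it traverses edge
                # (u, v) in the opposite direction, i.e. as (v, u).
                na, nb, nc = tris[n]
                if (u, v) in ((na, nb), (nb, nc), (nc, na)):
                    tris[n] = (na, nc, nb)   # same direction: flip it
                seen.add(n)
                queue.append(n)
    return tris
```

On a multi-component or non-manifold mesh the seeding and adjacency handling become more involved, which is part of what makes robust mesh repair a research topic in its own right.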
On the behavior of spherical and non-spherical grain assemblies, its modeling and numerical simulation
This thesis deals with the numerical modeling and simulation of granular media with large populations of non-spherical particles. Granular media are highly pervasive in nature and play an important role in technology. They are present in fields as diverse as civil engineering, food processing, and the pharmaceutical industry. For the physicist, they raise many challenging questions. They can behave like solids, as well as liquids or even gases, and at times as none of these. Indeed, phenomena like granular segregation, arching effects or pattern formation are specific to granular media, hence they are often considered a fourth state of matter. Around the turn of the century, the increasing availability of large computers made it possible to start investigating granular matter by using numerical modeling and simulation. Most numerical models were originally designed to handle spherical particles. However, making it possible to process non-spherical particles has turned out to be of utmost importance. Indeed, it is such grains that one finds in nature, and many important phenomena cannot be reproduced using spherical grains alone. This is the motivation for the research of the present thesis. Subjects in several fields are involved. The geometrical modeling of the particles and the simulation methods require discrete geometry results. A wide range of particle shapes is proposed. Those shapes, spheropolyhedra, are Minkowski sums of polyhedra and spheres and can be seen as smoothed polyhedra. Next, a contact detection algorithm is proposed that uses triangulations. This algorithm is a generalization of a method already available for spheres. It turns out that this algorithm relies on a positive answer to an open problem of computational geometry, the connectivity of the flip-graph of all triangulations. In this thesis it has been shown that the flip-graph of regular triangulations that share the same vertex set is connected. The modeling of contacts requires physics.
Again, the contact model we propose is based on the existing molecular dynamics model for contacts between spheres. Those models turn out to be easily generalizable to smoothed polyhedra, which further motivates this choice of particle shape. The implementation of those methods requires computer science. An implementation of these simulation methods for granular media composed of non-spherical particles was carried out based on the existing C++ code by J.-A. Ferrez, which originally handled spherical particles. The resulting simulation code was used to gain insight into the behavior of granular matter. Three experiments are presented that have been numerically carried out with our models. The first of these experiments deals with the flowability (i.e. the ability to flow) of powders. The flowability of bidisperse bead assemblies was found to depend only on their mass-average diameters. Next, an experiment on vibrating rods inside a cylindrical container shows that under appropriate conditions they will order vertically. Finally, experiments investigating the shape segregation of spheres and spherotetrahedra are performed. Unexpectedly, they are found to mix.
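The sphere-sphere molecular dynamics contact model that the thesis builds on can be sketched as follows (a hedged illustration with a placeholder stiffness, not the thesis's calibrated model): a linear spring force proportional to the overlap acts along the line of centres. For spheropolyhedra the same formula applies with the centre distance replaced by the distance between the two polyhedral skeletons, which is what makes this particle shape attractive.

```python
import math

def sphere_contact_force(p1, r1, p2, r2, k=1.0e4):
    """Linear-spring normal contact force on sphere 1 from sphere 2.
    For spheropolyhedra, d would be the distance between the two
    polyhedral skeletons instead of the centre distance.
    The stiffness k is an illustrative placeholder value."""
    d = math.dist(p1, p2)
    overlap = (r1 + r2) - d          # > 0 when the spheres interpenetrate
    if overlap <= 0 or d == 0:
        return (0.0, 0.0, 0.0)       # no contact (or degenerate coincidence)
    n = tuple((a - b) / d for a, b in zip(p1, p2))   # unit normal toward sphere 1
    return tuple(k * overlap * ni for ni in n)
```

Real discrete-element codes add a dashpot (velocity-dependent damping) and tangential friction terms on top of this normal spring.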
Mesh generation for 2.5-dimensional domains using constrained Delaunay triangulation
Dissertation (master's) - Universidade Federal de Santa Catarina, Centro Tecnológico, Graduate Program in Mechanical Engineering, Florianópolis, 2001.
Generating a mesh consists of discretizing a geometric domain into small elements of simplified geometric shape, such as triangles and/or quadrilaterals in two dimensions, and tetrahedra and/or hexahedra in three dimensions. Meshes are used in several areas, such as geology, geography and cartography, where they provide a compact representation of terrain data; in computer graphics, where the vast majority of objects are mapped into images; and in applied mathematics and scientific computing, where they are essential to the numerical solution of partial differential equations resulting from the modeling of physical problems. This work concentrates on the development of a mesh generator aimed at the latter application, although the meshes produced can also be employed in the other areas. More specifically, the interest is in the generation of unstructured triangular meshes, through the Delaunay triangulation process, for applications in the solution of heat-transfer problems on planar surfaces in three-dimensional space. Due to the use of the CVFEM (Control Volume based Finite Element Method) for the numerical modeling, a parallel between Delaunay triangulations and Voronoi diagrams is drawn, presenting their properties and applications. The edge-flip, divide-and-conquer and incremental methods for generating Delaunay triangulations of planar surfaces are studied. The data structure used is the triangle-based one, and the refinement method used to guarantee mesh quality is based on Ruppert's algorithm. Geometric constraints are handled so that the generated mesh respects the intersections and connections among the various surfaces.
The fundamental contribution of the present work lies in the extension of two-dimensional Delaunay triangulation and mesh refinement methods to composite 2.5-dimensional domains, that is, multiple interconnected planes in three-dimensional space handled simultaneously. Optimization of internal angles and of element size and shape through user-specified parameters gives the developed generator versatility and generality.
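Both the edge-flip construction and Ruppert-style refinement discussed above hinge on the Delaunay in-circle test. A minimal exact sketch (assuming points given as (x, y) tuples; this is the textbook predicate, not the dissertation's implementation) uses the standard determinant of the points lifted onto the paraboloid:

```python
from fractions import Fraction

def in_circle(a, b, c, d):
    """Exact Delaunay in-circle test: +1 iff d lies inside the
    circumcircle of counter-clockwise triangle (a, b, c), -1 if
    outside, 0 if cocircular. Computed as the 3x3 determinant of
    the points, translated by d and lifted onto the paraboloid."""
    rows = []
    for p in (a, b, c):
        dx = Fraction(p[0]) - Fraction(d[0])
        dy = Fraction(p[1]) - Fraction(d[1])
        rows.append((dx, dy, dx * dx + dy * dy))
    (m00, m01, m02), (m10, m11, m12), (m20, m21, m22) = rows
    det = (m00 * (m11 * m22 - m12 * m21)
           - m01 * (m10 * m22 - m12 * m20)
           + m02 * (m10 * m21 - m11 * m20))
    return (det > 0) - (det < 0)
```

An edge of a triangulation is "Delaunay" exactly when this test reports that the opposite vertex of each neighbouring triangle lies outside (or on) the circumcircle; edge flipping repairs any edge for which it returns +1.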