177 research outputs found

    The diffuse Nitsche method: Dirichlet constraints on phase-field boundaries

    Get PDF
    We explore diffuse formulations of Nitsche's method for consistently imposing Dirichlet boundary conditions on phase-field approximations of sharp domains. Leveraging the properties of the phase-field gradient, we derive the variational formulation of the diffuse Nitsche method by transferring all integrals associated with the Dirichlet boundary from the geometrically sharp surface format of the standard Nitsche method to a geometrically diffuse volumetric format. We also derive conditions for the stability of the discrete system and formulate a diffuse local eigenvalue problem, from which the stabilization parameter can be estimated automatically in each element. We advertise metastable phase-field solutions of the Allen-Cahn problem for transferring complex imaging data into diffuse geometric models. In particular, we discuss the use of mixed meshes, that is, an adaptively refined mesh for the phase-field in the diffuse boundary region and a uniform mesh for the representation of the physics-based solution fields. We illustrate the accuracy and convergence properties of the diffuse Nitsche method and demonstrate its advantages over diffuse penalty-type methods. In the context of imaging-based analysis, we show that the diffuse Nitsche method achieves the same accuracy as the standard Nitsche method with sharp surfaces if the inherent length scales, i.e., the interface width of the phase-field, the voxel spacing, and the mesh size, are properly related. We demonstrate the flexibility of the new method by analyzing stresses in a human vertebral body.
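
    The transfer admits a compact sketch. For a model Poisson problem with Dirichlet data g on the boundary Γ (our notation, not taken verbatim from the paper), the standard symmetric Nitsche form carries all Dirichlet terms as surface integrals:

        a(u, v) = \int_\Omega \nabla u \cdot \nabla v \, d\Omega
                  - \int_\Gamma (\partial_n u)\, v \, ds
                  - \int_\Gamma (\partial_n v)\, (u - g) \, ds
                  + \frac{\beta}{h} \int_\Gamma (u - g)\, v \, ds

    The diffuse variant replaces each surface integral by a volume integral weighted with the phase-field gradient magnitude, with the normal recovered from the same gradient:

        \int_\Gamma f \, ds \approx \int_\Omega f \, |\nabla \phi| \, d\Omega,
        \qquad n \approx \nabla \phi / |\nabla \phi|

    so that every Dirichlet term can be assembled on the volumetric background mesh, and β is then estimated elementwise from the diffuse local eigenvalue problem mentioned above.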

    Structure in the 3D Galaxy Distribution: I. Methods and Example Results

    Full text link
    Three methods for detecting and characterizing structure in point data, such as that generated by redshift surveys, are described: classification using self-organizing maps, segmentation using Bayesian blocks, and density estimation using adaptive kernels. The first two methods are new, and allow detection and characterization of structures of arbitrary shape and at a wide range of spatial scales. These methods should elucidate not only clusters, but also the more distributed, wide-ranging filaments and sheets, and further allow the possibility of detecting and characterizing an even broader class of shapes. The methods are demonstrated and compared in application to three data sets: a carefully selected volume-limited sample from the Sloan Digital Sky Survey redshift data, a similarly selected sample from the Millennium Simulation, and a set of points independently drawn from a uniform probability distribution -- a so-called Poisson distribution. We demonstrate a few of the many ways in which these methods elucidate large-scale structure in the distribution of galaxies in the nearby Universe.
    Comment: Re-posted after referee corrections along with a partially re-written introduction. 80 pages, 31 figures, ApJ in press. For full-sized figures please download from: http://astrophysics.arc.nasa.gov/~mway/lss1.pd
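
    Of the three methods, the adaptive-kernel density estimate is the simplest to sketch. The following is a minimal illustration of the general technique only (a two-pass Breiman-style estimator in numpy; the function names and the inverse-square-root bandwidth law are our assumptions, not the authors' pipeline):

        import numpy as np

        def gaussian_kde(points, queries, h):
            """Fixed-bandwidth Gaussian kernel density estimate in d dimensions."""
            d = points.shape[1]
            d2 = ((queries[:, None, :] - points[None, :, :]) ** 2).sum(-1)
            norm = (2 * np.pi * h ** 2) ** (d / 2)
            return np.exp(-d2 / (2 * h ** 2)).sum(1) / (len(points) * norm)

        def adaptive_kde(points, queries, h0=0.3):
            """Two-pass adaptive KDE: a fixed-bandwidth pilot estimate sets a
            per-point bandwidth that shrinks in clusters and grows in voids."""
            pilot = gaussian_kde(points, points, h0)
            g = np.exp(np.log(pilot).mean())          # geometric-mean normalization
            h = h0 * np.sqrt(g / pilot)               # one bandwidth per data point
            d = points.shape[1]
            d2 = ((queries[:, None, :] - points[None, :, :]) ** 2).sum(-1)
            norm = (2 * np.pi * h ** 2) ** (d / 2)    # broadcasts over data points
            return (np.exp(-d2 / (2 * h ** 2)) / norm).sum(1) / len(points)

        # Toy data: two 3D clusters plus a uniform ("Poisson") background.
        rng = np.random.default_rng(0)
        pts = np.vstack([rng.normal(0, 0.2, (200, 3)),
                         rng.normal(2, 0.2, (200, 3)),
                         rng.uniform(-1, 3, (100, 3))])
        print(adaptive_kde(pts, np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 1.0]])))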

    Methods for Automated Creation and Efficient Visualisation of Large-Scale Terrains based on Real Height-Map Data

    Get PDF
    Real-time rendering of large-scale terrains is a difficult problem and remains an active field of research. The massive scale of these landscapes, where the ratio between the size of the terrain and its resolution spans multiple orders of magnitude, requires an efficient level-of-detail strategy. It is crucial that the geometry, as well as the terrain data, are represented seamlessly at varying distances while maintaining a constant visual quality. This thesis investigates common techniques and previous solutions to problems associated with the rendering of height-field terrains and discusses their benefits and drawbacks. Subsequently, two solutions to the stated problems are presented, which build and expand upon state-of-the-art rendering methods. A seamless and efficient mesh representation is achieved by the novel Uniform Distance-Dependent Level of Detail (UDLOD) triangulation method. This fully GPU-based algorithm subdivides a quadtree covering the terrain into small tiles, which can be culled in parallel and are morphed seamlessly in the vertex shader, resulting in a dense, temporally consistent triangulated mesh. The proposed Chunked Clipmap combines the strengths of both quadtrees and clipmaps to enable efficient out-of-core paging of terrain data. This data structure allows for constant-time view-dependent access, degrades gracefully if data is unavailable, and supports trilinear and anisotropic filtering. Together, these otherwise independent techniques enable the rendering of large-scale real-world terrains, which is demonstrated on a dataset encompassing the entire Free State of Saxony at a resolution of one meter, in real time.
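
    The distance-dependent quadtree subdivision at the heart of UDLOD can be previewed on the CPU. A minimal sketch under our own assumptions (the split factor k, the Tile record, and the recursion are illustrative; the thesis performs this step in parallel on the GPU and morphs tile borders in the vertex shader):

        from dataclasses import dataclass

        @dataclass
        class Tile:
            x: float       # tile origin in world units
            y: float
            size: float    # edge length; halves with each subdivision level

        def subdivide(tile, camera, k=2.0, min_size=1.0, out=None):
            """Collect the leaf tiles of a distance-dependent quadtree: a tile
            is split while the camera is closer than k times its edge length."""
            if out is None:
                out = []
            cx, cy = tile.x + tile.size / 2, tile.y + tile.size / 2
            dist = ((cx - camera[0]) ** 2 + (cy - camera[1]) ** 2) ** 0.5
            if dist < k * tile.size and tile.size > min_size:
                half = tile.size / 2
                for dx in (0, half):
                    for dy in (0, half):
                        subdivide(Tile(tile.x + dx, tile.y + dy, half),
                                  camera, k, min_size, out)
            else:
                out.append(tile)   # leaf: drawn as one uniformly tessellated tile
            return out

        tiles = subdivide(Tile(0, 0, 4096), camera=(1000.0, 1200.0))
        print(len(tiles), "tiles; finest edge length:", min(t.size for t in tiles))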

    Generative Mesh Modeling

    Get PDF
    Generative Modeling is an alternative approach to the description of three-dimensional shape. The basic idea is to represent a model not, as usual, by an agglomeration of geometric primitives (triangles, point clouds, NURBS patches), but by functions. The paradigm change from objects to operations allows for a procedural representation of procedural shapes, such as most man-made objects. Instead of storing only the result of a 3D construction, the construction process itself is stored in the model file. The generative approach opens up truly new perspectives in many ways, among others for 3D knowledge management. It permits, for instance, resorting to a repository of already solved modeling problems in order to re-use this knowledge in different, slightly varied situations. The construction knowledge can be collected in digital libraries containing domain-specific parametric modeling tools. A concrete realization of this approach is a new general description language for 3D models, the "Generative Modeling Language" GML. As a Turing-complete "shape programming language" it is a true generalization of existing primitive-based 3D model formats. Together with its runtime engine, the GML permits
    - storing highly complex 3D models in a compact form,
    - evaluating the description within fractions of a second,
    - adaptively tessellating and interactively displaying the model,
    - and even changing the model's high-level parameters at runtime.
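
    The objects-to-operations shift can be mimicked in any language. A hedged Python analogy (GML itself is a PostScript-like stack language; the box and shelf functions below are our illustration, not GML code): the model file stores a parameterized construction, and changing a parameter simply re-runs it.

        def box(w, h, d):
            """Generative description of a box: stores the *operations*, not the
            triangles. Returns (vertices, quad faces); re-evaluating with new
            parameters regenerates the geometry."""
            v = [(x * w, y * h, z * d) for x in (0, 1) for y in (0, 1) for z in (0, 1)]
            f = [(0, 1, 3, 2), (4, 6, 7, 5), (0, 4, 5, 1),
                 (2, 3, 7, 6), (0, 2, 6, 4), (1, 5, 7, 3)]
            return v, f

        def shelf(width, n_boards, thickness=0.04):
            """Higher-level, re-usable construction built from the box operation:
            exactly the kind of domain-specific modeling tool a digital library
            of solved construction problems would collect."""
            boards = []
            for i in range(n_boards):
                v, f = box(width, thickness, 0.3)
                boards.append(([(x, y + i * 0.4, z) for x, y, z in v], f))
            return boards

        model = shelf(width=1.2, n_boards=4)   # evaluate: parameters -> geometry
        model = shelf(width=0.8, n_boards=6)   # change high-level parameters at runtime
        print(len(model), "boards")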

    GPU data structures for graphics and vision

    Get PDF
    Graphics hardware has in recent years become increasingly programmable, and its programming APIs use the stream-processor model to expose massive parallelization to the programmer. Unfortunately, the inherent restrictions of the stream-processor model, which the GPU relies on to maintain high performance, often pose a problem when porting CPU algorithms for video and volume processing to graphics hardware. Serial data dependencies which accelerate CPU processing are counterproductive for the data-parallel GPU. This thesis demonstrates new ways of tackling well-known problems of large-scale video/volume analysis. In some instances, we enable processing on the restricted hardware model by re-introducing algorithms from early computer-graphics research. On other occasions, we use newly discovered hierarchical data structures to circumvent the random-access-read/fixed-write restriction that had previously kept sophisticated analysis algorithms from running solely on graphics hardware. For 3D processing, we apply known game-graphics concepts such as mip-maps, projective texturing, and dependent texture lookups to show how video/volume processing can benefit algorithmically from being implemented in a graphics API. The novel GPU data structures provide drastically increased processing speed and lift processing-heavy operations to real-time performance levels, paving the way for new and interactive vision/graphics applications.
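
    The hierarchical workaround can be previewed in a few lines: when scattered writes are unavailable, a global reduction is expressed as a sequence of pure gather passes over a mip-map-like pyramid. A numpy sketch of that access pattern (our simplification; on the GPU each level would be one render pass in a fragment shader):

        import numpy as np

        def build_pyramid(image):
            """Mip-map-style reduction pyramid: each level gathers a 2x2 block of
            the level below. Every output texel reads from fixed input locations,
            which fits the GPU's gather-only (no scattered write) model."""
            levels = [image.astype(np.float32)]
            while levels[-1].shape[0] > 1:
                a = levels[-1]
                levels.append(0.25 * (a[0::2, 0::2] + a[1::2, 0::2] +
                                      a[0::2, 1::2] + a[1::2, 1::2]))
            return levels

        img = np.random.default_rng(1).random((256, 256))
        pyr = build_pyramid(img)
        # The 1x1 top level equals the mean of the whole image: a global
        # reduction computed purely with gathers, one pass per level.
        print(pyr[-1][0, 0], img.mean())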

    A Flexible Kernel for Adaptive Mesh Refinement on GPU

    Get PDF
    We present a flexible GPU kernel for adaptive on-the-fly refinement of meshes with arbitrary topology. By simply reserving a small amount of GPU memory to store a set of adaptive refinement patterns, on-the-fly refinement is performed by the GPU, without any preprocessing or additional topology data structure. The level of adaptive refinement can be controlled by specifying a per-vertex depth tag, in addition to the usual position, normal, color, and texture coordinates. This depth tag is used by the kernel to instantiate the correct refinement pattern. Finally, the refined patch produced for each triangle can be displaced by the vertex shader, using any kind of geometric refinement, such as Bezier patch smoothing, scalar-valued displacement, procedural geometry synthesis, or subdivision surfaces. This refinement engine requires neither multi-pass rendering nor fragment processing nor any special preprocessing of the input mesh structure. It can be implemented on any GPU with vertex-shading capabilities.
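
    The pattern-instancing step is easy to sketch on the CPU. A hedged Python version under our own assumptions (uniform barycentric patterns only; the actual kernel selects and stitches patterns from the depth tags to avoid cracks between neighboring triangles):

        def refinement_pattern(depth):
            """Precomputed pattern: barycentric coordinates of a uniformly
            subdivided triangle with 2**depth segments per edge."""
            n = 2 ** depth
            return [(i / n, j / n, (n - i - j) / n)
                    for i in range(n + 1) for j in range(n + 1 - i)]

        def refine(tri, depth):
            """Instance one pattern over one triangle: each generated vertex is a
            barycentric combination of the three corners and can afterwards be
            displaced (Bezier smoothing, scalar displacement, ...)."""
            pattern = refinement_pattern(depth)   # looked up per depth tag, not rebuilt
            return [tuple(a * tri[0][k] + b * tri[1][k] + c * tri[2][k]
                          for k in range(3))
                    for a, b, c in pattern]

        tri = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
        print(len(refine(tri, depth=3)), "vertices from one input triangle")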

    Pointshop 3D: An interactive system for point-based surface editing

    Get PDF
    We present a system for interactive shape and appearance editing of 3D point-sampled geometry. By generalizing conventional 2D pixel editors, our system supports a great variety of interaction techniques to alter the shape and appearance of 3D point models, including cleaning, texturing, sculpting, carving, filtering, and resampling. One key ingredient of our framework is a novel concept for interactive point-cloud parameterization that allows for distortion-minimal and aliasing-free texture mapping. A second is a dynamic, adaptive resampling method which builds upon a continuous reconstruction of the model surface and its attributes. These techniques allow us to transfer the full functionality of 2D image-editing operations to the irregular 3D point setting. Our system reads, processes, and writes point-sampled models without intermediate tessellation. It is intended to complement existing low-cost 3D scanners and point-rendering pipelines for efficient 3D content creation.
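
    The resampling ingredient rests on a continuous reconstruction of surface attributes from irregular samples. A minimal sketch of one standard choice (Gaussian-weighted, order-zero moving least squares, i.e. Shepard interpolation; Pointshop 3D's actual reconstruction and parameterization operators are richer):

        import numpy as np

        def resample_attributes(points, attrs, new_positions, h=0.1):
            """Evaluate a continuous reconstruction of per-point attributes
            (color, normals, displacement, ...) at arbitrary new positions."""
            d2 = ((new_positions[:, None, :] - points[None, :, :]) ** 2).sum(-1)
            w = np.exp(-d2 / (2 * h ** 2))        # Gaussian weights, (new, old)
            w /= w.sum(axis=1, keepdims=True)     # partition of unity
            return w @ attrs                      # blended attributes per sample

        rng = np.random.default_rng(2)
        pts = rng.random((500, 3))                # irregular point-sampled surface
        col = rng.random((500, 3))                # per-point RGB
        new = rng.random((64, 3))                 # resampled positions
        print(resample_attributes(pts, col, new).shape)   # (64, 3)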