Constrained Texture Mapping and Foldover-Free Condition
Texture mapping has been widely used in image
processing and graphics to enhance the realism of CG scenes.
However, to perfectly match the feature points of a 3D model
with the corresponding pixels in texture images, the
parameterisation which maps a 3D mesh to the texture space
must satisfy positional constraints. Despite numerous
research efforts, the construction of a mathematically robust,
foldover-free parameterisation subject to internal constraints
remains an open problem. In this paper, we address this
challenge by developing a two-step parameterisation method.
First, we produce an initial parameterisation with a method
traditionally used to solve structural engineering problems,
called the bar-network. We then derive a mathematical
foldover-free condition, which is incorporated into a Radial
Basis Function based scheme. The method therefore guarantees
that the resulting parameterisation meets the hard
constraints without foldovers.
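The first step of the two-step scheme can be pictured with a small sketch. This is a minimal illustration only, assuming unit-stiffness bars and a boundary fixed to a convex shape (the paper's actual bar-network stiffness model may differ): each free vertex is iteratively moved to the equilibrium point of its bars, here the average of its neighbours.

```python
# Minimal bar-network sketch: relax free vertices toward the
# equilibrium of their bars (uniform unit stiffness assumed).
import numpy as np

def bar_network_parameterise(uv, edges, fixed, iters=500):
    """Iteratively move each free vertex to the average of its
    neighbours; boundary vertices in `fixed` stay pinned."""
    uv = uv.astype(float).copy()
    nbrs = {i: [] for i in range(len(uv))}
    for a, b in edges:
        nbrs[a].append(b)
        nbrs[b].append(a)
    free = [i for i in range(len(uv)) if i not in fixed]
    for _ in range(iters):
        for i in free:
            uv[i] = uv[nbrs[i]].mean(axis=0)
    return uv

# Square boundary (fixed) with one interior vertex started off-centre.
uv0 = np.array([[0, 0], [1, 0], [1, 1], [0, 1], [0.9, 0.1]])
edges = [(0, 4), (1, 4), (2, 4), (3, 4), (0, 1), (1, 2), (2, 3), (3, 0)]
uv = bar_network_parameterise(uv0, edges, fixed={0, 1, 2, 3})
print(uv[4])  # interior vertex relaxes to the barycentre (0.5, 0.5)
```

With uniform weights this reduces to a Tutte-style barycentric embedding, which is why a convex fixed boundary yields a valid starting parameterisation for the second, RBF-based step.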
An RBF-based reparameterization method for constrained texture mapping
Texture mapping has long been used in computer graphics to
enhance the realism of virtual scenes. However, to match the
feature points of a 3D model with the corresponding pixels in
a texture image, the surface parameterization must satisfy
specific positional constraints. Despite numerous research
efforts, the construction of a mathematically robust,
foldover-free parameterization subject to positional
constraints continues to be a challenge. In the present
paper, this foldover problem is addressed by developing a
radial basis function (RBF) based reparameterization. Given
an initial 2D embedding of a 3D surface, the proposed method
reparameterizes the embedding into a foldover-free 2D mesh
that satisfies a set of user-specified constraint points. In
addition, the approach is mesh-free, so smooth texture
mapping results can be generated without extra smoothing
optimization.
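The core mechanism of an RBF reparameterization can be sketched briefly: interpolate the displacements that carry each constraint point to its target pixel, then evaluate the resulting smooth warp at every vertex of the 2D embedding. The Gaussian kernel and its width below are illustrative choices, not the paper's, and the paper's foldover-free condition on the warp is omitted.

```python
# Sketch of RBF-based reparameterization: a smooth displacement
# field interpolating the constraint displacements exactly.
import numpy as np

def rbf_warp(constraints_src, constraints_dst, points, eps=1.0):
    src = np.asarray(constraints_src, float)
    dst = np.asarray(constraints_dst, float)
    pts = np.asarray(points, float)
    def kernel(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-eps * d2)  # Gaussian RBF (illustrative choice)
    # Solve for weights reproducing the constraint displacements.
    w = np.linalg.solve(kernel(src, src), dst - src)
    # Evaluate the displacement field at the query points.
    return pts + kernel(pts, src) @ w

src = [[0.2, 0.2], [0.8, 0.8]]
dst = [[0.25, 0.2], [0.8, 0.75]]
warped = rbf_warp(src, dst, [[0.2, 0.2], [0.5, 0.5]])
print(warped[0])  # a constraint point lands exactly on its target
```

Because the field interpolates the displacements, every constraint point maps exactly to its target, while nearby vertices are carried along smoothly; this is what makes the approach mesh-free.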
Implicit Decals: Interactive Editing of Repetitive Patterns on Surfaces
Texture mapping is an essential component for creating 3D models and is widely used in both the game and the movie industries. Creating texture maps has always been a complex task and existing methods carefully balance flexibility with ease of use. One difficulty in using texturing is the repeated placement of individual textures over larger areas. In this paper we propose a method which uses decals to place images onto a model. Our method allows the decals to compete for space and to deform as they are being pushed by other decals. A spherical field function is used to determine the position and the size of each decal and the deformation applied to fit the decals. The decals may span multiple objects with heterogeneous representations. Our method does not require an explicit parameterization of the model. As such, varieties of patterns including repeated patterns like rocks, tiles, and scales can be mapped. We have implemented the method using the GPU, where placement, size, and orientation of thousands of decals are manipulated in real time.
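A toy sketch of the "competing decals" idea: each decal contributes a radially decaying spherical field, and every surface point is claimed by the decal whose field is strongest there. The linear falloff below is an assumption for illustration; the paper's actual field function and deformation model are more elaborate.

```python
# Toy decal competition via a spherical field function:
# strongest field at a point wins that point.
import math

def spherical_field(center, radius, p):
    d = math.dist(center, p)
    return max(0.0, 1.0 - d / radius)  # 1 at the centre, 0 beyond the radius

def claim(decals, p):
    """Return the index of the winning decal at point p, or None."""
    best, best_f = None, 0.0
    for i, (c, r) in enumerate(decals):
        f = spherical_field(c, r, p)
        if f > best_f:
            best, best_f = i, f
    return best

decals = [((0.0, 0.0), 1.0), ((1.0, 0.0), 1.0)]
print(claim(decals, (0.2, 0.0)))  # nearer the first decal -> 0
print(claim(decals, (0.9, 0.0)))  # nearer the second decal -> 1
```

Because the decision depends only on a field evaluated at surface points, no explicit parameterization of the model is needed, which matches the paper's stated advantage.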
Mapping Textures on 3D Terrains: A Hybrid Cellular Automata Approach
It is a time-consuming task to generate textures for large 3D terrain surfaces in
computer games, flight simulations and computer animations. This work explores the
use of cellular automata in the automatic generation of textures for large surfaces. I
propose a method for generating textures for 3D terrains using various approaches - in
particular, a hybrid approach that integrates the concepts of cellular automata,
probabilistic distribution according to height and Wang tiles. I also look at other hybrid
combinations using cellular automata to generate textures for 3D terrains. Work for this
thesis includes development of a tool called "Texullar" that allows users to generate
textures for 3D terrain surfaces by configuring various input parameters and choosing
cellular automata rules.
I evaluate the effectiveness of the approach by conducting a user survey to
compare the outputs obtained using different inputs and analyzing the responses. The
findings show that incorporating concepts of cellular automata in texture generation for
terrains can lead to better results than random generation of textures. The analysis also
reveals that incorporating height information along with cellular automata yields better
results than using cellular automata alone. Results from the user survey indicate that a hybrid approach incorporating height information along with cellular automata and
Wang tiles is better than incorporating height information along with cellular automata
in the context of texture generation for 3D meshes.
The survey did not yield enough evidence to determine whether the use of Wang
tiles in combination with cellular automata and probabilistic distribution according to
height results in a higher mean score than the use of only cellular automata and
probabilistic distribution. However, this outcome could have been influenced by the fact
that the survey respondents did not have information about the parameters used to
generate the final image, such as the probabilistic distributions, the population
configurations, and the rules of the cellular automata.
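The hybrid idea can be sketched in a few lines: seed terrain texture classes probabilistically from height, then smooth the class map with a cellular-automaton step. The majority-vote rule and the two-class grass/rock labelling below are assumptions for illustration; the thesis's CA rules, Wang-tile stage, and parameters are user-configurable in "Texullar".

```python
# Sketch: height-biased probabilistic seeding followed by one
# majority-vote cellular-automaton smoothing step.
import random

def seed_by_height(heights):
    """0 = grass, 1 = rock; higher cells are more likely to be rock."""
    return [[1 if random.random() < h else 0 for h in row] for row in heights]

def ca_step(grid):
    """Each cell adopts the majority class of its 3x3 neighbourhood."""
    rows, cols = len(grid), len(grid[0])
    out = [row[:] for row in grid]
    for i in range(rows):
        for j in range(cols):
            nb = [grid[i + di][j + dj]
                  for di in (-1, 0, 1) for dj in (-1, 0, 1)
                  if 0 <= i + di < rows and 0 <= j + dj < cols]
            out[i][j] = 1 if sum(nb) * 2 > len(nb) else 0
    return out

random.seed(0)
heights = [[0.1, 0.1, 0.9], [0.1, 0.2, 0.9], [0.1, 0.9, 0.9]]
grid = ca_step(seed_by_height(heights))
```

The CA step removes the isolated speckle that pure random seeding produces, which is the qualitative effect the survey findings attribute to combining cellular automata with height information.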
Artist-Driven Fracturing of Polyhedral Surface Meshes
This paper presents a robust and artist-driven method for fracturing a surface polyhedral mesh via fracture maps. A fracture map is an undirected simple graph with nodes representing positions in UV-space and edges representing fracture lines along the surface of a mesh. Fracture maps allow artists to concisely and rapidly define, edit, and apply fracture patterns onto the surface of their mesh.
The method projects a fracture map onto a polyhedral surface and splits its triangles accordingly. The polyhedral mesh is then segmented based on fracture lines to produce a set of independent surfaces called fracture components, containing the visible surface of each fractured mesh fragment. Subsequently, we utilize a Voronoi-based approximation of the input polyhedral mesh's medial axis to derive a hidden surface for each fragment. The result is a new watertight polyhedral mesh representing the full fracture component.
Results are acquired after a delay brief enough for interactive design. As the size of the input mesh increases, the computation time has been shown to grow linearly. A large mesh of 41,000 triangles requires approximately 3.4 seconds to perform a complete fracture of a complex pattern. For a wide variety of applications, the resulting fractures provide users with realistic feedback when external forces are applied.
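The segmentation step described above can be sketched as a flood fill: after the fracture lines are cut into the mesh, walk across triangle adjacency without ever crossing a fracture edge, collecting each fracture component. This is a minimal illustration only; the projection of the UV-space fracture map and the medial-axis back surface are omitted.

```python
# Sketch: segment a triangle mesh into fracture components by
# flood-filling adjacency, never crossing a fracture edge.
from collections import deque

def fracture_components(triangles, fracture_edges):
    """triangles: list of vertex-index triples; fracture_edges: set of
    frozenset vertex pairs acting as cut lines."""
    def edges(t):
        a, b, c = t
        return [frozenset(e) for e in ((a, b), (b, c), (c, a))]
    # Map each non-fracture edge to the triangles sharing it.
    by_edge = {}
    for i, t in enumerate(triangles):
        for e in edges(t):
            if e not in fracture_edges:
                by_edge.setdefault(e, []).append(i)
    seen, comps = set(), []
    for start in range(len(triangles)):
        if start in seen:
            continue
        comp, q = [], deque([start])
        seen.add(start)
        while q:
            i = q.popleft()
            comp.append(i)
            for e in edges(triangles[i]):
                for j in by_edge.get(e, []):
                    if j not in seen:
                        seen.add(j)
                        q.append(j)
        comps.append(sorted(comp))
    return comps

# Two triangle groups separated by the cut edge (1, 2).
tris = [(0, 1, 2), (1, 3, 2), (2, 3, 4), (3, 5, 4)]
cut = {frozenset((1, 2))}
print(fracture_components(tris, cut))  # -> [[0], [1, 2, 3]]
```

Each returned component corresponds to the visible surface of one fragment, ready to be closed into a watertight mesh by the hidden-surface step.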
Filtered Blending and Floating Textures: Projective Texturing with Multiple Images without Ghosting Artifacts
Whenever approximate 3D geometry is projectively texture-mapped from different directions simultaneously, annoyingly visible aliasing artifacts are the result. To prevent such ghosting in projective texturing and image-based rendering, we propose two different GPU-based rendering strategies: filtered blending and floating textures. Either approach is able to cope with imprecise 3D geometry as well as inexact camera calibration. Ghosting artifacts are effectively eliminated at real-time rendering frame rates on standard graphics hardware. With the proposed rendering techniques, better-quality rendering results are obtained from fewer images, coarser 3D geometry, and less accurately calibrated images.
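The shared intuition behind both strategies can be shown with a toy blend: images from cameras closer in angle to the desired view receive higher weight, which softens ghosting (and, with the per-pixel flow correction of floating textures, removes it). The cosine-power weighting below is an assumption for illustration, not the paper's exact formulation.

```python
# Toy view-dependent weighted blending of projectively mapped images.
import numpy as np

def blend(images, cam_angles, view_angle, sharpness=4.0):
    """images: list of HxW arrays; weight each by angular proximity
    of its camera to the desired viewing direction."""
    w = np.array([np.cos(a - view_angle) ** sharpness for a in cam_angles])
    w = np.clip(w, 0.0, None)
    w /= w.sum()
    return sum(wi * im for wi, im in zip(w, np.asarray(images, float)))

imgs = [np.full((2, 2), 0.0), np.full((2, 2), 1.0)]
out = blend(imgs, cam_angles=[0.0, np.pi / 2], view_angle=0.0)
print(out[0, 0])  # the aligned camera dominates -> ~0
```

Sharper weighting narrows the set of contributing images and so reduces ghosting at the cost of harder transitions; the GPU versions in the paper operate per fragment at real-time rates.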
ZIPMAPS: Zooming into Specific Texture Regions
In this technical report, we propose a method for rendering highly detailed close-up views of arbitrary textured surfaces. To augment the texture map locally with high-resolution information, we describe how to automatically, seamlessly merge unregistered images of different scales. Our hierarchical texture representation can easily be rendered in real-time, enabling zooming into specific texture regions to almost arbitrary magnification. Our method is useful wherever close-up renderings of specific regions shall be possible, without the need for excessively large texture maps.
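The lookup side of such a hierarchical representation can be sketched simply: a base texture is locally augmented by a registered high-resolution patch, and sampling falls through to the patch whenever the query lands inside its region. Seam blending and the multi-scale hierarchy of the report are omitted; the nearest-texel sampling is an assumption for brevity.

```python
# Sketch: base texture with a local high-resolution override patch.
import numpy as np

def sample(base, patches, u, v):
    """base: HxW array over [0,1]^2; patches: list of
    ((u0, v0, u1, v1), array). The last patch containing (u, v)
    wins; otherwise the base texture is used."""
    for (u0, v0, u1, v1), tex in reversed(patches):
        if u0 <= u <= u1 and v0 <= v <= v1:
            h, w = tex.shape
            i = min(int((v - v0) / (v1 - v0) * h), h - 1)
            j = min(int((u - u0) / (u1 - u0) * w), w - 1)
            return tex[i, j]
    h, w = base.shape
    return base[min(int(v * h), h - 1), min(int(u * w), w - 1)]

base = np.zeros((4, 4))
patch = ((0.5, 0.5, 1.0, 1.0), np.ones((8, 8)))
print(sample(base, [patch], 0.75, 0.75))  # inside the patch -> 1.0
print(sample(base, [patch], 0.25, 0.25))  # outside -> base texel 0.0
```

Because detail is stored only where a patch exists, close-up magnification is available in selected regions without paying for a uniformly high-resolution texture map.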