
    PolyMeCo: a tool for the analysis and comparison of polygonal meshes

    Master's dissertation in Electronics and Telecommunications Engineering. Polygonal meshes are used in several application areas to model different objects and structures. Depending on the application, such models sometimes have to be processed to, for instance, reduce their complexity (mesh simplification). Such processing introduces error, whose evaluation is of paramount importance when choosing the sequence of operations that is to be applied for a particular purpose. Although some mesh analysis and comparison tools are described in the literature, little attention has been given to the way mesh features (analysis) and mesh comparison results can be visualized. Moreover, particular functionalities have to be made available by such tools to enable systematic use and proper data analysis and exploration. PolyMeCo, a tool for polygonal mesh analysis and comparison, was designed and developed taking the above objectives into account. It enhances the way users perform mesh analysis and comparison by providing an integrated environment where various mesh quality measures and several visualization options are available and can be used in a coordinated way, thus leading to greater insight into the visualized data. This new tool has been successfully applied in two research works: (1) to compare the meshes produced by two mesh simplification algorithms, and (2) to study the applicability of the provided computational measures as estimators of user-perceived quality, as obtained through an observer study.
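
    As a rough illustration of the kind of geometric comparison measure such a tool reports (a minimal sketch only, not PolyMeCo's actual implementation; the function name and the NumPy/SciPy usage are assumptions), per-vertex deviations between an original and a processed mesh can be summarized with nearest-neighbor distance statistics:

        import numpy as np
        from scipy.spatial import cKDTree

        def mesh_deviation_stats(original_vertices, processed_vertices):
            # Approximate per-vertex deviation between two meshes by
            # nearest-neighbor distances: a crude stand-in for proper
            # surface-to-surface error measures.
            tree = cKDTree(original_vertices)
            distances, _ = tree.query(processed_vertices)
            return {
                "mean": float(distances.mean()),
                "rms": float(np.sqrt((distances ** 2).mean())),
                "max": float(distances.max()),  # akin to a one-sided Hausdorff distance
            }

        # Random point sets stand in for mesh vertices in this toy example.
        orig = np.random.rand(1000, 3)
        proc = orig + np.random.normal(scale=0.005, size=orig.shape)
        print(mesh_deviation_stats(orig, proc))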

    Polygon meshes for cloth simulation based on affinity regions

    Master's dissertation in Informatics Engineering. In computer graphics, virtual cloth is usually represented by triangle or quadrilateral meshes, on which the properties used by the physical model (elasticity, density, bending stiffness, etc.) can be defined. The more detailed a mesh, the more realistic the representation of real cloth, but also the more time-consuming the cloth simulation process. Using a cloth simulator with dynamic level of detail alleviates the processing cost inherent to more detailed meshes. This project aimed to create a set of functionalities providing a system for converting arbitrary polygonal meshes, freely modelled in a modelling program, into meshes with the specific format required by the cloth simulator with dynamic detail. The most important characteristic of this work is the ability of the developed conversion algorithms to preserve the individual characteristics of the geometric models, such as creases, texture mapping, or the mixture of different material properties in distinct regions of the model. The main idea consists in determining regions of related polygons over the original model, within which the conversion can be carried out freely. The resulting system is flexible enough to be extended with different affinity criteria, including criteria yet to be defined. Finally, the results obtained by the developed algorithms are analysed, both through visual inspection and through methodical comparison between the converted models and the original model. Some indicators that allow assessing the quality of the resulting meshes are also analysed.
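
    The core idea, determining regions of related polygons under a pluggable affinity criterion, can be sketched as greedy region growing over the face adjacency graph. This is a minimal illustration under assumed inputs (the names grow_affinity_regions, adjacency and is_affine are hypothetical), not the dissertation's actual algorithm:

        def grow_affinity_regions(faces, adjacency, is_affine):
            # Partition mesh faces into regions of mutually related polygons
            # by greedy region growing. `adjacency` maps a face index to its
            # neighboring face indices; `is_affine(a, b)` is the pluggable
            # affinity criterion (e.g. similar normals, same material).
            region_of = {}
            regions = []
            for seed in range(len(faces)):
                if seed in region_of:
                    continue
                region = [seed]
                region_of[seed] = len(regions)
                stack = [seed]
                while stack:
                    f = stack.pop()
                    for n in adjacency[f]:
                        if n not in region_of and is_affine(f, n):
                            region_of[n] = len(regions)
                            region.append(n)
                            stack.append(n)
                regions.append(region)
            return regions

        # Toy usage: four faces in a strip, affinity = same material label.
        materials = [0, 0, 1, 1]
        adjacency = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
        print(grow_affinity_regions(materials, adjacency,
                                    lambda a, b: materials[a] == materials[b]))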

    SculptFlow: Visualizing Sculpting Sequences by Continuous Summarization

    Digital sculpting is becoming ubiquitous for modeling organic shapes such as characters. Artists commonly show their sculpting sessions by producing time-lapse or sped-up videos, but the length of these sessions makes such visualizations either too long to remain interesting or too fast to be useful. In this paper, we present SculptFlow, an algorithm that summarizes sculpted mesh sequences by repeatedly merging pairs of subsequent edits, taking into account the number of summarized strokes, the magnitude of the edits, and whether they overlap. Summaries of any length are generated by stopping the merging process when the desired length is reached. We enhance the summaries by highlighting edited regions and drawing filtered strokes to indicate artists' workflows. We tested SculptFlow by recording professional artists as they modeled a variety of meshes, from detailed heads to full bodies. When compared to sped-up videos, we believe that SculptFlow produces more succinct and informative visualizations. We open-source SculptFlow for artists to show their work and release all our datasets so that others can improve upon our work.
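
    The merging loop described above can be sketched as follows; `merge_cost` and `merge` are assumed placeholders for the paper's cost (stroke count, edit magnitude, overlap) and for its edit-combination step:

        def summarize_edits(edits, target_length, merge_cost, merge):
            # Greedily merge the cheapest pair of subsequent edits until the
            # sequence reaches the desired length (the stopping rule above).
            edits = list(edits)
            while len(edits) > target_length:
                i = min(range(len(edits) - 1),
                        key=lambda k: merge_cost(edits[k], edits[k + 1]))
                edits[i:i + 2] = [merge(edits[i], edits[i + 1])]
            return edits

        # Toy usage: edits are numbers; the cost favours merging small pairs.
        edits = [5, 1, 1, 7, 2]
        print(summarize_edits(edits, 3, lambda a, b: a + b, lambda a, b: a + b))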

    Surface Completion Using Laplacian Transform

    Model acquisition processes usually produce incomplete surfaces due to technical constraints. This research presents an algorithm that performs surface completion using the available surface context. Previous works on surface completion do not handle surfaces with near-regular or irregular patterns well. The main goal of this research is to synthesize, for each hole, a surface whose context or geometric detail is similar to the hole's surroundings. This research uses a multi-resolution approach to decompose the model into a low-frequency part and a high-frequency part. The low-frequency part is filled smoothly. The high-frequency part is transformed into Laplacian coordinates and filled using an example-based synthesis approach. The algorithm is tested with planar and curved surfaces with all kinds of relief patterns. The results indicate that the holes can be completed with geometric detail similar to the surrounding surface.
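
    As a sketch of the high-frequency encoding, the Laplacian coordinate of a vertex under the simple umbrella (uniform-weight) operator is the vertex minus the centroid of its one-ring neighbors; whether the paper uses uniform or cotangent weights is not stated here, so the uniform variant below is an assumption:

        import numpy as np

        def laplacian_coordinates(vertices, neighbors):
            # Umbrella-operator Laplacian coordinates: each vertex minus the
            # centroid of its one-ring neighbors. One common way to encode
            # the high-frequency geometric detail of a surface.
            delta = np.empty_like(vertices)
            for i, ring in enumerate(neighbors):
                delta[i] = vertices[i] - vertices[ring].mean(axis=0)
            return delta

        # Toy usage: the middle vertex of a bent 3-vertex chain.
        V = np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 0.0], [2.0, 0.0, 0.0]])
        print(laplacian_coordinates(V, [[1], [0, 2], [1]]))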

    3DFlow: Continuous Summarization of Mesh Editing Workflows

    Mesh editing software is continually improving, allowing more detailed meshes to be created efficiently by skilled artists. Many artists are interested in sharing not only the final mesh, but also their whole workflows, both for creating tutorials and for showcasing the artist's talent, style, and expertise. Unfortunately, while mesh creation tools are improving quickly, sharing editing workflows remains cumbersome, since time-lapsed or sped-up videos remain the most common medium. In this paper, we present 3DFlow, an algorithm that computes continuous summarizations of mesh editing workflows. 3DFlow takes as input a sequence of meshes and outputs a visualization of the workflow summarized at any level of detail. The output is enhanced by highlighting edited regions and, if provided, overlaying visual annotations to indicate the artist's work, e.g. summarizing brush strokes in sculpting. We tested 3DFlow with a large set of inputs using a variety of mesh editing techniques, from digital sculpting to low-poly modeling, and found that it performed well for all of them. Furthermore, 3DFlow is independent of the modeling software used, since it requires only mesh snapshots, using additional information only for optional overlays. We open-source 3DFlow for artists to showcase their work and release all our datasets so other researchers can improve upon our work.
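
    A minimal sketch of the edited-region highlighting, under the simplifying assumption (which 3DFlow itself does not require) that consecutive snapshots share vertex correspondence:

        import numpy as np

        def edited_vertices(snapshot_a, snapshot_b, threshold=1e-4):
            # Flag vertices that moved between two mesh snapshots with
            # identical connectivity: a simple way to mark edited regions.
            displacement = np.linalg.norm(snapshot_b - snapshot_a, axis=1)
            return displacement > threshold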

    A Comparative Study on Polygonal Mesh Simplification Algorithms

    Polygonal meshes are a common way of representing three-dimensional surface models in many different areas of computer graphics and geometry processing. However, as technology evolves, polygonal models are becoming more and more complex. As the complexity of the models increases, the visual approximation to real-world objects gets better, but there is a trade-off between the cost of processing these models and the quality of the approximation. In order to reduce this cost, the number of polygons in a model can be reduced by mesh simplification algorithms. These algorithms are so widely used that nearly all of the popular mesh editing libraries include at least one of them. In this work, the polygonal mesh simplification algorithms embedded in the open-source libraries CGAL, VTK, and OpenMesh are compared using the Metro geometric error measuring tool. In this way, we aim to provide guidance for developers choosing among publicly available mesh libraries when implementing polygonal mesh simplification.
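
    As one concrete example of the kind of pipeline being compared, the sketch below decimates a mesh with VTK's quadric decimation filter from Python (the file names are hypothetical); the geometric error would then be measured separately by running the external Metro tool on the original and simplified meshes:

        import vtk

        reader = vtk.vtkPLYReader()
        reader.SetFileName("bunny.ply")  # hypothetical input file

        decimate = vtk.vtkQuadricDecimation()
        decimate.SetInputConnection(reader.GetOutputPort())
        decimate.SetTargetReduction(0.9)  # aim to remove ~90% of the triangles
        decimate.Update()

        simplified = decimate.GetOutput()
        print("triangles after decimation:", simplified.GetNumberOfPolys())

        writer = vtk.vtkPLYWriter()
        writer.SetFileName("bunny_simplified.ply")
        writer.SetInputData(simplified)
        writer.Write()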

    Robust and parallel mesh reconstruction from unoriented noisy points.

    Thesis (M.Phil.), The Chinese University of Hong Kong, 2009, by Hoi Sheung. Includes bibliographical references (p. 65-70); abstract also in Chinese. The record reproduces the table of contents, summarized here. Chapter 1 introduces the work, its main contributions, and the outline. Chapter 2 reviews related work on volumetric reconstruction, combinatorial approaches, robust statistics in surface reconstruction, down-sampling of massive point sets, and streaming and parallel computing. Chapter 3 covers robust normal estimation and point projection: a robust estimator, the mean shift method, normal estimation and projection, moving least squares (MLS) surfaces (local reference domain, local bivariate polynomial, and a simpler implementation), robust MLS by forward search and its comparison with RMLS, k-nearest-neighbor search with octrees, kd-trees and other techniques, principal component analysis, polynomial fitting, and a highly parallel implementation. Chapter 4 presents error-controlled subsampling: centroidal Voronoi diagrams, an energy function combining distance, shape-prior and global terms, Lloyd's algorithm, and clustering optimization and subsampling. Chapter 5 describes mesh generation: tight cocone triangulation, a clustering-based local triangulation (initial surface reconstruction, cleaning process, comparisons), and computation of the dual graph. Chapter 6 reports results for mesh reconstruction from noisy point clouds and for the clustering-based local triangulation. Chapter 7 concludes with key contributions, factors affecting the algorithm, and future work. Appendix A covers building the neighborhood table in streaming; Appendix B lists publications.
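
    The principal component analysis step listed under Chapter 3 can be sketched as follows: estimate an unoriented normal at each point as the smallest-eigenvalue eigenvector of the covariance of its k nearest neighbors. This is the plain, non-robust variant, not the thesis's forward-search estimator; the function name and the NumPy/SciPy usage are assumptions:

        import numpy as np
        from scipy.spatial import cKDTree

        def estimate_normals(points, k=16):
            # For each point, the eigenvector of the smallest eigenvalue of
            # the covariance of its k nearest neighbors approximates the
            # (unoriented) surface normal.
            tree = cKDTree(points)
            _, idx = tree.query(points, k=k)
            normals = np.empty_like(points)
            for i, ring in enumerate(idx):
                nbrs = points[ring]
                cov = np.cov((nbrs - nbrs.mean(axis=0)).T)
                eigvals, eigvecs = np.linalg.eigh(cov)
                normals[i] = eigvecs[:, 0]  # smallest-eigenvalue direction
            return normals

        # Toy usage: points on the z = 0 plane should yield normals near ±z.
        pts = np.random.rand(200, 3)
        pts[:, 2] = 0.0
        print(estimate_normals(pts)[:3])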

    3D skull models: a new craniometric approach

    Master's dissertation in Information Systems. This dissertation presents a new approach to conducting craniometric analysis based on 3D models of skulls. The procedures currently used by anthropologists rely on traditional craniometry, i.e. manual measurements, which entails several problems, such as difficulty in ensuring repeatability of the measurements, measurement errors, and possible damage to the skulls inherent to their handling. The proposed approach relies on the acquisition of the skulls using a structured-light 3D scanner (performed by a third party) and subsequent analysis using an application specifically developed for that purpose; it is on the latter that the work described in this document is based. Several methods are addressed, such as analysis of 3D meshes, studies of normals and curvatures, and extraction of points of interest (landmarks) and the corresponding measurements. Finally, conclusions about the developed methods and results are presented, along with suggestions for future work.
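
    As a sketch of the measurement step, once landmarks have been located on the 3D model, classic caliper-style measures reduce to distances between labeled points; the coordinates below are hypothetical:

        import numpy as np

        # Hypothetical landmark coordinates picked on a 3D skull mesh (metres).
        landmarks = {
            "glabella": np.array([0.00, 0.09, 0.10]),
            "opisthocranion": np.array([0.00, 0.07, -0.08]),
        }

        def inter_landmark_distance(a, b):
            # Straight-line distance between two landmarks, the kind of
            # measurement traditionally taken by hand with calipers.
            return float(np.linalg.norm(landmarks[a] - landmarks[b]))

        # Maximum cranial length is classically measured between these two.
        print(inter_landmark_distance("glabella", "opisthocranion"))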

    Parallel fluid dynamics for the film and animation industries

    Includes bibliographical references (leaves 142-149). The creation of automated fluid effects for film and media using computer simulations is popular, as artist time is reduced and greater realism can be achieved through the use of numerical simulation of physical equations. The fluid effects in today’s films and animations have large scenes with high detail requirements, so the time taken by such automated approaches is large. To solve this, cluster environments making use of hundreds or more CPUs have been used. This overcomes the processing power and memory limitations of a single computer and allows very large scenes to be created. One of the newer methods for fluid simulation is the Lattice Boltzmann Method (LBM). This is a cellular-automata type of algorithm, which parallelizes easily. An important part of the process of parallelization is load balancing: the distribution of computation amongst the available computing resources in the cluster. To date, parallelizations of the Lattice Boltzmann Method have only made use of static load balancing. Instead, it is possible to make use of dynamic load balancing, which adjusts the computation distribution as the simulation progresses. Here, we investigate the use of the LBM in conjunction with a Volume of Fluid (VOF) surface representation in a parallel environment, with the aim of producing large-scale scenes for the film and animation industries. The VOF method tracks mass exchange between cells of the LBM. In particular, we implement a new dynamic load balancing algorithm to improve the efficiency of the fluid simulation using this method. Fluid scenes from films and animations have two important requirements: the amount of detail and the spatial resolution of the fluid. These aspects of the VOF LBM are explored by considering the time for scene creation using single- and multi-CPU implementations of the method. The scalability of the method is studied by plotting the run time, speedup, and efficiency of scene creation against the number of CPUs. From such plots, an estimate is obtained of the feasibility of creating scenes of a given level of detail, enabling the recommendation of architectures for the creation of specific scenes. Using a parallel implementation of the VOF LBM, we successfully create large scenes with great detail. In general, considering the significant amounts of communication required for the parallel method, it is shown to scale well, favouring scenes with greater detail. The scalability studies show that the new dynamic load balancing algorithm improves the efficiency of the parallel implementation, but only when using lower numbers of CPUs; for larger numbers of CPUs, the dynamic algorithm reduces the efficiency. We hypothesise that the latter effect can be removed by making use of a centralized load-balancing decision instead of the current decentralized approach. A cluster comprising 200 CPUs is recommended for the production of large scenes with a grid size of 600³ in a reasonable time frame.
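
    The scalability measures used in these studies are the standard ones, speedup S = T1 / Tn and efficiency E = S / n; a small sketch with hypothetical timings:

        def speedup_and_efficiency(t_serial, t_parallel, n_cpus):
            # Speedup is serial time over parallel time; efficiency is
            # speedup per CPU (1.0 means perfect linear scaling).
            speedup = t_serial / t_parallel
            efficiency = speedup / n_cpus
            return speedup, efficiency

        # Hypothetical timings for one scene at increasing CPU counts.
        for n, t in [(1, 3600.0), (50, 90.0), (100, 55.0), (200, 36.0)]:
            s, e = speedup_and_efficiency(3600.0, t, n)
            print(f"{n:4d} CPUs: speedup {s:6.1f}, efficiency {e:5.2f}")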

    A linear framework for character skinning

    Character animation is the process of modelling and rendering a mobile character in a virtual world. It has numerous applications both off-line, such as virtual actors in films, and real-time, such as in games and other virtual environments. There are a number of algorithms for determining the appearance of an animated character, with different trade-offs between quality, ease of control, and computational cost. We introduce a new method, animation space, which provides a good balance between the ease-of-use of very simple schemes and the quality of more complex schemes, together with excellent performance. It can also be integrated into a range of existing computer graphics algorithms. Animation space is described by a simple and elegant linear equation. Apart from making it fast and easy to implement, linearity facilitates mathematical analysis. We derive two metrics on the space of vertices (the “animation space”), which indicate the mean and maximum distances between two points on an animated character. We demonstrate the value of these metrics by applying them to the problems of parametrisation, level-of-detail (LOD) and frustum culling. These metrics provide information about the entire range of poses of an animated character, so they are able to produce better results than considering only a single pose of the character, as is commonly done. In order to compute parametrisations, it is necessary to segment the mesh into charts. We apply an existing algorithm based on greedy merging, but use a metric better suited to the problem than the one suggested by the original authors. To combine the parametrisations with level-of-detail, we require the charts to have straight edges. We explored a heuristic approach to straightening the edges produced by the automatic algorithm, but found that manual segmentation produced better results. Animation space is nevertheless beneficial in flattening the segmented charts; we use least squares conformal maps (LSCM), with the Euclidean distance metric replaced by one of our animation-space metrics. The resulting parametrisations have significantly less overall stretch than those computed based on a single pose. Similarly, we adapt appearance preserving simplification (APS), a progressive mesh-based LOD algorithm, to apply to animated characters by replacing the Euclidean metric with an animation-space metric. When using the memoryless form of APS (in which local rather than global error is considered), the use of animation space for computations reduces the geometric errors introduced by LOD decomposition, compared to simplification based on a single pose. User tests, in which users compared video clips of the two, demonstrated a statistically significant preference for the animation-space simplifications, indicating that the visual quality is better as well. While other methods exist to take multiple poses into account, they are based on a sampling of the pose space, and the computational cost scales with the number of samples used. In contrast, our method is analytic and uses samples only to gather statistics. The quality of LOD approximations is improved further by introducing a novel approach to LOD, influence simplification, in which we remove the influences of bones on vertices and adjust the remaining influences to approximate the original vertex as closely as possible. Once again, we use an animation-space metric to determine the approximation error.
    By combining influence simplification with the progressive mesh structure, we can obtain further improvements in quality: for some models and at some detail levels, the error is reduced by an order of magnitude relative to a pure progressive mesh. User tests showed that for some models this significantly improves quality, while for others it makes no significant difference. Animation space is a generalisation of skeletal subspace deformation (SSD), a popular method for real-time character animation. This means that there is a large existing base of models that can immediately benefit from the modified algorithms mentioned above. Furthermore, animation space almost entirely eliminates the well-known shortcomings of SSD (the so-called “candy-wrapper” and “collapsing elbow” effects). We show that given a set of sample poses, we can fit an animation-space model to these poses by solving a linear least-squares problem. Finally, we demonstrate that animation space is suitable for real-time rendering, by implementing it, along with level-of-detail rendering, on a PC with a commodity video card. We show that although the extra degrees of freedom make the straightforward approach infeasible for complex models, it is still possible to obtain high performance; in fact, animation space requires fewer basic operations to transform a vertex position than SSD. We also consider two methods of lighting LOD-simplified models using the original normals: tangent-space normal maps, an existing method that is fast to render but does not capture dynamic structures such as wrinkles; and tangent maps, a novel approach that encodes animation-space tangent vectors into textures, and which captures dynamic structures. We compare the methods both for performance and quality, and find that tangent-space normal maps are at least an order of magnitude faster, while user tests failed to show any perceived difference in quality between them.
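
    For reference, the SSD scheme that animation space generalises blends bone transforms linearly per vertex, v = sum_i w_i T_i v_rest; the sketch below shows that baseline (the abstract states animation space generalises this, roughly by replacing each weighted rest position with a general per-bone vector, but the exact formulation is not given here):

        import numpy as np

        def ssd_vertex(rest_position, weights, bone_transforms):
            # Skeletal subspace deformation (linear blend skinning): a
            # skinned vertex is a weighted combination of the rest position
            # transformed by each influencing bone.
            v = np.append(rest_position, 1.0)  # homogeneous coordinates
            blended = sum(w * (T @ v) for w, T in zip(weights, bone_transforms))
            return blended[:3]

        # Hypothetical example: a vertex influenced by two bones.
        T0 = np.eye(4)
        T1 = np.eye(4)
        T1[:3, 3] = [0.0, 0.1, 0.0]  # second bone translated up by 0.1
        print(ssd_vertex(np.array([1.0, 0.0, 0.0]), [0.7, 0.3], [T0, T1]))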