3,775 research outputs found

    Unwind: Interactive Fish Straightening

    The ScanAllFish project is a large-scale effort to scan all of the world's 33,100 known species of fishes. It has already generated thousands of volumetric CT scans of fish species, which are available on open-access platforms such as the Open Science Framework. To achieve the scanning rate required for a project of this magnitude, many specimens are grouped together into a single tube and scanned all at once. The resulting data contain many fish which are often bent and twisted to fit into the scanner. Our system, Unwind, is a novel interactive visualization and processing tool which extracts, unbends, and untwists volumetric images of fish with minimal user interaction. Our approach enables scientists to interactively unwarp these volumes to remove the undesired torque and bending, using a piecewise-linear skeleton extracted by averaging isosurfaces of a harmonic function connecting the head and tail of each fish. The result is a volumetric dataset of an individual, straight fish in a canonical pose defined by the expert marine biologist user. We have developed Unwind in collaboration with a team of marine biologists: our system has been deployed in their labs, and is presently being used for dataset construction, biomechanical analysis, and the generation of figures for scientific publication
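
    As a rough illustration of the skeleton-extraction step described above (not the authors' implementation), the sketch below relaxes a discrete harmonic function between head and tail seeds on a binary voxel mask, then averages voxel coordinates per isovalue band to obtain a piecewise-linear skeleton. The function name, iteration count, and joint count are illustrative assumptions.

```python
import numpy as np

def harmonic_skeleton(mask, head_idx, tail_idx, n_iter=2000, n_joints=32):
    """Minimal sketch of the skeleton idea: relax a discrete Laplace equation
    inside the fish voxels with the head fixed at 0 and the tail at 1, then
    average the voxel coordinates in each isovalue band to obtain
    piecewise-linear skeleton joints.

    mask: 3D bool array (True inside the fish, assumed not to touch the
    array border); head_idx / tail_idx: voxel index tuples for the seeds.
    """
    m = mask.astype(float)
    u = np.full(mask.shape, 0.5) * m
    for _ in range(n_iter):
        nbr_sum = np.zeros_like(u)
        nbr_cnt = np.zeros_like(u)
        for axis in range(3):                    # 6-connected neighbourhood
            for shift in (1, -1):
                nbr_sum += np.roll(u * m, shift, axis)
                nbr_cnt += np.roll(m, shift, axis)
        u = np.where(mask & (nbr_cnt > 0), nbr_sum / np.maximum(nbr_cnt, 1), u)
        u[head_idx] = 0.0                        # Dirichlet boundary values
        u[tail_idx] = 1.0
    coords = np.argwhere(mask)                   # same ordering as u[mask]
    vals = u[mask]
    edges = np.linspace(0.0, 1.0, n_joints + 1)
    joints = [coords[(vals >= lo) & (vals < hi)].mean(axis=0)
              for lo, hi in zip(edges[:-1], edges[1:])
              if np.any((vals >= lo) & (vals < hi))]
    return np.asarray(joints)                    # ordered head-to-tail joints
```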

    Preserving attribute values on simplified meshes by re-sampling detail textures

    Many sophisticated solutions have been proposed to reduce the geometric complexity of 3D meshes. A slightly less studied problem is how to preserve attribute detail on simplified meshes (e.g., color, high-frequency shape details, scalar fields, etc.). We present a general approach that is completely independent of the simplification technique adopted to reduce the mesh size. We use resampled textures (RGB, bump, displacement or shade maps) to decouple attribute-detail representation from geometry simplification. The original contribution is that preservation is performed after simplification, by building a set of triangular texture patches that are then packed into a single texture map. This general solution can be applied to the output of any topology-preserving simplification code, and it allows any attribute value defined on the high-resolution mesh to be recovered. Moreover, decoupling shape simplification from detail preservation (and encoding the latter with texture maps) leads to high simplification rates and highly efficient rendering. We also describe an alternative application: the conversion of 3D models with 3D procedural textures (which generally force the use of software renderers) into standard 3D models with 2D bitmap textures
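
    A minimal sketch of the detail-resampling idea, under simplifying assumptions (nearest-vertex lookup over the high-resolution mesh instead of true surface sampling, and one patch at a time rather than a packed atlas). The function name and patch resolution are illustrative, not the paper's implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def bake_triangle_patch(tri_verts, hi_points, hi_colors, res=32):
    """Rasterise one simplified triangle into a small texture patch by
    sampling, at every texel, the attribute of the nearest high-resolution
    sample point. A real implementation would sample the high-res surface
    and pack all patches into a single texture map.

    tri_verts : (3, 3) corners of the simplified triangle
    hi_points : (N, 3) vertex positions of the high-resolution mesh
    hi_colors : (N, 3) per-vertex colours (any attribute works the same way)
    """
    tree = cKDTree(hi_points)
    patch = np.zeros((res, res, 3))
    for i in range(res):
        for j in range(res - i):                 # barycentric grid over the triangle
            a = i / (res - 1)
            b = j / (res - 1)
            c = 1.0 - a - b
            p = a * tri_verts[0] + b * tri_verts[1] + c * tri_verts[2]
            _, idx = tree.query(p)               # closest high-res sample
            patch[i, j] = hi_colors[idx]
    return patch                                 # one patch to pack into the atlas
```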

    Regular Grids: An Irregular Approach to the 3D Modelling Pipeline

    The 3D modelling pipeline covers the process by which a physical object is scanned to create a set of points that lie on its surface. These data are then cleaned to remove outliers and noise, and the points are reconstructed into a digital representation of the original object. The aim of this thesis is to present novel grid-based methods and provide several case studies of areas in the 3D modelling pipeline in which they may be effectively put to use. The first is a demonstration of how using a grid can significantly reduce the memory required to perform the reconstruction. The second is the detection of surface features (ridges, peaks, troughs, etc.) during the surface reconstruction process. The third contribution is the alignment of two meshes with zero prior knowledge, which is particularly suited to aligning two related, but not identical, models. The final contribution is the comparison of two similar meshes with support for both qualitative and quantitative outputs
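
    As a generic illustration of the grid-based idea (not the thesis's own methods), the sketch below bins scan points into a regular grid and keeps one representative per occupied cell, so later pipeline stages touch far fewer points; the cell size is an illustrative parameter.

```python
import numpy as np

def grid_downsample(points, cell_size=1.0):
    """Bin scan points into a regular grid and return the centroid of each
    occupied cell, reducing the memory that reconstruction, feature
    detection, alignment, or comparison must handle."""
    keys = np.floor(points / cell_size).astype(np.int64)   # cell index per point
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    reps = np.zeros((inverse.max() + 1, 3))
    counts = np.zeros(inverse.max() + 1)
    np.add.at(reps, inverse, points)                        # accumulate per cell
    np.add.at(counts, inverse, 1)
    return reps / counts[:, None]                           # one centroid per cell
```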

    Creating 3D models of cultural heritage sites with terrestrial laser scanning and 3D imaging

    The advent of terrestrial laser scanners has made the digital preservation of cultural heritage sites affordable, producing accurate and detailed 3D computer-model representations of virtually any kind of 3D object, such as buildings, infrastructure, and even entire landscapes. However, one of the key issues with this technique is the large number of recorded points, a problem further intensified by recent advances in laser-scanning technology, which have increased the data acquisition rate from 25 thousand to 1 million points per second. The following research presents a workflow for the processing of large-volume laser-scanning data, with a special focus on the needs of the Zamani initiative. The research project, based at the University of Cape Town, spatially documents African cultural heritage sites and landscapes and produces meshed 3D models of various historically important objects, such as fortresses, mosques, churches, castles, palaces, rock art shelters, statues, stelae and even landscapes

    A configurable geometry processing system

    Design and development of Meshpipe, a simple tool that speeds up the development of mesh-processing pipelines. It includes a Python API for rapid prototyping as well as a 3D viewer environment to easily inspect and interact with the processed meshes
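
    The abstract does not document Meshpipe's API, so the snippet below is a purely hypothetical usage example of what a configurable Python mesh-processing pipeline with a viewer might look like; every module, class, and function name is invented for illustration.

```python
# Hypothetical usage only: the Meshpipe API is not documented in the abstract,
# so every name below is invented for illustration.
import meshpipe as mp                      # hypothetical package name

mesh = mp.load_mesh("scan.ply")            # hypothetical loader

pipeline = mp.Pipeline([                   # hypothetical configurable pipeline
    mp.steps.RemoveOutliers(k=8, std_ratio=2.0),
    mp.steps.Smooth(iterations=5),
    mp.steps.Decimate(target_faces=50_000),
])

result = pipeline.run(mesh)
mp.view(result)                            # hypothetical 3D viewer for inspection
```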

    3D Face Reconstruction and Emotion Analytics with Part-Based Morphable Models

    3D face reconstruction and facial expression analytics using 3D facial data are active research topics in computer graphics and computer vision. In this proposal, we first review the background knowledge for emotion analytics using 3D morphable face models, including geometry feature-based methods, statistical model-based methods and more advanced deep learning-based methods. Then, we introduce a novel 3D face modeling and reconstruction solution that robustly and accurately acquires 3D face models from a pair of images captured by a single smartphone camera. Two selfie photos of a subject, taken from the front and the side, are used to guide our Non-Negative Matrix Factorization (NMF) induced part-based face model to iteratively reconstruct an initial 3D face of the subject. Then, an iterative detail-updating method is applied to the initial 3D face to reconstruct facial details by optimizing lighting parameters and local depths. Our iterative 3D face reconstruction method permits fully automatic registration of a part-based face representation to the acquired face data and the detailed 2D/3D features to build a high-quality 3D face model. The NMF part-based face representation, learned from a 3D face database, facilitates effective alternation between global fitting and adaptive local detail fitting. Our system is flexible and allows users to conduct the capture in any uncontrolled environment. We demonstrate the capability of our method by allowing users to capture and reconstruct their 3D faces by themselves. Based on the reconstructed 3D face model, we can analyze the facial expression and the related emotion in 3D space. We present a novel approach to analyze facial expressions from images and a quantitative information visualization scheme for exploring this type of visual data. From the reconstruction obtained with the NMF part-based morphable 3D face model, basis parameters and a displacement map are extracted as features for facial emotion analysis and visualization. Based upon these features, two Support Vector Regressions (SVRs) are trained to determine the fuzzy Valence-Arousal (VA) values that quantify the emotions. The continuously changing emotion status can be intuitively analyzed by visualizing the VA values in VA-space. Our emotion analysis and visualization system, based on the 3D NMF morphable face model, detects expressions robustly from various head poses, face sizes and lighting conditions, and is fully automatic in computing the VA values from images or video sequences with various facial expressions. To evaluate our method, we test our system on publicly available databases and evaluate the emotion analysis and visualization results. We also apply our method to quantifying emotion changes during motivational interviews. These experiments and applications demonstrate the effectiveness and accuracy of our method. To improve expression recognition accuracy, we present a facial expression recognition approach with a 3D Mesh Convolutional Neural Network (3DMCNN) and a visual analytics guided 3DMCNN design and optimization scheme. The geometric properties of the surface are computed using the 3D face model of a subject with facial expressions. Instead of using a regular Convolutional Neural Network (CNN) to learn intensities of the facial images, we convolve the geometric properties on the surface of the 3D model using the 3DMCNN.
We design a geodesic distance-based convolution method to overcome the difficulties raised by the irregular sampling of the face surface mesh. We further present an interactive visual analytics approach for designing and modifying the networks, analyzing the learned features, and clustering similar nodes in the 3DMCNN. By removing low-activity nodes in the network, the performance of the network is greatly improved. We compare our method with the regular CNN-based method by interactively visualizing each layer of the networks, and analyze the effectiveness of our method by studying representative cases. Testing on public datasets, our method achieves higher recognition accuracy than traditional image-based CNNs and other 3D CNNs. The presented framework, including the 3DMCNN and interactive visual analytics of the CNN, can be extended to other applications
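
    A minimal sketch of the NMF-plus-SVR emotion-analysis stage described above (not the authors' code), using scikit-learn: part-based coefficients are extracted from non-negative face data and two SVRs map them to valence and arousal. The array names, component count, and kernel choice are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import NMF
from sklearn.svm import SVR

# faces_train: (n_samples, n_features) non-negative face data (e.g. flattened
# depth/displacement values); valence, arousal: per-sample labels.
def train_va_regressors(faces_train, valence, arousal, n_parts=30):
    nmf = NMF(n_components=n_parts, init="nndsvda", max_iter=500)
    coeffs = nmf.fit_transform(faces_train)          # part-based coefficients
    svr_v = SVR(kernel="rbf").fit(coeffs, valence)   # one SVR per VA axis
    svr_a = SVR(kernel="rbf").fit(coeffs, arousal)
    return nmf, svr_v, svr_a

def predict_va(nmf, svr_v, svr_a, face):
    w = nmf.transform(face.reshape(1, -1))           # project face onto the parts
    return svr_v.predict(w)[0], svr_a.predict(w)[0]  # point in VA-space
```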

    Vertex classification for non-uniform geometry reduction.

    Complex models created from isosurface extraction or CAD, and highly accurate 3D models produced from high-resolution scanners, are useful, for example, for medical simulation, Virtual Reality and entertainment. Models in general often require some manual editing before they can be incorporated in a walkthrough, simulation, computer game or movie. The visualization challenges of a 3D editing tool may be regarded as similar to those of other applications that include an element of visualization, such as Virtual Reality. However, the rendering and interaction requirements of each of these applications vary according to its purpose. For rendering photo-realistic images in movies, render farms can run uninterrupted for weeks; a 3D editing tool, in contrast, requires fast access to a model's fine detail. In Virtual Reality, rendering-acceleration techniques such as level of detail (LoD) can temporarily render parts of a scene with alternative lower-complexity versions in order to meet a frame rate tolerable for the user. These alternative versions can be dynamic increments of complexity, or static models that were uniformly simplified across the model by minimizing some cost function. Scanners typically have a fixed sampling rate for the entire model being scanned, and therefore may generate large amounts of data in areas that are not of much interest or that contribute little to the application at hand. It is therefore desirable to simplify such models non-uniformly. Features such as very high-curvature areas or borders can be detected automatically and simplified differently from other areas without any interaction or visualization. However, a problem arises when one wishes to manually select features of interest in the original model to preserve, and to create stand-alone, non-uniformly reduced versions of large models, for example for medical simulation. To inspect and view such models, the memory requirements of LoD representations can be prohibitive and prevent storage of a model in main memory. Furthermore, although asynchronous rendering of a simplified base model ensures a frame rate tolerable to the user whilst detail is paged, no guarantee can be made that what the user is selecting is at the original resolution of the model, or at an appropriate LoD, owing to disk lag or the complexity of a particular view selected by the user. This thesis presents an interactive method, in the context of a 3D editing application, for feature selection from any model that fits in main memory. We present a new compression/decompression technique for triangle normals and colours which does not require dedicated hardware, allows for an 87.4% memory reduction with at most 1.3/2.5 degrees of error on triangle normals, and allows larger models to fit in main memory and be viewed interactively. To address scale and available hardware resources, we reference a hierarchy of volumes of different sizes. The distances of the volumes at each level of the hierarchy to the intersection point of the line of sight with the model are calculated and sorted. At startup, an appropriate level of the tree is automatically chosen by separating the time required for rendering from that required for sorting, and constraining the latter according to the resources available.
A clustered navigation skin and depth-buffer strategy allows for the interactive visualisation of models of any size, ensuring that triangles from the closest volumes are rendered over the navigation skin even when the clustered skin may be closer to the viewer than the original model. We show results with scanned models, CAD models, textured models and an isosurface. This thesis also addresses numerical issues arising from the optimisation of cost functions in LoD algorithms, and presents a semi-automatic solution for selecting the threshold on the condition number of the matrix to be inverted for optimal placement of the new vertex created by an edge collapse. We show that the units in which a model is expressed may inadvertently affect the condition of these matrices, hence affecting the evaluation of different LoD methods with different solvers. We use the same solver with an automatically calibrated threshold to evaluate different uniform geometry reduction techniques. We then present a framework for non-uniform reduction of regular scanned models that can be used in conjunction with a variety of LoD algorithms. The benefits of non-uniform reduction are presented in the context of an animation system. (Abstract shortened by UMI.)
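
    A sketch of the condition-number thresholding idea discussed above, assuming a Garland-Heckbert-style quadric edge collapse (the thesis's exact formulation and calibrated threshold are not shown here; the threshold value and fallback are illustrative).

```python
import numpy as np

def collapse_position(Q, v1, v2, cond_threshold=1e6):
    """Choose the new vertex position for an edge collapse: solve the 3x3
    system from the summed 4x4 quadric Q for the optimal placement only if
    it is well-conditioned, otherwise fall back to the edge midpoint."""
    A = Q[:3, :3]
    b = -Q[:3, 3]
    if np.linalg.cond(A) < cond_threshold:   # well-conditioned: optimal placement
        return np.linalg.solve(A, b)
    return 0.5 * (v1 + v2)                   # ill-conditioned: midpoint fallback
```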

    Reconstruction of High Resolution 3D Objects from Incomplete Images and 3D Information

    To this day, digital object reconstruction is a quite complex area that requires many techniques and novel approaches, in which high-resolution 3D objects present one of the biggest challenges. There are two main classes of methods for reconstructing high-resolution objects and images: passive methods and active methods. Which class applies depends on the type of information available as input for modeling 3D objects: passive methods use information contained in the images, while active methods make use of controlled light sources, such as lasers. The reconstruction of 3D objects is quite complex and there is no unique solution. The use of specific methodologies for the reconstruction of certain kinds of objects, such as human faces, molecular structures, etc., is also very common. This paper proposes a novel hybrid methodology, composed of 10 phases, that combines active and passive methods, using images and a laser in order to supplement the missing information and obtain better results in the 3D object reconstruction. Finally, the proposed methodology proved its efficiency on two topologically complex objects
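
    As a rough illustration of the general principle of supplementing passive (image-based) data with active (laser) data, and not of the paper's 10-phase methodology, the sketch below adds laser points only where the image-based point cloud is sparse; the function name and gap radius are illustrative assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def fuse_passive_active(passive_pts, active_pts, gap_radius=0.01):
    """Keep the image-based (passive) reconstruction and add laser (active)
    points only where no passive point lies within gap_radius, i.e. where
    the passive data is incomplete."""
    tree = cKDTree(passive_pts)
    dists, _ = tree.query(active_pts)          # distance to nearest passive point
    fill = active_pts[dists > gap_radius]      # laser points covering the gaps
    return np.vstack([passive_pts, fill])
```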

    Computational modelling of the selective laser sintering process

    Integrated master's dissertation in Polymer Engineering. Additive Manufacturing (AM) has increased in popularity in numerous important and demanding industries due to its capability to manufacture parts with complex geometries and little waste.
As one of its most popular techniques, Selective Laser Sintering (SLS) is sought after by several industries that aim to replace conventional and more expensive processes. However, the SLS process is intrinsically complex due to the various underlying multi-physics phenomena, and more studies are needed to obtain further insight into it. This has resulted in considerable academic interest in optimizing the process so that it can meet industrial standards. Most of these optimization attempts are performed through experimental methods that are time consuming, expensive and do not always yield the optimal configurations. This has led researchers to resort to computational modelling, aiming at a better understanding of the process in order to anticipate and fix defects. The main objective of the present work was to develop a model capable of simulating the SLS process for polymeric applications, within an open-source framework, at the particle length scale. Since distinct approaches are required to accurately simulate each step of the SLS process, different numerical methods were employed to develop a tool capable of studying the impact, in a representative section of the powder bed, of the physical parameters that can be adjusted in the process. The developed work comprised several steps, starting with an extensive study of the theoretical aspects of the SLS process, aimed at becoming acquainted with the phenomena involved, the course of the process, its parameters and their influence, as well as evaluating the existing limitations and challenges. This study was then followed by a detailed analysis of the models most commonly employed to represent the major phenomena, and of the accuracy of the approaches given the simplifications adopted. A set of computational tools was then presented and their built-in models selected, when possible, according to the preceding literature review. Lastly, various tests were carried out to obtain a qualitative experimental validation of the code used, to ensure that the model was adequate to simulate the process, allowing the study and observation of the influence of the principal process parameters and of the sintering progression. The achieved developments represent a significant advance towards the simulation of the SLS process. With the use of open-source software (LIGGGHTS and OpenFOAM), several studies were performed on a realistic geometry and, despite the absence of sufficient and more detailed experimental data, the simulation results are in agreement with the ones used for comparison. Overall, the accomplished work allowed us to conclude that the developed tool has great potential to study, in detail, the SLS process and the influence of its parameters and, therefore, to contribute to its optimization. Financially, I acknowledge the support of National Funds through FCT - Portuguese Foundation for Science and Technology, Reference UID/CTM/50025/2019 and UIDB/04436/2020, and project SIFA - Sistema Inteligente de Fabricação Aditiva (POCI 01-0247-FEDER-047108). I also acknowledge the computing facilities support by Search-ON2: Revitalization of HPC Infrastructure of UMinho (project no. NORTE-07-0162-FEDER-000086), co-funded by the North Portugal Regional Operational Programme (ON.2 -- O Novo Norte), under the National Strategic Reference Framework (NSRF), through the European Regional Development Fund (ERDF)
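
    The following is a heavily simplified conceptual sketch of one ingredient of such a simulation, not the thesis's DEM/CFD model (which couples LIGGGHTS and OpenFOAM at the particle scale): an explicit 2D heat-conduction step over a powder-bed section with a moving Gaussian laser spot, showing how laser power, spot size and scan speed enter a thermal model. All numerical values are illustrative placeholders, not calibrated material data.

```python
import numpy as np

# Explicit 2D heat conduction with a moving Gaussian laser source.
# Placeholder values only; boundaries wrap (np.roll), which is acceptable
# for a short illustrative track far from the domain edges.
nx, ny = 200, 100                      # grid cells
dx = 5e-5                              # cell size [m]
alpha = 1e-7                           # thermal diffusivity [m^2/s]
dt = 0.2 * dx**2 / alpha               # stable explicit time step [s]
heat_rate = 4e4                        # lumped peak heating rate under the spot [K/s]
spot_r, speed = 3e-4, 0.05             # spot radius [m], scan speed [m/s]

T = np.full((ny, nx), 300.0)           # initial bed temperature [K]
x = np.arange(nx) * dx
y = np.arange(ny) * dx
X, Y = np.meshgrid(x, y)

t, t_end = 0.0, 0.1                    # simulate one short scan track
while t < t_end:
    lx = speed * t                     # laser position along the track
    q = heat_rate * np.exp(-((X - lx) ** 2 + (Y - y[ny // 2]) ** 2) / spot_r ** 2)
    lap = (np.roll(T, 1, 0) + np.roll(T, -1, 0) +
           np.roll(T, 1, 1) + np.roll(T, -1, 1) - 4 * T) / dx ** 2
    T += dt * (alpha * lap + q)        # conduction plus absorbed laser energy
    t += dt

print("peak bed temperature [K]:", T.max())
```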