
    Non-Uniform Planar Slicing for Robot-Based Additive Manufacturing

    Planar slicing algorithms with constant layer thickness are widely used for geometry processing in Additive Manufacturing (AM). Since the build direction is fixed, a staircase effect is produced that degrades the final surface finish, and support structures are required for overhanging portions. To overcome these limits, AM is combined with manipulators and working tables with multiple degrees of freedom. This approach, called Robot-Based Additive Manufacturing (RBAM), aims to increase the manufacturing flexibility of traditional printers by enabling material deposition in multiple directions. In particular, the deposition direction is changed at each layer, which requires non-uniform thickness slicing. The total number of layers, the volume of the support structures, and the manufacturing time are reduced, while the surface finish and mechanical performance of the final product are improved. This paper presents an algorithm for non-uniform planar slicing developed in Rhinoceros and Grasshopper. It processes the input geometry and uses parameters to capture manufacturing limits. It mostly targets curved geometries to remove the need for support structures, also increasing part quality.
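
    The abstract describes the constraint driving non-uniform slicing (tilted planes whose layer thickness must stay within process limits) but not the algorithm itself. Below is a minimal geometric sketch of that constraint, assuming a simple model in which consecutive slicing planes tilt by an angle increment and the part has a characteristic width; the function name and the thickness model are illustrative, not taken from the paper.

```python
import numpy as np

def nonuniform_slice_angles(part_width, t_min, t_max, total_angle):
    """Distribute slicing-plane tilt angles so every layer's thickness stays
    within [t_min, t_max] across the part.

    If two consecutive planes differ in tilt by d_theta, the layer thickness
    varies by roughly part_width * sin(d_theta) from its thin edge to its
    thick edge, so d_theta is capped accordingly (toy model, not the paper's
    Grasshopper implementation)."""
    d_theta_max = np.arcsin((t_max - t_min) / part_width)
    n_layers = int(np.ceil(total_angle / d_theta_max))
    return np.linspace(0.0, total_angle, n_layers + 1)

# Sweep the build direction by 90 degrees over a 40 mm wide part,
# with layer thickness allowed to range from 0.2 to 0.5 mm.
angles = nonuniform_slice_angles(40.0, 0.2, 0.5, np.pi / 2)
print(len(angles) - 1, "layers")
```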

    A review of geometry representation and processing methods for Cartesian and multiaxial robot-based additive manufacturing

    Nowadays, robot-based additive manufacturing (RBAM) is emerging as a potential solution to increase manufacturing flexibility. This technology makes it possible to change the orientation of the material deposition unit during printing, so that complex parts can be fabricated with optimized material distribution. In this context, the representation of part geometries and their subsequent processing become aspects of primary importance. In particular, part orientation, multiaxial deposition, slicing, and infill strategies must be properly evaluated to obtain satisfactory outputs and avoid printing failures. Some advanced features can be found in commercial slicing software (e.g., adaptive slicing, advanced path strategies, and non-planar slicing), although the procedure may prove excessively constrained due to the limited number of available options. Several approaches and algorithms have been proposed for each phase, and their combination must be determined carefully to achieve the best results. This paper reviews the state-of-the-art methods for the representation of geometries and the subsequent geometry processing for RBAM. For each category, tools and software found in the literature and available commercially are discussed. Comparison tables are then provided to assist in the selection of the most appropriate approaches. The presented review can help designers, researchers, and practitioners identify possible future directions and open issues.

    A practical comparison between two powerful PCC codecs

    Recent advances in the consumption of 3D content create the need for efficient ways to visualize and transmit it. As a result, methods to acquire such content have been evolving, leading to the development of new representations, namely point clouds and light fields. A point cloud represents a set of points, each with Cartesian coordinates (x, y, z) and possibly further attributes (color, material, texture, etc.). This kind of representation changes the way 3D content is consumed and has a wide range of applications, from video games to medical uses. However, since this type of data carries so much information, it is data-heavy, making the storage and transmission of content a daunting task. To address this issue, MPEG created a point cloud coding standardization project, giving birth to V-PCC (Video-based Point Cloud Coding) and G-PCC (Geometry-based Point Cloud Coding) for static content. Firstly, a general analysis of point clouds is made, spanning from their possible uses to their acquisition. Secondly, point cloud codecs are studied, namely V-PCC and G-PCC from MPEG. Then, the state of the art in quality evaluation is surveyed, covering both subjective and objective evaluation. Finally, the activities of JPEG Pleno Point Cloud, in which the author actively collaborated, are reported, with the comparative results of the two codecs and the metrics used.
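
    As a minimal illustration of the representation described above, the following sketch stores a point cloud as parallel arrays of geometry and optional per-point attributes; the class and field names are illustrative, not part of V-PCC or G-PCC.

```python
import numpy as np

class PointCloud:
    """A point cloud as parallel arrays, one row per point: mandatory
    (x, y, z) geometry plus optional per-point attributes such as color."""
    def __init__(self, xyz, rgb=None):
        self.xyz = np.asarray(xyz, dtype=np.float64)                    # (N, 3) coordinates
        self.rgb = None if rgb is None else np.asarray(rgb, np.uint8)  # (N, 3) colors

    def __len__(self):
        return self.xyz.shape[0]

cloud = PointCloud(xyz=[[0.0, 0.0, 0.0], [1.0, 0.5, 2.0]],
                   rgb=[[255, 0, 0], [0, 255, 0]])
print(len(cloud))  # 2
```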

    Analysis and Generation of Quality Polytopal Meshes with Applications to the Virtual Element Method

    This thesis explores the concept of the quality of a mesh, intended as the discretization of a two- or three-dimensional domain. The topic is interdisciplinary in nature, as meshes are massively used in several fields by both the geometry processing and the numerical analysis communities. The goal is to produce a mesh with good geometrical properties and the lowest possible number of elements, able to produce results in a target range of accuracy. In other words, a good quality mesh that is also cheap to handle, overcoming the typical trade-off between quality and computational cost. To reach this goal, we first need to answer the question: "How, and how much, does the accuracy of a numerical simulation or a scientific computation (e.g., rendering, printing, modeling operations) depend on the particular mesh adopted to model the problem? And which geometrical features of the mesh most influence the result?" We present a comparative study of the different mesh types, mesh generation techniques, and mesh quality measures currently available in the literature on both engineering and computer graphics applications. This analysis leads to a precise definition of the notion of quality for a mesh, in the particular context of numerical simulations of partial differential equations with the virtual element method, and to the consequent construction of criteria to determine and optimize the quality of a given mesh. Our main contribution is a new mesh quality indicator for polytopal meshes, able to predict the performance of the virtual element method on a particular mesh before running the simulation. Closely related to this, we also define a quality agglomeration algorithm that optimizes the quality of a mesh by wisely agglomerating groups of neighboring elements. The accuracy and reliability of both tools are thoroughly verified in a series of tests in different scenarios.
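
    The abstract does not state the proposed indicator; as a generic example of the kind of per-element geometric quality measure such work builds on, the sketch below scores a 2D polygonal element by how close its area is to that of the regular polygon with the same perimeter. This is a textbook-style measure, not the thesis's indicator.

```python
import numpy as np

def polygon_regularity(vertices):
    """Shape-regularity score in (0, 1]: ratio of the polygon's area to the
    area of the regular polygon with the same perimeter and vertex count
    (which maximizes area). Returns 1.0 for a regular element and tends to 0
    for stretched or degenerate ones."""
    v = np.asarray(vertices, dtype=float)
    n = len(v)
    x, y = v[:, 0], v[:, 1]
    # Shoelace formula for the area
    area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
    perimeter = np.linalg.norm(v - np.roll(v, -1, axis=0), axis=1).sum()
    # A regular n-gon with perimeter P has area P^2 / (4 n tan(pi / n))
    area_regular = perimeter**2 / (4 * n * np.tan(np.pi / n))
    return area / area_regular

print(polygon_regularity([[0, 0], [1, 0], [1, 1], [0, 1]]))  # 1.0 for a square
```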

    Composing quadrilateral meshes for animation

    The modeling-by-composition paradigm can be a powerful tool in modern animation pipelines. We propose two novel interactive techniques to compose 3D assets that enable artists to freely remove, detach, and combine components of organic models. The idea behind our methods is to preserve most of the original information in the input characters and blend accordingly where necessary. The first method, QuadMixer, provides a robust tool to compose the quad layouts of watertight pure-quadrilateral meshes, exploiting the Boolean operations defined on triangles. The quad layout is a crucial property for many applications, since it conveys important information that would otherwise be destroyed by techniques that aim only at preserving the shape. Our technique leaves untouched all the quads in the patches that are not involved in the blending. The resulting meshes preserve the originally designed edge flows which, by construction, are captured and incorporated into the new quads. SkinMixer extends this approach to compose skinned models, taking into account not only the surface but also the data structures for animating the character. We propose a new operation-based technique that preserves and smoothly merges meshes, skeletons, and skinning weights. The retopology approach of QuadMixer is extended to work on quad-dominant and arbitrarily complex surfaces. Instead of relying on Boolean operations on triangle meshes, we manipulate signed distance fields to generate an implicit surface. The results preserve most of the information in the input assets, blending accordingly in the intersection regions. The resulting characters are ready to be used in animation pipelines. Given the high quality of the results, we believe that our methods could have a significant impact on the entertainment industry. Integrated into current software for 3D modeling, they would provide a powerful tool for artists: automatically reusing parts of well-designed characters could lead to a new approach to creating models and significantly reduce the cost of the process.
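
    The abstract says SkinMixer blends implicit surfaces obtained from signed distance fields. As an illustration of that general technique, the sketch below blends two analytic SDFs with a standard polynomial smooth-minimum; the spheres and the smoothing operator are illustrative, not the paper's actual surfaces or blending operator.

```python
import numpy as np

def sphere_sdf(center, radius):
    """Signed distance to a sphere: negative inside, positive outside."""
    c = np.asarray(center, dtype=float)
    return lambda p: np.linalg.norm(np.asarray(p, dtype=float) - c) - radius

def smooth_union(d1, d2, k=0.2):
    """Polynomial smooth minimum of two signed distances: acts like
    min(d1, d2) away from the intersection and blends smoothly inside a
    band of width k around it (a common SDF composition trick)."""
    h = np.clip(0.5 + 0.5 * (d2 - d1) / k, 0.0, 1.0)
    return d2 * (1.0 - h) + d1 * h - k * h * (1.0 - h)

a = sphere_sdf((0.0, 0.0, 0.0), 1.0)
b = sphere_sdf((1.2, 0.0, 0.0), 1.0)
blended = lambda p: smooth_union(a(p), b(p))
print(blended((0.6, 0.0, 0.0)))  # negative: the point lies inside the blend
```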

    Logic learning and optimized drawing: two hard combinatorial problems

    Nowadays, information extraction from large datasets is a recurring operation in countless fields of application. The purpose of this thesis is to ideally follow the data flow along its journey, describing some hard combinatorial problems that arise from two key processes, one consecutive to the other: information extraction and representation. The approaches considered here focus mainly on metaheuristic algorithms, to address the need for fast and effective optimization methods. The problems studied include data extraction instances, such as Supervised Learning in Logic Domains and the Max Cut-Clique Problem, as well as two different Graph Drawing Problems. Moreover, stemming from these main topics, additional themes are discussed, namely two different approaches to handle Information Variability in Combinatorial Optimization Problems (COPs), and Topology Optimization of lightweight concrete structures.

    Adaptive Layout for Interactive Documents

    This thesis presents a novel approach to creating automated layouts for rich illustrative material that adapt to screen size and contextual requirements. The adaptation considers not only the global layout but also the content and layout adaptation of individual illustrations within it. A unique solution has been developed that integrates constraint-based and force-directed techniques to create adaptive grid-based and non-grid layouts. A set of annotation layouts is developed which adapts the annotated illustrations to match the contextual requirements over time.
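
    The force-directed component named in the abstract is a standard graph-layout technique; the following sketch shows one iteration of a basic spring embedder (Fruchterman-Reingold style forces), as a generic illustration rather than the thesis's layout engine.

```python
import numpy as np

def force_directed_step(pos, edges, k=1.0, step=0.05):
    """One iteration of a basic spring embedder: every node pair repels with
    force k^2 / d, and connected pairs attract with force d^2 / k."""
    disp = np.zeros_like(pos)
    n = len(pos)
    for i in range(n):                       # pairwise repulsion
        for j in range(i + 1, n):
            delta = pos[i] - pos[j]
            d = np.linalg.norm(delta) + 1e-9
            f = (k * k / d) * (delta / d)
            disp[i] += f
            disp[j] -= f
    for i, j in edges:                       # attraction along edges
        delta = pos[i] - pos[j]
        d = np.linalg.norm(delta) + 1e-9
        f = (d * d / k) * (delta / d)
        disp[i] -= f
        disp[j] += f
    return pos + step * disp

pos = np.random.rand(4, 2)                   # 4 nodes in the plane
for _ in range(100):
    pos = force_directed_step(pos, edges=[(0, 1), (1, 2), (2, 3)])
```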

    Characterization of Porosity Defects in Selectively Laser Melted IN718 and Ti-6Al-4V via Synchrotron X-Ray Computed Tomography

    Additive manufacturing (AM) is a method of fabrication involving the joining of feedstock material to form a structure. Additive manufacturing has been developed for use with polymers, ceramics, composites, biomaterials, and metals. Of the metal additive manufacturing techniques, one of the most commonly employed for commercial and government applications is selective laser melting (SLM). SLM operates by using a high-powered laser to melt feedstock metal powder, layer by layer, until the desired near-net shape is completed. Owing to how AM, and particularly SLM, works, it holds much promise in the ability to design parts without geometrical constraint, manufacture them cost-effectively, and reduce material waste. Because of this, SLM has gained traction in the aerospace, automotive, and medical device industries, which often use uniquely shaped parts for specific functions. These industries also tend to use high-performance metallic alloys that can withstand the sometimes-extreme operating conditions the parts experience. Two alloys often used in these parts are Inconel 718 (IN718) and Ti-6Al-4V (Ti64). Both of these materials have been routinely used in SLM processing but are often marked by porosity defects in the as-built state. Since large amounts of porosity are known to limit mechanical performance, especially fatigue life, there is a general need to inspect and quantify this material characteristic before parts are used in these industries. One of the most advanced porosity inspection methods is X-ray computed tomography (CT). CT uses a detector to capture X-rays transmitted through the part; the detector images are then reconstructed to create a tomograph that can be analyzed using image processing techniques to visualize and quantify porosity. In this research, CT was performed on both materials at a 30 μm "low resolution" (LR) for different build orientations and processing conditions. Furthermore, a synchrotron beamline was used to conduct CT on small samples of the SLM IN718 and Ti64 specimens at a 0.65 μm "high resolution" (HR), which to the author's knowledge is the highest resolution reported for porosity CT investigations of SLM IN718 and matches the highest reported for SLM Ti64. Tomographs were reconstructed using TomoPy 1.0.0, processed using ImageJ and Avizo 9.0.2, and quantified in Avizo and Matlab. Results showed a relatively low amount of porosity in the materials overall, but a several-order-of-magnitude increase in quantifiable porosity volume fraction from LR to HR observations. Furthermore, quantifications and visualizations showed a propensity for more and larger pores near the free surfaces of the specimens. Additionally, a plurality of pores in the HR samples were found to be in close proximity (10 μm or less) to each other.
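
    The abstract describes thresholding reconstructed tomographs to quantify porosity (done there with ImageJ, Avizo, and Matlab). As a rough sketch of that style of quantification, the following assumes a grayscale CT volume in which pores are darker than the material; the threshold, connectivity, and names are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def porosity_stats(tomograph, material_threshold, voxel_size_um):
    """Treat voxels darker than the threshold as pores, group connected
    pore voxels into individual pores (default 6-connectivity in 3D), and
    report the overall volume fraction plus each pore's volume."""
    pores = tomograph < material_threshold              # binary pore mask
    labels, n_pores = ndimage.label(pores)
    volume_fraction = np.count_nonzero(pores) / tomograph.size
    voxel_volume = voxel_size_um ** 3
    pore_volumes = ndimage.sum_labels(
        pores, labels, index=np.arange(1, n_pores + 1)) * voxel_volume
    return volume_fraction, pore_volumes                # fraction, volumes in um^3

volume = np.random.rand(64, 64, 64)                     # stand-in for a tomograph
frac, sizes = porosity_stats(volume, material_threshold=0.01, voxel_size_um=0.65)
print(f"porosity: {frac:.4%}, pores found: {len(sizes)}")
```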

    Finite element simulation of additive manufacturing with enhanced accuracy

    This thesis, presented as a compendium of publications, develops numerical methods to improve the accuracy and computational efficiency of part-scale simulation of metal Additive Manufacturing (AM, or 3D printing) processes. AM is characterized by multiple scales in space and time, as well as multiple complex physics occurring in three-dimensional geometries that grow in time, making its simulation a remarkable computational challenge. To this end, the computational framework is built by addressing four key topics: (1) a Finite Element (FE) technology with enhanced stress/strain accuracy, including the incompressible limit; (2) an Adaptive Mesh Refinement (AMR) strategy accounting for geometric and solution accuracy; (3) a coarsening correction strategy to avoid loss of information in the coarsening AMR procedure; and (4) a GCode-based simulation tool that uses the exact geometric and process-parameter data provided to the actual AM machinery. In this context, the mixed displacement/deviatoric-strain/pressure (u/e/p) FE formulation in (1) is adopted to solve incompressible problems resulting from the isochoric plastic flow of the Von Mises criterion typical of metals. The enhanced stress/strain accuracy of the u/e/p formulation over the standard and u/p formulations is verified in a set of numerical benchmarks under isothermal and non-isothermal conditions. The multi-criteria AMR strategy in (2) improves computational efficiency while keeping the number of FEs controlled, without the strictness of the commonly adopted 2:1 balance scheme. Avoiding this scheme permits large jumps in refinement level between adjacent FEs, which improves the mesh resolution in the region of interest and keeps the mesh coarse elsewhere. Moving the FE solution from a fine mesh to a coarse mesh introduces loss of information. To prevent this, the coarsening correction strategy in (3) restores the fine solution on the coarse mesh, reducing computational cost while keeping the accuracy of the fine-mesh solution. Lastly, design flexibility is one of the main advantages of AM over traditional manufacturing processes: complex components can be designed, and process parameters (power input, speed, waiting pauses, among others) can be changed throughout the process. In (4), a GCode-based simulation tool is developed that replicates the exact path travelled and the process parameters delivered to the AM machinery. Together with the AMR strategy, the GCode-based tool automatically generates an embedded fitted Cartesian FE mesh for the evolving domain, removing the challenging task of mesh manipulation from the end-user. The FE framework is built on a high-performance computing environment. It accelerates process-to-performance understanding and minimizes the number of trial-and-error experiments, two key aspects for exploiting the technology in an industrial environment.
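
    The GCode files mentioned above carry both the deposition path and the process parameters. As a toy illustration of the kind of input such a tool consumes (not the thesis's actual parser), the sketch below reads linear-move commands and yields target positions together with feed rate and power.

```python
import re

def parse_gcode_moves(lines):
    """Minimal reader for linear moves (G0/G1): yields (x, y, z) targets with
    the current feed rate F and power S. Words are modal: values persist
    until a later command overrides them."""
    state = {"X": 0.0, "Y": 0.0, "Z": 0.0, "F": 0.0, "S": 0.0}
    word = re.compile(r"([XYZFS])(-?\d+\.?\d*)")
    for line in lines:
        line = line.split(";", 1)[0].strip()     # drop comments
        tokens = line.split()
        if not tokens or tokens[0] not in ("G0", "G1"):
            continue                             # keep only linear moves
        for letter, value in word.findall(line):
            state[letter] = float(value)
        yield (state["X"], state["Y"], state["Z"]), state["F"], state["S"]

for point, feed, power in parse_gcode_moves(
        ["G1 X10.0 Y0.0 F1200 S200 ; deposit", "G1 Y5.0"]):
    print(point, feed, power)
```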

    Construction and commissioning of a technological prototype of a high-granularity semi-digital hadronic calorimeter

    A large prototype of 1.3 m³ was designed and built as a demonstrator of the semi-digital hadronic calorimeter (SDHCAL) concept proposed for the future ILC experiments. The prototype is a sampling hadronic calorimeter of 48 units. Each unit is built of an active layer made of a 1 m² Glass Resistive Plate Chamber (GRPC) detector placed inside a cassette whose walls are made of stainless steel. The cassette also contains the electronics used to read out the GRPC detector. The lateral granularity of the active layer is provided by the electronics pick-up pads of 1 cm² each. The cassettes are inserted into a self-supporting mechanical structure, also built of stainless steel plates, which together with the cassette walls plays the role of the absorber. The prototype was designed to be very compact, and significant efforts were made to minimize the number of service cables in order to optimize the efficiency of the Particle Flow Algorithm techniques to be used in the future ILC experiments. The different components of the SDHCAL prototype were studied individually, and strict criteria were applied for the final selection of these components. Basic calibration procedures were performed after the prototype was assembled. The prototype is the first of a series of new-generation detectors equipped with a power-pulsing mode intended to reduce the power consumption of this highly granular detector. A dedicated acquisition system was developed to deal with the output of more than 440,000 electronic channels in both trigger and triggerless modes. After its completion in 2011, the prototype was commissioned using cosmic rays and particle beams at CERN.