306 research outputs found

    Pix2Repair: Implicit Shape Restoration from Images

    We present Pix2Repair, an automated shape-repair approach that generates restoration shapes from images to repair fractured objects. Prior repair approaches require a high-resolution watertight 3D mesh of the fractured object as input; such meshes must be obtained using expensive 3D scanners and require manual cleanup, limiting accessibility and scalability. Pix2Repair instead takes an image of the fractured object as input and automatically generates a 3D-printable restoration shape. We contribute a novel shape function that deconstructs a latent code representing the fractured object into a complete shape and a break surface. We show restorations for synthetic fractures from the Geometric Breaks and Breaking Bad datasets, for cultural heritage objects from the QP dataset, and for real fractures from the Fantastic Breaks dataset. We overcome challenges in restoring axially symmetric objects by predicting view-centered restorations. Our approach outperforms shape-completion approaches adapted for shape repair in terms of chamfer distance, earth mover's distance, normal consistency, and the percentage of restorations generated.
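The chamfer distance used as an evaluation metric above can be sketched in a few lines. This is a minimal NumPy version of the standard symmetric formulation, not the authors' evaluation code:

```python
import numpy as np

def chamfer_distance(a, b):
    """Symmetric chamfer distance between point sets a (N,3) and b (M,3).

    For each point in a, take the squared distance to its nearest neighbour
    in b; do the same in the other direction; average both terms.
    """
    # Pairwise squared distances, shape (N, M).
    d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()
```

Identical point sets score zero; the metric grows as reconstructed points drift from the reference surface.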

    AI-based design methodologies for hot form quench (HFQ®)

    This thesis aims to develop advanced design methodologies that fully exploit the capabilities of the Hot Form Quench (HFQ®) stamping process for forming complex geometric features in high-strength aluminium alloy structural components. While previous research has focused on material models for FE simulations, such simulations are not suitable for early-phase design due to their high computational cost and expertise requirements. This project has two main objectives: first, to develop design guidelines for the early-stage design phase; and second, to create a machine-learning-based platform that can optimise 3D geometries under hot stamping constraints for both early- and late-stage design. These methodologies aim to facilitate the incorporation of HFQ capabilities into component geometry design, enabling the full realisation of its benefits. To achieve these objectives, two main efforts were undertaken. First, the analysis of aluminium alloys for stamping deep corners was simplified by identifying the effects of corner geometry and material characteristics on post-form thinning distribution. New equation sets were proposed to model the observed trends, and design maps were created to guide component design at early stages. Second, a platform was developed to optimise 3D geometries for stamping, using deep learning to incorporate manufacturing capabilities. This platform combined two neural networks: a geometry generator based on Signed Distance Functions (SDFs), and an image-based manufacturability surrogate model. The platform used gradient-based techniques to update the inputs to the geometry generator based on the surrogate model's manufacturability information. The effectiveness of the platform was demonstrated on two geometry classes, Corners and Bulkheads, with five case studies conducted to optimise under post-stamped thinning constraints.
Results showed that the platform allowed free morphing of complex geometries, leading to significant improvements in component quality. The research outcomes represent a significant contribution to the field of technologically advanced manufacturing methods and offer promising avenues for future research. The developed methodologies provide practical solutions for designers to identify optimal component geometries, ensuring manufacturing feasibility and reducing design development time and costs. The potential applications of these methodologies extend to real-world industrial settings and can significantly contribute to the continued advancement of the manufacturing sector.
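The gradient-based update loop described in the abstract, adjusting the geometry generator's inputs using the surrogate's manufacturability signal, can be illustrated with a toy sketch. The `generator` and `surrogate` below are hypothetical stand-ins for the SDF geometry generator and the thinning surrogate, and the finite-difference gradient is an assumption; the actual platform would backpropagate through the networks.

```python
import numpy as np

def generator(z):
    # Hypothetical stand-in for the SDF geometry generator network.
    return 2.0 * z

def surrogate(g):
    # Hypothetical manufacturability penalty: zero inside the feasible
    # region, quadratic outside (a toy "thinning constraint").
    return np.sum(np.maximum(g - 1.0, 0.0) ** 2)

def optimise(z, lr=0.05, steps=200, eps=1e-5):
    """Gradient descent on the surrogate score with respect to the
    generator *inputs*, using central finite differences."""
    z = z.astype(float).copy()
    for _ in range(steps):
        grad = np.zeros_like(z)
        for i in range(z.size):
            dz = np.zeros_like(z)
            dz[i] = eps
            grad[i] = (surrogate(generator(z + dz))
                       - surrogate(generator(z - dz))) / (2 * eps)
        z -= lr * grad
    return z
```

Starting from an infeasible latent code, the loop morphs the geometry until the penalty vanishes, while feasible coordinates are left untouched.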

    GeoUDF: Surface Reconstruction from 3D Point Clouds via Geometry-guided Distance Representation

    We present a learning-based method, namely GeoUDF, to tackle the long-standing and challenging problem of reconstructing a discrete surface from a sparse point cloud. Specifically, we propose a geometry-guided learning method for UDF and its gradient estimation that explicitly formulates the unsigned distance of a query point as the learnable affine averaging of its distances to the tangent planes of neighboring points on the surface. In addition, we model the local geometric structure of the input point cloud by explicitly learning a quadratic polynomial for each point. This not only facilitates upsampling the input sparse point cloud but also naturally induces unoriented normals, which further augment UDF estimation. Finally, to extract triangle meshes from the predicted UDF, we propose a customized edge-based marching cubes module. We conduct extensive experiments and ablation studies to demonstrate the significant advantages of our method over state-of-the-art methods in terms of reconstruction accuracy, efficiency, and generality. The source code is publicly available at https://github.com/rsy6318/GeoUDF.
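The core formulation, the unsigned distance of a query as an affine average of its distances to neighboring tangent planes, can be sketched as follows. GeoUDF learns the averaging weights and derives normals from learned quadratic patches; the inverse-distance weights and given normals here are simplifying stand-ins.

```python
import numpy as np

def udf_estimate(q, points, normals, k=8):
    """Unsigned distance of query q, approximated as an affine (weights sum
    to one) average of q's distances to the tangent planes of its k nearest
    surface points. The learned weights of GeoUDF are replaced by simple
    inverse-distance weights for illustration."""
    d = np.linalg.norm(points - q, axis=1)
    idx = np.argsort(d)[:k]
    # Distance from q to the tangent plane at p_i with unit normal n_i.
    plane_d = np.abs(np.einsum('ij,ij->i', q - points[idx], normals[idx]))
    w = 1.0 / (d[idx] + 1e-9)
    w /= w.sum()                      # affine: weights sum to 1
    return float(np.dot(w, plane_d))
```

For a query above a flat, densely sampled patch, every tangent-plane distance equals the true distance, so the affine average is exact.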

    3D Representation Learning for Shape Reconstruction and Understanding

    The real world we live in is inherently composed of 3D objects. However, most existing work in computer vision focuses on images or videos, where 3D information is inevitably lost through camera projection. Traditional methods typically rely on hand-crafted algorithms and features, with many constraints and geometric priors, to understand the real world. Following the trend of deep learning, however, there has been exponential growth in research based on deep neural networks that learn 3D representations of complex shapes and scenes, leading to cutting-edge applications in augmented reality (AR), virtual reality (VR), and robotics, and making 3D representation learning one of the most important directions in computer vision and computer graphics. This thesis aims to build an intelligent system with dynamic 3D representations that can change over time to understand and recover the real world with semantic, instance, and geometric information, eventually bridging the gap between the real world and the digital world. As a first step towards these challenges, this thesis explores both explicit and implicit representations by directly addressing existing open problems in these areas. It starts from neural implicit representation learning for 3D scene representation and understanding, and moves to a parametric-model-based explicit 3D reconstruction method. Extensive experimentation across various benchmarks and domains demonstrates the superiority of our methods over previous state-of-the-art approaches, enabling many real-world applications. Based on the proposed methods and current observations of open problems, this thesis concludes with a comprehensive summary and potential future research directions.

    Spectral methods for solving elliptic PDEs on unknown manifolds

    In this paper, we propose a mesh-free numerical method for solving elliptic PDEs on unknown manifolds, identified with randomly sampled point-cloud data. The PDE solver is formulated as a spectral method in which the test-function space is the span of the leading eigenfunctions of the Laplacian operator, approximated from the point-cloud data. While the framework is flexible enough to accommodate any test-function space, we consider the eigensolutions of a weighted Laplacian obtained from a symmetric Radial Basis Function (RBF) method induced by a weak approximation of a weighted Laplacian on an appropriate Hilbert space. In particular, we consider a test-function space that encodes the geometry of the data yet does not require us to identify or use the sampling density of the point cloud. To attain a more accurate approximation of the expansion coefficients, we adopt a second-order tangent-space estimation method to improve the RBF interpolation accuracy in estimating the tangential derivatives. This spectral framework allows us to efficiently solve the PDE many times subject to different parameters, reducing the computational cost in related inverse-problem applications. In a well-posed elliptic PDE setting with randomly sampled point-cloud data, we provide a theoretical analysis demonstrating the convergence of the proposed solver as the sample size increases. We also report numerical studies that show the convergence of the spectral solver on simple manifolds and on unknown, rough surfaces. Our numerical results suggest that the proposed method is more accurate than a graph-Laplacian-based solver on smooth manifolds; on rough manifolds, the two approaches are comparable. Owing to the flexibility of the framework, we empirically found improved accuracy on both smoothed and unsmoothed Stanford bunny domains by blending the graph Laplacian eigensolutions with the RBF interpolator.
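The spectral Galerkin step, solving the PDE in the span of leading Laplacian eigenfunctions, reduces to a diagonal system in the eigenbasis. A minimal sketch for a model problem (L + cI)u = f, with any symmetric positive semi-definite matrix L standing in for the RBF-approximated weighted Laplacian:

```python
import numpy as np

def spectral_solve(L, f, c=1.0, m=10):
    """Solve (L + c I) u = f in the span of the m leading (smallest-
    eigenvalue) eigenfunctions of the symmetric matrix L, as in a Galerkin
    spectral method. In the paper L is built from point-cloud data via an
    RBF approximation; here it is any symmetric PSD stand-in."""
    lam, phi = np.linalg.eigh(L)      # eigenpairs, ascending eigenvalues
    phi_m = phi[:, :m]
    # The Galerkin system is diagonal in the eigenbasis:
    # (lam_i + c) u_i = <phi_i, f>.
    coeffs = phi_m.T @ f / (lam[:m] + c)
    return phi_m @ coeffs
```

Truncating to m << n leading modes is what makes re-solving for many parameter values cheap: the eigendecomposition is reused and each solve is a diagonal scaling.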

    Discrete slip plane modeling of heterogeneous microplasticity: Formulation and integration with experiments


    Vitruvio: 3D Building Meshes via Single Perspective Sketches

    Today's architectural engineering and construction (AEC) software requires a learning curve to generate a three-dimensional building representation. This limits the ability to quickly validate the volumetric implications of an initial design idea communicated via a single sketch. Allowing designers to translate a single sketch into a 3D building will enable owners to instantly visualize 3D project information without the associated cognitive load. While previous state-of-the-art (SOTA) data-driven methods for single-view reconstruction (SVR) have shown outstanding results in reconstructing from a single image or sketch, they lack specific applications, analyses, and experiments in the AEC domain. This research addresses that gap by introducing Vitruvio, the first deep learning method focused solely on buildings that aims to convert a single sketch into a 3D building mesh. Vitruvio adapts the Occupancy Network to SVR tasks on a specific building dataset (Manhattan 1K). This adaptation brings two main improvements. First, it accelerates the inference process by more than 26% (from 0.5 s to 0.37 s). Second, it increases the reconstruction accuracy (measured by the Chamfer distance) by 18%. During this adaptation to the AEC domain, we evaluate the effect of building orientation on the learning procedure, since it constitutes an important design factor. While aligning all the buildings to a canonical pose improved the overall quantitative metrics, it failed to capture fine-grained details in more complex building shapes (as shown in our qualitative analysis). Finally, Vitruvio outputs a 3D-printable building mesh with arbitrary topology and genus from a single perspective sketch, providing a step forward in allowing owners and designers to communicate 3D information via an effective, intuitive, and universal 2D communication medium: the sketch.

    Rapid model-guided design of organ-scale synthetic vasculature for biomanufacturing

    Our ability to produce human-scale bio-manufactured organs is critically limited by the need for vascularization and perfusion. For tissues of variable size and shape, including arbitrarily complex geometries, designing and printing vasculature capable of adequate perfusion has posed a major hurdle. Here, we introduce a model-driven design pipeline combining accelerated optimization methods for fast synthetic vascular tree generation with computational hemodynamics models. We demonstrate rapid generation, simulation, and 3D printing of synthetic vasculature in complex geometries, from small tissue constructs to organ-scale networks. We introduce key algorithmic advances that altogether accelerate synthetic vascular generation by more than 230-fold compared to standard methods and enable their use in arbitrarily complex shapes through localized implicit functions. Furthermore, we provide techniques for joining vascular trees into watertight networks suitable for hemodynamic CFD and 3D fabrication. We demonstrate that organ-scale vascular network models can be generated in silico within minutes and used to perfuse engineered and anatomic models, including a bioreactor, annulus, bi-ventricular heart, and gyrus. We further show that this flexible pipeline can be applied to two common modes of bioprinting: free-form reversible embedding of suspended hydrogels, and writing into soft matter. Our synthetic vascular tree generation pipeline enables rapid, scalable vascular model generation and fluid analysis for bio-manufactured tissues, necessary for future scale-up and production.
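Synthetic vascular tree generators typically size vessel radii at bifurcations with a power-law rule such as Murray's law; the abstract does not state which rule this pipeline uses, so the sketch below is purely illustrative of that class of sizing rules.

```python
def murray_parent_radius(r_children, gamma=3.0):
    """Radius of a parent vessel from its children's radii under a Murray-
    type law, r_p^gamma = sum_i r_i^gamma. With gamma = 3 this is classical
    Murray's law; the exact rule used by the pipeline above is an
    assumption, not taken from the source."""
    return sum(r ** gamma for r in r_children) ** (1.0 / gamma)
```

For two equal daughters of radius r, the parent radius is r * 2^(1/3), the familiar Murray bifurcation ratio.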

    Autonomous 3D Urban and Complex Terrain Geometry Generation and Micro-Climate Modelling Using CFD and Deep Learning

    Sustainable building design requires a clear understanding and realistic modelling of the complex interaction between climate and the built environment to create safe and comfortable outdoor and indoor spaces. This necessitates unprecedented urban climate modelling at high temporal and spatial resolution. The interaction between complex urban geometries and the microclimate is characterized by complex transport mechanisms, and the difficulty of generating geometric and physics boundary conditions in an automated manner is hindering the progress of computational methods in urban design. Thus, realistic and pragmatic numerical urban micro-climate modelling for wind engineering, environmental, and building energy simulation applications must address the complexity of the geometry and the variability of the surface types involved in urban exposures. The original contribution to knowledge of this research is an end-to-end workflow that employs a cutting-edge deep learning model for image segmentation to generate building footprint polygons autonomously, and combines those polygons with LiDAR data to generate level-of-detail-three (LOD3) 3D building models, tackling the geometry modelling issue in climate modelling and solar power potential assessment. Urban and topographic geometric modelling is a challenging task in climate model assessment. This work describes a deep learning technique based on the U-Net architecture to automate 3D building model generation by combining satellite imagery with LiDAR data. The deep learning model registered a mean squared error of 0.02. The extracted building polygons were extruded using height information from the corresponding LiDAR data, and the building roof structures were modelled from the same point-cloud data. The method has the potential to automate the generation of urban-scale 3D building models and can be used for city-wide applications.
The advantage of applying a deep learning model to an image-processing task is that, once trained, it can be applied to a new set of input images to extract building footprint polygons autonomously. In addition, the model can be improved over time with minimal adjustment when a higher-quality dataset becomes available, and the trained parameters can be refined further, building on previously learned features. Application examples for pedestrian-level wind and solar energy availability assessment, as well as modelling wind flow over complex terrain, are presented.
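The extrusion step, turning a segmented footprint polygon plus a LiDAR-derived height into a simple 3D prism, can be sketched as follows; the function name and the vertex/quad representation are illustrative, and roof-structure modelling is omitted.

```python
def extrude_footprint(polygon, height):
    """Extrude a 2D building footprint (list of (x, y) vertices, ordered
    around the boundary) into a prism: a bottom ring at z = 0, a top ring
    at z = height, and one wall quad per footprint edge.
    Returns (vertices, wall_quads) with quads as vertex-index tuples."""
    n = len(polygon)
    bottom = [(x, y, 0.0) for x, y in polygon]
    top = [(x, y, float(height)) for x, y in polygon]
    verts = bottom + top
    # Wall quad i joins edge (i, i+1) on the bottom ring to the top ring.
    walls = [(i, (i + 1) % n, n + (i + 1) % n, n + i) for i in range(n)]
    return verts, walls
```

Applied per footprint with the per-building height sampled from the LiDAR point cloud, this yields a block model that a separate roof-fitting step could refine towards LOD3.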

    Visual-Guided Mesh Repair

    Mesh repair is a long-standing challenge in computer graphics and related fields. Converting defective meshes into watertight manifold meshes can greatly benefit downstream applications such as geometric processing, simulation, fabrication, learning, and synthesis. In this work, we first introduce three ray-tracing-based visual measures for visibility, orientation, and openness. We then present a novel mesh repair framework that incorporates these visual measures into several critical steps, i.e., open-surface closing, face reorientation, and global optimization, to effectively repair defective meshes with gaps, holes, self-intersections, degenerate elements, and inconsistent orientations. Our method reduces unnecessary mesh complexity without compromising geometric accuracy or visual quality, while preserving input attributes such as UV coordinates for rendering. We evaluate our approach on hundreds of models randomly selected from ShapeNet and Thingi10K, demonstrating its effectiveness and robustness compared to existing approaches.
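The idea of ray-traced visual measures can be illustrated with a small sketch: a Möller–Trumbore ray/triangle intersection plus a crude "front-hit fraction" over sampled rays. The paper's actual visibility, orientation, and openness measures are more elaborate; this is only a hedged approximation of the idea.

```python
import numpy as np

def ray_triangle(orig, d, tri, eps=1e-9):
    """Möller–Trumbore ray/triangle intersection. Returns (t, front):
    hit distance t (or None) and whether the front side, with respect to
    the counter-clockwise winding normal, was hit."""
    v0, v1, v2 = tri
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(d, e2)
    det = np.dot(e1, p)
    if abs(det) < eps:
        return None, False            # ray parallel to triangle plane
    inv = 1.0 / det
    s = orig - v0
    u = np.dot(s, p) * inv
    if u < 0 or u > 1:
        return None, False
    q = np.cross(s, e1)
    v = np.dot(d, q) * inv
    if v < 0 or u + v > 1:
        return None, False
    t = np.dot(e2, q) * inv
    if t <= eps:
        return None, False
    return t, det > 0                 # det > 0: ray meets the front side

def front_hit_fraction(tri, mesh, samples):
    """Toy orientation measure: the fraction of sample rays
    (origin, direction) whose first hit in `mesh` is tri's front side."""
    score = 0
    for orig, d in samples:
        best = (np.inf, False, None)
        for j, t in enumerate(mesh):
            dist, front = ray_triangle(orig, d, t)
            if dist is not None and dist < best[0]:
                best = (dist, front, j)
        if best[2] is not None and mesh[best[2]] is tri and best[1]:
            score += 1
    return score / len(samples)
```

A consistently outward-facing, unoccluded face scores near one; an inverted or buried face scores near zero, which is the signal a reorientation or closing step can exploit.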