28 research outputs found

    Learning Signed Distance Functions from Noisy 3D Point Clouds via Noise to Noise Mapping

    Full text link
    Learning signed distance functions (SDFs) from 3D point clouds is an important task in 3D computer vision. However, without ground truth signed distances, point normals, or clean point clouds, current methods still struggle to learn SDFs from noisy point clouds. To overcome this challenge, we propose to learn SDFs via a noise-to-noise mapping, which does not require any clean point cloud or ground truth supervision for training. Our novelty lies in the noise-to-noise mapping, which can infer a highly accurate SDF of a single object or scene from multiple or even single noisy point cloud observations. Our novel learning manner is supported by modern LiDAR systems, which capture multiple noisy observations per second. We achieve this with a novel loss that enables statistical reasoning on point clouds and maintains geometric consistency even though point clouds are irregular, unordered, and have no point correspondence among noisy observations. Our evaluation on widely used benchmarks demonstrates our superiority over state-of-the-art methods in surface reconstruction, point cloud denoising, and upsampling. Our code, data, and pre-trained models are available at https://github.com/mabaorui/Noise2NoiseMapping/
    Comment: To appear at ICML 2023. Code and data are available at https://github.com/mabaorui/Noise2NoiseMapping
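    A minimal PyTorch-style sketch of the general idea described above: pull one noisy observation onto the current zero level set of a learned SDF and supervise the pulled points with a second noisy observation of the same object, so no clean points or ground-truth distances are needed. The network, the Chamfer-style comparison, and all names are illustrative assumptions; the paper's actual statistical loss differs.

```python
import torch
import torch.nn as nn

class SDFNet(nn.Module):
    """Small MLP mapping a 3D point to a scalar signed distance (illustrative)."""
    def __init__(self, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )
    def forward(self, x):
        return self.net(x)

def pull_to_surface(sdf, q):
    """Move queries q along the negative SDF gradient by their predicted distance."""
    q = q.clone().requires_grad_(True)
    d = sdf(q)
    grad = torch.autograd.grad(d.sum(), q, create_graph=True)[0]
    grad = grad / (grad.norm(dim=-1, keepdim=True) + 1e-8)
    return q - d * grad

def chamfer(a, b):
    """Symmetric Chamfer distance between two point sets of shape (N, 3) and (M, 3)."""
    dist = torch.cdist(a, b)
    return dist.min(dim=1).values.mean() + dist.min(dim=0).values.mean()

# One "noise to noise" training step: denoise one noisy observation by pulling it
# onto the zero level set and compare it with another noisy observation of the
# same object (placeholder random tensors stand in for real LiDAR sweeps).
sdf = SDFNet()
opt = torch.optim.Adam(sdf.parameters(), lr=1e-4)
obs_a = torch.rand(2048, 3)
obs_b = torch.rand(2048, 3)
denoised_a = pull_to_surface(sdf, obs_a)
loss = chamfer(denoised_a, obs_b)
opt.zero_grad(); loss.backward(); opt.step()
```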

    Learning a More Continuous Zero Level Set in Unsigned Distance Fields through Level Set Projection

    Full text link
    The latest methods represent shapes with open surfaces using unsigned distance functions (UDFs). They train neural networks to learn UDFs and reconstruct surfaces from the gradients around the zero level set of the UDF. However, the differentiable networks struggle to learn the zero level set, where the UDF is not differentiable, which leads to large errors in unsigned distances and gradients around the zero level set, resulting in highly fragmented and discontinuous surfaces. To resolve this problem, we propose to learn a more continuous zero level set in UDFs with level set projections. Our insight is to guide the learning of the zero level set using the remaining non-zero level sets via a projection procedure. Our idea is inspired by the observation that the non-zero level sets are much smoother and more continuous than the zero level set. We pull the non-zero level sets onto the zero level set with gradient constraints that align gradients over different level sets and correct unsigned distance errors on the zero level set, leading to a smoother and more continuous unsigned distance field. We conduct comprehensive experiments in surface reconstruction for point clouds, real scans, and depth maps, and further explore the performance in unsupervised point cloud upsampling and unsupervised point normal estimation with the learned UDF, which demonstrate our non-trivial improvements over the state-of-the-art methods. Code is available at https://github.com/junshengzhou/LevelSetUDF.
    Comment: To appear at ICCV 2023. Code is available at https://github.com/junshengzhou/LevelSetUDF
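    An illustrative PyTorch sketch of the projection idea described above: project a query from a non-zero level set onto the zero level set along the field gradient and constrain the gradients at the two locations to agree. Function and variable names are assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def udf_grad(udf, q):
    """Return unsigned distances and their gradients at queries q of shape (N, 3)."""
    q = q.clone().requires_grad_(True)
    d = udf(q)
    g = torch.autograd.grad(d.sum(), q, create_graph=True)[0]
    return d, g

def level_set_projection_loss(udf, q):
    """q: queries sampled on non-zero level sets of the learned UDF."""
    d, g = udf_grad(udf, q)
    n = g / (g.norm(dim=-1, keepdim=True) + 1e-8)
    q_proj = q - d * n                      # projection onto the zero level set
    _, g_proj = udf_grad(udf, q_proj)
    n_proj = g_proj / (g_proj.norm(dim=-1, keepdim=True) + 1e-8)
    # Gradient consistency: normals at the projected (zero-level) points should
    # align with the normals of the smoother non-zero level sets.
    return (1.0 - F.cosine_similarity(n, n_proj, dim=-1)).mean()

# Example with an analytic UDF of a unit sphere, only to show the call pattern:
udf = lambda x: (x.norm(dim=-1, keepdim=True) - 1.0).abs()
loss = level_set_projection_loss(udf, torch.randn(1024, 3) * 2.0)
```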

    Learning Consistency-Aware Unsigned Distance Functions Progressively from Raw Point Clouds

    Full text link
    Surface reconstruction for point clouds is an important task in 3D computer vision. Most of the latest methods resolve this problem by learning signed distance functions (SDFs) from point clouds, which are limited to reconstructing shapes or scenes with closed surfaces. Other methods try to represent shapes or scenes with open surfaces using unsigned distance functions (UDFs), which are learned from large-scale ground truth unsigned distances. However, it is hard for the learned UDFs to provide smooth distance fields near the surface due to the discontinuous nature of point clouds. In this paper, we propose a novel method to learn consistency-aware unsigned distance functions directly from raw point clouds. We achieve this by learning to move 3D queries to reach the surface with a field consistency constraint, which also enables us to progressively estimate a more accurate surface. Specifically, we train a neural network to gradually infer the relationship between 3D queries and the approximated surface by searching for the moving target of queries in a dynamic way, which results in a consistent field around the surface. Meanwhile, we introduce a polygonization algorithm to extract surfaces directly from the gradient field of the learned UDF. The experimental results in surface reconstruction for synthetic and real scan data show significant improvements over the state-of-the-art under the widely used benchmarks.
    Comment: Accepted by NeurIPS 2022. Project page: https://junshengzhou.github.io/CAP-UDF. Code: https://github.com/junshengzhou/CAP-UDF
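    A simplified PyTorch sketch of learning an unsigned distance field directly from a raw point cloud by moving queries onto the approximated surface. The dynamic target search is reduced here to a nearest-neighbor lookup in the raw scan; names and details are illustrative, not the authors' code.

```python
import torch

def move_queries(udf, q):
    """Move each query against the UDF gradient by its predicted unsigned distance."""
    q = q.clone().requires_grad_(True)
    d = udf(q)
    g = torch.autograd.grad(d.sum(), q, create_graph=True)[0]
    g = g / (g.norm(dim=-1, keepdim=True) + 1e-8)
    return q - d * g

def consistency_loss(udf, queries, raw_points):
    """Each moved query should land on the raw surface: supervise it with its
    (dynamically re-computed) nearest raw point."""
    moved = move_queries(udf, queries)
    idx = torch.cdist(moved, raw_points).argmin(dim=1)
    return (moved - raw_points[idx]).norm(dim=-1).mean()
```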

    Uni3D: Exploring Unified 3D Representation at Scale

    Full text link
    Scaling up representations for images or text has been extensively investigated in the past few years and has led to revolutions in learning vision and language. However, scalable representations for 3D objects and scenes are relatively unexplored. In this work, we present Uni3D, a 3D foundation model for exploring unified 3D representation at scale. Uni3D uses a 2D-initialized ViT, pretrained end-to-end, to align the 3D point cloud features with the image-text aligned features. Through this simple architecture and pretext task, Uni3D can leverage abundant 2D pretrained models as initialization and image-text aligned models as the target, unlocking the great potential of 2D models and scaling-up strategies for the 3D world. We efficiently scale up Uni3D to one billion parameters and set new records on a broad range of 3D tasks, such as zero-shot classification, few-shot classification, open-world understanding, and part segmentation. We show that the strong Uni3D representation also enables applications such as 3D painting and retrieval in the wild. We believe that Uni3D provides a new direction for exploring both the scaling up and the efficiency of representations in the 3D domain.
    Comment: Code and demo: https://github.com/baaivision/Uni3D
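    A conceptual PyTorch sketch of the kind of pretext task described above: align embeddings from a point-cloud encoder with frozen image/text embeddings from a CLIP-style model via a symmetric contrastive loss. Dimensions, the temperature value, and all names are assumptions, not Uni3D's exact recipe.

```python
import torch
import torch.nn.functional as F

def contrastive_alignment_loss(pc_emb, target_emb, temperature=0.07):
    """pc_emb, target_emb: (B, D) embeddings of matching (point cloud, image/text) pairs."""
    pc_emb = F.normalize(pc_emb, dim=-1)
    target_emb = F.normalize(target_emb, dim=-1)
    logits = pc_emb @ target_emb.t() / temperature   # (B, B) similarity matrix
    labels = torch.arange(pc_emb.size(0), device=pc_emb.device)
    # Symmetric InfoNCE: match 3D -> target and target -> 3D.
    return 0.5 * (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels))
```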

    NeAF: Learning Neural Angle Fields for Point Normal Estimation

    No full text
    Normal estimation for unstructured point clouds is an important task in 3D computer vision. Current methods achieve encouraging results by mapping local patches to normal vectors or learning local surface fitting with neural networks. However, these methods do not generalize well to unseen scenarios and are sensitive to parameter settings. To resolve these issues, we propose an implicit function that learns an angle field around the normal of each point in the spherical coordinate system, which we dub Neural Angle Fields (NeAF). Instead of directly predicting the normal of an input point, we predict the angle offset between the ground truth normal and a randomly sampled query normal. This strategy pushes the network to observe more diverse samples, which leads to higher prediction accuracy in a more robust manner. To predict normals from the learned angle fields at inference time, we randomly sample query vectors in a unit spherical space and take the vectors with minimal angle values as the predicted normals. To further leverage the prior learned by NeAF, we propose to refine the predicted normal vectors by minimizing the angle offsets. The experimental results on synthetic data and real scans show significant improvements over the state-of-the-art under widely used benchmarks. Project page: https://lisj575.github.io/NeAF/
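    A rough PyTorch sketch of the inference procedure described above: sample query directions on the unit sphere, let the network predict an angle offset for each, and keep the direction with the smallest predicted offset. `neaf` is a stand-in for the trained network; its inputs and outputs here are assumptions.

```python
import torch

def predict_normal(neaf, patch_feat, num_queries=512):
    """patch_feat: (F,) feature of the local patch around the point of interest."""
    q = torch.randn(num_queries, 3)
    q = q / q.norm(dim=-1, keepdim=True)        # random unit query normals
    feats = patch_feat.expand(num_queries, -1)  # share the patch feature across queries
    offsets = neaf(feats, q).squeeze(-1)        # predicted angle offsets (radians)
    return q[offsets.argmin()]                  # query closest to the true normal
```

    The abstract also mentions refining the winning query by minimizing the predicted offset; a gradient-descent step on the query direction could follow the selection above.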

    Preparation process and performances of cBN-Fe magnetic abrasive particles

    No full text
    Magnetic abrasive particles prepared by the existing sintering method suffer from low hardness of the grinding-phase material, so their grinding effect on high-hardness, poorly magnetically conductive materials such as titanium alloys is poor; in addition, diamond, the hardest material, cannot be used as the grinding phase when magnetic abrasive particles are prepared by sintering. To solve these problems, cBN-Fe magnetic abrasive particles were prepared in this study by sintering, with Fe powder as the matrix and cBN powder as the grinding phase. Using Ti-6Al-4V (TC4) plates as the grinding objects, the effects of sintering time, heating rate, and raw material ratio on the grinding performance of the cBN-Fe magnetic abrasive particles were explored with a control variable method, with the aim of determining the optimal preparation process parameters. Taking 45# steel and 202 stainless steel as grinding workpieces, the performance of the cBN-Fe magnetic abrasive particles was compared with that of Al2O3-Fe and SiC-Fe magnetic abrasive particles, both prepared by the sintering method. The surface roughness and morphology of the workpieces before and after grinding with the three kinds of magnetic abrasive particles were compared, and the grinding performance and service life of the different magnetic abrasive particles were investigated. The results show that the best grinding performance of the cBN-Fe magnetic abrasive particles was achieved with a mass ratio of Fe powder to cBN powder of 3:1, a sintering temperature of 1150 ℃, a sintering time of 6 h, a holding time of 2 h, and a heating rate of 3.19 ℃/min. Furthermore, the grinding performance of the cBN-Fe magnetic abrasive particles is better than that of the sintered Al2O3-Fe and SiC-Fe magnetic abrasive particles, and their service life is 1.6 times and 1.3 times longer than that of the Al2O3-Fe and SiC-Fe magnetic abrasive particles, respectively.

    Self-Supervised Point Cloud Representation Learning with Occlusion Auto-Encoder

    Full text link
    Learning representations for point clouds is an important task in 3D computer vision, especially without manually annotated supervision. Previous methods usually rely on auto-encoders to establish self-supervision by reconstructing the input itself. However, existing self-reconstruction-based auto-encoders merely focus on global shapes and ignore the hierarchical context between local and global geometries, which is a crucial supervision signal for 3D representation learning. To resolve this issue, we present a novel self-supervised point cloud representation learning framework, named 3D Occlusion Auto-Encoder (3D-OAE). Our key idea is to randomly occlude some local patches of the input point cloud and establish supervision by recovering the occluded patches from the remaining visible ones. Specifically, we design an encoder that learns the features of the visible local patches and a decoder that leverages these features to predict the occluded patches. In contrast to previous methods, our 3D-OAE can remove a large proportion of patches and predict them from only a small number of visible patches, which enables us to significantly accelerate training while yielding nontrivial self-supervisory performance. The trained encoder can be further transferred to various downstream tasks. We demonstrate our superior performance over state-of-the-art methods in different discriminative and generative applications under widely used benchmarks.
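    A simplified PyTorch sketch of the occlusion auto-encoding setup described above: split the cloud into local patches, hide most of them, encode only the visible ones, and ask a decoder to recover the occluded points. The naive patch construction and the encoder/decoder placeholders are assumptions, not the paper's architecture.

```python
import torch

def occlude_patches(points, num_patches=64, occlusion_ratio=0.75):
    """points: (N, 3), with N divisible by num_patches (naive grouping for illustration;
    the paper groups points into local patches around sampled centers instead)."""
    n = points.size(0)
    patches = points[torch.randperm(n)].reshape(num_patches, -1, 3)
    num_occluded = int(num_patches * occlusion_ratio)
    perm = torch.randperm(num_patches)
    visible = patches[perm[num_occluded:]]
    occluded = patches[perm[:num_occluded]]
    return visible, occluded

# Training step outline (encoder/decoder are any point-cloud networks):
#   visible, occluded = occlude_patches(points)
#   pred = decoder(encoder(visible))    # predict occluded patches from visible ones
#   loss = chamfer(pred.reshape(-1, 3), occluded.reshape(-1, 3))
```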

    Pre-Synthetic Redox Gated Metal-to-Insulator Transition and Photothermoelectric Properties in Nickel Tetrathiafulvalene-Tetrathiolate Coordination Polymers

    No full text
    Photothermoelectric (PTE) materials are promising candidates for solar energy harvesting and photodetection applications, especially at near-infrared (NIR) wavelengths. Although the processability and tunability of organic materials are highly advantageous, examples of organic PTE materials are comparatively rare, and their PTE performance is typically limited by poor photothermal (PT) conversion. Here we report the use of redox-active Sn complexes of tetrathiafulvalene-tetrathiolate (TTFtt) as transmetalating agents for the synthesis of pre-synthetically redox-tuned NiTTFtt materials. Unlike the neutral material NiTTFtt, which exhibits n-type glassy-metallic conductivity, the reduced materials Li1.2Ni0.4[NiTTFtt] and [Li(THF)1.5]1.2Ni0.4[NiTTFtt] (THF = tetrahydrofuran) display physical characteristics more consistent with p-type semiconductors. The broad spectral absorption and electrically conducting nature of these TTFtt-based materials enable highly efficient NIR-thermal conversion and good PTE performance. Furthermore, in contrast to conventional PTE composites, these NiTTFtt coordination polymers are notable as single-component PTE materials. The pre-synthetically tuned metal-to-insulator transition in these NiTTFtt systems directly modulates their PT and PTE properties.