
    Photonic Aharonov–Bohm effect in photon–phonon interactions

    The Aharonov–Bohm effect is one of the most intriguing phenomena in both classical and quantum physics, and is associated with a number of important and fundamental issues in quantum mechanics. The Aharonov–Bohm effect for charged particles has been experimentally demonstrated and has found applications in various fields. Recently, attention has also turned to the Aharonov–Bohm effect for neutral particles, such as photons. Here we propose to utilize photon–phonon interactions to demonstrate that photonic Aharonov–Bohm effects do exist. By introducing nonreciprocal phases for photons, we experimentally observe a gauge potential for photons in the visible range based on photon–phonon interactions in acousto-optic crystals, and demonstrate the photonic Aharonov–Bohm effect. The results presented here point to new possibilities for controlling and manipulating photons by designing an effective gauge potential.
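    To make the role of the nonreciprocal phase concrete, the following is a minimal worked sketch of the interference argument; the two-modulator geometry and the symbols φ₁, φ₂, θ are illustrative assumptions rather than the paper's exact experimental setup.

```latex
% Hedged sketch: why a modulation phase acts like an Aharonov--Bohm flux.
% Assume two acousto-optic conversions driven with phases \phi_1 and \phi_2:
% up-conversion imprints e^{+i\phi}, down-conversion e^{-i\phi}, so a photon
% traversing the pair forward vs. backward acquires opposite total phases:
\[
  \phi_{\rightarrow} = \phi_1 - \phi_2, \qquad
  \phi_{\leftarrow} = -(\phi_1 - \phi_2).
\]
% Interfering each direction with a common reference arm of phase \theta gives
\[
  I_{\rightarrow} \propto 1 + \cos\bigl(\theta - (\phi_1 - \phi_2)\bigr), \qquad
  I_{\leftarrow} \propto 1 + \cos\bigl(\theta + (\phi_1 - \phi_2)\bigr),
\]
% so \phi_1 - \phi_2 plays the role of an enclosed Aharonov--Bohm flux: it shifts
% the two propagation directions oppositely, which no reciprocal phase can mimic.
```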

    Ray-ONet: efficient 3D reconstruction from a single RGB image

    We propose Ray-ONet to reconstruct detailed 3D models from monocular images efficiently. By predicting a series of occupancy probabilities along a ray that is back-projected from a pixel in camera coordinates, Ray-ONet improves reconstruction accuracy compared with Occupancy Networks (ONet), while reducing the network inference complexity to O(N²). As a result, Ray-ONet achieves state-of-the-art performance on the ShapeNet benchmark with a more than 20× speed-up at 128³ resolution, and maintains a similar memory footprint during inference.
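    As a rough illustration of the ray-wise formulation described above, here is a minimal sketch in PyTorch; the class name, feature dimensions, and sample count are assumptions for illustration, not the authors' API.

```python
# Minimal sketch of ray-wise occupancy prediction (illustrative, not Ray-ONet's code).
import torch
import torch.nn as nn

class RayOccupancyNet(nn.Module):
    """Predict a series of occupancy probabilities along one back-projected ray."""
    def __init__(self, feat_dim=256, n_samples=128):
        super().__init__()
        # One forward pass emits occupancies for all n_samples depths on a ray,
        # so an N x N image needs O(N^2) network evaluations instead of the
        # O(N^3) per-point queries of an ONet-style decoder.
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + 2, 512), nn.ReLU(),
            nn.Linear(512, 512), nn.ReLU(),
            nn.Linear(512, n_samples),
        )

    def forward(self, img_feat, pixel_uv):
        # img_feat: (B, feat_dim) image feature; pixel_uv: (B, 2) pixel coordinates
        # that define the ray in camera coordinates.
        logits = self.mlp(torch.cat([img_feat, pixel_uv], dim=-1))
        return torch.sigmoid(logits)  # (B, n_samples) occupancies along each ray

net = RayOccupancyNet()
occ = net(torch.randn(4, 256), torch.rand(4, 2))  # occupancy series for 4 rays
```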

    ObjectSDF++: Improved Object-Compositional Neural Implicit Surfaces

    In recent years, neural implicit surface reconstruction has emerged as a popular paradigm for multi-view 3D reconstruction. Unlike traditional multi-view stereo approaches, neural implicit surface-based methods leverage neural networks to represent 3D scenes as signed distance functions (SDFs). However, they tend to disregard the reconstruction of individual objects within the scene, which limits their performance and practical applications. To address this issue, the previous work ObjectSDF introduced a nice framework of object-compositional neural implicit surfaces, which uses 2D instance masks to supervise individual object SDFs. In this paper, we propose a new framework called ObjectSDF++ to overcome the limitations of ObjectSDF. First, in contrast to ObjectSDF, whose performance is primarily restricted by its converted semantic field, the core component of our model is an occlusion-aware object opacity rendering formulation that directly volume-renders object opacity to be supervised with instance masks. Second, we design a novel regularization term for object distinction, which effectively mitigates ObjectSDF's tendency to produce unexpected reconstructions in invisible regions, owing to the lack of a constraint preventing object collisions. Our extensive experiments demonstrate that our novel framework not only produces superior object reconstruction results but also significantly improves the quality of scene reconstruction. Code and more resources can be found at \url{https://qianyiwu.github.io/objectsdf++}.
    Comment: ICCV 2023. Project page: https://qianyiwu.github.io/objectsdf++ Code: https://github.com/QianyiWu/objectsdf_plus
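    The occlusion-aware opacity idea can be sketched with standard volume rendering; the SDF-to-density mapping below is a generic logistic-CDF choice, and all names and shapes are assumptions, not the paper's implementation.

```python
# Hedged sketch of occlusion-aware object opacity rendering (illustrative only).
import torch

def object_opacity(sdf, deltas, beta=0.1):
    """Volume-render per-object opacity along a ray.

    sdf:    (K, S) signed distances of K objects at S ray samples
    deltas: (S,)   spacing between consecutive samples
    """
    # Generic SDF -> density mapping (a logistic-CDF choice; an assumption here).
    density = (1.0 / beta) * torch.sigmoid(-sdf / beta)            # (K, S)
    alpha = 1.0 - torch.exp(-density * deltas)                     # per-object alpha
    # Occlusion awareness: transmittance is computed from the *scene* density
    # (sum over objects), so a foreground object blocks the ones behind it.
    scene_density = density.sum(dim=0)                             # (S,)
    scene_alpha = 1.0 - torch.exp(-scene_density * deltas)
    trans = torch.cumprod(
        torch.cat([torch.ones(1), 1.0 - scene_alpha + 1e-10])[:-1], dim=0)
    # Rendered opacity of object k = sum_s T_s * alpha_{k,s};
    # this per-object scalar is what the 2D instance mask supervises.
    return (trans * alpha).sum(dim=-1)                             # (K,)

opacities = object_opacity(torch.randn(3, 64), torch.full((64,), 0.02))
```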

    BNV-Fusion: Dense 3D Reconstruction using Bi-level Neural Volume Fusion

    Dense 3D reconstruction from a stream of depth images is key to many mixed reality and robotic applications. Although methods based on Truncated Signed Distance Function (TSDF) fusion have advanced the field over the years, the TSDF volume representation must strike a balance between robustness to noisy measurements and preservation of fine detail. We present Bi-level Neural Volume Fusion (BNV-Fusion), which leverages recent advances in neural implicit representations and neural rendering for dense 3D reconstruction. To incrementally integrate new depth maps into a global neural implicit representation, we propose a novel bi-level fusion strategy that considers both efficiency and reconstruction quality by design. We evaluate the proposed method on multiple datasets quantitatively and qualitatively, demonstrating a significant improvement over existing methods.
    Comment: Accepted at CVPR 2022
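    The bi-level idea can be sketched as a cheap per-frame update of latent codes plus a periodic gradient-based refinement; every name, schedule, and loss below is a structural assumption, not the authors' implementation.

```python
# Hedged sketch of bi-level fusion (structure only; synthetic data stands in for depth maps).
import torch
import torch.nn as nn

decoder = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 1))  # latent -> SDF
latent_grid = torch.zeros(10_000, 16, requires_grad=True)                # global volume codes

def data_term(voxel_ids, target_sdf):
    # Decoded SDF at observed voxels should match the depth-derived SDF samples.
    return (decoder(latent_grid[voxel_ids]).squeeze(-1) - target_sdf).pow(2).mean()

opt = torch.optim.Adam(list(decoder.parameters()) + [latent_grid], lr=1e-3)
observations = []
for frame in range(100):                        # stand-in for a real depth stream
    voxel_ids = torch.randint(10_000, (512,))   # voxels touched by this depth map
    target = torch.randn(512) * 0.01            # depth-derived SDF values (synthetic)
    # Local level: cheap in-place blend of new per-voxel codes, no optimisation.
    with torch.no_grad():
        latent_grid[voxel_ids] = 0.7 * latent_grid[voxel_ids] + 0.3 * torch.randn(512, 16)
    observations.append((voxel_ids, target))
    # Global level: every 10 frames, refine codes jointly against all observations.
    if (frame + 1) % 10 == 0:
        for _ in range(5):
            loss = sum(data_term(v, t) for v, t in observations)
            opt.zero_grad(); loss.backward(); opt.step()
```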

    NoPe-NeRF: Optimising Neural Radiance Field with No Pose Prior

    Training a Neural Radiance Field (NeRF) without pre-computed camera poses is challenging. Recent advances in this direction demonstrate the possibility of jointly optimising a NeRF and camera poses in forward-facing scenes. However, these methods still face difficulties under dramatic camera movement. We tackle this problem by incorporating undistorted monocular depth priors. These priors are generated by correcting scale and shift parameters during training, with which we are then able to constrain the relative poses between consecutive frames. This constraint is achieved using our proposed novel loss functions. Experiments on real-world indoor and outdoor scenes show that our method can handle challenging camera trajectories and outperforms existing methods in terms of novel view rendering quality and pose estimation accuracy. Our project page is https://nope-nerf.active.vision
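    The scale-and-shift correction is simple to state: each frame's monocular depth d_i is replaced by α_i·d_i + β_i, with α_i, β_i optimised jointly with the NeRF and poses. The sketch below, including the chamfer-style consistency term, is an illustrative stand-in, not the paper's exact losses.

```python
# Hedged sketch: per-frame scale/shift depth correction and a pose-consistency term.
import torch

n_frames = 30
scale = torch.ones(n_frames, requires_grad=True)    # alpha_i, one per frame
shift = torch.zeros(n_frames, requires_grad=True)   # beta_i, one per frame

def undistort(depth_mono, i):
    # Corrected depth d_i* = alpha_i * d_i + beta_i, learned during training.
    return scale[i] * depth_mono + shift[i]

def pose_consistency(pts_i, T_i, pts_j, T_j):
    """Surface points back-projected from frames i and j (using their corrected
    depths) should agree once mapped to world coordinates by the current poses."""
    world_i = pts_i @ T_i[:3, :3].T + T_i[:3, 3]
    world_j = pts_j @ T_j[:3, :3].T + T_j[:3, 3]
    d = torch.cdist(world_i, world_j)               # pairwise point distances
    # Chamfer-style distance as a stand-in for the paper's point-cloud loss.
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()

depth0 = undistort(torch.rand(64, 48), i=0)          # corrected depth for frame 0
```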

    An Accurate and Efficient Time Delay Estimation Method of Ultra-High Frequency Signals for Partial Discharge Localization in Substations

    Partial discharge (PD) localization in substations based on the ultra-high frequency (UHF) method can be used to efficiently assess insulation conditions. Localization accuracy depends on the accuracy of the time delay (TD) estimation, which is therefore critical for PD localization in substations. A review of existing TD estimation methods indicates a need for methods that are both accurate and computationally efficient. In this paper, a novel TD estimation method is proposed to improve both accuracy and efficiency. The TD is calculated using an improved cross-correlation algorithm based on the full-wavefronts of array UHF signals, which are extracted using the minimum cumulative energy method and a zero-crossing-point search. The cross-correlation algorithm effectively suppresses the TD error caused by differences between full-wavefronts. To verify the method, a simulated PD source test in a laboratory and a field test in a 220 kV substation were carried out. The results show that the proposed method remains accurate even at low signal-to-noise ratios, while offering greatly improved computational efficiency.
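    A minimal numerical sketch of the pipeline is given below: locate each signal's wavefront onset with an energy-based criterion, window out the full-wavefront, and cross-correlate the windows for a fine correction to the coarse onset gap. The onset picker is a common energy-criterion variant and an assumption here, not necessarily the paper's exact minimum-cumulative-energy formulation.

```python
# Hedged sketch of full-wavefront TD estimation (illustrative, not the paper's algorithm).
import numpy as np

def onset_index(x):
    """Energy-criterion onset: argmin of cumulative energy minus its linear trend."""
    e = np.cumsum(x.astype(float) ** 2)
    trend = np.arange(len(x)) * e[-1] / len(x)
    return int(np.argmin(e - trend))

def time_delay(x1, x2, fs, win=200):
    """Delay of x2 relative to x1, in seconds, from their full-wavefront windows."""
    i1, i2 = onset_index(x1), onset_index(x2)
    w1 = x1[i1:i1 + win]
    w2 = x2[i2:i2 + win]
    xc = np.correlate(w1 - w1.mean(), w2 - w2.mean(), mode="full")
    lag = np.argmax(xc) - (win - 1)   # fine alignment between the two wavefronts
    return ((i2 - i1) - lag) / fs     # coarse onset gap corrected by the fine lag

fs = 5e9                              # 5 GS/s sampling, plausible for UHF PD signals
t = np.arange(2048) / fs
sig = np.exp(-((t - 80e-9) / 10e-9) ** 2) * np.sin(2 * np.pi * 8e8 * t)
print(time_delay(sig, np.roll(sig, 25), fs))   # ~5e-9 s for a 25-sample delay
```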

    MVDream: Multi-view Diffusion for 3D Generation

    We propose MVDream, a multi-view diffusion model that is able to generate geometrically consistent multi-view images from a given text prompt. By leveraging image diffusion models pre-trained on large-scale web datasets and a multi-view dataset rendered from 3D assets, the resulting multi-view diffusion model can achieve both the generalizability of 2D diffusion and the consistency of 3D data. Such a model can thus be applied as a multi-view prior for 3D generation via Score Distillation Sampling, where it greatly improves the stability of existing 2D-lifting methods by solving the 3D consistency problem. Finally, we show that the multi-view diffusion model can also be fine-tuned in a few-shot setting for personalized 3D generation, i.e. the DreamBooth3D application, where consistency is maintained after learning the subject identity.
    Comment: Our project page is https://MV-Dream.github.io
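    A single Score Distillation Sampling update with a multi-view prior can be sketched as follows; the function names, the toy noise schedule, and the stand-in callables are assumptions for illustration, not MVDream's actual API.

```python
# Hedged sketch of one SDS step with a multi-view epsilon-prediction model.
import torch

def sds_step(params3d, renderer, mv_eps_model, text_emb, cameras, opt):
    """One SDS update that denoises V jointly rendered views instead of one image."""
    views = renderer(params3d, cameras)                    # (V, 3, H, W), differentiable
    t = torch.randint(20, 980, (1,))                       # random diffusion timestep
    noise = torch.randn_like(views)
    alpha_bar = torch.cos(t / 1000.0 * torch.pi / 2) ** 2  # toy noise schedule (assumption)
    noisy = alpha_bar.sqrt() * views + (1 - alpha_bar).sqrt() * noise
    # The multi-view model sees all V views (conditioned on the cameras) at once;
    # this joint scoring is what enforces cross-view 3D consistency.
    eps_pred = mv_eps_model(noisy, t, text_emb, cameras)
    grad = (eps_pred - noise).detach()                     # standard SDS gradient
    loss = (views * grad).sum()                            # routes grad into the 3D params
    opt.zero_grad(); loss.backward(); opt.step()

# Toy stand-ins just to exercise the function:
params = torch.randn(8, requires_grad=True)
render = lambda p, cams: p.view(1, 1, 2, 4).expand(4, 3, 2, 4) * cams.mean()
eps_model = lambda x, t, e, cams: 0.1 * x
opt = torch.optim.Adam([params], lr=1e-2)
sds_step(params, render, eps_model, None, torch.ones(4), opt)
```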