
    VoxFormer: Sparse Voxel Transformer for Camera-based 3D Semantic Scene Completion

    Humans can easily imagine the complete 3D geometry of occluded objects and scenes. This appealing ability is vital for recognition and understanding. To enable such capability in AI systems, we propose VoxFormer, a Transformer-based semantic scene completion framework that can output complete 3D volumetric semantics from only 2D images. Our framework adopts a two-stage design: we start from a sparse set of visible and occupied voxel queries from depth estimation, followed by a densification stage that generates dense 3D voxels from the sparse ones. A key idea of this design is that the visual features on 2D images correspond only to the visible scene structures rather than the occluded or empty spaces. Therefore, starting with the featurization and prediction of the visible structures is more reliable. Once we obtain the set of sparse queries, we apply a masked autoencoder design to propagate the information to all the voxels by self-attention. Experiments on SemanticKITTI show that VoxFormer outperforms the state of the art with a relative improvement of 20.0% in geometry and 18.1% in semantics, and reduces GPU memory during training to less than 16 GB. Our code is available at https://github.com/NVlabs/VoxFormer.
    Comment: CVPR 2023 Highlight (10% of accepted papers, 2.5% of submissions).
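    The densification stage described above can be sketched in a few lines. The PyTorch code below is a minimal, hypothetical illustration, not the authors' implementation: all names (DensificationStage, mask_token, etc.) are assumptions, and plain self-attention stands in for the deformable attention the paper actually uses. Sparse query features are scattered into a dense voxel grid initialized with a learned mask token, MAE-style, and self-attention then propagates information to every voxel before a linear head predicts per-voxel semantics.

```python
# Minimal sketch of VoxFormer-style densification (illustrative, not the authors' code).
import torch
import torch.nn as nn

class DensificationStage(nn.Module):
    """Stage 2 sketch: propagate sparse voxel-query features to all voxels
    via self-attention; unobserved voxels start from a learned mask token."""
    def __init__(self, dim=64, n_heads=4, n_classes=20):
        super().__init__()
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))  # placeholder feature for unseen voxels
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, n_classes)  # per-voxel semantic logits

    def forward(self, sparse_feats, sparse_idx, n_voxels):
        # sparse_feats: (B, Q, dim) features of visible/occupied voxel queries (from stage 1)
        # sparse_idx:   (B, Q) flat indices of those voxels in the full grid
        B, Q, D = sparse_feats.shape
        dense = self.mask_token.expand(B, n_voxels, D).clone()
        dense.scatter_(1, sparse_idx.unsqueeze(-1).expand(-1, -1, D), sparse_feats)
        dense = self.encoder(dense)  # self-attention spreads information to masked voxels
        return self.head(dense)      # (B, n_voxels, n_classes)

# Toy usage: an 8x8x8 grid with 50 visible voxel queries.
B, Q, dim, n_vox = 1, 50, 64, 8 * 8 * 8
feats = torch.randn(B, Q, dim)          # stand-in for stage-1 image-derived features
idx = torch.randint(0, n_vox, (B, Q))   # stand-in for depth-based occupied-voxel proposals
logits = DensificationStage(dim=dim)(feats, idx, n_vox)
print(logits.shape)  # torch.Size([1, 512, 20])
```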

    Real-time hallucination simulation and sonification through user-led development of an iPad augmented reality performance

    The simulation of visual hallucinations has multiple applications. The authors present a new approach to hallucination simulation, initially developed for a performance, that proved useful for individuals suffering from certain types of hallucinations. The system, originally developed with a focus on the visual symptoms of palinopsia experienced by the lead author, allows real-time visual expression using augmented reality via an iPad. It also allows the hallucinations to be converted into sound by sonifying the visuals. Although no formal experimentation was conducted, the authors report a number of unsolicited informal responses to the simulator from palinopsia sufferers and the Palinopsia Foundation.
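    As a rough illustration of the sonification idea, the sketch below (not the authors' system; every mapping, name, and constant is an assumption) maps each video frame's mean brightness to the pitch of a sine oscillator and its brightness variance to loudness:

```python
# Hypothetical visuals-to-audio sonification sketch (illustrative only).
import numpy as np

SR = 44100        # audio sample rate (Hz)
FRAME_DUR = 0.05  # seconds of audio generated per video frame

def sonify(frames):
    """frames: iterable of 2D grayscale arrays in [0, 1]; returns mono audio samples."""
    chunks, phase = [], 0.0
    t = np.arange(int(SR * FRAME_DUR)) / SR
    for f in frames:
        freq = 220.0 + 660.0 * f.mean()          # brighter frame -> higher pitch
        amp = np.clip(f.std() * 4.0, 0.0, 1.0)   # more visual variation -> louder
        chunks.append(amp * np.sin(2 * np.pi * freq * t + phase))
        phase += 2 * np.pi * freq * FRAME_DUR    # keep the oscillator phase-continuous
    return np.concatenate(chunks)

# Toy usage: 40 frames of random imagery standing in for a hallucination overlay.
audio = sonify(np.random.rand(40, 64, 64))
```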