655 research outputs found

    Transporting long-lived quantum spin coherence in a photonic crystal fiber

    Confining particles in hollow-core photonic crystal fibers has opened up new prospects for scaling up the distance and time over which particles can be made to interact with light. However, maintaining long-lived quantum spin coherence and/or transporting it over macroscopic distances in a waveguide remain challenging. Here, we demonstrate coherent guiding of ground-state superpositions of 85Rb atoms over a centimeter range and hundreds of milliseconds inside a hollow-core photonic crystal fiber. The decoherence is mainly due to dephasing from the residual differential light shift (DLS) of the optical trap and the inhomogeneity of the ambient magnetic field. Our experiment establishes an important step towards a versatile platform that can lead to applications in quantum information networks and matter-wave circuits for quantum sensing. Comment: Accepted by Physical Review Letters
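
    As a rough aid to the reader, the lines below give the standard first-order estimate for the differential light shift that the abstract names as the main dephasing channel; this is a textbook expression for alkali clock transitions in a far-detuned trap, not a formula or number taken from the paper, and the symbols (trap depth U_0, effective detuning Delta, ground-state hyperfine splitting omega_hf, trap-depth spread sigma_U) are illustrative assumptions.

        % Textbook first-order estimate (not from the paper): the two hyperfine ground
        % states see detunings differing by the hyperfine splitting \omega_{hf}, so a
        % trap of depth U_0 at effective detuning \Delta shifts the clock transition by
        \[
          \delta_{\mathrm{DLS}} \;\approx\; \frac{\omega_{\mathrm{hf}}}{\Delta}\,\frac{U_0}{\hbar},
          \qquad
          T_2^{*} \;\sim\; \frac{\hbar\,\Delta}{\omega_{\mathrm{hf}}\,\sigma_U},
        \]
        % where a spread \sigma_U of trap depths across the ensemble dephases a Ramsey
        % fringe on the timescale T_2^{*} indicated above.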

    Text-to-3D using Gaussian Splatting

    In this paper, we present Gaussian Splatting based text-to-3D generation (GSGEN), a novel approach for generating high-quality 3D objects. Previous methods suffer from inaccurate geometry and limited fidelity due to the absence of a 3D prior and a proper representation. We leverage 3D Gaussian Splatting, a recent state-of-the-art representation, to address existing shortcomings by exploiting its explicit nature, which enables the incorporation of a 3D prior. Specifically, our method adopts a progressive optimization strategy, which includes a geometry optimization stage and an appearance refinement stage. In geometry optimization, a coarse representation is established under a 3D geometry prior along with the ordinary 2D SDS (score distillation sampling) loss, ensuring a sensible and 3D-consistent rough shape. Subsequently, the obtained Gaussians undergo an iterative refinement to enrich details. In this stage, we increase the number of Gaussians by compactness-based densification to enhance continuity and improve fidelity. With these designs, our approach can generate 3D content with delicate details and more accurate geometry. Extensive evaluations demonstrate the effectiveness of our method, especially for capturing high-frequency components. Video results are provided at https://gsgen3d.github.io. Our code is available at https://github.com/gsgen3d/gsgen. Comment: Project page: https://gsgen3d.github.io. Code: https://github.com/gsgen3d/gsgen
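
    To make the two-stage schedule described above concrete, the following Python sketch mimics the control flow: a geometry stage driven by a 3D prior together with an SDS-style gradient, followed by an appearance stage with periodic compactness-based densification. Everything in it (the function names, the sphere-shaped stand-in prior, the noise stand-in for the SDS gradient, and the midpoint densification rule) is an assumption for illustration, not GSGEN's actual implementation.

        """Hypothetical sketch of the two-stage schedule described in the abstract.

        None of this is GSGEN's code: the 3D prior, the SDS gradient, and the
        densification rule are stand-ins chosen only to make the control flow concrete.
        """
        import numpy as np

        rng = np.random.default_rng(0)

        def init_gaussians(n=512):
            # Each Gaussian: position (3), log-scale (3), RGB color (3), opacity (1).
            return {
                "pos": rng.normal(scale=0.5, size=(n, 3)),
                "log_scale": np.full((n, 3), -3.0),
                "rgb": rng.uniform(size=(n, 3)),
                "opacity": np.full(n, 0.1),
            }

        def sds_grad(g):
            # Stand-in for the 2D SDS gradient obtained by rendering and querying a
            # text-to-image diffusion model; here it is just noise of the right shape.
            return {k: rng.normal(scale=1e-3, size=v.shape) for k, v in g.items()}

        def geometry_prior_grad(pos):
            # Stand-in for a 3D geometry prior (e.g. a coarse point cloud): pull
            # positions toward the unit sphere so the rough shape stays sensible.
            r = np.linalg.norm(pos, axis=1, keepdims=True) + 1e-8
            return (1.0 - 1.0 / r) * pos  # gradient of 0.5 * (|x| - 1)^2

        def densify_by_compactness(g, max_new=128, radius=0.15):
            # Guess at "compactness-based densification": spawn a Gaussian at the
            # midpoint of nearby pairs to improve continuity of the surface.
            pos = g["pos"]
            d = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)
            i, j = np.where((d > 0) & (d < radius))
            keep = i < j
            i, j = i[keep][:max_new], j[keep][:max_new]
            if len(i) == 0:
                return g
            mid = 0.5 * (pos[i] + pos[j])
            return {
                "pos": np.concatenate([pos, mid]),
                "log_scale": np.concatenate([g["log_scale"], g["log_scale"][i]]),
                "rgb": np.concatenate([g["rgb"], 0.5 * (g["rgb"][i] + g["rgb"][j])]),
                "opacity": np.concatenate([g["opacity"], g["opacity"][i]]),
            }

        def step(g, grads, lr=0.01):
            # Plain gradient step; a real pipeline would use Adam and rendering.
            return {k: g[k] - lr * grads[k] for k in g}

        g = init_gaussians()

        # Stage 1: geometry optimization under the 3D prior plus the 2D SDS loss.
        for it in range(200):
            grads = sds_grad(g)
            grads["pos"] = grads["pos"] + 0.1 * geometry_prior_grad(g["pos"])
            g = step(g, grads)

        # Stage 2: appearance refinement with periodic densification.
        for it in range(200):
            g = step(g, sds_grad(g))
            if it % 50 == 49:
                g = densify_by_compactness(g)

        print("final number of Gaussians:", len(g["pos"]))

    The point of the sketch is only the structure: a coarse, prior-regularized geometry pass first, then refinement in which the Gaussian count grows to fill gaps, matching the progressive strategy the abstract describes.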

    TabuLa: Harnessing Language Models for Tabular Data Synthesis

    Given the ubiquitous use of tabular data in industry and the growing concerns over data privacy and security, tabular data synthesis emerges as a critical research area. Recent state-of-the-art methods show that large language models (LLMs) can be adopted to generate realistic tabular data. Because LLMs pre-process tabular data as full text, they have the advantage of avoiding the curse of dimensionality associated with one-hot encoding high-dimensional data. However, their long training time and limited reusability on new tasks prevent them from replacing existing tabular generative models. In this paper, we propose Tabula, a tabular data synthesizer based on the language model structure. Through Tabula, we demonstrate the inherent limitations of employing pre-trained language models designed for natural language processing (NLP) in the context of tabular data synthesis. Our investigation delves into the development of a dedicated foundation model tailored specifically to tabular data synthesis. Additionally, we propose a token sequence compression strategy that significantly reduces training time while preserving the quality of synthetic data. Extensive experiments on six datasets demonstrate that using a language model structure without loading the well-trained model weights yields a better starting model for tabular data synthesis. Moreover, a Tabula model previously trained on other tabular data serves as an excellent foundation model for new tabular data synthesis tasks. Additionally, the token sequence compression method substantially reduces the model's training time. Results show that Tabula reduces training time per epoch by 46.2% on average compared to the current state-of-the-art LLM-based algorithm and consistently achieves even higher synthetic data utility.
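
    Since the abstract names a token sequence compression strategy without detailing it, the short Python sketch below illustrates the general idea under assumed details: a naive "column is value" serialization of a row versus a compressed variant that maps each column name to a single dedicated token. The example row, the vocabulary scheme, and the token counts are hypothetical and not taken from the paper.

        """Illustrative sketch only: the row-to-text format and the compression rule
        below are assumptions chosen to make the idea concrete, not TabuLa's method."""

        ROW = {"age": 39, "workclass": "State-gov", "education": "Bachelors", "income": "<=50K"}

        def row_to_text(row):
            # Naive LLM-style serialization: every cell becomes "column is value",
            # which avoids one-hot encoding but yields long token sequences.
            return ", ".join(f"{col} is {val}" for col, val in row.items())

        def row_to_compressed_text(row, vocab):
            # Assumed compression: map each column name (and, in practice, frequent
            # categorical values) to a single dedicated token, shrinking the sequence
            # the language model has to learn.
            return " ".join(f"{vocab[col]} {val}" for col, val in row.items())

        vocab = {col: f"<c{i}>" for i, col in enumerate(ROW)}  # one token per column

        plain = row_to_text(ROW)
        compressed = row_to_compressed_text(ROW, vocab)
        print(plain)       # age is 39, workclass is State-gov, ...
        print(compressed)  # <c0> 39 <c1> State-gov <c2> Bachelors <c3> <=50K
        print(f"{len(plain.split())} vs {len(compressed.split())} whitespace-separated tokens")

    Shorter sequences per row mean fewer tokens per training example, which is the mechanism by which a compression scheme of this kind could shorten each training epoch.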
    • …