
    Boundary effect and dressed states of a giant atom in a topological waveguide

    The interaction between a quantum emitter and a topological photonic system makes photons behave in exotic ways. We here study the properties of a giant atom coupled to two sites of a one-dimensional topological waveguide, described by the Su-Schrieffer-Heeger (SSH) chain. We find that the giant atom can act as an effective boundary and induce chiral zero modes, similar to those of the SSH model with open boundaries, in a waveguide under periodic boundary conditions. Beyond this boundary effect, we also find that the giant atom can lift energy degeneracies inside the energy bands of the SSH chain and adjust the spatial symmetry of the photon distributions for the dressed states of the giant atom and waveguide. That is, the giant atom can be used to change the properties of the topological environment. Our work may stimulate further studies of the interaction between matter and topological environments. Comment: 7 pages, 4 figures
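    A giant atom coupled at two sites of an SSH chain is commonly modeled by a Hamiltonian of the following schematic form; the symbols here (intracell hopping v, intercell hopping w, atomic frequency ω_a, coupling strength g, coupling sites n₁ and n₂) are generic placeholders rather than the paper's specific notation:

    ```latex
    H = \sum_{n}\left( v\, a_n^{\dagger} b_n + w\, b_n^{\dagger} a_{n+1} + \mathrm{h.c.} \right)
        + \omega_a\, \sigma^{+}\sigma^{-}
        + g\left[ \sigma^{+}\left( c_{n_1} + c_{n_2} \right) + \mathrm{h.c.} \right],
    ```

    where a_n and b_n annihilate photons on the two sublattices of unit cell n, σ^± are the atomic raising/lowering operators, and c_{n_1}, c_{n_2} are the annihilation operators of the two waveguide sites the giant atom couples to.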

    DILF: Differentiable Rendering-Based Multi-View Image-Language Fusion for Zero-Shot 3D Shape Understanding

    Zero-shot 3D shape understanding aims to recognize “unseen” 3D categories that are not present in the training data. Recently, Contrastive Language–Image Pre-training (CLIP) has shown promising open-world performance in zero-shot 3D shape understanding tasks by fusing information between the language and 3D modalities. It first renders 3D objects into multiple 2D image views and then learns to understand the semantic relationships between the textual descriptions and images, enabling the model to generalize to new and unseen categories. However, existing studies in zero-shot 3D shape understanding rely on predefined rendering parameters, resulting in repetitive, redundant, and low-quality views. This limitation hinders the model’s ability to fully comprehend 3D shapes and adversely impacts text–image fusion in a shared latent space. To this end, we propose a novel approach called Differentiable rendering-based multi-view Image–Language Fusion (DILF) for zero-shot 3D shape understanding. Specifically, DILF leverages large language models (LLMs) to generate textual prompts enriched with 3D semantics and designs a differentiable renderer with learnable rendering parameters to produce representative multi-view images. These rendering parameters are iteratively updated using a text–image fusion loss, guiding the regression of the parameters and allowing the model to determine the optimal viewpoint positions for each 3D object. A group-view mechanism is then introduced to model interdependencies across views, enabling efficient information fusion and a more comprehensive 3D shape understanding. Experimental results demonstrate that DILF outperforms state-of-the-art methods for zero-shot 3D classification while maintaining competitive performance for standard 3D classification. The code is available at https://github.com/yuzaiyang123/DILP
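    The core loop — updating learnable rendering parameters with a text–image fusion loss — can be sketched in miniature. This is an illustrative toy, not DILF's implementation: a smooth map from camera angles to an embedding stands in for the differentiable renderer plus image encoder, a fixed vector stands in for the CLIP text embedding, and finite differences stand in for autodiff through the renderer.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical stand-in for "render from (azimuth, elevation), then embed":
    # a smooth map from two camera angles to a unit-norm embedding vector.
    W = rng.normal(size=(8, 2))

    def view_embedding(angles):
        e = np.tanh(W @ angles)
        return e / np.linalg.norm(e)

    # Pretend CLIP text embedding of the LLM-generated prompt (illustrative).
    text_emb = view_embedding(np.array([0.9, -0.4]))

    def fusion_loss(angles):
        # Text-image fusion loss: negative cosine similarity to the text.
        return -view_embedding(angles) @ text_emb

    # Iteratively update the learnable rendering parameters by gradient
    # descent (finite-difference gradient in place of autodiff).
    init = np.array([0.1, -0.1])
    loss0 = fusion_loss(init)
    angles, lr, eps = init.copy(), 0.05, 1e-6
    for _ in range(500):
        grad = np.array([
            (fusion_loss(angles + eps * np.eye(2)[i]) - fusion_loss(angles)) / eps
            for i in range(2)
        ])
        angles -= lr * grad
    ```

    After optimization, `angles` points the hypothetical camera at a view whose embedding aligns more closely with the text, mirroring how DILF regresses viewpoint positions per object.
    
    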

    Experimental preparation and verification of quantum money

    A quantum money scheme enables a trusted bank to provide untrusted users with verifiable quantum banknotes that cannot be forged. In this work, we report an experimental demonstration of the preparation and verification of unforgeable quantum banknotes. We employ a security analysis that takes experimental imperfections fully into account. We measure a total of 3.6×10^6 states in one verification round, limiting the forging probability to 10^-7 based on the security analysis. Our results demonstrate the feasibility of preparing and verifying quantum banknotes using currently available experimental techniques. Comment: 12 pages, 4 figures
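    Why measuring millions of states pins the forging probability so low can be illustrated with a toy independence model. This is a simplified sketch, not the paper's actual security analysis (which accounts for experimental imperfections and correlated attacks): assume each forged state passes an honest verification measurement independently with some probability q < 1, so passing every check decays exponentially in the number of states; the value of q below is purely illustrative.

    ```python
    import math

    def forging_probability(q, n):
        """Toy bound: chance that n independent checks all pass, each with prob q."""
        return q ** n

    # Even a per-state pass probability extremely close to 1 is crushed by
    # millions of measurements; compute log10 of the bound to avoid underflow.
    n = 3_600_000          # number of measured states, as in the experiment
    q = 0.99999            # hypothetical per-state pass probability for a forger
    log10_p = n * math.log10(q)
    ```

    With these illustrative numbers the toy bound is already far below 10^-7, showing qualitatively why large verification rounds yield strong unforgeability guarantees.
    
    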

    Investigation on data fusion of sun-induced chlorophyll fluorescence and reflectance for photosynthetic capacity of rice

    Studying crop photosynthesis is crucial for improving yield, but current methods are labor-intensive. This research aims to enhance accuracy by combining leaf reflectance and sun-induced chlorophyll fluorescence (SIF) signals to estimate key photosynthetic traits in rice. The study analyzes 149 leaf samples from two rice cultivars, considering reflectance, SIF, chlorophyll, carotenoids, and CO2 response curves. After noise removal, the SIF and reflectance spectra are used for data fusion at different levels (raw, feature, and decision). Competitive adaptive reweighted sampling (CARS) extracts features, and partial least squares regression (PLSR) builds the regression models. Results indicate that using either reflectance or SIF alone provides only modest estimates of photosynthetic traits. However, combining these data sources through raw-level (measurement-level) data fusion significantly improves accuracy, with feature-level and decision-level fusion also showing positive outcomes. In particular, decision-level fusion enhances predictive capability, suggesting the potential for efficient crop phenotyping. Overall, sun-induced chlorophyll fluorescence spectra effectively predict rice's photosynthetic capacity, and data fusion methods contribute to increased accuracy, paving the way for high-throughput crop phenotyping.
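    Decision-level fusion of the kind described can be sketched as follows. All data and model choices here are illustrative stand-ins: synthetic matrices replace the reflectance and SIF spectra, and ordinary least-squares regressors replace the CARS + PLSR pipeline; the point is only that fusion happens at the level of the two models' predictions.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic stand-ins: 149 samples (as in the study), 20 spectral bands each,
    # both modalities carrying a noisy linear signal about one photosynthetic trait.
    n, p = 149, 20
    trait = rng.normal(size=n)
    reflectance = trait[:, None] * rng.normal(size=p) + 0.8 * rng.normal(size=(n, p))
    sif = trait[:, None] * rng.normal(size=p) + 0.8 * rng.normal(size=(n, p))

    def fit_predict(X, y):
        # Least-squares regressor standing in for CARS feature selection + PLSR.
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        return X @ coef

    pred_refl = fit_predict(reflectance, trait)
    pred_sif = fit_predict(sif, trait)

    # Decision-level fusion: combine the two models' *predictions* (here a simple
    # average; a weighted average or a stacking model is also common).
    pred_fused = 0.5 * (pred_refl + pred_sif)

    def r2(y, yhat):
        return 1 - np.sum((y - yhat) ** 2) / np.sum((y - np.mean(y)) ** 2)
    ```

    Raw-level fusion would instead concatenate the two spectra before fitting a single model, and feature-level fusion would concatenate the selected features; decision-level fusion is the lightest-weight option because each modality keeps its own model.
    
    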

    A Neural-Guided Dynamic Symbolic Network for Exploring Mathematical Expressions from Data

    Symbolic regression (SR) is a powerful technique for discovering the underlying mathematical expressions from observed data. Inspired by the success of deep learning, recent SR methods have focused on two categories. One uses a neural network or genetic programming to search the expression tree directly. Although this has shown promising results, the large search space makes it difficult to learn constant factors and to handle high-dimensional problems. The other leverages a transformer-based model trained on synthetic data, which offers advantages in inference speed. However, this approach is limited to a fixed, small number of dimensions and may fail at inference when the given data are out-of-distribution relative to the synthetic training data. In this work, we propose DySymNet, a novel neural-guided Dynamic Symbolic Network for SR. Instead of searching for expressions within a large search space, we explore DySymNet instances with various structures and optimize them to identify expressions that better fit the data. With a topology like that of a neural network, DySymNet not only tackles the challenge of high-dimensional problems but also proves effective in optimizing constants. Based on extensive numerical experiments using low-dimensional public standard benchmarks and the well-known SRBench with more variables, our method achieves state-of-the-art performance in terms of fitting accuracy and robustness to noise.
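    The key idea — fixing a small network-like expression structure and optimizing its constants, rather than searching a huge discrete space — can be shown with a toy example. The structure c0·sin(c1·x) + c2 and the finite-difference optimizer below are illustrative only, not DySymNet's actual architecture or training procedure.

    ```python
    import numpy as np

    # Ground-truth expression the toy model should recover: y = 1.5*sin(2.0*x) + 0.3
    x = np.linspace(-3, 3, 200)
    y = 1.5 * np.sin(2.0 * x) + 0.3

    def expr(c, x):
        # A tiny "symbolic network": the operator structure is fixed
        # (a sin node feeding a linear node); only the constants are learned.
        return c[0] * np.sin(c[1] * x) + c[2]

    def mse(c):
        return np.mean((expr(c, x) - y) ** 2)

    # Optimize the constants by finite-difference gradient descent.
    c = np.array([1.0, 1.8, 0.0])   # rough initial guess near the true frequency
    lr, eps = 0.05, 1e-6
    for _ in range(2000):
        g = np.array([(mse(c + eps * np.eye(3)[i]) - mse(c)) / eps
                      for i in range(3)])
        c -= lr * g
    ```

    Because the discrete structure is held fixed, the problem reduces to continuous optimization of a few constants, which is exactly the part that pure tree-search approaches struggle with.
    
    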

    YOLO-FaceV2: A Scale and Occlusion Aware Face Detector

    In recent years, face detection algorithms based on deep learning have made great progress. These algorithms can generally be divided into two categories: two-stage detectors like Faster R-CNN and one-stage detectors like YOLO. Because of their better balance between accuracy and speed, one-stage detectors have been widely used in many applications. In this paper, we propose a real-time face detector based on the one-stage detector YOLOv5, named YOLO-FaceV2. We design a Receptive Field Enhancement module, called RFE, to enlarge the receptive field for small faces, and use NWD loss to compensate for the sensitivity of IoU to the location deviation of tiny objects. For face occlusion, we present an attention module named SEAM and introduce Repulsion Loss. Moreover, we use a weighting function, Slide, to address the imbalance between easy and hard samples, and use the information of the effective receptive field to design the anchors. Experimental results on the WiderFace dataset show that our face detector outperforms YOLO and its variants on all of the easy, medium, and hard subsets. Source code: https://github.com/Krasjet-Yu/YOLO-FaceV
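    The NWD loss mentioned above is built on the Normalized Wasserstein Distance from the tiny-object detection literature: each box is modeled as a 2D Gaussian and the metric is the exponentiated negative Wasserstein distance between the two Gaussians. A minimal sketch follows; the normalizing constant C is a dataset-dependent hyperparameter, and the value used here is illustrative.

    ```python
    import math

    def nwd(box_a, box_b, C=12.0):
        """Normalized Wasserstein Distance between boxes given as (cx, cy, w, h).

        Each box is modeled as a 2D Gaussian N([cx, cy], diag(w^2/4, h^2/4));
        the 2-Wasserstein distance between two such Gaussians has a closed
        form, and NWD = exp(-W2 / C) maps it into (0, 1].
        """
        (cx1, cy1, w1, h1), (cx2, cy2, w2, h2) = box_a, box_b
        w2_sq = ((cx1 - cx2) ** 2 + (cy1 - cy2) ** 2
                 + ((w1 - w2) / 2) ** 2 + ((h1 - h2) / 2) ** 2)
        return math.exp(-math.sqrt(w2_sq) / C)
    ```

    Unlike IoU, which drops to zero as soon as two tiny boxes stop overlapping, NWD degrades smoothly with center distance and size mismatch, which is why it is less sensitive to small location deviations of tiny faces.
    
    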