    Continuous-variable controlled-Z gate using an atomic ensemble

    The continuous-variable controlled-Z gate is a canonical two-mode gate for universal continuous-variable quantum computation and one of its most fundamental gates. Here we present a scheme for realizing a continuous-variable controlled-Z gate between two optical beams using an atomic ensemble. The gate is performed simply by sending the two beams, propagating in orthogonal directions, twice through a spin-squeezed atomic medium. Its fidelity approaches unity as the input atomic state becomes infinitely squeezed. Taking into account noise due to atomic decoherence and light losses, we show that the fidelities achievable with presently available techniques are still quite high. Comment: 7 pages, 3 figures, to appear in Physical Review
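    For reference, the ideal continuous-variable controlled-Z gate has a standard textbook form (general CV notation, not a detail taken from this paper): with quadratures \hat{x}_j, \hat{p}_j and gain g, the gate is the unitary

        \hat{U}_{CZ} = e^{i g \hat{x}_1 \hat{x}_2},

    which leaves both position quadratures unchanged and shifts each momentum quadrature by the other mode's position,

        \hat{x}_j \to \hat{x}_j, \qquad \hat{p}_1 \to \hat{p}_1 + g\,\hat{x}_2, \qquad \hat{p}_2 \to \hat{p}_2 + g\,\hat{x}_1.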

    Three-dimensional numerical study of flow characteristic and membrane fouling evolution in an enzymatic membrane reactor

    To improve the understanding of the membrane fouling mechanism, the hydrodynamics of granular flow in a stirred enzymatic membrane reactor is numerically investigated in the present study. A three-dimensional Euler-Euler model, coupled with a k-ε mixture turbulence model and a drag function for the interphase momentum exchange, is applied to simulate the two-phase (fluid-solid) turbulent flow. Numerical simulations of single- and two-phase turbulent flow at various stirring speeds are carried out, and the numerical results agree well with published experimental data. Distributions of velocity, shear stress, and turbulent kinetic energy are reported. The results show that increasing the stirring speed not only enlarges the circulation loops in the reactor but also increases the shear stress on the membrane surface and accelerates the mixing of the granular material. The time evolution of the volume fraction of granular material on the membrane surface qualitatively explains the evolution of membrane fouling. Comment: 10 pages, 8 figures
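    For reference, the governing equations of the Euler-Euler (two-fluid) approach take a standard form (textbook notation, not quoted from the paper): each phase q carries a volume fraction \alpha_q with \sum_q \alpha_q = 1 and obeys

        \frac{\partial}{\partial t}(\alpha_q \rho_q) + \nabla \cdot (\alpha_q \rho_q \mathbf{u}_q) = 0,

        \frac{\partial}{\partial t}(\alpha_q \rho_q \mathbf{u}_q) + \nabla \cdot (\alpha_q \rho_q \mathbf{u}_q \mathbf{u}_q) = -\alpha_q \nabla p + \nabla \cdot \boldsymbol{\tau}_q + \alpha_q \rho_q \mathbf{g} + K_{pq} (\mathbf{u}_p - \mathbf{u}_q),

    where \boldsymbol{\tau}_q is the phase stress tensor and the drag coefficient K_{pq} supplies the interphase momentum exchange mentioned above.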

    Extending low energy effective field theory with a complete set of dimension-7 operators

    We present a complete and independent set of dimension-7 operators in the low energy effective field theory (LEFT), in which the dynamical degrees of freedom are the five standard-model quarks and all of the neutral and charged leptons. All operators are non-Hermitian and are classified by the baryon number (ΔB) and lepton number (ΔL) they violate. Including Hermitian-conjugated operators, there are in total 3168, 750, 588, and 712 operators with (ΔB, ΔL) = (0, 0), (0, ±2), (±1, ∓1), and (±1, ±1), respectively. We perform the tree-level matching with the standard model effective field theory (SMEFT) up to dimension-7 (dim-7) operators in both LEFT and SMEFT. As a phenomenological application, we study the effective neutrino-photon interactions due to dim-7 lepton-number-violating operators that are induced, and much enhanced, at one loop by dim-6 operators that are in turn matched from dim-7 SMEFT operators. We compare the cross sections of various neutrino-photon scattering processes with their standard-model counterparts and highlight the new features. Finally, we illustrate how these effective interactions could arise from an ultraviolet completion. Comment: 16 pages, 3 figures
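    As context (standard EFT bookkeeping, not a formula specific to this paper), the LEFT Lagrangian is organized as an expansion in operator dimension, with dimension-d operators suppressed by d - 4 powers of the heavy scale \Lambda:

        \mathcal{L}_{\rm LEFT} = \mathcal{L}_{\rm QCD+QED} + \sum_{d \ge 5} \sum_i \frac{C_i^{(d)}}{\Lambda^{d-4}} \mathcal{O}_i^{(d)},

    so the dimension-7 operators classified here first enter at order 1/\Lambda^3.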

    ProtChatGPT: Towards Understanding Proteins with Large Language Models

    Protein research is crucial in various fundamental disciplines, but understanding proteins' intricate structure-function relationships remains challenging. Recent Large Language Models (LLMs) have made significant strides in comprehending task-specific knowledge, suggesting the potential for ChatGPT-like systems specialized in proteins to facilitate basic research. In this work, we introduce ProtChatGPT, which aims at learning and understanding protein structures via natural language. ProtChatGPT enables users to upload proteins, ask questions, and engage in interactive conversations to produce comprehensive answers. The system comprises protein encoders, a Protein-Language Pretraining Transformer (PLP-former), a projection adapter, and an LLM. A protein first passes through the protein encoders and the PLP-former to produce protein embeddings, which the adapter then projects to conform to the LLM's input space. The LLM finally combines the user's question with the projected embeddings to generate an informative answer. Experiments show that ProtChatGPT can produce promising responses to questions about proteins. We hope that ProtChatGPT can form the basis for further exploration and application in protein research. Code and our pre-trained model will be made publicly available.
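    To make the described pipeline concrete, here is a minimal sketch of the encoder → PLP-former → adapter → LLM data flow in PyTorch; all module sizes and names are invented placeholders, not ProtChatGPT's actual components.

        import torch
        import torch.nn as nn

        # Toy stand-ins for the components named in the abstract; the real system
        # uses large pretrained protein encoders and a full LLM.
        D_PROT, D_LLM, N_RES = 256, 512, 32

        class PLPFormer(nn.Module):
            """Toy PLP-former: a small transformer over per-residue features."""
            def __init__(self):
                super().__init__()
                layer = nn.TransformerEncoderLayer(d_model=D_PROT, nhead=4, batch_first=True)
                self.encoder = nn.TransformerEncoder(layer, num_layers=2)
            def forward(self, x):                      # (B, L, D_PROT)
                return self.encoder(x)                 # (B, L, D_PROT)

        protein_encoder = nn.Linear(20, D_PROT)        # toy encoder: residue one-hots -> features
        plp_former = PLPFormer()
        adapter = nn.Linear(D_PROT, D_LLM)             # projection adapter into the LLM space

        protein = torch.randn(1, N_RES, 20)            # placeholder residue features
        protein_embeds = adapter(plp_former(protein_encoder(protein)))  # (1, 32, 512)
        question_embeds = torch.randn(1, 12, D_LLM)    # embedded user question (placeholder)
        llm_input = torch.cat([protein_embeds, question_embeds], dim=1)
        print(llm_input.shape)                         # torch.Size([1, 44, 512]); fed to the LLM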

    Action Sensitivity Learning for the Ego4D Episodic Memory Challenge 2023

    This report presents the ReLER submission to two tracks of the Ego4D Episodic Memory Benchmark at CVPR 2023: Natural Language Queries and Moment Queries. The solution inherits from our proposed Action Sensitivity Learning (ASL) framework to better capture the discrepant information across frames. Further, we incorporate a series of stronger video features and fusion strategies. Our method achieves an average mAP of 29.34, ranking 1st in the Moment Queries Challenge, and a mean R1 of 19.79, ranking 2nd in the Natural Language Queries Challenge. Our code will be released. Comment: Accepted to the CVPR 2023 Ego4D Workshop; 1st in the Ego4D Moment Queries Challenge; 2nd in the Ego4D Natural Language Queries Challenge
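    The report does not spell out ASL's mechanics, but the general idea of weighting frames by their estimated sensitivity to the action can be sketched as follows; this is a hypothetical illustration, not the ReLER implementation.

        import torch

        # Hypothetical frame-sensitivity weighting: frames predicted to be more
        # action-sensitive contribute more to the training loss.
        T = 8
        frame_losses = torch.rand(T)                # per-frame localization losses (placeholder)
        sensitivity_logits = torch.randn(T)         # predicted per-frame sensitivity (placeholder)
        weights = torch.softmax(sensitivity_logits, dim=0) * T   # keep the mean weight near 1
        loss = (weights * frame_losses).mean()
        print(float(loss))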

    Random Entity Quantization for Parameter-Efficient Compositional Knowledge Graph Representation

    Representation Learning on Knowledge Graphs (KGs) is essential for downstream tasks. The dominant approach, KG Embedding (KGE), represents entities with independent vectors and faces a scalability challenge. Recent studies propose an alternative for parameter efficiency: representing entities by composing entity-corresponding codewords matched from predefined small-scale codebooks. We refer to the process of obtaining the corresponding codewords of each entity as entity quantization, for which previous works have designed complicated strategies. Surprisingly, this paper shows that simple random entity quantization can achieve results similar to those of current strategies. We analyze this phenomenon and reveal that entity codes, the quantization outcomes that express entities, have higher entropy at the code level and higher Jaccard distance at the codeword level under random entity quantization. Therefore, different entities become easier to distinguish, facilitating effective KG representation. These results show that current quantization strategies are not critical for KG representation and that there is still room to improve entity distinguishability beyond them. The code to reproduce our results is available at https://github.com/JiaangL/RandomQuantization. Comment: Accepted to EMNLP 2023
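    A minimal sketch of random entity quantization as described above (the sizes and sampling details are assumptions for illustration, not the paper's configuration):

        import random

        random.seed(0)
        NUM_ENTITIES, CODEBOOK_SIZE, CODE_LEN = 1000, 64, 8

        # Each entity's code is a random set of codeword indices drawn from a
        # small shared codebook, with no learned matching strategy.
        entity_codes = {
            e: frozenset(random.sample(range(CODEBOOK_SIZE), CODE_LEN))
            for e in range(NUM_ENTITIES)
        }

        def jaccard_distance(a, b):
            """1 - |intersection| / |union|: higher means two codes are more distinct."""
            return 1.0 - len(a & b) / len(a | b)

        # Average pairwise Jaccard distance over a random sample of entity pairs.
        pairs = [random.sample(range(NUM_ENTITIES), 2) for _ in range(500)]
        dists = [jaccard_distance(entity_codes[i], entity_codes[j]) for i, j in pairs]
        print(sum(dists) / len(dists))   # close to 1: random codes are easy to tell apart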

    Benchmarking Large Language Models on Controllable Generation under Diversified Instructions

    While large language models (LLMs) have exhibited impressive instruction-following capabilities, it is still unclear whether, and to what extent, they can respond to explicit constraints entailed in various instructions. As a significant aspect of LLM alignment, it is thus important to formulate such a specialized set of instructions and to investigate the resulting behavior of LLMs. To fill this gap, we propose a new benchmark, CoDI-Eval, to systematically and comprehensively evaluate LLMs' responses to instructions with various constraints. We construct a large collection of constraint-attributed instructions as a test suite focused on both generalization and coverage. Specifically, we use an instruction diversification process to synthesize diverse forms of constraint expression and carefully design the candidate task taxonomy with finer-grained sub-categories. Finally, we automate the entire evaluation process to facilitate further development. Unlike existing studies on controllable text generation, CoDI-Eval extends the scope to the prevalent instruction-following paradigm for the first time. We provide extensive evaluations of representative LLMs (e.g., ChatGPT, Vicuna) on CoDI-Eval, revealing their limitations in following instructions with specific constraints and showing that a significant gap remains between open-source and commercial closed-source LLMs. We believe this benchmark will facilitate research into improving the controllability of LLMs' responses to instructions. Our data and code are available at https://github.com/Xt-cyh/CoDI-Eval. Comment: Accepted to AAAI 2024
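    A toy version of such an automated evaluation loop might look like the following; the instructions, constraint checkers, and model stub are invented placeholders, not CoDI-Eval's actual data or code.

        # Each test case pairs a constrained instruction with an automatic checker.
        test_suite = [
            {"instruction": "Write one sentence about cats, in all lowercase.",
             "check": lambda resp: resp == resp.lower()},
            {"instruction": "Answer in at most 5 words.",
             "check": lambda resp: len(resp.split()) <= 5},
        ]

        def dummy_llm(instruction: str) -> str:
            return "cats are quiet companions."      # stand-in for a real model call

        passed = sum(case["check"](dummy_llm(case["instruction"])) for case in test_suite)
        print(f"constraint satisfaction: {passed}/{len(test_suite)}")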