
    Engineering Photon Delocalization in a Rabi Dimer with a Dissipative Bath

    A Rabi dimer is used to model a recently reported circuit quantum electrodynamics system composed of two coupled transmission-line resonators, each coupled to one qubit. In this study, a phonon bath is adopted to mimic the multimode micromechanical resonators and is coupled to the qubits in the Rabi dimer. The dynamical behavior of the composite system is studied by the Dirac-Frenkel time-dependent variational principle combined with the multiple Davydov D₂ ansätze. Initially all the photons are pumped into the left resonator, and the two qubits are in the down state coupled with the phonon vacuum. In the strong qubit-photon coupling regime, the photon dynamics can be engineered by tuning the qubit-bath coupling strength α, and photon delocalization is achieved by increasing α. In the absence of dissipation, photons are localized in the initial resonator. With moderate qubit-bath coupling, however, photons are delocalized, with quasiequilibration of the photon population in the two resonators at long times. In this case, high-frequency bath modes are activated by interacting with depolarized qubits. For strong dissipation, photon delocalization is achieved via frequent photon hopping between the two resonators, and the qubits remain suppressed in their initial down state. (11 pages, 11 figures)
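
    For concreteness, a plausible form of the model Hamiltonian is sketched below in standard Rabi-dimer notation (photon modes a_n, qubit Pauli operators σ^(n), inter-resonator hopping J, qubit-photon coupling g, and bath modes b_k with couplings φ_k whose overall scale is set by α); the authors' exact conventions may differ.

```latex
% Plausible Rabi dimer + dissipative bath Hamiltonian (assumed notation;
% not necessarily the paper's exact conventions).
\begin{aligned}
H ={}& \omega_c \bigl(a_1^\dagger a_1 + a_2^\dagger a_2\bigr)
      - J \bigl(a_1^\dagger a_2 + a_2^\dagger a_1\bigr)
      + \sum_{n=1,2} \Bigl[\tfrac{\varepsilon}{2}\,\sigma_z^{(n)}
      + g\,\sigma_x^{(n)} \bigl(a_n^\dagger + a_n\bigr)\Bigr] \\
    & + \sum_k \omega_k\, b_k^\dagger b_k
      + \sum_{n=1,2} \sigma_z^{(n)} \sum_k \phi_k \bigl(b_k^\dagger + b_k\bigr)
\end{aligned}
```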

    Synthesis of ultrathin platinum nanoplates for enhanced oxygen reduction activity.

    Ultrathin Pt nanostructures exposing controlled crystal facets are highly desirable for their superior activity and cost-effectiveness in the electrocatalytic oxygen reduction reaction (ORR). They are conventionally synthesized by epitaxial growth of Pt on a limited range of templates, such as Pd nanocrystals, which results in high cost and limited structural diversity of the ultrathin Pt nanostructures. To solve this problem, we demonstrate that ultrathin Pt nanostructures can be synthesized by templating conveniently available Ag nanocrystals without involving galvanic replacement, which enables a much-reduced cost and controllable new morphologies, such as ultrathin Pt nanoplates that expose the {111} facets. The resulting ultrathin Pt nanoplates are ∼1-2 nm in thickness and show an ∼22-fold increase in specific activity (5.3 mA cm⁻²), an ∼9.5-fold increase in mass activity (1.62 A mg⁻¹) and significantly enhanced catalytic stability in the ORR compared with the commercial Pt/C catalyst. We believe this strategy opens a door to a highly extendable family of ultrathin noble metal nanostructures, promising excellent activity and stability in a broad range of catalytic applications.
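
    As a quick sanity check on the reported fold-increases, the implied activities of the commercial Pt/C reference can be back-calculated from the numbers in the abstract (rough arithmetic only; the actual reference values come from the paper's measurements).

```python
# Back-calculate the implied commercial Pt/C baseline from the
# reported fold-increases (numbers taken from the abstract above).
specific_activity = 5.3   # mA cm^-2, ultrathin Pt nanoplates
mass_activity = 1.62      # A mg^-1, ultrathin Pt nanoplates
specific_fold, mass_fold = 22, 9.5

print(f"Implied Pt/C specific activity: {specific_activity / specific_fold:.2f} mA cm^-2")  # ~0.24
print(f"Implied Pt/C mass activity:     {mass_activity / mass_fold:.2f} A mg^-1")           # ~0.17
```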

    Efficient RLHF: Reducing the Memory Usage of PPO

    Reinforcement Learning with Human Feedback (RLHF) has revolutionized language modeling by aligning models with human preferences. However, the RL stage, Proximal Policy Optimization (PPO), requires over 3x the memory of Supervised Fine-Tuning (SFT), making it infeasible for most practitioners to use. To address this issue, we present a comprehensive analysis of the memory usage, performance, and training time of memory-saving techniques for PPO. We introduce Hydra-RLHF by first integrating the SFT and reward models and then dynamically turning LoRA "off" during training. Our experiments show that (1) using LoRA during PPO reduces its memory usage below that of SFT while improving alignment across four public benchmarks, and (2) Hydra-PPO reduces the latency per sample of LoRA-PPO by up to 65% while maintaining its performance. Our results demonstrate that Hydra-PPO is a simple and promising solution for enabling more widespread use of RLHF.
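
    A minimal sketch of the "LoRA off" idea follows (hypothetical class and attribute names, not the authors' implementation): a LoRA-augmented linear layer exposes a flag so the low-rank update can be bypassed on demand, letting the same frozen backbone serve as both the trainable policy and the integrated reference/reward model.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base linear layer plus a switchable low-rank (LoRA) update."""

    def __init__(self, in_features, out_features, rank=8, alpha=16.0):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)
        self.base.weight.requires_grad_(False)   # backbone stays frozen
        self.base.bias.requires_grad_(False)
        self.lora_a = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(out_features, rank))
        self.scaling = alpha / rank
        self.lora_enabled = True                 # toggle LoRA "off" at will

    def forward(self, x):
        out = self.base(x)
        if self.lora_enabled:
            out = out + (x @ self.lora_a.T @ self.lora_b.T) * self.scaling
        return out

layer = LoRALinear(512, 512)
x = torch.randn(4, 512)
policy_out = layer(x)        # LoRA active: policy forward pass
layer.lora_enabled = False
reference_out = layer(x)     # LoRA off: frozen reference forward pass
```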

    Causal-CoG: A Causal-Effect Look at Context Generation for Boosting Multi-modal Language Models

    While Multi-modal Language Models (MLMs) demonstrate impressive multimodal ability, they still struggle to provide factual and precise responses for tasks like visual question answering (VQA). In this paper, we address this challenge from the perspective of contextual information. We propose Causal Context Generation, Causal-CoG, a prompting strategy that engages contextual information to enhance precise VQA during inference. Specifically, we prompt MLMs to generate contexts, i.e., textual descriptions of an image, and engage the generated contexts for question answering. Moreover, we investigate the advantage of contexts on VQA from a causality perspective, introducing causality filtering to select samples for which contextual information is helpful. To show the effectiveness of Causal-CoG, we run extensive experiments on 10 multimodal benchmarks and show consistent improvements, e.g., +6.30% on POPE, +13.69% on Vizwiz and +6.43% on VQAv2 compared to direct decoding, surpassing existing methods. We hope Causal-CoG inspires explorations of context knowledge in multimodal models and serves as a plug-and-play strategy for MLM decoding.
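
    The two-step prompting loop might look like the sketch below, where `mlm_generate` is a placeholder for any multimodal model call; the concrete prompts and the majority-vote stand-in for causality filtering are simplifying assumptions, not the paper's exact recipe.

```python
def causal_cog_answer(image, question, mlm_generate, n_contexts=3):
    """Generate image contexts, then answer the question with them.

    mlm_generate(image, prompt) -> str is a placeholder for an MLM call.
    """
    answers = []
    for _ in range(n_contexts):
        # Step 1: prompt the model for a textual description of the image.
        context = mlm_generate(image, "Describe this image in detail.")
        # Step 2: answer conditioned on the generated context.
        prompt = f"Context: {context}\nQuestion: {question}\nAnswer:"
        answers.append(mlm_generate(image, prompt))
    # A direct (context-free) answer serves as the baseline; simple
    # majority voting here stands in for the paper's causality filtering.
    answers.append(mlm_generate(image, f"Question: {question}\nAnswer:"))
    return max(set(answers), key=answers.count)
```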

    Noise reduction optimization of sound sensor based on a Conditional Generation Adversarial Network

    To address the problems of traditional speech-signal noise elimination methods, such as residual noise, poor real-time performance and narrow applicability, a new method is proposed to eliminate network voice noise based on deep learning with a conditional generative adversarial network. With the perceptual evaluation of speech quality (PESQ) and short-time objective intelligibility (STOI) measures used as the loss functions in the neural network, the flexibility of the whole network was optimized and the training process of the model simplified. The experimental results indicate that, in noisy environments, especially in a restaurant, the proposed noise reduction scheme improves the STOI score by 26.23% and the PESQ score by 17.18% compared with the traditional Wiener noise reduction algorithm. The sound sensor's noise reduction scheme based on our approach therefore achieves a remarkable noise reduction effect, transmits more useful information, and has stronger practicability.
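
    The reported PESQ and STOI gains can be reproduced for any enhanced recording with the reference implementations in the `pesq` and `pystoi` Python packages; a minimal evaluation sketch follows (file names are placeholders, and wideband PESQ assumes 16 kHz audio).

```python
import soundfile as sf
from pesq import pesq      # ITU-T P.862 perceptual speech quality
from pystoi import stoi    # short-time objective intelligibility

clean, fs = sf.read("clean.wav")         # placeholder file names
enhanced, _ = sf.read("enhanced.wav")    # denoised cGAN output to score

pesq_score = pesq(fs, clean, enhanced, "wb")   # "wb" requires fs == 16000
stoi_score = stoi(clean, enhanced, fs, extended=False)
print(f"PESQ: {pesq_score:.2f}, STOI: {stoi_score:.3f}")
```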

    Adapting LLM Agents Through Communication

    Recent advancements in large language models (LLMs) have shown potential for human-like agents. To help these agents adapt to new tasks without extensive human supervision, we propose the Learning through Communication (LTC) paradigm, a novel training approach enabling LLM agents to improve continuously through interactions with their environments and other agents. Through iterative exploration and PPO training, LTC empowers the agent to assimilate short-term experiences into long-term memory. To optimize agent interactions for task-specific learning, we introduce three structured communication patterns: Monologue, Dialogue, and Analogue, tailored for common tasks such as decision-making, knowledge-intensive reasoning, and numerical reasoning. We evaluated LTC on three datasets: ALFWorld (decision-making), HotpotQA (knowledge-intensive reasoning), and GSM8k (numerical reasoning). On ALFWorld, it exceeds the instruction-tuning baseline by 12% in success rate. On HotpotQA, LTC surpasses the instruction-tuned LLaMA-7B agent by 5.1% in EM score, and it outperforms the instruction-tuned, 9x larger PaLM-62B agent by 0.6%. On GSM8k, LTC outperforms the CoT-Tuning baseline by 3.6% in accuracy. The results showcase the versatility and efficiency of the LTC approach across diverse domains. We will open-source our code to promote further development in the community.
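
    A rough sketch of one LTC cycle is given below; all names are illustrative placeholders, and the paper's actual buffer format, communication templates, and PPO machinery are more involved.

```python
from enum import Enum

class Pattern(Enum):
    MONOLOGUE = "monologue"   # single-agent rollouts for decision-making
    DIALOGUE = "dialogue"     # agent-agent exchanges for knowledge tasks
    ANALOGUE = "analogue"     # worked exemplars for numerical reasoning

def ltc_iteration(agent, env, pattern, replay_buffer, ppo_update):
    """One LTC cycle: explore, store the experience, then update with PPO."""
    trajectory = agent.interact(env, pattern=pattern.value)  # exploration
    replay_buffer.extend(trajectory)    # short-term -> long-term memory
    ppo_update(agent, replay_buffer)    # assimilate experience via RL
    return agent
```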