
    LivePhoto: Real Image Animation with Text-guided Motion Control

    Despite recent progress in text-to-video generation, existing studies usually overlook the issue that only the spatial contents, but not the temporal motions, of synthesized videos are under the control of text. To address this challenge, this work presents a practical system, named LivePhoto, which allows users to animate an image of their interest with text descriptions. We first establish a strong baseline that helps a well-learned text-to-image generator (i.e., Stable Diffusion) take an image as a further input. We then equip the improved generator with a motion module for temporal modeling and propose a carefully designed training pipeline to better link texts and motions. In particular, considering that (1) text can only describe motions roughly (e.g., regardless of the moving speed) and (2) text may include both content and motion descriptions, we introduce a motion intensity estimation module as well as a text re-weighting module to reduce the ambiguity of the text-to-motion mapping. Empirical evidence suggests that our approach is capable of decoding motion-related textual instructions into videos, such as actions, camera movements, or even conjuring new contents from thin air (e.g., pouring water into an empty glass). Interestingly, thanks to the proposed intensity learning mechanism, our system offers users an additional control signal (i.e., the motion intensity) besides text for video customization.
    Comment: Project page: https://xavierchen34.github.io/LivePhoto-Page
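    The text re-weighting idea can be pictured with a toy sketch: a softmax over per-token motion-relevance scores yields weights that emphasize motion words over content words. The function name, the example tokens, and the scores below are all illustrative assumptions, not the paper's actual module:

```python
import math

def reweight_tokens(tokens, motion_scores, temperature=1.0):
    # Hypothetical re-weighting step: tokens that describe motion get
    # larger weights, so a motion branch attends mostly to motion words.
    exps = [math.exp(s / temperature) for s in motion_scores]
    total = sum(exps)
    return {tok: e / total for tok, e in zip(tokens, exps)}

# Hypothetical motion-relevance scores for each token:
weights = reweight_tokens(["a", "dog", "runs", "quickly"],
                          [0.1, 0.2, 2.0, 1.5])
```

Here "runs" and "quickly" end up dominating the weight distribution, which is the intended effect of separating motion descriptions from content descriptions.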

    Towards Understanding the Capability of Large Language Models on Code Clone Detection: A Survey

    Code cloning, the duplication of code fragments, is common in software development. While some reuse aids productivity, excessive cloning hurts maintainability and introduces bugs. Hence, automatic code clone detection is vital. Meanwhile, large language models (LLMs) possess diverse code-related knowledge, making them versatile for various software engineering challenges. However, LLMs' performance in code clone detection is unclear and needs more study for accurate assessment. In this paper, we provide the first comprehensive evaluation of LLMs for clone detection, covering different clone types, languages, and prompts. We find advanced LLMs excel in detecting complex semantic clones, surpassing existing methods. Adding intermediate reasoning steps via chain-of-thought prompts noticeably enhances performance. Additionally, representing code as vector embeddings, especially with text encoders, effectively aids clone detection. Lastly, the ability of LLMs to detect code clones differs among various programming languages. Our study suggests that LLMs have potential for clone detection due to their language capabilities, offering insights for developing robust LLM-based methods to enhance software engineering.
    Comment: 13 pages, 3 figures
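    The embedding idea can be illustrated with a deliberately crude stand-in: represent each snippet as a bag-of-tokens vector and compare pairs by cosine similarity. A real system would obtain embeddings from a text encoder or an LLM; the `embed` function here is purely an assumption for illustration:

```python
import math
from collections import Counter

def embed(code):
    # Hypothetical stand-in for a text-encoder embedding: a sparse
    # bag-of-tokens vector. Real systems would call an encoder model.
    return Counter(code.replace("(", " ").replace(")", " ").split())

def cosine(a, b):
    # Cosine similarity between two sparse vectors (Counters).
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

snippet_a = "def add(x, y): return x + y"
snippet_b = "def add(a, b): return a + b"   # a near-clone of snippet_a
snippet_c = "print('hello world')"          # unrelated code

sim_clone = cosine(embed(snippet_a), embed(snippet_b))
sim_other = cosine(embed(snippet_a), embed(snippet_c))
```

The clone pair scores strictly higher than the unrelated pair, which is the property a clone detector built on embeddings relies on; the survey finds that encoder-derived embeddings capture this far better than token bags.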

    TRACE: A Comprehensive Benchmark for Continual Learning in Large Language Models

    Aligned large language models (LLMs) demonstrate exceptional capabilities in task-solving, following instructions, and ensuring safety. However, the continual learning aspect of these aligned LLMs has been largely overlooked. Existing continual learning benchmarks lack sufficient challenge for leading aligned LLMs, owing to both their simplicity and the models' potential exposure during instruction tuning. In this paper, we introduce TRACE, a novel benchmark designed to evaluate continual learning in LLMs. TRACE consists of 8 distinct datasets spanning challenging tasks including domain-specific tasks, multilingual capabilities, code generation, and mathematical reasoning. All datasets are standardized into a unified format, allowing for effortless automatic evaluation of LLMs. Our experiments show that after training on TRACE, aligned LLMs exhibit significant declines in both general ability and instruction-following capabilities. For example, the accuracy of llama2-chat 13B on the gsm8k dataset declined precipitously from 28.8% to 2% after training on our datasets. This highlights the challenge of finding a suitable tradeoff between achieving performance on specific tasks and preserving the original prowess of LLMs. Empirical findings suggest that tasks inherently equipped with reasoning paths contribute significantly to preserving certain capabilities of LLMs against potential declines. Motivated by this, we introduce the Reasoning-augmented Continual Learning (RCL) approach. RCL integrates task-specific cues with meta-rationales, effectively reducing catastrophic forgetting in LLMs while expediting convergence on novel tasks.
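    The scale of the decline quoted above is easy to quantify. Below is a minimal relative-forgetting metric, a simple formulation of my own for illustration rather than TRACE's official measure, applied to the llama2-chat numbers from the abstract:

```python
def forgetting(acc_before, acc_after):
    # Fraction of the original accuracy lost after continual training.
    return (acc_before - acc_after) / acc_before

# gsm8k accuracy of llama2-chat 13B before vs. after training on TRACE:
drop = forgetting(28.8, 2.0)
# about 93% of the original accuracy is lost
```

A drop of this magnitude is why the benchmark treats preservation of general ability, not just target-task accuracy, as part of the evaluation.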

    StepCoder: Improve Code Generation with Reinforcement Learning from Compiler Feedback

    The advancement of large language models (LLMs) has significantly propelled the field of code generation. Previous work integrated reinforcement learning (RL) with compiler feedback to explore the output space of LLMs and enhance code generation quality. However, the lengthy code generated by LLMs in response to complex human requirements makes RL exploration a challenge. Also, since the unit tests may not cover the complicated code, optimizing LLMs using these unexecuted code snippets is ineffective. To tackle these challenges, we introduce StepCoder, a novel RL framework for code generation, consisting of two main components: CCCS addresses the exploration challenge by breaking the long-sequence code generation task into a Curriculum of Code Completion Subtasks, while FGO optimizes the model only by masking the unexecuted code segments to provide Fine-Grained Optimization. In addition, we construct the APPS+ dataset for RL training, which is manually verified to ensure the correctness of unit tests. Experimental results show that our method improves the ability to explore the output space and outperforms state-of-the-art approaches on the corresponding benchmarks. Our dataset APPS+ and StepCoder are available online.
    Comment: 13 pages, 5 figures
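    The two components lend themselves to a short sketch. Both functions below are illustrative simplifications under assumed interfaces (evenly spaced curriculum stages, a per-token loss list with an execution mask), not StepCoder's actual implementation:

```python
def curriculum_subtasks(solution_lines, num_stages=3):
    # CCCS sketch: split a long reference solution into progressively
    # harder completion subtasks. The model is given a prefix and must
    # complete the rest; early stages leave only a short suffix to write.
    n = len(solution_lines)
    tasks = []
    for stage in range(num_stages, 0, -1):
        prefix_len = n * (stage - 1) // num_stages
        tasks.append({
            "prompt_prefix": solution_lines[:prefix_len],
            "to_complete": solution_lines[prefix_len:],
        })
    return tasks

def fgo_masked_loss(token_losses, executed_mask):
    # FGO sketch: only tokens belonging to code actually executed by
    # the unit tests contribute to the optimization signal.
    kept = [l for l, m in zip(token_losses, executed_mask) if m]
    return sum(kept) / max(len(kept), 1)

tasks = curriculum_subtasks(["a", "b", "c", "d", "e", "f"])
loss = fgo_masked_loss([1.0, 2.0, 3.0, 4.0], [True, False, True, False])
```

The first subtask asks only for the final third of the solution, and the masked loss averages over executed tokens alone, which is the rough shape of the exploration and optimization fixes the abstract describes.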

    Secrets of RLHF in Large Language Models Part I: PPO

    Large language models (LLMs) have formulated a blueprint for the advancement of artificial general intelligence. Their primary objective is to function as a human-centric (helpful, honest, and harmless) assistant. Alignment with humans assumes paramount significance, and reinforcement learning with human feedback (RLHF) emerges as the pivotal technological paradigm underpinning this pursuit. Current technical routes usually include reward models to measure human preferences, Proximal Policy Optimization (PPO) to optimize policy model outputs, and process supervision to improve step-by-step reasoning capabilities. However, due to the challenges of reward design, environment interaction, and agent training, coupled with the huge trial-and-error cost of large language models, significant barriers stand in the way of AI researchers developing technical alignment and safely deploying LLMs. The stable training of RLHF remains a puzzle. In this first report, we dissect the framework of RLHF, re-evaluate the inner workings of PPO, and explore how the parts comprising the PPO algorithm impact policy agent training. We identify policy constraints as the key factor for the effective implementation of the PPO algorithm. Therefore, we explore PPO-max, an advanced version of the PPO algorithm, to efficiently improve the training stability of the policy model. Based on our main results, we perform a comprehensive analysis of RLHF abilities compared with SFT models and ChatGPT. The absence of open-source implementations has posed significant challenges to the investigation of LLM alignment. Therefore, we are eager to release technical reports, reward models, and PPO code.
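    The policy-constraint finding refers to PPO's clipping mechanism. Below is the standard textbook clipped surrogate objective for a single action, shown as a minimal sketch rather than the paper's PPO-max variant; it makes concrete how the ratio between the new and old policies is bounded:

```python
import math

def ppo_clip_loss(logp_new, logp_old, advantage, eps=0.2):
    # Standard PPO clipped surrogate for one token/action. Clipping
    # constrains the policy ratio to [1 - eps, 1 + eps], the kind of
    # policy constraint the report identifies as key to stable training.
    ratio = math.exp(logp_new - logp_old)
    clipped = max(min(ratio, 1 + eps), 1 - eps)
    return -min(ratio * advantage, clipped * advantage)

# A large policy shift with positive advantage gets clipped:
loss = ppo_clip_loss(logp_new=0.5, logp_old=0.0, advantage=1.0)
# ratio = e^0.5 ≈ 1.65, clipped to 1.2, so loss = -1.2
```

Without the clip, the objective would reward arbitrarily large policy updates whenever the advantage is positive, which is exactly the instability that constrained variants of PPO try to suppress.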

    Secrets of RLHF in Large Language Models Part II: Reward Modeling

    Reinforcement Learning from Human Feedback (RLHF) has become a crucial technology for aligning language models with human values and intentions, enabling models to produce more helpful and harmless responses. Reward models are trained as proxies for human preferences to drive reinforcement learning optimization. While reward models are often considered central to achieving high performance, they face the following challenges in practical applications: (1) Incorrect and ambiguous preference pairs in the dataset may hinder the reward model from accurately capturing human intent. (2) Reward models trained on data from a specific distribution often struggle to generalize to examples outside that distribution and are not suitable for iterative RLHF training. In this report, we attempt to address these two issues. (1) From a data perspective, we propose a method to measure the strength of preferences within the data, based on a voting mechanism of multiple reward models. Experimental results confirm that data with varying preference strengths have different impacts on reward model performance. We introduce a series of novel methods to mitigate the influence of incorrect and ambiguous preferences in the dataset and fully leverage high-quality preference data. (2) From an algorithmic standpoint, we introduce contrastive learning to enhance the ability of reward models to distinguish between chosen and rejected responses, thereby improving model generalization. Furthermore, we employ meta-learning to enable the reward model to maintain the ability to differentiate subtle differences in out-of-distribution samples, and this approach can be utilized for iterative RLHF optimization.
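    Both ideas can be sketched in a few lines: a voting-based preference-strength estimate, and the standard pairwise (Bradley-Terry style) reward-model loss. The voting function is an assumed simplification of the paper's multi-model mechanism, not its actual formulation:

```python
import math

def preference_strength(votes):
    # Sketch of the voting idea: several reward models each vote whether
    # response A beats response B; the margin of agreement serves as a
    # preference-strength estimate in [-1, 1].
    a_wins = sum(votes)
    return (2 * a_wins - len(votes)) / len(votes)

def ranking_loss(reward_chosen, reward_rejected):
    # Standard pairwise reward-model loss: -log sigmoid(r_c - r_r).
    margin = reward_chosen - reward_rejected
    return -math.log(1 / (1 + math.exp(-margin)))

strength = preference_strength([True, True, True, False])  # 3 of 4 agree
loss_clear = ranking_loss(2.0, -1.0)      # confidently ordered pair
loss_ambiguous = ranking_loss(0.1, 0.0)   # nearly tied pair
```

A nearly tied pair yields a much larger loss than a confidently ordered one, which is why incorrect or ambiguous pairs can dominate training; down-weighting or filtering them by estimated strength is the data-side remedy the report proposes.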

    Intervening Effects of Total Alkaloids of Corydalis saxicola Bunting on Rats With Antibiotic-Induced Gut Microbiota Dysbiosis Based on 16S rRNA Gene Sequencing and Untargeted Metabolomics Analyses

    Gut microbiota dysbiosis induced by antibiotics is strongly connected with health concerns. Studying the mechanisms underlying antibiotic-induced gut microbiota dysbiosis could help to identify effective drugs and prevent many serious diseases. In this study, in rats with antibiotic-induced gut microbiota dysbiosis treated with total alkaloids of Corydalis saxicola Bunting (TACS), urinary and fecal biochemical changes and cecum microbial diversity were investigated using 16S rRNA gene sequencing analysis and untargeted metabolomics. The microbial diversity results showed that 10 genera were disturbed by the antibiotic treatment, and two of them were obviously restored by TACS. The untargeted metabolomics analysis identified 34 potential biomarkers in urine and feces that may be the metabolites most related to the mechanisms underlying antibiotic-induced gut microbiota dysbiosis and the therapeutic effects of TACS treatment. The biomarkers were involved in six metabolic pathways, comprising pathways related to branched-chain amino acid (BCAA), bile acid, arginine and proline, purine, aromatic amino acid, and amino sugar and nucleotide sugar metabolism. Notably, there was a strong correlation between these metabolic pathways and two gut microbiota genera (g__Blautia and g__Intestinibacter). The correlation analysis suggested that TACS might synergistically affect four of these metabolic pathways (BCAA, bile acid, arginine and proline, and purine metabolism), thereby modulating gut microbiota dysbiosis. Furthermore, we performed a high-precision molecular docking simulation, using molecular pathway maps to illuminate how the ligands (the five main alkaloid components of TACS) act on a complex molecular network, with CYP27A1 (a key enzyme in the bile acid synthesis pathway) as the target protein.
    This study provides a comprehensive overview of the intervening effects of TACS on the host metabolic phenotype and gut microbiome in rats with gut microbiota dysbiosis, and it presents new insights for the discovery of effective drugs and the best therapeutic approaches.

    Effects of Understory Vegetation Heterogeneity on Soil Organic Carbon Components in Cunninghamia lanceolata Plantation

    Understory vegetation is an important factor affecting forest soil organic carbon stocks. The effect of understory vegetation type on soil organic carbon and its components was explored to provide a theoretical basis for understory vegetation management and sustainable management of plantation forests. In order to determine the characteristics of soil organic carbon and its components under different understory vegetation types in a subtropical Cunninghamia lanceolata plantation, Indocalamus tessellatus, Diplazium donianum, and Oreocnide frutescens communities were taken as research objects. The mass fractions of total organic carbon, recalcitrant organic carbon, readily oxidizable organic carbon, microbial biomass carbon, and dissolved organic carbon in each soil layer at 0–10, 10–20, 20–40, and 40–60 cm were measured, and the change characteristics of soil organic carbon components were studied and compared. The results showed that: (1) The mass fractions of total organic carbon, recalcitrant organic carbon, readily oxidizable organic carbon, and microbial biomass carbon in the soils of the three understory vegetation types showed significant decreasing trends along the profile, while the mass fraction of dissolved organic carbon in the 0–40 cm soil layers was significantly higher than that in the 40–60 cm soil layer. (2) The mass fraction of total organic carbon (5.98–20.66 g·kg−1) showed no significant difference among understory vegetation types. The mass fraction and proportion of microbial biomass carbon were higher in the 0–60 cm soil layer under cover of Indocalamus tessellatus, and the mass fraction of recalcitrant organic carbon in the 20–40 cm soil layer under Indocalamus tessellatus cover (8.57 g·kg−1) was significantly higher than that under Oreocnide frutescens (5.73 g·kg−1). The 0–20 cm soil layer under the Diplazium donianum community had a higher mass fraction and proportion of readily oxidizable organic carbon.
    (3) Correlation analysis showed that soil organic carbon and its components were positively correlated with total nitrogen, dissolved total nitrogen, dissolved organic nitrogen, and microbial biomass nitrogen. There was a significant positive correlation among the components of soil organic carbon. (4) Redundancy analysis showed that soil bulk density (41.6%), microbial biomass nitrogen (41.2%), dissolved total nitrogen (43.7%), total nitrogen (9.9%), dissolved organic nitrogen (43.6%), and pH (6.6%) were the most significant environmental factors affecting organic carbon components in the four soil layers. Understory vegetation type can influence the distribution characteristics of soil organic carbon components in Cunninghamia lanceolata plantations, and soil active organic carbon components are more susceptible to the influence of understory vegetation type than total organic carbon and recalcitrant organic carbon.