
    On pressure and velocity flow boundary conditions and bounceback for the lattice Boltzmann BGK model

    Pressure (density) and velocity boundary conditions inside a flow domain are studied for 2-D and 3-D lattice Boltzmann BGK models (LBGK), and a new method to specify these conditions is proposed. The conditions are constructed consistently with the wall boundary condition, based on the idea of bouncing back the non-equilibrium part of the distribution. When these conditions are used together with the improved incompressible LBGK model of Zou et al., the simulation results recover the analytical solution of plane Poiseuille flow driven by a pressure (density) difference to machine accuracy. Since the half-way wall bounceback boundary condition is very easy to implement and has been shown theoretically to give second-order accuracy for 2-D Poiseuille flow with forcing, it is used with the pressure (density) inlet/outlet conditions proposed in this paper and in Chen et al. to study 2-D Poiseuille flow and 3-D square duct flow. The numerical results are approximately second-order accurate. The magnitude of the error of the half-way wall bounceback is comparable with that of some other published boundary conditions, and the bounceback condition has much better stability behavior than the other boundary conditions. Comment: 18 pages, one figure
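
    The non-equilibrium bounceback idea described in this abstract can be made concrete on a D2Q9 lattice. The sketch below is a minimal illustration in the style of Zou and He, not the paper's code: it assumes a common D2Q9 velocity numbering (0 = rest; 1, 2, 3, 4 = east, north, west, south; 5, 6, 7, 8 = NE, NW, SW, SE) and shows a velocity inlet on the west boundary.

```python
import numpy as np

def zou_he_west_velocity_inlet(f, ux, uy=0.0):
    """Velocity inlet on the west (x = 0) boundary of a D2Q9 lattice.

    f  : array of shape (9, ny) holding the distributions on the inlet column.
    ux : prescribed inflow velocity normal to the boundary.
    uy : prescribed tangential velocity (usually 0).

    Assumed velocity numbering (conventions vary between codes):
    0 rest, 1 east, 2 north, 3 west, 4 south, 5 NE, 6 NW, 7 SW, 8 SE.
    After streaming, f1, f5 and f8 point into the domain and are unknown.
    """
    # Density follows from the known populations and the prescribed velocity.
    rho = (f[0] + f[2] + f[4] + 2.0 * (f[3] + f[6] + f[7])) / (1.0 - ux)

    # Bounceback of the non-equilibrium part normal to the wall gives f1;
    # the corner populations also redistribute the transverse momentum.
    f[1] = f[3] + (2.0 / 3.0) * rho * ux
    f[5] = f[7] - 0.5 * (f[2] - f[4]) + (1.0 / 6.0) * rho * ux + 0.5 * rho * uy
    f[8] = f[6] + 0.5 * (f[2] - f[4]) + (1.0 / 6.0) * rho * ux - 0.5 * rho * uy
    return f, rho
```

    A pressure (density) inlet works the same way with the roles of rho and ux exchanged: rho is prescribed and ux is solved from the same mass balance before the unknown populations are reconstructed.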

    Psychological mechanisms of English academic stress and academic burnout: the mediating role of rumination and moderating effect of neuroticism

    Introduction: Academic stress is a significant and prevalent phenomenon among college students. According to the Demands-Resources Model, when individuals are unable to cope with stress that exceeds their capacity, burnout may occur. Although English courses hold a significant position in university education, there has been limited research on the mechanisms linking English academic stress to English academic burnout. Methods: This study recruited 1,130 undergraduate students taking English courses. Participants completed online questionnaires assessing English academic stress, rumination, English academic burnout, and neuroticism traits. A moderated mediation model was constructed to examine the relationships among these variables. Results: The results indicate that (1) rumination serves as a mediator in the relationship between English academic stress and burnout, and (2) neuroticism significantly moderates the pathway between English academic stress and rumination. Specifically, students with high neuroticism tendencies are more prone to developing rumination when faced with high levels of English academic stress. Conclusion: These findings offer valuable insights into the psychological mechanisms underlying the association between English learning stress and academic burnout. They emphasize the importance of addressing rumination as a mediator and considering individuals’ levels of neuroticism in interventions aimed at preventing and alleviating academic burnout among university students.
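
    The moderated mediation structure described here (stress → rumination → burnout, with neuroticism moderating the first path) can be sketched as two regressions. The code below is a hedged illustration on synthetic data, not the authors' analysis; the variable names and the simple two-step OLS approach (rather than a dedicated macro such as PROCESS) are assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data; columns mirror the constructs in the abstract.
rng = np.random.default_rng(0)
n = 1130
df = pd.DataFrame({
    "stress": rng.normal(size=n),        # English academic stress (X)
    "neuroticism": rng.normal(size=n),   # moderator (W)
})
df["rumination"] = 0.4 * df.stress + 0.2 * df.stress * df.neuroticism + rng.normal(size=n)
df["burnout"] = 0.3 * df.rumination + 0.2 * df.stress + rng.normal(size=n)

# a-path with moderation: rumination ~ stress * neuroticism (includes interaction)
a_model = smf.ols("rumination ~ stress * neuroticism", data=df).fit()
# b-path and direct effect: burnout ~ rumination + stress
b_model = smf.ols("burnout ~ rumination + stress", data=df).fit()

# The indirect (mediated) effect at neuroticism level w is (a1 + a3*w) * b,
# so it grows with neuroticism when the interaction a3 is positive.
a1 = a_model.params["stress"]
a3 = a_model.params["stress:neuroticism"]
b = b_model.params["rumination"]
for w in (-1.0, 0.0, 1.0):  # low / mean / high neuroticism
    print(f"indirect effect at W={w:+.1f}: {(a1 + a3 * w) * b:.3f}")
```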

    Occurrence and molecular characterization of Cryptosporidium in dogs in Henan Province, China

    BACKGROUND: Cryptosporidiosis in dogs has been reported worldwide, involving both asymptomatic and diarrheic dogs. Large-scale surveys of Cryptosporidium infection in dogs have been performed in some countries using different diagnostic methods. However, few data are available on the infection rate and molecular characteristics of Cryptosporidium spp. in dogs in China. RESULTS: In this study, 770 fecal samples from 66 locations in Henan Province were examined. The average Cryptosporidium infection rate was 3.8%, with dogs in kennels having the highest rate of 7.0% (χ² = 14.82, P < 0.01). The infection rate was 8.0% in dogs younger than 90 days, which was significantly higher than that in the other age groups (1.1–3.8%; χ² = 18.82, P < 0.01). No association was noted between the infection rate and the sex of the dogs. Twenty-nine Cryptosporidium-positive samples were amplified by PCR at the small subunit rRNA (SSU rRNA), 70-kDa heat shock protein (HSP70), and actin loci. Sequence analysis of these amplicons identified only Cryptosporidium canis, which showed 100% identity with the published sequences of the SSU rRNA, HSP70, and actin genes. CONCLUSIONS: Our results confirm that C. canis is prevalent in the dog population in China. Considering the large number of dogs in China and the close contact between dogs and humans, the role of C. canis in the transmission of human cryptosporidiosis warrants attention.
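
    The group comparisons reported above are standard chi-square tests on infection counts. The snippet below is only a schematic illustration of that kind of test with made-up counts (the raw counts are not given in the abstract); it does not reproduce the reported χ² values.

```python
from scipy.stats import chi2_contingency

# Hypothetical 2 x 2 table: rows = Cryptosporidium-positive / negative,
# columns = dogs younger than 90 days vs. older dogs.
# These counts are illustrative only, not the study's data.
table = [
    [16, 14],    # positive
    [184, 556],  # negative
]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p:.4f}, dof = {dof}")
```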

    ShareGPT4V: Improving Large Multi-Modal Models with Better Captions

    In the realm of large multi-modal models (LMMs), efficient modality alignment is crucial yet often constrained by the scarcity of high-quality image-text data. To address this bottleneck, we introduce the ShareGPT4V dataset, a pioneering large-scale resource featuring 1.2 million highly descriptive captions, which surpasses existing datasets in diversity and information content, covering world knowledge, object properties, spatial relationships, and aesthetic evaluations. Specifically, ShareGPT4V originates from a curated set of 100K high-quality captions collected from advanced GPT4-Vision and has been expanded to 1.2M with a superb caption model trained on this subset. ShareGPT4V first demonstrates its effectiveness for the Supervised Fine-Tuning (SFT) phase: by substituting an equivalent quantity of detailed captions in existing SFT datasets with a subset of our high-quality captions, it significantly enhances LMMs such as LLaVA-7B, LLaVA-1.5-13B, and Qwen-VL-Chat-7B on the MME and MMBench benchmarks, with respective gains of 222.8/22.0/22.3 and 2.7/1.3/1.5. We further incorporate ShareGPT4V data into both the pre-training and SFT phases, obtaining ShareGPT4V-7B, a superior LMM based on a simple architecture that shows remarkable performance across a majority of the multi-modal benchmarks. This project is available at https://ShareGPT4V.github.io to serve as a pivotal resource for advancing the LMMs community. Comment: Project: https://ShareGPT4V.github.io
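
    The SFT experiment described above amounts to swapping existing caption annotations for more detailed ShareGPT4V captions, keyed by image. The sketch below is a hypothetical illustration of that substitution step; the field names and file layout are assumptions, not the released dataset format.

```python
import json

def substitute_captions(sft_path, sharegpt4v_path, out_path):
    """Replace caption-style answers in an SFT set with detailed captions.

    Assumed (hypothetical) layout: both files are JSON lists of records
    keyed by "image"; ShareGPT4V records carry a "caption" field and SFT
    records carry a "conversations" list whose last turn is the answer.
    """
    with open(sharegpt4v_path) as f:
        detailed = {rec["image"]: rec["caption"] for rec in json.load(f)}

    with open(sft_path) as f:
        sft = json.load(f)

    replaced = 0
    for rec in sft:
        cap = detailed.get(rec.get("image"))
        if cap is not None:
            rec["conversations"][-1]["value"] = cap  # swap in the detailed caption
            replaced += 1

    with open(out_path, "w") as f:
        json.dump(sft, f, ensure_ascii=False, indent=2)
    print(f"replaced {replaced} of {len(sft)} answers")
```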

    MLLM-DataEngine: An Iterative Refinement Approach for MLLM

    Despite the great advances of Multimodal Large Language Models (MLLMs) in both instruction-dataset building and benchmarking, the separation of training and evaluation makes it hard for current MLLMs to further improve their capability under the guidance of evaluation results at a relatively low human cost. In this paper, we propose MLLM-DataEngine, a novel closed-loop system that bridges data generation, model training, and evaluation. Within each loop iteration, the MLLM-DataEngine first analyzes the weaknesses of the model based on the evaluation results, then generates a proper incremental dataset for the next training iteration, enhancing the model's capability iteratively. Compared with previous data-collection methods, which are separate from benchmarking, the data generated by MLLM-DataEngine shows better targeting, quality, and correctness. For targeting, we propose an Adaptive Bad-case Sampling module, which adjusts the ratio of different types of data within each incremental dataset based on the benchmarking results. For quality, we resort to GPT-4 to generate high-quality data for each given data type. For correctness, prompt design is critical to the generated data. Rather than relying on hand-crafted prompts as in previous work, we propose an Interactive Prompt Optimization strategy, which optimizes the prompt through multi-round interaction between humans and GPT and greatly improves the correctness of the generated data. Through extensive experiments, we find that MLLM-DataEngine can boost MLLM capability in a targeted and automatic manner, with only minimal human participation. We hope it can serve as a general solution for building future MLLMs. The MLLM-DataEngine has been open-sourced and is available at https://github.com/opendatalab/MLLM-DataEngine. Comment: Code and models are available at https://github.com/opendatalab/MLLM-DataEngine
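
    The Adaptive Bad-case Sampling idea — letting per-type error rates on the benchmark decide the mix of the next incremental dataset — can be sketched in a few lines. The function below is an assumed, simplified reading of that module, not the released implementation; the proportional-to-error-rate allocation rule is an illustrative choice.

```python
def allocate_incremental_data(error_rates, total_samples):
    """Split a data-generation budget across question types.

    error_rates   : dict mapping question type -> error rate on the benchmark
                    (e.g. {"counting": 0.42, "OCR": 0.18, "spatial": 0.30}).
    total_samples : number of new samples to generate this iteration.

    Types the model fails on more often receive a larger share of the budget.
    The proportional rule here is an illustrative simplification.
    """
    total_error = sum(error_rates.values())
    if total_error == 0:
        # Model is perfect on the benchmark; fall back to a uniform split.
        share = total_samples // len(error_rates)
        return {t: share for t in error_rates}
    return {
        t: round(total_samples * e / total_error)
        for t, e in error_rates.items()
    }

# Example: skew the next 10,000 generated samples toward weak question types.
print(allocate_incremental_data({"counting": 0.42, "OCR": 0.18, "spatial": 0.30}, 10_000))
```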

    SongComposer: A Large Language Model for Lyric and Melody Composition in Song Generation

    We present SongComposer, an innovative LLM designed for song composition. It can understand and generate melodies and lyrics in symbolic song representations by leveraging the capability of LLMs. Existing music-related LLMs treat music as quantized audio signals, and such implicit encoding leads to inefficient representation and poor flexibility. In contrast, we resort to symbolic song representation, the mature and efficient way humans have designed for music, and enable the LLM to explicitly compose songs like humans. In practice, we design a novel tuple format that ties a lyric to the three note attributes (pitch, duration, and rest duration) in the melody, which guarantees correct LLM understanding of musical symbols and realizes precise alignment between lyrics and melody. To impart basic music understanding to the LLM, we carefully collected SongCompose-PT, a large-scale song pretraining dataset that includes lyrics, melodies, and paired lyrics-melodies in either Chinese or English. After adequate pre-training, 10K carefully crafted QA pairs are used to equip the LLM with instruction-following capability and to solve diverse tasks. In extensive experiments, SongComposer demonstrates superior performance in lyric-to-melody generation, melody-to-lyric generation, song continuation, and text-to-song creation, outperforming advanced LLMs such as GPT-4. Comment: project page: https://pjlab-songcomposer.github.io/ code: https://github.com/pjlab-songcomposer/songcompose
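
    The lyric-note tuple format can be illustrated with a tiny encoder. The snippet below is a hypothetical sketch of such a representation; the abstract does not specify SongComposer's actual token syntax, so the delimiters and field names here are assumptions.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class NoteTuple:
    lyric: str          # syllable or word carried by the note
    pitch: str          # e.g. a scientific pitch name
    duration: float     # note length in beats
    rest: float         # rest after the note, in beats

def encode_song(tuples: List[NoteTuple]) -> str:
    """Serialize lyric-melody tuples into a flat text sequence an LLM can read.

    The "<lyric, pitch, duration, rest>" syntax is an illustrative choice,
    not SongComposer's actual vocabulary.
    """
    return " ".join(
        f"<{t.lyric}, {t.pitch}, {t.duration}, {t.rest}>" for t in tuples
    )

song = [
    NoteTuple("twin", "C4", 0.5, 0.0),
    NoteTuple("kle",  "C4", 0.5, 0.0),
    NoteTuple("twin", "G4", 0.5, 0.0),
    NoteTuple("kle",  "G4", 0.5, 0.5),
]
print(encode_song(song))
# <twin, C4, 0.5, 0.0> <kle, C4, 0.5, 0.0> <twin, G4, 0.5, 0.0> <kle, G4, 0.5, 0.5>
```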