
    Influences of Several Insecticides on the Survival of Lysiphlebus japonicus

    When pesticides are used to control soybean aphids, a fraction of the larvae, pupae (mummies), and adults of Lysiphlebus japonicus survive. To understand how pesticides affect the development of these surviving parasitoids, we carried out toxicity experiments with pesticides commonly used in the field and surveyed parasitoid survival. Original text in Chinese. Citation: Gao, Junffeng, Zhu, Junyi, Yu, Kai, Ren, Wenhui. (1993). Influences of Several Insecticides on the Survival of Lysiphlebus japonicus. Natural Enemies of Insects, 15(4), 160-161.

    Sentiment Analysis and Political Party Classification in 2016 U.S. President Debates in Twitter

    We introduce a framework that combines tweet sentiment analysis with available default user profiles to classify the political party of users who posted tweets during the 2016 U.S. presidential debates. The main focus is on extracting event-related information within a short event window, instead of collecting tweets over a long period as most previous work does. Our framework is not limited to debate events; researchers can use it to build a rationale for studying other events. In sentiment analysis, we show that all three Naïve Bayes classifiers with different distributions achieve accuracy above 75%, and the results reveal that positive tweets most likely follow Gaussian or multinomial distributions while negative tweets most likely follow a Bernoulli distribution in our training data. We also show that in an unbalanced, sparse term-document setting, tuning the Laplace smoothing parameter to adjust the weights of new terms in a tweet, instead of using the "Add-1" parameter, can improve the classifier's performance in the targeted direction. Finally, we show that sentiment might help classify political party.
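
    A minimal sketch of this kind of comparison, using scikit-learn's Naïve Bayes variants on a toy tweet corpus and a Laplace smoothing parameter below the default "Add-1" value; the corpus, labels, and alpha values are illustrative assumptions, not the authors' data or settings:

        # Hedged sketch (not the authors' code): comparing Naive Bayes variants with
        # different distributions on a toy tweet-sentiment task, and tuning the
        # Laplace smoothing parameter alpha instead of the default "Add-1" (alpha=1).
        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.naive_bayes import GaussianNB, MultinomialNB, BernoulliNB
        from sklearn.metrics import accuracy_score

        tweets = [
            "great debate performance tonight",      # positive (placeholder)
            "strong answers on the economy",         # positive (placeholder)
            "terrible evasive answers tonight",      # negative (placeholder)
            "weak performance and empty promises",   # negative (placeholder)
        ]
        labels = [1, 1, 0, 0]  # 1 = positive, 0 = negative

        vec = CountVectorizer()
        X = vec.fit_transform(tweets)

        # GaussianNB needs a dense array; the multinomial/Bernoulli variants accept sparse input.
        models = {
            "gaussian": (GaussianNB(), X.toarray()),
            "multinomial": (MultinomialNB(alpha=0.1), X),  # alpha < 1: lighter smoothing for rare terms
            "bernoulli": (BernoulliNB(alpha=0.1), X),
        }
        for name, (clf, features) in models.items():
            clf.fit(features, labels)
            acc = accuracy_score(labels, clf.predict(features))
            print(f"{name}: training accuracy = {acc:.2f}")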

    FoodGPT: A Large Language Model in Food Testing Domain with Incremental Pre-training and Knowledge Graph Prompt

    Currently, large language models for specific domains are typically built by fine-tuning a base model. Some models also incorporate knowledge bases without further pre-training, because the base model already acquired domain-specific knowledge during pre-training. We build a large language model for food testing. Unlike the approaches above, a significant amount of data in this domain exists as scanned domain standard documents, and there is also a large amount of structured knowledge that has not been trained on. Therefore, we introduce an incremental pre-training step to inject this knowledge into a large language model. In this paper, we propose a method for handling structured knowledge and scanned documents in incremental pre-training. To mitigate machine hallucination, we construct a knowledge graph to serve as an external knowledge base supporting retrieval for the large language model. Note that this paper is a technical report of our pre-release version; we will report specific experimental data in future versions.
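
    An illustrative sketch of the retrieval idea, in the spirit of using a knowledge graph as an external knowledge base to ground the model's answers; the triples, lookup logic, and prompt template are assumptions for demonstration, not FoodGPT's actual implementation:

        # Toy knowledge graph of (subject, relation, object) triples (placeholder content).
        food_kg = [
            ("aflatoxin B1", "regulated_by", "GB 2761"),
            ("GB 2761", "specifies_limit_for", "mycotoxins in food"),
            ("aflatoxin B1", "detected_by", "HPLC with fluorescence detection"),
        ]

        def retrieve_facts(query: str, kg: list[tuple[str, str, str]]) -> list[str]:
            """Return triples whose subject or object appears in the query (toy string match)."""
            query_lower = query.lower()
            return [
                f"{s} --{p}--> {o}"
                for s, p, o in kg
                if s.lower() in query_lower or o.lower() in query_lower
            ]

        def build_prompt(query: str) -> str:
            """Prepend retrieved facts so the model answers from external knowledge, not memory."""
            facts = retrieve_facts(query, food_kg)
            context = "\n".join(facts) if facts else "(no matching facts)"
            return (
                f"Known facts from the knowledge graph:\n{context}\n\n"
                f"Question: {query}\nAnswer using only the facts above."
            )

        print(build_prompt("Which standard regulates aflatoxin B1?"))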

    LLaMA-Reviewer: Advancing Code Review Automation with Large Language Models through Parameter-Efficient Fine-Tuning (Practical Experience Report)

    The automation of code review activities, a long-standing pursuit in software engineering, has been primarily addressed by numerous domain-specific pre-trained models. Despite their success, these models frequently demand extensive resources for pre-training from scratch. In contrast, Large Language Models (LLMs) provide an intriguing alternative, given their remarkable capabilities when supplemented with domain-specific knowledge. However, their potential for automating code review tasks remains largely unexplored. In response to this research gap, we present LLaMA-Reviewer, an innovative framework that leverages the capabilities of LLaMA, a popular LLM, in the realm of code review. Mindful of resource constraints, this framework employs parameter-efficient fine-tuning (PEFT) methods, delivering high performance while using less than 1% of trainable parameters. An extensive evaluation of LLaMA-Reviewer is conducted on two diverse, publicly available datasets. Notably, even with the smallest LLaMA base model consisting of 6.7B parameters and a limited number of tuning epochs, LLaMA-Reviewer equals the performance of existing code-review-focused models. The ablation experiments provide insights into the influence of various fine-tuning process components, including input representation, instruction tuning, and different PEFT methods. To foster continuous progress in this field, the code and all PEFT-weight plugins have been made open-source. Comment: Accepted to the 34th IEEE International Symposium on Software Reliability Engineering (ISSRE 2023).
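
    A minimal sketch of what parameter-efficient fine-tuning with a LoRA adapter looks like using Hugging Face's peft library; the checkpoint name, target modules, and hyperparameters are placeholders, not the configuration reported by LLaMA-Reviewer:

        # Hedged sketch: wrapping a causal LM with a LoRA adapter so that only a small
        # fraction of parameters is trainable. Model name and settings are illustrative.
        from transformers import AutoModelForCausalLM
        from peft import LoraConfig, TaskType, get_peft_model

        base = AutoModelForCausalLM.from_pretrained("huggyllama/llama-7b")  # placeholder checkpoint

        lora_config = LoraConfig(
            task_type=TaskType.CAUSAL_LM,
            r=8,                                  # low-rank adapter dimension
            lora_alpha=16,
            lora_dropout=0.05,
            target_modules=["q_proj", "v_proj"],  # attention projections typically adapted in LLaMA
        )

        model = get_peft_model(base, lora_config)
        model.print_trainable_parameters()  # typically well under 1% of the base model's parameters

    The wrapped model can then be trained with a standard fine-tuning loop, with only the adapter weights updated and saved.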

    Effects of a PRECEDE-PROCEED Model-Based Intervention on Fatigue in Patients With Coronary Heart Disease: A Randomized Controlled Trial

    Objective: This research aimed to determine how a 12-week PRECEDE-PROCEED model-based intervention affected fatigue in patients with coronary heart disease. Methods: This cluster randomized controlled trial recruited participants diagnosed with coronary heart disease at 2 community health centers in China. Participants in the control group (n = 36) received routine health education, whereas those in the intervention group (n = 38) were given a 12-week PRECEDE-PROCEED model-based intervention and routine health education. The intervention consisted of 6 training sessions on coronary heart disease, fatigue, fatigue management, self-management skills, and social support. A primary outcome (fatigue) and 4 secondary outcomes (knowledge of fatigue, self-management, quality of life, and body mass index) were assessed using the Fatigue Scale-14, the Fatigue Cognitive Questionnaire for Patients with Coronary Heart Disease, the Coronary Artery Disease Self-Management Scale, the Chinese Cardiovascular Questionnaire of Quality of Life, and an electronic weighing scale, respectively. Data were collected 3 times over 12 weeks. Results: Compared with the control group, the intervention group showed a statistically significant improvement in the level of fatigue (8.72 vs 7.06, P < .001), knowledge of fatigue (P < .001), self-management skills (P < .001), and quality of life (P < .001). However, there was no significant difference in body mass index between the 2 groups (P = .504). Conclusions: The findings suggest that a well-designed intervention based on the PRECEDE-PROCEED model could alleviate fatigue symptoms and increase knowledge of fatigue, self-management skills, and quality of life in patients with coronary heart disease.