
    Implicit Sentiment Analysis Based on Chain of Thought Prompting

    Full text link
    Implicit Sentiment Analysis (ISA) is a crucial research area in natural language processing. Inspired by large language model Chain-of-Thought (CoT) prompting, this paper introduces a Sentiment Analysis of Thinking (SAoT) framework. The framework first analyzes the implicit aspects and opinions in the text using common-sense and chain-of-thought reasoning, then reflects on that analysis, and finally deduces the sentiment polarity. The model is evaluated on the SemEval 2014 dataset, consisting of 1,120 restaurant reviews and 638 laptop reviews. The experimental results demonstrate that the ERNIE-Bot-4+SAoT model yields a notable performance improvement: on the restaurant dataset, the F1 score reaches 75.27 with an ISA score of 66.29, and on the laptop dataset, the F1 score reaches 76.50 with an ISA score of 73.46. The ERNIE-Bot-4+SAoT model surpasses the BERTAsp+SCAPt baseline by an average margin of 47.99%.
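    The three-step procedure described above maps naturally onto three chained LLM calls. Below is a minimal sketch of that loop in Python; the `complete` function stands in for whatever LLM client is used (the paper uses ERNIE-Bot-4), and the prompt wording is illustrative rather than the paper's actual prompts.

```python
# A minimal sketch of a SAoT-style analyze -> reflect -> deduce loop.
# `complete` is a placeholder for any LLM completion API.

def complete(prompt: str) -> str:
    """Placeholder: send `prompt` to an LLM and return its text reply."""
    raise NotImplementedError("wire up your LLM client here")

def saot_sentiment(review: str, target_aspect: str) -> str:
    # Step 1: surface the implicit aspect and opinion via common-sense reasoning.
    analysis = complete(
        f"Review: {review}\n"
        f"What is the implicit opinion about '{target_aspect}'? "
        "Reason step by step."
    )
    # Step 2: reflect on the analysis and check it against the review.
    reflection = complete(
        f"Review: {review}\nAnalysis: {analysis}\n"
        "Re-examine this analysis. Is anything inconsistent with the review?"
    )
    # Step 3: deduce the final sentiment polarity.
    polarity = complete(
        f"Review: {review}\nAnalysis: {analysis}\nReflection: {reflection}\n"
        "Final sentiment toward the aspect (positive/negative/neutral):"
    )
    return polarity.strip()
```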

    AutoTest: Evolutionary Code Solution Selection with Test Cases

    Full text link
    With the development of code generation techniques, selecting the correct code solution from multiple candidates has become a crucial task. This study proposes AutoTest, a novel technique that combines automated test-case generation with code-solution execution to optimize the selection process using an evolutionary genetic algorithm. First, AutoTest uses large pre-trained language models such as codegen-16B, code-davinci-002, and incoder-6B to produce code solutions and their corresponding test cases. Then, by executing the code solutions and evaluating their performance on the test cases, a consensus set is formed. Fine-grained ranking is achieved through the selection, mutation, and crossover mechanisms of the evolutionary genetic algorithm, with tunable alpha and beta parameters. Finally, the best code solution is chosen. AutoTest demonstrates significant performance gains on the HumanEval benchmark, which consists of 164 programming problems: it achieves approximately a 10% improvement over the baseline method in pass@1 score.
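    The consensus-set idea can be illustrated compactly: execute each candidate on each test, group candidates by identical pass patterns, and rank them. The sketch below assumes a sandboxed `run_test` placeholder and an illustrative alpha/beta weighted score; it omits the paper's selection/mutation/crossover loop and is not the authors' exact formulation.

```python
# A loose sketch of consensus-based ranking of candidate code solutions.

from collections import defaultdict
from typing import List, Tuple

def run_test(solution: str, test: str) -> bool:
    """Placeholder: execute `solution` against `test` in a sandbox."""
    raise NotImplementedError("wire up sandboxed execution here")

def rank_solutions(
    solutions: List[str],
    tests: List[str],
    alpha: float = 0.8,  # illustrative weight on consensus-set size
    beta: float = 0.2,   # illustrative weight on tests passed
) -> List[Tuple[str, float]]:
    # Pass pattern per solution: which of the tests it passes.
    patterns = {s: tuple(run_test(s, t) for t in tests) for s in solutions}
    # Consensus sets: solutions that behave identically on every test.
    groups = defaultdict(list)
    for sol, pat in patterns.items():
        groups[pat].append(sol)
    # Larger consensus set and more tests passed -> higher rank.
    scored = [(s, alpha * len(groups[p]) + beta * sum(p))
              for s, p in patterns.items()]
    return sorted(scored, key=lambda x: x[1], reverse=True)
```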

    Multi-tool Integration Application for Math Reasoning Using Large Language Model

    Full text link
    Mathematical reasoning is an important research direction in artificial intelligence. This article proposes a novel multi-tool application framework for mathematical reasoning that aims to achieve more comprehensive and accurate reasoning by exploiting the collaboration between large language models (LLMs) and multiple external tools. First, a Math Tool performs basic mathematical calculations during inference through interaction with the LLM. Second, a Code Tool generates syntactically valid code fragments and executes them, providing support for complex mathematical problems. Then, iterative reasoning with a CoT Tool enhances the logical coherence and accuracy of the mathematical reasoning. Finally, a self-consistency tool selects the final answer from runs with different parameters, improving the consistency and reliability of the reasoning. Through the synergy of these tools, the framework achieves significant performance improvement on mathematical reasoning tasks. We conducted experiments on the NumGLUE Task 4 test set, which includes 220 mathematical reasoning fill-in-the-blank questions. Based on the Math Tool, Code Tool, and CoT Tool, our Few-Shot+ERNIE-4.0+self-consistency method achieves an accuracy of 89.09% on Task 4, an improvement of 49.09% over the GPT-3+Few-Shot baseline and 52.29% over the fine-tuning baseline.
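    The self-consistency step at the end of the pipeline is straightforward to sketch: sample several independent tool-augmented reasoning passes and keep the majority answer. `solve_once` below is a placeholder for one pass through the Math/Code/CoT tools; nothing here reproduces the paper's actual prompts or tool interfaces.

```python
# A minimal sketch of self-consistency voting over tool-augmented passes.

from collections import Counter

def solve_once(question: str) -> str:
    """Placeholder: one LLM pass that may call Math/Code/CoT tools."""
    raise NotImplementedError("wire up the tool-augmented LLM pass here")

def self_consistent_answer(question: str, n_samples: int = 10) -> str:
    # Sample several independent reasoning paths...
    answers = [solve_once(question) for _ in range(n_samples)]
    # ...and return the answer the paths agree on most often.
    return Counter(answers).most_common(1)[0][0]
```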

    APAview: A Web-Based Platform for Alternative Polyadenylation Analyses in Hematological Cancers

    Get PDF
    Background: Hematologic malignancies, such as acute promyelocytic leukemia (APL) and acute myeloid leukemia (AML), are cancers that start in blood-forming tissues and can affect the blood, bone marrow, and lymph nodes. They are often caused by genetic and molecular alterations such as mutations and gene expression changes. Alternative polyadenylation (APA) is a post-transcriptional process that regulates gene expression, and dysregulation of APA contributes to hematological malignancies. RNA-sequencing-based bioinformatic methods can identify APA sites and quantify APA usage as a molecular index for studying the roles of APA in disease development, diagnosis, and treatment. Unfortunately, APA data pre-processing, analysis, and visualization are time-consuming, inconsistent, and laborious; a comprehensive, user-friendly tool would greatly simplify APA feature screening and mining. Results: Here, we present APAview, a web-based platform for exploring APA features in hematological cancers and performing APA statistical analysis. The APAview server runs on Python 3 with the Flask framework and the Jinja2 templating engine; for visualization, the APAview client is built on Bootstrap and Plotly. Multimodal data, such as APA usage quantified by QAPA/DaPars, gene expression data, and clinical information, can be uploaded to APAview and analyzed interactively. Correlation, survival, and differential analyses among user-defined groups can be performed via the web interface. Using APAview, we explored APA features in two hematological cancers, APL and AML. APAview can also be applied to other diseases by uploading different experimental data.
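    Since the abstract names Flask as the server framework, a correlation analysis endpoint of the kind APAview exposes might look like the sketch below. The route name, form fields, and CSV layout are illustrative assumptions, not APAview's actual API.

```python
# A minimal sketch of an APAview-style correlation endpoint: accept an
# uploaded table and correlate two user-chosen columns.

import pandas as pd
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/correlate", methods=["POST"])
def correlate():
    # Expect a CSV with one row per sample, e.g. APA usage and gene expression.
    table = pd.read_csv(request.files["data"])
    x, y = request.form["x"], request.form["y"]  # column names to compare
    r = table[x].corr(table[y], method="spearman")
    return jsonify({"x": x, "y": y, "spearman_r": round(float(r), 4)})

if __name__ == "__main__":
    app.run(debug=True)
```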

    Large expert-curated database for benchmarking document similarity detection in biomedical literature search

    Get PDF
    Document recommendation systems for locating relevant literature have mostly relied on methods developed a decade ago. This is largely due to the lack of a large offline gold-standard benchmark of relevant documents covering a variety of research fields, against which newly developed literature search techniques could be compared, improved, and translated into practice. To overcome this bottleneck, we have established the RElevant LIterature SearcH (RELISH) consortium, consisting of more than 1,500 scientists from 84 countries, who have collectively annotated the relevance of over 180,000 PubMed-listed articles with regard to their respective seed (input) articles. The majority of annotations were contributed by highly experienced, original authors of the seed articles. The collected data cover 76% of all unique PubMed Medical Subject Headings descriptors. No systematic biases were observed across different experience levels, research fields, or time spent on annotations. More importantly, annotations of the same document pairs contributed by different scientists were highly concordant. We further show that the three representative baseline methods used to generate recommended articles for evaluation (Okapi Best Matching 25, Term Frequency-Inverse Document Frequency, and PubMed Related Articles) had similar overall performance. Additionally, we found that these methods each tend to produce distinct collections of recommended articles, suggesting that a hybrid method may be required to completely capture all relevant articles. The established database server, located at https://relishdb.ict.griffith.edu.au, is freely available for downloading annotation data and for blind testing of new methods. We expect this benchmark to be useful for stimulating the development of new, powerful techniques for title- and title/abstract-based search engines for relevant articles in biomedical research.
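    Of the three baselines, TF-IDF is the simplest to reproduce. The sketch below ranks candidate abstracts by cosine similarity to a seed abstract using scikit-learn; it illustrates the general technique, not the consortium's evaluation code.

```python
# A minimal TF-IDF baseline: rank candidates by similarity to a seed article.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def rank_by_tfidf(seed_abstract: str, candidate_abstracts: list[str]) -> list[int]:
    vectorizer = TfidfVectorizer(stop_words="english")
    # Fit on seed + candidates so all documents share one vocabulary.
    matrix = vectorizer.fit_transform([seed_abstract] + candidate_abstracts)
    scores = cosine_similarity(matrix[0], matrix[1:]).ravel()
    # Indices of candidates, most similar first.
    return scores.argsort()[::-1].tolist()
```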

    Forecast Model of Breast Cancer Diagnosis Based on RF-AdaBoost

    No full text

    Development of an Automatic Lawnmower with Real-Time Computer Vision for Obstacle Avoidance

    Full text link
    This work develops an automatic lawnmower (Auto-Lawnmower) that uses computer vision for obstacle avoidance. Several critical issues were overcome in this development, including the design of a simplified convolutional neural network (CNN) for decision making and the collection of datasets large enough to train the Auto-Lawnmower. The following key strategies are adopted to ensure the necessary functionality and efficacy at minimum cost, with an eye toward possible mass production. First, we use an at-time avoidance strategy: the Auto-Lawnmower turns when it encounters an obstacle. This is possible because a lawnmower usually moves at very low speed (roughly walking pace). Second, we use the minimum necessary sensors to keep the system as simple and practical as possible. A single monocular video camera therefore serves as the sole sensor, used both (1) to record real-time video of the many situations the Auto-Lawnmower may encounter, which is labelled to build up the training datasets, and (2) to capture real-time images when the trained Auto-Lawnmower is in action, so it can decide whether to avoid an obstacle or move forward while cutting grass. A lawn dataset with labels, called LawNet, has been established for training CNNs; it collects a total of 168,542 labelled images taken from the perspective of our Auto-Lawnmower on lawns at the university campus. A concise CNN is created for high efficiency and trained on LawNet to drive the Auto-Lawnmower. A prototype was finally designed, built, and tested for mowing in real situations. It works well as designed with minimum sensors (a single camera) and hence has good potential for mass production.
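    As a rough illustration of what a concise CNN for this binary turn-or-mow decision could look like, here is a small PyTorch model. The layer sizes and the 64x64 input resolution are assumptions for the sketch, not the published architecture.

```python
# A minimal sketch of a concise CNN for the avoid/forward decision.

import torch
import torch.nn as nn

class LawnNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 16 -> 8
        )
        self.classifier = nn.Linear(64 * 8 * 8, 2)  # classes: avoid / forward

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(1))

# One camera frame, resized to 64x64 RGB, gives one steering decision.
frame = torch.randn(1, 3, 64, 64)
decision = LawnNet()(frame).argmax(dim=1)  # 0 = avoid, 1 = mow forward
```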