DQ-LoRe: Dual Queries with Low Rank Approximation Re-ranking for In-Context Learning
Recent advances in natural language processing, primarily propelled by Large
Language Models (LLMs), have showcased their remarkable capabilities grounded
in in-context learning. A promising avenue for guiding LLMs in intricate
reasoning tasks involves the utilization of intermediate reasoning steps within
the Chain-of-Thought (CoT) paradigm. Nevertheless, the central challenge lies
in the effective selection of exemplars for facilitating in-context learning.
In this study, we introduce a framework that leverages Dual Queries and
Low-rank approximation Re-ranking (DQ-LoRe) to automatically select exemplars
for in-context learning. Dual Queries first queries the LLM to obtain LLM-generated
knowledge, such as a CoT, and then queries the retriever to obtain the final exemplars
using both the question and this knowledge. Moreover, for the second query, LoRe
employs dimensionality reduction to refine exemplar selection,
ensuring close alignment with the input question's knowledge. Through extensive
experiments, we demonstrate that DQ-LoRe significantly outperforms prior
state-of-the-art methods in the automatic selection of exemplars for GPT-4,
enhancing performance from 92.5% to 94.2%. Our comprehensive analysis further
reveals that DQ-LoRe consistently outperforms retrieval-based approaches in
terms of both performance and adaptability, especially in scenarios
characterized by distribution shifts. DQ-LoRe pushes the boundary of in-context
learning and opens up new avenues for addressing complex reasoning challenges.
Our code is released at
https://github.com/AI4fun/DQ-LoRe. Comment: Accepted at ICLR 2024
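The pipeline described in this abstract amounts to two retrieval passes plus a low-rank re-ranking step. Purely as an illustration, and not the released implementation, a minimal sketch might look like the following, assuming a placeholder embed() encoder and an llm_generate_cot() call, with PCA standing in for the dimensionality-reduction step:

```python
# Minimal sketch of DQ-LoRe-style exemplar selection (not the released code).
# embed() and llm_generate_cot() are hypothetical placeholders for a sentence
# encoder and an LLM call, respectively.
import numpy as np
from sklearn.decomposition import PCA

def dq_lore_select(question, pool, embed, llm_generate_cot,
                   n_candidates=32, k=8, n_components=16):
    """Select k exemplars for `question` from `pool` (a list of (q, cot) pairs)."""
    # First query: ask the LLM for intermediate knowledge (a CoT) for the question.
    cot = llm_generate_cot(question)

    # Second query: retrieve candidates using both the question and the knowledge.
    query_vec = embed(question + " " + cot)
    pool_vecs = np.stack([embed(q + " " + c) for q, c in pool])
    sims = pool_vecs @ query_vec / (
        np.linalg.norm(pool_vecs, axis=1) * np.linalg.norm(query_vec) + 1e-9
    )
    cand_idx = np.argsort(-sims)[:n_candidates]

    # LoRe step: project the query and candidates to a low-rank space and re-rank.
    n_comp = min(n_components, len(cand_idx), pool_vecs.shape[1])
    reduced = PCA(n_components=n_comp).fit_transform(
        np.vstack([query_vec, pool_vecs[cand_idx]])
    )
    q_red, cand_red = reduced[0], reduced[1:]
    top = cand_idx[np.argsort(-(cand_red @ q_red))[:k]]
    return [pool[i] for i in top]
```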
TRIGO: Benchmarking Formal Mathematical Proof Reduction for Generative Language Models
Automated theorem proving (ATP) has become an appealing domain for exploring
the reasoning ability of the recent successful generative language models.
However, current ATP benchmarks mainly focus on symbolic inference, but rarely
involve the understanding of complex number combination reasoning. In this
work, we propose TRIGO, an ATP benchmark that not only requires a model to
reduce a trigonometric expression with step-by-step proofs but also evaluates a
generative LM's reasoning ability on formulas and its capability to manipulate,
group, and factor number terms. We gather trigonometric expressions and their
reduced forms from the web, annotate the simplification process manually, and
translate it into the Lean formal language system. We then automatically
generate additional examples from the annotated samples to expand the dataset.
Furthermore, we develop an automatic generator based on Lean-Gym to create
dataset splits of varying difficulties and distributions in order to thoroughly
analyze the model's generalization ability. Our extensive experiments show that
TRIGO poses a new challenge for advanced generative LMs, including GPT-4, which is
pre-trained on a considerable amount of open-source formal theorem-proving data,
and provides a new tool for studying generative LMs' ability in both formal and
mathematical reasoning. Comment: Accepted by EMNLP 2023. Code is available at
https://github.com/menik1126/TRIG
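TRIGO's items are step-by-step reductions of trigonometric expressions stated as formal goals. The benchmark itself is built on Lean and Lean-Gym; purely as an illustration, in Lean 4 / Mathlib syntax with Mathlib lemma names rather than the benchmark's own library, a comparable reduction goal might look like this:

```lean
import Mathlib.Analysis.SpecialFunctions.Trigonometric.Basic

-- Illustrative TRIGO-style goal: reduce sin(2x)/cos(x) to 2·sin(x), given cos(x) ≠ 0.
example (x : ℝ) (h : Real.cos x ≠ 0) :
    Real.sin (2 * x) / Real.cos x = 2 * Real.sin x := by
  rw [Real.sin_two_mul,   -- sin (2x) = 2 * sin x * cos x
      mul_div_assoc,      -- (2 * sin x) * cos x / cos x = (2 * sin x) * (cos x / cos x)
      div_self h,         -- cos x / cos x = 1
      mul_one]
```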
LEGO-Prover: Neural Theorem Proving with Growing Libraries
Despite the success of large language models (LLMs), theorem proving remains one
of the hardest reasoning tasks and is far from fully solved. Prior methods using
language models have demonstrated promising results, but they still struggle to
prove even middle-school-level theorems. One common limitation of these methods is
that they assume a fixed theorem library during the whole theorem-proving process.
However, creating new useful theorems, or even new theories, is not only helpful
but crucial for advancing mathematics and proving harder and deeper
results. In this work, we present LEGO-Prover, which employs a growing skill
library containing verified lemmas as skills to augment the capability of LLMs
used in theorem proving. By constructing the proof modularly, LEGO-Prover
enables LLMs to utilize existing skills retrieved from the library and to
create new skills during the proving process. These skills are further evolved
(by prompting an LLM) to enrich the library on another scale. Modular and
reusable skills are constantly added to the library to enable tackling
increasingly intricate mathematical problems. Moreover, the learned library
further bridges the gap between human proofs and formal proofs by making it
easier to impute missing steps. LEGO-Prover advances the state-of-the-art pass
rate on miniF2F-valid (48.0% to 57.0%) and miniF2F-test (45.5% to 47.1%).
During the proving process, LEGO-Prover also manages to generate over 20,000
skills (theorems/lemmas) and adds them to the growing library. Our ablation
study indicates that these newly added skills are indeed helpful for proving
theorems, resulting in an improvement from a success rate of 47.1% to 50.4%. We
also release our code and all the generated skills.
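The core loop described above, retrieving verified skills, proving modularly, and adding newly verified lemmas back to the library, can be summarized in a short sketch. The llm, verifier, and retrieve interfaces below are hypothetical placeholders, not the released implementation:

```python
# Minimal sketch of a LEGO-Prover-style proving loop with a growing skill library.
# `llm`, `verifier`, and `retrieve` are hypothetical placeholder interfaces.

def prove_with_growing_library(problems, llm, verifier, retrieve, library):
    """`library` is a list of verified lemmas ("skills") that grows over time."""
    results = {}
    for problem in problems:
        # Retrieve skills relevant to the current problem statement.
        skills = retrieve(problem, library, k=5)

        # Ask the LLM for a modular proof that may reuse the retrieved skills
        # and may propose new helper lemmas along the way.
        proof, new_lemmas = llm.prove(problem, skills)

        # Only verified artifacts are kept, so the library stays sound.
        results[problem] = verifier.check(problem, proof)
        for lemma in new_lemmas:
            if verifier.check(lemma.statement, lemma.proof):
                library.append(lemma)   # the library grows across problems
    return results
```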
Research on the relationship between transmission efficiency and input torque of manual transmission
Based on the manual transmission of a micro car, this paper analyzes the factors affecting transmission efficiency (TE) and derives a calculation formula for TE. A Matlab/Simulink calculation model is then built to determine how TE varies with input torque. In addition, a manual transmission test bench is designed and used to verify the theoretical simulation results. The bench adopts a common-DC-bus energy-feedback closed system, which returns the power generated by the load motor to the grid through the DC bus, saving electricity and reducing pollution. The test bench therefore reflects the variation trend of the manual transmission's TE reliably, and, beyond its energy savings, its versatility makes it applicable to further testing work. The TE data obtained from the test bench are compared with the simulation results: the two agree, with a discrepancy of less than 2%, which is within the allowable range. Most importantly, the bench test results statistically confirm the validity of the theoretical analysis, which is of significance to the research of TE.
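The qualitative trend such a model produces, TE rising with input torque, follows from splitting losses into a load-dependent part and a roughly constant part. A rough numerical sketch of that trend is shown below; the loss split and all coefficients are illustrative assumptions, not the paper's formula or measured data:

```python
# Rough sketch of transmission efficiency (TE) versus input torque.
# Coefficients are illustrative assumptions, not the paper's values.
import numpy as np

def transmission_efficiency(t_in, mesh_loss_coeff=0.02, fixed_loss_torque=1.2):
    """TE = output power / input power for one gear pair at constant speed.

    Losses are split into a load-dependent part (gear-mesh friction,
    proportional to input torque) and a load-independent part (oil churning,
    bearing and seal drag, modelled as a constant drag torque at the input).
    At constant speed the angular velocities cancel, so TE = (t_in - t_loss) / t_in.
    """
    t_loss = mesh_loss_coeff * t_in + fixed_loss_torque
    return (t_in - t_loss) / t_in

for t_in in np.linspace(10, 150, 8):          # input torque in N*m
    print(f"T_in = {t_in:6.1f} N*m  ->  TE = {transmission_efficiency(t_in):.3f}")
```

With this split, TE increases with input torque and approaches an asymptote set by the load-dependent loss coefficient, because the fixed losses become a smaller fraction of the transmitted power.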
Frontier Materials for Adsorption of Antimony and Arsenic in Aqueous Environments: A Review
As highly toxic and carcinogenic substances, antimony and arsenic often coexist and cause compound pollution. Heavy metal pollution in water significantly threatens human health and the ecological environment. This article elaborates on the sources and hazards of compound antimony and arsenic contamination and systematically discusses the research progress of treatment technologies for removing antimony and arsenic from water. Owing to its simple operation, high removal efficiency, low cost, and the regenerability and sustainable utilization of solid adsorbents, adsorption stands out among the many technologies for removing antimony and arsenic from wastewater. The adsorption performance of the adsorbent materials is the key to removing antimony and arsenic from water. Therefore, this article focuses on summarizing the characteristics, adsorption mechanisms, and performance of frontier adsorption materials, including MOFs, COFs, graphene, and biomass materials. The research and application progress of antimony and arsenic removal by these frontier materials is then described, and the adsorption effects of the various materials are objectively analyzed and comparatively evaluated. Finally, the characteristics, advantages, and disadvantages of the various frontier adsorption materials for removing antimony and arsenic from water are summarized to provide ideas for improving and innovating adsorption materials for water pollution treatment.
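Adsorption performance of such materials is typically quantified and compared by fitting equilibrium data to isotherm models such as Langmuir. As a brief illustration of that comparison (with synthetic data points, not values from the review), one might fit q_e = q_max·K_L·C_e / (1 + K_L·C_e) as follows:

```python
# Illustrative Langmuir isotherm fit, a common way adsorption capacity for
# Sb/As is quantified and compared across materials.  The data points below
# are synthetic, for demonstration only.
import numpy as np
from scipy.optimize import curve_fit

def langmuir(c_e, q_max, k_l):
    """Equilibrium uptake q_e (mg/g) versus equilibrium concentration c_e (mg/L)."""
    return q_max * k_l * c_e / (1.0 + k_l * c_e)

c_e = np.array([0.5, 1, 2, 5, 10, 20, 50])                  # mg/L (synthetic)
q_e = np.array([8.1, 14.7, 24.0, 40.2, 52.9, 63.5, 72.8])   # mg/g (synthetic)

(q_max, k_l), _ = curve_fit(langmuir, c_e, q_e, p0=(80, 0.1))
print(f"q_max = {q_max:.1f} mg/g, K_L = {k_l:.3f} L/mg")
```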
Water Quality Prediction Based on LSTM and Attention Mechanism: A Case Study of the Burnett River, Australia
Prediction of water quality is a critical aspect of water pollution control and prevention. The trend of water quality can be predicted from historical data collected through water quality monitoring and water environment management. The present study develops a long short-term memory (LSTM) network and its attention-based variant (AT-LSTM) to predict water quality in the Burnett River, Australia. The models introduce an attention mechanism after feature extraction of the water quality data for this section of the Burnett River, accounting for the effect of the sequence at different moments on the prediction and enhancing the influence of key features on the results. The study provides one-step-ahead and multistep-ahead forecasts of dissolved oxygen (DO) in the Burnett River using the LSTM and AT-LSTM models and compares their results. The outcomes demonstrate that including the attention mechanism improves the prediction performance of the LSTM model. The AT-LSTM-based water quality forecasting model developed in this study therefore predicts water quality in the Burnett River more accurately than the LSTM model and can inform the Water Quality Improvement Plan of Queensland, Australia.
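The AT-LSTM described here is an LSTM encoder whose per-step hidden states are re-weighted by an attention layer before the regression head. A minimal Keras sketch of that idea follows; the window length, feature count, and layer sizes are assumptions for illustration, not the study's configuration:

```python
# Minimal sketch of an attention-augmented LSTM (AT-LSTM) for one-step-ahead
# dissolved-oxygen forecasting.  Sizes below are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, Model

WINDOW, N_FEATURES = 24, 6        # 24 past time steps, 6 water-quality features

inputs = layers.Input(shape=(WINDOW, N_FEATURES))
h = layers.LSTM(64, return_sequences=True)(inputs)       # hidden state per step

# Simple additive attention over the time axis: score each step, softmax the
# scores, and take the weighted sum of hidden states as the context vector.
scores = layers.Dense(1)(h)                               # (batch, WINDOW, 1)
weights = layers.Softmax(axis=1)(scores)
context = layers.Lambda(lambda t: tf.reduce_sum(t[0] * t[1], axis=1))([h, weights])

outputs = layers.Dense(1)(context)                        # next-step DO value
model = Model(inputs, outputs)
model.compile(optimizer="adam", loss="mse")
model.summary()
```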
Monodisperse double-walled microspheres loaded with chitosan-p53 nanoparticles and doxorubicin for combined gene therapy and chemotherapy
10.1016/j.jconrel.2012.08.032, Journal of Controlled Release, 163(2), 130-135
Mechanism of drug release from double-walled PDLLA(PLGA) microspheres
10.1016/j.biomaterials.2013.02.015, Biomaterials, 34(15), 3902-3911