
    Numerical simulation of high intensity laser drilling of metals

    The objective of this study is to build an effective computer program to simulate high-beam-intensity laser drilling of metals. The study consists of four parts. In the first part, the history of lasers and of laser drilling simulation is reviewed. In the second part, the mathematical formulation is presented, the physical model of laser drilling is discussed, and the simulation method is described. In the third part, the simulation parameters are selected and the simulation results are analyzed; the calculated results are then compared with a GE experimental result, which shows that the simulation is successful. In the fourth part, conclusions are collected and recommendations for future development of the program are made.
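    The abstract does not state the governing equations; a common first-order check for this kind of simulation is a steady-state energy balance in which the absorbed beam intensity heats, melts, and vaporizes the removed material. The sketch below is a minimal Python illustration with generic material constants for a steel-like metal, not the model or parameters used in the study.

```python
# Steady-state energy-balance estimate of laser drilling (recession) velocity:
# absorbed intensity = energy needed per unit time to heat, melt, and vaporize
# the removed material. Constants are illustrative, not taken from the paper.

def drilling_velocity(intensity_w_m2, absorptivity=0.3,
                      rho=7800.0,    # density, kg/m^3
                      cp=500.0,      # specific heat, J/(kg*K)
                      dT=2900.0,     # heating from ambient to boiling point, K
                      L_m=2.7e5,     # latent heat of melting, J/kg
                      L_v=6.1e6):    # latent heat of vaporization, J/kg
    """Return a rough upper-bound drilling velocity in m/s."""
    absorbed = absorptivity * intensity_w_m2          # W/m^2 coupled into the metal
    energy_per_volume = rho * (cp * dT + L_m + L_v)   # J/m^3 to remove material
    return absorbed / energy_per_volume

if __name__ == "__main__":
    # A 10^10 W/m^2 (1 MW/cm^2) beam, typical of high-intensity drilling
    v = drilling_velocity(1e10)
    print(f"estimated recession velocity: {v * 1000:.1f} mm/s")
```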

    Self-prompted Chain-of-Thought on Large Language Models for Open-domain Multi-hop Reasoning

    In open-domain question answering (ODQA), most existing questions require single-hop reasoning over commonsense knowledge. To extend this task further, we formally introduce open-domain multi-hop reasoning (ODMR), which answers multi-hop questions with explicit reasoning steps in an open-domain setting. Recently, large language models (LLMs) have proven effective at ODQA without an external corpus. Furthermore, chain-of-thought (CoT) prompting boosts the reasoning capability of LLMs even further, with either manual or automated paradigms. However, existing automated methods lack quality assurance, while manual approaches suffer from limited scalability and poor diversity, which hinders the capabilities of LLMs. In this paper, we propose Self-prompted Chain-of-Thought (SP-CoT), an automated framework to mass-produce high-quality CoTs of LLMs, by LLMs and for LLMs. SP-CoT introduces an automated generation pipeline for high-quality ODMR datasets, an adaptive sampler for in-context CoT selection, and self-prompted inference via in-context learning. Extensive experiments on four multi-hop question-answering benchmarks show that SP-CoT not only significantly surpasses the previous SOTA methods on large-scale (175B) LLMs, but also nearly doubles the zero-shot performance of small-scale (13B) LLMs. Further analysis reveals the remarkable capability of SP-CoT to elicit direct and concise intermediate reasoning steps, recalling ~50% of intermediate answers on the MuSiQue-Ans dataset. (Comment: Accepted by Findings of EMNLP202)
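    The abstract names three pipeline stages without detailing them. A minimal sketch of such a loop, with a placeholder call_llm function, a naive word-overlap sampler, and hypothetical prompt wording (none of it taken from the paper), could look like this:

```python
# Sketch of an SP-CoT-style pipeline as described in the abstract:
# (1) self-generate multi-hop QA pairs with step-by-step rationales,
# (2) adaptively sample a few generated CoTs as in-context demonstrations,
# (3) answer a new question by in-context learning with those demonstrations.
# `call_llm`, the sampler, and the prompts are placeholders, not the paper's.

from typing import Dict, List

def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM API call (any client can be plugged in)."""
    raise NotImplementedError

def generate_odmr_examples(topics: List[str]) -> List[Dict[str, str]]:
    """Stage 1: have the LLM write multi-hop questions with numbered rationales."""
    examples = []
    for topic in topics:
        text = call_llm(
            f"Write a 2-hop question about {topic}, then answer it with "
            f"numbered intermediate reasoning steps and a final answer."
        )
        examples.append({"topic": topic, "cot": text})
    return examples

def select_demonstrations(pool: List[Dict[str, str]], question: str, k: int = 4):
    """Stage 2: adaptive selection; a naive word-overlap score stands in
    for the paper's adaptive sampler."""
    def overlap(example: Dict[str, str]) -> int:
        return len(set(example["cot"].lower().split()) & set(question.lower().split()))
    return sorted(pool, key=overlap, reverse=True)[:k]

def answer_with_self_prompted_cot(question: str, pool: List[Dict[str, str]]) -> str:
    """Stage 3: in-context inference using the self-generated CoTs."""
    demos = select_demonstrations(pool, question)
    context = "\n\n".join(d["cot"] for d in demos)
    return call_llm(f"{context}\n\nQuestion: {question}\nLet's think step by step.")
```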

    Self-Prompting Large Language Models for Zero-Shot Open-Domain QA

    Open-Domain Question Answering (ODQA) aims to answer questions without explicitly providing specific background documents. This task becomes notably challenging in a zero-shot setting where no data is available to train tailored retrieval-reader models. While recent Large Language Models (LLMs) like GPT-3 have demonstrated their effectiveness in zero-shot ODQA using direct prompting methods, these methods still fall short of fully harnessing the potential of LLMs when implicitly invoked. In this paper, we propose a Self-Prompting framework to explicitly utilize the massive knowledge encoded in the parameters of LLMs and their strong instruction understanding abilities. Concretely, we prompt LLMs step by step to generate multiple pseudo QA pairs with background passages and explanations entirely from scratch. These generated elements are then utilized for in-context learning. Experimental results show that our method significantly surpasses previous state-of-the-art zero-shot methods on three widely-used ODQA datasets and even achieves comparable performance with various customized fine-tuned models on full training data. Our code is available at https://github.com/lockon-n/self-prompting. (Comment: NAACL 202)
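    A rough sketch of the self-prompting idea described above, assuming a generic LLM client behind a placeholder call_llm and illustrative prompts rather than the released implementation:

```python
# Sketch of the Self-Prompting idea from the abstract: ask the LLM itself to write
# short background passages, then QA pairs with explanations, and reuse them as
# in-context demonstrations for zero-shot ODQA. Names and prompts are placeholders.

def call_llm(prompt: str) -> str:
    """Placeholder for any LLM client."""
    raise NotImplementedError

def build_pseudo_demonstrations(n: int = 8) -> list:
    """Generate pseudo (passage, question, answer, explanation) demonstrations."""
    demos = []
    for _ in range(n):
        passage = call_llm("Write a short, factual Wikipedia-style passage on any topic.")
        qa = call_llm(
            f"Passage: {passage}\nWrite one question answerable from the passage, "
            f"its short answer, and a one-sentence explanation."
        )
        demos.append(f"Passage: {passage}\n{qa}")
    return demos

def zero_shot_answer(question: str) -> str:
    """Answer a new question via in-context learning on the self-generated demos."""
    prompt = "\n\n".join(build_pseudo_demonstrations()) + f"\n\nQuestion: {question}\nAnswer:"
    return call_llm(prompt)
```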

    Testing Normal Means: The Reconcilability of the P Value and the Bayesian Evidence

    The problem of reconciling frequentist and Bayesian evidence in testing statistical hypotheses has been studied extensively in the literature. Most of the existing work considers cases without nuisance parameters, even though nuisance parameters are very common in practice. In this paper, we consider the reconcilability of the Bayesian evidence against the null hypothesis H0, in terms of the posterior probability of H0 being true, with the frequentist evidence against H0, in terms of the P value, in testing normal means when nuisance parameters are present. Reconcilability of the evidence is obtained both for testing a normal mean and for the Behrens-Fisher problem.
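    For context, the classical case without nuisance parameters already illustrates the kind of reconcilability at issue: for a one-sided test of a normal mean with known variance and a flat prior, the posterior probability of H0 equals the P value exactly. The short derivation below is standard background material, not the paper's harder nuisance-parameter case.

```latex
% One-sided test of a normal mean with known variance (no nuisance parameters):
% X_1,...,X_n ~ N(theta, sigma^2) with sigma known; H_0: theta <= 0 vs H_1: theta > 0.
\[
p(\bar{x}) \;=\; P_{\theta = 0}\!\left(\bar{X} \ge \bar{x}\right)
           \;=\; 1 - \Phi\!\left(\frac{\sqrt{n}\,\bar{x}}{\sigma}\right).
\]
% Under the flat prior pi(theta) \propto 1, the posterior is theta | x ~ N(xbar, sigma^2/n), so
\[
P(H_0 \mid x) \;=\; P(\theta \le 0 \mid x)
              \;=\; \Phi\!\left(-\frac{\sqrt{n}\,\bar{x}}{\sigma}\right)
              \;=\; 1 - \Phi\!\left(\frac{\sqrt{n}\,\bar{x}}{\sigma}\right)
              \;=\; p(\bar{x}).
\]
```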

    A Lactate Fermentation Mutant of Toxoplasma Stimulates Protective Immunity Against Acute and Chronic Toxoplasmosis

    Toxoplasma gondii is an important zoonotic pathogen that infects one-third of the world’s population and numerous animals, causing a significant healthcare burden and socioeconomic problems. Vaccination is an efficient way to reduce global seroprevalence; however, ideal vaccines are not yet available. We recently discovered that a Toxoplasma mutant lacking both lactate dehydrogenases LDH1 and LDH2 (Δldh) grew well in vitro but was unable to propagate in mice, making it a good live vaccine candidate. Here, we tested the protective efficacy of ME49 Δldh in a mouse model. Vaccinated mice were efficiently protected from lethal challenge with a variety of wild-type strains, including the type 1 strain RH, the type 2 strain ME49, the type 3 strain VEG, and a field isolate of Chinese 1. The protective efficacy of a single vaccination was nearly 100% in most cases, and it worked well against challenge with both tachyzoites and tissue cysts. Re-challenging parasites were unable to propagate in vaccinated mice, nor did they form tissue cysts. High levels of Toxoplasma-specific IgG were produced 30 days after immunization and remained high throughout the tests (at least 125 days). Passive immunization of naïve mice with sera from vaccinated mice did reduce parasite propagation, but the overall protection against infection was rather limited. On the other hand, Δldh immunization evoked elevated levels of Th1 cytokines, such as IFN-γ and IL-12, at early time points. In addition, splenocytes extracted from immunized mice induced rapid and robust production of IFN-γ and other pro-inflammatory cytokines upon T. gondii antigen stimulation. Together, these results suggest that cellular immune responses are the main contributors to the protective immunity elicited by Δldh vaccination, with humoral immunity contributing partially. We also generated uracil auxotrophic mutants in ME49 and compared their protective efficiencies to those of the Δldh mutants; the two types of mutants have similar properties as live vaccine candidates. Taken together, these results show that mutants lacking LDH were severely attenuated in virulence but induced strong anti-Toxoplasma immune responses, and are therefore good candidates for live vaccines.

    Task-specific Objectives of Pre-trained Language Models for Dialogue Adaptation

    Pre-trained Language Models (PrLMs) have been widely used as backbones in many Natural Language Processing (NLP) tasks. The common process of utilizing PrLMs is to first pre-train on large-scale general corpora with task-independent LM training objectives, and then fine-tune on task datasets with task-specific training objectives. Pre-training in a task-independent way enables the models to learn language representations that are universal to some extent, but it fails to capture crucial task-specific features. This leads to an incompatibility between pre-training and fine-tuning. To address this issue, we introduce task-specific pre-training on in-domain, task-related corpora with task-specific objectives. This procedure is placed between the original two stages to enhance the model's understanding of specific tasks. In this work, we focus on Dialogue-related Natural Language Processing (DrNLP) tasks and design a Dialogue-Adaptive Pre-training Objective (DAPO) based on important qualities for assessing dialogues that are usually ignored by general LM pre-training objectives. PrLMs trained with DAPO on a large in-domain dialogue corpus are then fine-tuned for downstream DrNLP tasks. Experimental results show that models with DAPO surpass those with general LM pre-training objectives and other strong baselines on downstream DrNLP tasks.
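    A schematic of the resulting three-stage procedure, with placeholder data and loss functions standing in for the general LM objective, the proposed DAPO objective, and the downstream task objective (the actual DAPO loss is not reproduced here):

```python
# Sketch of the three-stage process described in the abstract:
#   (1) general LM pre-training -> (2) dialogue-adaptive pre-training (DAPO) on an
#   in-domain dialogue corpus -> (3) fine-tuning on a downstream DrNLP task.
# All names below (losses, corpora, the model object) are placeholders.

from typing import Any, Callable, Iterable

def train(model: Any, data: Iterable, loss_fn: Callable, stage: str) -> Any:
    """Generic placeholder training loop: apply the given objective to the given data."""
    for batch in data:
        _ = loss_fn(model, batch)  # in a real setup: loss.backward(); optimizer.step()
    print(f"finished: {stage}")
    return model

# Placeholder objectives; a real DAPO loss would score dialogue-specific qualities
# (e.g. coherence, speaker consistency) rather than return a constant.
general_lm_loss = lambda model, batch: 0.0
dapo_loss       = lambda model, batch: 0.0
drnlp_task_loss = lambda model, batch: 0.0

model = object()  # stand-in for an off-the-shelf PrLM
model = train(model, ["general text"],   general_lm_loss, "general LM pre-training")
model = train(model, ["dialogue turns"], dapo_loss,       "dialogue-adaptive pre-training (DAPO)")
model = train(model, ["task examples"],  drnlp_task_loss, "task-specific fine-tuning")
```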