
    THE INTERACTION OF WAKES GENERATED BY SUBMERGED PROPAGATING OBJECTS WITH THE TURBULENT SUBSURFACE MIXED LAYER

    In this study, numerical simulations were conducted using OpenFOAM to investigate how changes in mixed layer depth (MLD) and in the speed and depth of a submerged body (SB) affect the observable signatures of the SB moving beneath a mixed layer. We studied the effect of these factors on both surface and interior temperature perturbations. This study has shown that the wake generated by an SB in the presence of a mixed layer has a greater surface temperature signature than one without it. Furthermore, the deeper the MLD, the greater the thermal signal. This is because when the mixed layer is present, the region is more weakly stratified than it would be without one, and the reduced buoyancy force permits fluid entering the mixed layer to penetrate farther. By varying the speed and depth of the SB, we found that a faster SB produces stronger turbulence, greater temperature change, and a larger area of surface wake penetration. Deeper SB motion also produces a greater surface temperature signal by dredging colder water from the SB's surroundings up to the surface. This study confirmed the possibility of detection through the surface temperature changes formed by the wakes inevitably generated by submerged objects.
    Office of Naval Research, Washington, DC 20375
    Dae-wi, Republic of Korea Navy
    Approved for public release. Distribution is unlimited.
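
    As a minimal illustration of the stratification argument above (a toy sketch, not the study's OpenFOAM setup; the density profiles and 30 m MLD below are assumed for illustration only), the squared Brunt-Väisälä frequency N² = -(g/ρ₀) dρ/dz collapses to near zero inside a well-mixed layer, which is why wake fluid entering it feels little restoring buoyancy and can penetrate farther toward the surface:

```python
import numpy as np

g = 9.81        # gravitational acceleration, m/s^2
rho0 = 1025.0   # reference seawater density, kg/m^3

def brunt_vaisala_sq(z, rho):
    """Squared buoyancy frequency N^2 = -(g/rho0) * drho/dz (z positive upward)."""
    return -(g / rho0) * np.gradient(rho, z)

# Hypothetical density profiles: uniform stratification vs. a 30 m mixed layer.
z = np.linspace(-100.0, 0.0, 201)      # depth grid, m (surface at z = 0)
rho_strat = rho0 - 0.02 * z            # density increases linearly with depth
mld = 30.0                             # assumed mixed layer depth, m
# Homogenize the upper 30 m at the density found at the mixed layer base.
rho_mld = np.where(z > -mld, rho_strat[np.argmin(np.abs(z + mld))], rho_strat)

for name, rho in (("no mixed layer", rho_strat), ("30 m mixed layer", rho_mld)):
    n2 = brunt_vaisala_sq(z, rho)
    print(f"{name}: mean N^2 in upper 30 m = {n2[z > -mld].mean():.2e} s^-2")
```

    The mixed-layer profile yields N² near zero in the upper 30 m, versus roughly 2e-4 s⁻² for the uniformly stratified case, consistent with the weaker buoyancy suppression of the wake described above.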

    Improving Neural Question Generation using Answer Separation

    Neural question generation (NQG) is the task of generating a question from a given passage with deep neural networks. Previous NQG models suffer from a problem in which a significant proportion of the generated questions include words from the question target, resulting in the generation of unintended questions. In this paper, we propose answer-separated seq2seq, which better utilizes the information from both the passage and the target answer. By replacing the target answer in the original passage with a special token, our model learns to identify which interrogative word should be used. We also propose a new module termed keyword-net, which helps the model better capture the key information in the target answer and generate an appropriate question. Experimental results demonstrate that our answer separation method significantly reduces the number of improper questions that include answers. Consequently, our model significantly outperforms previous state-of-the-art NQG models.
    Comment: The paper is accepted to AAAI 2019.
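
    A minimal sketch of the answer-separation preprocessing described above (an assumed reconstruction, not the authors' code; the token name `<a>` is hypothetical): the target answer span in the passage is replaced with a special token before encoding, so the model cannot copy answer words into the generated question.

```python
# Hypothetical special token standing in for the masked answer span.
ANSWER_TOKEN = "<a>"

def separate_answer(passage: str, answer: str) -> str:
    """Replace the first occurrence of the answer span with ANSWER_TOKEN."""
    idx = passage.find(answer)
    if idx == -1:
        return passage  # answer not found verbatim; leave passage unchanged
    return passage[:idx] + ANSWER_TOKEN + passage[idx + len(answer):]

passage = "Marie Curie won the Nobel Prize in Physics in 1903."
answer = "1903"
print(separate_answer(passage, answer))
# -> "Marie Curie won the Nobel Prize in Physics in <a>."
```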

    IterCQR: Iterative Conversational Query Reformulation with Retrieval Guidance

    Conversational search aims to retrieve passages containing essential information to answer queries in a multi-turn conversation. In conversational search, reformulating context-dependent conversational queries into stand-alone forms is imperative to effectively utilize off-the-shelf retrievers. Previous methodologies for conversational query reformulation frequently depend on human-annotated rewrites. However, these manually crafted queries often result in sub-optimal retrieval performance and incur high collection costs. To address these challenges, we propose Iterative Conversational Query Reformulation (IterCQR), a methodology that conducts query reformulation without relying on human rewrites. IterCQR iteratively trains the conversational query reformulation (CQR) model by directly leveraging information retrieval (IR) signals as a reward. Our IterCQR training guides the CQR model so that generated queries contain the necessary information from the previous dialogue context. Our proposed method shows state-of-the-art performance on two widely used datasets, demonstrating its effectiveness with both sparse and dense retrievers. Moreover, IterCQR exhibits superior performance in challenging settings such as generalization to unseen datasets and low-resource scenarios.
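
    A minimal sketch of the iterative idea behind using IR signals as a reward (the candidate rewrites and the overlap-based retrieval scorer below are toy stand-ins, not the paper's models): sample several candidate rewrites, score each against the gold passage with a retrieval signal, and keep the best-scoring rewrite as the pseudo-label that trains the next CQR iteration.

```python
import re

def tokens(s: str) -> set:
    return set(re.findall(r"\w+", s.lower()))

def ir_reward(query: str, gold_passage: str) -> float:
    """Toy retrieval signal: normalized token overlap with the gold passage."""
    q = tokens(query)
    return len(q & tokens(gold_passage)) / max(len(q), 1)

def select_pseudo_label(candidate_rewrites, gold_passage):
    """Keep the rewrite that best retrieves the gold passage."""
    return max(candidate_rewrites, key=lambda c: ir_reward(c, gold_passage))

# Toy turn: the user asked about Seoul, then "how many people live there?"
candidates = [
    "how many people live there",       # still context-dependent
    "how many people live in Seoul",    # self-contained rewrite
]
gold = "Seoul has a population of about 9.4 million people"
print(select_pseudo_label(candidates, gold))  # the self-contained rewrite wins
```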

    Asking Clarification Questions to Handle Ambiguity in Open-Domain QA

    Ambiguous questions persist in open-domain question answering because formulating a precise question with a unique answer is often challenging. Previously, Min et al. (2020) tackled this issue by generating disambiguated questions for all possible interpretations of the ambiguous question. This can be effective, but it is not ideal for providing an answer to the user. Instead, we propose to ask a clarification question, where the user's response helps identify the interpretation that best aligns with the user's intention. We first present CAMBIGNQ, a dataset consisting of 5,654 ambiguous questions, each with relevant passages, possible answers, and a clarification question. The clarification questions were efficiently created by generating them with InstructGPT and manually revising them as necessary. We then define a pipeline of tasks and design appropriate evaluation metrics. Lastly, we achieve 61.3 F1 on ambiguity detection and 40.5 F1 on clarification-based QA, providing strong baselines for future work.
    Comment: 15 pages, 4 figures.
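
    A minimal sketch of the clarify-then-answer flow described above (a hypothetical structure, not the paper's implementation; the example and field names are invented for illustration): detect whether a question is ambiguous, and if so ask a clarification question whose options map interpretations to answers.

```python
from dataclasses import dataclass

@dataclass
class AmbiguousExample:
    question: str            # possibly ambiguous user question
    clarification: str       # clarification question with options
    options_to_answers: dict # interpretation option -> answer

def answer_or_clarify(example, is_ambiguous, user_choice=None):
    if not is_ambiguous(example.question):
        # Unambiguous: answer directly from the single interpretation.
        return next(iter(example.options_to_answers.values()))
    if user_choice is None:
        return example.clarification  # ask the clarification question
    return example.options_to_answers[user_choice]

ex = AmbiguousExample(
    question="When did Harry Potter come out?",
    clarification="Do you mean the book (1997) or the film (2001)?",
    options_to_answers={"book": "1997", "film": "2001"},
)
print(answer_or_clarify(ex, is_ambiguous=lambda q: True))                      # asks the CQ
print(answer_or_clarify(ex, is_ambiguous=lambda q: True, user_choice="film"))  # -> "2001"
```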

    Dialogizer: Context-aware Conversational-QA Dataset Generation from Textual Sources

    To address the data scarcity issue in conversational question answering (ConvQA), a dialog inpainting method, which utilizes documents to generate ConvQA datasets, has been proposed. However, the original dialog inpainting model is trained solely on the dialog reconstruction task, resulting in the generation of questions with low contextual relevance due to insufficient learning of question-answer alignment. To overcome this limitation, we propose a novel framework called Dialogizer, which can automatically generate ConvQA datasets with high contextual relevance from textual sources. The framework incorporates two training tasks: question-answer matching (QAM) and topic-aware dialog generation (TDG). Moreover, re-ranking is conducted during the inference phase based on the contextual relevance of the generated questions. Using our framework, we produce four ConvQA datasets from documents in multiple domains. Through automatic evaluation with diverse metrics, as well as human evaluation, we validate that our proposed framework generates datasets of higher quality than the baseline dialog inpainting model.
    Comment: Accepted to EMNLP 2023 main conference.
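
    A minimal sketch of the inference-time re-ranking step described above (the token-overlap scorer below is a toy stand-in; Dialogizer instead scores contextual relevance with its learned QAM signal): score each generated question against the source context and keep the top candidates.

```python
import re

def tokens(s: str) -> set:
    return set(re.findall(r"\w+", s.lower()))

def relevance(question: str, context: str) -> float:
    """Toy contextual-relevance score: fraction of question tokens in context."""
    q = tokens(question)
    return len(q & tokens(context)) / max(len(q), 1)

def rerank(generated_questions, context, top_k=1):
    """Keep the top_k generated questions most relevant to the context."""
    return sorted(generated_questions,
                  key=lambda q: relevance(q, context), reverse=True)[:top_k]

context = "The Amazon river discharges more water than any other river."
candidates = [
    "What is the capital of Brazil?",          # low contextual relevance
    "Which river discharges the most water?",  # high contextual relevance
]
print(rerank(candidates, context))  # the on-topic question is kept
```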