Text Retrieval with Multi-Stage Re-Ranking Models
Text retrieval is the task of retrieving documents similar to a search query,
and it is important to improve retrieval accuracy while maintaining a certain
level of retrieval speed. Existing studies have reported accuracy improvements
using language models, but many do not account for the reduction in search
speed that comes with the increased performance. In this study, we propose a
three-stage re-ranking model that uses model ensembles or larger language
models to improve search accuracy while minimizing search delay. We first rank
documents with BM25 and language models, and then re-rank documents with high
similarity to the query using a model ensemble or a larger language model. In
our experiments, we train a MiniLM language model on the MS-MARCO dataset and
evaluate it in a zero-shot setting. Our proposed method achieves higher
retrieval accuracy while limiting the decay in retrieval speed.
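The cascade described above can be sketched in a few lines: a cheap BM25 pass scores every document, and only the top candidates are re-scored by a more expensive function (a stand-in here for the language-model or ensemble stages; the function names and the plain-Python BM25 are illustrative, not the paper's implementation).

```python
import math
from collections import Counter

def bm25_scores(query, docs, k1=1.5, b=0.75):
    """Okapi BM25 scores of tokenized `docs` against tokenized `query`."""
    n = len(docs)
    avgdl = sum(len(d) for d in docs) / n
    df = Counter()                       # document frequency of each term
    for d in docs:
        df.update(set(d))
    scores = []
    for d in docs:
        tf = Counter(d)
        s = 0.0
        for t in query:
            if tf[t] == 0:
                continue
            idf = math.log(1 + (n - df[t] + 0.5) / (df[t] + 0.5))
            norm = tf[t] + k1 * (1 - b + b * len(d) / avgdl)
            s += idf * tf[t] * (k1 + 1) / norm
        scores.append(s)
    return scores

def multi_stage_search(query, docs, top_k, rerank_score):
    """Rank all docs with BM25, then re-rank only the top_k candidates
    with a costlier scorer (e.g. a cross-encoder language model)."""
    scores = bm25_scores(query, docs)
    order = sorted(range(len(docs)), key=scores.__getitem__, reverse=True)
    head, tail = order[:top_k], order[top_k:]
    head.sort(key=lambda i: rerank_score(query, docs[i]), reverse=True)
    return head + tail                   # indices in final ranked order
```

Because the expensive scorer only ever sees `top_k` documents, the added latency is bounded regardless of collection size, which is the trade-off the abstract targets.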
LARCH: Large Language Model-based Automatic Readme Creation with Heuristics
Writing a readme is a crucial aspect of software development as it plays a
vital role in managing and reusing program code. Though it is a pain point for
many developers, automatically creating one remains a challenge even with the
recent advancements in large language models (LLMs), because it requires
generating an abstract description from thousands of lines of code. In this
demo paper, we show that LLMs are capable of generating coherent and
factually correct readmes if we can identify a code fragment that is
representative of the repository. Building upon this finding, we developed
LARCH (LLM-based Automatic Readme Creation with Heuristics) which leverages
representative code identification with heuristics and weak supervision.
Through human and automated evaluations, we illustrate that LARCH can generate
coherent and factually correct readmes in the majority of cases, outperforming
a baseline that does not rely on representative code identification. We have
made LARCH open-source and provided a cross-platform Visual Studio Code
interface and command-line interface, accessible at
https://github.com/hitachi-nlp/larch. A demo video showcasing LARCH's
capabilities is available at https://youtu.be/ZUKkh5ED-O4.Comment: This is a pre-print of a paper accepted at CIKM'23 Demo. Refer to the
DOI URL for the original publicatio
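The core idea of representative code identification can be illustrated with a toy heuristic scorer: rank each file in a repository by weak signals (entry-point filenames, top-level definitions, docstrings) and pick the best one to pass to the LLM. The signal set and weights here are assumptions for illustration, not LARCH's actual heuristics or weak supervision.

```python
import re

# Illustrative entry-point names; LARCH's real heuristics differ.
ENTRY_NAMES = {"main.py", "app.py", "cli.py", "__main__.py"}

def score_file(path, source):
    """Heuristically score how representative a file is of its repo."""
    name = path.rsplit("/", 1)[-1]
    score = 0
    if name in ENTRY_NAMES:
        score += 10                                  # entry points summarize intent
    score += 2 * len(re.findall(r"^def |^class ", source, re.M))
    score += source.count('"""')                     # docstrings carry descriptions
    return score

def pick_representative(repo):
    """repo: dict mapping path -> source; returns the best-scoring path,
    i.e. the fragment a readme-generating LLM would be prompted with."""
    return max(repo, key=lambda p: score_file(p, repo[p]))
```

The selected fragment, rather than the full repository, keeps the LLM prompt within context limits while preserving the information an abstract-level readme needs.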
Controlling keywords and their positions in text generation
One of the challenges in text generation is to control generation as intended
by a user. Previous studies have proposed to specify the keywords that should
be included in the generated text. However, this is insufficient to generate
text which reflect the user intent. For example, placing the important keyword
beginning of the text would helps attract the reader's attention, but existing
methods do not enable such flexible control. In this paper, we tackle a novel
task of controlling not only keywords but also the position of each keyword in
the text generation. To this end, we show that a method using special tokens
can control the relative position of keywords. Experimental results on
summarization and story generation tasks show that the proposed method can
control keywords and their positions. We also demonstrate that controlling the
keyword positions can generate summary texts that are closer to the user's
intent than baseline. We release our code
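One way to realize the special-token idea is to prefix each keyword with a token naming its relative position in the target text, producing a control string the model is conditioned on. The token names and the coarse begin/middle/end bucketing below are assumptions for illustration, not necessarily the paper's exact scheme.

```python
# Hypothetical position-control tokens for thirds of the target text.
POSITION_TOKENS = ["<pos_begin>", "<pos_middle>", "<pos_end>"]

def encode_keywords(keywords, target_tokens):
    """Build a control prefix tagging each keyword with the bucket of its
    relative position in the (tokenized) target text."""
    controls = []
    for kw in keywords:
        idx = target_tokens.index(kw)                # keyword's position
        bucket = min(2, 3 * idx // len(target_tokens))
        controls.append(f"{POSITION_TOKENS[bucket]} {kw}")
    return " ".join(controls)
```

At training time such a prefix is computed from the reference text; at inference time the user writes it directly, e.g. `<pos_begin> earthquake` to push "earthquake" toward the opening of a summary.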
An Integrated Robust Parsing using Multiple Knowledge Sources
Natural language communication with computers has been a major goal of artificial intelligence (AI). Database systems and expert systems require a flexible interface that allows users to communicate in natural languages such as Japanese and English when they cannot use artificial command languages. To satisfy this requirement, many natural language processing (NLP) systems have been proposed, but most of them assume that all input sentences from users are grammatically correct. However, when users communicate with an NLP system, they may input grammatically ill-formed sentences, especially in spoken language interfaces. For example, users often omit words, change word order, or make careless errors such as agreement errors, spelling errors, and the addition of extra words. To use NLP systems in practical applications, we need to construct an NLP system capable of handling not only grammatically well-formed sentences but also ill-formed ones.
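One of the error types listed above, spelling errors, can be handled by a lexical knowledge source that maps an unknown word to its nearest dictionary entry by edit distance. This is a toy stand-in for one of the abstract's multiple knowledge sources, not the system's actual method.

```python
def edit_distance(a, b):
    """Levenshtein distance via a single rolling DP row."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,        # delete ca
                                     dp[j - 1] + 1,    # insert cb
                                     prev + (ca != cb))  # substitute
    return dp[-1]

def nearest_word(word, lexicon, max_dist=2):
    """Map a possibly misspelled word to the closest lexicon entry,
    or None if nothing is within max_dist edits."""
    best = min(lexicon, key=lambda w: edit_distance(word, w))
    return best if edit_distance(word, best) <= max_dist else None
```

A robust parser would consult several such sources (lexical, syntactic, semantic) and integrate their hypotheses, falling back to relaxed constraints when strict parsing fails.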
An Integrated Framework for Processing Grammatically Ill-formed Sentences
http://library.naist.jp/mylimedio/dllimedio/show.cgi?bookid=100008596&oldid=11628
Master's thesis (Engineering), No. 15
Robust Natural Language Processing Integrating Multiple Knowledge Sources
https://library.naist.jp/mylimedio/dllimedio/show.cgi?bookid=100047449&oldid=95701
Doctoral thesis (Engineering), No. 13 (Kō No. 13), Nara Institute of Science and Technology
CHICOT: A Developer-Assistance Toolkit for Code Search with High-Level Contextual Information
We propose a source code search system named CHICOT (Code search with HIgh level COnText) to assist developers in reusing existing code.
While previous studies have examined code search on the basis of code-level, fine-grained specifications such as functionality, logic, or implementation, CHICOT addresses a unique mission: code search with high-level contextual information, such as the purpose or domain of a developer's project.
It achieves this by first extracting the context information from codebases and then taking this context into account during the search.
It provides a VSCode plugin for daily coding assistance, and the built-in crawler ensures up-to-date code suggestions.
The case study attests to the utility of CHICOT in real-world scenarios.
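The idea of mixing code-level and high-level signals can be sketched as a scoring function that combines query overlap with the snippet itself and with context keywords extracted from the snippet's codebase (e.g. its project description). The scoring and the `alpha` weight are illustrative assumptions, not CHICOT's actual ranking algorithm.

```python
def overlap(a, b):
    """Number of shared tokens between two token lists."""
    return len(set(a) & set(b))

def contextual_search(query, index, alpha=0.5):
    """index: list of (snippet_tokens, context_tokens) pairs, where the
    context tokens describe the snippet's project purpose or domain.
    Returns the index of the best-scoring entry."""
    scores = [overlap(query, snippet) + alpha * overlap(query, context)
              for snippet, context in index]
    return max(range(len(index)), key=scores.__getitem__)
```

With this weighting, two functionally identical snippets are separated by their project context, so a query mentioning a domain (say, "web server") prefers code drawn from a matching project.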