    LARCH: Large Language Model-based Automatic Readme Creation with Heuristics

    Writing a readme is a crucial aspect of software development, as it plays a vital role in managing and reusing program code. Though it is a pain point for many developers, automatically creating one remains a challenge even with the recent advancements in large language models (LLMs), because it requires generating an abstract description from thousands of lines of code. In this demo paper, we show that LLMs are capable of generating coherent and factually correct readmes if we can identify a code fragment that is representative of the repository. Building upon this finding, we developed LARCH (LLM-based Automatic Readme Creation with Heuristics), which leverages representative code identification with heuristics and weak supervision. Through human and automated evaluations, we illustrate that LARCH can generate coherent and factually correct readmes in the majority of cases, outperforming a baseline that does not rely on representative code identification. We have made LARCH open-source and provided a cross-platform Visual Studio Code interface and command-line interface, accessible at https://github.com/hitachi-nlp/larch. A demo video showcasing LARCH's capabilities is available at https://youtu.be/ZUKkh5ED-O4. (Comment: This is a pre-print of a paper accepted at CIKM'23 Demo. Refer to the DOI URL for the original publication.)
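    The two-step idea (heuristically pick a representative fragment, then prompt an LLM) can be sketched as below. The scoring rules, file names, and prompt wording are purely illustrative assumptions; they are not LARCH's actual weakly supervised identifier.

```python
def score_fragment(name: str, text: str) -> int:
    """Crude heuristic: entry points and short top-level files are
    more likely to summarize what the repository does."""
    score = 0
    if name.lower() in {"main.py", "cli.py", "app.py", "__main__.py"}:
        score += 10                  # looks like an entry point
    if "if __name__" in text:
        score += 5                   # has a runnable main guard
    score -= len(text) // 5000       # penalize very long files
    return score


def pick_representative(files: dict) -> str:
    """files: mapping of file name -> source text; returns the best name."""
    return max(files, key=lambda n: score_fragment(n, files[n]))


def build_prompt(fragment: str) -> str:
    """Prompt asking an LLM to draft a readme from the chosen fragment."""
    return (
        "Write a README (overview, installation, usage) for the "
        "project whose most representative code is:\n\n" + fragment
    )
```

In the paper's actual system the identification step is learned with heuristics and weak supervision rather than hand-set scores like these.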

    Text Retrieval with Multi-Stage Re-Ranking Models

    Text retrieval is the task of retrieving documents similar to a search query, and it is important to improve retrieval accuracy while maintaining a certain level of retrieval speed. Existing studies have reported accuracy improvements using language models, but many of them do not take into account the reduction in search speed that comes with the increased performance. In this study, we propose a three-stage re-ranking model that uses model ensembles or larger language models to improve search accuracy while minimizing the search delay. We rank documents with BM25 and a language model, and then re-rank the documents with high similarity to the query using a model ensemble or a larger language model. In our experiments, we train a MiniLM language model on the MS-MARCO dataset and evaluate it in a zero-shot setting. Our proposed method achieves higher retrieval accuracy while reducing the decay in retrieval speed.
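    The cascade described above can be sketched as a generic three-stage pipeline. The scorers here are stand-in callables, not BM25/MiniLM implementations, and the cutoffs k1 and k2 are assumed parameters; the point is that the expensive model only sees the few documents that survive the cheaper stages.

```python
from typing import Callable, List

Scorer = Callable[[str, str], float]  # (query, document) -> relevance


def cascade_rank(
    query: str,
    docs: List[str],
    bm25: Scorer,          # stage 1: cheap lexical scorer over everything
    reranker: Scorer,      # stage 2: mid-size language model over top k1
    ensemble: Scorer,      # stage 3: ensemble / larger model over top k2
    k1: int = 100,
    k2: int = 10,
) -> List[str]:
    stage1 = sorted(docs, key=lambda d: bm25(query, d), reverse=True)[:k1]
    stage2 = sorted(stage1, key=lambda d: reranker(query, d), reverse=True)[:k2]
    stage3 = sorted(stage2, key=lambda d: ensemble(query, d), reverse=True)
    # Documents cut after stage 2 keep their stage-2 order below the head.
    return stage3 + [d for d in stage1 if d not in stage2]
```

Because each stage shrinks the candidate set, the per-query cost of the largest model is bounded by k2 regardless of corpus size.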

    Learning Deductive Reasoning from Synthetic Corpus based on Formal Logic

    We study a synthetic corpus-based approach for language models (LMs) to acquire logical deductive reasoning ability. Previous studies generated deduction examples using specific sets of deduction rules. However, these rules were limited or otherwise arbitrary, which can restrict the generalizability of the acquired deductive reasoning ability. We rethink this and adopt a well-grounded set of deduction rules based on formal logic theory, which can derive any other deduction rules when combined in a multistep way. We empirically verify that LMs trained on the proposed corpora, which we name FLD (Formal Logic Deduction), acquire more generalizable deductive reasoning ability. Furthermore, we identify the aspects of deductive reasoning ability on which deduction corpora can enhance LMs and those on which they cannot. Finally, on the basis of these results, we discuss future directions for applying deduction corpora or other approaches for each aspect. We release the code, data, and models.
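    A toy illustration of the corpus-generation idea: compose a single atomic rule (modus ponens) into multistep proofs and serialize each proof as a training example. FLD's real rule set is grounded in formal logic theory and far richer; this stand-in only shows the shape of a generated example.

```python
import random


def generate_chain(atoms, depth, rng):
    """Build implication facts a0 -> a1 -> ... -> a_depth plus the
    premise a0; the gold conclusion a_depth then needs `depth`
    modus-ponens steps to derive."""
    chain = rng.sample(atoms, depth + 1)
    facts = [f"{chain[i]} -> {chain[i + 1]}" for i in range(depth)]
    return {
        "facts": facts + [chain[0]],   # implications plus the premise
        "conclusion": chain[-1],
        "steps": depth,
    }


example = generate_chain(["P", "Q", "R", "S"], depth=2, rng=random.Random(0))
```

Varying the depth and mixing in distractor facts is what lets such a corpus probe multistep generalization.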

    Controlling keywords and their positions in text generation

    One of the challenges in text generation is to control generation as intended by a user. Previous studies have proposed specifying the keywords that should be included in the generated text. However, this is insufficient to generate text that reflects the user's intent. For example, placing an important keyword at the beginning of the text helps attract the reader's attention, but existing methods do not enable such flexible control. In this paper, we tackle the novel task of controlling not only the keywords but also the position of each keyword in the generated text. To this end, we show that a method using special tokens can control the relative position of keywords. Experimental results on summarization and story-generation tasks show that the proposed method can control keywords and their positions. We also demonstrate that controlling the keyword positions can generate summary texts that are closer to the user's intent than the baseline. We release our code.
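    One plausible encoding of the special-token idea is sketched below: each keyword is paired with a bucket token stating its intended relative position in the output, and the resulting control string is prepended to the model input. The token names (`<pos_k>`) and the bucketing scheme are assumptions for illustration, not the paper's actual vocabulary.

```python
def position_token(pos: float, n_bins: int = 5) -> str:
    """Map a relative position in [0, 1] to a discrete bucket token."""
    bin_id = min(int(pos * n_bins), n_bins - 1)
    return f"<pos_{bin_id}>"


def encode_control(keywords) -> str:
    """keywords: list of (keyword, relative_position) pairs.
    Returns the control prefix to prepend to the source text."""
    return " ".join(f"{position_token(p)} {kw}" for kw, p in keywords)
```

At training time the same tokens would be derived from where each keyword actually occurs in the reference text, so the model learns to honor them at inference.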

    Theory of optical transitions in graphene nanoribbons

    Matrix elements of electron-light interactions for armchair and zigzag graphene nanoribbons are constructed analytically using a tight-binding model. The changes in wavenumber (Δn) and pseudospin are the elements necessary to understand the optical selection rule. It is shown that incident light with a specific polarization and energy induces an indirect transition (Δn = ±1), which results in a characteristic peak in the absorption spectra. Such a peak provides evidence that an electron standing wave is formed by multiple reflections at both edges of a ribbon. It is also suggested that the absorption of low-energy light is sensitive to the position of the Fermi energy, the direction of light polarization, and irregularities in the edge. The effect of depolarization on the absorption peak is briefly discussed. (Comment: 11 pages, 7 figures)

    DirectLiNGAM: A Direct Method for Learning a Linear Non-Gaussian Structural Equation Model

    Structural equation models and Bayesian networks have been widely used to analyze causal relations between continuous variables. In such frameworks, linear acyclic models are typically used to model the data-generating process of the variables. Recently, it was shown that the use of non-Gaussianity identifies the full structure of a linear acyclic model, i.e., a causal ordering of the variables and their connection strengths, without using any prior knowledge of the network structure, which is not the case with conventional methods. However, existing estimation methods are based on iterative search algorithms and may not converge to a correct solution in a finite number of steps. In this paper, we propose a new direct method to estimate a causal ordering and connection strengths based on non-Gaussianity. In contrast to the previous methods, our algorithm requires no algorithmic parameters and is guaranteed to converge to the right solution within a small fixed number of steps if the data strictly follow the model.
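    The direct loop can be sketched as follows: repeatedly pick the variable whose regression residuals look most independent of it (the exogenous candidate), append it to the ordering, and replace the remaining variables by those residuals. This is a schematic pure-Python sketch; the tanh-based dependence score below is a crude stand-in for the principled independence measures the actual algorithm uses.

```python
import math


def mean(v):
    return sum(v) / len(v)


def cov(x, y):
    mx, my = mean(x), mean(y)
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / len(x)


def residual(y, x):
    """Residual of the least-squares regression of y on x."""
    b = cov(x, y) / cov(x, x)
    my, mx = mean(y), mean(x)
    return [(yi - my) - b * (xi - mx) for xi, yi in zip(x, y)]


def dependence(x, r):
    """Stand-in nonlinear dependence score between x and a residual r."""
    return abs(cov([math.tanh(a) for a in x], r))


def causal_order(data):
    """data: dict of variable name -> samples. Estimated causal ordering."""
    remaining = dict(data)
    order = []
    while len(remaining) > 1:
        # The exogenous candidate leaves residuals most independent of itself.
        best = min(
            remaining,
            key=lambda i: sum(
                dependence(remaining[i], residual(remaining[j], remaining[i]))
                for j in remaining if j != i
            ),
        )
        order.append(best)
        x = remaining.pop(best)
        remaining = {j: residual(yj, x) for j, yj in remaining.items()}
    order.append(next(iter(remaining)))
    return order
```

Each pass removes exactly one variable, which is why the method terminates in a fixed number of steps rather than iterating to convergence.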

    Active Learning for Regression via Density Power Divergence
