499 research outputs found

    APICom: Automatic API Completion via Prompt Learning and Adversarial Training-based Data Augmentation

    Full text link
    API (Application Programming Interface) recommendation assists developers in finding the API they need among numerous candidates, based on their needs and usage scenarios. Previous studies mainly treated this as a recommendation task that returns multiple candidate APIs for a given query, yet developers may still not find what they need among them. Motivated by research on neural machine translation, the problem can instead be modeled as a generation task that directly generates the required API for the developer query. Our preliminary investigation shows that this intuitive approach performs poorly because errors often occur when generating the API's prefix. In practice, however, developers usually already know part of the API prefix during development. We therefore model the problem as an automatic completion task and propose APICom, a novel approach based on prompt learning that generates the API relevant to a query conditioned on a prompt (i.e., the known API prefix). Since the effectiveness of APICom depends heavily on the quality of the training dataset, we further design a gradient-based adversarial training method for data augmentation, which improves stability when generating adversarial examples. To evaluate APICom, we use a corpus of 33k developer queries and their corresponding APIs. Our experimental results show that APICom outperforms all state-of-the-art baselines by at least 40.02%, 13.20%, and 16.31% in terms of EM@1, MRR, and MAP, respectively. Finally, ablation studies confirm the effectiveness of each component (the designed adversarial training method, the pre-trained model, and prompt learning) in APICom. Comment: accepted in Internetware 202
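
    A minimal sketch, in Python, of the kind of gradient-based adversarial data augmentation the abstract describes: perturbing input embeddings along the normalized loss gradient to create extra training examples. This is an illustration under stated assumptions, not the authors' implementation; the HuggingFace-style encoder-decoder interface, the batch tensors, and the epsilon value are placeholders.

        # Illustrative sketch only: FGSM-style adversarial perturbation of input
        # embeddings with a normalized gradient, used to create extra training
        # examples. Assumes a HuggingFace-style encoder-decoder model; the model,
        # batch tensors, and epsilon are hypothetical placeholders.
        import torch

        def adversarial_embeddings(model, input_ids, attention_mask, labels, epsilon=1.0):
            """Return input embeddings perturbed in the direction that increases the loss."""
            embeds = model.get_input_embeddings()(input_ids).detach()
            embeds.requires_grad_(True)

            loss = model(inputs_embeds=embeds,
                         attention_mask=attention_mask,
                         labels=labels).loss
            loss.backward()

            grad = embeds.grad
            # L2-normalize the gradient so the perturbation magnitude stays stable
            # across tokens and batches.
            norm = grad.norm(p=2, dim=-1, keepdim=True).clamp(min=1e-8)
            return (embeds + epsilon * grad / norm).detach()

    The perturbed embeddings can then be fed back through the model (via inputs_embeds) as augmented examples alongside the original training data.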

    Classifying superheavy elements by machine learning

    Get PDF
    Among the 118 elements listed in the periodic table, there are nine superheavy elements (Mt, Ds, Mc, Rg, Nh, Fl, Lv, Ts, and Og) that have not yet been well studied experimentally because of their short half-lives and low production rates. How to classify these elements for further study remains an open question. Although relativistic quantum-mechanical calculations for single superheavy atoms are more accurate and reliable than those for their molecules and crystals, no study has been reported that classifies elements solely on the basis of atomic properties. Using cutting-edge machine learning techniques, we uncover the relationship between atomic data and the classification of elements, and further identify that Mt, Ds, Mc, Rg, Lv, Ts, and Og should be metals, while Nh and Fl should be metalloids. These findings not only highlight the significance of machine learning for superheavy atoms but also challenge the conventional belief that the characteristics of an element can be determined simply by looking at its position in the periodic table.
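
    A minimal sketch of the idea described above: fit a classifier on atomic properties of well-characterized elements and use it to predict the class of a superheavy element. This is not the paper's actual model or data; the feature names and all numeric values below are hypothetical placeholders.

        # Minimal illustrative sketch, not the paper's actual model or data.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        # Hypothetical features per element: [ionization energy (eV),
        # atomic radius (pm), electron affinity (eV)]; values are placeholders.
        X_known = np.array([
            [5.1, 186.0, 0.5],    # placeholder row for a typical metal
            [10.4, 105.0, 2.1],   # placeholder row for a typical nonmetal
            [7.9, 122.0, 1.2],    # placeholder row for a typical metalloid
        ])
        y_known = ["metal", "nonmetal", "metalloid"]

        clf = RandomForestClassifier(n_estimators=200, random_state=0)
        clf.fit(X_known, y_known)

        # Relativistic atomic calculations would supply the corresponding
        # features for Mt, Ds, ..., Og; a single placeholder row is shown here.
        print(clf.predict(np.array([[6.3, 170.0, 0.8]])))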

    A Stepwise, Pilot Study of Bovine Colostrum to Supplement the First Enteral Feeding in Preterm Infants (Precolos): Study Protocol and Initial Results

    Get PDF
    STUDY PROTOCOL: The optimal feeding strategy for preterm infants during the first weeks of life is still debated, especially when mother’s own milk is lacking or limited. Intact bovine colostrum (BC) contains high amounts of protein, growth factors, and immuno-regulatory components that may benefit protein intake and gut maturation. We designed a pilot study to investigate the feasibility and tolerability of BC as the first nutrition for preterm infants. The study comprises three phases (A, B, and C) and recruited infants with birth weights of 1,000–1,800 g (China) or gestational ages (GAs) of 27+0 to 32+6 weeks (Denmark). In phase A, three infants were recruited consecutively to receive BC as a supplement to standard feeding. In phase B, seven infants were recruited in parallel. In phase C (not yet complete), 40 infants will be randomized to BC or standard feeding. Feeding intolerance, growth, time to full enteral feeding, serious infections/necrotizing enterocolitis (NEC), plasma amino acid profile, blood biochemistry, and intestinal functions are assessed. This paper presents the study protocol and results from phases A and B. RESULTS: Seven Danish and five Chinese infants received 22 ± 11 and 22 ± 6 ml·kg⁻¹·day⁻¹ BC for a mean of 7 ± 3 and 7 ± 1 days, which provided 1.81 ± 0.89 and 1.83 ± 0.52 g·kg⁻¹·day⁻¹ protein, respectively. Growth rates until 37 weeks or discharge were in the normal range (11.8 ± 0.9 and 12.9 ± 2.7 g·kg⁻¹·day⁻¹ in Denmark and China, respectively). No clinical adverse effects were observed. Five infants showed transient hypertyrosinemia on day 7 of life. DISCUSSION AND CONCLUSION: The three-phase design was chosen to proceed with caution, as this is the first trial to investigate intact BC as the first feed for preterm infants. BC supplementation appeared well tolerated and resulted in high enteral protein intake. Based on the safety evaluation of phases A and B, the randomized phase C has been initiated. When complete, the Precolos trial will document whether it is feasible to use BC as a novel, bioactive milk diet for preterm infants. Our trial paves the way for a larger randomized controlled trial on using BC as the first feed for preterm infants with insufficient access to mother’s own milk.

    Summary of ChatGPT/GPT-4 Research and Perspective Towards the Future of Large Language Models

    Full text link
    This paper presents a comprehensive survey of ChatGPT and GPT-4, state-of-the-art large language models (LLMs) from the GPT series, and their prospective applications across diverse domains. Key innovations such as large-scale pre-training that captures knowledge from across the entire World Wide Web, instruction fine-tuning, and Reinforcement Learning from Human Feedback (RLHF) have played significant roles in enhancing LLMs' adaptability and performance. We performed an in-depth analysis of 194 relevant arXiv papers, encompassing trend analysis, word cloud representation, and distribution analysis across application domains. The findings reveal significant and growing interest in ChatGPT/GPT-4 research, predominantly centered on direct natural language processing applications, while also demonstrating considerable potential in areas ranging from education and history to mathematics, medicine, and physics. This study endeavors to furnish insights into ChatGPT's capabilities and potential implications, to highlight ethical concerns, and to offer direction for future advancements in this field. Comment: 35 pages, 3 figures
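
    A minimal sketch of the kind of trend and term-frequency analysis the abstract mentions: counting papers per month and tallying the most frequent title words (the raw input to a word cloud). This is not the survey's actual pipeline; the papers list below is a hypothetical stand-in for the 194 collected arXiv records.

        # Illustrative sketch only; the `papers` list is a hypothetical stand-in.
        from collections import Counter

        papers = [
            {"title": "ChatGPT for Medical Question Answering", "month": "2023-03"},
            {"title": "Evaluating GPT-4 on Mathematics Benchmarks", "month": "2023-04"},
            {"title": "ChatGPT in Education: Opportunities and Risks", "month": "2023-04"},
        ]

        per_month = Counter(p["month"] for p in papers)            # submission trend
        term_freq = Counter(
            word.lower().strip(":,")
            for p in papers
            for word in p["title"].split()
            if len(word) > 3                                       # crude stop-word filter
        )

        print(sorted(per_month.items()))     # e.g. [('2023-03', 1), ('2023-04', 2)]
        print(term_freq.most_common(5))      # most frequent terms across titles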