Biomedical Question Answering: A Survey of Approaches and Challenges
Automatic Question Answering (QA) has been successfully applied in various
domains such as search engines and chatbots. Biomedical QA (BQA), as an
emerging QA task, enables innovative applications to effectively perceive,
access, and understand complex biomedical knowledge. BQA has developed
tremendously over the past two decades; we classify this work into five
distinct approaches: classic, information retrieval, machine reading
comprehension, knowledge base, and question entailment. In this
survey, we introduce available datasets and representative methods of each BQA
approach in detail. Despite the developments, BQA systems are still immature
and rarely used in real-life settings. We identify and characterize several key
challenges in BQA that might lead to this issue, and discuss some potential
future directions to explore.
Comment: In submission to ACM Computing Surveys
InternLM2 Technical Report
The evolution of Large Language Models (LLMs) like ChatGPT and GPT-4 has
sparked discussions on the advent of Artificial General Intelligence (AGI).
However, replicating such advancements in open-source models has been
challenging. This paper introduces InternLM2, an open-source LLM that
outperforms its predecessors in comprehensive evaluations across 6 dimensions
and 30 benchmarks, in long-context modeling, and in open-ended subjective
evaluations, thanks to innovative pre-training and optimization techniques. The pre-training
process of InternLM2 is meticulously detailed, highlighting the preparation of
diverse data types including text, code, and long-context data. InternLM2
efficiently captures long-term dependencies, initially trained on 4k tokens
before advancing to 32k tokens in pre-training and fine-tuning stages,
exhibiting remarkable performance on the 200k "Needle-in-a-Haystack" test.
InternLM2 is further aligned using Supervised Fine-Tuning (SFT) and a novel
Conditional Online Reinforcement Learning from Human Feedback (COOL RLHF)
strategy that addresses conflicting human preferences and reward hacking. By
releasing InternLM2 models in different training stages and model sizes, we
provide the community with insights into the model's evolution.