3 research outputs found
ChatABL: Abductive Learning via Natural Language Interaction with ChatGPT
Large language models (LLMs) such as ChatGPT have recently demonstrated
significant potential in mathematical reasoning, offering a reasoning
paradigm consistent with human natural language. However, LLMs currently have
difficulty bridging perception, language understanding, and reasoning,
because the underlying information flows among these capabilities are
incompatible, which makes it challenging for them to accomplish tasks
autonomously. On the other hand, abductive learning (ABL) frameworks, which
integrate perception and reasoning, have seen significant success in the
inverse decipherment of incomplete facts, but they are limited by a lack of
semantic understanding of logical reasoning rules and a dependence on
complicated domain knowledge representations. This paper presents ChatABL, a
novel method for integrating LLMs into the ABL framework, aiming to unify the
three abilities in a more user-friendly and understandable manner. The
proposed method uses the understanding and logical reasoning strengths of
LLMs to correct incomplete logical facts and thereby optimize the performance
of the perceptual module, by summarizing and reorganizing reasoning rules
expressed in natural language. In turn, the perceptual module provides the
necessary reasoning examples to the LLM in natural language. The
variable-length handwritten-equation deciphering task, an abstract
formulation of Mayan calendar decoding, is used as a testbed, and comparative
studies show that ChatABL's reasoning ability exceeds that of most existing
state-of-the-art methods. To the best of our knowledge, ChatABL is the first
attempt to explore a new pattern for further approaching human-level
cognitive ability via natural language interaction with ChatGPT.
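The abstract describes a loop in which a perceptual module proposes symbol labels and an LLM revises them so that the resulting logical facts satisfy the reasoning rules. A minimal sketch of that loop is below; everything in it is an illustrative stand-in (the `perceive` function replaces the neural perception module, `abduce_with_llm` replaces the natural-language interaction with ChatGPT with a brute-force minimal-revision search, and the binary-equation rule is a toy example), not the paper's actual implementation.

```python
# Hypothetical sketch of a ChatABL-style abduction loop: perception proposes
# symbols, and a reasoning step revises them to satisfy a logical rule.
from itertools import product


def perceive(images):
    # Stand-in for a neural perception module: here the "images" are already
    # the module's most-likely symbol guesses.
    return list(images)


def abduce_with_llm(symbols, rule):
    # Stand-in for the LLM interaction: find the minimal revision of the
    # predicted symbols that makes the equation logically consistent.
    vocab = ["0", "1", "+", "="]
    best, best_changes = None, len(symbols) + 1
    for candidate in product(vocab, repeat=len(symbols)):
        if rule(candidate):
            changes = sum(a != b for a, b in zip(candidate, symbols))
            if changes < best_changes:
                best, best_changes = list(candidate), changes
    return best


def equation_rule(symbols):
    # Toy reasoning rule expressed as code: "a + b = c" over binary numerals.
    s = "".join(symbols)
    if s.count("+") != 1 or s.count("=") != 1:
        return False
    try:
        lhs, rhs = s.split("=")
        a, b = lhs.split("+")
        return int(a, 2) + int(b, 2) == int(rhs, 2)
    except ValueError:
        return False


preds = perceive(["1", "+", "1", "=", "1"])  # perception mislabels one symbol
corrected = abduce_with_llm(preds, equation_rule)
```

In the paper this correction signal would then be used to retrain the perceptual module; here the sketch stops at producing a consistent, minimally revised set of labels.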
Chat2Brain: A Method for Mapping Open-Ended Semantic Queries to Brain Activation Maps
Over the decades, neuroscience has accumulated a wealth of research results in
the text modality that can be used to explore cognitive processes.
Meta-analysis is a typical method that successfully establishes a link from
text queries to brain activation maps using these research results, but it
still relies on an ideal query environment. In practical applications, text
queries used for meta-analyses may suffer from semantic redundancy and
ambiguity, resulting in inaccurate mappings to brain images. On the other
hand, large language models (LLMs) like ChatGPT have shown great potential in
tasks such as context understanding and reasoning, displaying a high degree of
consistency with human natural language. Hence, LLMs could improve the
connection between the text modality and neuroscience, resolving existing
challenges of meta-analysis. In this study, we propose Chat2Brain, a method
that combines LLMs with the basic text-to-image model Text2Brain to map
open-ended semantic queries to brain activation maps in data-scarce and
complex query environments. By utilizing the understanding and reasoning
capabilities of LLMs, we optimize the performance of the mapping model by
converting text queries into semantic queries. We demonstrate that Chat2Brain
can synthesize anatomically plausible neural activation patterns for more
complex text queries.

Comment: 8 pages, 4 figures
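The pipeline described above has two stages: an LLM rewrites a redundant, ambiguous free-text query into a concise semantic query, which is then passed to a Text2Brain-style text-to-activation-map model. The sketch below illustrates only this data flow; the rule-based `llm_rewrite` and the lookup-table `text2brain` are hypothetical stand-ins for the LLM and the actual Text2Brain model, and the three-element "activation maps" are toy values.

```python
# Hypothetical sketch of the Chat2Brain data flow: rewrite a noisy query into
# a semantic query, then map the semantic query to an activation map.
def llm_rewrite(query: str) -> str:
    # Stand-in for the LLM: strip filler words to expose the key concepts.
    filler = {"please", "show", "me", "the", "a", "an", "of", "for", "about",
              "kind", "sort", "some"}
    tokens = [t.strip(",.?!").lower() for t in query.split()]
    return " ".join(t for t in tokens if t and t not in filler)


def text2brain(semantic_query: str) -> list:
    # Stand-in for Text2Brain: look up a (toy) activation map per concept.
    concept_maps = {
        "working memory": [0.9, 0.1, 0.0],
        "face recognition": [0.1, 0.8, 0.2],
    }
    for concept, amap in concept_maps.items():
        if concept in semantic_query:
            return amap
    return [0.0, 0.0, 0.0]


raw = "Please show me the activation for some kind of working memory task?"
semantic = llm_rewrite(raw)
activation = text2brain(semantic)
```

The point of the design is that the downstream mapping model only ever sees the cleaned semantic query, so redundancy and ambiguity in the raw query do not have to be handled by the mapping model itself.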
ChatRadio-Valuer: A Chat Large Language Model for Generalizable Radiology Report Generation Based on Multi-institution and Multi-system Data
Radiology report generation, a key step in medical image analysis, is
critical to quantitative, clinically informed decision-making. However,
complex and diverse radiology reports with cross-source heterogeneity pose a
major generalizability challenge to current methods at massive data volumes,
mainly because the style and conventions of radiology reports differ markedly
across institutions, inspected body regions, and radiologists. Recently, the
advent of large language models (LLMs) has offered great potential for
recognizing signs of health conditions. To resolve this problem, we
collaborate with the Second Xiangya Hospital in China and propose
ChatRadio-Valuer, an LLM-based model tailored for automatic radiology report
generation that learns generalizable representations and provides a basic
pattern for model adaptation in sophisticated clinical cases. Specifically,
ChatRadio-Valuer is trained on radiology reports from a single institution by
means of supervised fine-tuning, and is then adapted to disease diagnosis
tasks across human body systems (i.e., chest, abdomen, musculoskeletal, head,
and maxillofacial-and-neck) from six different institutions in clinical-level
settings. The clinical dataset used in this study encompasses a total of
332,673 observations. Comprehensive results on engineering indicators,
clinical efficacy, and deployment cost metrics show that ChatRadio-Valuer
consistently outperforms state-of-the-art models, including ChatGPT
(GPT-3.5-Turbo) and GPT-4, in disease diagnosis from radiology reports.
ChatRadio-Valuer provides an effective avenue to boost model generalization
performance and alleviate the annotation workload of experts, enabling the
promotion of clinical AI applications in radiology.
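The training recipe described in this abstract is a two-stage procedure: supervised fine-tuning on reports from a single source institution, followed by adaptation to each target institution and body system. The sketch below shows only the shape of that procedure; the "model" is a deliberately trivial stand-in (a token-frequency table rather than an LLM), and the report strings are invented examples, so nothing here reflects the paper's actual architecture or data.

```python
# Hypothetical two-stage sketch: fine-tune a shared base on one institution's
# reports, then adapt it per target institution. A Counter of tokens stands in
# for the LLM's learned representation.
from collections import Counter


def fine_tune(reports):
    # Stage 1: learn a base "representation" from single-institution reports.
    model = Counter()
    for report in reports:
        model.update(report.lower().split())
    return model


def adapt(base_model, institution_reports):
    # Stage 2: adapt the shared base to one institution's style by folding in
    # its local statistics (stand-in for continued fine-tuning).
    adapted = Counter(base_model)
    for report in institution_reports:
        adapted.update(report.lower().split())
    return adapted


source_reports = ["No acute cardiopulmonary abnormality.",
                  "Mild degenerative changes of the spine."]
target_reports = ["Chest radiograph shows clear lungs."]

base = fine_tune(source_reports)
adapted = adapt(base, target_reports)
```

The design point being illustrated is that the base model is trained once and shared, while each institution-specific adaptation starts from that base instead of from scratch, which is what allows a single model to cover six institutions and five body systems.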