Fluid Transformers and Creative Analogies: Exploring Large Language Models' Capacity for Augmenting Cross-Domain Analogical Creativity
Cross-domain analogical reasoning is a core creative ability that can be
challenging for humans. Recent work has shown some proofs of concept of Large
Language Models' (LLMs) ability to generate cross-domain analogies. However,
the reliability and potential usefulness of this capacity for augmenting human
creative work has received little systematic exploration. In this paper, we
systematically explore LLMs' capacity to augment cross-domain analogical
reasoning. Across three studies, we found: 1) LLM-generated cross-domain
analogies were frequently judged as helpful in the context of a problem
reformulation task (median 4 out of 5 helpfulness rating), and frequently (~80%
of cases) led to observable changes in problem formulations, and 2) there was
an upper bound of 25% of outputs being rated as potentially harmful, with a
majority due to potentially upsetting content, rather than biased or toxic
content. These results demonstrate the potential utility -- and risks -- of
LLMs for augmenting cross-domain analogical creativity.
Generative Transformers for Design Concept Generation
Generating novel and useful concepts is essential during the early design
stage to explore a large variety of design opportunities, which usually
requires advanced design thinking ability and a wide range of knowledge from
designers. A growing body of work on computer-aided tools has explored the
retrieval of knowledge and heuristics from design data. However, such tools
provide stimuli that inspire designers in only limited respects. This study
explores recent advances in natural language generation (NLG) techniques from
the artificial intelligence (AI) field to automate early-stage design concept generation.
Specifically, a novel approach utilizing the generative pre-trained transformer
(GPT) is proposed to leverage the knowledge and reasoning from textual data and
transform them into new concepts in understandable language. Three concept
generation tasks are defined to leverage different knowledge and reasoning:
domain knowledge synthesis, problem-driven synthesis, and analogy-driven
synthesis. The experiments with both human and data-driven evaluation show good
performance in generating novel and useful concepts.
Comment: Accepted by J. Comput. Inf. Sci. En
Neurocognitive Informatics Manifesto.
Informatics studies all aspects of the structure of natural and artificial information systems. Theoretical and abstract approaches to information have made great advances, but human information processing is still unmatched in many areas, including information management, representation, and understanding. Neurocognitive informatics is a new, emerging field that should help to improve the matching of artificial and natural systems, and inspire better computational algorithms to solve problems that are still beyond the reach of machines. In this position paper, examples of neurocognitive inspirations and promising directions in this area are given.