Large language models (LLMs) offer potential as a source of knowledge for agents
that need to acquire new task competencies within a performance environment. We
describe efforts toward a novel agent capability for constructing cues (or
"prompts") that elicit useful LLM responses for an agent learning a new
task. Importantly, responses must not only be "reasonable" (a measure used
commonly in research on knowledge extraction from LLMs) but also specific to
the agent's task context and in a form that the agent can interpret given its
native language capacities. We summarize a series of empirical investigations
of prompting strategies and evaluate the responses against the goals of eliciting
targeted, actionable knowledge for task learning. Our results demonstrate that
actionable task knowledge can be obtained from LLMs in support of online agent
task learning.