A Methodology for Requirements Analysis of AI Architecture Authoring Tools

Abstract

Authoring embodied, highly interactive virtual agents (IVAs) for robust experiences is an extremely difficult task. Current architectures for creating such agents are so complex that crafting even short experiences takes enormous effort, with lengthier, polished experiences (e.g., Facade, Ada and Grace) often requiring person-years of effort by expert authors. However, each architecture is challenging in vastly different ways; any universal authoring solution would be too general to provide significant leverage. Instead, we present our analysis of the System-Specific Step (SSS) in the IVA authoring process, encapsulated in case studies of three different architectures tackling a simple scenario. The case studies revealed distinctly different behaviors by each team in their SSS, resulting in the need for different authoring solutions. We iteratively proposed and discussed each team's SSS Components and potential authoring support strategies to identify actionable software improvements. Our expectation is that other teams can perform similar analyses of their own systems' SSS and make authoring improvements where they are most needed. Further, our case-study approach provides a methodology for detailed comparison of the authoring affordances of different IVA architectures, offering a lens for understanding the similarities, differences, and tradeoffs between architectures.