Aiming to achieve artificial general intelligence (AGI) for the Metaverse, pretrained foundation models (PFMs), e.g., generative pretrained transformers (GPTs), can effectively provide various AI services, such as autonomous driving, digital twins, and AI-generated content (AIGC) for extended reality.
With the advantages of low latency and privacy preservation, edge intelligence is a viable solution for serving PFMs to mobile AI services, i.e., caching and executing PFMs on edge servers with limited computing resources and GPU memory.
However, PFMs typically consist of billions of parameters, making them computation- and memory-intensive for edge servers to load and execute. In this article, we investigate edge PFM serving problems for mobile AIGC services in the Metaverse. First, we introduce the fundamentals of PFMs and discuss the fine-tuning and inference methods that characterize their use in edge intelligence. Then, we
propose a novel framework of joint model caching and inference for managing
models and allocating resources to satisfy users' requests efficiently.
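As a rough illustration of the kind of caching-and-inference decision such a framework makes, the sketch below shows an edge server that serves a request from a cached PFM when possible and otherwise loads the model if the GPU memory budget allows. All names and the simple admission rule are illustrative assumptions, not the framework proposed in the article.

```python
from dataclasses import dataclass, field

@dataclass
class EdgeServer:
    """Toy edge server caching PFMs under a GPU memory budget (illustrative)."""
    gpu_memory: float                           # total GPU memory in GB (assumed)
    cached: dict = field(default_factory=dict)  # model name -> footprint in GB

    def used(self) -> float:
        return sum(self.cached.values())

    def serve(self, model: str, footprint: float) -> str:
        """Handle one inference request: cache hit, cold load, or rejection."""
        if model in self.cached:
            return "hit: run inference on the cached PFM"
        if self.used() + footprint <= self.gpu_memory:
            self.cached[model] = footprint      # cold start: load, then execute
            return "miss: load the PFM, then run inference"
        return "miss: memory exhausted, evict per the caching policy"

server = EdgeServer(gpu_memory=24.0)
print(server.serve("gpt-small", 10.0))  # miss: load the PFM, then run inference
print(server.serve("gpt-small", 10.0))  # hit: run inference on the cached PFM
```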
Furthermore, considering the in-context learning ability of PFMs, we propose a new metric, the Age of Context (AoC), to evaluate the freshness of examples in demonstrations and their relevance to the tasks being executed.
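The AoC definition appears in the full article; as a hedged illustration of the intuition only, the sketch below scores a set of in-context examples by their age inflated by low task relevance. The timestamps and the relevance scores (e.g., embedding similarity) are assumed inputs, and the formula is a hypothetical stand-in.

```python
def age_of_context(examples, now):
    """Illustrative AoC score: average example age, inflated by low relevance.

    examples: list of (timestamp, relevance) pairs, where relevance in (0, 1]
    could be, e.g., cosine similarity between example and task embeddings.
    Lower AoC means a fresher, more relevant context window. This formula is
    a hypothetical stand-in, not the article's definition.
    """
    return sum((now - ts) / rel for ts, rel in examples) / len(examples)

# A recent, on-task example contributes little; a stale, off-task one dominates.
print(age_of_context([(90.0, 0.9), (80.0, 0.5)], now=100.0))  # ~25.56
```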
Finally, we propose a least context algorithm for managing cached models at edge servers by balancing the tradeoff among latency, energy consumption, and accuracy.
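By analogy with least recently used (LRU) caching, a plausible reading of the least context policy is to evict the cached model with the lowest context value first. The sketch below assumes a scalar context score per model (e.g., aggregating AoC-weighted usage) and omits the latency, energy, and accuracy terms the actual algorithm balances.

```python
def evict_least_context(cached_models, needed, budget):
    """Free GPU memory by evicting cached PFMs with the lowest context value.

    cached_models: name -> {"mem": footprint in GB, "context": score}, where
    the context score is assumed to aggregate AoC-weighted usage of the model.
    Mirrors LRU eviction, but ranks by context value rather than recency.
    Illustrative sketch only, not the article's algorithm.
    """
    evicted = []
    used = sum(m["mem"] for m in cached_models.values())
    for name in sorted(cached_models, key=lambda n: cached_models[n]["context"]):
        if used + needed <= budget:
            break
        used -= cached_models[name]["mem"]
        del cached_models[name]
        evicted.append(name)
    return evicted

cache = {"gpt-small": {"mem": 10.0, "context": 0.8},
         "diffusion-xl": {"mem": 12.0, "context": 0.2}}
print(evict_least_context(cache, needed=8.0, budget=24.0))  # ['diffusion-xl']
```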