Solving Attention Kernel Regression Problem via Pre-conditioner

Abstract

The attention mechanism is the key to large language models, and the attention matrix serves as an algorithmic and computational bottleneck for such a scheme. In this paper, we define two problems, motivated by the design of fast algorithms for proxies of the attention matrix and for solving regressions against them. Given an input matrix $A \in \mathbb{R}^{n \times d}$ with $n \gg d$ and a response vector $b$, we first consider the matrix exponential of $A^\top A$ as a proxy, and we in turn design algorithms for two types of regression problems: $\min_{x \in \mathbb{R}^d} \|(A^\top A)^j x - b\|_2$ and $\min_{x \in \mathbb{R}^d} \|A (A^\top A)^j x - b\|_2$ for any positive integer $j$. Studying algorithms for these regressions is essential, as the matrix exponential can be approximated term by term via these smaller problems. The second proxy applies the exponential entrywise to the Gram matrix, denoted $\exp(AA^\top)$, and we solve the regression $\min_{x \in \mathbb{R}^n} \|\exp(AA^\top) x - b\|_2$. We call this the attention kernel regression problem, as the matrix $\exp(AA^\top)$ can be viewed as a kernel function with respect to $A$. We design fast algorithms for these regression problems based on sketching and preconditioning. We hope these efforts will provide an alternative perspective on studying efficient approximations of attention matrices.

Comment: AISTATS 202
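To make the "sketching and preconditioning" ingredient concrete, here is a minimal illustrative sketch of the classical sketch-and-precondition recipe for a single overdetermined least-squares problem $\min_x \|Ax - b\|_2$ with $n \gg d$. This is not the authors' algorithm; the Gaussian sketch, the sketch size $4d$, and the LSQR solver below are illustrative assumptions.

```python
# Minimal sketch-and-precondition solver for min_x ||Ax - b||_2 (illustrative only).
import numpy as np
from scipy.linalg import solve_triangular
from scipy.sparse.linalg import LinearOperator, lsqr

def sketch_and_precondition(A, b, sketch_rows=None, seed=0):
    n, d = A.shape
    m = sketch_rows if sketch_rows is not None else 4 * d   # assumed sketch size
    rng = np.random.default_rng(seed)
    S = rng.standard_normal((m, n)) / np.sqrt(m)             # dense Gaussian sketch
    _, R = np.linalg.qr(S @ A)                                # A @ inv(R) is well-conditioned w.h.p.
    # Apply the preconditioner implicitly via triangular solves.
    def mv(v):
        return A @ solve_triangular(R, v, lower=False)        # (A R^{-1}) v
    def rmv(v):
        return solve_triangular(R, A.T @ v, trans='T', lower=False)  # (A R^{-1})^T v
    M = LinearOperator((n, d), matvec=mv, rmatvec=rmv)
    y = lsqr(M, b, atol=1e-12, btol=1e-12)[0]                 # solve the preconditioned system
    return solve_triangular(R, y, lower=False)                # map back: x = R^{-1} y

# Toy usage on a random tall instance.
rng = np.random.default_rng(1)
A = rng.standard_normal((4000, 30))
b = rng.standard_normal(4000)
x = sketch_and_precondition(A, b)
print(np.linalg.norm(A.T @ (A @ x - b)))                      # normal-equation residual
```

In the same spirit, one natural way to attack the regressions with $(A^\top A)^j$ mentioned in the abstract is to apply such a basic solver repeatedly, one factor of $A^\top A$ at a time; the abstract's term-by-term approximation of the matrix exponential rests on solving these smaller subproblems.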
