The generalized linear model (GLM), where a random vector x is
observed through a noisy, possibly nonlinear, function of a linear transform
output z=Ax, arises in a range of applications such
as robust regression, binary classification, quantized compressed sensing,
phase retrieval, photon-limited imaging, and inference from neural spike
trains. When A is large and i.i.d. Gaussian, the generalized
approximate message passing (GAMP) algorithm is an efficient means of MAP or
marginal inference, and its performance can be rigorously characterized by a
scalar state evolution. For general A, though, GAMP can
misbehave. Damping and sequential updating help to robustify GAMP, but their
effects are limited. Recently, a "vector AMP" (VAMP) algorithm was proposed for
additive white Gaussian noise channels. VAMP extends AMP's guarantees from
i.i.d. Gaussian A to the larger class of rotationally invariant
A. In this paper, we show how VAMP can be extended to the GLM.
Numerical experiments show that the proposed GLM-VAMP is much more robust to
ill-conditioning in A than damped GAMP.
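To make the GLM observation model concrete, the following is a minimal Python sketch, not taken from the paper, of generating a measurement y through a noisy nonlinear channel applied to z = Ax. The 1-bit quantization channel, the Bernoulli-Gaussian signal prior, and the way the condition number of A is controlled are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 512, 256               # signal and measurement dimensions (assumed)
cond = 1e3                    # target condition number of A (ill-conditioned case)

# Sparse signal x from a Bernoulli-Gaussian prior (illustrative choice)
x = rng.standard_normal(n) * (rng.random(n) < 0.1)

# Build A with geometrically spaced singular values to control its conditioning
U, _ = np.linalg.qr(rng.standard_normal((m, m)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = cond ** (-np.arange(m) / (m - 1))      # singular values from 1 down to 1/cond
A = U @ np.diag(s) @ V[:m, :]

# Linear transform output, then a noisy nonlinear channel (here: 1-bit quantization)
z = A @ x
y = np.sign(z + 0.01 * rng.standard_normal(m))   # GLM observation, e.g. 1-bit CS
```

Other GLM instances mentioned in the abstract (robust regression, phase retrieval, photon-limited imaging) would simply replace the final line with a different componentwise channel applied to z.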