We study the problem of learning generalized linear models under adversarial
corruptions. We analyze a classical heuristic called the iterative trimmed
maximum likelihood estimator which is known to be effective against label
corruptions in practice. Under label corruptions, we prove that this simple
estimator achieves minimax near-optimal risk on a wide range of generalized
linear models, including Gaussian regression, Poisson regression, and binomial
regression. Finally, we extend the estimator to the more challenging setting of
joint label and covariate corruptions and demonstrate its robustness and
optimality in that setting as well.
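As a concrete illustration, the iterative trimmed MLE alternates between fitting the model on the currently retained samples and re-selecting the samples with the smallest negative log-likelihood. The sketch below instantiates this for the Gaussian regression case (where the per-sample negative log-likelihood reduces to the squared residual); the function name, iteration count, and toy data are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def iterative_trimmed_mle(X, y, n_keep, n_iters=20):
    """Sketch of iterative trimmed MLE for Gaussian (least-squares) regression.

    Alternates between (i) a maximum-likelihood fit on the retained samples
    and (ii) re-selecting the n_keep samples with the smallest squared
    residual, trimming the rest as suspected corruptions.
    """
    n = X.shape[0]
    keep = np.arange(n)  # start by retaining every sample
    for _ in range(n_iters):
        # Fit step: least squares on the currently retained samples.
        beta, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)
        # Trim step: keep the n_keep samples that fit the model best.
        resid = (y - X @ beta) ** 2
        keep = np.argsort(resid)[:n_keep]
    return beta

# Toy demo (assumed data): 100 clean samples plus 10 label corruptions.
rng = np.random.default_rng(0)
X = rng.normal(size=(110, 3))
beta_true = np.array([1.0, -2.0, 0.5])
y = X @ beta_true + 0.01 * rng.normal(size=110)
y[:10] += 50.0  # adversarial label corruptions
beta_hat = iterative_trimmed_mle(X, y, n_keep=100)
```

In this Gaussian case the fit step is ordinary least squares; for other generalized linear models it would be replaced by the corresponding maximum-likelihood fit, with trimming still driven by per-sample negative log-likelihood.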