We propose the Gaussian Error Linear Unit (GELU), a high-performing neural
network activation function. The GELU activation function is xΦ(x), where
Φ(x) is the standard Gaussian cumulative distribution function. The GELU
nonlinearity weights inputs by their value, rather than gates inputs by their
sign as in ReLUs (x·1_{x>0}). We perform an empirical evaluation of
the GELU nonlinearity against the ReLU and ELU activations and find performance
improvements across all considered computer vision, natural language
processing, and speech tasks.
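
As a concrete illustration (a sketch, not code from the paper), the exact GELU can be computed from its definition xΦ(x) = 0.5·x·(1 + erf(x/√2)); the function name and the use of NumPy/SciPy below are assumptions made for this example.

# Minimal sketch of the exact GELU, assuming SciPy is available for erf;
# GELU(x) = x * Phi(x) = 0.5 * x * (1 + erf(x / sqrt(2)))
import numpy as np
from scipy.special import erf

def gelu(x):
    """Exact GELU: weights the input x by the Gaussian CDF Phi(x)."""
    return 0.5 * x * (1.0 + erf(x / np.sqrt(2.0)))

# Example: contrast with ReLU's hard gating by sign, x·1_{x>0}
x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(gelu(x))        # smooth weighting by Phi(x)
print(x * (x > 0))    # ReLU: gate by sign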