Traditional large-scale neuroscience models and machine learning systems use
simplified models of individual neurons, relying on collective activity and
properly adjusted connections to perform complex computations. However, each
biological cortical neuron is inherently a sophisticated computational device,
as corroborated by a recent study in which a deep artificial neural network
with millions of parameters was needed to replicate the input-output
relationship of a detailed biophysical model of a cortical pyramidal neuron. We
question whether this many parameters are truly necessary and introduce the Expressive Leaky Memory
(ELM) neuron, a biologically inspired, computationally expressive, yet
efficient model of a cortical neuron. Remarkably, our ELM neuron requires only
8K trainable parameters to match the aforementioned input-output relationship
accurately. We find that an accurate model necessitates multiple memory-like
hidden states and intricate nonlinear synaptic integration. To assess the
computational ramifications of this design, we evaluate the ELM neuron on
various tasks with demanding temporal structures, including a sequential
version of the CIFAR-10 classification task, the challenging Pathfinder-X task,
and a new dataset based on the Spiking Heidelberg Digits dataset. Our ELM
neuron outperforms most transformer-based models on the Pathfinder-X task,
reaching 77% accuracy, performs competitively on Sequential CIFAR-10, and
surpasses classic LSTM models on the variant of the Spiking Heidelberg Digits
dataset. These findings indicate the potential of
biologically motivated, computationally efficient neuronal models to enhance
performance in challenging machine learning tasks.
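
To make the design concrete, below is a minimal, hypothetical PyTorch sketch of
such a neuron, based only on the description above: a vector of leaky
memory-like hidden states with learnable decay rates, updated through a small
MLP that nonlinearly integrates the synaptic input. All names, dimensions, and
update details (ELMNeuronSketch, n_memories, the tanh MLP) are illustrative
assumptions, not the paper's implementation.

    import torch
    import torch.nn as nn

    class ELMNeuronSketch(nn.Module):
        """Sketch of a neuron with multiple leaky memory states and
        learned nonlinear synaptic integration (assumed design)."""

        def __init__(self, n_synapses: int, n_memories: int = 16, hidden: int = 64):
            super().__init__()
            # One learnable decay rate per memory state, kept in (0, 1) via sigmoid.
            self.decay_logits = nn.Parameter(torch.zeros(n_memories))
            # MLP that nonlinearly integrates synaptic input with the current memory.
            self.synaptic_mlp = nn.Sequential(
                nn.Linear(n_synapses + n_memories, hidden),
                nn.Tanh(),
                nn.Linear(hidden, n_memories),
            )
            # Linear readout producing the neuron's scalar output per time step.
            self.readout = nn.Linear(n_memories, 1)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: (batch, time, n_synapses) synaptic input traces.
            batch, T, _ = x.shape
            m = x.new_zeros(batch, self.decay_logits.numel())  # memory states
            lam = torch.sigmoid(self.decay_logits)             # leak factors in (0, 1)
            outputs = []
            for t in range(T):
                delta = self.synaptic_mlp(torch.cat([x[:, t], m], dim=-1))
                m = lam * m + (1.0 - lam) * delta              # leaky memory update
                outputs.append(self.readout(m))
            return torch.stack(outputs, dim=1)  # (batch, time, 1)

    # Usage example on random input (shapes are arbitrary).
    neuron = ELMNeuronSketch(n_synapses=100)
    y = neuron(torch.randn(2, 50, 100))
    print(y.shape)  # torch.Size([2, 50, 1])

With the arbitrary sizes chosen here (100 synapses, 16 memory states, hidden
width 64), this sketch has roughly 8.5K trainable parameters, the same order of
magnitude as the 8K figure quoted above, though the correspondence to the
actual ELM neuron is not exact.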