The choice of activation functions and their motivation is a long-standing
issue within the neural network community. Neuronal representations within
artificial neural networks are commonly understood as logits, representing the
log-odds score of presence of features within the stimulus. We derive
logit-space operators equivalent to probabilistic Boolean logic-gates AND, OR,
and XNOR for independent probabilities. Such theories are important for
formalizing more complex dendritic operations in real neurons, and these
operations can be used as activation functions within a neural network,
introducing probabilistic Boolean-logic as the core operation of the neural
network. Since these functions involve taking multiple exponentials and
logarithms, they are computationally expensive and not well suited to
direct use within neural networks. Consequently, we construct efficient
approximations named AND_AIL (the AND operator Approximate for
Independent Logits), OR_AIL, and XNOR_AIL,
which utilize only comparison and addition operations, have well-behaved
gradients, and can be deployed as activation functions in neural networks. Like
MaxOut, AND_AIL and OR_AIL are generalizations
of ReLU to two dimensions. While our primary aim is to formalize dendritic
computations within a logit-space probabilistic-Boolean framework, we deploy
these new activation functions, both in isolation and in conjunction, and
demonstrate their effectiveness on a variety of tasks including image
classification, transfer learning, abstract reasoning, and compositional
zero-shot learning.
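
For concreteness, the following is a minimal sketch of the exact logit-space operators implied by the setup above, assuming the standard identification of a logit with log-odds, $x = \operatorname{logit}(p_A) = \log\frac{p_A}{1-p_A}$, so that $p_A = \sigma(x)$. The names AND_IL, OR_IL, and XNOR_IL for the exact (pre-approximation) operators are used here for exposition; this is a reconstruction from the definitions in the abstract, not a quotation of the paper's equations.

```latex
% Exact logit-space operators for independent probabilities,
% with \sigma the logistic sigmoid and x, y the input logits.
\begin{align}
  \mathrm{AND}_{\mathrm{IL}}(x, y)
    &= \operatorname{logit}\!\big(\sigma(x)\,\sigma(y)\big)
     = \log\frac{\sigma(x)\,\sigma(y)}{1 - \sigma(x)\,\sigma(y)} \\
  \mathrm{OR}_{\mathrm{IL}}(x, y)
    &= -\mathrm{AND}_{\mathrm{IL}}(-x, -y)
     && \text{(De Morgan duality)} \\
  \mathrm{XNOR}_{\mathrm{IL}}(x, y)
    &= \operatorname{logit}\!\big(\sigma(x)\,\sigma(y) + \sigma(-x)\,\sigma(-y)\big)
\end{align}
```

Each evaluation requires several exponentials and logarithms, which is exactly the computational cost the AIL approximations are built to remove.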
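The approximations themselves can be written with only comparisons, negation, and addition. The Python sketch below uses one consistent piecewise form per operator, chosen to match the limiting behaviour of the exact operators in each sign quadrant; the algebraic form stated in the paper may differ in notation, and NumPy is an assumed dependency.

```python
import numpy as np


def and_ail(x, y):
    # Both logits negative: evidence against the two events adds (x + y).
    # Both positive: the conjunction is only as confident as the weaker
    # input, min(x, y). Mixed signs: the negative logit dominates.
    return np.minimum(x, 0) + np.minimum(y, 0) + np.maximum(np.minimum(x, y), 0)


def or_ail(x, y):
    # De Morgan dual of the AND approximation: OR(x, y) = -AND(-x, -y).
    return -and_ail(-x, -y)


def xnor_ail(x, y):
    # Agreement detector: positive when the logits share a sign, negative
    # when they disagree; magnitude limited by the less confident input.
    return np.sign(x * y) * np.minimum(np.abs(x), np.abs(y))
```

In this form, or_ail(x, 0) equals max(x, 0), i.e. ReLU(x), which is one way to read the remark that AND_AIL and OR_AIL generalize ReLU to two dimensions. Used as activation functions, they would typically be applied to pairs of pre-activation channels, halving the channel count in the same way that MaxOut pairs units.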