    Canalizing Kauffman networks: non-ergodicity and its effect on their critical behavior

    Boolean networks have been used to study numerous phenomena, including gene regulation, neural networks, social interactions, and biological evolution. Here, we propose a general method for determining the critical behavior of Boolean systems built from arbitrary ensembles of Boolean functions. In particular, we solve the critical condition for systems of units operating according to canalizing functions and present strong numerical evidence that our approach correctly predicts the phase transition from order to chaos in such systems.
    Comment: to be published in PR
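    For the classical random ensemble, the order-chaos transition sits at connectivity K = 2 for unbiased functions (more generally, at 2Kp(1-p) = 1 for bias p). Below is a minimal sketch, not the paper's method, that probes this transition empirically by damage spreading: perturb one node of a random Kauffman network and measure the Hamming distance between trajectories. All names and parameters are illustrative.

```python
import random

def random_kauffman_net(n, k, rng):
    """Random N-K Boolean network: each node reads k random inputs and
    applies a random unbiased Boolean function (a 2**k-entry truth table)."""
    inputs = [rng.sample(range(n), k) for _ in range(n)]
    tables = [[rng.randint(0, 1) for _ in range(2 ** k)] for _ in range(n)]
    return inputs, tables

def step(state, inputs, tables):
    nxt = []
    for i in range(len(state)):
        idx = 0
        for j in inputs[i]:
            idx = (idx << 1) | state[j]
        nxt.append(tables[i][idx])
    return nxt

def damage(n, k, t, trials, seed=0):
    """Average Hamming distance after t steps between a trajectory and a
    copy with one flipped bit: it shrinks in the ordered phase (k = 1),
    grows in the chaotic phase (k = 3), and k = 2 is the critical case."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        inputs, tables = random_kauffman_net(n, k, rng)
        a = [rng.randint(0, 1) for _ in range(n)]
        b = a[:]
        b[0] ^= 1
        for _ in range(t):
            a = step(a, inputs, tables)
            b = step(b, inputs, tables)
        total += sum(x != y for x, y in zip(a, b))
    return total / trials

for k in (1, 2, 3):
    print(k, damage(n=200, k=k, t=50, trials=20))
```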

    Descriptive complexity for neural networks via Boolean networks

    We investigate the descriptive complexity of a class of neural networks with unrestricted topologies and piecewise polynomial activation functions. We consider the general scenario where the running time is unlimited and floating-point numbers are used to simulate reals. We characterize a class of these neural networks with a rule-based logic for Boolean networks. In particular, we show that the sizes of the neural networks and of the corresponding Boolean rule formulae are polynomially related; in fact, in the direction from Boolean rules to neural networks, the blow-up is only linear. We also analyze the time delays introduced by the translations. In the translation from neural networks to Boolean rules, the delay is polylogarithmic in the neural network size and linear in time; in the converse translation, it is linear in both factors.
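    The Boolean-network side of this correspondence is simply a finite set of variables updated synchronously by Boolean rules. A minimal sketch of that model (the rules are illustrative; this is not the paper's translation):

```python
# Synchronous Boolean network: every variable is recomputed from the
# *previous* state on each step. The three rules below are illustrative.
rules = {
    "a": lambda s: s["b"] and not s["c"],
    "b": lambda s: s["a"] or s["c"],
    "c": lambda s: not s["a"],
}

def run(state, steps):
    for _ in range(steps):
        # Build the new state from the old one, so updates are simultaneous.
        state = {v: bool(rule(state)) for v, rule in rules.items()}
        print(state)
    return state

run({"a": True, "b": False, "c": False}, steps=4)
```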

    Logical Activation Functions: Logit-space equivalents of Probabilistic Boolean Operators

    The choice of activation functions, and the motivation behind it, is a long-standing issue in the neural network community. Neuronal representations within artificial neural networks are commonly understood as logits, representing the log-odds of a feature being present in the stimulus. We derive logit-space operators equivalent to the probabilistic Boolean logic gates AND, OR, and XNOR for independent probabilities. Such operators are important for formalizing more complex dendritic operations in real neurons, and they can be used as activation functions within a neural network, making probabilistic Boolean logic the core operation of the network. Since these functions involve taking multiple exponents and logarithms, they are computationally expensive and not well suited to direct use within neural networks. Consequently, we construct efficient approximations named AND_AIL (the AND operator Approximate for Independent Logits), OR_AIL, and XNOR_AIL, which use only comparison and addition operations, have well-behaved gradients, and can be deployed as activation functions in neural networks. Like MaxOut, AND_AIL and OR_AIL are generalizations of ReLU to two dimensions. While our primary aim is to formalize dendritic computations within a logit-space probabilistic-Boolean framework, we deploy these new activation functions, both in isolation and in combination, and demonstrate their effectiveness on a variety of tasks, including image classification, transfer learning, abstract reasoning, and compositional zero-shot learning.
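    The exact logit-space gates are fixed by the probabilistic definition: with x = logit(p), y = logit(q), and independence, AND corresponds to logit(p*q), OR to logit(p + q - p*q), and XNOR to logit(p*q + (1-p)*(1-q)). The sketch below implements these exact (expensive) operators; the paper's AIL approximations replace them with comparison-and-addition surrogates, which are not reproduced here.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def logit(p):
    return math.log(p / (1.0 - p))

def and_logit(x, y):
    """Exact logit-space AND for independent probabilities: logit(p*q)."""
    return logit(sigmoid(x) * sigmoid(y))

def or_logit(x, y):
    """Exact logit-space OR: logit(p + q - p*q)."""
    p, q = sigmoid(x), sigmoid(y)
    return logit(p + q - p * q)

def xnor_logit(x, y):
    """Exact logit-space XNOR: logit(p*q + (1-p)*(1-q))."""
    p, q = sigmoid(x), sigmoid(y)
    return logit(p * q + (1.0 - p) * (1.0 - q))

# Strong evidence for both features yields strong evidence for their
# conjunction; symmetric negatives behave analogously for OR.
print(and_logit(4.0, 4.0), or_logit(-4.0, -4.0), xnor_logit(4.0, -4.0))
```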

    Space of Functions Computed by Deep-Layered Machines

    We study the space of functions computed by randomly layered machines, including deep neural networks and Boolean circuits. Investigating the distribution of Boolean functions computed by recurrent and layer-dependent architectures, we find that it is the same in both models. Depending on the initial conditions and the computing elements used, we characterize the space of functions computed in the large-depth limit and show that the macroscopic entropy of Boolean functions is either monotonically increasing or monotonically decreasing with growing depth.
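    A sketch of the kind of measurement the abstract describes (our assumptions: NAND gates, three inputs, random wiring): sample random layered circuits over a small input space, record the truth table computed at each depth, and track the Shannon entropy of the resulting distribution over Boolean functions.

```python
import math, random
from collections import Counter

N_IN = 3                       # input width; 2**(2**3) = 256 possible functions
ROWS = list(range(2 ** N_IN))  # all input assignments

def random_layer(width, rng):
    """One layer of 2-input NAND gates with random wiring."""
    wiring = [(rng.randrange(width), rng.randrange(width)) for _ in range(width)]
    return lambda bits: tuple(1 - (bits[i] & bits[j]) for i, j in wiring)

def truth_table(layers):
    """Truth table of output 0 as a function of the N_IN inputs."""
    table = []
    for row in ROWS:
        bits = tuple((row >> b) & 1 for b in range(N_IN))
        for layer in layers:
            bits = layer(bits)
        table.append(bits[0])
    return tuple(table)

def entropy_at_depth(depth, samples=2000, seed=0):
    """Shannon entropy of the sampled distribution over truth tables."""
    rng = random.Random(seed)
    counts = Counter()
    for _ in range(samples):
        layers = [random_layer(N_IN, rng) for _ in range(depth)]
        counts[truth_table(layers)] += 1
    return -sum(c / samples * math.log2(c / samples) for c in counts.values())

for d in (1, 2, 4, 8):
    print(d, entropy_at_depth(d))
```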

    Linking discrete orthogonality with dilation and translation for incomplete sigma-pi neural networks of Hopfield-type

    In this paper, we show how to extend well-known discrete orthogonality results for complete sigma-pi neural networks on bipolar-coded information to the presence of dilation and translation of the signals. The approach leads to a whole family of functions able to implement any given Boolean function. Unfortunately, the complexity of such complete higher-order neural network realizations increases exponentially with the dimension of the signal space. Therefore, in practice one often considers only incomplete situations, accepting that not all, but hopefully the most relevant, information or Boolean functions can be realized. At this point, the introduced dilation and translation parameters play an essential rôle, because they can be tuned to fit the concrete representation problem as well as possible without any significant increase in complexity. We explain our approach in detail in the context of Hopfield-type neural networks, including the presentation of a new learning algorithm for such generalized networks.
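    The discrete orthogonality underlying this construction is that of the parity monomials: on bipolar inputs in {-1,+1}^n, the 2^n products of coordinate subsets are mutually orthogonal under the uniform inner product, so every Boolean function has a unique sigma-pi (Walsh) expansion. A minimal sketch of that complete expansion (ours; it omits the paper's dilation and translation parameters):

```python
from itertools import product, combinations

n = 3
points = list(product([-1, 1], repeat=n))
subsets = [s for r in range(n + 1) for s in combinations(range(n), r)]

def mono(x, s):
    """Parity monomial: product of the selected bipolar coordinates."""
    out = 1
    for i in s:
        out *= x[i]
    return out

def sigma_pi_coeffs(f):
    """Walsh coefficients c_S = <f, mono_S> / 2**n; orthogonality of the
    monomials makes this expansion exact for any f on {-1,+1}**n."""
    return {s: sum(f(x) * mono(x, s) for x in points) / 2 ** n for s in subsets}

def sigma_pi_eval(coeffs, x):
    return sum(c * mono(x, s) for s, c in coeffs.items())

maj = lambda x: 1 if sum(x) > 0 else -1   # 3-input majority, for illustration
c = sigma_pi_coeffs(maj)
assert all(round(sigma_pi_eval(c, x)) == maj(x) for x in points)
print({s: v for s, v in c.items() if v})  # the nonzero sigma-pi terms
```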