
    Circuit Complexity of Visual Search

    We study the computational hardness of feature and conjunction search through the lens of circuit complexity. Let $x = (x_1, \ldots, x_n)$ (resp., $y = (y_1, \ldots, y_n)$) be Boolean variables, each of which takes the value one if and only if a neuron at place $i$ detects a feature (resp., another feature). We then formulate feature and conjunction search as the Boolean functions ${\rm FTR}_n(x) = \bigvee_{i=1}^n x_i$ and ${\rm CONJ}_n(x, y) = \bigvee_{i=1}^n x_i \wedge y_i$, respectively. We employ a threshold circuit or a discretized circuit (such as a sigmoid circuit or a ReLU circuit with discretization) as our model of neural networks, and consider the following four computational resources: [i] the number of neurons (size), [ii] the number of levels (depth), [iii] the number of active neurons outputting non-zero values (energy), and [iv] synaptic weight resolution (weight). We first prove that any threshold circuit $C$ of size $s$, depth $d$, energy $e$ and weight $w$ satisfies $\log rk(M_C) \le ed(\log s + \log w + \log n)$, where $rk(M_C)$ is the rank of the communication matrix $M_C$ of the $2n$-variable Boolean function that $C$ computes. Since ${\rm CONJ}_n$ has rank $2^n$, we have $n \le ed(\log s + \log w + \log n)$. Thus, an exponential lower bound on the size of even sublinear-depth threshold circuits holds if the energy and weight are sufficiently small. Since ${\rm FTR}_n$ is computable independently of $n$, our result suggests that the computational capacities required for feature search and conjunction search are different. We also show that the inequality is tight up to a constant factor if $ed = o(n/\log n)$. We next show that a similar inequality holds for any discretized circuit. Thus, if we regard the number of gates outputting non-zero values as a measure of sparse activity, our results suggest that larger depth helps neural networks to acquire sparse activity.
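    As an informal illustration (not from the paper), the sketch below defines the two search functions exactly as in the abstract and, for a small $n$, builds the communication matrix of ${\rm CONJ}_n$ to show that its rank grows exponentially with $n$; the helper names ftr and conj and the choice n = 3 are assumptions made only for this example.

```python
# Minimal sketch (illustration only): feature search FTR_n, conjunction search
# CONJ_n, and the rank of the communication matrix of CONJ_n for a small n.
from itertools import product
import numpy as np

def ftr(x):
    # FTR_n(x) = OR_i x_i : is the feature detected at any place i?
    return int(any(x))

def conj(x, y):
    # CONJ_n(x, y) = OR_i (x_i AND y_i) : do both features co-occur at some place i?
    return int(any(xi and yi for xi, yi in zip(x, y)))

n = 3  # small example size
inputs = list(product([0, 1], repeat=n))

# Communication matrix M: rows indexed by assignments to x, columns by y.
M = np.array([[conj(x, y) for y in inputs] for x in inputs])
print(np.linalg.matrix_rank(M))  # exponential in n, which drives the size lower bound
```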

    Size and Energy of Threshold Circuits Computing Mod Functions

    Let C be a threshold logic circuit computing a Boolean function MOD_m : {0, 1}^n → {0, 1}, where n ≥ 1 and m ≥ 2. Then C outputs "0" if the number of "1"s in an input x ∈ {0, 1}^n to C is a multiple of m and, otherwise, C outputs "1." The function MOD_2 is the so-called PARITY function, and MOD_{n+1} is the OR function. Let s be the size of the circuit C, that is, C consists of s threshold gates, and let e be the energy complexity of C, that is, at most e gates in C output "1" for any input x ∈ {0, 1}^n. In the paper, we prove that a very simple inequality n/(m − 1) ≤ s^e holds for every circuit C computing MOD_m. The inequality implies that there is a tradeoff between the size s and energy complexity e of threshold circuits computing MOD_m, and yields a lower bound e = Ω((log n − log m)/ log log n) on e if s = O(polylog(n)). We actually obtain a general result on the so-called generalized mod function, from which the result on the ordinary mod function MOD_m immediately follows. Our results on threshold circuits can be extended to a more general class of circuits, called unate circuits.
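    For reference, a one-line rearrangement of the stated inequality (just unpacking the abstract's claim, not quoted from the paper) shows how the lower bound on e follows when s is polylogarithmic:

\[
\frac{n}{m-1} \le s^{e}
\;\Longrightarrow\;
e \ge \frac{\log n - \log(m-1)}{\log s}
= \Omega\!\left(\frac{\log n - \log m}{\log\log n}\right)
\quad \text{when } s = O(\mathrm{polylog}(n)).
\]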

    Machine Learning As Tool And Theory For Computational Neuroscience

    Computational neuroscience is in the midst of constructing a new framework for understanding the brain based on the ideas and methods of machine learning. This effort has been encouraged, in part, by recent advances in neural network models. It is also driven by a recognition of the complexity of neural computation and the challenges that this poses for neuroscience’s methods. In this dissertation, I first work to describe the problems of complexity that have prompted this shift in focus. In particular, I develop machine learning tools for neurophysiology that help test whether tuning curves and other statistical models in fact capture the meaning of neural activity. Then, taking up a machine learning framework for understanding, I consider theories about how neural computation emerges from experience. Specifically, I develop hypotheses about the potential learning objectives of sensory plasticity, the potential learning algorithms in the brain, and finally the consequences for sensory representations of learning with such algorithms. These hypotheses draw on advances in several areas of machine learning, including optimization, representation learning, and deep learning theory. Each of these subfields has insights for neuroscience, offering links in a chain of knowledge about how we learn and think. Together, this dissertation helps to further an understanding of the brain through the lens of machine learning.