
    Scaling and Universality of the Complexity of Analog Computation

    We apply a probabilistic approach to study the computational complexity of analog computers which solve linear programming problems. We analyze numerically various ensembles of linear programming problems and obtain, for each of these ensembles, the probability distribution functions of three quantities that measure computational complexity: the convergence rate, the barrier, and the computation time. We find that in the limit of very large problems these probability distributions are universal scaling functions: the probability distribution function for each of the three quantities becomes, in the limit of large problem size, a function of a single scaling variable, which is a certain composition of the quantity in question and the size of the system. Moreover, the various ensembles studied appear to lead to essentially the same scaling functions, which depend only on the variance of the ensemble. These results extend analytical and numerical results obtained recently for the Gaussian ensemble, and support the conjecture that these scaling functions are universal. Comment: 22 pages, LaTeX, 12 EPS figures.
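    To make the scaling-collapse idea concrete, here is a minimal Python sketch (not the authors' code): it draws a toy ensemble whose typical values shrink with problem size N and checks whether rescaling by a power of N collapses the distributions onto a single curve. The ensemble, the complexity proxy, and the exponent alpha are illustrative assumptions, not taken from the paper.

        # Toy scaling-collapse check; the ensemble, the complexity proxy, and the
        # exponent alpha below are assumptions for illustration only.
        import numpy as np

        rng = np.random.default_rng(0)
        sizes = [100, 400, 1600]      # toy "problem sizes" N
        alpha = 0.5                   # assumed scaling exponent
        n_samples = 20_000

        for n in sizes:
            # Stand-in for a complexity measure whose typical value shrinks like N**-alpha.
            q = np.abs(rng.normal(size=n_samples)) / n**alpha
            u = q * n**alpha          # proposed single scaling variable
            print(f"N={n:5d}  mean(u)={u.mean():.3f}  std(u)={u.std():.3f}")
        # If the distributions are universal scaling functions, the rescaled moments
        # (and histograms) should agree across N, as they do for this toy ensemble.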

    Limits on Fundamental Limits to Computation

    An indispensable part of our lives, computing has also become essential to industries and governments. Steady improvements in computer hardware have been supported by periodic doubling of transistor densities in integrated circuits over the last fifty years. Such Moore scaling now requires increasingly heroic efforts, stimulating research in alternative hardware and stirring controversy. To help evaluate emerging technologies and enrich our understanding of integrated-circuit scaling, we review fundamental limits to computation: in manufacturing, energy, physical space, design and verification effort, and algorithms. To outline what is achievable in principle and in practice, we recall how some limits were circumvented and compare loose and tight limits. We also point out that engineering difficulties encountered by emerging technologies may indicate yet-unknown limits. Comment: 15 pages, 4 figures, 1 table.
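    As a concrete illustration of one of the reviewed limits, the short Python sketch below (mine, not the paper's) evaluates the Landauer bound k_B*T*ln(2) on the energy needed to erase one bit at room temperature and compares it with an assumed switching energy of roughly 1 fJ for a present-day logic gate; that 1 fJ figure is a rough illustrative assumption.

        # Landauer limit vs. an assumed present-day switching energy.
        # The 1 fJ per-switch figure is a rough assumption for illustration.
        import math

        k_B = 1.380649e-23                   # Boltzmann constant, J/K
        T = 300.0                            # room temperature, K

        landauer_J = k_B * T * math.log(2)   # minimum energy to erase one bit
        assumed_switch_J = 1e-15             # ~1 fJ per switching event (assumed)

        print(f"Landauer bound at {T:.0f} K: {landauer_J:.3e} J per bit")
        print(f"Assumed switching energy:   {assumed_switch_J:.3e} J")
        print(f"Headroom factor:            {assumed_switch_J / landauer_J:.1e}x")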

    On The Robustness of a Neural Network

    With the development of neural-network-based machine learning and its use in mission-critical applications, voices are rising against the black-box aspect of neural networks, as it becomes crucial to understand their limits and capabilities. With the rise of neuromorphic hardware, it is even more critical to understand how a neural network, as a distributed system, tolerates failures of its computing nodes, the neurons, and of its communication channels, the synapses. Experimentally assessing the robustness of neural networks involves the quixotic venture of testing all possible failures on all possible inputs, which runs into a combinatorial explosion for the former and the impossibility of gathering all possible inputs for the latter. In this paper, we prove an upper bound on the expected error of the output when a subset of neurons crashes. This bound involves dependencies on the network parameters that can be seen as too pessimistic in the average case: a polynomial dependency on the Lipschitz coefficient of the neurons' activation function, and an exponential dependency on the depth of the layer where a failure occurs. We back up our theoretical results with experiments illustrating the extent to which our predictions match the dependencies between the network parameters and robustness. Our results show that the robustness of neural networks to the average crash can be estimated without the need to either test the network on all failure configurations or access the training set used to train the network, both of which are practically impossible requirements. Comment: 36th IEEE International Symposium on Reliable Distributed Systems, 26-29 September 2017, Hong Kong, China.
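    The bound itself is not reproduced in the abstract, but the empirical side of the argument can be sketched with a small Monte Carlo experiment in Python: randomly crash (zero out) hidden neurons of a toy feedforward network and estimate the expected output error. The architecture, the crash model, and the 5% crash probability are illustrative assumptions, not the paper's setup.

        # Monte Carlo estimate of the expected output error under random neuron crashes.
        # The toy MLP and the crash model are assumptions for illustration.
        import numpy as np

        rng = np.random.default_rng(1)
        layer_sizes = [16, 32, 32, 4]     # input, two hidden layers, output
        weights = [rng.normal(scale=1/np.sqrt(m), size=(m, n))
                   for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

        def forward(x, crash_prob=0.0):
            """ReLU forward pass; each hidden neuron crashes (outputs 0) with prob. crash_prob."""
            h = x
            for i, W in enumerate(weights):
                h = h @ W
                if i < len(weights) - 1:              # hidden layers only
                    h = np.maximum(h, 0.0)
                    h = h * (rng.random(h.shape) >= crash_prob)
            return h

        x = rng.normal(size=(256, layer_sizes[0]))    # a batch of random inputs
        clean = forward(x)                            # fault-free reference output
        errors = [np.linalg.norm(forward(x, crash_prob=0.05) - clean, axis=1).mean()
                  for _ in range(200)]                # 200 random crash patterns
        print(f"Estimated expected output error under 5% neuron crashes: {np.mean(errors):.3f}")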

    The Fat-Pyramid and Universal Parallel Computation Independent of Wire Delay

    This paper shows that a fat-pyramid of area Θ(A) requires only O(log A) slowdown to simulate any competing network of area A under very general conditions. The result holds regardless of the processor size (amount of attached memory) and the number of processors in the competing networks, as long as the limitation on total area is met. Furthermore, the result is valid regardless of the relationship between wire length and wire delay. We especially focus on eliminating the common simplifying assumption that unit time suffices to traverse a wire regardless of its length, since this assumption becomes more and more untenable as the size of parallel systems increases. This paper concentrates on simulation using transmission lines (wires along which bits can be pipelined) with the message-routing schedule set up off-line, but it also discusses the extension to on-line simulation. This paper also examines the capabilities of a fat-pyramid when matched against a substantially larger network and points out the surprising difficulty of making such a comparison without the unit wire delay assumption.
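    To see why the unit-wire-delay assumption matters, the toy Python calculation below (an illustration, not from the paper) contrasts the time to send a bit across a chip of area A under unit wire delay versus a delay growing linearly with wire length; the constants are arbitrary assumptions.

        # Unit wire delay vs. length-proportional wire delay (illustrative constants).
        import math

        for area in [1e2, 1e4, 1e6, 1e8]:     # chip area A, arbitrary units
            side = math.sqrt(area)            # a cross-chip wire has length ~ sqrt(A)
            unit_delay = 1.0                  # the common simplifying assumption
            linear_delay = 0.1 * side         # assumed delay proportional to length
            print(f"A={area:10.0e}  unit-delay: {unit_delay:5.1f}   "
                  f"length-proportional: {linear_delay:8.1f}")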