
    Recurrent kernel machines: computing with infinite echo state networks

    Echo state networks (ESNs) are large, random recurrent neural networks with a single trained linear readout layer. Despite the untrained nature of the recurrent weights, they are capable of performing universal computations on temporal input data, which makes them interesting for both theoretical research and practical applications. The key to their success lies in the fact that the network computes a broad set of nonlinear, spatiotemporal mappings of the input data, on which linear regression or classification can easily be performed. One could consider the reservoir as a spatiotemporal kernel, in which the mapping to a high-dimensional space is computed explicitly. In this letter, we build on this idea and extend the concept of ESNs to infinite-sized recurrent neural networks, which can be considered recursive kernels that can subsequently be used to create recursive support vector machines. We present the theoretical framework, provide several practical examples of recursive kernels, and apply them to typical temporal tasks.
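
    To make the reservoir-as-kernel idea concrete, here is a minimal sketch of a finite ESN with a trained linear readout, in the spirit of the abstract. The reservoir size, spectral radius of 0.9, tanh nonlinearity, ridge parameter, and the toy delay task are illustrative assumptions, not the paper's construction.

        import numpy as np

        rng = np.random.default_rng(0)
        N, d = 200, 1                                    # reservoir size, input dimension
        W_in = rng.uniform(-0.5, 0.5, (N, d))            # untrained input weights
        W = rng.normal(0, 1, (N, N))
        W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # scale spectral radius below 1

        def run_reservoir(u):
            """Drive the reservoir with an input sequence u (T x d); return states (T x N)."""
            x, states = np.zeros(N), []
            for u_t in u:
                x = np.tanh(W @ x + W_in @ u_t)
                states.append(x.copy())
            return np.array(states)

        # Train the linear readout by ridge regression on a toy memory task:
        # reconstruct u[t - 5] from the reservoir state at time t.
        T, delay = 1000, 5
        u = rng.uniform(-1, 1, (T, d))
        X = run_reservoir(u)[delay:]
        y = u[:-delay, 0]
        W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ y)
        print("train MSE:", np.mean((X @ W_out - y) ** 2))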

    Computing with infinite networks

    For neural networks with a wide class of weight-priors, it can be shown that in the limit of an infinite number of hidden units the prior over functions tends to a Gaussian process. In this paper, analytic forms are derived for the covariance function of the Gaussian processes corresponding to networks with sigmoidal and Gaussian hidden units. This allows predictions to be made efficiently using networks with an infinite number of hidden units, and shows that, somewhat paradoxically, it may be easier to compute with infinite networks than finite ones.
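
    As a concrete companion, the covariance function this paper derives for erf (sigmoidal) hidden units has a closed form, implemented in the sketch below and used for Gaussian process regression. The diagonal weight-prior covariance, noise level, and toy data are assumptions of this illustration.

        import numpy as np

        def erf_kernel(X1, X2, sigma2_b=1.0, sigma2_w=1.0):
            """Covariance of an infinite network of erf hidden units (Williams):
            K(x, z) = (2/pi) * arcsin( 2 x~.S.z~ / sqrt((1 + 2 x~.S.x~)(1 + 2 z~.S.z~)) ),
            where x~ = (1, x) is the bias-augmented input and S is the (here diagonal,
            by assumption) prior covariance of the input-to-hidden weights."""
            aug = lambda X: np.hstack([np.ones((X.shape[0], 1)), X])
            A1, A2 = aug(X1), aug(X2)
            S = np.diag([sigma2_b] + [sigma2_w] * X1.shape[1])
            num = 2.0 * A1 @ S @ A2.T
            d1 = 1.0 + 2.0 * np.einsum('ij,jk,ik->i', A1, S, A1)
            d2 = 1.0 + 2.0 * np.einsum('ij,jk,ik->i', A2, S, A2)
            return (2.0 / np.pi) * np.arcsin(num / np.sqrt(np.outer(d1, d2)))

        # GP regression with the infinite-network kernel on toy 1-D data.
        rng = np.random.default_rng(1)
        X = np.linspace(-3, 3, 20).reshape(-1, 1)
        y = np.tanh(X[:, 0]) + 0.05 * rng.normal(size=20)
        K = erf_kernel(X, X) + 0.05 ** 2 * np.eye(20)      # assumed noise variance
        Xs = np.linspace(-4, 4, 5).reshape(-1, 1)
        print(erf_kernel(Xs, X) @ np.linalg.solve(K, y))   # posterior mean at Xs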

    Efficient measurement-based quantum computing with continuous-variable systems

    We present strictly efficient schemes for scalable measurement-based quantum computing using continuous-variable systems: these schemes are based on suitable non-Gaussian resource states, ones that can be prepared using interactions of light with matter systems or even purely optically. Only Gaussian measurements, such as optical homodyning, as well as photon-counting measurements are required, on individual sites. These schemes overcome limitations posed by Gaussian cluster states, which are known not to be universal for quantum computations of unbounded length unless one is willing to scale the degree of squeezing with the total system size. We establish a framework, derived from tensor networks and matrix product states with infinite physical dimension and finite auxiliary dimension, general enough to accommodate such schemes. Since in the discussed schemes the logical encoding is finite-dimensional, tools of error correction are applicable. We also identify some further limitations for any continuous-variable computing scheme, from which one can argue that no substantially easier ways of continuous-variable measurement-based computing than the presented one can exist. Comment: 13 pages, 3 figures, published version.
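
    The role of the finite auxiliary dimension can be illustrated in the correlation-space picture: each single-site measurement outcome selects a matrix that acts on a finite-dimensional logical state carried by the auxiliary (bond) index. The toy sketch below uses a 2-dimensional auxiliary space and made-up outcome matrices purely for illustration; it is not the paper's resource state or measurement scheme.

        import numpy as np

        # Toy outcome matrices on a 2-dim auxiliary (logical) space: reading
        # outcome s at a site applies A[s] to the state carried by the bond index.
        H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
        Z = np.diag([1.0, -1.0])
        A = {0: H, 1: H @ Z}            # outcome 1 differs by a Z byproduct (toy choice)

        rng = np.random.default_rng(0)
        v = np.array([1.0, 0.0])        # initial logical state
        for _ in range(4):
            s = int(rng.integers(2))    # simulated measurement outcome at this site
            v = A[s] @ v
            v /= np.linalg.norm(v)      # renormalise; outcome maps need not be unitary
        print(v)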

    Infinitely Complex Machines

    Infinite machines (IMs) can do supertasks. A supertask is an infinite series of operations done in some finite time. Whether or not our universe contains any IMs, they are worthy of study as upper bounds on finite machines. We introduce IMs and describe some of their physical and psychological aspects. An accelerating Turing machine (an ATM) is a Turing machine that performs every next operation twice as fast. It can carry out infinitely many operations in finite time. Many ATMs can be connected together to form networks of infinitely powerful agents. A network of ATMs can also be thought of as the control system for an infinitely complex robot. We describe a robot with a dense network of ATMs for its retinas, its brain, and its motor controllers. Such a robot can perform psychological supertasks: it can perceive infinitely detailed objects in all their detail; it can formulate infinite plans; it can make infinitely precise movements. An endless hierarchy of IMs might realize a deep notion of intelligent computing everywhere.
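
    The "infinitely many operations in finite time" claim is just a convergent geometric series; assuming, for illustration, that the first operation takes one second:

        % Total runtime of an ATM whose n-th operation takes 2^{-n} seconds:
        \[
          T \;=\; \sum_{n=0}^{\infty} \frac{1}{2^{n}}
            \;=\; 1 + \tfrac{1}{2} + \tfrac{1}{4} + \cdots
            \;=\; \frac{1}{1 - \tfrac{1}{2}} \;=\; 2 \text{ seconds},
        \]
        % so the ATM completes infinitely many operations within two seconds.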

    Avoiding Kernel Fixed Points: Computing with ELU and GELU Infinite Networks

    Analysing and computing with Gaussian processes arising from infinitely wide neural networks has recently seen a resurgence in popularity. Despite this, many explicit covariance functions of networks with modern activation functions remain unknown. Furthermore, while the kernels of deep networks can be computed iteratively, theoretical understanding of deep kernels is lacking, particularly with respect to fixed-point dynamics. Firstly, we derive the covariance functions of MLPs with exponential linear units and Gaussian error linear units and evaluate the performance of the limiting Gaussian processes on some benchmarks. Secondly, and more generally, we introduce a framework for analysing the fixed-point dynamics of iterated kernels corresponding to a broad range of activation functions. We find that unlike some previously studied neural network kernels, these new kernels exhibit non-trivial fixed-point dynamics which are mirrored in finite-width neural networks. Comment: 18 pages, 9 figures, 2 tables. Corrected name particle capitalisation and formatting.
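
    For a flavour of the fixed-point analysis, the sketch below iterates the known closed-form correlation map for ReLU activations (the normalised order-1 arc-cosine kernel of Cho & Saul), which drives every input correlation toward the degenerate fixed point 1; this is the kind of pathology the title alludes to avoiding. The ELU and GELU maps derived in the paper are not reproduced here; the ReLU stand-in and the depth of 30 are assumptions of this illustration.

        import numpy as np

        def relu_corr_map(rho):
            """One layer of the NNGP correlation map for ReLU activations
            (normalised order-1 arc-cosine kernel)."""
            theta = np.arccos(np.clip(rho, -1.0, 1.0))
            return (np.sin(theta) + (np.pi - theta) * np.cos(theta)) / np.pi

        rho = 0.2                       # correlation of two inputs at the first layer
        for _ in range(30):
            rho = relu_corr_map(rho)
        print(f"correlation after 30 layers: {rho:.6f}")  # close to 1: a degenerate fixed point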

    Strong Nash Equilibria in Games with the Lexicographical Improvement Property

    We introduce a class of finite strategic games with the property that every deviation of a coalition of players that is profitable to each of its members strictly decreases the lexicographical order of a certain function defined on the set of strategy profiles. We call this property the Lexicographical Improvement Property (LIP) and show that it implies the existence of a generalized strong ordinal potential function. We use this characterization to derive existence, efficiency, and fairness properties of strong Nash equilibria (SNE). We then study a class of games that generalizes congestion games with bottleneck objectives, which we call bottleneck congestion games. We show that these games possess the LIP and thus the above-mentioned properties. For bottleneck congestion games in networks, we identify cases in which the potential function associated with the LIP leads to polynomial-time algorithms for computing a strong Nash equilibrium. Finally, we investigate the LIP for infinite games. We show that the LIP does not imply the existence of a generalized strong ordinal potential, and thus the existence of SNE does not follow. Assuming that the function associated with the LIP is continuous, however, we prove the existence of SNE. As a consequence, we prove that bottleneck congestion games with infinite strategy spaces and continuous cost functions possess a strong Nash equilibrium.
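
    A toy instance of the potential argument, restricted for brevity to unilateral rather than coalitional deviations: in a bottleneck congestion game on parallel links, every improving move strictly decreases the non-increasingly sorted load vector in the lexicographical order, so improvement dynamics must terminate in an equilibrium. The 3-player, 2-link game below is a made-up example, not one from the paper.

        from collections import Counter

        players, links = 3, 2

        def loads(profile):
            """Load on each link under a strategy profile (player -> link)."""
            c = Counter(profile)
            return [c.get(l, 0) for l in range(links)]

        def cost(profile, i):
            """Bottleneck objective: player i's cost is the load on its own link."""
            return loads(profile)[profile[i]]

        def potential(profile):
            """LIP-style potential: loads sorted non-increasingly, compared lexicographically."""
            return tuple(sorted(loads(profile), reverse=True))

        profile, improved = [0, 0, 0], True
        while improved:
            improved = False
            for i in range(players):
                for l in range(links):
                    new = profile[:i] + [l] + profile[i + 1:]
                    if cost(new, i) < cost(profile, i):
                        assert potential(new) < potential(profile)  # strict lexicographic decrease
                        profile, improved = new, True
        print("equilibrium profile:", profile, "loads:", loads(profile))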