Infinitely Complex Machines
Infinite machines (IMs) can do supertasks. A supertask is an infinite series of operations performed in some finite time. Whether or not our universe contains any IMs, they are worthy of study as upper bounds on finite machines. We introduce IMs and describe some of their physical and psychological aspects. An accelerating Turing machine (ATM) is a Turing machine that performs each successive operation twice as fast as the previous one. It can carry out infinitely many operations in finite time. Many ATMs can be connected together to form networks of infinitely powerful agents. A network of ATMs can also be thought of as the control system for an infinitely complex robot. We describe a robot with a dense network of ATMs for its retinas, its brain, and its motor controllers. Such a robot can perform psychological supertasks: it can perceive infinitely detailed objects in all their detail; it can formulate infinite plans; it can make infinitely precise movements. An endless hierarchy of IMs might realize a deep notion of intelligent computing everywhere.
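The arithmetic behind the acceleration in this abstract is a geometric series: if step n takes 2^-n seconds, infinitely many steps fit inside 2 seconds. A minimal sketch of the partial sums (the function name and timings are illustrative, not from the paper):

```python
# Partial sums of the step times of an accelerating Turing machine (ATM).
# If step n takes 2**-n seconds, the total time for all steps is the
# geometric-series limit: sum_{n>=0} 2**-n = 2.

def total_time(steps: int) -> float:
    """Time consumed by the first `steps` operations of the ATM."""
    return sum(2.0 ** -n for n in range(steps))

for k in (1, 2, 10, 50):
    print(k, total_time(k))
# Every partial sum stays strictly below 2, so each finite prefix of the
# supertask completes before the 2-second mark, yet all infinitely many
# steps are done by then.
```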
Efficient measurement-based quantum computing with continuous-variable systems
We present strictly efficient schemes for scalable measurement-based quantum
computing using continuous-variable systems. These schemes are based on
suitable non-Gaussian resource states that can be prepared using
interactions of light with matter systems or even purely optically. Only
Gaussian measurements, such as optical homodyning, and photon counting
measurements on individual sites are required. These schemes overcome
limitations posed by Gaussian cluster states, which are known not to be
universal for quantum computations of unbounded length, unless one is willing
to scale the degree of squeezing with the total system size. We establish a
framework, derived from tensor networks and matrix product states with
infinite physical dimension and finite auxiliary dimension, that is general
enough to accommodate such schemes. Since in the discussed schemes the logical encoding
is finite-dimensional, tools of error correction are applicable. We also
identify some further limitations for any continuous-variable computing
scheme, from which one can argue that no substantially easier way of
performing continuous-variable measurement-based computing than the one
presented here can exist.
Comment: 13 pages, 3 figures, published version
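The "finite auxiliary dimension" structure mentioned in this abstract can be illustrated with a generic uniform matrix product state: a site tensor with bond dimension D, normalized through the leading eigenvalue of its transfer matrix. This is a standard MPS exercise with a finite physical dimension standing in for the continuous-variable modes, not the paper's construction:

```python
import numpy as np

# Toy uniform matrix product state: physical dimension d, auxiliary
# (bond) dimension D. Each physical index s selects a D x D matrix A[s].
rng = np.random.default_rng(0)
d, D = 2, 3
A = rng.standard_normal((d, D, D))

# Transfer matrix E = sum_s A[s] (x) conj(A[s]) governs correlations in a
# translation-invariant MPS; its leading eigenvalue sets the chain's norm.
E = sum(np.kron(A[s], A[s].conj()) for s in range(d))
lam = np.max(np.abs(np.linalg.eigvals(E)))
A /= np.sqrt(lam)  # rescale so the infinite chain is normalized

# After rescaling, the transfer matrix's leading eigenvalue is 1.
E = sum(np.kron(A[s], A[s].conj()) for s in range(d))
print(np.max(np.abs(np.linalg.eigvals(E))))  # ~1.0
```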
Solving frustrated Ising models using tensor networks
Motivated by the recent success of tensor networks to calculate the residual
entropy of spin ice and kagome Ising models, we develop a general framework to
study frustrated Ising models in terms of infinite tensor networks, i.e.,
tensor networks that can be contracted using standard algorithms for infinite
systems. This is achieved by reformulating the problem as local rules for
configurations on overlapping clusters chosen in such a way that they relieve
the frustration, i.e. that the energy can be minimized independently on each
cluster. We show that optimizing the choice of clusters, including the weight
on shared bonds, is crucial for the contractibility of the tensor networks, and
we derive some basic rules and a linear program to implement them. We
illustrate the power of the method by computing the residual entropy of a
frustrated Ising spin system on the kagome lattice with next-next-nearest
neighbour interactions, vastly outperforming Monte Carlo methods in speed and
accuracy. The extension to finite temperature is briefly discussed.
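The frustration this abstract refers to can be checked by brute force on the smallest example, an antiferromagnetic Ising triangle: no spin assignment satisfies all three bonds, so the ground state is 6-fold degenerate and carries residual entropy ln(6)/3 per site. This enumeration is purely illustrative; the paper's tensor-network method is what makes infinite lattices tractable:

```python
from itertools import product
from math import log

J = 1.0  # antiferromagnetic coupling
bonds = [(0, 1), (1, 2), (2, 0)]  # the three bonds of a triangle

def energy(s):
    """Ising energy E = J * sum over bonds of s_i * s_j."""
    return sum(J * s[i] * s[j] for i, j in bonds)

states = list(product((-1, 1), repeat=3))
e0 = min(energy(s) for s in states)
ground = [s for s in states if energy(s) == e0]

# 6 of the 8 configurations are ground states (all but the two fully
# aligned ones), each leaving exactly one bond frustrated.
print(len(ground), log(len(ground)) / 3)
```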
Strong Nash Equilibria in Games with the Lexicographical Improvement Property
We introduce a class of finite strategic games with the property that every
deviation of a coalition of players that is profitable to each of its members
strictly decreases the lexicographical order of a certain function defined on
the set of strategy profiles. We call this property the Lexicographical
Improvement Property (LIP) and show that it implies the existence of a
generalized strong ordinal potential function. We use this characterization to
derive existence, efficiency, and fairness properties of strong Nash equilibria (SNE).
We then study a class of games that generalizes congestion games with
bottleneck objectives that we call bottleneck congestion games. We show that
these games possess the LIP and thus the above mentioned properties. For
bottleneck congestion games in networks, we identify cases in which the
potential function associated with the LIP leads to polynomial time algorithms
computing a strong Nash equilibrium. Finally, we investigate the LIP for
infinite games. We show that the LIP does not imply the existence of a
generalized strong ordinal potential; thus, the existence of SNE does not
follow. Assuming that the function associated with the LIP is continuous,
however, we prove existence of SNE. As a consequence, we prove that bottleneck
congestion games with infinite strategy spaces and continuous cost functions
possess a strong Nash equilibrium.
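A toy instance can illustrate the lexicographical improvement idea: in a bottleneck congestion game, when players move to less congested resources, the non-increasingly sorted vector of resource loads decreases in lexicographical order. The game below (three players, two resources, cost = load on the chosen resource) is invented for illustration, not taken from the paper:

```python
from collections import Counter

def sorted_loads(profile):
    """Resource loads sorted non-increasingly (an LIP-style potential)."""
    return tuple(sorted(Counter(profile).values(), reverse=True))

# Three players pick resource 'a' or 'b'; a player's cost is the load on
# the resource it uses (a bottleneck objective).
before = ('a', 'a', 'a')   # loads: a=3      -> potential (3,)
after  = ('a', 'a', 'b')   # one deviator    -> potential (2, 1)

print(sorted_loads(before), sorted_loads(after))
# The deviation is profitable for the deviator (cost 3 -> 1), and the
# potential strictly decreases lexicographically:
assert sorted_loads(after) < sorted_loads(before)
```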
Infinite Networks, Halting and Local Algorithms
Recent years have witnessed increased interest in local algorithms, i.e.,
constant-time distributed algorithms. In a recent survey of
the topic (Suomela, ACM Computing Surveys, 2013), it is argued that local
algorithms provide a natural framework that could be used in order to
theoretically control infinite networks in finite time. We study a
comprehensive collection of distributed computing models and prove that if
infinite networks are included in the class of structures investigated, then
every universally halting distributed algorithm is in fact a local algorithm.
To contrast this result, we show that if only finite networks are allowed, then
even very weak distributed computing models can define nonlocal algorithms that
halt everywhere. The investigations in this article continue the studies in the
intersection of logic and distributed computing initiated in (Hella et al.,
PODC 2012) and (Kuusisto, CSL 2013).
Comment: In Proceedings GandALF 2014, arXiv:1408.556
Recurrent kernel machines: computing with infinite echo state networks
Echo state networks (ESNs) are large, random recurrent neural networks with a single trained linear readout layer. Despite the untrained nature of the recurrent weights, they are capable of performing universal computations on temporal input data, which makes them interesting for both theoretical research and practical applications. The key to their success lies in the fact that the network computes a broad set of nonlinear, spatiotemporal mappings of the input data, on which linear regression or classification can easily be performed. One could consider the reservoir as a spatiotemporal kernel, in which the mapping to a high-dimensional space is computed explicitly. In this letter, we build on this idea and extend the concept of ESNs to infinite-sized recurrent neural networks, which can be considered recursive kernels that can subsequently be used to create recursive support vector machines. We present the theoretical framework, provide several practical examples of recursive kernels, and apply them to typical temporal tasks.
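The finite-size setup this abstract starts from can be sketched in a few lines: a fixed random reservoir driven by the input, with only a linear readout trained by ridge regression. The reservoir size, spectral-radius scaling, and one-step-delay recall task below are conventional illustrative choices, not taken from the letter:

```python
import numpy as np

rng = np.random.default_rng(1)
N, T = 100, 500  # reservoir size, sequence length

# Fixed random recurrent weights, rescaled toward the echo state property
# (spectral radius below 1 is the usual heuristic).
W = rng.standard_normal((N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
w_in = 0.5 * rng.standard_normal(N)  # input weights, also untrained

u = rng.uniform(-1, 1, T)  # random input signal
x = np.zeros(N)
X = np.zeros((T, N))
for t in range(T):  # run the untrained reservoir
    x = np.tanh(W @ x + w_in * u[t])
    X[t] = x

# Train only the linear readout (ridge regression) to recall u[t-1].
target = np.roll(u, 1)
ridge = 1e-6
w_out = np.linalg.solve(X.T @ X + ridge * np.eye(N), X.T @ target)
pred = X @ w_out

mse = float(np.mean((pred[10:] - target[10:]) ** 2))  # skip warm-up
print(mse)
```

Everything except `w_out` stays random and untrained, which is the point of the reservoir view: the recurrent network supplies the spatiotemporal feature map, and only the readout is fit.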
Infinite-message Interactive Function Computation in Collocated Networks
An interactive function computation problem in a collocated network is
studied in a distributed block source coding framework. With the goal of
computing a desired function at the sink, the source nodes exchange messages
through a sequence of error-free broadcasts. The infinite-message minimum
sum-rate is viewed as a functional of the joint source pmf and is characterized
as the least element in a partially ordered family of functionals having
certain convex-geometric properties. This characterization leads to a family of
lower bounds for the infinite-message minimum sum-rate and a simple optimality
test for any achievable infinite-message sum-rate. An iterative algorithm for
evaluating the infinite-message minimum sum-rate functional is proposed and is
demonstrated through an example of computing the minimum function of three
sources.
Comment: 5 pages, 2 figures. This draft has been submitted to the IEEE
International Symposium on Information Theory (ISIT) 201