Effective Theories for Circuits and Automata
Abstracting an effective theory from a complicated process is central to the
study of complexity. Even when the underlying mechanisms are understood, or at
least measurable, the presence of dissipation and irreversibility in
biological, computational and social systems makes the problem harder. Here we
demonstrate the construction of effective theories in the presence of both
irreversibility and noise, in a dynamical model with underlying feedback. We
use the Krohn-Rhodes theorem to show how the composition of underlying
mechanisms can lead to innovations in the emergent effective theory. We show
how dissipation and irreversibility fundamentally limit the lifetimes of these
emergent structures, even though, on short timescales, the group properties may
be enriched compared to their noiseless counterparts. Comment: 11 pages, 9 figures
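As a toy illustration of the two building blocks that the Krohn-Rhodes theorem composes (this is a minimal sketch, not the paper's dynamical model), consider a two-state automaton with one "group" input and one "reset" input:

```python
# Toy illustration of the two building blocks in a Krohn-Rhodes
# decomposition: permutation ("group") inputs vs reset ("flip-flop")
# inputs. A minimal sketch, not the model studied in the paper.

TOGGLE = {0: 1, 1: 0}   # a permutation of the states: generates the group Z_2
RESET0 = {0: 0, 1: 0}   # a reset map: irreversible, flip-flop-like

def run(word, state):
    """Apply a sequence of input maps to a state."""
    for letter in word:
        state = letter[state]
    return state

# Applying TOGGLE twice is the identity: reversible group dynamics.
assert all(run([TOGGLE, TOGGLE], s) == s for s in (0, 1))

# RESET0 collapses both states to 0: information and invertibility are lost.
assert {run([RESET0], s) for s in (0, 1)} == {0}
```

The reset map is the simplest example of the dissipative, irreversible dynamics that the abstract argues limits the lifetime of emergent group structure.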
Statistical Mechanics of Surjective Cellular Automata
Reversible cellular automata are seen as microscopic physical models, and
their states of macroscopic equilibrium are described using invariant
probability measures. We establish a connection between the invariance of Gibbs
measures and the conservation of additive quantities in surjective cellular
automata. Namely, we show that the simplex of shift-invariant Gibbs measures
associated to a Hamiltonian is invariant under a surjective cellular automaton
if and only if the cellular automaton conserves the Hamiltonian. A special case
is the (well-known) invariance of the uniform Bernoulli measure under
surjective cellular automata, which corresponds to the conservation of the
trivial Hamiltonian. As an application, we obtain results indicating the lack
of (non-trivial) Gibbs or Markov invariant measures for "sufficiently chaotic"
cellular automata. We discuss the relevance of the randomization property of
algebraic cellular automata to the problem of approach to macroscopic
equilibrium, and pose several open questions.
As an aside, a shift-invariant pre-image of a Gibbs measure under a
pre-injective factor map between shifts of finite type turns out to be always a
Gibbs measure. We provide a sufficient condition under which the image of a
Gibbs measure under a pre-injective factor map is not a Gibbs measure. We point
out a potential application of pre-injective factor maps as a tool in the study
of phase transitions in statistical mechanical models. Comment: 50 pages, 7 figures
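A concrete instance of a conserved additive quantity in a surjective cellular automaton is elementary rule 184 (the "traffic" rule), a classic number-conserving and surjective CA: the particle count plays the role of the conserved Hamiltonian. The following sketch (a finite periodic ring, not the paper's general setting) checks the conservation numerically:

```python
import random

# Elementary CA rule 184 ("traffic" rule): a number-conserving, surjective
# cellular automaton. The number of 1s is a conserved additive quantity,
# illustrating the abstract's Hamiltonian-conservation condition.
# Minimal sketch on a periodic ring.

def rule184_step(cells):
    n = len(cells)
    table = {(1, 1, 1): 1, (1, 1, 0): 0, (1, 0, 1): 1, (1, 0, 0): 1,
             (0, 1, 1): 1, (0, 1, 0): 0, (0, 0, 1): 0, (0, 0, 0): 0}
    return [table[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])]
            for i in range(n)]

random.seed(0)
config = [random.randint(0, 1) for _ in range(32)]
particles = sum(config)
for _ in range(50):
    config = rule184_step(config)
    assert sum(config) == particles  # additive quantity is conserved
```

In the traffic interpretation, a 1 (a car) moves right exactly when the cell to its right is empty, so cars are neither created nor destroyed.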
Constraint solving over multi-valued logics - application to digital circuits
Due to usage conditions, hazardous environments or intentional causes, physical and virtual systems are subject to faults in their components, which may affect their overall behaviour. In a "black-box" agent modelled by a set of propositional logic rules, in which only a subset of components is externally visible, such faults may only be recognised by examining some output function of the agent. A (fault-free) model of the agent's system provides the expected output for a given input. If the real output differs from the predicted output, then the system is faulty. However, some faults may only become apparent in the system output when appropriate inputs are given. A number of problems regarding both testing and diagnosis thus arise, such as testing a fault, testing the whole system, finding possible faults, and differentiating them to locate the correct one. The corresponding optimisation problems of finding solutions that require minimum resources are also very relevant in industry, as is minimal diagnosis.
In this dissertation we use a well-established set of benchmark circuits to address such diagnosis-related problems, and we propose and develop models with different logics that we formalise and generalise as much as possible. We also prove that all techniques generalise to agents and to multiple faults. The developed multi-valued logics extend the usual Boolean logic (suitable for fault-free models) by encoding values with some dependency (usually on faults). Such logics thus allow modelling an arbitrary number of diagnostic theories. Each problem is subsequently solved with CLP solvers that we implement and discuss, together with a new efficient search technique that we present.
We compare our results with other approaches such as SAT (which require substantial duplication of circuits), showing the effectiveness of constraints over multi-valued logics, as well as the adequacy of a general set constraint solver (with special inferences over set functions such as cardinality) on other problems. In addition, for an optimisation problem, we integrate local search with a constructive branch-and-bound approach using a variety of logics to improve an existing efficient tool based on SAT and ILP.
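The basic testing task the dissertation encodes can be sketched on a tiny hypothetical circuit (not one of the benchmark circuits): a test vector for a fault is an input on which the faulty and fault-free models disagree at the output.

```python
from itertools import product

# Hypothetical two-gate circuit: out = (a AND b) OR c. We search
# exhaustively for a test vector exposing a stuck-at-0 fault on the
# AND gate's output -- the elementary fault-testing problem that the
# dissertation encodes with multi-valued logics and CLP instead.

def circuit(a, b, c, and_stuck_at_0=False):
    w = 0 if and_stuck_at_0 else (a & b)
    return w | c

tests = [(a, b, c) for a, b, c in product((0, 1), repeat=3)
         if circuit(a, b, c) != circuit(a, b, c, and_stuck_at_0=True)]

# Only a=b=1, c=0 propagates the fault to the output.
assert tests == [(1, 1, 0)]
```

A multi-valued encoding would carry the fault-free and faulty values of each wire together, avoiding the duplicated simulation above; that is the duplication the abstract attributes to the SAT approach.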
Physical layer network coding based on compute-and-forward
In this thesis, Compute-and-Forward is considered, where the system model consists of multiple users and a single base station. Compute-and-Forward is a type of lattice network coding that reduces backhaul load and is therefore an important aspect of modern wireless communications networks. Initially, we propose incorporating Construction D lattices into Compute-and-Forward and investigate multilayer lattice encoding and decoding strategies. We show that by adopting a Construction D lattice we can implement a practical lattice decoder in Compute-and-Forward. During this investigation we discover an error floor caused by an interaction between code layers in the multilayer decoder. We analyse and describe this interaction with mathematical expressions, supported by lemmas and proofs. Secondly, we demonstrate the BER performance of the system model for unit-valued, integer-valued and complex-integer-valued channels. We show that, using the derived interaction expressions, the decoders on each code layer can indeed decode. BER results are presented for two scenarios: one using zero-order and second-order Reed-Muller codes, the other using first- and third-order Reed-Muller codes. Finally, we extend our system model, using Construction D and existing conventional decoders, to include coefficient selection algorithms. We employ an exhaustive search algorithm and analyse the throughput performance of the codes, again for both of our models. The throughput results show that each layer can be successfully decoded once the interaction expressions are taken into account. The purpose of the performance results is to show decodability when differing codes are used across layers.
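The core Compute-and-Forward idea can be sketched in one dimension (illustrative only; the thesis uses Construction D lattices built from Reed-Muller codes): the relay does not decode each user's message, but a lattice point representing an integer linear combination of the messages.

```python
# Minimal sketch of Compute-and-Forward over the integer lattice
# (assumed toy parameters, not the thesis's Construction D setup).
# Two users send integer messages; the relay observes an integer-
# combined, noisy channel output and decodes the linear combination
# of messages modulo q rather than the individual messages.

q = 7                      # message alphabet Z_q (assumed parameter)
m1, m2 = 3, 5              # users' messages
h1, h2 = 2, 1              # integer channel coefficients
noise = 0.3                # small noise, within the decoding radius

y = h1 * m1 + h2 * m2 + noise
combo = round(y) % q       # lattice-decode (round to integer), reduce mod q

assert combo == (h1 * m1 + h2 * m2) % q  # relay recovers 2*m1 + m2 mod q
```

Forwarding only such combinations from several relays (here, the base station) is what saves backhaul: the destination solves for the messages from the integer combinations instead of receiving every raw signal.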
Monte Carlo Simulations of Spin Glasses on Cell Broadband Engine
Several large-scale computational scientific problems require high-end computing systems to be solved. In recent years, the design of multi-core architectures has delivered tens or hundreds of Gflops of peak computing performance on a single chip, with high power efficiency, making available computational power previously found only in high-end multi-processor systems.
The aim of this Ph.D. thesis is to study the suitability of multi-core processors for scientific programming, analyzing sustained performance and issues related to multi-core programming, data distribution and synchronization, in order to define a set of guidelines for optimizing scientific applications for this class of architectures.
As an example of a multi-core processor, we consider the Cell Broadband Engine (CBE), developed by Sony, IBM and Toshiba. The CBE is one of the most powerful multi-core CPUs currently available, integrating eight cores and delivering a peak performance of 200 Gflops in single precision and 100 Gflops in double precision. As a case study, we analyze the performance of the CBE for Monte Carlo simulations of the Edwards-Anderson spin glass model, a paradigm in theoretical and condensed matter physics used to describe complex systems characterized by phase transitions (such as the para-ferro transition in magnets) or "frustrated" dynamics.
We describe several strategies for distributing the data set among on-chip and off-chip memories and propose analytic models to find the balance between computation and memory access time as a function of both algorithmic and architectural parameters. We use the analytic models to set the parameters of the algorithm, such as the size of data structures and the scheduling of operations, to optimize the execution of Monte Carlo spin glass simulations on the CBE architecture.
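The kernel being optimized is a single-spin Metropolis update; a minimal serial sketch for a 1D Edwards-Anderson chain (no CBE-specific code, and a far smaller system than the thesis simulates) looks like this:

```python
import math
import random

# Metropolis sweep for a 1D Edwards-Anderson chain with quenched
# random +/-1 couplings and periodic boundaries. A minimal serial
# sketch of the update that the thesis parallelises on the CBE.

random.seed(1)
N, T = 64, 0.5
J = [random.choice((-1, 1)) for _ in range(N)]   # J[i] couples spins i, i+1
s = [random.choice((-1, 1)) for _ in range(N)]

def local_field(i):
    return J[i] * s[(i + 1) % N] + J[(i - 1) % N] * s[(i - 1) % N]

def energy():
    return -sum(J[i] * s[i] * s[(i + 1) % N] for i in range(N))

for sweep in range(100):
    for i in range(N):
        dE = 2 * s[i] * local_field(i)            # cost of flipping spin i
        if dE <= 0 or random.random() < math.exp(-dE / T):
            s[i] = -s[i]

assert all(v in (-1, 1) for v in s)
assert -N <= energy() <= N   # energy bounded by the number of bonds
```

Because each spin interacts only with its neighbours, sweeps over independent sublattices can run in parallel, which is what makes the memory layout and scheduling questions studied in the thesis decisive for performance.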
The Phase Diagram of 1-in-3 Satisfiability Problem
We study the typical case properties of the 1-in-3 satisfiability problem,
the boolean satisfaction problem where a clause is satisfied by exactly one
literal, in an enlarged random ensemble parametrized by average connectivity
and probability of negation of a variable in a clause. Random 1-in-3
Satisfiability and Exact 3-Cover are special cases of this ensemble. We
interpolate between these cases from a region where satisfiability can be
typically decided for all connectivities in polynomial time to a region where
deciding satisfiability is hard, in some interval of connectivities. We derive
several rigorous results in the first region, and develop the
one-step replica-symmetry-breaking cavity analysis in the second one. We
discuss the prediction for the transition between the almost surely satisfiable
and the almost surely unsatisfiable phase, and other structural properties of
the phase diagram, in light of cavity method results. Comment: 30 pages, 12 figures
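The satisfaction condition itself is easy to state in code: a clause is satisfied iff exactly one of its three literals is true. The following brute-force checker (a tiny illustrative instance, unrelated to the random ensembles analysed in the paper) makes that precise:

```python
from itertools import product

# Brute-force decision of 1-in-3 satisfiability: a clause of three
# literals is satisfied iff exactly one literal evaluates to true.
# Illustrative only; the paper studies large random ensembles.

def one_in_three_sat(n_vars, clauses):
    """clauses: list of 3-tuples of literals; literal k denotes
    variable x_|k| (numbered from 1), negated if k < 0."""
    for assign in product((False, True), repeat=n_vars):
        value = lambda lit: assign[abs(lit) - 1] ^ (lit < 0)
        if all(sum(map(value, clause)) == 1 for clause in clauses):
            return True
    return False

assert one_in_three_sat(3, [(1, 2, 3)]) is True     # e.g. only x1 true
assert one_in_three_sat(1, [(1, 1, 1)]) is False    # 0 or 3 true literals
```

The negation probability in the ensemble above controls how often literals appear as `-k` rather than `k`; Exact 3-Cover is the special case with no negations.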
Computers from plants we never made. Speculations
We discuss possible designs and prototypes of computing systems that could be
based on morphological development of roots, interaction of roots, and analog
electrical computation with plants, and plant-derived electronic components. In
morphological plant processors data are represented by initial configuration of
roots and configurations of sources of attractants and repellents; results of
computation are represented by topology of the roots' network. Computation is
implemented by the roots following gradients of attractants and repellents, as
well as interacting with each other. Problems solvable by plant roots, in
principle, include shortest-path, minimum spanning tree, Voronoi diagram,
α-shapes, and convex subdivision of concave polygons. Electrical properties
of plants can be modified by loading the plants with functional nanoparticles
or coating parts of plants with conductive polymers. Thus, we are in a position to
make living variable resistors, capacitors, operational amplifiers,
multipliers, potentiometers and fixed-function generators. The electrically
modified plants can implement summation, integration with respect to time,
inversion, multiplication, exponentiation, logarithm, division. Mathematical
and engineering problems to be solved can be represented in plant root networks
of resistive or reaction elements. Developments in plant-based computing
architectures will trigger the emergence of a unique community of biologists, electronic engineers and computer scientists working together to produce living electronic devices of which future green computers will be made. Comment: The chapter will be published in "Inspired by Nature. Computing inspired by physics, chemistry and biology. Essays presented to Julian Miller on the occasion of his 60th birthday", Editors: Susan Stepney and Andrew Adamatzky (Springer, 2017).
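The shortest-path claim can be idealised as gradient following: diffuse an "attractant" distance field from the target, then let the root tip grow downhill. This sketch (an abstract grid model, not an experiment with real roots) recovers a shortest path:

```python
from collections import deque

# Idealised "root follows an attractant gradient" model: compute a
# BFS distance field from the target (the attractant gradient), then
# grow the path greedily downhill. On a grid this yields a shortest
# path; an abstract sketch, not a plant experiment.

def shortest_path(grid, start, target):
    rows, cols = len(grid), len(grid[0])
    dist = {target: 0}
    queue = deque([target])
    while queue:                       # attractant "diffusing" from target
        r, c = queue.popleft()
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and nxt not in dist):
                dist[nxt] = dist[(r, c)] + 1
                queue.append(nxt)
    path, cell = [start], start
    while cell != target:              # root tip follows decreasing distance
        r, c = cell
        cell = min((n for n in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1))
                    if n in dist), key=dist.get)
        path.append(cell)
    return path

grid = [[0, 0, 0],
        [1, 1, 0],     # 1 = obstacle
        [0, 0, 0]]
path = shortest_path(grid, (2, 0), (0, 0))
assert len(path) - 1 == 6   # shortest route around the obstacle: 6 steps
```

Real root growth is of course noisier and slower; the point of the abstract is that the gradient-following primitive is computationally sufficient for this class of geometric problems.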