Design of a simulation environment for laboratory management by robot organizations
This paper describes the basic concepts needed for a simulation environment capable of supporting the design of robot organizations for managing chemical, or similar, laboratories on the planned U.S. Space Station. The environment should facilitate a thorough study of the problems to be encountered in assigning the responsibility of managing a non-life-critical, but mission valuable, process to an organized group of robots. In the first phase of the work, we seek to employ the simulation environment to develop robot cognitive systems and strategies for effective multi-robot management of chemical experiments. Later phases will explore human-robot interaction and development of robot autonomy.
Temporal Aspects of Smart Contracts for Financial Derivatives
Implementing smart contracts to automate the performance of high-value
over-the-counter (OTC) financial derivatives is a formidable challenge. Due to
the regulatory framework and the scale of financial risk if a contract were to
go wrong, the performance of these contracts must be enforceable in law and
there is an absolute requirement that the smart contract will be faithful to
the intentions of the parties as expressed in the original legal documentation.
Formal methods provide an attractive route for validation and assurance, and
here we present early results from an investigation of the semantics of
industry-standard legal documentation for OTC derivatives. We explain the need
for a formal representation that combines temporal, deontic and operational
aspects, and focus on the requirements for the temporal aspects as derived from
the legal text. The relevance of this work extends beyond OTC derivatives and
is applicable to understanding the temporal semantics of a wide range of legal
documentation.
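The combination of temporal and deontic aspects the authors call for can be sketched with a toy model. The `Obligation` class, its field names, and the grace-period logic below are illustrative assumptions, not the paper's formalism or any actual ISDA terminology:

```python
from dataclasses import dataclass
from datetime import date, timedelta
from typing import Optional

@dataclass
class Obligation:
    """A deontic obligation with a temporal window (hypothetical model)."""
    party: str
    action: str
    due: date          # date by which the action must be performed
    grace: timedelta   # cure period allowed after the due date

    def status(self, performed_on: Optional[date], today: date) -> str:
        if performed_on is not None and performed_on <= self.due + self.grace:
            return "fulfilled"
        if today <= self.due:
            return "pending"
        if today <= self.due + self.grace:
            return "in cure period"
        return "breached"

# A floating-rate payment under a hypothetical swap confirmation
pay = Obligation("Party A", "pay floating amount",
                 date(2024, 3, 15), timedelta(days=3))
print(pay.status(None, date(2024, 3, 17)))               # in cure period
print(pay.status(date(2024, 3, 16), date(2024, 3, 20)))  # fulfilled
```

Even this toy shows why the temporal semantics matter: whether a contract is breached depends on ordering relations between dates, cure periods, and the moment of evaluation.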
Functional real-time programming: the language Ruth and its semantics
Real-time systems are among the most safety-critical systems involving computer
software, and the incorrect functioning of this software can cause great damage, up to
and including the loss of life. It therefore seems sensible to write real-time software in a
way that gives us the best chance of correctly implementing specifications. Because of
the high level of functional programming languages, their semantic simplicity and their
amenability to formal reasoning and correctness-preserving transformation, it seems
natural to use a functional language for this task.
This thesis explores the problems of applying functional programming languages to
real-time systems by defining the real-time functional programming language Ruth.
The first part of the thesis concerns the identification of the particular problems
associated with programming real-time systems. These can broadly be stated as a
requirement that a real-time language must be able to express facts about time, a feature
we have called time expressibility.
The next stage is to provide time expressibility within a purely functional
framework. This is accomplished by the use of timestamps on inputs and outputs and by
providing a real-time clock as an input to Ruth programs.
The final major part of the work is the construction of a formal definition of the
semantics of Ruth to serve as a basis for formal reasoning and transformation. The
framework within which the formal semantics of a real-time language are defined
requires time expressibility in the same way as the real-time language itself. This is
accomplished within the framework of domain theory by the use of specialised domains
for timestamped objects, called herring-bone domains. These domains could be used as
the basis for the definition of the semantics of any real-time language.
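Ruth's approach of timestamping inputs and outputs, with a real-time clock supplied as an input stream, can be illustrated with a small sketch. The encoding below is a hypothetical Python analogue, not Ruth syntax:

```python
from typing import Iterator, Tuple

Timestamped = Tuple[int, str]  # (time, value) -- hypothetical encoding

def clock(start: int = 0, tick: int = 10) -> Iterator[int]:
    """A real-time clock modelled as an input stream of times."""
    t = start
    while True:
        yield t
        t += tick

def stamp(values, clk) -> Iterator[Timestamped]:
    """Pair each input value with the clock reading at which it arrives."""
    for v, t in zip(values, clk):
        yield (t, v)

# A pure program: output timestamps derive only from its inputs,
# so time expressibility is achieved without side effects
events = stamp(["start", "sample", "stop"], clock())
print(list(events))  # [(0, 'start'), (10, 'sample'), (20, 'stop')]
```

The key design point mirrored here is that the program never reads a mutable clock: time is just another input, which keeps the language purely functional and amenable to equational reasoning.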
Generating Probability Distributions using Multivalued Stochastic Relay Circuits
The problem of random number generation dates back to von Neumann's work in
1951. Since then, many algorithms have been developed for generating unbiased
bits from complex correlated sources as well as for generating arbitrary
distributions from unbiased bits. An equally interesting, but less studied
aspect is the structural component of random number generation as opposed to
the algorithmic aspect. That is, given a network structure imposed by nature or
physical devices, how can we build networks that generate arbitrary probability
distributions in an optimal way? In this paper, we study the generation of
arbitrary probability distributions in multivalued relay circuits, a
generalization in which relays can take on any of N states and the logical
'and' and 'or' are replaced with 'min' and 'max' respectively. Previous work
was done on two-state relays. We generalize these results, describing a duality
property and networks that generate arbitrary rational probability
distributions. We prove that these networks are robust to errors and design a
universal probability generator which takes input bits and outputs arbitrary
binary probability distributions.
On the Treewidth of Dynamic Graphs
Dynamic graph theory is a novel, growing area that deals with graphs that
change over time and is of great utility in modelling modern wireless, mobile
and dynamic environments. As a graph evolves, possibly arbitrarily, it is
challenging to identify the graph properties that can be preserved over time
and understand their respective computability.
In this paper we are concerned with the treewidth of dynamic graphs. We focus
on metatheorems, which allow the generation of a series of results based on
general properties of classes of structures. In graph theory two major
metatheorems on treewidth provide complexity classifications by employing
structural graph measures and finite model theory. Courcelle's Theorem gives a
general tractability result for problems expressible in monadic second order
logic on graphs of bounded treewidth, and Frick & Grohe demonstrate a similar
result for first order logic and graphs of bounded local treewidth.
We extend these theorems by showing that dynamic graphs of bounded (local)
treewidth where the length of time over which the graph evolves and is observed
is finite and bounded can be modelled in such a way that the (local) treewidth
of the underlying graph is maintained. We show the application of these results
to problems in dynamic graph theory and dynamic extensions to static problems.
In addition we demonstrate that certain widely used dynamic graph classes
naturally have bounded local treewidth.
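The idea of tracking treewidth across snapshots of an evolving graph can be sketched with the standard min-degree elimination heuristic, which yields an upper bound on treewidth. The snapshot encoding below is an illustrative assumption, not the paper's model of dynamic graphs:

```python
def treewidth_upper_bound(adj):
    """Min-degree elimination heuristic: an upper bound on treewidth.

    adj: dict mapping vertex -> set of neighbours (undirected, simple).
    """
    adj = {v: set(ns) for v, ns in adj.items()}   # work on a copy
    width = 0
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))   # minimum-degree vertex
        nbrs = adj[v]
        width = max(width, len(nbrs))
        for a in nbrs:                            # make neighbours a clique
            adj[a] |= nbrs - {a}
            adj[a].discard(v)
        del adj[v]
    return width

def cycle(n):
    return {i: {(i - 1) % n, (i + 1) % n} for i in range(n)}

# A dynamic graph over a finite, bounded observation window:
# three snapshots, with a chord appearing at t = 2
snapshots = [cycle(6), cycle(6), cycle(6)]
snapshots[2][0].add(3)
snapshots[2][3].add(0)

for t, g in enumerate(snapshots):
    print(f"t={t}: treewidth <= {treewidth_upper_bound(g)}")
```

Here the width bound stays at 2 across all snapshots even as edges change, which is the kind of preserved-over-time property the metatheorems exploit.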
The Inflation Technique for Causal Inference with Latent Variables
The problem of causal inference is to determine if a given probability
distribution on observed variables is compatible with some causal structure.
The difficult case is when the causal structure includes latent variables. We
here introduce the inflation technique for tackling this problem. An
inflation of a causal structure is a new causal structure that can contain
multiple copies of each of the original variables, but where the ancestry of
each copy mirrors that of the original. To every distribution of the observed
variables that is compatible with the original causal structure, we assign a
family of marginal distributions on certain subsets of the copies that are
compatible with the inflated causal structure. It follows that compatibility
constraints for the inflation can be translated into compatibility constraints
for the original causal structure. Even if the constraints at the level of
inflation are weak, such as observable statistical independences implied by
disjoint causal ancestry, the translated constraints can be strong. We apply
this method to derive new inequalities whose violation by a distribution
witnesses that distribution's incompatibility with the causal structure (of
which Bell inequalities and Pearl's instrumental inequality are prominent
examples). We describe an algorithm for deriving all such inequalities for the
original causal structure that follow from ancestral independences in the
inflation. For three observed binary variables with pairwise common causes, it
yields inequalities that are stronger in at least some aspects than those
obtainable by existing methods. We also describe an algorithm that derives a
weaker set of inequalities but is more efficient. Finally, we discuss which
inflations are such that the inequalities one obtains from them remain valid
even for quantum (and post-quantum) generalizations of the notion of a causal
model.
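A heavily simplified instance of the technique can be checked numerically. For the triangle scenario, an inflation contains copies of the observed variables with disjoint ancestry; such copies are independent, each with the original's marginal, while copies whose joint ancestry mirrors the original must reproduce its correlations. Together these facts rule out perfect three-way correlation of non-degenerate variables. The function below is an illustrative special case of that reasoning, not the paper's general algorithm:

```python
def marginal(p, idx):
    """Marginal of a joint distribution over (a, b, c) tuples onto one slot."""
    m = {}
    for outcome, prob in p.items():
        m[outcome[idx]] = m.get(outcome[idx], 0.0) + prob
    return m

def inflation_gap_perfect_correlation(p):
    """Toy inflation-style witness for the triangle scenario (illustrative).

    If p were compatible with the triangle and has A = B = C with probability
    1, chained mirrored-ancestry equalities in the inflation would force
    copies with *disjoint* ancestry to agree too; those copies are
    independent, so sum_v pA(v)*pB(v)*pC(v) would also have to equal 1.
    A positive gap 1 - sum_v pA(v)*pB(v)*pC(v) witnesses incompatibility.
    """
    perf = sum(prob for (a, b, c), prob in p.items() if a == b == c)
    assert abs(perf - 1.0) < 1e-9, "test applies only to perfect correlation"
    pa, pb, pc = (marginal(p, i) for i in range(3))
    agree = sum(pa.get(v, 0) * pb.get(v, 0) * pc.get(v, 0) for v in pa)
    return 1.0 - agree

# GHZ-like distribution: three perfectly correlated uniform bits
ghz = {(0, 0, 0): 0.5, (1, 1, 1): 0.5}
print(inflation_gap_perfect_correlation(ghz))  # 0.75 > 0: incompatible
```

A degenerate distribution such as `{(1, 1, 1): 1.0}` gives gap 0, consistent with the fact that a deterministic common value poses no obstruction.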
Thermodynamic AI and the fluctuation frontier
Many Artificial Intelligence (AI) algorithms are inspired by physics and
employ stochastic fluctuations. We connect these physics-inspired AI algorithms
by unifying them under a single mathematical framework that we call
Thermodynamic AI. Seemingly disparate algorithmic classes can be described by
this framework, for example, (1) Generative diffusion models, (2) Bayesian
neural networks, (3) Monte Carlo sampling and (4) Simulated annealing. Such
Thermodynamic AI algorithms are currently run on digital hardware, ultimately
limiting their scalability and overall potential. Stochastic fluctuations
naturally occur in physical thermodynamic systems, and such fluctuations can be
viewed as a computational resource. Hence, we propose a novel computing
paradigm, where software and hardware become inseparable. Our algorithmic
unification allows us to identify a single full-stack paradigm, involving
Thermodynamic AI hardware, that could accelerate such algorithms. We contrast
Thermodynamic AI hardware with quantum computing where noise is a roadblock
rather than a resource. Thermodynamic AI hardware can be viewed as a novel form
of computing, since it uses a novel fundamental building block. We identify
stochastic bits (s-bits) and stochastic modes (s-modes) as the respective
building blocks for discrete and continuous Thermodynamic AI hardware. In
addition to these stochastic units, Thermodynamic AI hardware employs a
Maxwell's demon device that guides the system to produce non-trivial states. We
provide a few simple physical architectures for building these devices and we
develop a formalism for programming the hardware via gate sequences. We hope to
stimulate discussion around this new computing paradigm. Beyond acceleration,
we believe it will impact the design of both hardware and algorithms, while
also deepening our understanding of the connection between physics and
intelligence.
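The role of a continuous stochastic unit whose noise is itself the computational resource can be sketched with overdamped Langevin dynamics, where thermal fluctuations perform the sampling. This is a generic s-mode-like illustration, not the paper's hardware proposal:

```python
import math
import random

def langevin_sample(grad_potential, x0=0.0, step=0.01, temp=1.0,
                    n_steps=5000, seed=0):
    """Overdamped Langevin dynamics: noise drives sampling from exp(-U/temp).

    Each step adds a deterministic drift down the potential gradient plus a
    Gaussian fluctuation whose scale is set by the temperature.
    """
    rng = random.Random(seed)
    x = x0
    for _ in range(n_steps):
        noise = rng.gauss(0.0, math.sqrt(2.0 * temp * step))
        x += -grad_potential(x) * step + noise
    return x

# Quadratic potential U(x) = x^2 / 2: the stationary law is N(0, temp),
# so repeated runs should have mean near 0 and variance near 1
samples = [langevin_sample(lambda x: x, seed=s) for s in range(200)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
print(f"mean ~ {mean:.2f}, var ~ {var:.2f}")
```

On digital hardware the fluctuations must be generated pseudorandomly, step by step; the abstract's point is that a physical thermodynamic system produces them for free.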
OrgML - a domain specific language for organisational decision-making
Effective decision-making based on a precise understanding of an organisation is critical for modern organisations to stay competitive in a dynamic and uncertain business environment. However, the state-of-the-art technologies that are relevant in this context are not adequate to capture and quantitatively analyse complex organisations. This paper discerns the information necessary for organisational decision-making from a management viewpoint, discusses the inadequacy of existing enterprise modelling and specification techniques, proposes a domain-specific language to capture the necessary information in machine-processable form, and demonstrates how the collected information can be used for simulation-based, evidence-driven organisational decision-making.
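The pipeline the abstract describes, capturing organisational facts in machine-processable form and running what-if simulations over them, can be sketched as follows. The model structure, numbers, and field names are invented for illustration and are not actual OrgML syntax:

```python
import random

# A hypothetical machine-processable organisation model (not OrgML syntax)
org = {
    "units": {
        "support": {"staff": 4, "throughput_per_head": 9},   # tickets/day
        "engineering": {"staff": 6, "throughput_per_head": 5},
    },
    "demand_per_day": 70,
}

def simulate(org, days=250, seed=1):
    """Evidence-driven what-if: does staffing keep the backlog bounded?"""
    rng = random.Random(seed)
    capacity = sum(u["staff"] * u["throughput_per_head"]
                   for u in org["units"].values())
    backlog = 0.0
    for _ in range(days):
        arrivals = rng.gauss(org["demand_per_day"], 8)   # noisy daily demand
        backlog = max(0.0, backlog + arrivals - capacity)
    return backlog

baseline = simulate(org)               # capacity 66 < mean demand 70
org["units"]["support"]["staff"] += 1  # candidate decision: hire one more
print(f"end-of-period backlog: {baseline:.0f} -> {simulate(org):.0f}")
```

The point is the workflow, not the toy numbers: a structured, analysable model lets a candidate decision be evaluated by simulation before it is made.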