63 research outputs found
Straight-line instruction sequence completeness for total calculation on cancellation meadows
A combination of program algebra with the theory of meadows is designed, leading to a theory of computation in algebraic structures which uses, in addition to a zero test and copying instructions, the instruction set . It is proven that total functions on cancellation meadows can be computed by straight-line programs using at most 5 auxiliary variables. A similar result is obtained for signed meadows.
Comment: 24 pages
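Background for the abstract above (an illustrative sketch, not taken from the paper): a meadow totalizes the multiplicative inverse of a field by setting the inverse of 0 to 0, so that inversion becomes a total operation. A minimal version over the rationals:

```python
from fractions import Fraction

def inv(x: Fraction) -> Fraction:
    """Total inverse as in a meadow: 0 is mapped to 0, every
    nonzero x to its ordinary multiplicative inverse."""
    return Fraction(0) if x == 0 else 1 / x

# Two defining meadow identities, checked for a sample element:
#   (x^-1)^-1 = x   and   x * (x * x^-1) = x
x = Fraction(3, 7)
assert inv(inv(x)) == x
assert x * (x * inv(x)) == x
assert inv(Fraction(0)) == Fraction(0)
```

In a cancellation meadow one additionally has the cancellation property: if x is nonzero, x * inv(x) = 1, which the rationals satisfy.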
Instruction sequences for the production of processes
Single-pass instruction sequences under execution are considered to produce
behaviours to be controlled by some execution environment. Threads as
considered in thread algebra model such behaviours: upon each action performed
by a thread, a reply from its execution environment determines how the thread
proceeds. Threads in turn can be looked upon as producing processes as
considered in process algebra. We show that, by apposite choice of basic
instructions, all processes that can only be in a finite number of states can
be produced by single-pass instruction sequences.
Comment: 23 pages; acknowledgement corrected, references updated
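The thread model described above, in which each performed action draws a reply from the execution environment that determines how the thread proceeds, can be sketched as a finite transition table (my own illustration with invented action names, not the paper's notation):

```python
# A finite-state thread: each state performs an action and the
# environment's boolean reply selects the successor state.
# "S" (successful termination) and "D" (deadlock) are terminal.
thread = {
    "init":  ("read", {True: "write", False: "S"}),
    "write": ("ack",  {True: "init",  False: "D"}),
}

def run(thread, state, env_replies):
    """Execute the thread against a fixed sequence of replies;
    return the trace of performed actions and the final state."""
    trace = []
    for reply in env_replies:
        if state in ("S", "D"):
            break
        action, successors = thread[state]
        trace.append(action)
        state = successors[reply]
    return trace, state

trace, final = run(thread, "init", [True, True, False])
assert trace == ["read", "ack", "read"]
assert final == "S"
```

Because the table is finite, the produced behaviour has only finitely many states, which is exactly the class of processes the abstract says single-pass instruction sequences can produce.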
An Application Specific Informal Logic for Interest Prohibition Theory
Interest prohibition theory concerns theoretical aspects of interest
prohibition. We attempt to lay down some aspects of interest prohibition theory
wrapped in a larger framework of informal logic. The reason for this is that
interest prohibition theory has to deal with a variety of arguments so wide that limiting attention in advance to so-called correct arguments is counterproductive. We suggest that an application-specific informal logic must be developed for dealing with the principles of interest prohibition theory.
Comment: 8 pages
A progression ring for interfaces of instruction sequences, threads, and services
We define focus-method interfaces and some connections between such
interfaces and instruction sequences, giving rise to instruction sequence
components. We provide a flexible and practical notation for interfaces using
an abstract datatype specification comparable to that of basic process algebra
with deadlock. The structures thus defined are called progression rings. We
also define thread and service components. Two types of composition of
instruction sequences or threads and services (called `use' and `apply') are
lifted to the level of components.
Comment: 12 pages
Mechanistic Behavior of Single-Pass Instruction Sequences
Earlier work on program and thread algebra detailed the functional,
observable behavior of programs under execution. In this article we add the
modeling of unobservable, mechanistic processing, in particular processing due
to jump instructions. We model mechanistic processing preceding some further
behavior as a delay of that behavior; we borrow a unary delay operator from
discrete time process algebra. We define a mechanistic improvement ordering on
threads and observe that some threads do not have an optimal implementation.
Comment: 12 pages
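The idea of mechanistic processing due to jump instructions can be illustrated as follows (my sketch with an invented list encoding, not the paper's formalism): traversing a jump produces no observable action, only a delay tick before the next real instruction:

```python
def delays_before_action(program, pc=0):
    """Count the delay steps (jump traversals) executed before the
    next non-jump instruction is reached, in a list-encoded program
    where ("jmp", k) jumps forward k positions."""
    ticks = 0
    while pc < len(program) and program[pc][0] == "jmp":
        pc += program[pc][1]
        ticks += 1
    return ticks, pc

prog = [("jmp", 2), ("print", "x"), ("jmp", 1), ("print", "y")]
assert delays_before_action(prog) == (2, 3)
```

Two programs with the same observable trace can then differ in their delay counts, which is the kind of distinction a mechanistic improvement ordering makes.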
Periodic Single-Pass Instruction Sequences
A program is a finite piece of data that produces a (possibly infinite)
sequence of primitive instructions. From scratch we develop a linear notation
for sequential, imperative programs, using a familiar class of primitive
instructions and so-called repeat instructions, a particular type of control
instructions. The resulting mathematical structure is a semigroup. We relate
this set of programs to program algebra (PGA) and show that a particular
subsemigroup is a carrier for PGA by providing axioms for single-pass
congruence, structural congruence, and thread extraction. This subsemigroup
characterizes periodic single-pass instruction sequences and provides a direct
basis for PGA's toolset.
Comment: 16 pages, 3 tables, New title
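How a finite program with a repeat instruction denotes a possibly infinite, eventually periodic instruction sequence can be sketched like this (my illustration; the textual encoding is invented, not PGA's official syntax):

```python
from itertools import islice

def unfold(instructions, repeat_last):
    """Generate the (possibly infinite) instruction stream denoted by
    a finite sequence whose trailing repeat instruction replays its
    last `repeat_last` instructions forever."""
    split = len(instructions) - repeat_last
    prefix, period = instructions[:split], instructions[split:]
    yield from prefix
    while True:
        yield from period

# "a" followed by "b c" repeated forever: a b c b c b c ...
stream = unfold(["a", "b", "c"], repeat_last=2)
assert list(islice(stream, 7)) == ["a", "b", "c", "b", "c", "b", "c"]
```

The finite program is the piece of data; the generator is the (infinite) sequence of primitive instructions it produces, as in the abstract's opening sentence.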
Interface groups and financial transfer architectures
Analytic execution architectures have been proposed by the same authors as a
means to conceptualize the cooperation between heterogeneous collectives of
components such as programs, threads, states and services. Interface groups
have been proposed as a means to formalize interface information concerning
analytic execution architectures. These concepts are adapted to organization
architectures with a focus on financial transfers. Interface groups (and
monoids) now provide a technique to combine interface elements into interfaces,
with the flexibility to distinguish directions of flow depending on entity
naming.
The main principle exploiting interface groups is that when composing a
closed system of a collection of interacting components, the sum of their
interfaces must vanish in the interface group modulo reflection. This certainly
matters for financial transfer interfaces.
As an example of this, we specify an interface group and within it some
specific interfaces concerning the financial transfer architecture for a part
of our local academic organization.
Financial transfer interface groups arise as a special case of more general
service architecture interfaces.
Comment: 22 pages
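The vanishing-sum principle above can be sketched as follows (my own encoding, not the paper's): take interfaces as formal sums of directed transfer elements, with the reflected element (flow in the opposite direction) acting as inverse; composing a closed system must make the sum cancel to zero:

```python
from collections import Counter

def normalize(iface: Counter) -> Counter:
    """Cancel each element against its reflection: the elements
    (a, b) and (b, a) are mutual inverses in the interface group."""
    out = Counter()
    for (a, b), n in iface.items():
        if a < b:  # visit each unordered pair once
            net = n - iface.get((b, a), 0)
            if net > 0:
                out[(a, b)] = net
            elif net < 0:
                out[(b, a)] = -net
    return out

def compose(*interfaces: Counter) -> Counter:
    """Composition of components adds their interfaces."""
    total = Counter()
    for i in interfaces:
        total += i
    return normalize(total)

# A component offering a transfer dept -> faculty, composed with a
# component carrying the reflected element, forms a closed system:
# their interfaces vanish modulo reflection.
payer = Counter({("dept", "faculty"): 1})
receiver = Counter({("faculty", "dept"): 1})
assert compose(payer, receiver) == Counter()
assert compose(payer) == payer  # an open system does not vanish
```

The entity names ("dept", "faculty") are hypothetical; the point is only that composition is addition and closure means the sum is the group's zero.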
Turing Impossibility Properties for Stack Machine Programming
The strong, intermediate, and weak Turing impossibility properties are
introduced. Some facts concerning Turing impossibility for stack machine
programming are trivially adapted from previous work. Several intriguing
questions are raised about the Turing impossibility properties concerning
different method interfaces for stack machine programming.
Comment: arXiv admin note: substantial text overlap with arXiv:0910.556
- …