Information, Processes and Games
We survey the prospects for an Information Dynamics which can serve as the
basis for a fundamental theory of information, incorporating qualitative and
structural as well as quantitative aspects. We motivate our discussion with
some basic conceptual puzzles: how can information increase in computation, and
what is it that we are actually computing in general? Then we survey a number
of the theories which have been developed within Computer Science, as partial
exemplifications of the kind of fundamental theory which we seek: including
Domain Theory, Dynamic Logic, and Process Algebra. We look at recent work
showing new ways of combining quantitative and qualitative theories of
information, as embodied respectively by Domain Theory and Shannon Information
Theory. Then we look at Game Semantics and Geometry of Interaction, as examples
of dynamic models of logic and computation in which information flow and
interaction are made central and explicit. We conclude by looking briefly at
some key issues for future progress.

Comment: Appeared in Philosophy of Information, vol. 8 of Handbook of the
Philosophy of Science, edited by Dov Gabbay and John Woods. arXiv admin note:
substantial text overlap with arXiv:quant-ph/0312044 by other authors
Interlanguages and synchronic models of computation
A novel language system has given rise to promising alternatives to standard
formal and processor network models of computation. An interstring, linked with
an abstract machine environment, shares sub-expressions, transfers data, and
spatially allocates resources for the parallel evaluation of dataflow. Formal
models called the a-Ram family are introduced, designed to support interstring
programming languages (interlanguages). Distinct from dataflow, graph
rewriting, and FPGA models, a-Ram instructions are bit level and execute in
situ. They support sequential and parallel languages without the space/time
overheads associated with the Turing Machine and λ-calculus, enabling massive
programs to be simulated. The devices of one a-Ram model, called the Synchronic
A-Ram, are fully connected and simpler than FPGA LUTs. A compiler for an
interlanguage called Space has been developed for the Synchronic A-Ram. Space
is MIMD, strictly typed, and deterministic. Barring memory allocation and
compilation, modules are referentially transparent. At a high level of
abstraction, modules exhibit a state transition system, aiding verification.
Data structures and parallel iteration are straightforward to implement, and
allocations of sub-processes and data transfers to resources are implicit.
Space points towards highly connected architectures called Synchronic Engines,
that scale in a GALS (globally asynchronous, locally synchronous) manner.
Synchronic Engines are more general purpose than
systolic arrays and GPUs, and bypass programmability and conflict issues
associated with multicores.
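As a general, hypothetical illustration of the sub-expression sharing described in the abstract above (not the paper's actual interstring notation or a-Ram semantics), a dataflow DAG in which a shared node is memoized and evaluated only once can be sketched in Python:

```python
# Minimal sketch: a dataflow DAG with an explicitly shared
# sub-expression node. The shared node is evaluated once and its
# result reused, rather than being duplicated as in a plain
# expression tree. In a parallel model, independent nodes could be
# evaluated concurrently; this sketch evaluates sequentially.

class Node:
    def __init__(self, op, *args):
        self.op = op            # callable computing this node's value
        self.args = args        # predecessor nodes in the DAG
        self.result = None      # memoized value: evaluate once, reuse

    def eval(self):
        if self.result is None:
            vals = [a.eval() for a in self.args]
            self.result = self.op(*vals)
        return self.result

# Build (x + y) * (x + y): the sub-expression (x + y) is a single
# shared node, referenced twice by the root.
x = Node(lambda: 2)
y = Node(lambda: 3)
shared = Node(lambda a, b: a + b, x, y)
root = Node(lambda a, b: a * b, shared, shared)

print(root.eval())  # → 25; the addition runs only once
```

The memoized `result` field stands in for the spatial allocation of one resource per sub-expression: both references to `shared` resolve to the same computed value.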