On the "principle of the quantumness", the quantumness of Relativity, and the computational grand-unification
After reviewing recently suggested operational "principles of the
quantumness", I address the problem of whether Quantum Theory (QT) and Special
Relativity (SR) are unrelated theories, or whether the one implies the
other. I show how SR can indeed be derived from the causality of QT, within the
computational paradigm "the universe is a huge quantum computer", reformulating
QFT as a Quantum-Computational Field Theory (QCFT). In QCFT SR emerges from the
fabric of the computational network, which also naturally embeds gauge
invariance. In this scheme even the quantization rule and the Planck constant
can in principle be derived as emergent from the underlying causal tapestry of
space-time. In this way QT remains the only theory operating the huge computer
of the universe. Is QCFT only a speculative tautology (theory as simulation of
reality), or does it have a scientific value? The answer will come from Occam's
razor, depending on the mathematical simplicity of QCFT. Here I will just start
scratching the surface of QCFT, analyzing simple field theories, including
Dirac's. The number of problems and unmotivated recipes that plague QFT
strongly motivates us to undertake the QCFT project, since QCFT makes all such
problems manifest, and forces a re-foundation of QFT.
Comment: To be published in the AIP Proceedings of the Vaxjo conference. The ideas on
Quantum-Circuit Field Theory are more recent. V4: Largely improved, with new
interesting results and concepts. Dirac equation solve
Future Summary
We are emerging from a period of consolidation in particle physics. Its
great, historic achievement was to establish the Theory of Matter. This Theory
will serve as our description of ordinary matter under ordinary conditions --
allowing for an extremely liberal definition of "ordinary" -- for the
foreseeable future. Yet there are many indications, ranging from the numerical
to the semi-mystical, that a new fertile period lies before us. We will
discover compelling evidence for the unification of fundamental forces and for
new quantum dimensions (low-energy supersymmetry). We will identify new forms
of matter, which dominate the mass density of the Universe. We will achieve
much better fundamental understanding of the behavior of matter in extreme
astrophysical and cosmological environments. Lying beyond these expectations,
we can identify deep questions that seem to call for ideas outside our present
grasp. And there's still plenty of room for surprises.
Comment: 25 pages, 13 EPS figures, LaTeX with BoxedEPS macros. Closing talk
delivered at the LEPfest, CERN, October 11, 2000. Email correspondence to
[email protected]
LIPIcs
Fault-tolerant distributed algorithms play an important role in many critical/high-availability applications. These algorithms are notoriously difficult to implement correctly, due to asynchronous communication and the occurrence of faults, such as the network dropping messages or computers crashing. Nonetheless, there is surprisingly little language and verification support for building distributed systems based on fault-tolerant algorithms. In this paper, we present some of the challenges that a designer has to overcome to implement a fault-tolerant distributed system. Then we review different models that have been proposed to reason about distributed algorithms and sketch how such a model can form the basis for a domain-specific programming language. Adopting a high-level programming model can simplify the programmer's life and make the code amenable to automated verification, while still compiling to efficiently executable code. We conclude by summarizing the current status of an ongoing language design and implementation project that is based on this idea.
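As an illustration of the kind of fault the abstract describes (my sketch, not code from the paper), the snippet below shows the simplest ingredient of such algorithms: a sender retransmits numbered messages over a channel that drops packets, while the receiver deduplicates, so every message is delivered exactly once despite losses. The channel model, drop pattern, and retry bound are all hypothetical choices for the example.

```python
# Sketch: exactly-once delivery over a lossy channel via
# retransmission (sender) plus deduplication (receiver).

class LossyChannel:
    """Drops every packet whose delivery-attempt index is in `drops`."""
    def __init__(self, drops):
        self.drops = set(drops)
        self.attempts = 0

    def send(self, packet, deliver):
        self.attempts += 1
        if self.attempts in self.drops:
            return False          # packet lost in transit
        deliver(packet)
        return True

class Receiver:
    def __init__(self):
        self.seen = set()
        self.log = []

    def deliver(self, packet):
        seq, payload = packet
        if seq not in self.seen:  # ignore duplicate retransmissions
            self.seen.add(seq)
            self.log.append(payload)

def send_reliably(channel, receiver, messages, max_retries=10):
    for seq, payload in enumerate(messages):
        for _ in range(max_retries):
            if channel.send((seq, payload), receiver.deliver):
                break             # successful delivery stands in for an ack here
        else:
            raise RuntimeError("message lost despite retries")

channel = LossyChannel(drops={1, 2, 4})   # a fixed, deterministic loss pattern
rx = Receiver()
send_reliably(channel, rx, ["a", "b", "c"])
print(rx.log)                             # each message arrives exactly once, in order
```

In a real protocol the acknowledgements themselves can be lost, which is exactly where the subtlety (and the case for automated verification) begins.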
Effective interprocess communication (IPC) in a real-time transputer network
The thesis describes the design and implementation of an interprocess communication (IPC)
mechanism within a real-time distributed operating system kernel (RT-DOS) which is
designed for a transputer-based network. The requirements of real-time operating systems
are examined and existing design and implementation strategies are described. Particular
attention is paid to one of the object-oriented techniques, although it is concluded that these
techniques are not feasible for the chosen implementation platform. Studies of a number of
existing operating systems are reported. The choices for various aspects of operating system
design and their influence on the IPC mechanism to be used are elucidated. The actual design
choices are related to the real-time requirements and the implementation that has been
adopted is described. [Continues.]
Parallel algorithms for the solution of elliptic and parabolic problems on transputer networks
This thesis is a study of the implementation of parallel algorithms for solving
elliptic and parabolic partial differential equations on a network of transputers.
The thesis commences with a general introduction to parallel processing. Here a
discussion of the various ways of introducing parallelism in computer systems and the
classification of parallel architectures is presented.
In chapter 2, the transputer architecture and the associated language OCCAM are
described. The transputer development system (TDS) is also described as well as a
short account of other transputer programming languages. Also, a brief description of
the methodologies for programming transputer networks is given. The chapter is
concluded by a detailed description of the hardware used for the research. [Continues.]
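To illustrate the class of algorithms this thesis parallelises (my sketch, not the thesis code), here is a two-subdomain Jacobi iteration for the 1-D Laplace equation: each "processor" updates only its own grid points, reading the neighbouring subdomain's edge values as halo data. The grid size and iteration count are arbitrary choices for the example.

```python
import numpy as np

# u'' = 0 on (0, 1) with u(0) = 0, u(1) = 1; exact solution is u(x) = x.
n = 17                        # grid points, including the two boundaries
u = np.zeros(n)
u[-1] = 1.0

halves = [slice(1, n // 2), slice(n // 2, n - 1)]   # two "processors"
for _ in range(2000):         # Jacobi sweeps over the whole grid
    new = u.copy()
    for part in halves:       # each subdomain updates its own points;
        idx = np.arange(part.start, part.stop)
        new[idx] = 0.5 * (u[idx - 1] + u[idx + 1])  # edge reads act as halo exchange
    u = new

print(np.max(np.abs(u - np.linspace(0, 1, n))))     # error against the exact solution
```

On a real transputer network each subdomain would live on its own processor and the halo values would travel over the links; here the exchange is implicit because both halves read the previous sweep's array.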
The Instruction Systolic Array (ISA) and simulation of parallel algorithms
Systolic arrays have proved to be well suited to Very Large
Scale Integration (VLSI) technology since they:
- consist of a regular network of simple processing cells,
- use local communication between the processing cells only,
- exploit a maximal degree of parallelism.
However, systolic arrays have one main disadvantage compared with
other parallel computer architectures: they are special purpose
architectures, only capable of executing one algorithm; e.g., a
systolic array designed for sorting cannot be used to perform matrix
multiplication.
Several approaches have been made to make systolic arrays more
flexible, in order to be able to handle different problems on a
single systolic array.
In this thesis an alternative concept to a VLSI architecture,
the Soft-Systolic Simulation System (SSSS), is introduced and
developed as a working model of a virtual machine with the power to
simulate hard systolic arrays and more general forms of concurrency
such as the SIMD and MIMD models of computation.
The virtual machine includes a processing element consisting of
a soft-systolic processor implemented in the virtual machine language.
The processing element considered here was a very general element
which allows the choice of a wide range of arithmetic and logical
operators and allows the simulation of a wide class of algorithms,
but in principle extra processing cells can be added to form a library,
and this library can be tailored to individual needs.
The virtual machine chosen for this implementation is the
Instruction Systolic Array (ISA). The ISA has a number of interesting
features: firstly, it has been used to simulate all SIMD algorithms
and many MIMD algorithms by a simple program transformation technique;
further, the ISA can also simulate the so-called wavefront processor
algorithms, as well as many hard systolic algorithms. The ISA removes
the need for the broadcasting of data, which is a feature of SIMD
algorithms (limiting the size of the machine and its cycle time), and
also presents a fairly simple communication structure for MIMD
algorithms.
The model of systolic computation developed from the VLSI
approach to systolic arrays is such that the processing surface is
fixed, as are the processing elements or cells by virtue of their
being embedded in the processing surface.
The VLSI approach therefore freezes instructions and hardware
relative to the movement of data, with the virtual machine and soft-systolic
programming retaining the constructions of VLSI for array
design features such as regularity, simplicity and local communication,
while allowing the movement of instructions with respect to data. Data can
be frozen into the structure with instructions moving systolically.
Alternatively, both the data and instructions can move systolically
around the virtual processors (which are deemed fixed relative to
the underlying architecture).
The ISA is implemented in OCCAM programs whose execution and
output implicitly confirm the correctness of the design.
The soft-systolic preparation comprises the usual operating
system facilities for the creation and modification of files during
the development of new programs and ISA processor elements. We allow
any concurrent high-level language to be used to model the soft-systolic
program. Consequently the Replicating Instruction Systolic
Array Language (RISAL) was devised to provide a very primitive program
environment to the ISA, but one adequate for testing. RISAL accepts
instructions in an assembler-like form, but is fairly permissive
about the format of statements, subject of course to syntax.
The RISAL compiler is adopted to transform the soft-systolic
program description (RISAL) into a form suitable for the virtual
machine (simulating the algorithm) to run.
Finally we conclude that the principles mentioned here can form
the basis for a soft-systolic simulator using an orthogonally
connected mesh of processors. The wide range of algorithms which the
ISA can simulate makes it suitable for a virtual simulating grid.
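The instruction-moving idea behind the ISA can be caricatured in a few lines of Python (my illustration with invented operation names, not the thesis's OCCAM implementation): the data stays fixed in the cells while a stream of instructions marches systolically across the array, one cell per time step, so no broadcasting is needed and every cell eventually applies every instruction in program order.

```python
# Toy ISA: instructions flow left-to-right over a fixed row of data cells.

def run_isa(cells, program):
    ops = {"inc": lambda x: x + 1, "double": lambda x: 2 * x}
    n = len(cells)
    # At time step t, instruction i (if it has entered the array and
    # not yet left it) sits over cell t - i.
    for t in range(len(program) + n - 1):
        for i, instr in enumerate(program):
            c = t - i
            if 0 <= c < n:
                cells[c] = ops[instr](cells[c])
    return cells

print(run_isa([0, 1, 2], ["inc", "double"]))   # every cell computes double(inc(x))
```

Cell c meets instruction i at step t = c + i, so instructions reach each cell in program order without any global broadcast, which is the property the abstract credits to the ISA.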
Parallelisation of algorithms
Most numerical software involves performing an extremely large volume of algebraic computations. This is both costly and time-consuming in respect of computer resources and, for large problems, often supercomputer power is required in order for results to be obtained in a reasonable amount of time. One method whereby both the cost and time can be reduced is to use the principle "Many hands make light work", or rather, to allow several computers to operate simultaneously on the code, working towards a common goal, and hopefully obtaining the required results in a fraction of the time and cost normally used. This can be achieved through the modification of the costly, time-consuming code, breaking it up into separate individual code segments which may be executed concurrently on different processors. This is termed parallelisation of code. This document describes communication between sequential processes, protocols, message routing and parallelisation of algorithms. In particular, it deals with these aspects with reference to the Transputer as developed by INMOS and includes two parallelisation examples, namely parallelisation of code to study airflow and of code to determine far-field patterns of antennas. This document also reports on the practical experiences with programming in parallel.
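The "many hands make light work" principle can be sketched as follows (a hypothetical example, not taken from the document): a large computation is split into independent segments, each segment is evaluated by a separate worker process, and the partial results are combined into the final answer.

```python
# Sketch: split one big sum into segments and compute them in parallel.
from multiprocessing import Pool

def partial_sum(bounds):
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))   # one independent segment of the work

if __name__ == "__main__":
    n, workers = 1_000_000, 4
    step = n // workers
    segments = [(k * step, (k + 1) * step) for k in range(workers)]
    with Pool(workers) as pool:                # one OS process per "hand"
        total = sum(pool.map(partial_sum, segments))
    print(total == sum(i * i for i in range(n)))   # same answer, work shared
```

On a transputer network the segments would instead be placed on separate processors and the partial results gathered over the links, but the decomposition step is the same.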
Structural dynamics branch research and accomplishments
Summaries are presented of fiscal year 1989 research highlights from the Structural Dynamics Branch at NASA Lewis Research Center. Highlights from the branch's major work areas include aeroelasticity, vibration control, dynamic systems, and computational structural methods. A listing of the fiscal year 1989 branch publications is given.