Parallel processing and expert systems
Whether it is monitoring the thermal subsystem of Space Station Freedom or controlling the navigation of an autonomous rover on Mars, NASA missions in the 1990s cannot enjoy an increased level of autonomy without the efficient implementation of expert systems. Merely increasing the computational speed of uniprocessors may not guarantee that real-time demands are met for larger systems. Speedup via parallel processing must be pursued alongside the optimization of sequential implementations. Prototypes of parallel expert systems have been built at universities and industrial laboratories in the U.S. and Japan. The state-of-the-art research in progress related to parallel execution of expert systems is surveyed. The survey discusses multiprocessors for expert systems, parallel languages for symbolic computations, and mapping expert systems to multiprocessors. Results to date indicate that the parallelism achieved for these systems is small. The main reasons are (1) the body of knowledge applicable in any given situation and the amount of computation executed by each rule firing are small, (2) dividing the problem-solving process into relatively independent partitions is difficult, and (3) implementation decisions that enable expert systems to be incrementally refined hamper compile-time optimization. In order to obtain greater speedups, data parallelism and application parallelism must be exploited.
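The data- and application-parallelism point can be illustrated with a minimal sketch (the facts, rules, and thresholds below are invented for illustration and are not taken from the survey): each rule's match phase is independent of the others, so matches can run concurrently over the fact base.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical fact base and rules, loosely in the spirit of a
# thermal-monitoring expert system.
facts = [{"sensor": "thermal", "temp": t} for t in (18, 42, 75, 96)]
rules = [
    ("warn",  lambda f: f["temp"] > 40),
    ("alarm", lambda f: f["temp"] > 90),
]

def match(rule):
    """Match one rule against every fact. Each call is independent,
    so rules can be matched in parallel (application parallelism);
    the fact base itself could also be partitioned (data parallelism)."""
    name, cond = rule
    return [(name, f["temp"]) for f in facts if cond(f)]

with ThreadPoolExecutor() as pool:
    # pool.map preserves rule order in the combined result.
    firings = [m for ms in pool.map(match, rules) for m in ms]
```

With the sample facts above, `firings` contains three `warn` matches and one `alarm` match; the small amount of work per rule firing is exactly why, as the survey notes, measured speedups tend to be modest.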
Garbage Collection for General Graphs
Garbage collection is moving from being a utility to a requirement of every modern programming language. With multi-core and distributed systems, most programs written recently are heavily multi-threaded and distributed. Distributed and multi-threaded programs are called concurrent programs. Manual memory management is cumbersome and difficult in concurrent programs. Concurrent programming is characterized by multiple independent processes/threads, communication between processes/threads, and uncertainty in the order of concurrent operations. The uncertainty in the order of operations makes manual memory management of concurrent programs difficult. A popular alternative to garbage collection in concurrent programs is to use smart pointers. Smart pointers can collect all garbage only if the developer identifies the cycles being created in the reference graph. Smart pointer usage does not guarantee protection from memory leaks unless cycles can be detected as processes/threads create them. General garbage collectors, on the other hand, can avoid memory leaks, dangling pointers, and double-deletion problems in any programming environment without help from the programmer. Concurrent programming is used in shared-memory and distributed-memory systems. State-of-the-art shared-memory systems use a single concurrent garbage collector thread that processes the reference graph. Distributed-memory systems have very few complete garbage collection algorithms, and those that exist use global barriers, are centralized, and do not scale well. This thesis focuses on designing garbage collection algorithms for shared-memory and distributed-memory systems that satisfy the following properties: concurrent, parallel, scalable, localized (decentralized), low pause time, high promptness, no global synchronization, safe, complete, and operating in linear time.
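The limitation of smart pointers described above can be demonstrated concretely. A minimal sketch using CPython (whose memory management combines reference counting, the mechanism behind smart pointers, with a separate cycle detector): reference counting alone cannot reclaim a cycle once the last external reference is dropped, and a tracing cycle collector is needed to finish the job.

```python
import gc
import weakref

class Node:
    def __init__(self):
        self.next = None

gc.disable()  # make the demonstration deterministic

# Build a two-object reference cycle, analogous to two smart
# pointers that point at each other.
a, b = Node(), Node()
a.next, b.next = b, a

probe = weakref.ref(a)   # observes liveness without a strong reference
del a, b                 # drop the only external references

# Pure reference counting leaves the cycle alive: each node is
# still referenced by the other, so neither count reaches zero.
assert probe() is not None

# A tracing cycle detector (here, CPython's gc module) reclaims it.
gc.collect()
assert probe() is None
gc.enable()
```

This is exactly the scenario the abstract warns about: without cycle detection at the moment threads create such structures, smart pointers leak.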
Recursive structure in computer systems
PhD Thesis
Structure plays an important part in the design of large systems.
Unstructured programs are difficult to design or test
and good structure has been recognized as essential to all but
the smallest programs. Similarly, concurrently executing computers
must co-operate in a structured way if an uncontrolled
growth in complexity is to be avoided. The thesis presented
here is that recursive structure can be used to organize and
simplify large programs and highly parallel computers.
In programming, naming concerns the way names are used to
identify objects. Various naming schemes are examined including
'block structured' and 'pathname' naming. A new scheme is
presented as a synthesis of these two combining most of their
advantages. Recursively structured naming is shown to be an
advantage when programs are to be decomposed or combined to
an arbitrary degree. Also, a contribution to the UNIX
United/Newcastle Connection distributed operating system
design is described. This shows how recursive naming was used
in a practical system.
Computation concerns the progress of execution in a computer.
A distinction is made between control driven computation where
the programmer has explicit control over sequencing and data
driven or demand driven computation where sequencing is implicit.
It is shown that recursively structured computation has
attractive locality properties.
The definition of a recursive structure may itself be cyclic
(self-referencing). A new resource management ('garbage collection')
algorithm is presented which can manage cyclic
structures without costs proportional to the system size. The
scheme is an extension of 'reference counting'.
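The thesis algorithm itself is not reproduced here, but the general idea behind cycle-aware extensions of reference counting can be sketched with trial deletion (all names below are hypothetical, for illustration only): subtract the reference counts that are internal to a suspected cyclic structure, and if every count drops to zero the structure is kept alive only by itself and is garbage.

```python
class Obj:
    """A toy reference-counted object with explicit outgoing references."""
    def __init__(self):
        self.rc = 0      # reference count
        self.refs = []   # outgoing references

def add_ref(src, dst):
    src.refs.append(dst)
    dst.rc += 1

def is_garbage_cycle(candidate):
    """Trial deletion: gather the subgraph reachable from `candidate`,
    subtract references internal to it, and report whether every
    count reaches zero (i.e. no external reference keeps it alive)."""
    seen, stack = set(), [candidate]
    while stack:
        o = stack.pop()
        if o not in seen:
            seen.add(o)
            stack.extend(o.refs)
    trial = {o: o.rc for o in seen}
    for o in seen:
        for d in o.refs:
            if d in seen:
                trial[d] -= 1
    return all(c == 0 for c in trial.values())

# A self-sustaining two-object cycle is detected as garbage...
a, b = Obj(), Obj()
add_ref(a, b); add_ref(b, a)
assert is_garbage_cycle(a)

# ...but a cycle still reachable from an external root is not.
root, c = Obj(), Obj()
add_ref(root, c); add_ref(c, c)
assert not is_garbage_cycle(c)
```

Note this naive sketch scans the whole reachable subgraph per candidate; avoiding costs proportional to system size, as the thesis claims, is precisely the harder part.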
Finally the need for structure in program and computer design
and the advantages of recursive structure are discussed.
The Science and Engineering Research Council of Great Britain
Garbage collection in distributed systems
PhD Thesis
The provision of system-wide heap storage has a number of advantages.
However, when the technique is applied to distributed systems
automatically recovering inaccessible variables becomes a serious problem.
This thesis presents a survey of such garbage collection techniques but
finds that no existing algorithm is entirely suitable. A new, general
purpose algorithm is developed and presented which allows individual
systems to garbage collect largely independently. The effects of these
garbage collections are combined, using recursively structured control
mechanisms, to achieve garbage collection of the entire heap with the
minimum of overheads. Experimental results show that the new algorithm
recovers most inaccessible variables more quickly than a straightforward
garbage collection, giving improved memory utilisation.
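As a rough illustration of largely independent local collection (a generic sketch under common assumptions, not the thesis algorithm): each system can mark from its own roots plus an entry table of objects that other systems reference, then sweep locally without any global barrier. Garbage cycles that span systems are what the combining mechanism described above must handle.

```python
def local_collect(heap, local_roots, remote_entries):
    """Mark-and-sweep one system's heap independently.
    heap:           {obj_id: [obj_ids it references on this system]}
    local_roots:    ids reachable from this system's own roots
    remote_entries: ids referenced from other systems, treated as
                    extra roots so remote holders stay safe
    Returns the set of reclaimed (unreachable) object ids."""
    marked, stack = set(), list(local_roots) + list(remote_entries)
    while stack:
        o = stack.pop()
        if o in marked:
            continue
        marked.add(o)
        stack.extend(heap.get(o, []))
    return set(heap) - marked

# "c" is a local self-cycle with no root: reclaimed independently.
heap = {"a": ["b"], "b": [], "c": ["c"]}
assert local_collect(heap, {"a"}, set()) == {"c"}

# If another system still references "c", the entry table keeps it.
assert local_collect(heap, {"a"}, {"c"}) == set()
```

Treating every remote entry as a root is conservative: it is safe, but a cross-system cycle is never reclaimed by local collection alone, which is why the combined, recursively structured phase is needed for completeness.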