
    Concurrent Garbage Collection Using Program Slices on

    We investigate reference counting in the context of a multithreaded architecture by exploiting two observations: (1) reference counting can be performed by a transformed program slice of the mutator that isolates heap references, and (2) hardware trends indicate that microprocessors in the near future will be able to execute multiple concurrent threads on a single chip. We generate a reference-counting collector as a transformed program slice of an application and then execute this slice in parallel with the application as a “run-behind” thread. Preliminary measurements of collector overheads are quite encouraging, showing a 25% to 53% space overhead to transfer garbage collection to a separate thread.
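    The core idea, shipping reference-count maintenance to a separate thread that trails the mutator, can be illustrated with a minimal sketch. This is not the paper's slicing transformation; it only assumes that every heap-pointer store is instrumented to log an increment/decrement event, which a "run-behind" collector thread replays. All names (RefEvent, record, collector) are illustrative.

        // Minimal sketch: the mutator logs heap-reference updates, and a
        // separate run-behind thread replays them to maintain reference
        // counts, reporting objects whose count drops to zero.
        #include <atomic>
        #include <condition_variable>
        #include <cstdio>
        #include <mutex>
        #include <queue>
        #include <thread>
        #include <unordered_map>

        struct RefEvent { int obj; int delta; };   // +1 on pointer store, -1 on overwrite/clear

        std::queue<RefEvent> log_queue;            // mutator -> collector channel
        std::mutex log_mutex;
        std::condition_variable log_cv;
        std::atomic<bool> done{false};

        // Mutator side: each instrumented pointer store emits an event.
        void record(int obj, int delta) {
            { std::lock_guard<std::mutex> g(log_mutex); log_queue.push({obj, delta}); }
            log_cv.notify_one();
        }

        // Collector side: the "slice" that sees only reference traffic.
        void collector() {
            std::unordered_map<int, int> counts;
            for (;;) {
                std::unique_lock<std::mutex> g(log_mutex);
                log_cv.wait(g, [] { return !log_queue.empty() || done.load(); });
                if (log_queue.empty() && done.load()) break;
                RefEvent e = log_queue.front(); log_queue.pop();
                g.unlock();
                int c = (counts[e.obj] += e.delta);
                if (c == 0) std::printf("object %d is garbage\n", e.obj);   // reclaim here
            }
        }

        int main() {
            std::thread gc(collector);
            record(1, +1);   // x = new Obj (id 1)
            record(2, +1);   // y = new Obj (id 2)
            record(1, -1);   // x = null -> object 1 becomes garbage
            done = true; log_cv.notify_one();
            gc.join();
        }

    The queue is the space cost the abstract measures: deferring the counting work to another thread requires buffering the mutator's reference traffic.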

    Real Time Behavior Of Multithreaded Processors

    We find that the processor's response greatly depends on the cache configuration and main memory throughput. For a simple cache design, conflict misses reduce real-time (RT) response below 55%. The only way to guarantee RT performance in multithreaded processors is to increase memory bandwidth by pipelining the memory system, so that misses are serviced faster and near 100% performance can be achieved.
    Introduction. Multithreading has been proposed as a technique for tolerating latency in computer systems. In uniprocessor systems, multithreading has been proposed by Hirata [4], Gupta [13], Eggers [11] and others to tolerate the latency caused by a cache miss. Multithreading has also been studied both in multiprocessor systems such as the APRIL [5], the Tera Computer [6], and the HEP [14], as well as in dataflow machines such as the *T [3], the Monsoon [15], and others, to tolerate the latency caused by long memory accesses through interconnection networks. It will not be long before multithreaded p..
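    A back-of-the-envelope model (assumed here, not taken from the paper) shows why pipelined memory matters: even with enough threads to cover the miss latency, a memory system that services only one miss at a time caps how much latency can actually be hidden. The parameters (R, L, B) and the example numbers are illustrative.

        // Utilization estimate for a core that switches threads on a cache miss.
        // Each thread runs R useful cycles between misses, a miss takes L cycles,
        // and the memory system can have B misses in flight (B = 1 models a
        // simple non-pipelined memory, B > 1 a pipelined one).
        #include <algorithm>
        #include <cstdio>

        double utilization(int n_threads, double R, double L, int B) {
            double latency_bound = n_threads * R / (R + L);  // work available to overlap with a miss
            double memory_bound  = B * R / L;                // memory retires at most B misses per L cycles,
                                                             // each preceded by R useful cycles
            return std::min({1.0, latency_bound, memory_bound});
        }

        int main() {
            // Illustrative numbers: 20 useful cycles between misses, 100-cycle miss
            // latency, 8 hardware threads.
            std::printf("non-pipelined memory: %.0f%%\n", 100 * utilization(8, 20, 100, 1));
            std::printf("pipelined memory:     %.0f%%\n", 100 * utilization(8, 20, 100, 6));
        }

    With these numbers the non-pipelined memory caps utilization at 20% no matter how many threads are added, while the pipelined memory lets the eight threads reach 100%, matching the abstract's conclusion that bandwidth, not thread count alone, governs RT performance.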