
    Modelling Garbage Collection Algorithms --- Extended abstract

    We show how abstract requirements of garbage collection can be captured using temporal logic. The temporal logic specification can then be used as a basis for process algebra specifications which can involve varying amounts of parallelism. We present two simple CCS specifications as examples, followed by a more complex specification of the cyclic reference counting algorithm. The verification of such algorithms is then briefly discussed.
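    Cyclic reference counting is commonly realised as a trial-deletion (local mark-scan) scheme. The Python sketch below illustrates only that general idea; the object layout and names are assumptions made here, and it is not the paper's temporal logic or CCS specification.

        # Minimal trial-deletion ("local mark-scan") sketch of cyclic reference
        # counting. The object graph and names are illustrative assumptions.

        class Obj:
            def __init__(self, name):
                self.name = name
                self.rc = 0          # reference count
                self.children = []   # outgoing pointers
                self.colour = "black"

        def add_ref(src, dst):
            src.children.append(dst)
            dst.rc += 1

        def mark_grey(o):
            # Trial deletion: subtract the counts contributed by internal pointers.
            if o.colour != "grey":
                o.colour = "grey"
                for c in o.children:
                    c.rc -= 1
                    mark_grey(c)

        def scan(o):
            # Anything whose count is still positive is externally reachable:
            # restore counts below it; everything else is coloured white (garbage).
            if o.colour == "grey":
                if o.rc > 0:
                    scan_black(o)
                else:
                    o.colour = "white"
                    for c in o.children:
                        scan(c)

        def scan_black(o):
            o.colour = "black"
            for c in o.children:
                c.rc += 1
                if c.colour != "black":
                    scan_black(c)

        def collect_white(o, freed):
            if o.colour == "white":
                o.colour = "black"
                freed.append(o.name)
                for c in o.children:
                    collect_white(c, freed)

        # Usage: a two-object cycle that plain reference counting cannot reclaim.
        a, b = Obj("a"), Obj("b")
        add_ref(a, b); add_ref(b, a)      # cycle, no external references
        mark_grey(a); scan(a)
        freed = []; collect_white(a, freed)
        print(freed)                      # ['a', 'b']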

    Incremental garbage collection in massive object stores

    © 2001 IEEE. There are only a few garbage collection algorithms that have been designed to operate over massive object stores. These algorithms operate at two levels, locally via incremental collection of small partitions and globally via detection of cross-partition garbage, including cyclic garbage. At each level there is a choice of collection mechanism. For example, the PMOS collector employs tracing at the local level and reference counting at the global level. Another approach, implemented in the Thor object database, uses tracing at both levels. In this paper we present two new algorithms that both employ reference counting at the local level. One algorithm uses reference counting at the higher level and the other uses tracing at the higher level. An evaluation strategy is presented to support comparisons between these four algorithms and preliminary experiments are outlined.
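    As a rough illustration of the two-level split described above, the sketch below keeps local reference counts inside a partition and records incoming cross-partition references in a separate table that a global phase would maintain. The class and field names are assumptions made for this sketch; it is not the PMOS or Thor design.

        # Illustrative two-level layout: each partition keeps local reference
        # counts, while cross-partition references are recorded in a separate
        # "incoming" table maintained by a global phase. Names and structure
        # are assumptions, not the PMOS or Thor data structures.

        class Partition:
            def __init__(self, pid):
                self.pid = pid
                self.local_rc = {}    # object id -> count of same-partition refs
                self.incoming = {}    # object id -> set of (remote pid, object id)

            def new_obj(self, oid):
                self.local_rc[oid] = 0
                self.incoming[oid] = set()

            def add_local_ref(self, src, dst):
                self.local_rc[dst] += 1

            def add_remote_ref(self, remote, dst):
                # Recorded by the global level when another partition points here.
                self.incoming[dst].add(remote)

            def local_collect(self):
                # Incremental, partition-local step: reclaim objects that have
                # neither local nor recorded cross-partition references.
                dead = [o for o, count in self.local_rc.items()
                        if count == 0 and not self.incoming[o]]
                for o in dead:
                    del self.local_rc[o]
                    del self.incoming[o]
                return dead

        p = Partition(0)
        p.new_obj("x"); p.new_obj("y")
        p.add_local_ref("x", "y")          # y is kept alive locally
        p.add_remote_ref((1, "z"), "x")    # x is kept alive by partition 1
        p.new_obj("w")                     # unreferenced
        print(p.local_collect())           # ['w']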

    Practical distributed garbage collection for networks with asynchronous clocks and message delay

    Distributed garbage collection over a message-passing network is discussed in this paper. Traditionally, this can be done by reference counting, which is fast but cannot reclaim cyclic structures, or by graph traversal, e.g. mark-and-sweep or time stamping, which can reclaim cyclic structures but is slow. We propose a combined scheme which is fast in reclaiming acyclic garbage and is guaranteed to reclaim cyclic garbage. Our scheme relies neither on synchronized clocks nor on zero message delay and is thus practical.
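    The division of labour in such a combined scheme can be shown with a toy, single-process sketch: reference counting reclaims acyclic garbage as soon as a count drops to zero, while an occasional traversal from the roots guarantees that cycles are eventually reclaimed. This is only an illustration under those assumptions; the paper's protocol is distributed and also copes with message delay and unsynchronized clocks.

        # Toy sketch of the combined idea: reference counting is the fast path
        # for acyclic garbage, and a periodic mark-and-sweep pass is the slow
        # path that reclaims cycles. Single-process illustration only.

        heap = {}          # name -> set of outgoing references
        rc = {}            # name -> reference count
        roots = set()

        def new(name, root=False):
            heap[name] = set()
            rc[name] = 1 if root else 0
            if root:
                roots.add(name)

        def add_ref(src, dst):
            heap[src].add(dst); rc[dst] += 1

        def del_ref(src, dst):
            heap[src].discard(dst); rc[dst] -= 1
            if rc[dst] == 0:
                free(dst)                      # fast path: acyclic garbage

        def free(name):
            children = heap.pop(name)
            del rc[name]
            for child in children:
                if child in rc:
                    rc[child] -= 1
                    if rc[child] == 0:
                        free(child)

        def mark_sweep():
            # Slow path: a traversal from the roots reclaims cyclic garbage
            # whose counts never drop to zero.
            marked, stack = set(), list(roots)
            while stack:
                o = stack.pop()
                if o not in marked:
                    marked.add(o)
                    stack.extend(heap[o])
            for o in [o for o in heap if o not in marked]:
                del heap[o]; del rc[o]

        new("root", root=True); new("c")
        add_ref("root", "c"); del_ref("root", "c")    # acyclic: freed immediately
        new("a"); new("b")
        add_ref("root", "a"); add_ref("a", "b"); add_ref("b", "a")   # cycle a <-> b
        del_ref("root", "a")       # counting alone leaves the cycle in place
        mark_sweep()
        print(sorted(heap))        # ['root']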

    The effect of the number of response cycles on the behaviour of reinforced concrete elements subject to cyclic loading

    The development of damage in reinforced concrete (RC) structures is a cumulative process. Some damage indices used to quantify damage take the number of response cycles as an Engineering Demand Parameter (EDP) related to damage development. Other indices make use of deformation in terms of displacement or chord rotation. These relations generally depend on whether the response is monotonic or cyclic, but are insensitive to the number of major deflection cycles leading to that state of damage. Many such relations are derived from experimental data from low-cycle fatigue tests performed on RC elements. The loading in such tests generally consists of either a monotonic increase in load or a gradually increasing cyclic load. Since damage development is a cumulative process, and hence depends on the load history, the loading pattern in low-cycle fatigue tests used for assessment should reflect the demands of an earthquake. This paper discusses a procedure to determine a loading history for cyclic tests based on earthquake demands. The preliminary results of a campaign of low-cycle fatigue tests on RC elements investigating the effect of different load histories are also discussed.

    A Cyclic Distributed Garbage Collector for Network Objects

    This paper presents an algorithm for distributed garbage collection and outlines its implementation within the Network Objects system. The algorithm is based on a reference listing scheme, augmented by partial tracing in order to collect distributed garbage cycles. Processes may be dynamically organised into groups, according to appropriate heuristics, to reclaim distributed garbage cycles. The algorithm places no overhead on local collectors and suspends local mutators only briefly. Partial tracing of the distributed graph involves only objects thought to be part of a garbage cycle: no collaboration with other processes is required. The algorithm offers considerable flexibility, allowing expediency and fault tolerance to be traded against completeness.
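    Reference listing replaces a bare reference count with an explicit list of the client processes holding references, which makes the bookkeeping tolerant of duplicated messages. The sketch below shows only that bookkeeping; the process names are invented for illustration, and the group-based partial tracing of garbage cycles is not modelled.

        # Reference listing sketch: instead of a counter, each exported object
        # records which client processes hold a reference to it.

        class Exported:
            def __init__(self, name):
                self.name = name
                self.clients = set()      # processes known to hold a reference

            def register(self, client):
                # Idempotent: a duplicated "add" message does no harm,
                # unlike incrementing a counter twice.
                self.clients.add(client)

            def unregister(self, client):
                self.clients.discard(client)

            def is_garbage_candidate(self):
                return not self.clients

        o = Exported("obj1")
        o.register("siteA"); o.register("siteA")   # duplicate message is harmless
        o.register("siteB")
        o.unregister("siteA"); o.unregister("siteB")
        print(o.is_garbage_candidate())            # True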

    System Description for a Scalable, Fault-Tolerant, Distributed Garbage Collector

    We describe an efficient and fault-tolerant algorithm for distributed cyclic garbage collection. The algorithm imposes few requirements on the local machines and allows flexibility in the choice of local collector and distributed acyclic garbage collector used with it. We have emphasized reducing the number and size of network messages without sacrificing the promptness of collection throughout the algorithm. Our proposed collector is a variant of back tracing that avoids extensive synchronization between machines. We have added an explicit forward tracing stage to the standard back tracing stage and designed a tuned heuristic to reduce the total amount of work done by the collector. Of particular note is the development of fault-tolerant cooperation between traces and a heuristic that aggressively reduces the set of suspect objects. Comment: 47 pages, LaTeX
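    Back tracing starts from an object suspected of being cyclic garbage and follows references backwards, towards referrers; if no root is reached, everything visited is garbage. The minimal single-machine sketch below illustrates only that test; the graph representation is an assumption, and the fault tolerance and forward-tracing stage described above are not modelled.

        # Minimal single-machine sketch of back tracing: from a suspect object,
        # follow references *backwards*; if no root is reachable that way, the
        # suspect (and everything that can reach it) is garbage.

        def back_trace(referrers, roots, suspect):
            """referrers maps an object to the set of objects that point to it."""
            seen, stack = set(), [suspect]
            while stack:
                o = stack.pop()
                if o in roots:
                    return False, seen           # live: a root reaches the suspect
                if o not in seen:
                    seen.add(o)
                    stack.extend(referrers.get(o, ()))
            return True, seen                    # garbage: no root among referrers

        # A two-object cycle with no path from any root:
        referrers = {"a": {"b"}, "b": {"a"}}
        print(back_trace(referrers, roots={"r"}, suspect="a"))   # (True, {'a', 'b'})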

    Comparator with noise suppression

    An apparatus is disclosed for generating a single pulse the first time, and only the first time, that a noisy cyclic signal exceeds a reference level during a half-cycle. For the positive half of a cycle of the noisy cyclic signal, a comparator and a multivibrator produce a fixed voltage output when the noisy cyclic signal first exceeds the reference level. A multivibrator stops the production of the fixed voltage output when the noisy cyclic signal next passes the zero voltage level in the negative direction. Consequently, a single pulse is generated indicating that the signal exceeded the reference level during that half-cycle. The comparator and multivibrator produce pulses whenever the noisy cyclic signal exceeds the reference level during the negative half-cycle.
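    The positive-half-cycle behaviour can be sketched as a simple software model: a latch fires one pulse the first time the signal exceeds the reference level and re-arms only at the next negative-going zero crossing. The sample values and threshold below are invented for illustration; this models behaviour only, not the comparator and multivibrator circuit.

        # Behavioural model of the single-pulse logic: emit one pulse the first
        # time the noisy signal exceeds the reference level in a positive
        # half-cycle, ignore further threshold crossings, and re-arm when the
        # signal next crosses zero going negative. Sample values are invented.

        def single_pulse(samples, reference):
            armed, prev, pulses = True, 0.0, []
            for v in samples:
                if armed and v > reference:
                    pulses.append(1)          # the one pulse for this half-cycle
                    armed = False
                else:
                    pulses.append(0)
                if prev >= 0.0 > v:           # negative-going zero crossing
                    armed = True              # re-arm for the next half-cycle
                prev = v
            return pulses

        # Noisy positive half-cycle that crosses the 0.5 reference three times,
        # then a negative half-cycle: only one pulse is produced.
        sig = [0.2, 0.6, 0.4, 0.7, 0.3, 0.8, -0.1, -0.5]
        print(single_pulse(sig, reference=0.5))   # [0, 1, 0, 0, 0, 0, 0, 0]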

    Synchronous Counting and Computational Algorithm Design

    Consider a complete communication network on $n$ nodes, each of which is a state machine. In synchronous 2-counting, the nodes receive a common clock pulse and they have to agree on which pulses are "odd" and which are "even". We require that the solution is self-stabilising (reaching the correct operation from any initial state) and it tolerates $f$ Byzantine failures (nodes that send arbitrary misinformation). Prior algorithms are expensive to implement in hardware: they require a source of random bits or a large number of states. This work consists of two parts. In the first part, we use computational techniques (often known as synthesis) to construct very compact deterministic algorithms for the first non-trivial case of $f = 1$. While no algorithm exists for $n < 4$, we show that as few as 3 states per node are sufficient for all values $n \ge 4$. Moreover, the problem cannot be solved with only 2 states per node for $n = 4$, but there is a 2-state solution for all values $n \ge 6$. In the second part, we develop and compare two different approaches for synthesising synchronous counting algorithms. Both approaches are based on casting the synthesis problem as a propositional satisfiability (SAT) problem and employing modern SAT-solvers. The difference lies in how to solve the SAT problem: either in a direct fashion, or incrementally within a counter-example guided abstraction refinement loop. Empirical results suggest that the former technique is more efficient if we want to synthesise time-optimal algorithms, while the latter technique discovers non-optimal algorithms more quickly. Comment: 35 pages, extended and revised version
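    For intuition about the problem statement only, the fault-free case ($f = 0$) is easy: if every node adopts the complement of the majority bit it observes, all nodes agree after one pulse and the common bit alternates thereafter. The brute-force check below verifies this for small $n$; the rule and the check are assumptions made here for illustration and say nothing about the Byzantine ($f = 1$) algorithms synthesised in the paper.

        # Brute-force sanity check for the fault-free case (f = 0): with the rule
        # "next state = complement of the majority bit observed", every initial
        # configuration reaches a state where all nodes agree and the common bit
        # alternates on every pulse.

        from itertools import product

        def step(config):
            n = len(config)
            majority = 1 if sum(config) * 2 > n else 0
            return tuple(1 - majority for _ in config)   # same observation everywhere

        def stabilises(n, max_rounds=10):
            for init in product((0, 1), repeat=n):
                config = init
                for _ in range(max_rounds):
                    config = step(config)
                # After stabilisation: all nodes agree and the bit flips each round.
                nxt = step(config)
                if len(set(config)) != 1 or len(set(nxt)) != 1 or config[0] == nxt[0]:
                    return False
            return True

        print(all(stabilises(n) for n in range(2, 7)))   # True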

    Energy conservation, counting statistics, and return to equilibrium

    We study a microscopic Hamiltonian model describing an N-level quantum system S coupled to an infinitely extended thermal reservoir R. Initially, the system S is in an arbitrary state while the reservoir is in thermal equilibrium at temperature T. Assuming that the coupled system S+R is mixing with respect to the joint thermal equilibrium state, we study the Full Counting Statistics (FCS) of the energy transfers S->R and R->S in the process of return to equilibrium. The first FCS describes the increase of the energy of the system S. It is an atomic probability measure, denoted $P_{S,\lambda,t}$, concentrated on the set of energy differences $\sigma(H_S)-\sigma(H_S)$ ($\sigma(H_S)$ is the spectrum of the Hamiltonian of S, $t$ is the length of the time interval during which the measurement of the energy transfer is performed, and $\lambda$ is the strength of the interaction between S and R). The second FCS, $P_{R,\lambda,t}$, describes the decrease of the energy of the reservoir R and is typically a continuous probability measure whose support is the whole real line. We study the large time limit $t\rightarrow\infty$ of these two measures followed by the weak coupling limit $\lambda\rightarrow 0$ and prove that the limiting measures coincide. This result strengthens the first law of thermodynamics for open quantum systems. The proofs are based on modular theory of operator algebras and on a representation of $P_{R,\lambda,t}$ by quantum transfer operators.
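    For orientation, the FCS of the energy gained by S is the statistics of a two-time energy measurement: measure $H_S$ at time $0$ and again at time $t$ and record the difference. A hedged sketch of that standard definition, written in the notation of the abstract, follows; the symbols $\rho$ (initial state) and $H_\lambda$ (coupled Hamiltonian) are names assumed here for the sketch.

        % Two-time measurement form of the FCS of the energy gained by S.
        % 1_e(H_S) denotes the spectral projection of H_S onto the eigenvalue e;
        % rho and H_lambda are assumed names for the initial state and the
        % coupled Hamiltonian.
        \[
          P_{S,\lambda,t}(\varepsilon)
          = \sum_{\substack{e, e' \in \sigma(H_S) \\ e' - e = \varepsilon}}
            \operatorname{Tr}\!\left(
              \mathbf{1}_{e'}(H_S)\, \mathrm{e}^{-\mathrm{i} t H_\lambda}\,
              \mathbf{1}_{e}(H_S)\, \rho\, \mathbf{1}_{e}(H_S)\,
              \mathrm{e}^{\mathrm{i} t H_\lambda}\, \mathbf{1}_{e'}(H_S)
            \right),
          \qquad \varepsilon \in \sigma(H_S) - \sigma(H_S).
        \]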

    Counting Co-Cyclic Lattices

    There is a well-known asymptotic formula, due to W. M. Schmidt (1968), for the number of full-rank integer lattices of index at most $V$ in $\mathbb{Z}^n$. This set of lattices $L$ can naturally be partitioned with respect to the factor group $\mathbb{Z}^n/L$. Accordingly, we count the number of full-rank integer lattices $L \subseteq \mathbb{Z}^n$ such that $\mathbb{Z}^n/L$ is cyclic and of order at most $V$, and deduce that these co-cyclic lattices are dominant among all integer lattices: their natural density is $\left(\zeta(6) \prod_{k=4}^n \zeta(k)\right)^{-1} \approx 85\%$. The problem is motivated by complexity theory, namely worst-case to average-case reductions for lattice problems.
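    The quoted density is easy to check numerically. The short sketch below approximates the zeta values by truncated sums and confirms the roughly 85% figure for large $n$; the truncation parameters are arbitrary choices made for this check.

        # Numerical sanity check of the stated natural density
        # (zeta(6) * prod_{k=4}^n zeta(k))^(-1) for large n.
        # Zeta values are approximated by truncated sums.

        def zeta(s, terms=100000):
            return sum(m ** -s for m in range(1, terms + 1))

        n = 60                              # zeta(k) -> 1 very fast, so 60 is ample
        density_inv = zeta(6)
        for k in range(4, n + 1):
            density_inv *= zeta(k)
        print(1.0 / density_inv)            # about 0.847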