    Arranging program statements for locality on the basis of neighbourhood preferences

    The gradual property of computer programs that their successive operations preferably access data from the same memory block is called locality. The paper deals with locality optimization, more specifically with the sequencing aspect: N operations are to be brought into an order such that locality is maximized. We assume we are given a matrix D = [D_ij] of neighbourhood preferences, where entry D_ij is smaller the higher the expected gain in locality from arranging operations o_i and o_j close together. The gain is assumed to have been estimated from the accumulated but still incomplete knowledge of an overall locality optimization process. Our task is to find a sequencing function T : {o_1 … o_N} → [1 … N] ⊆ ℝ that assigns to each operation a real-valued time at which it will approximately be carried out. The motivation for T mapping into the reals instead of the integers is to carry more knowledge about the certainty of operation ordering decisions into the next step of the overall locality optimization process. The goal for T is to minimize an objective function that was empirically designed to approximately quantify the intuitive notion of the degree of locality. In addition, T has to spread the values T(o_i) fairly evenly over the interval [1 … N]. We suggest a heuristic algorithm that approximately solves the problem and report on experiments with the algorithm and several of its variants. Briefly, the algorithm starts from a random sequencing that is iteratively improved by alternately moving each T(o_i) in the direction of the value that minimizes the objective function for fixed T(o_j), j ≠ i, and spreading the T(o_i) over [1 … N]. Experimental results indicate that the algorithm is efficient and reasonably accurate.
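    The alternating scheme described at the end of the abstract can be illustrated with a short piece of code. The sketch below is illustrative only: the paper's empirically designed objective is not reproduced here, so a simple stand-in, a weighted sum of squared position differences with weights w_ij = 1/D_ij, is assumed; the move step uses a fixed step size, the spreading step is a rank-based rescaling onto [1 … N], and the spreading is done before the move in each sweep so that the returned positions stay real-valued. All function and variable names are hypothetical.

```python
import numpy as np

def sequence_by_preferences(D, sweeps=100, step=0.5, seed=0):
    """Illustrative sketch of the alternating heuristic (with an assumed
    stand-in objective, not the paper's own). D[i, j] is the neighbourhood
    preference: smaller means a higher expected locality gain from placing
    operations i and j close together."""
    rng = np.random.default_rng(seed)
    N = D.shape[0]

    # Assumed stand-in weights: a strong preference (small D) gives a large
    # pull between two operations in the quadratic objective
    # sum_{i<j} W[i, j] * (T[i] - T[j])**2.
    W = 1.0 / (np.asarray(D, dtype=float) + 1e-9)
    np.fill_diagonal(W, 0.0)

    # Start from a random sequencing over [1 ... N].
    T = rng.permutation(N).astype(float) + 1.0

    for _ in range(sweeps):
        # Spreading step: replace each T[i] by its rank so the values stay
        # roughly evenly distributed over [1 ... N].
        T = np.argsort(np.argsort(T)).astype(float) + 1.0
        # Move step: for fixed T[j] (j != i) the quadratic stand-in objective
        # is minimised by the weighted mean of the neighbours' positions;
        # move T[i] part of the way towards that minimiser.
        for i in range(N):
            target = np.dot(W[i], T) / W[i].sum()
            T[i] = (1.0 - step) * T[i] + step * target

    return T  # real-valued positions; nearby values signal uncertain ordering
```

    In this sketch, positions that remain close to each other after the final sweep correspond to ordering decisions that are still uncertain, which is what the real-valued codomain of T is meant to expose to the next stage of the overall optimization.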

    Distributed Concurrent Persistent Languages: An Experimental Design and Implementation

    A universal persistent object store is a logical space of persistent objects whose localities span machines reachable over networks. It provides a conceptual framework in which, on one hand, the distribution of data is transparent to application programmers and, on the other, the store semantics of conventional languages is preserved. This means that the manipulation of persistent objects on remote machines is both syntactically and semantically the same as for local data. Consequently, many aspects of distributed programming in which computation tasks cooperate across different processors and different stores can be addressed within the confines of persistent programming. The work reported in this thesis is a logical generalization of the notion of persistence to the context of distribution. The concept of a universal persistent store is founded upon a universal addressing mechanism which augments existing addressing mechanisms. The universal addressing mechanism is realized by means of remote pointers which, although they contain more locality information than ordinary pointers, do not require architectural changes. Moreover, these remote pointers are transparent to programmers. A language, Distributed PS-algol, is designed to experiment with this idea. The novel features of the language include: lightweight processes with a flavour of distribution, mutexes as the store-based synchronization primitive, and a remote procedure call mechanism as the message-based interprocess communication mechanism. Furthermore, the advantages of shared-store programming and network architecture are obtained by introducing the programming concept of locality in an unobtrusive manner. A characteristic of the underlying addressing mechanism is that data are never copied to satisfy remote demands except where efficiency can be gained without compromising the semantics of the data. A remote store operation model is described to effect remote updates. It is argued that this choice is the most natural one, given that remote store operations resemble remote procedure calls.
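    As a rough, purely conceptual illustration of the remote-pointer idea (a Python sketch, not Distributed PS-algol and not the thesis' actual mechanism), the fragment below shows a pointer that carries explicit locality information, the owning node plus a local address, so that reads and remote store operations on non-local data go through an RPC-style call while local data are handled directly. All class and method names (Node, RemotePointer, rpc_call) are hypothetical.

```python
# Conceptual sketch of a universal addressing mechanism built on remote
# pointers; all names here are hypothetical, not Distributed PS-algol's API.

class Node:
    """One machine's local persistent store, addressed by integer slots."""
    def __init__(self, node_id):
        self.node_id = node_id
        self.store = {}          # local address -> persistent object

    def rpc_call(self, op, address, value=None):
        # Stand-in for a real remote-procedure-call transport: a remote
        # store operation is shipped to the owning node and applied there.
        if op == "read":
            return self.store[address]
        if op == "write":
            self.store[address] = value

class RemotePointer:
    """A pointer carrying locality (the owning node) plus a local address,
    so remote and local data are manipulated through the same interface."""
    def __init__(self, node, address):
        self.node = node
        self.address = address

    def get(self, local_node):
        if self.node is local_node:          # local case: plain lookup
            return local_node.store[self.address]
        return self.node.rpc_call("read", self.address)   # remote read

    def set(self, local_node, value):
        if self.node is local_node:
            local_node.store[self.address] = value
        else:
            # Remote store operation: the update is applied at the owner,
            # so no copy of the object is taken to satisfy the demand.
            self.node.rpc_call("write", self.address, value)

# Usage: two nodes sharing one logical store of persistent objects.
a, b = Node(0), Node(1)
b.store[7] = "persistent object"
p = RemotePointer(b, 7)          # pointer held on node a, data lives on b
print(p.get(a))                  # transparent remote read
p.set(a, "updated remotely")     # remote store operation, applied at b
```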

    Some Clinical Approaches in Penology

    Data Mining Based Hybridization of Meta-RaPS

    Though metaheuristics have frequently been employed to improve the performance of data mining algorithms, the reverse is far less common. This paper discusses the process of employing a data mining algorithm to improve the performance of a metaheuristic. The algorithms to be hybridized are the Meta-heuristic for Randomized Priority Search (Meta-RaPS) and an algorithm used to create an inductive decision tree. The hybridization uses the decision tree to perform on-line tuning of the parameters of Meta-RaPS. The process makes use of the information collected during the iterative construction and improvement phases Meta-RaPS performs: the data mining algorithm finds a favourable parameter setting from the knowledge gained in previous Meta-RaPS iterations, and this knowledge is then used in subsequent iterations. The proposed concept is applied to benchmark instances of the Vehicle Routing Problem.
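    One plausible shape of such a hybridization is sketched below; this is an assumed reconstruction for illustration, not the authors' implementation. A hypothetical meta_raps_iteration function stands in for one Meta-RaPS construction-and-improvement pass, the two tuned parameters (priority and restriction percentages) are stand-ins, and scikit-learn's DecisionTreeRegressor plays the role of the inductive decision tree, trained on-line on the (parameters, solution cost) pairs logged from earlier iterations.

```python
import random
from sklearn.tree import DecisionTreeRegressor

def meta_raps_iteration(priority_pct, restriction_pct):
    # Hypothetical stand-in for one Meta-RaPS construction + improvement
    # pass on a VRP instance; a synthetic noisy cost surface is used here
    # only so the sketch runs on its own.
    return ((priority_pct - 0.3) ** 2 + (restriction_pct - 0.7) ** 2
            + random.gauss(0, 0.01))

history = []          # logged (priority_pct, restriction_pct, cost) triples
best = None

for it in range(200):
    if it < 50:
        # Early iterations: sample parameter settings at random to gather data.
        p, r = random.random(), random.random()
    else:
        # On-line tuning: fit a decision tree to the logged iterations and
        # pick the candidate setting with the lowest predicted cost.
        X = [(h[0], h[1]) for h in history]
        y = [h[2] for h in history]
        tree = DecisionTreeRegressor(max_depth=5).fit(X, y)
        candidates = [(random.random(), random.random()) for _ in range(64)]
        p, r = min(candidates, key=lambda c: tree.predict([c])[0])

    cost = meta_raps_iteration(p, r)
    history.append((p, r, cost))
    if best is None or cost < best[2]:
        best = (p, r, cost)

print("best parameter setting found:", best)
```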