24 research outputs found

    Automata-theoretic protocol programming: parallel computation, threads and their interaction, optimized compilation, [at a] high level of abstraction

    In the early 2000s, hardware manufacturers shifted their attention from manufacturing faster, yet purely sequential, unicore processors to manufacturing slower, yet increasingly parallel, multicore processors. In the wake of this shift, parallel programming became essential for writing scalable programs on general hardware. Conceptually, every parallel program consists of workers, which implement primary units of sequential computation, and protocols, which implement the rules of interaction that workers must abide by. As programmers have been writing sequential code for decades, programming workers poses no new fundamental challenges. What is new, and notoriously difficult, is programming of protocols. In this thesis, I study an approach to protocol programming where programmers implement their workers in an existing general-purpose language (GPL), while they implement their protocols in a complementary domain-specific language (DSL). DSLs for protocols enable programmers to express interaction among workers at a higher level of abstraction than the level of abstraction supported by today's GPLs, thereby addressing a number of protocol programming issues with today's GPLs. In particular, in this thesis, I develop a DSL for protocols based on a theory of formal automata and their languages. The specific automata that I consider, called constraint automata, have transition labels with a richer structure than alphabet symbols in classical automata theory. Exactly these richer transition labels make constraint automata suitable for modeling protocols. Constraint automata constitute the (denotational) semantics of the DSL presented in this thesis. On top of this semantics, I use two complementary syntaxes: an existing graphical syntax (based on the coordination language Reo) and a novel textual syntax. The main contribution of this thesis, then, consists of a compiler and four of its optimizations, all formalized and proven correct at the semantic level of constraint automata, using bisimulation. In addition to these theoretical contributions, I also present an implementation of the compiler and its optimizations, which supports Java as the complementary GPL, as plugins for Eclipse. Nothing in the theory developed in this thesis depends on Java, though; any language that supports some form of threading and mutual exclusion may serve as a target for compilation. To demonstrate the practical feasibility of the GPL+DSL approach to protocol programming, I study the performance of the implemented compiler and its optimizations through a number of experiments, including the Java version of the NAS Parallel Benchmarks. The experimental results in these benchmarks show that, with all four optimizations in place, compiler-generated protocol code can compete with hand-crafted protocol code.
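
    To make the constraint-automaton idea concrete, the sketch below models a protocol as states and transitions whose labels pair a set of synchronizing ports with a data constraint over the values exchanged. This is a minimal illustration in Java (the GPL targeted by the thesis's compiler); the class and method names are hypothetical and do not come from the thesis's implementation.

        import java.util.*;
        import java.util.function.Predicate;

        // Minimal sketch of a constraint automaton (hypothetical API). A
        // transition fires on a set of ports that synchronize atomically,
        // guarded by a data constraint over the values observed there.
        final class ConstraintAutomaton {
            record Transition(String from, Set<String> ports,
                              Predicate<Map<String, Object>> guard, String to) {}

            private final List<Transition> transitions = new ArrayList<>();
            private String current;

            ConstraintAutomaton(String initial) { current = initial; }

            void add(Transition t) { transitions.add(t); }

            // One step: succeeds iff some transition from the current state
            // has exactly the firing ports and its data constraint holds.
            boolean step(Set<String> firing, Map<String, Object> data) {
                for (Transition t : transitions) {
                    if (t.from().equals(current) && t.ports().equals(firing)
                            && t.guard().test(data)) {
                        current = t.to();
                        return true;
                    }
                }
                return false;
            }

            public static void main(String[] args) {
                // Reo's Sync channel from port A to port B: A and B fire
                // together, and the datum at A must equal the datum at B.
                ConstraintAutomaton sync = new ConstraintAutomaton("q0");
                sync.add(new Transition("q0", Set.of("A", "B"),
                        d -> Objects.equals(d.get("A"), d.get("B")), "q0"));
                System.out.println(sync.step(Set.of("A", "B"), Map.of("A", 1, "B", 1))); // true
                System.out.println(sync.step(Set.of("A", "B"), Map.of("A", 1, "B", 2))); // false
            }
        }

    The transition label here couples a synchronization constraint (which ports fire) with a data constraint (what values they may carry), which is exactly the richer structure that plain alphabet symbols in classical automata lack.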

    Automata-theoretic protocol programming

    Parallel programming has become essential for writing scalable programs on general hardware. Conceptually, every parallel program consists of workers, which implement primary units of sequential computation, and protocols, which implement the rules of interaction that workers must abide by. As programmers have been writing sequential code for decades, programming workers poses no new fundamental challenges. What is new, and notoriously difficult, is programming of protocols. In this thesis, I study an approach to protocol programming where programmers implement their workers in an existing general-purpose language (GPL), while they implement their protocols in a complementary domain-specific language (DSL). DSLs for protocols enable programmers to express interaction among workers at a higher level of abstraction than the level of abstraction supported by today's GPLs, thereby addressing a number of protocol programming issues with today's GPLs. In particular, in this thesis, I develop a DSL for protocols based on a theory of formal automata and their languages. The specific automata that I consider, called constraint automata, have transition labels with a richer structure than alphabet symbols in classical automata theory. Exactly these richer transition labels make constraint automata suitable for modeling protocols.

    Automata-Theoretic Protocol Programming (With Proofs)

    In the early 2000s, hardware manufacturers shifted their attention from manufacturing faster, yet purely sequential, unicore processors to manufacturing slower, yet increasingly parallel, multicore processors. In the wake of this shift, parallel programming became essential for writing scalable programs on general hardware. Conceptually, every parallel program consists of workers, which implement primary units of sequential computation, and protocols, which implement the rules of interaction that workers must abide by. As programmers have been writing sequential code for decades, programming workers poses no new fundamental challenges. What is new, and notoriously difficult, is programming of protocols. In this thesis, I study an approach to protocol programming where programmers implement their workers in an existing general-purpose language (GPL), while they implement their protocols in a complementary domain-specific language (DSL). DSLs for protocols enable programmers to express interaction among workers at a higher level of abstraction than the level of abstraction supported by today's GPLs, thereby addressing a number of protocol programming issues with today's GPLs. In particular, in this thesis, I develop a DSL for protocols based on a theory of formal automata and their languages. The specific automata that I consider, called constraint automata, have transition labels with a richer structure than alphabet symbols in classical automata theory. Exactly these richer transition labels make constraint automata suitable for modeling protocols. Constraint automata constitute the (denotational) semantics of the DSL presented in this thesis.

    Learning automata and sigma imperialist competitive algorithm for optimization of single and multi-objective functions

    Evolutionary Algorithms (EAs) comprise several heuristics that solve optimisation tasks by imitating aspects of natural evolution. Two widely used EAs, Harmony Search (HS) and the Imperialist Competitive Algorithm (ICA), are considered for improving single-objective EAs and Multi-Objective EAs (MOEAs), respectively. HS is popular because of its speed, and ICA has the ability to escape local optima, an important criterion for a MOEA. However, both algorithms suffer from shortcomings. The HS algorithm can become trapped in local optima if its parameters are not tuned properly, which causes a low convergence rate and high computational time. In ICA, a major obstacle impedes it from becoming a MOEA: ICA cannot be combined with the crowding distance method, which produces qualitative values for MOEAs, whereas ICA needs quantitative values to determine the power of each solution. This research proposes a learnable EA, named Learning Automata Harmony Search (LAHS), which employs a learning automata (LA) based approach to make the HS parameters learnable. This research also proposes a new MOEA based on ICA and the Sigma method, named Sigma Imperialist Competitive Algorithm (SICA); the Sigma method provides a mechanism to measure solution power based on a quantitative value. The proposed LAHS and SICA algorithms are tested on well-known single-objective and multi-objective benchmarks, respectively. Both LAHS and SICA show improvements in convergence rate and computational time in comparison to well-known single-objective EAs and MOEAs.
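
    As a sketch of how a learning automaton can make a metaheuristic's parameters learnable, the Java fragment below implements the classic linear reward-inaction update over a small set of candidate parameter values, for instance values for HS's harmony memory considering rate (HMCR). The update scheme, constants, and class names are illustrative assumptions, not the thesis's exact LAHS scheme.

        import java.util.Arrays;
        import java.util.Random;

        // Linear reward-inaction learning automaton over a finite action set.
        final class LearningAutomaton {
            private final double[] probs;   // action probabilities, sum to 1
            private final double[] actions; // candidate parameter values
            private final double alpha;     // reward learning rate
            private final Random rng = new Random(42);

            LearningAutomaton(double[] actions, double alpha) {
                this.actions = actions;
                this.alpha = alpha;
                this.probs = new double[actions.length];
                Arrays.fill(probs, 1.0 / actions.length);
            }

            // Sample an action index according to the current probabilities.
            int select() {
                double u = rng.nextDouble(), acc = 0;
                for (int i = 0; i < probs.length; i++) {
                    acc += probs[i];
                    if (u <= acc) return i;
                }
                return probs.length - 1;
            }

            // On reward, shift probability mass toward the chosen action;
            // on penalty, leave the probabilities unchanged (inaction).
            void update(int chosen, boolean rewarded) {
                if (!rewarded) return;
                for (int i = 0; i < probs.length; i++) {
                    probs[i] = (i == chosen)
                            ? probs[i] + alpha * (1 - probs[i])
                            : probs[i] * (1 - alpha);
                }
            }

            double value(int i) { return actions[i]; }

            public static void main(String[] args) {
                // Candidate HMCR values (illustrative numbers).
                LearningAutomaton la = new LearningAutomaton(
                        new double[] {0.85, 0.90, 0.95}, 0.1);
                int a = la.select();
                la.update(a, true); // reward if the new harmony improved the memory
                System.out.println("chosen HMCR = " + la.value(a));
            }
        }

    The reward-inaction rule keeps the probabilities summing to one while gradually concentrating mass on parameter values that keep producing improvements.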

    Incorporating Memory and Learning Mechanisms Into Meta-RaPS

    Due to the rapid increase in the dimensions and complexity of real-life problems, it has become more difficult to find optimal solutions using only exact mathematical methods. The need to find near-optimal solutions in an acceptable amount of time is a challenge when developing more sophisticated approaches. A proper answer to this challenge can be the implementation of metaheuristic approaches. However, a more powerful answer might be reached by incorporating intelligence into metaheuristics. Meta-RaPS (Metaheuristic for Randomized Priority Search) is a metaheuristic that creates high-quality solutions for discrete optimization problems. It is proposed that incorporating memory and learning mechanisms into Meta-RaPS, which is currently classified as a memoryless metaheuristic, can help the algorithm produce higher-quality results. The proposed Meta-RaPS versions were created by taking different perspectives on learning. The first approach taken is Estimation of Distribution Algorithms (EDA), a stochastic learning technique that creates a probability distribution for each decision variable to generate new solutions. The second Meta-RaPS version was developed by utilizing a machine learning algorithm, Q-Learning, which has been successfully applied to optimization problems whose output is a sequence of actions. In the third Meta-RaPS version, Path Relinking (PR) was implemented as a post-optimization method in which the new algorithm learns good attributes by memorizing the best solutions and follows them to reach better solutions. The fourth proposed version of Meta-RaPS presented another form of learning with its ability to adaptively tune parameters. The efficiency of these approaches motivated us to redesign Meta-RaPS by removing the improvement phase and adding a more sophisticated Path Relinking method. The new Meta-RaPS could solve even the largest problems in much less time while maintaining the quality of its solutions. To evaluate their performance, all introduced versions were tested on the 0-1 Multidimensional Knapsack Problem (MKP). After comparing the proposed algorithms, Meta-RaPS PR and Meta-RaPS Q-Learning appeared to be the algorithms with the best and worst performance, respectively. Nevertheless, all versions showed superior performance to other approaches to the 0-1 MKP in the literature.
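
    For intuition about the randomized construction phase that these Meta-RaPS versions build on, the following Java sketch constructs a 0-1 knapsack solution by either taking the highest-priority item (with probability p) or picking uniformly among the best fraction of remaining candidates. The parameter names p and restriction are assumptions for illustration, not the exact Meta-RaPS interface, and the single-constraint knapsack stands in for the multidimensional case.

        import java.util.ArrayList;
        import java.util.Arrays;
        import java.util.Comparator;
        import java.util.List;
        import java.util.Random;

        final class MetaRapsConstruct {
            // Build one solution: with probability p take the best-priority
            // item, otherwise pick uniformly among the best `restriction`
            // fraction of remaining candidates; infeasible picks are skipped.
            static boolean[] construct(int[] value, int[] weight, int capacity,
                                       double p, double restriction, Random rng) {
                int n = value.length;
                boolean[] chosen = new boolean[n];
                int load = 0;
                List<Integer> candidates = new ArrayList<>();
                for (int i = 0; i < n; i++) candidates.add(i);
                // Priority rule: value per unit weight, best first.
                candidates.sort(Comparator.comparingDouble(
                        i -> -(double) value[i] / weight[i]));
                while (!candidates.isEmpty()) {
                    int pick;
                    if (rng.nextDouble() < p) {
                        pick = 0; // greedy choice
                    } else {
                        int window = Math.max(1, (int) (candidates.size() * restriction));
                        pick = rng.nextInt(window); // restricted random choice
                    }
                    int item = candidates.remove(pick);
                    if (load + weight[item] <= capacity) {
                        chosen[item] = true;
                        load += weight[item];
                    }
                }
                return chosen;
            }

            public static void main(String[] args) {
                int[] v = {60, 100, 120}, w = {10, 20, 30};
                System.out.println(Arrays.toString(
                        construct(v, w, 50, 0.7, 0.3, new Random(1))));
            }
        }

    The memory and learning mechanisms discussed above would then bias these construction choices, for example by replacing the static value/weight priority with learned probabilities or Q-values.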

    Ant colony meta-heuristics - Schemes and software framework

    Master's thesis (Master of Science).

    Exploiting Global Constraints for Search and Propagation

    This thesis focuses on Constraint Programming (CP), an emergent paradigm for solving complex combinatorial optimization problems. The main contributions revolve around constraint filtering and search, the two main components of CP: constraint filtering reduces the size of the search space, while search defines how that space is explored. Advances on these topics are crucial to broaden the applicability of CP to real-life problems. Regarding constraint filtering, the contribution is twofold. First, we propose an improvement on an existing algorithm for the relaxed version of a constraint that frequently appears in assignment problems (soft gcc); the proposed algorithm outperforms the previously known one in time complexity, both for the consistency check and for the filtering, and in ease of implementation. Second, we introduce a new constraint (in both hard and soft versions) and associated filtering algorithms for a recurrent substructure that occurs in assignment problems with heterogeneous resources (hierarchical gcc); we show promising results compared to an equivalent decomposition based on gcc. Regarding search, we introduce algorithms to count the number of solutions for two important families of constraints: occurrence counting constraints, such as alldifferent, symmetric alldifferent, and gcc, and sequencing constraints, such as regular. These algorithms are the building blocks of a new family of search heuristics, called constraint-centered counting-based heuristics. They extract information about the number of solutions the individual constraints admit, to guide search towards parts of the search space that are likely to contain a high number of solutions. Experimental results on eight different problems show impressive performance compared to other generic state-of-the-art heuristics. Finally, we experiment with an already known strong form of constraint filtering that is heuristically guided by the search (quick shaving). This technique gives mixed results when applied blindly to any problem. We introduce a simple yet very effective estimator to dynamically enable or disable quick shaving, with experimentally very promising results.
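
    To illustrate the counting-based idea on the simplest case, the Java sketch below counts the solutions of an alldifferent constraint by brute-force enumeration over bitset domains (equivalent to computing the permanent of the variable-value matrix, so it only scales to small instances) and derives the solution density of a candidate assignment, the kind of quantity a counting-based heuristic would branch on. This is an illustration of the principle under stated assumptions, not the thesis's counting algorithms.

        final class AlldifferentCounting {
            // Count solutions of alldifferent(x_var..x_{n-1}) with domains
            // given as bitsets; `used` holds values taken by earlier variables.
            static long count(long[] domains, int var, long used) {
                if (var == domains.length) return 1;
                long total = 0;
                for (long d = domains[var] & ~used; d != 0; d &= d - 1) {
                    long bit = Long.lowestOneBit(d);
                    total += count(domains, var + 1, used | bit);
                }
                return total;
            }

            public static void main(String[] args) {
                // Domains: x0 in {1,2}, x1 in {1,2}, x2 in {1,2,3} (bits 0..2).
                long[] domains = {0b011, 0b011, 0b111};
                long all = count(domains, 0, 0);
                System.out.println("solutions: " + all); // 2
                // Solution density of x2 = 3: the fraction of alldifferent
                // solutions in which x2 takes the value 3.
                long[] fixed = {0b011, 0b011, 0b100};
                System.out.println("density(x2=3) = " + (double) count(fixed, 0, 0) / all);
            }
        }

    A counting-based heuristic would prefer the assignment with the highest such density, steering search toward regions of the space where many solutions survive; the thesis's contribution is doing this counting efficiently for alldifferent, gcc, regular, and related constraints rather than by enumeration.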