24 research outputs found
Automata-theoretic protocol programming: parallel computation, threads and their interaction, optimized compilation, [at a] high level of abstraction
In the early 2000s, hardware manufacturers shifted their attention from manufacturing faster—yet purely sequential—unicore processors to manufacturing slower—yet increasingly parallel—multicore processors. In the wake of this shift, parallel programming became essential for writing scalable programs on general hardware. Conceptually, every parallel program consists of workers, which implement primary units of sequential computation, and protocols, which implement the rules of interaction that workers must abide by. As programmers have been writing sequential code for decades, programming workers poses no new fundamental challenges. What is new—and notoriously difficult—is programming of protocols.
In this thesis, I study an approach to protocol programming where programmers implement their workers in an existing general-purpose language (GPL), while they implement their protocols in a complementary domain-specific language (DSL). DSLs for protocols enable programmers to express interaction among workers at a higher level of abstraction than the level of abstraction supported by today's GPLs, thereby addressing a number of protocol programming issues with today's GPLs. In particular, in this thesis, I develop a DSL for protocols based on a theory of formal automata and their languages. The specific automata that I consider, called constraint automata, have transition labels with a richer structure than alphabet symbols in classical automata theory. Exactly these richer transition labels make constraint automata suitable for modeling protocols.
Constraint automata constitute the (denotational) semantics of the DSL presented in this thesis. On top of this semantics, I use two complementary syntaxes: an existing graphical syntax (based on the coordination language Reo) and a novel textual syntax. The main contribution of this thesis, then, consists of a compiler and four of its optimizations, all formalized and proven correct at the semantic level of constraint automata, using bisimulation. In addition to these theoretical contributions, I also present an implementation of the compiler and its optimizations, which supports Java as the complementary GPL, as plugins for Eclipse. Nothing in the theory developed in this thesis depends on Java, though; any language that supports some form of threading and mutual exclusion may serve as a target for compilation. To demonstrate the practical feasibility of the GPL+DSL approach to protocol programming, I study the performance of the implemented compiler and its optimizations through a number of experiments, including the Java version of the NAS Parallel Benchmarks. The experimental results in these benchmarks show that, with all four optimizations in place, compiler-generated protocol code can compete with hand-crafted protocol code.
Automata-theoretic protocol programming
Parallel programming has become essential for writing scalable programs on general hardware. Conceptually, every parallel program consists of workers, which implement primary units of sequential computation, and protocols, which implement the rules of interaction that workers must abide by. As programmers have been writing sequential code for decades, programming workers poses no new fundamental challenges. What is new---and notoriously difficult---is programming of protocols. In this thesis, I study an approach to protocol programming where programmers implement their workers in an existing general-purpose language (GPL), while they implement their protocols in a complementary domain-specific language (DSL). DSLs for protocols enable programmers to express interaction among workers at a higher level of abstraction than the level of abstraction supported by today's GPLs, thereby addressing a number of protocol programming issues with today's GPLs. In particular, in this thesis, I develop a DSL for protocols based on a theory of formal automata and their languages. The specific automata that I consider, called constraint automata, have transition labels with a richer structure than alphabet symbols in classical automata theory. Exactly these richer transition labels make constraint automata suitable for modeling protocols.
Automata-Theoretic Protocol Programming (With Proofs)
In the early 2000s, hardware manufacturers shifted their attention from manufacturing faster---yet purely sequential---unicore processors to manufacturing slower---yet increasingly parallel---multicore processors. In the wake of this shift, parallel programming became essential for writing scalable programs on general hardware. Conceptually, every parallel program consists of workers, which implement primary units of sequential computation, and protocols, which implement the rules of interaction that workers must abide by. As programmers have been writing sequential code for decades, programming workers poses no new fundamental challenges. What is new---and notoriously difficult---is programming of protocols.
In this thesis, I study an approach to protocol programming where programmers implement their workers in an existing general-purpose language (GPL), while they implement their protocols in a complementary domain-specific language (DSL). DSLs for protocols enable programmers to express interaction among workers at a higher level of abstraction than the level of abstraction supported by today's GPLs, thereby addressing a number of protocol programming issues with today's GPLs. In particular, in this thesis, I develop a DSL for protocols based on a theory of formal automata and their languages. The specific automata that I consider, called constraint automata, have transition labels with a richer structure than alphabet symbols in classical automata theory. Exactly these richer transition labels make constraint automata suitable for modeling protocols.
Constraint automata constitute the (denotational) semantics of the DSL presented in this thesis.
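The abstract does not show what a constraint automaton looks like, so here is a minimal Python sketch (all names and the encoding are hypothetical, not taken from the thesis). It illustrates the key idea that a transition label pairs a synchronization set of ports with a data constraint, rather than a single alphabet symbol, using a Sync channel between ports A and B as the protocol.

```python
# Minimal sketch of a constraint automaton (hypothetical encoding).
# A transition label pairs a synchronization set of ports with a data
# constraint, instead of a single alphabet symbol as in classical automata.

class ConstraintAutomaton:
    def __init__(self, states, initial, transitions):
        self.states = states
        self.initial = initial
        # transitions: list of (source, sync_set, constraint, target)
        self.transitions = transitions

    def step(self, state, assignment):
        """Fire one transition enabled by `assignment` (port -> datum)."""
        active = frozenset(assignment)
        for src, sync, constraint, tgt in self.transitions:
            if src == state and sync == active and constraint(assignment):
                return tgt
        return None  # no enabled transition: the interaction is blocked

# A Sync channel: ports 'A' and 'B' fire together, and the datum observed
# on A must equal the datum observed on B.
sync_channel = ConstraintAutomaton(
    states={'q0'},
    initial='q0',
    transitions=[('q0', frozenset({'A', 'B'}),
                  lambda d: d['A'] == d['B'], 'q0')],
)

print(sync_channel.step('q0', {'A': 1, 'B': 1}))  # 'q0' (allowed)
print(sync_channel.step('q0', {'A': 1, 'B': 2}))  # None (data differ)
```

Note how the data constraint, not just the set of active ports, decides whether the transition fires; this is the extra structure the abstract refers to.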
Learning automata and sigma imperialist competitive algorithm for optimization of single and multi-objective functions
Evolutionary Algorithms (EAs) comprise several heuristics that solve optimisation tasks by imitating aspects of natural evolution. Two widely used EAs, namely Harmony Search (HS) and the Imperialist Competitive Algorithm (ICA), are considered for improving single-objective EAs and Multi-Objective EAs (MOEAs), respectively. HS is popular because of its speed, and ICA has the ability to escape local optima, which is an important criterion for a MOEA. However, both algorithms suffer from shortcomings. The HS algorithm can become trapped in local optima if its parameters are not tuned properly; this shortcoming causes a low convergence rate and high computational time. In ICA, a big obstacle impedes ICA from becoming a MOEA: ICA cannot be matched with the crowding-distance method, which produces a qualitative value for MOEAs, while ICA needs a quantitative value to determine the power of each solution. This research proposes a learnable EA, named learning automata harmony search (LAHS). The EA employs a learning automata (LA) based approach to ensure that HS parameters are learnable. This research also proposes a new MOEA based on ICA and the Sigma method, named Sigma Imperialist Competitive Algorithm (SICA). The Sigma method provides a mechanism to measure a solution's power based on its quantitative value. The proposed LAHS and SICA algorithms are tested on well-known single-objective and multi-objective benchmarks, respectively. Both LAHS and SICA show improvements in convergence rate and computational time in comparison to well-known single-objective EAs and MOEAs.
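As an illustration of the plain Harmony Search baseline that LAHS builds on, here is a minimal sketch (parameter names HMCR and PAR follow common HS usage; nothing in this code is taken from the paper). The point is that hmcr and par are exactly the parameters the LAHS variant would tune with a learning automaton rather than fix by hand.

```python
import random

# Plain harmony search minimizing f over a box; hmcr and par are the
# parameters that a learning-automata variant would adapt online.
def harmony_search(f, dim, bounds, hms=10, hmcr=0.9, par=0.3,
                   bw=0.05, iters=2000, seed=0):
    rng = random.Random(seed)
    lo, hi = bounds
    memory = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
    for _ in range(iters):
        new = []
        for d in range(dim):
            if rng.random() < hmcr:                 # memory consideration
                x = rng.choice(memory)[d]
                if rng.random() < par:              # pitch adjustment
                    x += rng.uniform(-bw, bw)
            else:                                   # random consideration
                x = rng.uniform(lo, hi)
            new.append(min(hi, max(lo, x)))
        worst = max(range(hms), key=lambda i: f(memory[i]))
        if f(new) < f(memory[worst]):               # replace worst harmony
            memory[worst] = new
    return min(memory, key=f)

sphere = lambda v: sum(x * x for x in v)
best = harmony_search(sphere, dim=3, bounds=(-5.0, 5.0))
print(sphere(best))  # a small value near 0
```

A poorly chosen hmcr or par slows this loop down dramatically, which is the weakness the LA-based tuning in LAHS is meant to remove.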
Incorporating Memory and Learning Mechanisms Into Meta-RaPS
Due to the rapid increase in the dimensions and complexity of real-life problems, it has become more difficult to find optimal solutions using exact mathematical methods alone. The need to find near-optimal solutions in an acceptable amount of time is a challenge when developing more sophisticated approaches. One answer to this challenge is the implementation of metaheuristic approaches; a more powerful answer, however, may be reached by incorporating intelligence into metaheuristics.
Meta-RaPS (Metaheuristic for Randomized Priority Search) is a metaheuristic that creates high quality solutions for discrete optimization problems. It is proposed that incorporating memory and learning mechanisms into Meta-RaPS, which is currently classified as a memoryless metaheuristic, can help the algorithm produce higher quality results.
The proposed Meta-RaPS versions were created by taking different perspectives on learning. The first approach taken is Estimation of Distribution Algorithms (EDA), a stochastic learning technique that creates a probability distribution for each decision variable to generate new solutions. The second Meta-RaPS version was developed by utilizing a machine learning algorithm, Q-Learning, which has been successfully applied to optimization problems whose output is a sequence of actions. In the third Meta-RaPS version, Path Relinking (PR) was implemented as a post-optimization method in which the new algorithm learns good attributes by memorizing the best solutions and follows them to reach better solutions. The fourth proposed version of Meta-RaPS presented another form of learning through its ability to adaptively tune parameters. The efficiency of these approaches motivated us to redesign Meta-RaPS by removing the improvement phase and adding a more sophisticated Path Relinking method. The new Meta-RaPS could solve even the largest problems in much less time while maintaining the quality of its solutions.
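To make the EDA idea concrete, the following PBIL-style sketch maintains one Bernoulli probability per binary decision variable and shifts it toward the best sampled solution. It illustrates the learning mechanism only, on a toy objective, and is not the actual EDA used in the proposed Meta-RaPS version.

```python
import random

# PBIL-style EDA sketch: sample candidates from per-variable Bernoulli
# probabilities, then shift the probabilities toward the best sample.
def pbil(fitness, n_bits, pop=20, lr=0.1, iters=200, seed=1):
    rng = random.Random(seed)
    probs = [0.5] * n_bits
    best, best_fit = None, float('-inf')
    for _ in range(iters):
        samples = [[1 if rng.random() < p else 0 for p in probs]
                   for _ in range(pop)]
        elite = max(samples, key=fitness)
        if fitness(elite) > best_fit:
            best, best_fit = elite, fitness(elite)
        # move each probability toward the elite sample's bit
        probs = [(1 - lr) * p + lr * bit for p, bit in zip(probs, elite)]
    return best

onemax = sum  # toy objective: count of ones
print(pbil(onemax, n_bits=20))  # converges toward the all-ones string
```

The probability vector plays the role of the learned "distribution for each decision variable" mentioned in the abstract.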
To evaluate their performance, all introduced versions were tested on the 0-1 Multidimensional Knapsack Problem (MKP). After comparing the proposed algorithms, Meta-RaPS PR and Meta-RaPS Q-Learning appeared to be the algorithms with the best and worst performance, respectively. Nonetheless, they all showed performance superior to other approaches to the 0-1 MKP in the literature.
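The Meta-RaPS construction phase can be sketched for the 0-1 MKP roughly as follows (the priority rule and parameter values here are illustrative assumptions, not those of the thesis): with probability p take the highest-priority feasible item, otherwise pick randomly among items whose priority is within a fraction r of the best.

```python
import random

# Sketch of a Meta-RaPS-style randomized priority construction for the
# 0-1 multidimensional knapsack problem (priority rule is an assumption).
def metaraps_construct(values, weights, capacities, p=0.7, r=0.15, seed=2):
    rng = random.Random(seed)
    m = len(capacities)
    remaining = list(capacities)
    chosen, candidates = [], list(range(len(values)))

    def feasible(i):
        return all(weights[i][k] <= remaining[k] for k in range(m))

    def priority(i):  # value per unit of aggregate weight
        return values[i] / (1 + sum(weights[i]))

    while True:
        cands = [i for i in candidates if feasible(i)]
        if not cands:
            break
        best = max(cands, key=priority)
        if rng.random() < p:
            pick = best                       # greedy step
        else:                                 # restricted random step
            cutoff = priority(best) * (1 - r)
            pick = rng.choice([i for i in cands if priority(i) >= cutoff])
        chosen.append(pick)
        candidates.remove(pick)
        for k in range(m):
            remaining[k] -= weights[pick][k]
    return chosen

values = [10, 7, 6, 4]
weights = [[4, 3], [3, 2], [2, 3], [1, 1]]   # one row per item
solution = metaraps_construct(values, weights, capacities=[6, 6])
print(solution, sum(values[i] for i in solution))
```

The memory and learning mechanisms studied in the thesis replace or bias the priority rule in loops like this one.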
New variants of variable neighbourhood search for 0-1 mixed integer programming and clustering
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. Many real-world optimisation problems are discrete in nature. Although recent rapid developments in computer technologies are steadily increasing the speed of computations, the size of an instance of a hard discrete optimisation problem solvable in a prescribed time does not increase linearly with computer speed. This calls for the development of new solution methodologies for solving larger instances in shorter time. Furthermore, large instances of discrete optimisation problems are normally impossible to solve to optimality within a reasonable computational time/space and can only be tackled with a heuristic approach.
In this thesis the development of so-called matheuristics, heuristics which are based on the mathematical formulation of the problem, is studied and employed within the variable neighbourhood search framework. Some new variants of the variable neighbourhood search metaheuristic itself are suggested, which naturally emerge from exploiting the information in the mathematical programming formulation of the problem. However, those variants may also be applied to problems described by a combinatorial formulation. A unifying perspective on modern advances in local search-based metaheuristics, a so-called hyper-reactive approach, is also proposed. Two NP-hard discrete optimisation problems are considered: 0-1 mixed integer programming and clustering with application to colour image quantisation. Several new heuristics for the 0-1 mixed integer programming problem are developed, based on the principle of variable neighbourhood search. One set of proposed heuristics consists of improvement heuristics, which attempt to find high-quality near-optimal solutions starting from a given feasible solution. Another set consists of constructive heuristics, which attempt to find initial feasible solutions for 0-1 mixed integer programs. Finally, some variable neighbourhood search based clustering techniques are applied to the colour image quantisation problem. All new methods presented are compared to other algorithms recommended in the literature, and a comprehensive performance analysis is provided. Computational results show that the proposed methods either outperform the existing state-of-the-art methods for the problems observed or provide comparable results.
The theory and algorithms presented in this thesis indicate that hybridisation of the CPLEX MIP solver and the VNS metaheuristic can be very effective for solving large instances of the 0-1 mixed integer programming problem. More generally, the results presented in this thesis suggest that hybridisation of exact (commercial) integer programming solvers and metaheuristic methods is of high interest, and such combinations deserve further practical and theoretical investigation. Results also show that VNS can be successfully applied to the colour image quantisation problem. Support from the Mathematical Institute, Serbian Academy of Sciences and Arts, is acknowledged for this research.
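The variable neighbourhood search principle underlying these heuristics can be sketched as a generic skeleton over 0-1 vectors (the actual matheuristics in the thesis replace the shaking and local-search steps with MIP-based components, e.g. calls to the CPLEX solver; this toy version is only the bare VNS loop).

```python
import random

# Basic VNS skeleton over 0-1 vectors: shake by flipping k random bits,
# then run a single-flip local search; k grows when no improvement is
# found and resets to 1 otherwise.
def vns(f, n_bits, k_max=4, iters=100, seed=3):
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n_bits)]

    def local_search(y):
        improved = True
        while improved:
            improved = False
            for i in range(n_bits):
                y2 = y[:]; y2[i] ^= 1
                if f(y2) < f(y):
                    y, improved = y2, True
        return y

    x = local_search(x)
    for _ in range(iters):
        k = 1
        while k <= k_max:
            y = x[:]
            for i in rng.sample(range(n_bits), k):  # shaking
                y[i] ^= 1
            y = local_search(y)
            if f(y) < f(x):
                x, k = y, 1      # move and reset neighbourhood size
            else:
                k += 1           # try a larger neighbourhood
    return x

# Toy objective: minimize the distance to a hidden target pattern.
target = [1, 0, 1, 1, 0, 0, 1, 0]
dist = lambda v: sum(a != b for a, b in zip(v, target))
print(vns(dist, n_bits=8))  # recovers the target pattern
```

The systematic change of neighbourhood size k is what distinguishes VNS from a plain restart heuristic.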
Exploiting Global Constraints for Search and Propagation
Résumé
This thesis focuses on Constraint Programming (CP), an emerging paradigm for solving complex combinatorial optimization problems. The main contributions revolve around constraint filtering and search, the two key components of solving complex problems with CP. On one side, constraint filtering reduces the size of the search space; on the other, search defines how this space is explored. Progress on these topics is essential to broaden the applicability of CP to real-life problems.
Regarding constraint filtering, the contributions are as follows: first, we propose an improvement on an existing algorithm for the relaxed version of a constraint that frequently appears in assignment problems (soft gcc). The proposed algorithm improves on it in complexity, both for the consistency check and for the filtering, and in ease of implementation. Second, we introduce a new constraint (in both hard and soft versions) and filtering algorithms for a recurrent sub-structure that occurs in assignment problems with heterogeneous resources (hierarchical gcc). We show encouraging results compared to an equivalent decomposition based on gcc.
Regarding search, we first present algorithms for counting the number of solutions of two important families of constraints: occurrence-counting constraints, such as alldifferent, symmetric alldifferent and gcc, and admissible-sequence constraints, such as regular. These algorithms are the basis of a new family of constraint-centered, counting-based search heuristics. These heuristics extract information about the number of solutions of the constraints to guide search towards parts of the search space that likely contain a large number of solutions. Experimental results on eight different problems show impressive performance compared to state-of-the-art generic heuristics.
Finally, we experiment with an already known, strong form of filtering that is guided by the search (quick shaving). This technique gives mixed results when applied blindly to every problem. We introduce a simple yet very effective estimator to dynamically enable or disable quick shaving; experimental tests showed very promising results.
Abstract
This thesis focuses on Constraint Programming (CP), an emerging paradigm for solving complex combinatorial optimization problems. The main contributions revolve around constraint filtering and search, the two main components of CP. On one side, constraint filtering reduces the size of the search space; on the other, search defines how this space will be explored. Advances on these topics are crucial to broaden the applicability of CP to real-life problems.
For what concerns constraint filtering, the contribution is twofold. First, we propose an improvement on an existing algorithm for the relaxed version of a constraint that frequently appears in assignment problems (soft gcc). The proposed algorithm outperforms the previously known one in time complexity, both for the consistency check and for the filtering, and in ease of implementation. Second, we introduce a new constraint (in both hard and soft versions) and associated filtering algorithms for a recurrent sub-structure that occurs in assignment problems with heterogeneous resources (hierarchical gcc). We show promising results compared to an equivalent decomposition based on gcc.
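The semantics of gcc, which both soft gcc and hierarchical gcc build on, can be illustrated with a naive consistency check: each value must occur, among the variables, a number of times that lies within its cardinality bounds. Real filtering algorithms use network flows; this brute-force enumeration is only a didactic sketch for tiny instances.

```python
from itertools import product
from collections import Counter

# Brute-force consistency check for a global cardinality constraint (gcc):
# each value v must occur between bounds[v][0] and bounds[v][1] times.
def gcc_consistent(domains, bounds):
    for assignment in product(*domains):
        counts = Counter(assignment)
        if all(lo <= counts.get(v, 0) <= hi
               for v, (lo, hi) in bounds.items()):
            return True  # at least one assignment satisfies all bounds
    return False

domains = [{1, 2}, {1, 2}, {2, 3}]
print(gcc_consistent(domains, {1: (1, 1), 2: (1, 2), 3: (0, 1)}))  # True
print(gcc_consistent(domains, {1: (2, 2), 2: (2, 2), 3: (0, 0)}))  # False
```

The second call fails because values 1 and 2 would together need four occurrences among only three variables.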
For what concerns search, we introduce algorithms to count the number of solutions of two important families of constraints: occurrence-counting constraints, such as alldifferent, symmetric alldifferent and gcc, and sequencing constraints, such as regular. These algorithms are the building blocks of a new family of search heuristics, called constraint-centered counting-based heuristics. They extract information about the number of solutions the individual constraints admit, to guide search towards parts of the search space that are likely to contain a high number of solutions. Experimental results on eight different problems show impressive performance compared to other generic state-of-the-art heuristics.
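For alldifferent, the number of solutions is the permanent of the variable/value bipartite matrix. A naive recursive count, usable as a branching score only on tiny instances, can be sketched as follows (the thesis relies on far more efficient counting and estimation than this exhaustive recursion):

```python
# Count the solutions of an alldifferent constraint by recursion: assign
# the first variable each value in its domain and recurse on the rest,
# never reusing a value. This computes the permanent of the 0-1
# variable/value matrix, exponentially, so it is for illustration only.
def count_alldifferent(domains, used=frozenset()):
    if not domains:
        return 1
    first, rest = domains[0], domains[1:]
    return sum(count_alldifferent(rest, used | {v})
               for v in first if v not in used)

domains = [{1, 2}, {1, 2}, {1, 2, 3}]
print(count_alldifferent(domains))  # 2

# A counting-based branching score: solution count after tentatively
# assigning the first variable to 1 versus 2.
for v in (1, 2):
    print(v, count_alldifferent(domains[1:], frozenset({v})))
```

Counting-based heuristics use such scores to branch toward the assignment that keeps the most solutions alive.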
Finally, we experiment with an already known, strong form of constraint filtering that is heuristically guided by the search (quick shaving). This technique gives mixed results when applied blindly to any problem. We introduce a simple yet very effective estimator to dynamically enable or disable quick shaving, and show experimentally very promising results.
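The shaving idea can be illustrated with a brute-force sketch: tentatively fix a variable to a value, and if no complete solution remains, prune that value from the domain. Real quick shaving works on top of constraint propagation rather than the exhaustive test used here, which also shows why shaving is expensive and why an estimator to switch it off can pay.

```python
from itertools import product

# Shaving sketch: for each variable/value pair, test whether any complete
# solution survives the tentative assignment; if none does, prune the value.
def shave(domains, satisfies):
    pruned = [set(d) for d in domains]
    for i, dom in enumerate(pruned):
        for v in sorted(dom):
            trial = pruned[:i] + [{v}] + pruned[i + 1:]
            if not any(satisfies(a) for a in product(*trial)):
                pruned[i] = pruned[i] - {v}  # value cannot be extended
    return pruned

# Toy CSP: three variables, pairwise all-different.
alldiff = lambda a: len(set(a)) == len(a)
domains = [{1, 2}, {1, 2}, {1, 2, 3}]
print(shave(domains, alldiff))  # [{1, 2}, {1, 2}, {3}]
```

Here shaving deduces that the third variable must take the value 3, a pruning that plain pairwise reasoning on this toy model would miss.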