5 research outputs found

    Constant-Space Population Protocols for Uniform Bipartition

    In this paper, we consider the uniform bipartition problem in the population protocol model. The goal of the uniform bipartition problem is to divide a population into two groups of the same size. We study the problem under various assumptions: 1) a population with or without a base station, 2) weak or global fairness, 3) symmetric or asymmetric protocols, and 4) designated or arbitrary initial states. As a result, we completely clarify the constant-space solvability of the uniform bipartition problem and, for the solvable cases, propose space-optimal protocols.
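    As a concrete illustration only (not the paper's space-optimal construction), the sketch below simulates one naive asymmetric rule with designated initial states and no base station: whenever two undecided agents meet, the initiator joins one group and the responder joins the other, so the two group sizes end up differing by at most one. The state and function names are invented for this sketch.

        # Hypothetical bipartition sketch: 3-state asymmetric rule, designated
        # initial states, no base station, uniformly random pairwise scheduler.
        import random

        UNDECIDED, GROUP_A, GROUP_B = "u", "a", "b"

        def uniform_bipartition(n):
            agents = [UNDECIDED] * n          # every agent starts in the same designated state
            undecided = n
            interactions = 0
            # Run until at most one undecided agent remains (one is left over when n is odd).
            while undecided >= 2:
                i, j = random.sample(range(n), 2)          # (initiator, responder) pair
                if agents[i] == UNDECIDED and agents[j] == UNDECIDED:
                    # Asymmetric rule: the two interacting agents take different groups.
                    agents[i], agents[j] = GROUP_A, GROUP_B
                    undecided -= 2
                interactions += 1
            return agents.count(GROUP_A), agents.count(GROUP_B), interactions

        print(uniform_bipartition(1000))   # e.g. (500, 500, ~4e5 interactions) for even n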

    Dynamic Size Counting in Population Protocols

    The population protocol model describes a network of anonymous agents that interact asynchronously in pairs chosen at random. Each agent starts in the same initial state $s$. We introduce the *dynamic size counting* problem: approximately counting the number of agents in the presence of an adversary who at any time can remove any number of agents or add any number of new agents in state $s$. A valid solution requires that after each addition/removal event, resulting in population size $n$, with high probability each agent "quickly" computes the same constant-factor estimate of the value $\log_2 n$ (how quickly is called the *convergence* time), which remains the output of every agent for as long as possible (the *holding* time). Since the adversary can remove agents, the holding time is necessarily finite: even after the adversary stops altering the population, it is impossible to *stabilize* to an output that never again changes. We first show that a protocol solves the dynamic size counting problem if and only if it solves the *loosely-stabilizing counting* problem: that of estimating $\log n$ in a *fixed-size* population, but where the adversary can initialize each agent in an arbitrary state, with the same convergence time and holding time. We then show a protocol solving the loosely-stabilizing counting problem with the following guarantees: if the population size is $n$, $M$ is the largest initial estimate of $\log n$, and $s$ is the maximum integer initially stored in any field of the agents' memory, we have expected convergence time $O(\log n + \log M)$, expected polynomial holding time, and expected memory usage of $O(\log^2(s) + (\log \log n)^2)$ bits. Interpreted as a dynamic size counting protocol, when changing from population size $n_{prev}$ to $n_{next}$, the convergence time is $O(\log n_{next} + \log \log n_{prev})$.
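    The abstract does not spell out the protocol itself; as a rough, hypothetical illustration of one standard ingredient in size-estimation protocols, the sketch below has every agent draw a geometric random value (coin flips until the first heads) and spread the maximum through pairwise interactions; the maximum of $n$ such draws concentrates around $\log_2 n$, giving a constant-factor estimate. This is not the paper's loosely-stabilizing construction, and the function names are invented.

        # Hypothetical sketch of a size-estimation ingredient: the maximum of n
        # geometric random values, spread by pairwise interactions, is ~ log2(n).
        import random

        def geometric():
            flips = 1
            while random.random() < 0.5:      # fair coin: count flips until the first heads
                flips += 1
            return flips

        def estimate_log2_n(n):
            values = [geometric() for _ in range(n)]      # each agent's local draw
            for _ in range(4 * n * n.bit_length()):       # enough interactions for the max to reach everyone
                i, j = random.sample(range(n), 2)
                values[i] = values[j] = max(values[i], values[j])
            return values[0]                              # any single agent's estimate of log2(n)

        print(estimate_log2_n(1 << 12))   # typically within a small constant of 12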

    Etude de la fiabilité des algorithmes self-convergeants face aux soft-erreurs (Study of the reliability of self-converging algorithms against soft errors)

    This thesis is devoted to the study of the robustness/sensitivity of a self-converging algorithm with respect to SEUs. These phenomena, also called bit-flips, can modify the content of memory elements as a result of the silicon ionization caused by the impact of charged particles. The study is relevant given ongoing miniaturization, which will soon yield circuits with hundreds to thousands of processing cores on a single chip and will require the cores to communicate in an effective and robust manner. In this context, so-called self-converging algorithms can be used to ensure that communication between cores is reliable without external intervention. A fault-injection study of the robustness of the algorithm was performed; the algorithm was initially executed on a LEON3 processor implemented in an FPGA embedded in a dedicated test platform. Preliminary fault-injection campaigns based on a state-of-the-art method called CEU (Code Emulated Upset) showed that the algorithm has some sensitivity to SEUs. To cope with this, software modifications were made and fault-tolerance techniques were implemented at the software level in the program running the self-converging algorithm. Further fault-injection experiments were performed to assess the SEU robustness of the modified algorithm and its remaining weak points. The impact of SEUs was also explored on a hardware implementation of the self-converging algorithm in an FPGA, evaluated by fault-injection experiments at the RTL level of the circuit. The results obtained with this hardware version show a significant improvement in the robustness of the algorithm compared with its software version.
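    As a purely illustrative, software-level sketch (not the thesis's LEON3/FPGA/CEU setup), the code below injects an emulated SEU bit-flip into the state of a simple self-converging averaging loop between cores and checks that the cores' values converge to a common value again afterwards; all names and parameters are invented for the example.

        # Hypothetical fault-injection campaign: flip one bit of one core's state
        # mid-run and verify that the self-converging averaging loop re-converges.
        import random
        import struct

        def flip_random_bit(x):
            # Emulate an SEU: flip one mantissa bit in the 64-bit float representation of x.
            bits = struct.unpack("<Q", struct.pack("<d", x))[0]
            bits ^= 1 << random.randrange(52)      # mantissa bits only, so the value stays finite
            return struct.unpack("<d", struct.pack("<Q", bits))[0]

        def run_campaign(n_cores=8, rounds=200, seu_round=50):
            values = [random.uniform(0.0, 100.0) for _ in range(n_cores)]
            for r in range(rounds):
                if r == seu_round:                 # inject the fault at a chosen instant
                    target = random.randrange(n_cores)
                    values[target] = flip_random_bit(values[target])
                # Self-converging step: every core moves toward the current global average.
                mean = sum(values) / n_cores
                values = [0.5 * (v + mean) for v in values]
            spread = max(values) - min(values)
            return spread < 1e-6                   # True if the cores converged again after the SEU

        print(all(run_campaign() for _ in range(20)))   # expected output: True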