
    Design Patterns in Beeping Algorithms

    We consider networks of processes which interact with beeps. In the basic model defined by Cornejo and Kuhn, which we refer to as the BL variant, processes can choose in each round either to beep or to listen. Those who beep are unable to detect simultaneous beeps. Those who listen can only distinguish between silence and the presence of at least one beep. Stronger variants exist where the nodes can also detect collisions while they are beeping (B_{cd}L) or listening (BL_{cd}), or both (B_{cd}L_{cd}). Beeping models are weak in essence, and even simple tasks are difficult or infeasible in them. This paper starts with a discussion of generic building blocks (design patterns) which seem to occur frequently in the design of beeping algorithms. They include multi-slot phases: dividing the main loop into a number of specialised slots; exclusive beeps: having a single node beep at a time in a neighbourhood (within one or two hops); adaptive probability: increasing or decreasing the probability of beeping so as to produce more exclusive beeps; internal (resp. peripheral) collision detection: detecting collisions while beeping (resp. listening); and emulation of collision detection: enabling this feature when it is not available as a primitive. We then provide algorithms for a number of basic problems, including colouring, 2-hop colouring, degree computation, 2-hop MIS, and collision detection (in BL). Using the patterns, we formulate these algorithms in a rather concise and elegant way. Their analyses (in the full version) are more technical; e.g., one relies on a martingale technique with non-independent variables, and another improves the analysis of the MIS algorithm of P. Jeavons et al. by getting rid of a gigantic constant (the asymptotic order was already optimal). Finally, we study the relative power of several variants of beeping models. In particular, we explain how every Las Vegas algorithm with collision detection can be converted, through emulation, into a Monte Carlo algorithm without it, at the cost of a logarithmic slowdown. We prove that this slowdown is optimal up to a constant factor by giving a matching lower bound.
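    A minimal illustrative sketch (not code from the paper) of two of the patterns named above, simulated in the BL variant: a single slot in which listeners only learn whether at least one neighbour beeped, and an adaptive-probability rule by which nodes raise or lower their beeping probability to steer a neighbourhood towards exclusive beeps. The function names and the doubling/halving constants are illustrative assumptions.

```python
import random

def bl_slot(neighbours, p, rng):
    """One slot under BL semantics: every node beeps with its own probability;
    a listener hears True iff at least one neighbour beeped; beepers get no
    feedback at all (no collision detection while beeping)."""
    beeped = {v: rng.random() < p[v] for v in neighbours}
    heard = {v: any(beeped[u] for u in neighbours[v]) for v in neighbours}
    return beeped, heard  # heard[v] is only meaningful when beeped[v] is False

def adaptive_probability(neighbours, rounds=100, seed=0):
    """Adaptive-probability pattern (illustrative rule): a listener that heard
    silence doubles its probability, a listener that heard a beep halves it,
    so that slots with a single (exclusive) beeper become more likely."""
    rng = random.Random(seed)
    p = {v: 0.5 for v in neighbours}
    for _ in range(rounds):
        beeped, heard = bl_slot(neighbours, p, rng)
        for v in neighbours:
            if not beeped[v]:
                p[v] = min(1.0, 2 * p[v]) if not heard[v] else max(1 / 64, p[v] / 2)
    return p

# toy 4-cycle 0-1-2-3-0
nbrs = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
print(adaptive_probability(nbrs))
```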

    Generalised Pattern Matching Revisited

    In the problem of Generalised Pattern Matching (GPM) [STOC'94, Muthukrishnan and Palem], we are given a text T of length n over an alphabet Σ_T, a pattern P of length m over an alphabet Σ_P, and a matching relationship ⊆ Σ_T × Σ_P, and must return all substrings of T that match P (reporting) or the number of mismatches between each length-m substring of T and P (counting). In this work, we improve over all previously known algorithms for this problem for various parameters describing the input instance: D, the maximum number of characters that match a fixed character; S, the number of pairs of matching characters; and I, the total number of disjoint intervals of characters that match the m characters of the pattern P. At the heart of our new deterministic upper bounds for D and S lies a faster construction of superimposed codes, which solves an open problem posed in [FOCS'97, Indyk] and can be of independent interest. To conclude, we demonstrate the first lower bounds for GPM. We start by showing that any deterministic or Monte Carlo algorithm for GPM must use Ω(S) time, and then proceed to show higher lower bounds for combinatorial algorithms. These bounds show that our algorithms are almost optimal, unless a radically new approach is developed.
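    As a concrete reading of the problem statement (not one of the paper's algorithms), the following naive O(nm) sketch takes the matching relationship as an explicit set of (text character, pattern character) pairs and produces both the counting and the reporting outputs; all names and the toy relation are illustrative.

```python
def gpm_count(text, pattern, matches):
    """For each alignment i, count mismatches between text[i:i+m] and the
    pattern under the matching relation `matches` (a set of character pairs)."""
    n, m = len(text), len(pattern)
    return [sum((text[i + j], pattern[j]) not in matches for j in range(m))
            for i in range(n - m + 1)]

def gpm_report(text, pattern, matches):
    """Reporting version: starting positions whose substring fully matches."""
    return [i for i, d in enumerate(gpm_count(text, pattern, matches)) if d == 0]

# toy relation: text character 'a' matches pattern characters 'a' and 'b'
S = {('a', 'a'), ('a', 'b'), ('b', 'b'), ('c', 'c')}
print(gpm_count("abcab", "ab", S))   # [0, 2, 1, 0]
print(gpm_report("abcab", "ab", S))  # [0, 3]
```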

    How Long It Takes for an Ordinary Node with an Ordinary ID to Output?

    In the context of distributed synchronous computing, processors perform in rounds, and the time-complexity of a distributed algorithm is classically defined as the number of rounds before all computing nodes have output. Hence, this complexity measure captures the running time of the slowest node(s). In this paper, we are interested in the running time of the ordinary nodes, to be compared with the running time of the slowest nodes. The node-averaged time-complexity of a distributed algorithm on a given instance is defined as the average, taken over every node of the instance, of the number of rounds before that node outputs. We compare the node-averaged time-complexity with the classical one in the standard LOCAL model for distributed network computing. We show that there can be an exponential gap between the node-averaged time-complexity and the classical time-complexity, as witnessed by, e.g., leader election. Our first main result is a positive one, stating that, in fact, the two time-complexities behave the same for a large class of problems on very sparse graphs. In particular, we show that, for LCL problems on cycles, the node-averaged time-complexity is of the same order of magnitude as the slowest-node time-complexity. In addition, in the LOCAL model, the time-complexity is computed as a worst case over all possible identity assignments to the nodes of the network. In this paper, we also investigate the ID-averaged time-complexity, where the number of rounds is averaged over all possible identity assignments. Our second main result is that the ID-averaged time-complexity is essentially the same as the expected time-complexity of randomized algorithms (where the expectation is taken over all possible random bits used by the nodes, and the number of rounds is measured for the worst-case identity assignment). Finally, we study the node-averaged ID-averaged time-complexity.
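    A small illustrative computation (not from the paper) contrasting the two measures on one instance, given the round at which each node outputs; the exponential gap mentioned above corresponds to profiles where a single node is very slow while all other nodes finish almost immediately. The example profile is hypothetical.

```python
def classical_complexity(output_round):
    """Classical time-complexity on an instance: rounds until the slowest node outputs."""
    return max(output_round.values())

def node_averaged_complexity(output_round):
    """Node-averaged time-complexity: average output round over all nodes."""
    return sum(output_round.values()) / len(output_round)

# hypothetical profile on 8 nodes: one slow node, seven that output after one round
rounds = {v: 1 for v in range(7)}
rounds[7] = 128
print(classical_complexity(rounds))      # 128
print(node_averaged_complexity(rounds))  # 16.875
```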

    Software defined wireless backhauling for 5G networks

    Some of the important elements to guarantee a network's minimum level of performance are: i) efficient routing of the data traffic, and ii) a good resource allocation strategy. This project proposes tools to optimise these elements in an IEEE 802.11ac-based wireless backhaul network, considering the constraints derived from an implementation in a software defined network. These tools have been designed using convex optimisation theory in order to provide an optimal solution that ensures circuit-mode routing while taking into account the impact on higher and lower layers of the network. Additionally, the traffic dynamics of the network are handled through a sensitivity analysis of the convex problem, using the Lagrange multipliers to adapt the solution to the changes produced by the evolution of the traffic. Finally, results obtained using the proposed solutions show improved performance in bit rate and end-to-end delay with respect to typical routing algorithms, for both simple and complex network deployments.
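    A hypothetical, minimal sketch of the kind of convex formulation and Lagrange-multiplier sensitivity analysis described above (not the project's actual model): flows with fixed circuit-mode routes share link capacities, the allocated rates are chosen by a convex program, and the dual values of the capacity constraints indicate how sensitive the optimum is to each link, which is the information one would use to react to traffic changes. The routing matrix, capacities, demands, and the use of cvxpy are all assumptions for illustration.

```python
import cvxpy as cp
import numpy as np

# A[l, f] = 1 if flow f is routed over link l (fixed, circuit-mode routes)
A = np.array([[1, 1, 0],
              [0, 1, 1]], dtype=float)
capacity = np.array([10.0, 6.0])        # illustrative link capacities (Mb/s)
demand = np.array([7.0, 5.0, 4.0])      # illustrative traffic demands (Mb/s)

x = cp.Variable(3, nonneg=True)         # rate allocated to each flow
capacity_con = A @ x <= capacity        # per-link capacity constraints

# allocate rates as close as possible to the demands, subject to capacities
problem = cp.Problem(cp.Minimize(cp.sum_squares(x - demand)), [capacity_con])
problem.solve()

print("allocated rates:", x.value)
# Lagrange multipliers (duals) of the capacity constraints: a large value flags a
# bottleneck link whose extra capacity would improve the objective the most.
print("link sensitivities:", capacity_con.dual_value)
```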

    Algorithmic and enumerative aspects of the Moser-Tardos distribution

    Moser & Tardos have developed a powerful algorithmic approach (henceforth "MT") to the Lovász Local Lemma (LLL); the basic operation done in MT and its variants is a search for "bad" events in a current configuration. In the initial stage of MT, the variables are set independently. We examine the distributions on these variables which arise during intermediate stages of MT. We show that these configurations have a more or less "random" form, building further on the "MT-distribution" concept of Haeupler et al. in understanding the (intermediate and) output distribution of MT. This has a variety of algorithmic applications; the most important is that bad events can be found relatively quickly, improving upon MT across the complexity spectrum: it makes some polynomial-time algorithms sub-linear (e.g., for Latin transversals, which are of basic combinatorial interest), gives lower-degree polynomial run-times in some settings, transforms certain super-polynomial-time algorithms into polynomial-time ones, and leads to Las Vegas algorithms for some coloring problems for which only Monte Carlo algorithms were known. We show that, under certain conditions, when the LLL condition is violated, a variant of the MT algorithm can still produce a distribution which avoids most of the bad events. We show that in some cases this MT variant can run faster than the original MT algorithm itself, and we develop the first-known criterion for the case of the asymmetric LLL. This can be used to find partial Latin transversals, improving upon earlier bounds of Stein (1975), among other applications. We furthermore give applications in enumeration, showing that most applications (where we aim for all or most of the bad events to be avoided) have many more solutions than known before, by proving that the MT-distribution has "large" min-entropy and hence that its support-size is large.
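    A minimal, generic sketch of the basic Moser-Tardos resampling loop described above (illustrative only, not the variants developed in this paper): the variables are first sampled independently, and as long as some bad event occurs, the variables it depends on are resampled. The event representation and the toy 2-colouring example are assumptions made for the illustration.

```python
import random

def moser_tardos(num_vars, sample_var, bad_events, rng=random, max_steps=10**6):
    """bad_events is a list of (indices, predicate) pairs; predicate(assignment)
    is True iff the event occurs under the current assignment.  While some bad
    event occurs, resample only the variables it depends on."""
    x = [sample_var(i, rng) for i in range(num_vars)]   # independent initial sampling
    for _ in range(max_steps):
        violated = next(((idx, pred) for idx, pred in bad_events if pred(x)), None)
        if violated is None:
            return x                                    # no bad event occurs: done
        for i in violated[0]:
            x[i] = sample_var(i, rng)                   # resample the event's variables
    raise RuntimeError("did not converge within max_steps")

# toy use: 2-colour the vertices of a 6-cycle so that no edge is monochromatic
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)]
events = [((u, v), (lambda a, u=u, v=v: a[u] == a[v])) for u, v in edges]
print(moser_tardos(6, lambda i, rng: rng.randint(0, 1), events))
```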