6 research outputs found

    Game theoretic approach to medium access control in wireless networks

    Wireless networking is fast becoming the primary method for people to connect to the Internet and with each other. The available wireless spectrum is increasingly congested, with users demanding higher performance and reliability from their wireless connections. This thesis proposes a game-theoretic random access model, compliant with the IEEE 802.11 standard, which can be integrated into the distributed coordination function (DCF). The objective is to design a game-theoretic model that optimises throughput and fairness at each node independently and thereby minimises channel access delay. This dissertation presents a game-theoretic MAC-layer implementation for single-cell networks and for centralised DCF in the presence of hidden terminals, to show how game theory can be applied to improve wireless performance. A utility function is proposed that decouples the protocol's dynamic adaptation to channel load from collision detection. It is demonstrated that the proposed model can reach a Nash equilibrium that results in a relatively stable contention window, provided that a node adapts its behaviour to the idle rate of the broadcast channel, coupled with observation of its own transmission activity. This dissertation shows that the proposed game-theoretic model achieves much higher throughput than the standard IEEE 802.11 DCF, with better short-term fairness and significant improvements in channel access delay.
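
    The abstract does not reproduce the proposed utility function or update rule. Purely as an illustration of the kind of adaptation loop it describes, the sketch below shows a node adjusting its contention window from the observed idle rate of the channel together with its own transmission activity; the function name, thresholds and update rule are assumptions, not material from the thesis.

        # Illustrative sketch only: a node adapts its contention window (CW) using the
        # observed idle rate of the broadcast channel and its own transmission activity.
        # Thresholds and the update rule are hypothetical, not the thesis's utility function.

        CW_MIN, CW_MAX = 16, 1024          # 802.11-style contention window bounds

        def update_cw(cw, idle_rate, own_tx_rate, target_idle=0.3):
            """Return a new contention window.

            idle_rate   -- fraction of recently observed slots that were idle (0..1)
            own_tx_rate -- fraction of recent slots in which this node transmitted
            target_idle -- assumed equilibrium idle fraction (hypothetical value)
            """
            if idle_rate < target_idle and own_tx_rate > idle_rate:
                # Channel looks congested and we are contributing to it: back off.
                return min(cw * 2, CW_MAX)
            if idle_rate > target_idle:
                # Channel is under-used: contend more aggressively.
                return max(cw // 2, CW_MIN)
            return cw                      # near the target: keep the window stable

        # A congested channel pushes CW up; an idle one pulls it back down.
        cw = CW_MIN
        for idle, tx in [(0.1, 0.4), (0.1, 0.4), (0.6, 0.05)]:
            cw = update_cw(cw, idle, tx)
            print(cw)                      # 32, 64, 32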

    Efficient aggregate computations in large-scale dense wireless sensor networks

    Doctoral thesis in Informatics. Assuming a world where we can be surrounded by hundreds or even thousands of inexpensive computing nodes densely deployed, each one with sensing and wireless communication capabilities, the problem of efficiently dealing with the enormous amount of information generated by those nodes emerges as a major challenge. The research in this dissertation addresses this challenge. This work proves that it is possible to obtain aggregate quantities with a time complexity that is independent of the number of nodes, or that grows very slowly as the number of nodes increases. This is achieved by co-designing the distributed algorithms for obtaining aggregate quantities and the underlying communication system. This work describes (i) the design and implementation of a prioritized medium access control (MAC) protocol which enforces strict priorities over wireless channels and (ii) the algorithms that exploit this MAC protocol to obtain the minimum (MIN), maximum (MAX) and an interpolation of sensor values with a time complexity that is independent of the number of nodes deployed, whereas other state-of-the-art approaches have a time complexity that depends on the number of nodes. These techniques also make it possible to efficiently obtain estimates of the number of nodes (COUNT) and of the median of the sensor values (MEDIAN). The novel approach proposed for efficiently obtaining aggregate quantities in large-scale, dense wireless sensor networks (WSN) is based on adapting to wireless media a MAC protocol, known as dominance/binary countdown, which previously existed only for wired media, and on designing algorithms that exploit this MAC protocol for efficient data aggregation. Designing and implementing such a MAC protocol for wireless media is not trivial. For this reason, a substantial part of this work is focused on the development and implementation of WiDom (short for Wireless Dominance), a wireless MAC protocol that enables efficient data aggregation in large-scale, dense WSN. An implementation of WiDom is first proposed under the assumption of a fully connected network (a network with a single broadcast domain). This implementation can be exploited to efficiently obtain aggregate quantities. WiDom can also implement static-priority scheduling over wireless media, and a schedulability analysis for WiDom is therefore also proposed. WiDom is then extended to operate in sensor networks where a single transmission cannot reach all nodes, that is, in a network with multiple broadcast domains. These results are significant because networks of nodes that take sensor readings are often designed to be large-scale, dense networks, and it is exactly in such scenarios that the proposed distributed algorithms for obtaining aggregate quantities excel. The implementation and test of these distributed algorithms on a purpose-built hardware platform show that aggregate quantities in large-scale, dense wireless sensor systems can be obtained efficiently. This research was partially developed at the Real-Time Computing System Research Centre (CISTER), School of Engineering of the Polytechnic of Porto (ISEP/IPP).
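
    WiDom itself is a radio-level protocol; purely as an illustration of the dominance/binary countdown idea it adapts from wired buses, the sketch below simulates one arbitration round in a single broadcast domain, where each node contends with its sensor reading as its priority and the surviving value is the network-wide MIN. The number of arbitration steps depends on the bit width of the priority field, not on the number of nodes, which is the property the abstract emphasises. The function name and the 16-bit field are assumptions, not the thesis's implementation.

        # Illustrative simulation of dominance/binary countdown arbitration as used
        # conceptually for aggregation: the winning priority equals MIN of the readings.
        # This is a sketch of the idea, not the thesis's WiDom implementation.

        NUM_BITS = 16  # assumed width of the priority field

        def binary_countdown_min(readings):
            """Return MIN of readings via simulated bitwise dominance arbitration.

            A 0 bit is dominant: a node that would send a recessive 1 while the
            channel carries a dominant 0 withdraws for the rest of the round.
            """
            contenders = set(range(len(readings)))
            for bit in reversed(range(NUM_BITS)):            # most significant bit first
                sent = {i: (readings[i] >> bit) & 1 for i in contenders}
                channel = min(sent.values())                 # wired-AND: any 0 dominates
                contenders = {i for i in contenders if sent[i] == channel}
            winner = next(iter(contenders))                  # all survivors hold the MIN
            return readings[winner]

        print(binary_countdown_min([312, 87, 4095, 87, 901]))   # -> 87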

    Intelligent Circuits and Systems

    ICICS-2020 was the third conference initiated by the School of Electronics and Electrical Engineering at Lovely Professional University. It explored recent innovations by researchers working on the development of smart and green technologies in the fields of Energy, Electronics, Communications, Computers, and Control. ICICS gives innovators an opportunity to identify new avenues for the social and economic benefit of society. The conference bridges the gap between academia, R&D institutions, social visionaries, and experts from all strata of society, allowing them to present their ongoing research activities and fostering research relations between them. It provides opportunities for the exchange of new ideas, applications, and experiences in the field of smart technologies and for finding global partners for future collaboration. ICICS-2020 was organised into two broad categories: Intelligent Circuits & Intelligent Systems, and Emerging Technologies in Electrical Engineering.

    Probabilistic methods for distributed information dissemination

    Thesis (Ph.D.) -- Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2013. Cataloged from the PDF version of the thesis. Includes bibliographical references (p. 457-484). The ever-increasing growth of modern networks comes with a paradigm shift in network operation. Networks can no longer be abstracted as deterministic, centrally controlled systems with static topologies but need to be understood as highly distributed, dynamic systems with inherent unreliabilities. This makes many communication, coordination and computation tasks challenging, and in many scenarios communication becomes a crucial bottleneck. In this thesis, we develop new algorithms and techniques to address these challenges. In particular, we concentrate on broadcast and information dissemination tasks and introduce novel ideas on how randomization can lead to powerful, simple and practical communication primitives suitable for these modern networks. In this endeavor we combine and further develop tools from different disciplines, trying to simultaneously address the distributed, information-theoretic and algorithmic aspects of network communication. The two main probabilistic techniques developed to disseminate information in a network are gossip and random linear network coding. Gossip is an alternative to classical flooding approaches: instead of nodes repeatedly forwarding information to all their neighbors, gossiping nodes forward information only to a small number of (random) neighbors. We show that, when done right, gossip disperses information almost as quickly as flooding, albeit with a drastically reduced communication overhead. Random linear network coding (RLNC) applies when a large amount of information or many messages are to be disseminated. Instead of routing messages through intermediate nodes, that is, following a classical store-and-forward approach, RLNC mixes messages together by forwarding random linear combinations of messages. The simplicity and topology-obliviousness of this approach makes RLNC particularly interesting for the distributed settings considered in this thesis. Unfortunately, the performance of RLNC was not well understood even for the simplest such settings. We introduce a simple yet powerful analysis technique that allows us to prove optimal performance guarantees for all settings considered in the literature and many more that were not analyzable so far. Specifically, we give many new results for RLNC gossip algorithms, RLNC algorithms for dynamic networks, and RLNC with correlated data. We also provide a novel, highly efficient distributed implementation of RLNC that achieves these performance guarantees while buffering only a minimal amount of information at intermediate nodes. We then apply our techniques to improve communication primitives in multi-hop radio networks. While radio networks inherently support broadcast communications, e.g., from one node to all surrounding nodes, interference of simultaneous transmissions makes multi-hop broadcast communication an interesting challenge. We show that, again, randomization holds the key to obtaining simple, efficient and distributed information dissemination protocols. In particular, using random back-off strategies to coordinate access to the shared medium leads to optimal gossip-like communications, and applying RLNC achieves the first throughput-optimal multi-message communication primitives.
Lastly, we apply our probabilistic approach for analyzing simple, distributed propagation protocols in a broader context by studying algorithms for the Lovász Local Lemma. These algorithms find solutions to certain local constraint satisfaction problems by randomly fixing and propagating violations locally. Our two main results show that, firstly, there are also efficient deterministic propagation strategies achieving the same and, secondly, that using the random fixing strategy has the advantage of producing not just an arbitrary solution but an approximately uniformly random one. Both results lead to simple constructions for many locally consistent structures of interest that were not known to be efficiently constructible before. By Bernhard Haeupler. Ph.D.
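
    As an illustration of the basic RLNC operation the abstract refers to (not the thesis's algorithms or analysis), the sketch below encodes packets as random linear combinations over a small prime field and decodes by Gaussian elimination once enough independent combinations have been collected; the field size and helper names are assumptions.

        # Illustrative RLNC sketch: encode k messages as random linear combinations over
        # GF(Q) and decode by Gaussian elimination from >= k combinations. With high
        # probability the random combinations are linearly independent. Names are assumed.
        import random

        Q = 257  # a small prime field, chosen only to keep the arithmetic simple

        def encode(messages):
            """Return (coefficients, coded_payload): one random linear combination."""
            k, n = len(messages), len(messages[0])
            coeffs = [random.randrange(Q) for _ in range(k)]
            coded = [sum(c * m[j] for c, m in zip(coeffs, messages)) % Q for j in range(n)]
            return coeffs, coded

        def decode(packets, k):
            """Recover the k original messages from coded packets (elimination mod Q)."""
            rows = [list(c) + list(p) for c, p in packets]   # augmented [coeffs | payload]
            for col in range(k):
                pivot = next(r for r in range(col, len(rows)) if rows[r][col] % Q)
                rows[col], rows[pivot] = rows[pivot], rows[col]
                inv = pow(rows[col][col], Q - 2, Q)          # modular inverse (Fermat)
                rows[col] = [x * inv % Q for x in rows[col]]
                for r in range(len(rows)):
                    if r != col and rows[r][col] % Q:
                        f = rows[r][col]
                        rows[r] = [(a - f * b) % Q for a, b in zip(rows[r], rows[col])]
            return [rows[i][k:] for i in range(k)]

        messages = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]         # k = 3 toy message vectors
        packets = [encode(messages) for _ in range(4)]       # slight redundancy
        print(decode(packets, k=3))                          # -> the original messages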

    Efficient Passive Clustering and Gateway Selection in MANETs

    Passive clustering does not employ control packets to collect topological information in ad hoc networks. In our proposal, we avoid frequent changes in the cluster architecture caused by repeated election and re-election of cluster heads and gateways. Our primary objective has been to make passive clustering more practical by employing an optimal number of gateways and by reducing the number of rebroadcast packets.
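
    As a hypothetical illustration of the mechanism the abstract alludes to (not the paper's exact rules), the sketch below shows a passive-clustering style forwarding decision in which cluster state is piggybacked on data packets rather than carried in dedicated control packets, and only cluster heads and gateways rebroadcast; the role names, fields and declaration rules are assumptions.

        # Illustrative passive-clustering sketch: roles are learned from state piggybacked
        # on ordinary data packets (no dedicated control traffic), and only cluster heads
        # and gateways rebroadcast. Role names and rules below are assumptions.

        CLUSTER_HEAD, GATEWAY, ORDINARY = "CH", "GW", "ORD"

        class Node:
            def __init__(self, node_id):
                self.node_id = node_id
                self.role = ORDINARY
                self.known_heads = set()        # cluster heads overheard so far

            def on_packet(self, sender_role, sender_head_id):
                """Update our role from piggybacked state; return True to rebroadcast."""
                if sender_role == CLUSTER_HEAD:
                    self.known_heads.add(sender_head_id)
                if self.role == ORDINARY:
                    if not self.known_heads:
                        # No head heard yet: first declaration wins, claim the role.
                        self.role = CLUSTER_HEAD
                        self.known_heads.add(self.node_id)
                    elif len(self.known_heads) >= 2:
                        # Hearing two distinct heads qualifies this node as a gateway.
                        self.role = GATEWAY
                # Ordinary nodes stay silent, which cuts redundant rebroadcasts.
                return self.role in (CLUSTER_HEAD, GATEWAY)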