Mobile Conductance in Sparse Networks and Mobility-Connectivity Tradeoff
In this paper, our recently proposed mobile-conductance-based analytical
framework is extended to sparse settings, offering a unified tool for
analyzing information spreading in mobile networks. A penalty factor is
identified for information spreading in sparse networks compared to the
connected scenario, which is then intuitively interpreted and verified by
simulations. With the analytical results obtained, the mobility-connectivity
tradeoff is quantitatively analyzed to determine how much mobility may be
exploited to make up for a deficiency in network connectivity. Comment: Accepted to ISIT 201
Flooding in Dynamic Networks (Inondation dans les réseaux dynamiques)
This note summarizes our work on flooding in dynamic networks. These networks are defined from a Markovian process generating sequences of graphs over the same vertex set, in which each graph is obtained from its predecessor as follows: an absent edge appears with some fixed probability, and a present edge disappears with some fixed probability. Clementi et al. (PODC 2008) analyzed various information-dissemination processes in such networks and, in particular, established a set of bounds on the performance of flooding. Flooding is an elementary protocol in which every node that learns a piece of information at some time step retransmits it to all of its neighbors at every subsequent step. Clearly, despite its advantages in terms of simplicity and robustness, the flooding protocol suffers from an excessive use of bandwidth. In this note, we show that flooding in dynamic networks can be implemented so as to limit the number of retransmissions of the same piece of information, while preserving its performance in terms of the time needed for a piece of information to reach every node of the network. The main difficulty of our study lies in the temporal dependencies between the network's connections at different time steps.
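The flooding protocol and the edge-Markovian graph process described above admit a compact simulation. The sketch below is a minimal illustration, not the authors' implementation; the function names and the parameters `p` (edge birth probability) and `q` (edge death probability) are assumptions standing in for the symbols elided in the abstract.

```python
import random

def edge_markov_step(edges, all_pairs, p, q):
    """One step of the edge-Markovian process: an absent edge appears
    with probability p; a present edge disappears with probability q."""
    nxt = set()
    for e in all_pairs:
        if e in edges:
            if random.random() > q:   # edge survives
                nxt.add(e)
        elif random.random() < p:     # edge is born
            nxt.add(e)
    return nxt

def flood(n, p, q, source=0, max_steps=1000):
    """Flooding: every informed node retransmits to all current neighbors
    at every step. Returns the number of steps until all n nodes are
    informed, or None if max_steps is exceeded."""
    all_pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    edges = set()
    informed = {source}
    for t in range(1, max_steps + 1):
        edges = edge_markov_step(edges, all_pairs, p, q)
        new = set()
        for (i, j) in edges:
            if i in informed and j not in informed:
                new.add(j)
            elif j in informed and i not in informed:
                new.add(i)
        informed |= new
        if len(informed) == n:
            return t
    return None
```

Note that the edge process at step t depends on the edge set at step t-1, which is exactly the temporal dependence that complicates the analysis.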
Fuzzy tuned gossip algorithms in mobile ad hoc networks
DOI: 10.1109/MED.2009.5164552. Mobile ad hoc networks (MANETs) are modeled as collections of agents that form communities without infrastructure, for a random period of time and with usually cooperative behavior. The nodes of a MANET often carry information to disseminate. The dynamics of information delivery, often referred to as average consensus, is a common problem in these networks, and gossip protocols are designed to implement this task. The standard algorithms used in these protocols exploit the matrix describing the network, i.e., the Laplacian, and exchange information with all of a node's neighbors. In static networks the problem can be treated as an output-feedback problem, but in the case of a MANET it is complicated by the continuous change of the network topology. In this paper, a fuzzy-reasoning approach is proposed to tune and leverage the gossip protocol. Illustrative simulations are included to demonstrate the application of the method and to present comparative results in various cases.
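The Laplacian-based consensus iteration mentioned above can be sketched in a few lines. This is a generic illustration of synchronous average consensus via x ← x − εLx on a static graph, not the paper's fuzzy-tuned protocol; the step size `eps` and the example path graph are assumptions.

```python
import numpy as np

def consensus_step(x, adjacency, eps):
    """One synchronous consensus iteration x <- x - eps * L @ x,
    where L = D - A is the graph Laplacian. The iteration preserves
    the average of x and converges to it when eps < 1/max_degree."""
    A = np.asarray(adjacency, dtype=float)
    L = np.diag(A.sum(axis=1)) - A
    return x - eps * (L @ x)

# Path graph on 4 nodes; the average of the initial values (2.5)
# is preserved at every step and is the fixed point.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])
x = np.array([1.0, 2.0, 3.0, 4.0])
for _ in range(200):
    x = consensus_step(x, A, eps=0.25)
# x is now close to 2.5 at every node
```

On a MANET the adjacency matrix would change between iterations, which is precisely the complication the paper addresses.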
Epidemic Spreading with External Agents
We study epidemic spreading processes in large networks, when the spread is
assisted by a small number of external agents: infection sources with bounded
spreading power, but whose movement is unrestricted vis-\`a-vis the underlying
network topology. For networks which are `spatially constrained', we show that
the spread of infection can be significantly sped up even by a few such
external agents infecting randomly. Moreover, for general networks, we derive
upper-bounds on the order of the spreading time achieved by certain simple
(random/greedy) external-spreading policies. Conversely, for certain common
classes of networks such as line graphs, grids and random geometric graphs, we
also derive lower bounds on the order of the spreading time over all
(potentially network-state aware and adversarial) external-spreading policies;
these adversarial lower bounds match (up to logarithmic factors) the spreading
time achieved by an external agent with a random spreading policy. This
demonstrates that random, state-oblivious infection-spreading by an external
agent is in fact order-wise optimal for spreading in such spatially constrained
networks.
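As a rough illustration of the setting, the sketch below simulates an SI-style epidemic on a line graph assisted by external agents that follow the random, state-oblivious policy the abstract argues is order-wise optimal. The infection model, parameter names, and values are illustrative assumptions, not the paper's exact model.

```python
import random

def spread_with_external_agent(n, beta, num_external, max_steps):
    """SI epidemic on a line graph of n nodes. Intrinsic spread moves
    along line edges with probability beta per step; in addition,
    num_external agents each infect one uniformly random node per step
    (bounded spreading power, movement unrestricted by the topology).
    Returns the step at which all nodes are infected, else None."""
    infected = [False] * n
    infected[0] = True
    for t in range(1, max_steps + 1):
        new = set()
        # intrinsic spread along the line graph
        for i in range(n):
            if infected[i]:
                for j in (i - 1, i + 1):
                    if 0 <= j < n and not infected[j] \
                            and random.random() < beta:
                        new.add(j)
        # external agents: random, state-oblivious infection
        for _ in range(num_external):
            new.add(random.randrange(n))
        for j in new:
            infected[j] = True
        if all(infected):
            return t
    return None
```

Without external agents the line graph needs time linear in n; random external seeding creates multiple fronts that spread in parallel, which is the speed-up the abstract quantifies for spatially constrained networks.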
Gossip Algorithms for Distributed Signal Processing
Gossip algorithms are attractive for in-network processing in sensor networks
because they do not require any specialized routing, there is no bottleneck or
single point of failure, and they are robust to unreliable wireless network
conditions. Recently, there has been a surge of activity in the computer
science, control, signal processing, and information theory communities,
developing faster and more robust gossip algorithms and deriving theoretical
performance guarantees. This article presents an overview of recent work in the
area. We describe convergence rate results, which are related to the number of
transmitted messages and thus the amount of energy consumed in the network for
gossiping. We discuss issues related to gossiping over wireless links,
including the effects of quantization and noise, and we illustrate the use of
gossip algorithms for canonical signal processing tasks including distributed
estimation, source localization, and compression. Comment: Submitted to Proceedings of the IEEE, 29 pages
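A canonical example of the algorithms this overview surveys is pairwise randomized gossip for distributed averaging: at each round a random node wakes up, contacts a random neighbor, and the two replace their values with the pairwise average. The sketch below is a generic textbook version with illustrative parameters, not code from the article.

```python
import random

def randomized_gossip(values, neighbors, num_rounds, seed=0):
    """Pairwise randomized gossip for distributed averaging.
    Each round, a uniformly random node i picks a uniformly random
    neighbor j, and both set their values to (x_i + x_j) / 2.
    The sum (hence the average) is preserved at every round."""
    rng = random.Random(seed)
    x = list(values)
    n = len(x)
    for _ in range(num_rounds):
        i = rng.randrange(n)
        j = rng.choice(neighbors[i])
        avg = (x[i] + x[j]) / 2.0
        x[i] = x[j] = avg
    return x

# Ring of 8 nodes holding values 0..7; the network average is 3.5.
nbrs = {i: [(i - 1) % 8, (i + 1) % 8] for i in range(8)}
x = randomized_gossip(range(8), nbrs, num_rounds=5000)
# every entry of x is now close to 3.5
```

No routing tables or designated fusion center are needed, which is exactly the robustness argument made in the abstract; the number of rounds to converge corresponds to the message (and energy) cost the convergence-rate results characterize.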