
    A COMMUNICATION FRAMEWORK FOR MULTIHOP WIRELESS ACCESS AND SENSOR NETWORKS: ANYCAST ROUTING & SIMULATION TOOLS

    The reliance on wireless networks has grown tremendously within a number of varied application domains, prompting an evolution towards the use of heterogeneous multihop network architectures. We propose and analyze two communication frameworks for such networks. The first framework is designed for communication within multihop wireless access networks. It supports dynamic algorithms for locating access points using anycast routing with multiple metrics and for balancing network load. The evaluation shows significant performance improvement over traditional solutions. The second framework is designed for communication within sensor networks and includes lightweight versions of our algorithms to fit the limitations of sensor networks. Analysis shows that this stripped-down version can work almost equally well when tailored to the needs of a sensor network. We have also developed an extensive simulation environment using NS-2 to test realistic situations for the evaluation of our work. Our tools support analysis of realistic scenarios, including the spreading of a forest fire within an area, and can easily be ported to other simulation software. Lastly, we use our algorithms and simulation environment to investigate sink movement optimization within sensor networks. Based on these results, we propose strategies, to be addressed in follow-on work, for building topology maps and finding optimal data collection points. Altogether, the communication framework and realistic simulation tools provide a complete communication and evaluation solution for access and sensor networks.
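    A minimal Python sketch of the anycast access-point selection idea described above. The gateway names, metric fields, and weights below are invented for illustration and are not the thesis's actual algorithm; the point is only that a node scores candidate access points by a weighted combination of path length and advertised load, so a congested nearby gateway can lose to a lightly loaded one a few hops farther away.

        from dataclasses import dataclass

        @dataclass
        class Gateway:
            """A candidate access point advertised through anycast routing."""
            name: str
            hop_count: int   # path length to the gateway
            load: float      # normalized load in [0, 1]

        def select_gateway(gateways, w_hops=1.0, w_load=4.0):
            """Pick the gateway minimizing a weighted multi-metric score.

            Weighting load more heavily than hop count steers traffic away
            from congested access points, trading path length for balance.
            """
            return min(gateways, key=lambda g: w_hops * g.hop_count + w_load * g.load)

        candidates = [
            Gateway("AP-1", hop_count=2, load=0.9),   # close but congested
            Gateway("AP-2", hop_count=4, load=0.2),   # farther but lightly loaded
        ]
        print(select_gateway(candidates).name)  # AP-2: load outweighs the extra hops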

    Communication-Efficient Probabilistic Algorithms: Selection, Sampling, and Checking

    This dissertation addresses three fundamental classes of problems in big-data systems, for which we develop communication-efficient probabilistic algorithms. The first part considers various selection problems, the second part weighted sampling, and the third part the probabilistic checking of the correctness of basic operations in big-data frameworks. This work is motivated by a growing need for communication efficiency, stemming from the fact that the share of both the acquisition cost and the energy consumption of supercomputers attributable to the network and its use, as well as the share of the running time of distributed applications, keeps growing. Surprisingly few communication-efficient algorithms are known for fundamental big-data problems; in this work, we close some of these gaps. First, we consider various selection problems, beginning with the distributed version of the classical selection problem, i.e., finding the element of rank k in a large distributed input. We show how this problem can be solved communication-efficiently without assuming that the input elements are randomly distributed. To this end, we replace the pivot-selection method in a long-known algorithm and show that this suffices. We then show that selection from locally sorted sequences (multisequence selection) can be solved considerably faster when the exact rank of the output element may vary within a certain range. We use this to construct a distributed priority queue with bulk operations, which we later employ for weighted sampling from data streams (reservoir sampling). Finally, we consider the problem of identifying the globally most frequent objects, as well as those whose associated values sum to the largest totals, using a sampling-based approach. The chapter on weighted sampling first presents new construction algorithms for a classical data structure for this problem, so-called alias tables. We begin with the first linear-time construction algorithm for this data structure that requires only constant auxiliary memory. We then parallelize this algorithm for shared memory, obtaining the first parallel construction algorithm for alias tables. Next, we show how the problem can be tackled on distributed systems with a two-level algorithm. We then present an output-sensitive algorithm for weighted sampling with replacement; output-sensitive means that the running time depends on the number of unique elements in the output rather than on the sample size. This algorithm can be used sequentially, on shared-memory machines, and on distributed systems, and is the first such algorithm in all three settings. We subsequently adapt it to weighted sampling without replacement by combining it with an estimator for the number of unique elements in a sample drawn with replacement. Poisson sampling, a generalization of Bernoulli sampling to weighted elements, can be reduced to integer sorting, and we show how an existing approach can be parallelized. For sampling from data streams, we adapt a sequential algorithm and show how it can be parallelized in a mini-batch model using the bulk priority queue introduced in the selection chapter. The chapter closes with an extensive evaluation of our alias-table construction algorithms, our output-sensitive algorithm for weighted sampling with replacement, and our algorithm for weighted reservoir sampling. To probabilistically verify the correctness of distributed algorithms, we propose checkers for basic operations of big-data frameworks. We show that checking numerous operations can be reduced to two "core" checkers: checking aggregations and checking whether one sequence is a permutation of another. While several approaches to the latter problem have been known for quite some time and are easy to parallelize, our sum-aggregation checker is a novel application of the same data structure that underlies counting Bloom filters and the count-min sketch. We implemented both checkers in Thrill, a big-data framework. Experiments with deliberately introduced errors confirm the detection accuracy predicted by our theoretical analysis, even when we use commonly deployed fast hash functions with theoretically suboptimal properties. Scaling experiments on a supercomputer show that our checkers incur only a very small runtime overhead, in the range of 2%, while the correctness of the result is almost guaranteed.
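    The alias table at the center of the sampling chapter is a classic structure (Walker's method): after an O(n) construction, each weighted draw costs O(1). The sketch below is the standard sequential construction, not the dissertation's constant-auxiliary-memory or parallel variants.

        import random

        def build_alias_table(weights):
            """Classic O(n) alias-table construction (Walker/Vose)."""
            n = len(weights)
            total = sum(weights)
            prob = [w * n / total for w in weights]   # scale so the mean is 1
            alias = [0] * n
            small = [i for i, p in enumerate(prob) if p < 1.0]
            large = [i for i, p in enumerate(prob) if p >= 1.0]
            while small and large:
                s, l = small.pop(), large.pop()
                alias[s] = l                  # s's bucket is topped up by l
                prob[l] -= 1.0 - prob[s]      # l donated that much mass
                (small if prob[l] < 1.0 else large).append(l)
            for i in small + large:
                prob[i] = 1.0                 # numerical leftovers own their bucket
            return prob, alias

        def sample(prob, alias):
            """One O(1) draw proportional to the original weights."""
            i = random.randrange(len(prob))
            return i if random.random() < prob[i] else alias[i]

        prob, alias = build_alias_table([10, 1, 1])
        counts = [0, 0, 0]
        for _ in range(12000):
            counts[sample(prob, alias)] += 1
        print(counts)   # roughly [10000, 1000, 1000]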

    An Approach Based on Particle Swarm Optimization for Inspection of Spacecraft Hulls by a Swarm of Miniaturized Robots

    The remoteness and hazards that are inherent to the operating environments of space infrastructures promote their need for automated robotic inspection. In particular, micrometeoroid and orbital debris impacts and structural fatigue are common sources of damage to spacecraft hulls. Vibration sensing has been used to detect structural damage in spacecraft hulls, as well as in structural health monitoring practices in industry, by deploying static sensors. In this paper, we propose using a swarm of miniaturized vibration-sensing mobile robots realizing a network of mobile sensors. We present a distributed inspection algorithm based on the bio-inspired particle swarm optimization and evolutionary algorithm niching techniques to deliver the task of enumeration and localization of an a priori unknown number of vibration sources on a simplified 2.5D spacecraft surface. Our algorithm is deployed on a swarm of simulated cm-scale wheeled robots. These are guided in their inspection task by sensing vibrations arising from failure points on the surface, which are detected by on-board accelerometers. We study three performance metrics: (1) proximity of the localized sources to the ground-truth locations, (2) time to localize each source, and (3) time to finish the inspection task given a 75% inspection coverage threshold. We find that our swarm is able to successfully localize the present sources.
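    The particle swarm optimization core that such a controller builds on is compact enough to sketch. The Python below shows only the canonical PSO update (inertia plus pulls toward personal and global bests) driving simulated agents toward a single vibration source; the fitness function, constants, and source location are invented for illustration, and the paper's niching machinery for multiple sources is omitted.

        import math, random

        def amplitude(p, source=(3.0, 4.0)):
            """Illustrative fitness: sensed vibration amplitude decays with distance."""
            return 1.0 / (1.0 + math.dist(p, source))

        robots = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(10)]
        vel = [(0.0, 0.0)] * 10
        pbest = list(robots)                  # each robot's best position so far
        gbest = max(robots, key=amplitude)    # swarm-wide best position

        w, c1, c2 = 0.7, 1.5, 1.5             # inertia and attraction constants
        for _ in range(100):
            for i, ((x, y), (vx, vy)) in enumerate(zip(robots, vel)):
                r1, r2 = random.random(), random.random()
                vx = w*vx + c1*r1*(pbest[i][0] - x) + c2*r2*(gbest[0] - x)
                vy = w*vy + c1*r1*(pbest[i][1] - y) + c2*r2*(gbest[1] - y)
                vel[i], robots[i] = (vx, vy), (x + vx, y + vy)
                if amplitude(robots[i]) > amplitude(pbest[i]):
                    pbest[i] = robots[i]
            gbest = max(pbest, key=amplitude)

        print(tuple(round(c, 2) for c in gbest))  # converges near (3.0, 4.0)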

    An Empirical Methodology for Engineering Human Systems Integration

    The systems engineering technical processes are not sufficiently supported by methods and tools that quantitatively integrate human considerations into early system design. Because of this, engineers must often rely on qualitative judgments or delay critical decisions until late in the system lifecycle. Studies reveal that this is likely to result in cost, schedule, and performance consequences. This dissertation presents a methodology to improve the application of systems engineering technical processes for design. The methodology is mathematically rigorous, is grounded in relevant theory, and applies extant human-subjects data to critical systems development challenges. It is expressed in four methods that support early systems engineering activities: a requirements elicitation method, a function allocation method, an input device design method, and a display layout design method. Together these form a coherent approach to early system development. Each method is separately discussed and demonstrated using a prototypical system development program. In total, this original and significant work has broad applicability for systems engineers seeking to improve the engineering of human systems integration.

    Mobile Ad-Hoc Networks

    Being infrastructure-less and without central administration control, wireless ad-hoc networking is playing an increasingly important role in extending the coverage of traditional wireless infrastructure (cellular networks, wireless LANs, etc.). This book includes state-of-the-art techniques and solutions for wireless ad-hoc networks. It focuses on the following topics in ad-hoc networks: vehicular ad-hoc networks, security and caching, TCP in ad-hoc networks, and emerging applications. It aims to provide network engineers and researchers with design guidelines for large-scale wireless ad-hoc networks.

    GMPLS-OBS interoperability and routing scalability in the Internet

    The popularization of the Internet has turned the telecom world upside down over the last two decades. Network operators, vendors, and service providers are being challenged to adapt themselves to Internet requirements in a way that properly serves the huge number of demanding users (residential and business). The Internet (a data-oriented network) is supported by an IP packet-switched architecture on top of a circuit-switched, optical-based architecture (a voice-oriented network), which results in a complex and rather costly infrastructure for the transport of IP traffic (the dominant traffic nowadays). A simpler, IP-adapted network architecture is therefore desired. From the transport network perspective, both Generalized Multi-Protocol Label Switching (GMPLS) and Optical Burst Switching (OBS) technologies are part of the set of solutions to progress towards an IP-over-WDM architecture, providing intelligence in the control and management of resources (i.e. GMPLS) as well as good network resource access and usage (i.e. OBS). The GMPLS framework is the key enabler for orchestrating unified optical network control, and thus reducing network operational expenses (OPEX) while increasing operators' revenues. Simultaneously, the OBS technology is one of the best-positioned switching technologies for realizing the envisioned IP-over-WDM network architecture, leveraging the statistical multiplexing of data plane resources to enable sub-wavelength granularity in optical networks. Despite the GMPLS principle of unified control, little effort has been put into extending it to incorporate the OBS technology, and many open questions remain. From the IP network perspective, the Internet is facing scalability issues as enormous quantities of service instances and devices must be managed. It is now believed that the current Internet features and mechanisms cannot cope with the size and dynamics of the Future Internet. Compact Routing is one of the main breakthrough paradigms in the design of a routing system that scales with the Future Internet requirements. It addresses the fundamental limits of current stretch-1 shortest-path routing in terms of routing table (RT) scalability, aiming at sub-linear growth. Although "static" compact routing works fine, scaling logarithmically in the number of nodes even in scale-free graphs such as the Internet, it does not handle dynamic graphs. Moreover, as multimedia content and services proliferate, multicast is again under the spotlight, as bandwidth efficiency and small RT sizes are desired; however, this makes the problem even worse, as more routing entries must be maintained. In a nutshell, the main objective of this thesis is to contribute fully detailed solutions dealing with both i) GMPLS-OBS control interoperability (Part I), fostering unified control over multiple switching domains and reducing redundancy in IP transport. The proposed solution overcomes every technology-specific interoperability issue and offers (absolute) QoS guarantees, overcoming OBS performance issues by making use of the GMPLS traffic-engineering (TE) features. Key extensions to the GMPLS protocol standards are equally addressed; and ii) a new compact routing scheme for multicast scenarios, in order to overcome the Future Internet inter-domain routing system scalability problem (Part II). To this end, the first known name-independent (i.e. topology-unaware) compact multicast routing algorithm is proposed. In addition, the AnyTraffic Labeled concept is introduced, saving forwarding entries by sharing a single forwarding entry between unicast and multicast traffic. Exhaustive simulation campaigns are run in both cases in order to assess the reliability and feasibility of the proposals.
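    The "stretch" that compact routing trades against routing table size has a direct numeric reading: the ratio of the length of the path a routing scheme actually uses to the length of the shortest path. A minimal Python sketch (the toy graph and route are invented for illustration):

        from collections import deque

        def shortest_len(adj, s, t):
            """BFS hop count: the stretch-1 baseline that compact routing relaxes."""
            dist = {s: 0}
            q = deque([s])
            while q:
                u = q.popleft()
                if u == t:
                    return dist[u]
                for v in adj[u]:
                    if v not in dist:
                        dist[v] = dist[u] + 1
                        q.append(v)
            return None

        def stretch(adj, route):
            """Ratio of a scheme's route length to the shortest-path length."""
            return (len(route) - 1) / shortest_len(adj, route[0], route[-1])

        adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}   # a 4-cycle
        print(stretch(adj, [0, 1, 2, 3]))   # 3 hops instead of 1: stretch 3.0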

    A Benchmarking Algorithm to Determine Minimum Aggregation Delay for Data Gathering Trees and an Analysis of the Diameter-Aggregation Delay Tradeoff

    Aggregation delay is the minimum number of time slots required to aggregate data along the edges of a data gathering tree (DG tree) spanning all the nodes in a wireless sensor network (WSN). We propose a benchmarking algorithm to determine the minimum possible aggregation delay for DG trees in a WSN. We assume the availability of a sufficient number of unique CDMA (Code Division Multiple Access) codes, so that intermediate nodes can simultaneously aggregate data from their child nodes if the latter are ready with the data. An intermediate node must still schedule non-overlapping time slots to sequentially aggregate data from its own child nodes (one time slot per child node). We show that the minimum aggregation delay for a DG tree depends on the underlying design choices (bottleneck node-weight based or bottleneck link-weight based) behind its construction. We observe that bottleneck node-weight based DG trees incur a smaller diameter and a larger number of child nodes per intermediate node, whereas bottleneck link-weight based DG trees incur a larger diameter and a much smaller number of child nodes per intermediate node. As a result, we observe a complex diameter-aggregation delay tradeoff for data gathering trees in WSNs.
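    Under the CDMA assumption above, the minimum aggregation delay of a DG tree follows from a classic greedy argument: every parent polls its slowest-to-finish subtree first, so with the children of a node sorted by their own delays in descending order, the i-th child (1-based) can be received no earlier than slot delay_i + i. The Python sketch below computes this bound bottom-up; it illustrates the model rather than reproducing the paper's benchmarking algorithm.

        def min_aggregation_delay(children, root):
            """Minimum convergecast slots for a DG tree, assuming CDMA codes
            let different parents receive concurrently while each parent
            still needs one slot per child."""
            def delay(v):
                ds = sorted((delay(c) for c in children.get(v, [])), reverse=True)
                return max((d + i for i, d in enumerate(ds, start=1)), default=0)
            return delay(root)

        # a small DG tree rooted at the sink (node 0)
        tree = {0: [1, 2], 1: [3, 4, 5], 2: [6]}
        print(min_aggregation_delay(tree, 0))   # 4 slots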

    Routing for Wireless Sensor Networks: From Collection to Event-Triggered Applications

    Wireless Sensor Networks (WSNs) are collections of sensing devices using wireless communication to exchange data. In the past decades, steep advancements in the areas of microelectronics and communication systems have driven an explosive growth in the deployment of WSNs. Novel WSN applications have penetrated multiple areas, from monitoring the structural stability of historic buildings, to tracking animals in order to understand their behavior, to monitoring human health. The need to convey data from increasingly complex applications in a reliable and cost-effective manner translates into stringent performance requirements for the underlying WSNs. In this thesis, we focus on developing routing protocols for multi-hop WSNs that significantly improve their reliability, energy consumption, and latency. Acknowledging the need for application-specific trade-offs, we split our contribution into two parts. Part 1 focuses on collection protocols, catering to applications with high reliability and energy efficiency constraints, while the protocols developed in Part 2 are subject to an additional bounded-latency constraint. The two mechanisms introduced in the first part, WiseNE and Rep, enable the use of composite metrics and thus significantly improve link estimation accuracy and transmission reliability, at an energy expense far lower than that of previous proposals. The novel beaconing scheme WiseNE enables the energy-efficient addition of the RSSI (Received Signal Strength Indication) and LQI (Link Quality Indication) metrics to the link quality estimate by decoupling the sampling and exploration periods of each mote. This decoupling allows the use of the Trickle algorithm, a key driver of protocol energy efficiency, in conjunction with composite metrics. WiseNE has been applied to the Triangle Metric and validated in an online deployment. Part 1 continues by introducing Rep, a novel sampling mechanism that leverages the packet repetitions already present in low-power preamble-sampling MAC protocols to improve WSN energy consumption by one order of magnitude. WiseNE, Rep, and the novel PRSSI (Penalized RSSI, a combination of PRR and RSSI) composite metric have been validated in a real smart-city deployment. Part 2 introduces two mechanisms that were developed within the WiseSkin project (an initiative aimed at designing highly sensitive artificial skin for human limb prostheses) and are generally applicable to the domain of cyber-physical systems. It starts with Glossy-W, a protocol that leverages the superior energy-latency trade-off of flooding schemes based on concurrent transmissions. Glossy-W ensures the stringent synchronization requirements necessary for robust flooding, irrespective of the number of motes simultaneously reporting an event. Part 2 also introduces SCS (Synchronized Channel Sampling), a novel mechanism capable of reducing the power required for periodic polling while maintaining event detection reliability and enhancing network coexistence. Our testbed experiments show that SCS reduces the energy consumption of the state-of-the-art Back-to-Back Robust Flooding protocol by over one third, while maintaining equivalent reliability and remaining compatible with simultaneous event detection. SCS' benefits extend to the entire family of state-of-the-art protocols relying on concurrent transmissions.
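    The Trickle algorithm credited above for protocol energy efficiency is standardized (RFC 6206) and small enough to sketch. The Python class below models only its timer logic: beacons are suppressed when enough consistent traffic is overheard, intervals double while the network is quiet, and an inconsistency resets the interval so changes propagate quickly. Parameter values are illustrative.

        import random

        class TrickleTimer:
            """Minimal Trickle (RFC 6206) timer sketch; network I/O omitted."""

            def __init__(self, i_min=1.0, i_max=1024.0, k=1):
                self.i_min, self.i_max, self.k = i_min, i_max, k
                self.reset()

            def reset(self):
                """On inconsistency: shrink the interval to react quickly."""
                self.interval = self.i_min
                self._new_round()

            def _new_round(self):
                self.counter = 0
                # transmit at a random point in the second half of the interval
                self.t = random.uniform(self.interval / 2, self.interval)

            def hear_consistent(self):
                """Count overheard consistent beacons from neighbors."""
                self.counter += 1

            def should_transmit(self):
                """Suppress our beacon if k or more neighbors already spoke."""
                return self.counter < self.k

            def interval_expired(self):
                """On expiry: double the interval, up to i_max (low idle cost)."""
                self.interval = min(2 * self.interval, self.i_max)
                self._new_round()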

    Accurate and Resource-Efficient Monitoring for Future Networks

    Monitoring functionality is a key component of any network management system. It is essential for profiling network resource usage, detecting attacks, and capturing the performance of a multitude of services using the network. Traditional monitoring solutions operate on long timescales, producing periodic reports that are mostly used for manual and infrequent network management tasks. However, these practices have recently been questioned by the advent of Software Defined Networking (SDN). By empowering management applications with the right tools to perform automatic, frequent, and fine-grained network reconfigurations, SDN has made these applications more dependent than before on the accuracy and timeliness of monitoring reports. As a result, monitoring systems are required to collect considerable amounts of heterogeneous measurement data, process them in real time, and expose the resulting knowledge on short timescales to network decision-making processes. Satisfying these requirements is extremely challenging given today's larger network scales, massive and dynamic traffic volumes, and stringent constraints on time availability and hardware resources. This PhD thesis tackles this important challenge by investigating how an accurate and resource-efficient monitoring function can be realised in the context of future, software-defined networks. Novel monitoring methodologies, designs, and frameworks are provided in this thesis, which scale with increasing network sizes and automatically adjust to changes in the operating conditions. These achieve the goal of efficient measurement collection and reporting, lightweight measurement-data processing, and timely monitoring knowledge delivery.
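    One way to picture the "automatically adjust to changes in the operating conditions" requirement is an adaptive polling loop: poll counters more often when the measured signal is volatile and back off when it is stable, trading collection overhead against report freshness. The Python sketch below is purely illustrative; the formula and constants are invented and are not the thesis's actual scheme.

        def next_poll_interval(samples, base=30.0, lo=1.0, hi=300.0):
            """Shorten the reporting interval when recent samples are volatile,
            lengthen it when they are stable (bounds in seconds, illustrative)."""
            if len(samples) < 2:
                return base
            mean = sum(samples) / len(samples)
            var = sum((s - mean) ** 2 for s in samples) / (len(samples) - 1)
            cv = (var ** 0.5) / mean if mean else 1.0   # coefficient of variation
            return max(lo, min(hi, base / (1.0 + 10.0 * cv)))

        print(next_poll_interval([100, 101, 99, 100]))   # stable: close to base
        print(next_poll_interval([10, 200, 30, 150]))    # bursty: poll much faster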