JTP: An Energy-conscious Transport Protocol for Wireless Ad Hoc Networks
Within a recently developed low-power ad hoc network system, we present a transport protocol (JTP) whose goal is to reduce power consumption without sacrificing the delivery requirements of applications. JTP has the following features: it is lightweight, in that end-nodes control in-network actions by encoding delivery requirements in packet headers; it enables applications to specify a range of reliability requirements, thus allocating the right energy budget to packets; it minimizes feedback control traffic from the destination by varying its frequency based on delivery requirements and the stability of the network; it minimizes energy consumption by implementing in-network caching and increasing the chances that data retransmission requests from destinations "hit" these caches, thus avoiding costly source retransmissions; and it fairly allocates bandwidth among flows by backing off the sending rate of a source to account for in-network retransmissions on its behalf. Analysis and extensive simulations demonstrate the energy gains of JTP over one-size-fits-all transport protocols. Funded by the Defense Advanced Research Projects Agency (AFRL FA8750-06-C-0199).
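The header-encoded delivery requirements, reduced feedback, and in-network caching described above can be sketched as follows. All field names and the cache logic are illustrative assumptions for exposition, not JTP's actual wire format:

```python
from dataclasses import dataclass

# Hypothetical sketch of encoding per-packet delivery requirements in
# headers, in the spirit of JTP; names are assumptions, not the paper's.

@dataclass
class JTPHeader:
    seq: int
    reliability: float      # fraction of packets the app needs delivered
    feedback_interval: int  # destination feedback period, in packets

def feedback_due(hdr: JTPHeader, received: int) -> bool:
    """The destination sends feedback only every `feedback_interval`
    packets, so lower-reliability flows generate less control traffic."""
    return received % hdr.feedback_interval == 0

# In-network cache: a retransmission request that "hits" an intermediate
# node's cache avoids an energy-costly retransmission from the source.
cache = {}

def on_forward(hdr: JTPHeader, payload: bytes) -> None:
    cache[hdr.seq] = payload   # cache the payload while forwarding it

def on_retransmit_request(seq: int):
    return cache.get(seq)      # hit: serve locally; miss: ask the source
```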
Security and Privacy Issues in Wireless Mesh Networks: A Survey
This book chapter identifies various security threats in wireless mesh
networks (WMNs). Keeping in mind the critical requirement of security and user
privacy in WMNs, this chapter provides a comprehensive overview of various
possible attacks on different layers of the communication protocol stack for
WMNs and their corresponding defense mechanisms. First, it identifies the
security vulnerabilities in the physical, link, network, transport, and
application layers. Furthermore, various possible attacks on the key management protocols,
user authentication and access control protocols, and user privacy preservation
protocols are presented. After enumerating various possible attacks, the
chapter provides a detailed discussion of existing security mechanisms
and protocols to defend against, and wherever possible prevent, these
attacks. Comparative analyses of the security schemes are also presented with
regard to the cryptographic schemes used, the key management strategies
deployed, the use of any trusted third party, and the computation and
communication overhead involved. The chapter then presents a brief discussion of various trust management
approaches for WMNs since trust and reputation-based schemes are increasingly
becoming popular for enforcing security in wireless networks. A number of open
problems in security and privacy issues for WMNs are subsequently discussed
before the chapter is finally concluded.
Comment: 62 pages, 12 figures, 6 tables. This chapter is an extension of the
author's previous arXiv submission (arXiv:1102.1226), and there is some text
overlap with that submission.
Ethernet - a survey on its fields of application
During the last decades, Ethernet progressively became the most widely used local area networking (LAN) technology. Apart from LAN installations, Ethernet also became attractive for many other fields of application, ranging from industry to avionics, telecommunication, and multimedia. The expanded application of this technology is mainly due to its significant assets, such as reduced cost, backward-compatibility, flexibility, and expandability. However, this new trend raises some problems concerning the services of the protocol and the requirements of each application. Therefore, specific adaptations prove essential to integrate this communication technology into each field of application. Our primary objective is to show how Ethernet has been enhanced to comply with the specific requirements of several application fields, particularly in transport, embedded, and multimedia contexts. The paper first describes the common Ethernet LAN technology and highlights its main features. It then reviews the most important specific Ethernet versions with respect to each application field's requirements. Finally, we compare these different fields of application, focusing in particular on the fundamental concepts and the quality-of-service capabilities of each proposal.
An Accountability Architecture for the Internet
In the current Internet, senders are not accountable for the packets they send. As a result, malicious users send unwanted traffic that wastes shared resources and degrades network performance. Stopping such attacks requires identifying the responsible principal and filtering any unwanted traffic it sends. However, senders can obscure their identity: a packet identifies its sender only by the source address, but the Internet Protocol does not enforce that this address be correct. Additionally, affected destinations have no way to prevent the sender from continuing to cause harm.
An accountable network binds sender identities to packets they send for the purpose of holding senders responsible for their traffic. In this dissertation, I present an accountable network-level architecture that strongly binds senders to packets and gives receivers control over who can send traffic to them. Holding senders accountable for their actions would prevent many of the attacks that disrupt the Internet today.
Previous work in attack prevention proposes methods of binding packets to senders, giving receivers control over who sends what to them, or both. However, these methods all require trusted elements on the forwarding path, either to assist in identifying the sender or to filter unwanted packets. These elements are often not under the control of the receiver and may become corrupt. This dissertation shows that the Internet architecture can be extended to allow receivers to block traffic from unwanted senders, even in the presence of malicious devices in the forwarding path.
This dissertation validates this thesis with three contributions. The first contribution is DNA, a network architecture that strongly binds packets to their sender, allowing routers to reject unaccountable traffic and recipients to block traffic from unwanted senders. Unlike prior work, which trusts on-path devices to behave correctly, the only trusted component in DNA is an identity certification authority. All other entities may misbehave and are either blocked or evicted from the network.
The second contribution is NeighborhoodWatch, a secure, distributed, scalable object store that is capable of withstanding misbehavior by its constituent nodes. DNA uses NeighborhoodWatch to store receiver-specific requests to block individual senders.
The third contribution is VanGuard, an accountable capability architecture. Capabilities are small, receiver-generated tokens that grant the sender permission to send traffic to the receiver. Existing capability architectures are not accountable, assume a protected channel for obtaining capabilities, and allow on-path devices to steal capabilities. VanGuard builds a capability architecture on top of DNA, preventing capability theft and protecting the capability request channel by allowing receivers to block senders that flood the channel. Once a sender obtains capabilities, it no longer needs to sign traffic, thus allowing greater efficiency than DNA alone.
The DNA architecture demonstrates that it is possible to create an accountable network architecture in which none of the devices on the forwarding path must be trusted. DNA holds senders responsible for their traffic by allowing receivers to block senders; to store this blocking state, DNA relies on the NeighborhoodWatch DHT. VanGuard extends DNA and reduces its overhead by incorporating capabilities, which give destinations further control over the traffic that sources send to them.
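The receiver-generated capabilities described above could be sketched as follows. The use of HMAC and the token format are assumptions for illustration, not the dissertation's wire format:

```python
import hashlib
import hmac

# Illustrative sketch of a receiver-minted capability: the receiver
# derives a token bound to a specific sender identity, and later verifies
# it on each packet. Binding the token to the sender's identity means an
# on-path thief presenting a different identity cannot reuse it.

def mint_capability(receiver_key: bytes, sender_id: bytes) -> bytes:
    """Receiver grants `sender_id` permission to send traffic to it."""
    return hmac.new(receiver_key, sender_id, hashlib.sha256).digest()

def verify_capability(receiver_key: bytes, sender_id: bytes,
                      token: bytes) -> bool:
    """Constant-time check of a presented token against the expected one."""
    expected = mint_capability(receiver_key, sender_id)
    return hmac.compare_digest(expected, token)
```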
Cross-Layer Techniques for Efficient Medium Access in Wi-Fi Networks
IEEE 802.11 (Wi-Fi) wireless networks share the wireless medium using a
Carrier Sense Multiple Access (CSMA) Medium Access Control (MAC) protocol.
The MAC protocol is a central determiner of Wi-Fi networks’ efficiency: the
fraction of the capacity available in the physical layer that Wi-Fi-equipped
hosts can use in practice. The MAC protocol’s design is intended to allow
senders to share the wireless medium fairly while still allowing high utilisation.
This thesis develops techniques that allow Wi-Fi senders to send more data
using fewer medium acquisitions, reducing the overhead of idle periods, and
thus improving end-to-end goodput. Our techniques address the problems we
identify with Wi-Fi’s status quo. Today’s commodity Linux Wi-Fi/IP software
stack and Wi-Fi cards waste medium acquisitions as they fail to queue enough
packets that would allow for effective sending of multiple frames per wireless
medium acquisition. In addition, for bi-directional protocols such as TCP,
TCP data and TCP ACKs contend for the wireless channel, wasting medium
acquisitions (and thus capacity). Finally, the probing mechanism used for
bit-rate adaptation in Wi-Fi networks increases channel acquisition overhead.
We describe the design and implementation of Aggregate Aware Queueing
(AAQ), a fair queueing discipline that coordinates the scheduling of frame transmissions
with the aggregation layer in the Wi-Fi stack, allowing more frames per
channel acquisition. Furthermore, we describe Hierarchical Acknowledgments
(HACK) and Transmission Control Protocol Acknowledgment Optimisation
(TAO), techniques that reduce channel acquisitions for TCP flows, further
improving goodput. Finally, we design and implement Aggregate Aware Rate Control (AARC), a bit-rate adaptation algorithm that reduces channel acquisition
overheads incurred by the probing mechanism common in today’s
commodity Wi-Fi systems. We implement our techniques on real Wi-Fi hardware
to demonstrate their practicality, and measure their performance on real
testbeds, using off-the-shelf commodity Wi-Fi hardware where possible, and
software-defined radio hardware for those techniques that require modification
of the Wi-Fi implementation unachievable on commodity hardware. The techniques
described in this thesis offer up to 2x aggregate goodput improvement
compared to the stock Linux Wi-Fi stack.
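The aggregation idea behind AAQ, sending more frames per medium acquisition, can be sketched as follows. The scheduling policy shown (longest backlog first) is an illustrative stand-in, not the thesis's fair-queueing discipline:

```python
from collections import defaultdict, deque

# Minimal sketch: instead of dequeuing one frame per medium acquisition,
# drain up to `max_agg` frames destined for the same receiver, amortizing
# contention overhead across an aggregate (e.g. an A-MPDU).

class AggregatingQueue:
    def __init__(self, max_agg: int = 4):
        self.max_agg = max_agg
        self.queues = defaultdict(deque)   # per-destination frame queues

    def enqueue(self, dst: str, frame: bytes) -> None:
        self.queues[dst].append(frame)

    def next_burst(self):
        """One medium acquisition: pick the destination with the most
        backlog and send up to `max_agg` of its frames in one burst."""
        if not any(self.queues.values()):
            return None, []
        dst = max(self.queues, key=lambda d: len(self.queues[d]))
        q = self.queues[dst]
        burst = [q.popleft() for _ in range(min(self.max_agg, len(q)))]
        return dst, burst
```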
Performance Comparison of Dual Connectivity and Hard Handover for LTE-5G Tight Integration in mmWave Cellular Networks
MmWave communications are expected to play a major role in the Fifth
generation of mobile networks. They offer a potential multi-gigabit throughput
and an ultra-low radio latency, but at the same time suffer from high isotropic
pathloss, and a coverage area much smaller than that of LTE macrocells. In
order to address these issues, highly directional beamforming and a very
high-density deployment of mmWave base stations were proposed. This Thesis aims
to improve the reliability and performance of the 5G network by studying its
tight and seamless integration with the current LTE cellular network. In
particular, the LTE base stations can provide a coverage layer for 5G mobile
terminals, because they operate on microwave frequencies, which are less
sensitive to blockage and have a lower pathloss. This document is a copy of the
Master's Thesis carried out by Mr. Michele Polese under the supervision of Dr.
Marco Mezzavilla and Prof. Michele Zorzi. It will propose an LTE-5G tight
integration architecture, based on mobile terminals' dual connectivity to LTE
and 5G radio access networks, and will evaluate which are the new network
procedures that will be needed to support it. Moreover, this new architecture
will be implemented in the ns-3 simulator, and a thorough simulation campaign
will be conducted in order to evaluate its performance, with respect to the
baseline of handover between LTE and 5G.
Comment: Master's Thesis carried out by Mr. Michele Polese under the
supervision of Dr. Marco Mezzavilla and Prof. Michele Zorzi.
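The dual-connectivity idea evaluated in the thesis, keeping both an LTE and a mmWave 5G link and falling back to the LTE coverage layer when the mmWave link degrades, can be sketched as a simple link-selection rule. The thresholds and hysteresis values are illustrative assumptions, not taken from the thesis:

```python
# Toy sketch of dual-connectivity link selection: route over 5G when the
# mmWave signal quality is adequate, fall back to LTE when it is blocked.

def select_link(mmwave_sinr_db: float, current: str,
                attach_thresh: float = 5.0,
                detach_thresh: float = 0.0) -> str:
    """Hysteresis between the attach and detach thresholds avoids
    ping-ponging between the two radio access networks."""
    if current == "LTE" and mmwave_sinr_db >= attach_thresh:
        return "5G"
    if current == "5G" and mmwave_sinr_db < detach_thresh:
        return "LTE"
    return current
```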
Content-Aware Multimedia Communications
The demands for fast, economic and reliable dissemination of multimedia
information are steadily growing within our society. While people and
economy increasingly rely on communication technologies, engineers still
struggle with their growing complexity.
Complexity in multimedia communication originates from several sources. The
most prominent is the unreliability of packet networks like the Internet.
Recent advances in scheduling and error control mechanisms for streaming
protocols have shown that the quality and robustness of multimedia delivery
can be improved significantly when protocols are aware of the content they
deliver. However, the proposed mechanisms require close cooperation between
transport systems and application layers which increases the overall system
complexity. Current approaches also require expensive metrics and focus on
special encoding formats only. A general and efficient model is missing so
far.
This thesis presents efficient and format-independent solutions to support
cross-layer coordination in system architectures. In particular, the first
contribution of this work is a generic dependency model that enables
transport layers to access content-specific properties of media streams,
such as dependencies between data units and their importance. The second
contribution is the design of a programming model for streaming
communication and its implementation as a middleware architecture. The
programming model hides the complexity of protocol stacks behind simple
programming abstractions, but exposes cross-layer control and monitoring
options to application programmers. For example, our interfaces allow
programmers to choose appropriate failure semantics at design time while
they can refine error protection and visibility of low-level errors at
run-time.
Based on some examples we show how our middleware simplifies the
integration of stream-based communication into large-scale application
architectures. An important result of this work is that despite cross-layer
cooperation, neither application nor transport protocol designers
experience an increase in complexity. Application programmers can even
reuse existing streaming protocols which effectively increases system
robustness.
The demand of our society for cost-effective and reliable communication is
growing steadily. While we make ourselves ever more dependent on modern
communication technologies, the engineers of these technologies must both
satisfy the demand for the rapid introduction of new products and master the
growing complexity of the systems.
The transmission of multimedia content such as video and audio data is
especially non-trivial. One of the most prominent reasons for this is the
unreliability of today's networks, such as the Internet. Packet losses and
fluctuating delays can massively degrade presentation quality. As recent
developments in the field of streaming protocols show, however, the quality
and robustness of a transmission can be controlled efficiently if streaming
protocols exploit information about the content of the data they transport.
Existing approaches that describe the content of multimedia data streams,
however, are mostly specialized for individual compression schemes and use
computationally intensive metrics, which significantly reduces their
practical value. Moreover, the information exchange requires close
cooperation between applications and transport layers. Since the interfaces
of current system architectures are not prepared for this, either the
interfaces must be extended or alternative architectural concepts must be
created. The danger of both variants, however, is that the complexity of a
system may thereby increase even further.
The central goal of this dissertation is therefore to achieve cross-layer
coordination while simultaneously reducing complexity. Here the work makes
two contributions to the current state of research. First, it defines a
universal model for describing content attributes, such as importance
values and dependency relations within a data stream. Transport layers can
use this knowledge for efficient error control. Second, the work describes
the Noja programming model for multimedia middleware. Noja defines
abstractions for the transmission and control of multimedia streams that
enable the coordination of streaming protocols with applications. For
example, programmers can select appropriate failure semantics and
communication topologies, and then refine and control the concrete error
protection at run time.
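The generic dependency model, data units carrying importance values and dependency relations that transport layers can use for error control, could be sketched as follows. The class and function names are assumptions for illustration, not the thesis's actual interfaces:

```python
from dataclasses import dataclass, field

# Illustrative sketch: each media data unit carries an importance value
# and references to the units it depends on (e.g. a P-frame depending on
# an I-frame). A content-aware transport may then discard a unit only if
# nothing still queued depends on it.

@dataclass
class DataUnit:
    uid: int
    importance: float
    deps: list = field(default_factory=list)  # uids this unit depends on

def droppable(units: list, uid: int) -> bool:
    """A unit may be discarded only if no other queued unit depends on it."""
    return all(uid not in u.deps for u in units if u.uid != uid)

def shed_load(units: list, budget: int) -> list:
    """Keep at most `budget` units, discarding the least important
    droppable units first."""
    kept = sorted(units, key=lambda u: u.importance)
    while len(kept) > budget:
        victim = next((u for u in kept if droppable(kept, u.uid)), None)
        if victim is None:
            break
        kept.remove(victim)
    return sorted(kept, key=lambda u: u.uid)
```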
Implementation of the 64-meter-diameter Antennas at the Deep Space Stations in Australia and Spain
The management and construction aspects of the Overseas 64-m Antenna Project, in which two 64-m antennas were constructed at the Tidbinbilla Deep Space Communications Complex in Australia and at the Madrid Deep Space Communications Complex in Spain, are described. With the completion of these antennas, the Deep Space Network is equipped with three 64-m antennas spaced around the world to maintain continuous coverage of spacecraft operations. These antennas provide approximately a 7-dB gain over the capabilities of the existing 26-m antenna nets. The report outlines the project organization and management, resource utilization, fabrication, quality assurance, and construction methods by which the project was successfully completed. Major problems and their solutions are described, as well as recommendations for future projects.
Flow-oriented anomaly-based detection of denial of service attacks with flow-control-assisted mitigation
Flooding-based distributed denial-of-service (DDoS) attacks present a serious and major threat to the targeted enterprises and hosts. Current protection technologies are still largely inadequate in mitigating such attacks, especially if they are large-scale. In this doctoral dissertation, the Computer Network Management and Control System (CNMCS) is proposed and investigated; it consists of the Flow-based Network Intrusion Detection System (FNIDS), the Flow-based Congestion Control (FCC) System, and the Server Bandwidth Management System (SBMS). These components form a composite defense system intended to protect against DDoS flooding attacks. The system as a whole adopts a flow-oriented and anomaly-based approach to the detection of these attacks, as well as a control-theoretic approach to adjust the flow rate of every link to sustain the high-priority flow rates at their desired level. The results showed that the misclassification rates of FNIDS are low, less than 0.1%, for the investigated DDoS attacks, while the fine-grained service differentiation and resource isolation provided within the FCC comprise a novel and powerful built-in protection mechanism that helps mitigate DDoS attacks.
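The flow-oriented, anomaly-based detection approach could be sketched as a per-flow statistical test. The feature (packet rate) and the deviation threshold are assumptions for illustration, not the dissertation's actual classifier:

```python
import math

# Illustrative sketch: track a flow's running packet-rate statistics
# (Welford's online algorithm) and flag a sample that deviates from the
# running mean by more than k standard deviations as anomalous.

class FlowAnomalyDetector:
    def __init__(self, k: float = 3.0):
        self.k = k
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0   # running sum of squared deviations

    def observe(self, rate: float) -> None:
        """Fold a new rate sample into the running statistics."""
        self.n += 1
        delta = rate - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (rate - self.mean)

    def is_anomalous(self, rate: float) -> bool:
        """True if `rate` lies more than k standard deviations
        from the flow's running mean."""
        if self.n < 2:
            return False
        std = math.sqrt(self.m2 / (self.n - 1))
        return abs(rate - self.mean) > self.k * std
```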