206 research outputs found
A universal, operational theory of multi-user communication with fidelity criteria
Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2012. Cataloged from PDF version of thesis. Includes bibliographical references (p. 237-239).
This thesis has two flavors: 1. A theory of universal multi-user communication with fidelity criteria: We prove the optimality of digital communication for universal communication with fidelity criteria, both in the point-to-point setting and in the multi-user setting. In other words, we prove a universal source-channel separation theorem for communication with a distortion criterion, in both the point-to-point and the multi-user settings. In the multi-user setting, the problem is unicast, that is, the sources which the various users want to communicate to each other are independent of each other. The universality is over the medium of communication: we assume only that the medium belongs to some family. In both the point-to-point and the multi-user settings, we assume that codes can be random: the encoder may come from a family of deterministic codes, the decoder has access to the particular realization of the deterministic code, and finally, an average is taken over all these deterministic codes. In Shannon's theory, random coding is a proof technique. In our setting, however, random codes are essential: universal source-channel separation does not hold if codes are not allowed to be random. This happens because we are asking the universal question. We also show the partial applicability of our results to the traditional wireless telephony problem. 2. An operational theory of communication with a fidelity criterion: We prove the source-channel separation theorem operationally: we rely only on the definitions of channel capacity as the maximum rate of reliable communication and of the rate-distortion function as the minimum rate needed to compress a source to within a certain distortion level.
We do not rely on functional simplifications, for example, mutual-information expressions, in the proofs. By operational, we do not mean that what we are doing is "practically operational". Our view can also be seen as a layered black-box view: if there is a black box that is capable of one form of communication, then the black box can be layered in order to accomplish another form of communication.
by Mukul Agarwal. Ph.D.
Information Spectrum Approach to the Source Channel Separation Theorem
A source-channel separation theorem for a general channel has recently been
shown by Agarwal et al. This theorem states that if there exists a coding
scheme that achieves a maximum distortion level d_{max} over a general channel
W, then reliable communication can be accomplished over this channel at rates
less than R(d_{max}), where R(.) is the rate-distortion function of the source.
The source, however, is essentially constrained to be discrete and memoryless
(DMS). In this work we prove a stronger claim in which the source is general,
satisfying only a "sphere-packing optimality" property, and the channel is
completely general. Furthermore, we show that if the channel satisfies the
strong converse property as defined by Han and Verdú, then the same statement
can be made with d_{avg}, the average distortion level, replacing d_{max}. Unlike
the earlier proofs, we use information-spectrum methods to prove the statements,
and the results extend quite easily to other situations.
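In symbols, the separation claim discussed above can be sketched as follows, using the abstract's own notation (the precise regularity conditions on the source and channel are those stated in the papers, not reproduced here):

```latex
% Separation claim (sketch): achievable distortion implies achievable rate.
% If a source with rate-distortion function R(\cdot) can be reproduced over
% channel W with maximum distortion d_{\max}, then every rate below
% R(d_{\max}) supports reliable communication over W:
\exists\,\text{scheme with distortion} \le d_{\max} \text{ over } W
\;\Longrightarrow\;
\forall\, R < R(d_{\max}):\;
\lim_{n \to \infty} P_e^{(n)}(R, W) = 0 .
```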
Source Broadcasting to the Masses: Separation has a Bounded Loss
This work discusses the source broadcasting problem, i.e. transmitting a
source to many receivers via a broadcast channel. The optimal rate-distortion
region for this problem is unknown. The separation approach divides the problem
into two complementary problems: source successive refinement and broadcast
channel transmission. We provide bounds on the loss incurred by applying
time-sharing and separation in source broadcasting. If the broadcast channel is
degraded, it turns out that separation-based time-sharing achieves at least a
factor of the joint source-channel optimal rate, and this factor has a positive
limit even if the number of receivers increases to infinity. For the AWGN
broadcast channel a better bound is introduced, implying that all achievable
joint source-channel schemes have a rate within one bit of the separation-based
achievable rate region for two receivers, or within bits for receivers.
Interoperability of wireless communication technologies in hybrid networks: Evaluation of end-to-end interoperability issues and quality of service requirements
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. Hybrid networks employing wireless communication technologies have nowadays brought closer the vision of communication "anywhere, any time, with anyone". Such communication technologies consist of various standards, protocols, architectures, characteristics, models, devices, and modulation and coding techniques. These different technologies naturally share some common characteristics, but there are also many important differences. New advances in these technologies are emerging very rapidly, with the advent of new models, characteristics, protocols and architectures. This rapid evolution imposes many challenges and issues to be addressed, and of particular importance are the interoperability issues of the following wireless technologies: Wireless Fidelity (Wi-Fi) IEEE 802.11, Worldwide Interoperability for Microwave Access (WiMAX) IEEE 802.16, Single Channel per Carrier (SCPC), Digital Video Broadcasting over Satellite (DVB-S/DVB-S2), and Digital Video Broadcasting Return Channel through Satellite (DVB-RCS). Due to the differences amongst wireless technologies, these technologies do not generally interoperate easily with each other because of various interoperability and Quality of Service (QoS) issues.
The aim of this study is to assess and investigate end-to-end interoperability issues and QoS requirements, such as bandwidth, delays, jitter, latency, packet loss, throughput, TCP performance, UDP performance, unicast and multicast services and availability, on hybrid wireless communication networks (employing both satellite broadband and terrestrial wireless technologies).
The thesis provides an introduction to wireless communication technologies followed by a review of previous research studies on Hybrid Networks (both satellite and terrestrial wireless technologies, particularly Wi-Fi, WiMAX, DVB-RCS, and SCPC). Previous studies have discussed Wi-Fi, WiMAX, DVB-RCS, SCPC and 3G technologies and their standards as well as their properties and characteristics, such as operating frequency, bandwidth, data rate, basic configuration, coverage, power, interference, social issues, security problems, physical and MAC layer design and development issues. Although some previous studies provide valuable contributions to this area of research, they are limited to link layer characteristics, TCP performance, delay, bandwidth, capacity, data rate, and throughput. None of the studies cover all aspects of end-to-end interoperability issues and QoS requirements; such as bandwidth, delay, jitter, latency, packet loss, link performance, TCP and UDP performance, unicast and multicast performance, at end-to-end level, on Hybrid wireless networks.
Interoperability issues are discussed in detail and a comparison of the different technologies and protocols was done using appropriate testing tools, assessing various performance measures including: bandwidth, delay, jitter, latency, packet loss, throughput and availability testing. The standards, protocol suite/ models and architectures for Wi-Fi, WiMAX, DVB-RCS, SCPC, alongside with different platforms and applications, are discussed and compared. Using a robust approach, which includes a new testing methodology and a generic test plan, the testing was conducted using various realistic test scenarios on real networks, comprising variable numbers and types of nodes. The data, traces, packets, and files were captured from various live scenarios and sites. The test results were analysed in order to measure and compare the characteristics of wireless technologies, devices, protocols and applications.
The motivation of this research is to study all the end-to-end interoperability issues and Quality of Service requirements for rapidly growing Hybrid Networks in a comprehensive and systematic way.
The significance of this research is that it is based on a comprehensive and systematic investigation of issues and facts, instead of hypothetical ideas/scenarios or simulations, which informed the design of a test methodology for empirical data gathering by real network testing, suitable for the measurement of hybrid network single-link or end-to-end issues using proven test tools.
This systematic investigation of the issues encompasses an extensive series of tests measuring delay, jitter, packet loss, bandwidth, throughput, availability, performance of audio and video session, multicast and unicast performance, and stress testing. This testing covers most common test scenarios in hybrid networks and gives recommendations in achieving good end-to-end interoperability and QoS in hybrid networks.
Contributions of study include the identification of gaps in the research, a description of interoperability issues, a comparison of most common test tools, the development of a generic test plan, a new testing process and methodology, analysis and network design recommendations for end-to-end interoperability issues and QoS requirements. This covers the complete cycle of this research.
It is found that UDP is more suitable than TCP for hybrid wireless networks, particularly for the demanding applications considered, since TCP presents significant problems for multimedia and live traffic, which impose strict QoS requirements on delay, jitter, packet loss and bandwidth. The main bottleneck for satellite communication is the delay of approximately 600 to 680 ms due to the long distance factor (and the finite speed of light) when communicating over geostationary satellites.
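A back-of-the-envelope check of the quoted delay figure (a sketch: the 600-680 ms measured in the thesis also includes coding, processing and queuing delay, which are not modelled here):

```python
# Propagation delay over a geostationary (GEO) satellite link.
# Assumptions: satellite ~35,786 km directly overhead; speed of light
# in vacuum ~299,792 km/s. Real links add processing and queuing delay.

GEO_ALTITUDE_KM = 35_786
SPEED_OF_LIGHT_KM_S = 299_792

# A one-way trip traverses ground -> satellite -> ground (two hops).
one_way_s = 2 * GEO_ALTITUDE_KM / SPEED_OF_LIGHT_KM_S
round_trip_s = 2 * one_way_s

print(f"one-way:    {one_way_s * 1000:.0f} ms")    # ~239 ms
print(f"round trip: {round_trip_s * 1000:.0f} ms")  # ~477 ms
```

Propagation alone thus accounts for roughly 477 ms of round-trip delay; coding, processing and queuing plausibly bring the end-to-end figure into the 600-680 ms range reported above.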
The delay and packet loss can be controlled using various methods, such as traffic classification, traffic prioritization, congestion control, buffer management, using delay compensator, protocol compensator, developing automatic request technique, flow scheduling, and bandwidth allocation
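As an illustration of one of these methods, traffic prioritization, here is a minimal strict-priority scheduler sketch (the traffic classes and packet names are hypothetical, chosen only to show the mechanism):

```python
import heapq

# Minimal strict-priority scheduler: lower class number = higher priority
# (e.g. 0 = voice/video, 1 = interactive, 2 = bulk TCP). A sequence
# counter preserves FIFO order within the same class.

class PriorityScheduler:
    def __init__(self):
        self._heap = []
        self._seq = 0  # tie-breaker: arrival order within a class

    def enqueue(self, traffic_class, packet):
        heapq.heappush(self._heap, (traffic_class, self._seq, packet))
        self._seq += 1

    def dequeue(self):
        if not self._heap:
            return None
        return heapq.heappop(self._heap)[2]

sched = PriorityScheduler()
sched.enqueue(2, "bulk-1")
sched.enqueue(0, "voice-1")
sched.enqueue(1, "web-1")
sched.enqueue(0, "voice-2")

order = [sched.dequeue() for _ in range(4)]
print(order)  # ['voice-1', 'voice-2', 'web-1', 'bulk-1']
```

Strict priority alone can starve bulk traffic under sustained load, which is why the methods listed above (buffer management, flow scheduling, bandwidth allocation) are typically combined.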
Framework for Content Distribution over Wireless LANs
Wireless LAN (also called Wi-Fi) is widely considered the most pervasive
technology for Internet access. Due to the low cost of chipsets and support for
high data rates, Wi-Fi has become a universal solution for an ever-increasing
application space which includes video streaming, content delivery, emergency
communication, vehicular communication and the Internet of Things (IoT).
Wireless LAN technology is defined by the IEEE 802.11 standard. The 802.11
standard has been amended several times over the last two decades to incorporate
the requirements of future applications. 802.11-based Wi-Fi networks are
infrastructure networks in which devices communicate through an access point.
In 2010, however, the Wi-Fi Alliance released a specification to standardize
direct communication in Wi-Fi networks, called Wi-Fi Direct. Nine years after
its release, Wi-Fi Direct is still used only for very basic services
(connectivity, file transfer, etc.), despite its potential to support a wide
range of applications. The reason behind this limited adoption is a set of
inherent shortcomings that limit its performance in dense networks. These
include issues related to topology design, such as non-optimal group formation,
the Group Owner selection problem, clustering in dense networks, and coping with
device mobility in dynamic networks. Furthermore, Wi-Fi networks also face
challenges in serving the growing number of Wi-Fi users. The next generation of
Wi-Fi networks is characterized by ultra-dense deployments whose topology
changes frequently, which directly affects network performance. The dynamic
nature of such networks challenges operators to design and plan the network
optimally.
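For context, the standard Group Owner selection that the Group Owner selection problem refers to works by comparing advertised GO Intent values (0-15) during GO Negotiation, with a tie-breaker bit resolving equal intents; a minimal sketch:

```python
import random

# Sketch of Wi-Fi Direct GO Negotiation: each device advertises a GO
# Intent value in 0..15; the device with the higher intent becomes
# Group Owner (GO). Equal intents are resolved by a tie-breaker bit
# carried in the negotiation frames. (The spec's corner case, where
# intent 15 on both sides fails the negotiation, is omitted here.)

def negotiate_go(intent_a, intent_b, tie_breaker_a=None):
    """Return 'A' or 'B': which device becomes Group Owner."""
    if intent_a != intent_b:
        return "A" if intent_a > intent_b else "B"
    if tie_breaker_a is None:
        tie_breaker_a = random.choice([0, 1])
    return "A" if tie_breaker_a else "B"

print(negotiate_go(7, 3))                    # A (higher intent wins)
print(negotiate_go(5, 5, tie_breaker_a=1))   # A (tie-breaker bit set)
```

Because intent values are static and self-declared, a weak device can end up as GO of a large group; capability-aware role selection, of the kind this dissertation proposes, replaces these static intents with scores derived from device capabilities.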
In this dissertation, we propose solutions to the aforementioned problems. We
contribute to the existing Wi-Fi Direct technology by enhancing the group
formation process. The proposed group formation scheme is backwards-compatible
and incorporates role selection based on device capabilities to improve network
performance. An optimum clustering scheme using mixed-integer programming is
proposed to design efficient topologies in fixed dense networks, improving
network throughput and reducing the packet loss ratio. A novel architecture
using Unmanned Aerial Vehicles (UAVs) in Wi-Fi Direct networks is proposed for
dynamic networks. For ultra-dense, highly dynamic topologies, we propose
cognitive networks that use machine-learning algorithms to predict network
changes ahead of time and self-configure the network.
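The clustering formulation itself is not given in the abstract; as an illustration of the kind of optimization involved, here is a toy sketch that assigns clients to candidate Group Owners by exhaustive search, minimizing the maximum group load (the topology and the objective are assumptions; the dissertation's mixed-integer program would also include link-quality and throughput terms):

```python
from itertools import product

# Toy topology-design sketch: assign each client to one reachable
# candidate Group Owner (GO) so that the most loaded group is as small
# as possible. Brute force is fine for a tiny instance; a real dense
# network would need a MIP solver or a heuristic.

def best_assignment(clients, gos, reachable):
    """reachable[client] = set of GOs the client can associate with."""
    best, best_load = None, float("inf")
    choices = [sorted(reachable[c]) for c in clients]
    for combo in product(*choices):
        load = max(combo.count(g) for g in gos)
        if load < best_load:
            best, best_load = dict(zip(clients, combo)), load
    return best, best_load

clients = ["c1", "c2", "c3", "c4"]
gos = ["go1", "go2"]
reachable = {"c1": {"go1"}, "c2": {"go1", "go2"},
             "c3": {"go1", "go2"}, "c4": {"go2"}}

assignment, load = best_assignment(clients, gos, reachable)
print(assignment, load)  # balanced: at most 2 clients per GO
```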
Business scenarios, technical challenges and system requirements - D2.1
Deliverable D2.1 of the European project OneFIT (ICT-2009-257385). Preprint
A Secure and Efficient Communications Architecture for Global Information Grid Users via Cooperating Space Assets
With the Information Age in full and rapid development, users expect to have global, seamless, ubiquitous, secure, and efficient communications capable of providing access to real-time applications and collaboration. The United States Department of Defense's (DoD) Network-Centric Enterprise Services initiative, along with the notion of pushing the "power to the edge", aims to provide end-users with maximum situational awareness and a comprehensive view of the battlespace, all within a secure networking environment. Building from previous AFIT research efforts, this research developed a novel security framework architecture to address the lack of efficient and scalable secure multicasting in the low earth orbit satellite network environment. This security framework architecture combines several key aspects of different secure group communications architectures in a new way that increases efficiency and scalability, while maintaining the overall system security level. Implementing this security architecture in a deployed environment with heterogeneous communications users reduces the re-keying frequency. Less frequent re-keying means more resources are available for throughput as compared to security overhead. This translates to more transparency for the end user; it will seem as if they have a "larger pipe" for their network links. As a proof of concept, this research developed and analyzed multiple mobile communication environment scenarios to demonstrate the superior re-keying advantage offered by the novel "Hubenko Security Framework Architecture" over traditional and clustered multicast security architectures. For example, in the scenario containing a heterogeneous mix of user types (Stationary, Ground, Sea, and Air), the Hubenko Architecture achieved a minimum ten-fold reduction in total keys distributed as compared to other known architectures.
Another experiment demonstrated that the Hubenko Architecture operated at 6% capacity while the other architectures operated at 98% capacity. In the 80% overall mobility experiment with 40% Air users, the other architectures' re-keying increased by 900% over the Stationary case, whereas the Hubenko Architecture's increased by only 65%. This new architecture is extensible to numerous secure group communications environments beyond the low earth orbit satellite network environment, including unmanned aerial vehicle swarms, wireless sensor networks, and mobile ad hoc networks.