8 research outputs found

    Aggregation with fragment retransmission for very high-speed WLANs

    In upcoming very high-speed WLANs the physical layer (PHY) rate may reach 600 Mbps. To achieve high efficiency at the medium access control (MAC) layer, we identify fundamental properties that must be satisfied by any CSMA/CA-based MAC layer and develop a novel scheme called Aggregation with Fragment Retransmission (AFR). In the AFR scheme, multiple packets are aggregated into and transmitted in a single large frame. If errors happen during transmission, only the corrupted fragments of the large frame are retransmitted. An analytic model is developed to evaluate the throughput and delay performance of AFR over a noisy channel, and to compare AFR with competing schemes in the literature. Optimal frame and fragment sizes are calculated using this model. Transmission delays are minimised by using a zero-waiting mechanism where frames are transmitted immediately once the MAC wins a transmission opportunity. We prove that zero waiting can achieve maximum throughput. As a complement to the theoretical analysis, we investigate by simulations the impact of AFR on the performance of realistic application traffic with diverse requirements. We have implemented the AFR scheme in the NS-2 simulator and present detailed results for TCP, VoIP and HDTV traffic. The AFR scheme described was developed as part of the IEEE 802.11n working group's work. The analysis presented here is general enough to be extended to the proposed scheme in the upcoming 802.11n standard. Trends indicated by our simulation results should extend to any well-designed aggregation scheme.
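The selective-retransmission idea at the heart of AFR can be sketched in a few lines. This is an illustrative simulation only: the function names and the independent per-fragment error model are assumptions for the sketch, not the paper's analytic model.

```python
import random

def transmit_afr(num_fragments, frag_error_rate, rng):
    """Count fragment transmissions until every fragment of one
    aggregated frame arrives intact, resending only the corrupted
    fragments (the AFR idea)."""
    pending = num_fragments
    sent = 0
    while pending:
        sent += pending
        # each fragment is independently corrupted with probability frag_error_rate
        pending = sum(1 for _ in range(pending) if rng.random() < frag_error_rate)
    return sent

def transmit_whole_frame(num_fragments, frag_error_rate, rng):
    """Baseline without fragment retransmission: any corrupted
    fragment forces the whole aggregated frame to be resent."""
    sent = 0
    while True:
        sent += num_fragments
        if all(rng.random() >= frag_error_rate for _ in range(num_fragments)):
            return sent
```

With, say, 16 fragments and a 10% per-fragment error rate, `transmit_afr` resends only the few bad fragments per round, while the whole-frame baseline pays for all 16 fragments on every retry, which is the retransmission-overhead gap the scheme targets.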

    Performance Improvement Of Mac Layer In Terms Of Reverse Direction Transmission Based On IEEE 802.11n

    The medium access control (MAC) layer is one of the most prominent topics in the area of wireless networks. MAC protocols play a major role in improving the performance of wireless networks, and researchers have addressed many challenges in improving MAC-layer performance in the IEEE 802.11 family. The physical data rate in IEEE 802.11n may reach 600 Mbps, but this high data rate does not necessarily translate into good performance efficiency, since overhead at the MAC layer means that increasing the PHY rate automatically reduces efficiency. The main objective of next-generation wireless local area networks (WLANs) based on IEEE 802.11n is therefore to achieve high throughput with low delay and to support demanding applications such as 100 Mbps TCP and 20 Mbps HDTV. To mitigate the overhead and increase MAC efficiency, one of the key MAC-layer enhancements in IEEE 802.11n is reverse direction transmission. Reverse direction transmission mainly aims to exchange data efficiently between two devices, but it does not support error recovery and correction: the entire erroneous frame is dropped even if it contains only a single bit error, which causes retransmission overhead. Thus, two new schemes, Reverse Direction Single Frame Fragmentation (RD-SFF) and Reverse Direction Multi Frame Fragmentation (RD-MFF), are proposed in this study. RD-SFF aggregates only packets into a large frame, while RD-MFF aggregates both packets and frames into a larger frame. Each data frame in both directions is then divided into subframes, and each subframe is sent over the reverse direction transmission. During transmission, only the corrupted subframes need to be retransmitted if an error occurs, instead of the whole frame. A fragmentation method is also examined whereby packets longer than a threshold are split into fragments prior to being combined.
The system is evaluated by simulation using NS-2. The simulation results show that the RD-SFF scheme improves performance over reverse direction transmission with a single data frame by up to 100%, and the RD-MFF scheme improves performance over reverse direction transmission with multiple data frames by up to 44%, depending on network conditions. These results demonstrate the benefit of the fragmentation method in reducing retransmission overhead under erroneous transmission. The results obtained with the ON/OFF scheme take the channel condition into account, showing the benefit of the adaptive scheme in both ideal and erroneous networks. In conclusion, this research has achieved its stated objective of mitigating the overhead and increasing MAC efficiency for IEEE 802.11n. Additionally, the proposed schemes show a significant improvement over reverse direction transmission by adapting to changing network conditions.
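The fragment-then-aggregate step described above can be sketched as follows. The function names and the byte threshold are illustrative assumptions; the thesis's actual subframe headers and frame format are not reproduced here.

```python
def fragment(packet, threshold):
    """Split a packet longer than `threshold` bytes into subframes;
    shorter packets pass through as a single subframe."""
    if len(packet) <= threshold:
        return [packet]
    return [packet[i:i + threshold] for i in range(0, len(packet), threshold)]

def aggregate_rd_mff(packets, threshold):
    """RD-MFF-style sketch: fragment oversized packets, then collect
    every subframe into one large frame. On a transmission error,
    only the corrupted subframes would be retransmitted."""
    frame = []
    for p in packets:
        frame.extend(fragment(p, threshold))
    return frame

# a 300-byte packet is split before aggregation; a 100-byte one is not
frame = aggregate_rd_mff([b"a" * 300, b"b" * 100], threshold=256)
```

Because each subframe is bounded by the threshold, a single corrupted subframe costs at most `threshold` bytes to resend rather than the whole aggregated frame.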

    An Adaptive Packet Aggregation Algorithm (AAM) for Wireless Networks

    Packet aggregation algorithms are used to improve throughput performance by combining a number of packets into a single transmission unit, in order to reduce the overhead associated with each transmission within a packet-based communications network. However, the throughput improvement is also accompanied by a delay increase. The biggest drawback of a significant number of the proposed packet aggregation algorithms is that they tend to optimize only a single metric, i.e., either to maximize throughput or to minimize delay. They do not permit an optimal trade-off between maximizing throughput and minimizing delay. Therefore, these algorithms cannot achieve optimal network performance for mixed traffic loads containing a number of different types of applications which may have very different network performance requirements. In this thesis an adaptive packet aggregation algorithm called the Adaptive Aggregation Mechanism (AAM) is proposed which achieves an aggregation trade-off, realizing the largest average throughput with the smallest average delay compared to a number of other popular aggregation algorithms under saturation conditions in wireless networks. The AAM algorithm is the first packet aggregation algorithm that employs an adaptive selection window mechanism, where the selection window size is adaptively adjusted in order to respond to the varying nature of both the packet size and the packet rate. This algorithm is essentially a feedback control system incorporating a hybrid selection strategy for selecting the packets. Simulation results demonstrate that the proposed algorithm can (a) achieve a large number of sub-packets per aggregate packet for a given delay and (b) significantly improve the performance in terms of the aggregation trade-off for different traffic loads. Furthermore, the AAM algorithm is robust, as it can significantly improve the average throughput in error-prone wireless networks.
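One feedback step of an adaptive selection window can be sketched as below. The adaptation rule and all constants here are illustrative assumptions for the sketch; the abstract does not specify AAM's actual control law or hybrid selection strategy.

```python
def adapt_window(window, observed_bytes, target_bytes, w_min=1, w_max=64):
    """Illustrative feedback step for an adaptive selection window:
    widen the window when recent aggregates carried less payload than
    the target (small or infrequent packets), narrow it when they
    overshoot (large or frequent packets). NOT the AAM control law."""
    if observed_bytes < target_bytes:
        return min(window + 1, w_max)
    return max(window - 1, w_min)
```

Driving the window from observed aggregate size rather than a fixed timer is what lets such a scheme track both packet-size and packet-rate variation, trading a little throughput for bounded aggregation delay.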

    Interoperability of wireless communication technologies in hybrid networks : evaluation of end-to-end interoperability issues and quality of service requirements

    Hybrid networks employing wireless communication technologies have brought closer the vision of communication "anywhere, any time, with anyone". Such communication technologies comprise various standards, protocols, architectures, characteristics, models, devices, and modulation and coding techniques. These different technologies naturally share some common characteristics, but there are also many important differences. Advances in these technologies are emerging very rapidly, with the advent of new models, characteristics, protocols and architectures. This rapid evolution imposes many challenges and issues to be addressed, of particular importance being the interoperability of the following wireless technologies: Wireless Fidelity (Wi-Fi) IEEE 802.11, Worldwide Interoperability for Microwave Access (WiMAX) IEEE 802.16, Single Channel Per Carrier (SCPC), Digital Video Broadcasting - Satellite (DVB-S/DVB-S2), and Digital Video Broadcasting - Return Channel via Satellite (DVB-RCS). Because of these differences, the technologies do not generally interoperate easily with each other, owing to various interoperability and Quality of Service (QoS) issues. The aim of this study is to assess and investigate end-to-end interoperability issues and QoS requirements, such as bandwidth, delay, jitter, latency, packet loss, throughput, TCP and UDP performance, unicast and multicast services, and availability, on hybrid wireless communication networks employing both satellite broadband and terrestrial wireless technologies. The thesis provides an introduction to wireless communication technologies, followed by a review of previous research on hybrid networks (both satellite and terrestrial wireless technologies, particularly Wi-Fi, WiMAX, DVB-RCS, and SCPC).
Previous studies have discussed Wi-Fi, WiMAX, DVB-RCS, SCPC and 3G technologies and their standards, as well as their properties and characteristics, such as operating frequency, bandwidth, data rate, basic configuration, coverage, power, interference, social issues, security problems, and physical- and MAC-layer design and development issues. Although some previous studies provide valuable contributions to this area of research, they are limited to link-layer characteristics, TCP performance, delay, bandwidth, capacity, data rate, and throughput. None of the studies covers all aspects of end-to-end interoperability issues and QoS requirements, such as bandwidth, delay, jitter, latency, packet loss, link performance, TCP and UDP performance, and unicast and multicast performance, at the end-to-end level on hybrid wireless networks. Interoperability issues are discussed in detail, and a comparison of the different technologies and protocols is made using appropriate testing tools, assessing various performance measures including bandwidth, delay, jitter, latency, packet loss, throughput and availability. The standards, protocol suites/models and architectures for Wi-Fi, WiMAX, DVB-RCS and SCPC, along with different platforms and applications, are discussed and compared. Using a robust approach, which includes a new testing methodology and a generic test plan, the testing was conducted using various realistic test scenarios on real networks comprising variable numbers and types of nodes. The data, traces, packets and files were captured from various live scenarios and sites. The test results were analysed in order to measure and compare the characteristics of wireless technologies, devices, protocols and applications. The motivation of this research is to study the end-to-end interoperability issues and Quality of Service requirements of rapidly growing hybrid networks in a comprehensive and systematic way.
The significance of this research is that it is based on a comprehensive and systematic investigation of issues and facts, rather than hypothetical scenarios or simulations, which informed the design of a test methodology for empirical data gathering by real network testing, suitable for the measurement of hybrid-network single-link or end-to-end issues using proven test tools. This systematic investigation encompasses an extensive series of tests measuring delay, jitter, packet loss, bandwidth, throughput, availability, the performance of audio and video sessions, multicast and unicast performance, and stress testing. This testing covers the most common test scenarios in hybrid networks and yields recommendations for achieving good end-to-end interoperability and QoS in hybrid networks. Contributions of the study include the identification of gaps in the research, a description of interoperability issues, a comparison of the most common test tools, the development of a generic test plan, a new testing process and methodology, and analysis and network-design recommendations for end-to-end interoperability issues and QoS requirements. This covers the complete cycle of the research. It is found that UDP is more suitable than TCP for hybrid wireless networks, particularly for the demanding applications considered, since TCP presents significant problems for multimedia and live traffic, which imposes strict QoS requirements on delay, jitter, packet loss and bandwidth. The main bottleneck for satellite communication is a delay of approximately 600 to 680 ms, due to the long distance (and the finite speed of light) when communicating over geostationary satellites.
The delay and packet loss can be controlled using various methods, such as traffic classification, traffic prioritization, congestion control, buffer management, delay compensators, protocol compensators, automatic repeat request (ARQ) techniques, flow scheduling, and bandwidth allocation.

    Concurrent Multipath Transfer: Scheduling, Modelling, and Congestion Window Management

    Multihomed devices such as smartphones (e.g., the iPhone and BlackBerry) can simultaneously connect to Wi-Fi and 4G LTE networks. Unfortunately, due to the architectural constraints of standard transport-layer protocols like the Transmission Control Protocol (TCP), an Internet application (e.g., a file transfer) can use only one access network at a time. Due to recent developments, however, concurrent multipath transfer (CMT) using the Stream Control Transmission Protocol (SCTP) can enable multihomed devices to exploit additional network resources for transport-layer communications. In this thesis we explore a variety of techniques aimed at CMT and multihomed devices, such as packet scheduling, transport-layer modelling, and resource management. Our contributions include, but are not limited to: enhanced performance of CMT under delay-based disparity, a tractable framework for modelling the throughput of CMT, a comparison of modelling techniques for SCTP, a new congestion window update policy for CMT, and efficient use of system resources through optimization. Since the demand for a better communications system is always on the horizon, our goal is to further the research and inspire others to embrace CMT as a viable network architecture, in the hope that CMT will someday become a standard part of smartphone technology.
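A core CMT problem mentioned above is packet scheduling across paths with unequal delays. The sketch below is a generic delay-aware scheduler under the assumption that path RTT is a usable proxy for per-packet completion time; it is not the thesis's scheduling policy.

```python
import heapq

def schedule_packets(num_packets, path_rtts):
    """Delay-aware scheduling sketch for CMT: assign each packet to
    the path whose next transmission would complete earliest, using
    the path RTT as a crude delay estimate (an assumption for the
    sketch, not a model of SCTP's actual behaviour)."""
    # heap of (estimated completion time of next packet, path index)
    heap = [(rtt, path) for path, rtt in enumerate(path_rtts)]
    heapq.heapify(heap)
    assignment = []
    for _ in range(num_packets):
        finish, path = heapq.heappop(heap)
        assignment.append(path)
        # the following packet on this path finishes one RTT later
        heapq.heappush(heap, (finish + path_rtts[path], path))
    return assignment
```

With paths of 10 ms and 30 ms RTT, the fast path naturally receives about three packets for every one on the slow path, which is the kind of load split that limits receiver-side reordering under delay disparity.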

    Adaptive Encryption Techniques In Wireless Communication Channels With Tradeoffs Between Communication Reliability And Security

    Encryption is a vital process for ensuring the confidentiality of information transmitted over an insecure wireless channel. However, the wireless channel tends to deteriorate the signal because of noise, interference and fading, so a symmetrically encrypted transmitted signal will be received with some amount of error. Consequently, due to the strict avalanche criterion (SAC), this error propagates during the decryption process, resulting in half the bits (on average) after decryption being in error. In order to alleviate this error, smart coding techniques and/or new encryption algorithms that take into account the nature of wireless channels are required. One solution could involve increasing the block and key lengths, but this might degrade the throughput of the channel; such solutions might also significantly increase the complexity of the encryption algorithms and hence the cost of their implementation and use. Two main approaches have been followed to solve this problem. The first approach is based on developing effective coding schemes and mechanisms in order to minimize and correct the errors introduced by the channel. The second approach focuses on inventing and implementing new encryption algorithms that suffer less error propagation by alleviating the SAC effect. Most of the research following these two approaches has lacked comprehensiveness in its designs: some of these works focused on improving the error performance and/or enhancing the security at the cost of complexity and throughput. In this work, we focus on solving the problem of encryption in wireless channels in a comprehensive way that considers all of the relevant factors (error performance, security and complexity).
    New encryption algorithms are proposed, which are modifications to the standardized encryption algorithms and are shown to outperform those algorithms in wireless channels in terms of security and error performance, with a slight addition in complexity. We introduce new modifications that improve the error performance for a required security level while achieving the highest possible throughput. We show how our proposed algorithm outperforms other encryption algorithms in terms of error performance, throughput and complexity, and is secure against all known encryption attacks. In addition, we study the effect of each round and S-box in symmetric encryption algorithms on the overall probability of correct reception at the receiver, and the effect on security is analyzed as well. Moreover, we perform a complete security, complexity and energy-consumption analysis to evaluate the newly developed encryption techniques and procedures. We use both analytical computations and computer simulations to evaluate the effectiveness of every modification we introduce in our proposed designs.
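The error-propagation problem the abstract describes, where one channel-bit error corrupts roughly half the decrypted block, can be demonstrated numerically. Since the standard library has no block cipher, the sketch below uses a keyed SHA-256 mapping as a stand-in with cipher-like diffusion; it is not a real (invertible) cipher and only illustrates the strict avalanche effect.

```python
import hashlib

def toy_decrypt(block, key):
    """Stand-in for block-cipher decryption: a keyed SHA-256 mapping
    that mimics a cipher's diffusion. NOT a real cipher -- it is not
    invertible and exists only to illustrate the avalanche effect."""
    return hashlib.sha256(key + block).digest()[:len(block)]

def bit_diff(a, b):
    """Number of differing bits between two equal-length byte strings."""
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

key = b"demo-key"                          # illustrative key
ct = bytes(16)                             # a received 128-bit ciphertext block
ct_err = bytes([ct[0] ^ 0x01]) + ct[1:]    # single-bit channel error
diff = bit_diff(toy_decrypt(ct, key), toy_decrypt(ct_err, key))
# diff is typically near 64 of 128 bits: one channel-bit error
# corrupts roughly half of the recovered plaintext
```

This is exactly why conventional bit-level forward error correction, applied before decryption, matters so much more in encrypted wireless links than in plaintext ones.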