Method and device for live-streaming with opportunistic mobile edge cloud offloading
A novel, pervasive approach to disseminating live-streaming content combines secure distributed systems, WiFi multicast, erasure coding, source coding and opportunistic offloading using hyperlocal mobile edge clouds. The disclosed solution to the technical problem of disseminating live-streaming content, without requiring substantial equipment, planning and deployment of dedicated network infrastructure, offers an 11-fold reduction in infrastructural WiFi bandwidth usage without modifying any existing software or firmware stacks, while ensuring stream integrity, authorization and authentication.
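The patent names erasure coding as one ingredient but does not specify the code used. As an illustrative sketch only, the simplest erasure code is a single XOR parity packet over k data packets, which lets a receiver recover any one lost packet, a common trick for lossy WiFi multicast:

```python
# Minimal erasure-coding sketch (illustrative; the patent does not specify
# its code): one XOR parity packet over k equal-length data packets lets a
# receiver recover any single lost packet.

def xor_parity(packets: list[bytes]) -> bytes:
    """XOR all packets together byte-by-byte."""
    parity = bytearray(len(packets[0]))
    for p in packets:
        for i, b in enumerate(p):
            parity[i] ^= b
    return bytes(parity)

def recover(received: list[bytes], parity: bytes) -> bytes:
    """Rebuild the single missing packet from the survivors and the parity."""
    return xor_parity(received + [parity])

data = [b"pkt1", b"pkt2", b"pkt3"]
parity = xor_parity(data)
# Suppose the second packet is lost in transit:
assert recover([data[0], data[2]], parity) == data[1]
```

Real deployments use stronger codes (e.g. Reed-Solomon or fountain codes) that tolerate multiple losses, but the recovery principle is the same.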
Real-time video streaming using peer-to-peer for video distribution
The growth of the Internet has led to research and development of several new and useful applications, including video streaming. Commercial experiments are underway to determine the feasibility of multimedia broadcasting over packet-based data networks alongside traditional over-the-air broadcasting. Broadcasting companies are offering low-cost or free versions of video content online to both gauge and generate popularity. In addition to television broadcasting, video streaming is used in a number of application areas, including video conferencing, telecommuting and long-distance education. Large-scale video streaming has not become as widespread or widely deployed as could be expected. The reason for this is the high bandwidth requirement (and thus high cost) associated with video data. Provision of a constant stream of video data on a medium to large scale typically consumes a significant amount of bandwidth. An effect of this is that encoding bit rates are lowered and consequently video quality is degraded, resulting in even slower uptake rates for video streaming services. The aim of this dissertation is to investigate peer-to-peer streaming as a potential solution to this bandwidth problem. The proposed peer-to-peer based solution relies on end-user co-operation for video data distribution. This approach is highly effective in reducing the outgoing bandwidth requirement of the video streaming server. End users redistribute received video chunks amongst their respective peers and in so doing increase the potential capacity of the entire network for supporting more clients. A secondary effect of such a system is that encoding capabilities (including higher encoding bit rates or encoding of additional sub-channels) can be enhanced. Peer-to-peer distribution enables any regular user to stream video to large streaming networks with many viewers. This research includes a detailed overview of the fields of video streaming and peer-to-peer networking.
Techniques for optimal video preparation and data distribution were investigated. A variety of academic and commercial peer-to-peer based multimedia broadcasting systems were analysed as a means to further define and place the proposed implementation in context with respect to other peercasting implementations. A proof-of-concept of the proposed implementation was developed, mathematically analysed and simulated in a typical deployment scenario. Analysis was carried out to predict simulation performance and as a form of design evaluation and verification. The analysis highlighted some critical areas, which resulted in adaptations to the initial design as well as conditions under which performance can be guaranteed. A simulation of the proof-of-concept system was used to determine the extent of bandwidth savings for the video server. The aim of the simulations was to show that it is possible to encode and deliver video data in real time over a peer-to-peer network. The proposed system met expectations and showed significant bandwidth savings for a substantially large video streaming audience. The implementation was able to encode video in real time and continually stream video packets on time to connected peers while continually supporting network growth by connecting additional peers (or stream viewers). The system performed well under typical real-world restrictions on available bandwidth capacity.
Dissertation (MEng), University of Pretoria, 2009. Electrical, Electronic and Computer Engineering.
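The core bandwidth argument above can be made concrete with a back-of-envelope model. This is an illustrative sketch, not the dissertation's implementation; the function names and parameters (seed peer count, stream bit rate) are assumptions chosen for illustration:

```python
# Illustrative sketch of why peer redistribution cuts server upload: if only
# a small set of "seed" peers is fed directly by the server and every other
# viewer receives chunks from the peer overlay, server upload no longer
# scales with audience size. All names/parameters here are assumptions.

def server_upload(viewers: int, seed_peers: int, stream_kbps: int) -> int:
    """Server upload (kbps) when only seed peers are fed directly."""
    direct = min(viewers, seed_peers)
    return direct * stream_kbps

def client_server_upload(viewers: int, stream_kbps: int) -> int:
    """Baseline: the server streams to every viewer itself."""
    return viewers * stream_kbps

viewers, seeds, rate = 1000, 20, 500  # 500 kbps stream, 20 directly fed peers
p2p = server_upload(viewers, seeds, rate)
baseline = client_server_upload(viewers, rate)
print(f"baseline: {baseline} kbps, p2p: {p2p} kbps")  # baseline: 500000 kbps, p2p: 10000 kbps
```

In practice peer churn, asymmetric upload capacity and chunk scheduling overheads reduce the savings below this idealised figure, which is what the dissertation's analysis and simulations quantify.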
On Fault Resilient Network-on-Chip for Many Core Systems
Rapid scaling of transistor gate sizes has increased the density of on-chip integration and paved the way for heterogeneous many-core systems-on-chip, significantly improving the speed of on-chip processing. Designing the interconnection network of these complex systems is challenging, and the network-on-chip (NoC) is now the accepted scalable, bandwidth-efficient interconnect for multi-processor systems-on-chip (MPSoCs). However, the performance enhancements of technology scaling come at the cost of reliability, as on-chip components, particularly the network-on-chip, become increasingly prone to faults. In this thesis, we focus on approaches to deal with the errors caused by such faults. The results of these approaches are obtained not only via time-consuming cycle-accurate simulations but also by analytical approaches, allowing for faster yet accurate evaluations, especially for larger networks.
Redundancy is the general approach to dealing with faults, and its mode varies with the type of fault. For the NoC, faults are classified into transient, intermittent and permanent faults. Transient faults appear randomly for a few cycles and may be caused by particle radiation. Intermittent faults are similar to transient faults but differ in that they occur repeatedly at the same location, eventually leading to a permanent fault. Permanent faults, by definition, are caused by wires and transistors being permanently short or open. Generally, spatial redundancy, i.e. the use of redundant components, is used for dealing with permanent faults. Temporal redundancy deals with failures by re-execution or by retransmission of data, while information redundancy adds redundant information to the data packets, allowing for error detection and correction. Temporal and information redundancy methods are useful for dealing with transient and intermittent faults.
In this dissertation, we begin with permanent faults in the NoC in the form of faulty links and routers. Our approach to spatial redundancy adds redundant links in the diagonal direction to the standard rectangular mesh topology, resulting in hexagonal and octagonal NoCs. In addition to redundant links, adaptive routing must be used to bypass faulty components. We develop novel fault-tolerant, deadlock-free adaptive routing algorithms for these topologies based on the turn model, without the use of virtual channels. Our results show that the hexagonal and octagonal NoCs can tolerate all 2-router and 3-router faults, respectively, while the mesh has been shown to tolerate all 1-router faults. To simplify the restricted-turn selection process for achieving deadlock freedom, we devised an approach based on the channel dependency matrix instead of Duato's state-of-the-art method of checking the channel dependency graph for cycles. The approach is general and can be used for the turn selection process of any regular topology.
We further use algebraic manipulations of the channel dependency matrix to analytically assess the fault resilience of the adaptive routing algorithms under permanent faults. We present and validate this method for the 2D mesh and hexagonal NoC topologies, achieving very high accuracy with a maximum error of 1%. The approach is very general and allows for faster evaluations compared to the commonly used cycle-accurate simulations. In contrast, existing works usually assume a limited number of faults in order to analytically assess network reliability. We apply the approach to evaluate the fault resilience of larger NoCs, demonstrating its usefulness especially compared to cycle-accurate simulations.
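The underlying deadlock-freedom test can be sketched briefly. This is a hedged illustration of the general channel-dependency idea, not the thesis's specific algebra: a routing function is deadlock-free if its channel dependency graph is acyclic, and with a boolean dependency matrix D (D[i][j] = 1 iff a packet holding channel i may next request channel j), a cycle exists iff the transitive closure of D has a nonzero diagonal entry. The matrices below are toy examples:

```python
# Hedged sketch: deadlock-freedom check via the channel dependency matrix.
# A cycle in the dependency graph exists iff the boolean transitive closure
# of the dependency matrix has a 1 on its diagonal.

def has_cycle(dep: list[list[int]]) -> bool:
    n = len(dep)
    reach = [row[:] for row in dep]
    # Floyd-Warshall-style boolean transitive closure.
    for k in range(n):
        for i in range(n):
            if reach[i][k]:
                for j in range(n):
                    if reach[k][j]:
                        reach[i][j] = 1
    return any(reach[i][i] for i in range(n))

# Four channels whose dependencies form a ring -> deadlock is possible.
ring = [[0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1], [1, 0, 0, 0]]
# The same channels with one turn restricted -> acyclic -> deadlock-free.
restricted = [[0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1], [0, 0, 0, 0]]
print(has_cycle(ring), has_cycle(restricted))  # True False
```

Restricting turns, as turn-model routing does, corresponds exactly to zeroing entries of D until no cycle remains.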
Finally, we concentrate on temporal and information redundancy techniques to deal with transient and intermittent faults in the router, which result in the dropping and hence loss of packets. Temporal redundancy is applied in the form of ARQ and retransmission of lost packets. Information redundancy is applied through the generation and transmission of redundant linear combinations of packets, known as random linear network coding. We develop an analytic model for flexible evaluation of these approaches to determine network performance parameters such as residual error rates and increased network load. The analytic model allows evaluation of larger NoCs and different topologies, and makes it possible to investigate the advantage of network coding over uncoded transmissions.
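The decodability condition behind random linear network coding can be illustrated compactly. This sketch works over GF(2) for simplicity (practical schemes, and possibly the thesis, use larger fields such as GF(2^8)); it tracks only coefficient vectors and checks whether a receiver has collected enough linearly independent combinations to decode:

```python
import random

# Hedged sketch of random linear network coding over GF(2): each coded packet
# is a random XOR combination of the k source packets, represented here by a
# k-bit coefficient vector. Decoding succeeds once the collected vectors have
# rank k over GF(2). Field choice and parameters are illustrative assumptions.

def rank_gf2(rows: list[int], k: int) -> int:
    """Rank over GF(2) of bit-vector rows (ints with up to k bits)."""
    rank, pivots = 0, []
    for r in rows:
        for p in pivots:
            r = min(r, r ^ p)  # reduce against the basis built so far
        if r:
            pivots.append(r)
            rank += 1
    return rank

def encode(k: int, n: int, rng: random.Random) -> list[int]:
    """Draw n random nonzero coefficient vectors from GF(2)^k."""
    return [rng.randrange(1, 1 << k) for _ in range(n)]

rng = random.Random(1)
k = 8
coeffs = encode(k, 12, rng)            # 12 coded packets for 8 source packets
decodable = rank_gf2(coeffs, k) == k   # enough independent combinations?
print(decodable)
```

The analytic question the thesis addresses is essentially the probability of this rank condition holding given loss rates, i.e. the residual error rate, versus the extra load of the redundant combinations.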
We further extend the work with a brief look at the problem of secure communication over the NoC. Assuming large heterogeneous MPSoCs with components from third parties, the communication is subject to active attacks in the form of packet modification and packet drops in the NoC routers. Devising approaches to resolve these issues, we again formulate analytic models for their flexible and accurate evaluation, with a maximum estimation error of 7%.
Interoperability of wireless communication technologies in hybrid networks: Evaluation of end-to-end interoperability issues and quality of service requirements
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. Hybrid networks employing wireless communication technologies have nowadays brought closer the vision of communication "anywhere, any time, with anyone". Such communication technologies comprise various standards, protocols, architectures, characteristics, models, devices, and modulation and coding techniques. These different technologies naturally share some common characteristics, but there are also many important differences. Advances in these technologies are emerging very rapidly, with the advent of new models, characteristics, protocols and architectures. This rapid evolution imposes many challenges and issues to be addressed, of particular importance being the interoperability issues of the following wireless technologies: Wireless Fidelity (Wi-Fi, IEEE 802.11), Worldwide Interoperability for Microwave Access (WiMAX, IEEE 802.16), Single Channel Per Carrier (SCPC), Digital Video Broadcasting by Satellite (DVB-S/DVB-S2), and Digital Video Broadcasting Return Channel via Satellite (DVB-RCS). Because of these differences, the technologies do not generally interoperate easily with each other, owing to various interoperability and Quality of Service (QoS) issues.
The aim of this study is to assess and investigate end-to-end interoperability issues and QoS requirements, such as bandwidth, delays, jitter, latency, packet loss, throughput, TCP performance, UDP performance, unicast and multicast services and availability, on hybrid wireless communication networks (employing both satellite broadband and terrestrial wireless technologies).
The thesis provides an introduction to wireless communication technologies, followed by a review of previous research on hybrid networks (covering both satellite and terrestrial wireless technologies, particularly Wi-Fi, WiMAX, DVB-RCS, and SCPC). Previous studies have discussed Wi-Fi, WiMAX, DVB-RCS, SCPC and 3G technologies and their standards, as well as their properties and characteristics, such as operating frequency, bandwidth, data rate, basic configuration, coverage, power, interference, social issues, security problems, and physical- and MAC-layer design and development issues. Although some previous studies provide valuable contributions to this area of research, they are limited to link-layer characteristics, TCP performance, delay, bandwidth, capacity, data rate, and throughput. None of them covers all aspects of end-to-end interoperability issues and QoS requirements, such as bandwidth, delay, jitter, latency, packet loss, link performance, TCP and UDP performance, and unicast and multicast performance, at the end-to-end level on hybrid wireless networks.
Interoperability issues are discussed in detail, and the different technologies and protocols were compared using appropriate testing tools, assessing performance measures including bandwidth, delay, jitter, latency, packet loss, throughput and availability. The standards, protocol suites/models and architectures for Wi-Fi, WiMAX, DVB-RCS and SCPC, along with different platforms and applications, are discussed and compared. Using a robust approach, which includes a new testing methodology and a generic test plan, testing was conducted using various realistic test scenarios on real networks comprising variable numbers and types of nodes. Data, traces, packets, and files were captured from various live scenarios and sites, and the test results were analysed to measure and compare the characteristics of wireless technologies, devices, protocols and applications.
The motivation of this research is to study all the end-to-end interoperability issues and Quality of Service requirements for rapidly growing Hybrid Networks in a comprehensive and systematic way.
The significance of this research is that it is based on a comprehensive and systematic investigation of issues and facts, rather than hypothetical scenarios or simulations. This investigation informed the design of a test methodology for empirical data gathering through real network testing, suitable for measuring hybrid-network single-link or end-to-end issues using proven test tools.
This systematic investigation of the issues encompasses an extensive series of tests measuring delay, jitter, packet loss, bandwidth, throughput, availability, performance of audio and video session, multicast and unicast performance, and stress testing. This testing covers most common test scenarios in hybrid networks and gives recommendations in achieving good end-to-end interoperability and QoS in hybrid networks.
Contributions of study include the identification of gaps in the research, a description of interoperability issues, a comparison of most common test tools, the development of a generic test plan, a new testing process and methodology, analysis and network design recommendations for end-to-end interoperability issues and QoS requirements. This covers the complete cycle of this research.
It is found that UDP is more suitable than TCP for hybrid wireless networks, particularly for the demanding applications considered, since TCP presents significant problems for multimedia and live traffic, which impose strict QoS requirements on delay, jitter, packet loss and bandwidth. The main bottleneck for satellite communication is a delay of approximately 600 to 680 ms, caused by the long distance (and the finite speed of light) when communicating over geostationary satellites.
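The quoted figure is consistent with a back-of-envelope propagation check. Assuming a bent-pipe geostationary satellite at an altitude of about 35,786 km, bare propagation gives roughly 239 ms per ground-satellite-ground hop; a request/response round trip is then about 477 ms, and slant range (terminals are rarely at the sub-satellite point), processing and queueing delays push the measured end-to-end figure into the 600-680 ms range:

```python
# Back-of-envelope check of geostationary-satellite delay (bent-pipe
# assumption; ignores slant range, processing and queueing, which add to
# the figure measured in the thesis).

C = 299_792_458          # speed of light in vacuum, m/s
GEO_ALTITUDE_M = 35_786_000

one_hop = 2 * GEO_ALTITUDE_M / C   # ground -> satellite -> ground
round_trip = 2 * one_hop           # e.g. a request plus its response
print(f"one hop: {one_hop * 1000:.0f} ms, round trip: {round_trip * 1000:.0f} ms")
# one hop: 239 ms, round trip: 477 ms
```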
The delay and packet loss can be controlled using various methods, such as traffic classification, traffic prioritization, congestion control, buffer management, delay compensators, protocol compensators, automatic repeat request (ARQ) techniques, flow scheduling, and bandwidth allocation.
Wireless audio networking: modifying the IEEE 802.11 standard to handle multi-channel real-time wireless audio networks
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. Audio networking is a rapidly growing field which introduces exciting new possibilities for the professional audio industry. When well established, it will drastically change the way live sound systems are designed, built and used. Today's networks have enough bandwidth to transfer hundreds of high-quality audio channels, replacing the analogue cables and intricate installations of conventional analogue audio systems. There are currently many systems on the market that distribute audio over networks for live music and studio applications, but this technology is not yet widespread. The main reasons audio networks are not as popular as expected are the lack of interoperability between different vendors and the continued need for a wired network infrastructure. Therefore, the development of a wireless digital audio networking system based on existing widespread wireless technology is a major research challenge. However, the IEEE 802.11 standard, the primary wireless networking technology today, appears unable to handle this type of application despite the large bandwidth available. Apart from the well-known drawbacks of interference and security encountered in all wireless data transmission systems, the way IEEE 802.11 arbitrates wireless channel access causes a significantly high collision rate, low throughput and long overall delay. The aim of this research was to identify the causes that prevent this technology from supporting real-time wireless audio networks and to propose possible solutions. Initially the standard was tested thoroughly using a data traffic model which emulates a multi-channel real-time audio environment. Broadcasting was found to be the optimal communication method to satisfy the intolerance of live audio to delay.
The results were analysed and the drawback was identified in the inherent weakness of the IEEE 802.11 standard in managing broadcasting from multiple sources in the same network. To resolve this, a series of modifications was proposed for the Medium Access Control (MAC) algorithm of the standard. First, extended use of the "CTS-to-Self" control message was introduced to act as a protection mechanism for broadcasting, similar to the RTS/CTS protection mechanism already used in unicast transmission. Then, an alternative "random backoff" method was proposed, taking into account the characteristics of live-audio wireless networks. For this method a novel "Exclusive Backoff Number Allocation" (EBNA) algorithm was designed, aiming to minimize collisions. The results showed that significant improvement in throughput can be achieved using the above modifications, but further improvement in delay was needed to reach the internationally accepted standards for real-time audio delivery. Thus, a traffic-adaptive version of the EBNA algorithm was designed. This algorithm monitors the traffic in the network, calculates the probability of collision and accordingly switches between the classic IEEE 802.11 MAC and EBNA, which is applied only between active stations rather than to all stations in the network. All amendments were designed to operate as an alternative mode of the existing technology rather than as an independent proprietary system. For this reason, interoperability with classic IEEE 802.11 was also tested and analysed in the last part of this research. The results showed that the IEEE 802.11 standard, suitably modified, is able to support multiple broadcast transmissions and can therefore be the platform upon which future wireless audio networks will be developed.
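The intuition behind an exclusive backoff allocation can be shown with a standard birthday-problem calculation. This is an illustrative computation, not the thesis's EBNA algorithm itself: with n stations each drawing a backoff slot uniformly from a window of W slots, any two stations picking the same slot collide; assigning each active station an exclusive slot removes that possibility entirely:

```python
from math import perm

# Illustrative collision-probability calculation (not the EBNA algorithm):
# n stations each pick one of w backoff slots uniformly at random; a
# collision occurs when two or more pick the same slot.

def collision_probability(n: int, w: int) -> float:
    """P(at least two of n stations pick the same one of w slots)."""
    if n > w:
        return 1.0  # pigeonhole: a collision is certain
    return 1.0 - perm(w, n) / w**n

for n in (4, 8, 16):
    print(f"{n:2d} stations, CW=16: P(collision) = {collision_probability(n, 16):.3f}")
# With exclusive slot allocation, every active station holds a distinct
# slot, so this probability drops to zero.
```

Even four stations contending in a 16-slot window collide about a third of the time, which motivates both exclusive allocation and the traffic-adaptive switch described above.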
Recent Developments on Mobile Ad-Hoc Networks and Vehicular Ad-Hoc Networks
This book presents collective works published in the recent Special Issue (SI) entitled "Recent Developments on Mobile Ad-Hoc Networks and Vehicular Ad-Hoc Networks". These works expose the readership to the latest solutions and techniques for MANETs and VANETs. They cover interesting topics such as power-aware optimization solutions for MANETs, data dissemination in VANETs, adaptive multi-hop broadcast schemes for VANETs, multi-metric routing protocols for VANETs, and incentive mechanisms to encourage the distribution of information in VANETs. The book demonstrates pioneering work, investigates novel solutions and methods, and discusses future trends in these fields.