30 research outputs found

    A novel multimedia adaptation architecture and congestion control mechanism designed for real-time interactive applications

    The increasing use of interactive multimedia applications over the Internet has created a congestion problem, because the majority of these applications do not respond to congestion indicators. This leads to resource starvation for responsive flows and ultimately to excessive delay and losses for all flows, and therefore to loss of quality. The result is unfair sharing of network resources and an increased risk of network ‘congestion collapse’. Current Congestion Control Mechanisms such as ‘TCP-Friendly Rate Control’ (TFRC) can achieve a ‘fair share’ of network resources when competing with responsive flows such as TCP, but TFRC’s method of congestion response (reducing the Packet Rate) is not well matched to interactive multimedia applications, which maintain a fixed Frame Rate. This mismatch between the two rates (Packet Rate and Frame Rate) leads to buffering of frames at the Sender Buffer, resulting in delay and loss, and in an unacceptable reduction of quality or complete loss of service for the end-user.

    To address this issue, this thesis proposes a novel Congestion Control Mechanism, referred to as ‘TCP-friendly rate control – Fine Grain Scalable’ (TFGS), for interactive multimedia applications. The new approach allows multimedia frames to be sent as soon as they are generated, so that they reach the destination as quickly as possible and an isochronous interactive service can be provided. This is done by maintaining the Packet Rate of the Congestion Control Mechanism (CCM) at a level equivalent to the Frame Rate of the Multimedia Encoder. The response to congestion is to truncate the Packet Size, hence reducing the overall bitrate of the multimedia stream. This functionality of the CCM is referred to as Packet Size Truncation (PST), and it takes advantage of adaptive multimedia encoding, such as Fine Grain Scalable (FGS) coding, in which the multimedia frame is encoded in order of significance, from Most to Least Significant Bits. The Multimedia Adaptation Manager (MAM) truncates the multimedia frame to the size indicated by the PST function of the CCM, accurately mapping user demand to the available network resource. Additionally, Fine Grain Scalable encoding can offer scalability at byte-level granularity, providing a true match to the available network resources.

    This approach achieves a ‘fair share’ of network resources when competing with responsive flows (similarly to the TFRC CCM), but it also provides an isochronous service, which is of crucial benefit to real-time interactive services. Furthermore, results illustrate that an increased number of interactive multimedia flows (such as voice) can be carried over congested networks whilst maintaining a quality level equivalent to that of a standard landline telephone, because the loss and delay arising from the buffering of frames at the Sender Buffer are completely removed. Packets are sent with a fixed inter-packet-gap spacing (IPGS), so the majority of packets arrive at the receiver at tight time intervals. This avoids the need for large Playout (de-jitter) Buffer sizes and adaptive Playout Buffer configurations, which in turn reduces delay and improves the interactivity and Quality of Experience (QoE) of the multimedia application.
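
    As an illustration of the Packet Size Truncation idea described above, the following Python sketch shows one way a sender could keep its packet rate equal to the frame rate and truncate a significance-ordered (FGS) frame to fit whatever bitrate the congestion controller currently allows. The constants, names and the simple rate mapping are assumptions for illustration only, not the thesis's implementation.

        # Illustrative sketch (assumed, not the thesis implementation): Packet Size
        # Truncation (PST) for a fixed-frame-rate sender using a significance-ordered
        # (FGS) frame. One packet is sent per frame; congestion shrinks the payload.

        FRAME_RATE = 50      # frames (and hence packets) per second, kept constant
        HEADER_BYTES = 40    # assumed per-packet IP/UDP/RTP overhead
        MIN_PAYLOAD = 1      # always send at least the most significant byte

        def allowed_packet_size(allowed_bitrate_bps: float) -> int:
            """Map the congestion-controlled bitrate to a per-packet payload budget:
            the packet rate stays fixed, so only the packet size adapts."""
            budget = int(allowed_bitrate_bps / 8 / FRAME_RATE) - HEADER_BYTES
            return max(MIN_PAYLOAD, budget)

        def send_frame(fgs_frame: bytes, allowed_bitrate_bps: float) -> bytes:
            """Truncate a most-to-least-significant (FGS) frame to the PST budget;
            dropping the tail removes only the least significant refinement bits."""
            return fgs_frame[:allowed_packet_size(allowed_bitrate_bps)]

        # Example: a 1500-byte FGS frame under a 200 kbit/s allowance
        print(len(send_frame(bytes(1500), 200_000)))   # 460-byte payload, same frame rate

    Because the frame is ordered from most to least significant bits, truncating the tail degrades quality gracefully instead of buffering or dropping whole frames, which is what preserves the isochronous service.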

    Antitrust Oversight of an Antitrust Dispute: An Institutional Perspective on the Net Neutrality Debate

    The term "net neutrality" describes various proposals for regulatory intervention in the Internet marketplace. For example, under one type of proposal embodied in pending legislation, regulators would ban a broadband Internet access provider (such as Comcast or Verizon) from reaching commercial agreements with particular applications and content providers to provide the sophisticated quality-of-service techniques needed to support unusually performance-sensitive applications and content, such as real-time video streaming or multiplayer online videogames. Such proposals will likely be, one way or the other, a principal focus of telecommunications policy for the next decade.They have captured the attention of Congress, where several bills on the topic have been introduced; of legal, economic, and technology scholars across the ideological spectrum; and, of principal interest here, two key federal agencies: the Federal Communications Commission and the Federal Trade Commission. Most discussions of net neutrality focus on the merits of the debate: on the substantive costs and benefits of government intervention in the broadband market. This paper focuses instead on the comparatively neglected institutional dimension of the debate: an inquiry into which federal agencies are best positioned to resolve net neutrality disputes when they arise. As the paper argues, the net neutrality controversy is best understood as a classic antitrust dispute about "vertical leveraging," and the institutions most likely to appreciate the economic complexities of that dispute are the nation's specialized antitrust agencies: the Justice Department and the FTC. Because these agencies regulate the economy at large rather than a single industry, they are less vulnerable than the FCC to capture by industry factions; they are less likely to develop industry-specific bureaucracies with incentives to keep themselves relevant through over-regulation; and, because of their firm grounding in antitrust enforcement, they are more likely to resolve competition-oriented disputes dispassionately and on their economic merits.The paper thus argues for reviving in this context the competition-policy model that prevailed for much of the final quarter of the last century: a regime in which antitrust authorities, rather than industry-specific regulators, take the lead in addressing vertical-leveraging claims against providers of telecommunications transmission platforms.

    Campus Communications Systems: Converging Technologies

    This book is a rewrite of Campus Telecommunications Systems: Managing Change, a book written by ACUTA in 1995. In the past decade, our industry has experienced a thousand-fold increase in data rates as we migrated from 10 megabit links (10 million bits per second) to 10 gigabit links (10 billion bits per second); we have seen the National Telecommunications Policy completely revamped; we have seen the convergence of voice, data, and video onto one network; and we have seen many of our service providers merge into larger corporations able to offer more diverse services. When this book was last written, ACUTA meant telecommunications, convergence was a mathematical term, triple play was a baseball term, and terms such as iPod, DoS, and QoS did not exist. This book is designed to be a communications primer for new entrants into the field of communications in higher education and for veteran communications professionals who want additional information in areas other than their field of expertise. Reference books and textbooks are available on every topic discussed in this book if a more in-depth explanation is desired. Individual chapters were authored by communications professionals from various member campuses, which allowed the authors to share their years of experience (more years than many of us would care to admit to) with the community at large.
    Contents: Foreword (Walt Magnussen, Ph.D.); Preface (Ron Kovac, Ph.D.); 1. The Technology Landscape: Historical Overview (Walt Magnussen, Ph.D.); 2. Emerging Trends and Technologies (Joanne Kossuth); 3. Network Security (Beth Chancellor); 4. Security and Disaster Planning and Management (Marjorie Windelberg, Ph.D.); 5. Student Services in a University Setting (Walt Magnussen, Ph.D.); 6. Administrative Services (David E. O'Neill); 7. The Business Side of Information Technology (George Denbow); 8. The Role of Consultants (David C. Metz); Glossary (Michelle Narcavag)

    IP-based virtual private networks and proportional quality of service differentiation

    IP-based virtual private networks (VPNs) have the potential to deliver cost-effective, secure, private network-like services. Having surveyed current enabling techniques, an overall picture of IP VPN implementations is presented. In order to provision QoS equivalent to that of legacy connection-oriented layer 2 VPNs (e.g., Frame Relay and ATM), IP VPNs have to overcome the intrinsically best-effort character of the Internet. A hierarchical QoS guarantee framework for IP VPNs is therefore proposed, stitching together recent research and engineering progress. To differentiate IP VPN QoS, the proportional QoS differentiation model, whose QoS specification granularity is a compromise between that of IntServ and DiffServ, emerges as a potential solution. Its claimed capability of providing predictable and controllable QoS differentiation is then investigated. With respect to loss rate differentiation, the packet shortage phenomenon exhibited by two classical proportional loss rate (PLR) dropping schemes is studied. In the pursuit of a feasible solution, the option of compromising the system resource, that is, the buffer, is ruled out; instead, an enhanced debt-aware mechanism is suggested to relieve the negative effects of packet shortage. Simulation results show that the debt-aware mechanism partially curbs the biased loss rate ratios and improves queueing delay performance as well. With respect to delay differentiation, the dynamic behaviour of the average delay difference between successive classes is first analysed, to gain insight into the system dynamics. Two classical delay differentiation mechanisms, proportional average delay (PAD) and waiting time priority (WTP), are then simulated and discussed. Based on observations of their differentiation performance over both short and long time periods, a combined delay differentiation (CDD) scheme is introduced and validated by simulation. Both loss and delay differentiation are based on a series of differentiation parameters. Although previous work exists on the selection of delay differentiation parameters, the selection of loss differentiation parameters has mostly relied on network operators' experience. A quantitative guideline, based on the principles of queueing and optimization, is therefore proposed to compute loss differentiation parameters. The new approach is substantiated by both analysis and numerical results.
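
    To make the proportional differentiation model more concrete, the following Python sketch shows a minimal waiting time priority (WTP) scheduler of the kind discussed above: each class i has a delay differentiation parameter delta_i, and the scheduler serves the class whose head-of-line packet has the largest waiting time normalised by its parameter, steering average delays towards the ratio d_i/d_j = delta_i/delta_j. The class structure, names and parameter values are assumptions for illustration, not the schemes evaluated in the thesis.

        # Illustrative sketch (assumed): waiting time priority (WTP) scheduling for
        # proportional delay differentiation between traffic classes.
        import collections
        import time

        class WTPScheduler:
            def __init__(self, deltas):
                self.deltas = deltas                            # e.g. {0: 1.0, 1: 2.0, 2: 4.0}
                self.queues = {c: collections.deque() for c in deltas}

            def enqueue(self, cls, packet):
                self.queues[cls].append((time.monotonic(), packet))

            def dequeue(self):
                """Serve the class whose head-of-line packet has the largest
                waiting time divided by its differentiation parameter."""
                now = time.monotonic()
                best_cls, best_prio = None, -1.0
                for cls, q in self.queues.items():
                    if q:
                        prio = (now - q[0][0]) / self.deltas[cls]   # WTP priority
                        if prio > best_prio:
                            best_cls, best_prio = cls, prio
                return None if best_cls is None else self.queues[best_cls].popleft()[1]

    With deltas of 1, 2 and 4, for example, a class-0 packet reaches the same priority as a class-2 packet after waiting only a quarter as long, so under load the average delays tend towards the 1:2:4 target ratio.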

    Interoperability of wireless communication technologies in hybrid networks : evaluation of end-to-end interoperability issues and quality of service requirements

    Hybrid Networks employing wireless communication technologies have brought closer the vision of communication “anywhere, any time with anyone”. These communication technologies encompass various standards, protocols, architectures, characteristics, models, devices, and modulation and coding techniques. The different technologies naturally share some common characteristics, but there are also many important differences, and new advances are emerging very rapidly, bringing new models, characteristics, protocols and architectures. This rapid evolution imposes many challenges and issues to be addressed, and of particular importance are the interoperability issues of the following wireless technologies: Wireless Fidelity (Wi-Fi) IEEE 802.11, Worldwide Interoperability for Microwave Access (WiMAX) IEEE 802.16, Single Channel per Carrier (SCPC), Digital Video Broadcasting via Satellite (DVB-S/DVB-S2), and Digital Video Broadcasting Return Channel via Satellite (DVB-RCS). Because of the differences amongst these wireless technologies, they do not generally interoperate easily with each other, owing to various interoperability and Quality of Service (QoS) issues.

    The aim of this study is to assess and investigate end-to-end interoperability issues and QoS requirements, such as bandwidth, delay, jitter, latency, packet loss, throughput, TCP performance, UDP performance, unicast and multicast services, and availability, on hybrid wireless communication networks employing both satellite broadband and terrestrial wireless technologies. The thesis provides an introduction to wireless communication technologies, followed by a review of previous research on Hybrid Networks (both satellite and terrestrial wireless technologies, particularly Wi-Fi, WiMAX, DVB-RCS and SCPC). Previous studies have discussed Wi-Fi, WiMAX, DVB-RCS, SCPC and 3G technologies and their standards, as well as their properties and characteristics, such as operating frequency, bandwidth, data rate, basic configuration, coverage, power, interference, social issues, security problems, and physical and MAC layer design and development issues. Although some of these studies provide valuable contributions to this area of research, they are limited to link layer characteristics, TCP performance, delay, bandwidth, capacity, data rate and throughput. None of them covers all aspects of end-to-end interoperability issues and QoS requirements, such as bandwidth, delay, jitter, latency, packet loss, link performance, TCP and UDP performance, and unicast and multicast performance, at the end-to-end level on hybrid wireless networks.

    Interoperability issues are discussed in detail, and the different technologies and protocols are compared using appropriate testing tools, assessing various performance measures including bandwidth, delay, jitter, latency, packet loss, throughput and availability. The standards, protocol suites/models and architectures for Wi-Fi, WiMAX, DVB-RCS and SCPC, alongside different platforms and applications, are discussed and compared. Using a robust approach, which includes a new testing methodology and a generic test plan, the testing was conducted using realistic test scenarios on real networks comprising variable numbers and types of nodes. Data, traces, packets and files were captured from various live scenarios and sites. The test results were analysed in order to measure and compare the characteristics of wireless technologies, devices, protocols and applications.

    The motivation of this research is to study end-to-end interoperability issues and QoS requirements for rapidly growing Hybrid Networks in a comprehensive and systematic way. Its significance lies in being based on a comprehensive and systematic investigation of issues and facts, rather than on hypothetical scenarios or simulations, which informed the design of a test methodology for empirical data gathering through real network testing, suitable for measuring hybrid network single-link or end-to-end issues using proven test tools. This systematic investigation encompasses an extensive series of tests measuring delay, jitter, packet loss, bandwidth, throughput, availability, audio and video session performance, and multicast and unicast performance, together with stress testing. The testing covers the most common test scenarios in hybrid networks and yields recommendations for achieving good end-to-end interoperability and QoS. Contributions of the study include the identification of gaps in the research, a description of interoperability issues, a comparison of the most common test tools, the development of a generic test plan, a new testing process and methodology, and analysis and network design recommendations for end-to-end interoperability issues and QoS requirements, covering the complete cycle of this research.

    It is found that UDP is more suitable than TCP for hybrid wireless networks, particularly for the demanding applications considered, since TCP presents significant problems for multimedia and live traffic with strict QoS requirements on delay, jitter, packet loss and bandwidth. The main bottleneck for satellite communication is the delay of approximately 600 to 680 ms caused by the long propagation distance (and the finite speed of light) when communicating over geostationary satellites. Delay and packet loss can be controlled using various methods, such as traffic classification, traffic prioritisation, congestion control, buffer management, delay compensators, protocol compensators, automatic repeat request techniques, flow scheduling, and bandwidth allocation.
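
    As an example of the kind of per-packet measurement these tests rely on, the Python sketch below computes average one-way delay, RFC 3550-style smoothed interarrival jitter and a loss ratio from timestamped, sequence-numbered samples. It is an assumed helper written for illustration, not one of the test tools used in the study.

        # Minimal sketch (assumed): summarise delay, jitter and loss from
        # (seq, send_time_s, recv_time_s) samples; lost packets are simply absent.
        def summarise(samples):
            delays = [rx - tx for _, tx, rx in samples]
            jitter = 0.0
            for (_, tx0, rx0), (_, tx1, rx1) in zip(samples, samples[1:]):
                d = abs((rx1 - rx0) - (tx1 - tx0))      # transit-time variation
                jitter += (d - jitter) / 16.0           # RFC 3550 smoothing gain of 1/16
            expected = samples[-1][0] - samples[0][0] + 1
            return {
                "avg_delay_ms": 1000 * sum(delays) / len(delays),
                "jitter_ms": 1000 * jitter,
                "loss_ratio": 1.0 - len(samples) / expected,
            }

        # Example: three packets with roughly 620 ms one-way delay over a geostationary link
        print(summarise([(1, 0.00, 0.620), (2, 0.02, 0.645), (3, 0.04, 0.660)]))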

    Network delay control through adaptive queue management

    Timeliness in delivering packets for delay-sensitive applications is an important QoS (Quality of Service) measure in many systems, notably those that need to provide real-time performance. In such systems, if delay-sensitive traffic is delivered to the destination beyond its deadline, the packets are rendered useless and are dropped after being received. Bandwidth that is already scarce and shared between network nodes is thus wasted in relaying these expired packets. This thesis proposes that a deterministic per-hop delay can be achieved by using a dynamic queue threshold concept to bound the delay at each node; a deterministic per-hop delay is a key component in guaranteeing a deterministic end-to-end delay. The research aims to develop a generic approach that can constrain the network delay of delay-sensitive traffic in a dynamic network. Two adaptive queue management schemes, DTH (Dynamic THreshold) and ADTH (Adaptive DTH), are proposed to realise this claim. Both DTH and ADTH use the dynamic threshold concept to constrain queuing delay, so that a bounded average queuing delay can be achieved by the former and a bounded maximum nodal delay by the latter. DTH is an analytical approach, which uses queuing theory with a superposition of N MMBP-2 (Markov Modulated Bernoulli Process) arrival processes to obtain a mapping between average queuing delay and an appropriate queue threshold for queue management. ADTH, in contrast, is a measurement-based algorithmic approach that responds to the time-varying link quality and network dynamics of wireless ad hoc networks to constrain network delay: it manages a queue based on system performance measurements and on feedback of the error measured against a target delay requirement. Numerical analysis and Matlab simulation have been carried out for DTH for validation and performance analysis, while ADTH has been evaluated in NS-2 simulation and implemented in a multi-hop wireless ad hoc network testbed for performance analysis. Results show that DTH and ADTH can constrain network delay according to the specified delay requirements, with higher packet loss as a trade-off.
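
    The following Python sketch illustrates the general idea behind a measurement-driven queue threshold in the spirit of ADTH: a simple proportional feedback rule shrinks the admission threshold when measured delay exceeds the target and grows it back when there is headroom. The class, gain and limits are assumptions for illustration and do not reproduce the ADTH algorithm itself.

        # Illustrative sketch (assumed, not the ADTH algorithm): adapt a queue-length
        # threshold from the error between measured nodal delay and a delay target.
        class AdaptiveThresholdQueue:
            def __init__(self, target_delay_s, min_thresh=5, max_thresh=500, gain=50.0):
                self.target = target_delay_s
                self.min_thresh, self.max_thresh = min_thresh, max_thresh
                self.gain = gain
                self.threshold = float(max_thresh)       # start permissive
                self.queue = []

            def on_delay_sample(self, measured_delay_s):
                """Proportional feedback: positive error (headroom) raises the
                threshold, negative error (deadline pressure) lowers it."""
                error = self.target - measured_delay_s
                self.threshold += self.gain * error
                self.threshold = max(self.min_thresh, min(self.max_thresh, self.threshold))

            def enqueue(self, packet):
                if len(self.queue) >= int(self.threshold):
                    return False                         # drop: likely to expire anyway
                self.queue.append(packet)
                return True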

    Best effort QoS support routing in mobile ad hoc networks

    In the past decades, mobile traffic generated by devices such as smartphones, iPhones, laptops and mobile gateways has been growing rapidly. While traditional direct-connection techniques have evolved to provide better access to the Internet, a new type of wireless network, the mobile ad hoc network (MANET), has emerged. A MANET differs from a direct-connection network in that it is multi-hop and self-organizing, and thus able to operate without the help of pre-installed infrastructure. However, challenges such as dynamic topology, unreliable wireless links and resource constraints impede the wide application of MANETs. Routing in a MANET is complex because it has to react efficiently to unfavourable conditions and support traditional IP services. In addition, Quality of Service (QoS) provision is required to support the rapid growth of video in mobile traffic. As a consequence, tremendous effort has been devoted to the design of QoS routing in MANETs, leading to the emergence of a number of QoS support techniques. However, the application-independent nature of QoS routing protocols means there is no one-for-all solution for MANETs, and the relative importance of QoS metrics in real applications is not considered in many studies. A Best Effort QoS support (BEQoS) routing model, which evaluates and ranks alternative routing protocols by considering the relative importance of multiple QoS metrics, is proposed in this thesis. BEQoS comprises two algorithms, SAW-AHP and FPP, for different scenarios: the former is suitable for cases where uncertainty factors such as the standard deviation can be neglected, while the latter takes the uncertainty of the problem into account. SAW-AHP is a combination of Simple Additive Weighting and the Analytic Hierarchy Process, in which the decision maker or network operator is first required to express his or her preference for each metric as a specific number according to given rules. The comparison matrices are composed accordingly, and from them the synthetic weights of the alternatives are obtained; the alternative with the highest weight is the optimal protocol. The reliability and efficiency of SAW-AHP are validated through simulations. An integrated architecture using the evaluation results of SAW-AHP is proposed, which incorporates ad hoc technology into the existing WLAN and thereby provides a solution to the last-mile access problem. The cost and gains induced by protocol selection are also discussed, and the potential application areas of the proposed method are described. SAW-AHP is then extended with fuzzy set theory to accommodate the vagueness of the decision maker and problem complexities such as the standard deviation observed in simulations: fuzzy triangular numbers substitute for the crisp numbers in the comparison matrices of traditional AHP, and Fuzzy Preference Programming (FPP) is employed to obtain crisp synthetic weights for the alternatives, based on which they are ranked. The reliability and efficiency of SAW-FPP are demonstrated by simulations.
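
    As a rough illustration of how AHP and SAW combine in this kind of ranking, the Python sketch below derives metric weights from a pairwise comparison matrix (using the row geometric mean approximation of the AHP priority vector) and ranks alternative protocols by their weighted, normalised scores. The judgement values and per-protocol metric scores are invented for illustration and are not taken from the thesis.

        # Illustrative sketch (assumed): AHP weights plus Simple Additive Weighting (SAW)
        # to rank candidate routing protocols over several QoS metrics.
        import numpy as np

        def ahp_weights(pairwise):
            """Approximate the AHP priority vector via the row geometric mean."""
            gm = np.prod(pairwise, axis=1) ** (1.0 / pairwise.shape[1])
            return gm / gm.sum()

        def saw_rank(scores, weights):
            """Normalise each (benefit) metric column to [0, 1] and take weighted sums."""
            return (scores / scores.max(axis=0)) @ weights

        # Metrics: throughput, 1/delay, delivery ratio (all treated as benefit criteria).
        pairwise = np.array([[1.0, 2.0, 3.0],     # throughput judged against the others
                             [0.5, 1.0, 2.0],
                             [1/3, 0.5, 1.0]])
        weights = ahp_weights(pairwise)

        # Rows: candidate protocols (e.g. AODV, OLSR, DSR); columns: the three metrics.
        scores = np.array([[250.0, 1 / 0.040, 0.92],
                           [300.0, 1 / 0.055, 0.88],
                           [220.0, 1 / 0.030, 0.95]])
        ranking = saw_rank(scores, weights)
        print(weights, int(ranking.argmax()))     # index of the highest-weighted protocol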

    An Introduction to Computer Networks

    An open textbook for undergraduate and graduate courses on computer networks

    Syringa Networks v. Idaho Department of Administration Clerk's Record v. 1 Dckt. 38735

    https://digitalcommons.law.uidaho.edu/idaho_supreme_court_record_briefs/1519/thumbnail.jp