
    Design And Analysis Of Scalable Video Streaming Systems

    Despite the advancement in multimedia streaming technology, many multimedia applications still face major challenges, including provision of Quality-of-Service (QoS), system scalability, limited resources, and cost. In this dissertation, we develop and analyze a new set of metrics based on two particular video streaming systems, namely (1) a Video-on-Demand (VOD) system with video advertisements and (2) an Automated Video Surveillance (AVS) system. We address the main issues in the design of commercial VOD systems: scalability and support of video advertisements. We develop a scalable delivery framework for streaming media content with video advertisements. The delivery framework combines the benefits of stream merging and periodic broadcasting. In addition, we propose new scheduling policies that are well suited to the proposed delivery framework. We also propose a new scheme for predicting ad viewing times, called Assign Closest Ad Completion Time (ACA). Moreover, we propose an enhanced business model in which the revenue generated from advertisements is used to subsidize the price. Additionally, we investigate the support of targeted advertisements, whereby clients receive ads that are well suited to their interests and needs. Furthermore, we provide clients with the ability to select from multiple price options, each with an associated expected number of viewed ads. We provide a detailed analysis of the proposed VOD system, considering a realistic workload and a wide range of design parameters. In the second system, Automated Video Surveillance (AVS), we consider the system design for optimizing subject recognition probabilities. We focus on the management and control of multiple Pan-Tilt-Zoom (PTZ) video cameras. In particular, we develop a camera management solution that provides the best tradeoff between subject recognition probability and time complexity. We consider both subject grouping and clustering mechanisms. In subject grouping, we propose the Grid Based Grouping (GBG) and the Elevator Based Planning (EBP) algorithms. In the clustering approach, we propose the GBG with Clustering (GBGC) and the EBP with Clustering (EBPC) algorithms. We characterize the impact of various factors on recognition probability. These factors include resolution, pose, and zoom-distance noise. We provide a detailed analysis of the camera management solution, considering a realistic workload and system design parameters.
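
    As a purely illustrative aid, the following is a minimal sketch of a grid-based grouping step in the spirit of the GBG algorithm named above, assuming that subjects are grouped by the square grid cell their planar coordinates fall into; the cell size, the data layout, and the subsequent camera-assignment step are not specified in the abstract and are hypothetical here.

```python
from collections import defaultdict

def grid_based_grouping(subjects, cell_size):
    """Group subjects by the grid cell their (x, y) position falls into.

    subjects: iterable of (subject_id, x, y) tuples.
    cell_size: side length of a square grid cell (same unit as x and y).
    Returns a dict mapping cell index (i, j) -> list of subject ids.
    """
    groups = defaultdict(list)
    for subject_id, x, y in subjects:
        cell = (int(x // cell_size), int(y // cell_size))
        groups[cell].append(subject_id)
    return dict(groups)

# Example: two of the three subjects share a 10 m x 10 m cell and could be
# covered by a single PTZ camera assignment.
subjects = [("s1", 3.0, 4.0), ("s2", 7.5, 8.2), ("s3", 25.0, 14.0)]
print(grid_based_grouping(subjects, cell_size=10.0))
# {(0, 0): ['s1', 's2'], (2, 1): ['s3']}
```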

    Maximizing Resource Utilization In Video Streaming Systems

    Video streaming has recently grown dramatically in popularity over the Internet, Cable TV, and wireless networks. Because of the resource-demanding nature of video streaming applications, maximizing resource utilization in any video streaming system is a key factor in increasing the scalability and decreasing the cost of the system. Resources to utilize include server bandwidth, network bandwidth, battery life in battery-operated devices, and processing time in devices with limited processing power. In this work, we propose new techniques to maximize the utilization of video-on-demand (VOD) server resources. In addition, we propose a new framework to maximize the utilization of network bandwidth in wireless video streaming systems. Providing video streaming users in a VOD system with expected waiting times enhances their perceived quality-of-service (QoS) and encourages them to wait, thereby increasing server throughput and utilization. In this work, we analyze waiting-time predictability in scalable video streaming. We also propose two prediction schemes and study their effectiveness when applied with various stream merging techniques and scheduling policies. The results demonstrate that the waiting time can be predicted accurately, especially when enhanced cost-based scheduling is applied. The combination of waiting-time prediction and cost-based scheduling leads to outstanding performance benefits. The resource sharing achieved by stream merging depends greatly on how the waiting requests are scheduled for service. Motivated by the development of cost-based scheduling, we investigate its effectiveness in detail and discuss opportunities for further tuning and enhancements. Additionally, we analyze the effectiveness of incorporating video prediction results into the scheduling decisions. We also study the interaction between scheduling policies and stream merging techniques and explore new ways for enhancements. The interest in video surveillance systems has grown dramatically during the last decade. Automated video surveillance (AVS) serves as an efficient approach for the real-time detection of threats and for monitoring their progress. Wireless networks in AVS systems have limited available bandwidth that has to be estimated accurately and distributed efficiently. In this research, we develop two cross-layer optimization frameworks that optimize bandwidth utilization in 802.11 wireless networks. We develop a distortion-based cross-layer optimization framework that manages bandwidth in the wireless network so as to minimize the overall distortion. We also develop an accuracy-based cross-layer optimization framework in which the overall detection accuracy of the computer vision algorithm(s) running in the system is maximized. Both proposed frameworks manage the application rates and transmission opportunities of the various video sources based on the dynamic network conditions to achieve their goals. Each framework utilizes a novel online approach for estimating the effective airtime of the network. Moreover, we propose a bandwidth pruning mechanism that can be used with the accuracy-based framework to achieve any desired tradeoff between detection accuracy and power consumption. We demonstrate the effectiveness of the proposed frameworks, including the effective airtime estimation algorithms and the bandwidth pruning mechanism, through extensive experiments using OPNET.
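
    To make the scheduling idea concrete, here is a hypothetical sketch of a cost-based selection rule: among the videos with waiting requests, the server serves the one whose estimated delivery cost is lowest per waiting request. The cost model, data layout, and function names are illustrative assumptions, not the dissertation's actual policy.

```python
def select_queue_cost_based(queues):
    """Pick the waiting queue with the lowest delivery cost per waiting request.

    queues: list of dicts with keys
        'video'   - video identifier
        'waiting' - number of requests currently waiting for this video
        'cost'    - estimated cost (e.g., stream-seconds) of serving them now
    Returns the video to serve next, or None if nothing is waiting.
    """
    candidates = [q for q in queues if q["waiting"] > 0]
    if not candidates:
        return None
    return min(candidates, key=lambda q: q["cost"] / q["waiting"])["video"]

# Example: video B amortizes its stream cost over more waiting clients,
# so it is scheduled first.
queues = [
    {"video": "A", "waiting": 2, "cost": 120.0},
    {"video": "B", "waiting": 5, "cost": 150.0},
]
print(select_queue_cost_based(queues))  # -> 'B'
```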

    The Economics of Net Neutrality: Implications of Priority Pricing in Access Networks

    This work systematically analyzes Net Neutrality from an economic point of view. To this end, a framework is developed which helps to structure the Net Neutrality debate. Furthermore, the introduction of prioritization is studied by analyzing potential effects of Quality of Service (QoS) on Content and Service Providers (CSPs) and Internet Users (IUs).

    Measuring And Improving Internet Video Quality Of Experience

    Streaming multimedia content over the IP network is poised to be the dominant Internet traffic for the coming decade, predicted to account for more than 91% of all consumer traffic in the coming years. Streaming multimedia content includes Internet television (IPTV), video on demand (VoD), peer-to-peer streaming, and 3D television over IP, to name a few. Widespread acceptance, growth, and subscriber retention are contingent upon network providers assuring superior Quality of Experience (QoE) on top of today's Internet. This work presents the first empirical understanding of the Internet's video-QoE capabilities, along with tools and protocols to efficiently infer and improve them. To infer video-QoE at arbitrary nodes in the Internet, we design and implement MintMOS: a lightweight, real-time, no-reference framework for capturing perceptual quality. We demonstrate that MintMOS's projections closely match subjective surveys in assessing perceptual quality. We use MintMOS to characterize Internet video-QoE both at the link level and at the end-to-end path level. As an input to our study, we use extensive measurements from a large number of Internet paths obtained from various measurement overlays deployed using PlanetLab. Link-level degradations of intra- and inter-ISP Internet links are studied to create an empirical understanding of their shortcomings and ways to overcome them. Our studies show that intra-ISP links are often poorly engineered compared to peering links, and that degradations are induced by transient network load imbalance within an ISP. Initial results also indicate that overlay networks could be a promising way to avoid such ISPs in times of degradation. A large number of end-to-end Internet paths are probed, and we measure delay, jitter, and loss rates. The measurement data is analyzed offline to identify ways to enable a source to select alternate paths in an overlay network to improve video-QoE, without the need for background monitoring or a priori knowledge of path characteristics. We establish that for any unstructured overlay of N nodes, it is sufficient to reroute key frames using a random subset of k nodes in the overlay, where k is bounded by O(ln N). We analyze various properties of such random subsets to derive a simple, scalable, and efficient path selection strategy that results in a k-fold increase in path options for any source-destination pair; options that consistently outperform Internet path selection. Finally, we design a prototype called source initiated frame restoration (SIFR) that employs random subsets to derive alternate paths and demonstrate its effectiveness in improving Internet video-QoE.
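
    The rerouting result lends itself to a small sketch: for an overlay of N nodes, sample k = ceil(c * ln N) random nodes as candidate relays and reroute key frames through the best-measured detour. The constant c, the quality metric, and the function names below are illustrative assumptions only.

```python
import math
import random

def candidate_relays(overlay_nodes, c=2.0, seed=None):
    """Sample a random subset of k = ceil(c * ln N) overlay nodes as relays.

    overlay_nodes: list of node identifiers in the unstructured overlay.
    c: constant factor in the O(ln N) bound (illustrative value).
    Each sampled relay defines one alternate source -> relay -> destination path.
    """
    n = len(overlay_nodes)
    if n == 0:
        return []
    k = min(n, max(1, math.ceil(c * math.log(n))))
    return random.Random(seed).sample(overlay_nodes, k)

# Example: in a 100-node overlay, only ~10 random relays need to be probed
# before rerouting key frames over the best-measured detour path.
nodes = [f"node{i}" for i in range(100)]
relays = candidate_relays(nodes, seed=1)
quality = {r: random.random() for r in relays}  # stand-in for measured loss/delay
print(len(relays), min(relays, key=quality.get))
```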

    The economic effects of network neutrality: a policy perspective

    Network neutrality - regulation of Internet service providers (ISPs) to ensure equal treatment of all traffic - has entered public awareness. While the context is technical, network neutrality ultimately boils down to economics. The political weight of the subject is heavy, and the international debate is fierce. Still, surprisingly little rigorous research appears to be behind it. In this paper, I review the economic literature on network neutrality and ISP regulation, covering both practical and theoretical implications for the broadband market. I define the degrees of network neutrality with more granularity than previous papers, evaluate the qualitative economic effects of regulation, and describe the broadband market, frameworks for modeling it, and its peculiar economic characteristics. In particular, I review and compare different theoretical modeling approaches and the models' predictions of the welfare effects of different regulatory regimes. Throughout the paper, I incorporate economic literature from relevant areas into the analysis. I do not make definite policy recommendations, but I draw conclusions that are potentially of interest from a policy point of view. My analysis would indicate that the complexity of the Internet ecosystem and the interrelations between market participants make effective regulation difficult. There is no economic evidence that network neutrality generally increases total welfare. In fact, from a well-rounded economic perspective, strong network neutrality appears in most cases to be detrimental to both consumer surplus and total welfare. In certain scenarios, however, models predict that neutrality can increase static and dynamic efficiency. The results depend crucially on model specifications and parameters, which differ significantly across the literature. So far, there is no consensus among economists on the optimal level of ISP regulation. Market-driven solutions such as dynamic pricing might provide a way to circumvent the neutrality question.

    Service management for multi-domain Active Networks

    The Internet is an example of a multi-agent system. In our context, an agent is synonymous with network operators, Internet service providers (ISPs), and content providers. ISPs mutually interact for connectivity's sake, but the fact remains that two peering agents are inevitably self-interested. Egoistic behaviour manifests itself in two ways. First, ISPs act in an environment where different ISPs have different spheres of influence, in the sense that they have control and management responsibilities over different parts of the environment. Second, contention occurs when an ISP intends to sell resources to another, which gives rise to at least two of its customers sharing (and hence contending for) a common transport medium. The multi-agent interaction was analysed by simulating a game-theoretic approach, and the alignment of dominant strategies adopted by agents with evolving traits was abstracted. In particular, the contention for network resources is arbitrated such that a self-policing environment may emerge from a congested bottleneck. Over the past five years, larger ISPs have simply pedalled as fast as they could to meet the growing demand for bandwidth by throwing bandwidth at congestion problems. Today, the dire financial positions of Worldcom and Global Crossing illustrate, to a certain degree, the fallacies of over-provisioning network resources. The framework proposed in this thesis enables subscribers of an ISP to monitor and police each other's traffic in order to establish a well-behaved norm in utilising limited resources. This framework can be expanded to other inter-domain bottlenecks within the Internet. One of the main objectives of this thesis is also to investigate the impact on multi-domain service management in the future Internet, where active nodes could potentially be located amongst traditional passive routers. The advent of Active Networking technology necessitates node-level computational resource allocation, in addition to prevailing resource reservation approaches for communication bandwidth. Our motivation is to ensure that a service negotiation protocol takes account of these resources so that the response to a specific service deployment request from the end-user is consistent and predictable. To promote the acceleration of service deployment by means of Active Networking technology, a pricing model is also evaluated for computational resources (e.g., CPU time and memory). Previous work in these areas of research concentrates only on bandwidth-related (i.e., communication) resources. Our pricing approach takes account of both guaranteed and best-effort service by adapting the arbitrage theorem from financial theory. The central tenet of our approach is to synthesise insights from different disciplines to address problems in data networks. The greater part of the research experience was obtained through direct and indirect participation in the IST-10561 project known as FAIN (Future Active IP Networks) and the ACTS-AC338 project called MIAMI (Mobile Intelligent Agent for Managing the Information Infrastructure). The Inter-domain Manager (IDM) component was integrated as an integral part of the FAIN policy-based network management (PBNM) system. Its monitoring component (developed during the MIAMI project) learns about routing changes that occur within a domain so that the management system and the managed nodes have the same topological view of the network. This enabled our reservation mechanism to reserve resources along the existing route set up by whichever underlying routing protocol is in place.
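
    As a loose, hypothetical illustration of how self-policing at a congested bottleneck can be simulated, the toy model below lets subscribers sharing a fixed-capacity link flag peers that exceed the fair share, who then back off. The payoff structure, update rule, and parameters are invented for illustration and do not reproduce the thesis's game-theoretic model.

```python
def shared_bottleneck_round(rates, capacity):
    """One round at a congested bottleneck: demand above capacity is dropped
    proportionally, so throughput = rate * min(1, capacity / total_demand)."""
    total = sum(rates.values())
    share = 1.0 if total <= capacity else capacity / total
    return {user: r * share for user, r in rates.items()}

def police_and_adapt(rates, capacity, step=1.0):
    """Toy self-policing rule: peers flag users sending above the fair share,
    who then back off; users below the fair share gently probe upward."""
    fair = capacity / len(rates)
    new_rates = {}
    for user, r in rates.items():
        if r > fair:   # flagged by peers -> back off toward the fair share
            new_rates[user] = max(fair, r - step)
        else:          # well-behaved -> probe for spare capacity, up to fair share
            new_rates[user] = min(fair, r + step)
    return new_rates

# Example: three subscribers behind a 30 Mb/s bottleneck converge toward
# the 10 Mb/s fair share under the toy policing rule.
rates = {"alice": 25.0, "bob": 4.0, "carol": 8.0}
for _ in range(20):
    rates = police_and_adapt(rates, capacity=30.0)
print({u: round(r, 1) for u, r in rates.items()})
print(shared_bottleneck_round(rates, capacity=30.0))
```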

    Framework to facilitate smooth handovers between mobile IPv6 networks

    Fourth generation (4G) mobile communication networks are characterised by heterogeneous access networks and IP-based transport technologies. Different access technologies give users choices in selecting services, such as levels of Quality of Service (QoS) support, business models, and service providers. The flexibility of heterogeneous access comes at the cost of the overhead of scanning to discover accessible services, which adds to the handoff latency. This thesis developed mechanisms for service discovery and service selection, along with a novel proposal for a mobility management architecture that reduces handoff latency. The service discovery framework included a service advertisement data repository and a single frequency band access mechanism, which enabled users to explore services offered by various operators with a reduced scanning overhead. The novel hierarchical layout of the repository enabled it to categorise information into various layers and facilitate location-based information retrieval. The information made available by the repository included cost, bandwidth, Packet Loss (PL), latency, jitter, Bit Error Rate (BER), location, and service connectivity information. The single frequency band access mechanism further enabled users to explore service advertisements in the absence of their main service providers. It broadcast service advertisement information piggybacked onto router advertisement packets on a frequency band reserved for advertisements. Results indicated that scanning 13 channels on an 802.11b interface takes 189 ms, whereas executing a query with the maximum permissible search parameters on the service advertisement data repository takes 67 ms. A service selection algorithm was developed to make handoff decisions utilising the service advertisements acquired from the service discovery framework, based on the user's preferences. The selection algorithm reduced the calculation overhead by eliminating unsuitable networks from the selection process, based on interface compatibility, service provider location, unacceptable QoS, and unacceptable cost. The selection algorithm utilised cost, bandwidth, PL, latency, jitter, BER, and terminal power for computing the most suitable network. Results indicated that the elimination-based approach improved the performance of the algorithm by 35% over non-elimination-oriented selection procedures, even while utilising more selection parameters. The service discovery framework and the service selection algorithm are flexible enough to be employed in most mobility management architectures. Based on the simulation results, the thesis recommends the Seamless Mobile Internet Protocol (SMIP) as a mobility management scheme. The SMIP protocol, a combination of the Hierarchical Mobile Internet Protocol (HMIP) and the Fast Mobile Internet Protocol (FMIP), suffered increased handoff latency during global handoffs due to HMIP. The proposed modification to HMIP introduced a coverage area overlap to reduce global handoff latency. The introduction of a Home Address (HA) entry in the Wireless Local Area Network (WLAN) binding table enabled seamless handoffs from WLANs by providing a redirection mechanism for the user's packets after handoff. The thesis delivered a new mobility management architecture with mechanisms for service discovery and service selection. The proposed framework enabled a user-oriented, application-centric, and terminal-based approach to selecting IPv6 networks.
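
    A minimal sketch of the elimination-then-score idea described above, assuming hypothetical field names and weights: unsuitable networks are first removed via cheap checks (interface compatibility, cost, and minimum bandwidth standing in for the QoS and location checks), and only the survivors are scored on the full metric set.

```python
def select_network(candidates, user):
    """Two-stage selection: eliminate unsuitable networks, then score the rest.

    candidates: list of dicts describing advertised networks, e.g.
        {'name', 'interface', 'cost', 'bandwidth', 'loss', 'latency',
         'jitter', 'ber', 'power'}
    user: dict with 'interfaces' (supported), 'max_cost', 'min_bandwidth',
          and 'weights' mapping each metric to its importance.
    """
    # Stage 1: elimination of unsuitable networks (cheap checks first).
    feasible = [
        n for n in candidates
        if n["interface"] in user["interfaces"]
        and n["cost"] <= user["max_cost"]
        and n["bandwidth"] >= user["min_bandwidth"]
    ]
    if not feasible:
        return None

    # Stage 2: weighted score; lower is better for every metric except bandwidth.
    def score(n):
        w = user["weights"]
        return (w["cost"] * n["cost"] + w["loss"] * n["loss"]
                + w["latency"] * n["latency"] + w["jitter"] * n["jitter"]
                + w["ber"] * n["ber"] + w["power"] * n["power"]
                - w["bandwidth"] * n["bandwidth"])
    return min(feasible, key=score)

# Example: the WLAN survives elimination and wins on cost and bandwidth.
networks = [
    {"name": "wlan1", "interface": "802.11b", "cost": 1.0, "bandwidth": 11.0,
     "loss": 0.01, "latency": 30.0, "jitter": 5.0, "ber": 1e-5, "power": 0.8},
    {"name": "cell1", "interface": "umts", "cost": 5.0, "bandwidth": 0.384,
     "loss": 0.02, "latency": 120.0, "jitter": 20.0, "ber": 1e-4, "power": 1.2},
]
user = {"interfaces": {"802.11b", "umts"}, "max_cost": 10.0, "min_bandwidth": 0.3,
        "weights": {"cost": 1.0, "bandwidth": 0.5, "loss": 10.0, "latency": 0.05,
                    "jitter": 0.1, "ber": 1000.0, "power": 1.0}}
print(select_network(networks, user)["name"])  # -> 'wlan1'
```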