
    Transparency about net neutrality: A translation of the new European rules into a multi-stakeholder model

    The new European framework directive contains a number of policy objectives in the area of net neutrality. In support of these objectives, the universal service directive includes a transparency obligation for ISPs. This paper proposes a multi-stakeholder model for the implementation of this transparency obligation. The model is a multi-stakeholder model in the sense that it treats the content and form of the transparent information in close connection with the parties involved in the provision of the information and the processes in which they take part. Another crucial property of the model is that it distinguishes between technical and user-friendly information. This distinction makes it possible to limit the obligation on ISPs to the information that they are best positioned to provide: the technical information on the traffic management measures that they apply, e.g., which traffic streams are subject to special treatment? Which measures are applied, and when? The public availability of this technical information creates the opportunity for the other parties in the model to step in and contribute to the formulation of the user-friendly information for end users: which applications and services receive special treatment? When is their effect noticeable? It is expected that the involvement of other parties will lead to multiple, complementary routes for the formulation of the user-friendly information. Thus, the user-friendly information emerges in ways driven by market players and stakeholders that would be difficult to design and lay down in advance in the transparency obligation. Keywords: net neutrality, transparency, traffic management

    Communicating in virtual worlds through an accessible Web 2.0 solution


    View-Upload Decoupling: A Redesign of Multi-Channel P2P Video Systems

    In current multi-channel live P2P video systems, there are several fundamental performance problems, including exceedingly large channel switching delays, long playback lags, and poor performance for less popular channels. These performance problems primarily stem from two intrinsic characteristics of multi-channel P2P video systems: channel churn and channel resource imbalance. In this paper, we propose a radically different cross-channel P2P streaming framework, called View-Upload Decoupling (VUD). VUD strictly decouples peer downloading from uploading, bringing stability to multi-channel systems and enabling cross-channel resource sharing. We propose a set of peer assignment and bandwidth allocation algorithms to properly provision bandwidth among channels, and introduce substream swarming to reduce the bandwidth overhead. We evaluate the performance of VUD via extensive simulations as well as with a PlanetLab implementation. Our simulation and PlanetLab results show that VUD is resilient to channel churn and achieves lower switching delay and better streaming quality. In particular, the streaming quality of small channels is greatly improved.
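    As an illustration of the decoupling idea only (not the authors' actual peer-assignment or bandwidth-allocation algorithm), the Python sketch below provisions each peer's upload capacity across channel swarms in proportion to channel demand while ignoring which channel the peer itself is watching; the names and the simple proportional rule are illustrative assumptions.

        from dataclasses import dataclass

        @dataclass
        class Peer:
            name: str
            viewing: str        # channel this peer watches (download side)
            upload_kbps: float  # upload capacity (upload side, decoupled from viewing)

        def provision_upload(peers, channel_demand_kbps):
            """Split each peer's upload across channels in proportion to demand.

            The allocation never looks at peer.viewing: the upload side is
            decoupled from what the peer downloads, which is the core VUD idea.
            """
            total_demand = sum(channel_demand_kbps.values())
            assignment = {p.name: {} for p in peers}
            for p in peers:
                for channel, demand in channel_demand_kbps.items():
                    assignment[p.name][channel] = p.upload_kbps * demand / total_demand
            return assignment

        # Hypothetical example: a small channel ("news") and a popular one ("sports").
        peers = [Peer("a", viewing="news", upload_kbps=800),
                 Peer("b", viewing="sports", upload_kbps=400)]
        demand = {"news": 300, "sports": 900}
        print(provision_upload(peers, demand))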

    Legislative and Regulatory Strategies for Providing Consumer Safeguards in a Convergent Information and Communications Marketplace

    The Federal Communications Commission desires to apply a single regulatory category to services and service providers, a process the Commission can achieve when ventures concentrate on one function and offer one readily identifiable service, such as telephony. However, technological convergence, digitization, and the ability of the Internet to handle many different service types within a single bitstream now make it possible for companies to offer quadruple-play bundles of wireless and wireline telephony, video, and Internet access services. Following Comcast Corp. v. FCC, the FCC must rethink how best to serve the public interest and safeguard consumers. Absent a legislative remedy, the FCC has experienced great difficulty in finding ways to sanction anticompetitive ISP practices within the Commission's limited statutory authority. This article explains how the FCC backed itself into a corner when it sought to free the Internet of most regulatory oversight by determining that the information service classification applies to all Internet access technologies. Facing complaints about anticompetitive ISP practices, the FCC currently lacks explicit statutory authority to provide a needed remedy. The article also provides recommendations on how Congress and the FCC might recognize that convergent services, such as Internet access, combine both unregulated information service and telecommunications components, in much the same way as wireless cellular telephone companies. The article recommends that, in light of the ascending importance of Internet access and the lack of sustainable competition that would foster effective self-regulation, Congress should amend the Communications Act to authorize the FCC to apply to ISPs the limited Title II safeguards that already apply to wireless telephony.

    Net Neutrality

    This book is available as open access through the Bloomsbury Open Access programme and is available on www.bloomsburycollections.com. 'Chris Marsden maneuvers through the hype articulated by Network Neutrality advocates and opponents. He offers a clear-headed analysis of the high stakes in this debate about the Internet's future, and fearlessly refutes the misinformation and misconceptions that abound.' Professor Rob Freiden, Penn State University. Net neutrality is a heated and contested policy principle regarding access for content providers to the Internet end-user, and potential discrimination in that access where the end-user's ISP (or another ISP) blocks that access in part or in whole. The suggestion has been that the problem can be resolved either by introducing greater competition or by closely policing conditions for vertically integrated services, such as VoIP. However, that is not the whole story, and ISPs as a whole have incentives to discriminate between content for matters such as network management of spam, to secure and maintain customer experience at current levels, and for economic benefit from new Quality of Service standards. This includes offering a 'priority lane' on the network for premium content types such as video and voice services. The author considers market developments and policy responses in Europe and the United States, draws conclusions, and proposes regulatory recommendations.

    Understanding the performance of Internet video over residential networks

    Video streaming applications are now commonplace among home Internet users, who typically access the Internet using DSL or cable technologies. However, the effect of these technologies on video performance, in terms of degradations in video quality, is not well understood. To enable continued deployment of applications with improved quality of experience for home users, it is essential to understand the nature of network impairments and develop means to overcome them. In this dissertation, I demonstrate the type of network conditions experienced by Internet video traffic by presenting a new dataset of the packet-level performance of real-time streaming to residential Internet users. Then, I use these packet-level traces to evaluate the performance of commonly used models for packet loss simulation and, finding the models to be insufficient, present a new type of model that more accurately captures the loss behaviour. Finally, to demonstrate how a better understanding of the network can improve video quality in a real application scenario, I use the measurements to evaluate the performance of forward error correction schemes for Internet video. I show that performance can be poor, devise a new metric to predict the performance of error recovery from the characteristics of the input, and validate that the new packet loss model allows more realistic simulations. For the effective deployment of Internet video systems to users of residential access networks, a firm understanding of these networks is required. This dissertation provides insights into the packet-level characteristics that can be expected from such networks, and techniques to realistically simulate their behaviour, promoting development of future video applications.
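    The abstract does not name the loss models it evaluates; a common baseline for packet loss simulation is a two-state Gilbert-style model, sketched below in Python purely as an illustrative example rather than the dissertation's own model. The transition probabilities are arbitrary placeholders.

        import random

        def gilbert_losses(n_packets, p_good_to_bad=0.01, p_bad_to_good=0.3,
                           loss_in_bad=1.0, seed=0):
            """Return a list of booleans, True meaning the packet was lost.

            Two-state chain: a lossless 'good' state and a lossy 'bad' state,
            producing the bursty losses typical of residential links.
            """
            rng = random.Random(seed)
            in_bad_state = False
            losses = []
            for _ in range(n_packets):
                if in_bad_state:
                    losses.append(rng.random() < loss_in_bad)
                    if rng.random() < p_bad_to_good:
                        in_bad_state = False
                else:
                    losses.append(False)  # assume the good state never drops packets
                    if rng.random() < p_good_to_bad:
                        in_bad_state = True
            return losses

        trace = gilbert_losses(100_000)
        print("overall loss rate:", sum(trace) / len(trace))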

    Architecture and Protocol to Optimize Videoconference in Wireless Networks

    In recent years, videoconferencing (VC) has become an essential means of communication. VC allows people to communicate face to face regardless of their location, and it can be used for different purposes such as business meetings, medical assistance, commercial meetings, and military operations. There are many factors in real-time video transmission that can affect the quality of service (QoS) and the quality of experience (QoE). The application that is used (Adobe Connect, Cisco Webex, or Skype), the Internet connection, or the network used for the communication can affect the QoE. Users want communication to be as good as possible in terms of QoE. In this paper, we propose an architecture for videoconferencing that provides better quality of experience than other existing applications such as Adobe Connect, Cisco Webex, and Skype. We test how these three applications behave in terms of bandwidth, packets per second, and delay using WiFi and 3G/4G connections. Finally, these applications are compared to our prototype in the same scenarios in which they were tested, and also in an SDN, in order to show the advantages of the prototype. This work has been supported by the "Ministerio de Economia y Competitividad" in the "Programa Estatal de Fomento de la Investigacion Cientifica y Tecnica de Excelencia, Subprograma Estatal de Generacion de Conocimiento" within the project under Grant TIN2017-84802-C2-1-P. Jimenez, JM.; García-Navas, JL.; Lloret, J.; Romero Martínez, JO. (2020). Architecture and Protocol to Optimize Videoconference in Wireless Networks. Wireless Communications and Mobile Computing, 2020:1-22. https://doi.org/10.1155/2020/4903420
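    A minimal Python sketch of the kind of per-second metrics the paper compares across applications (bandwidth and packets per second). The input format is an assumption here: a list of (timestamp in seconds, payload size in bytes) records such as might be exported from a packet sniffer; the authors' actual capture tooling is not specified in the abstract.

        from collections import defaultdict

        def per_second_stats(packets):
            """packets: iterable of (timestamp_s, size_bytes) tuples.

            Returns {second: (kilobits_per_second, packets_per_second)}.
            """
            buckets = defaultdict(lambda: [0, 0])  # second -> [bytes, packet count]
            for timestamp, size in packets:
                second = int(timestamp)
                buckets[second][0] += size
                buckets[second][1] += 1
            return {s: (b * 8 / 1000.0, n) for s, (b, n) in sorted(buckets.items())}

        # Hypothetical five-packet capture.
        capture = [(0.10, 1200), (0.35, 1200), (0.90, 600), (1.20, 1200), (1.75, 300)]
        for second, (kbps, pps) in per_second_stats(capture).items():
            print(f"t={second}s  bandwidth={kbps:.1f} kbit/s  packets/s={pps}")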

    Architectures and technologies for quality of service provisioning in next generation networks

    An NGN is a telecommunication network that differs from classical dedicated networks in its capability to provide voice, video, data and cellular services on the same infrastructure (quadruple play). The ITU-T standardization body has defined the NGN architecture in three distinct, well-defined strata: the transport stratum, which takes care of maintaining end-to-end connectivity; the service stratum, which is responsible for enabling the creation and delivery of services; and finally the application stratum, where applications can be created and executed. The most important separation in this architecture is between the transport and service strata. The aim is to enable the flexibility to add, maintain and remove services without any impact on the transport layer; to enable the flexibility to add, maintain and remove transport technologies without any impact on access to services, applications, content and information; and finally to enable the efficient coexistence of multiple terminals, access technologies and core transport technologies. The Service Oriented Architecture (SOA) is a paradigm often used in systems deployment and integration for organizing and utilizing distributed capabilities under the control of different ownership domains. In this thesis, SOA technologies in network architectures are surveyed following the NGN functional architecture as defined by the ITU-T. Within each stratum, the main logical functions that have been investigated according to a service-oriented approach are highlighted. Moreover, a new definition of the NGN transport stratum functionalities according to the SOA paradigm is proposed; an implementation of the relevant service interfaces is used to analyze this approach, and experimental results give some insight into the potential of the proposed strategy.
    Within the NGN research area, and especially in IP-based network architectures, Traffic Engineering (TE) refers to a set of policies and algorithms aimed at balancing network traffic load so as to improve network resource utilization and guarantee service-specific end-to-end QoS. DS-TE extends TE functionalities to a per-class implementation by introducing a higher level of traffic classification which associates with each class type (CT) a constraint on bandwidth utilization. These constraints are set by defining and configuring a bandwidth constraint (BC) model which drives resource utilization, aiming at better load balancing, higher QoS performance and a lower call blocking rate. Default TE implementations rely on a centralized approach to bandwidth and routing management, which requires external management entities that periodically collect network status information and apply management actions. However, due to increasing network complexity, it is desirable that nodes automatically discover their environment, self-configure and update themselves to adapt to changes. In this thesis, the bandwidth management problem is addressed with an autonomic and distributed approach. Each node has a self-management module, which monitors the unreserved bandwidth in adjacent nodes and adjusts the local bandwidth constraints so as to reduce the differences in the unreserved bandwidth of neighbor nodes. With this distributed and autonomic algorithm, the BCs are dynamically modified to drive routing decisions toward traffic balancing while respecting the QoS constraints of each class type's traffic requests.
    Finally, Video on Demand (VoD) is a service that provides a video whenever the customer requests it. Realizing a VoD system over the Internet requires architectures tailored to video requirements such as guaranteed bandwidth and bounded transmission delay; these are hard to provide in the traditional Internet architecture, which is not designed to deliver adequate quality of service (QoS) and quality of experience (QoE) to the end user. Typical VoD solutions can be grouped into four categories: centralized, proxy-based, Content Delivery Network (CDN) and hybrid architectures. Hybrid architectures combine the employment of a centralized server with that of a peer-to-peer (P2P) network. This approach can effectively reduce the server load and avoid network congestion close to the server site, because the peers support the delivery of the video to other peers using a cache-and-relay strategy that makes use of their upload bandwidth. However, in a peer-to-peer network each peer is free to join and leave the network without notice, giving rise to the phenomenon of peer churn. These dynamics are dangerous for VoD architectures, affecting the integrity and retainability of the service. In this thesis, a study aimed at evaluating the impact of peer churn on system performance is presented. Starting from key relationships between system parameters such as playback buffer length, peer request rate, average peer lifetime and server upload rate, four different analytic models are proposed.
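    A hedged Python sketch of the autonomic bandwidth-constraint adjustment described above: each node nudges its local bandwidth constraint (BC) toward equalizing unreserved bandwidth with its neighbours. The update rule, step size and names are illustrative assumptions, not the thesis' actual algorithm.

        def adjust_bc(local_bc, local_unreserved, neighbour_unreserved,
                      step=0.1, bc_min=0.0, bc_max=1000.0):
            """One self-management iteration at a single node (values in Mb/s)."""
            if not neighbour_unreserved:
                return local_bc
            avg_neighbour = sum(neighbour_unreserved) / len(neighbour_unreserved)
            # If this node has less unreserved bandwidth than its neighbours (i.e.
            # it is more heavily loaded), tighten the local BC so constraint-based
            # routing steers new requests toward the less loaded neighbours;
            # otherwise loosen it. The step factor keeps the adjustment gradual.
            delta = step * (local_unreserved - avg_neighbour)
            return min(bc_max, max(bc_min, local_bc + delta))

        # Example: local BC 400 Mb/s, 120 Mb/s unreserved locally, neighbours
        # report 200 and 260 Mb/s unreserved, so the local BC is tightened slightly.
        print(adjust_bc(400.0, 120.0, [200.0, 260.0]))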
