
    Adoption of authenticated Peer-to-Peer academic networks - a case study of a failure

    The use of P2P applications in universities has mainly focused on questions of file sharing and copyright violation, and little attention has been given to the development of secure, authenticated P2P applications conceived especially for academic environments. In this paper, we describe Bumerang, an authenticated campus P2P network which, despite its technological quality and top-level institutional commitment, did not reach a critical mass of users, failing at the individual adoption level. To understand the factors that contributed to this result, we present a retrospective analysis of the process of conception and diffusion, including results from the network activity. We conclude that such networks must reinforce their perceived utility, address security concerns with new approaches, and avoid using the term P2P.

    Dis-empowering Users vs. Maintaining Internet Freedom: Network Management and Quality of Service (QoS)


    User-Centric Quality of Service Provisioning in IP Networks

    The Internet has become the preferred transport medium for almost every type of communication, and it continues to grow both in the number of users and in the services delivered. Efforts have been made to ensure that time-sensitive applications receive sufficient resources and thus an acceptable Quality of Service (QoS). However, typical Internet users no longer use a single service at a given point in time; they are instead engaged in a multimedia-rich experience comprising many different concurrent services. Given the scalability problems raised by the diversity of users and traffic, in conjunction with their increasing expectations, the task of QoS provisioning can no longer be approached from the perspective of giving priority to specific traffic types over coexisting services, whether through explicit resource reservation or traffic classification using static policies, as is the case with the current approach to QoS provisioning, Differentiated Services (Diffserv). This use of static resource allocation and traffic shaping methods reveals a distinct lack of synergy between current QoS practices and user activities, highlighting the need for a QoS solution that reflects the user's services. The aim of this thesis is to investigate and propose a novel QoS architecture which considers the activities of the user and manages resources from a user-centric perspective. The research begins with a comprehensive examination of existing QoS technologies and mechanisms, arguing that current QoS practices are too static in their configuration and typically give priority to specific individual services rather than considering the user experience. The analysis also reveals the potential threat that unresponsive application traffic presents to coexisting Internet services and QoS efforts, and introduces the requirement for a balance between application QoS and fairness. This thesis proposes a novel architecture, the Congestion Aware Packet Scheduler (CAPS), which manages and controls traffic at the point of service aggregation in order to optimise the overall QoS of the user experience. The CAPS architecture, in contrast to traditional QoS alternatives, places no predetermined precedence on specific traffic types; instead, it adapts QoS policies to each individual's Internet traffic profile and dynamically controls the ratio of user services to maintain an optimised QoS experience. The rationale behind this approach is to provide a QoS-optimised experience to each Internet user, not just those using preferred services. Furthermore, unresponsive bandwidth-intensive applications, such as Peer-to-Peer, are managed fairly while minimising their impact on coexisting services. The CAPS architecture has been validated through extensive simulations, with topologies that replicate the complexity and scale of real ISP infrastructures. The results show that, for a number of different user-traffic profiles, the proposed approach achieves an improved aggregate QoS for each user when compared with Best-Effort Internet, traditional Diffserv, and Weighted-RED configurations. Furthermore, the results demonstrate that the proposed architecture not only provides an optimised QoS to the user irrespective of their traffic profile, but, by avoiding static resource allocation, can adapt with the Internet user as their use of services changes.
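
    A minimal sketch of the per-user, profile-driven scheduling idea described in this abstract, written in Python purely for illustration: the class name, the 0.3 fairness cap, and the weight formula are assumptions made for the sketch, not the CAPS design itself.

    from collections import defaultdict

    class UserTrafficProfile:
        """Tracks recent per-service byte counts for one Internet user."""
        def __init__(self):
            self.bytes_by_service = defaultdict(int)

        def observe(self, service, nbytes):
            self.bytes_by_service[service] += nbytes

        def scheduling_weights(self, unresponsive=()):
            """Derive per-service weights from the user's own traffic mix,
            capping unresponsive services (e.g. P2P) so they cannot starve
            the user's coexisting services."""
            total = sum(self.bytes_by_service.values()) or 1
            weights = {s: b / total for s, b in self.bytes_by_service.items()}
            for s in unresponsive:
                if s in weights:
                    weights[s] = min(weights[s], 0.3)  # hypothetical fairness cap
            norm = sum(weights.values()) or 1
            return {s: w / norm for s, w in weights.items()}

    # Example: one user mixing VoIP, web browsing, and P2P traffic.
    profile = UserTrafficProfile()
    profile.observe("voip", 2_000)
    profile.observe("web", 10_000)
    profile.observe("p2p", 50_000)
    print(profile.scheduling_weights(unresponsive={"p2p"}))

    The point of the sketch is the contrast with class-based priority: weights follow each user's own traffic mix rather than a fixed ranking of traffic types.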

    Smart PIN: performance and cost-oriented context-aware personal information network

    The next generation of networks will involve interconnection of heterogeneous individual networks such as WPAN, WLAN, WMAN, and cellular networks, adopting IP as the common infrastructural protocol and providing a virtually always-connected network. Furthermore, there are many devices which enable easy acquisition and storage of information such as pictures, movies, and emails. The resulting information overload and the divergent characteristics of the content make it difficult for users to manage their data manually. Consequently, there is a need for personalised automatic services which would enable data exchange across heterogeneous networks and devices. To support these personalised services, user-centric approaches to data delivery across heterogeneous networks are also required. In this context, this thesis proposes Smart PIN - a novel performance and cost-oriented context-aware Personal Information Network. Smart PIN's architecture is detailed, including its network, service, and management components. Within the service component, two novel schemes for efficient delivery of context and content data are proposed: the Multimedia Data Replication Scheme (MDRS) and the Quality-oriented Algorithm for Multiple-source Multimedia Delivery (QAMMD). MDRS supports efficient data accessibility among distributed devices using data replication based on a utility function and a minimum data set. QAMMD employs a buffer underflow avoidance scheme for streaming, which achieves high multimedia quality without content adaptation to network conditions. Simulation models for MDRS and QAMMD were built, based on various heterogeneous network scenarios. Additionally, multiple-source streaming based on QAMMD was implemented as a prototype and tested in an emulated network environment. Comparative tests show that MDRS and QAMMD perform significantly better than other approaches.
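
    As a rough illustration of a utility-driven replication decision in the spirit of MDRS as summarised above, the following Python sketch scores items and greedily picks replicas within a storage budget; the utility terms, weights, and threshold are assumptions made for the example, not the scheme defined in the thesis.

    from dataclasses import dataclass

    @dataclass
    class Item:
        name: str
        size_mb: float      # storage cost of a local replica
        access_freq: float  # expected local accesses per day
        fetch_cost: float   # relative cost of retrieving it remotely on demand

    def replication_utility(item, w_access=1.0, w_fetch=0.5, w_storage=0.2):
        """Higher utility means a local replica is more worthwhile."""
        return (w_access * item.access_freq
                + w_fetch * item.fetch_cost
                - w_storage * item.size_mb)

    def select_replicas(items, free_mb, threshold=1.0):
        """Greedily pick items whose utility clears the threshold, within the
        device's storage budget (a stand-in for a minimum-data-set bound)."""
        chosen = []
        for item in sorted(items, key=replication_utility, reverse=True):
            if replication_utility(item) >= threshold and item.size_mb <= free_mb:
                chosen.append(item)
                free_mb -= item.size_mb
        return chosen

    # Example: choose which items a device with 500 MB free should replicate.
    items = [Item("holiday.mp4", 400, 2.0, 3.0),
             Item("notes.txt", 0.1, 10.0, 0.5),
             Item("album.zip", 350, 0.2, 1.0)]
    print([i.name for i in select_replicas(items, free_mb=500)])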

    Copyrights in the Stream: The Battle on Webcasting

    The Internet threatens many right holders, who consistently battle against technologies that enable people to use their copyrighted materials without their consent. While copyright holders have succeeded in some cases, their main battle against peer-to-peer (P2P) file-sharing has yet to be resolved. Another technology that threatens right holders' business models, especially in the film industry, is the free distribution of their content via webcasting. Although right holders have paid little attention to webcasting as they continue their campaign against P2P file-sharing, it poses similar threats and presents the likely possibility of a future copyright battle. This Article examines copyright and webcasting. I analyze webcasting in comparison to past and current wars on copyright, trying to unveil major differences between the two. I argue that the current U.S. copyright régime treats webcasting inadequately and should be reexamined, especially vis-à-vis end-users' actions, since courts have yet to review cache copies created during Internet transmissions. I opine that future legal solutions proposed to handle webcasting, much like past attempts in similar matters, will be futile, since technology will continue to evolve at a faster rate than legislation. Finally, I argue that the best solution to the current, as well as future, legal battles to protect copyrights should be the creation of a new business model similar to that of a levy system.