268 research outputs found

    User-Centric Quality of Service Provisioning in IP Networks

    The Internet has become the preferred transport medium for almost every type of communication, and it continues to grow, both in the number of users and in the services delivered. Efforts have been made to ensure that time-sensitive applications receive sufficient resources and consequently an acceptable Quality of Service (QoS). However, typical Internet users no longer use a single service at a given point in time; they are instead engaged in a multimedia-rich experience comprising many different concurrent services. Given the scalability problems raised by the diversity of users and traffic, in conjunction with their increasing expectations, the task of QoS provisioning can no longer be approached from the perspective of giving priority to specific traffic types over coexisting services, whether through explicit resource reservation or traffic classification using static policies, as is the case with the current approach to QoS provisioning, Differentiated Services (DiffServ). This use of static resource allocation and traffic-shaping methods reveals a distinct lack of synergy between current QoS practices and user activities, highlighting the need for a QoS solution that reflects the user's services. The aim of this thesis is to investigate and propose a novel QoS architecture which considers the activities of the user and manages resources from a user-centric perspective. The research begins with a comprehensive examination of existing QoS technologies and mechanisms, arguing that current QoS practices are too static in their configuration and typically give priority to specific individual services rather than considering the user experience. The analysis also reveals the potential threat that unresponsive application traffic presents to coexisting Internet services and QoS efforts, and introduces the requirement for a balance between application QoS and fairness. This thesis proposes a novel architecture, the Congestion Aware Packet Scheduler (CAPS), which manages and controls traffic at the point of service aggregation in order to optimise the overall QoS of the user experience. The CAPS architecture, in contrast to traditional QoS alternatives, places no predetermined precedence on any specific traffic; instead, it adapts QoS policies to each individual's Internet traffic profile and dynamically controls the ratio of user services to maintain an optimised QoS experience. The rationale behind this approach is to provide a QoS-optimised experience to every Internet user, not just those using preferred services. Furthermore, unresponsive bandwidth-intensive applications, such as Peer-to-Peer, are managed fairly while minimising their impact on coexisting services. The CAPS architecture has been validated through extensive simulations, with topologies replicating the complexity and scale of real ISP network infrastructures. The results show that, for a number of different user-traffic profiles, the proposed approach achieves improved aggregate QoS for each user when compared with Best-Effort Internet, traditional DiffServ, and Weighted RED configurations. Furthermore, the results demonstrate that the proposed architecture not only provides optimised QoS to the user, irrespective of their traffic profile, but, by avoiding static resource allocation, can adapt with the Internet user as their use of services changes.
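
    A minimal sketch of the user-centric scheduling idea described above, assuming per-user, per-service queues whose weights are re-derived from the observed traffic mix and a cap on unresponsive (e.g. P2P) traffic. All names and parameters here are illustrative assumptions, not the thesis's published algorithm.

        # Illustrative sketch only: a per-user scheduler that adapts service
        # weights to the user's current traffic mix instead of static classes.
        from collections import deque

        class UserScheduler:
            def __init__(self, p2p_cap=0.3):
                self.queues = {}          # service name -> deque of packets
                self.bytes_seen = {}      # service name -> recent byte count
                self.p2p_cap = p2p_cap    # assumed cap on unresponsive/P2P share

            def enqueue(self, service, packet):
                self.queues.setdefault(service, deque()).append(packet)
                self.bytes_seen[service] = self.bytes_seen.get(service, 0) + len(packet)

            def weights(self):
                total = sum(self.bytes_seen.values()) or 1
                w = {s: b / total for s, b in self.bytes_seen.items()}
                # Cap unresponsive bulk traffic and redistribute the freed
                # share across the remaining services.
                if w.get("p2p", 0) > self.p2p_cap:
                    excess = w["p2p"] - self.p2p_cap
                    w["p2p"] = self.p2p_cap
                    others = [s for s in w if s != "p2p"]
                    for s in others:
                        w[s] += excess / len(others)
                return w

            def dequeue(self):
                # Serve the non-empty queue with the largest weight
                # (a stand-in for a proper weighted round-robin).
                ready = {s: wt for s, wt in self.weights().items() if self.queues.get(s)}
                if not ready:
                    return None
                return self.queues[max(ready, key=ready.get)].popleft()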

    Metodologias para caracterização de tráfego em redes de comunicações (Methodologies for Traffic Characterization in Communication Networks)

    Doctoral thesis on methodologies for traffic characterization in communication networks. Keywords: Internet Traffic, Internet Applications, Internet Attacks, Traffic Profiling, Multi-Scale Analysis. Abstract: Nowadays, the Internet can be seen as an ever-changing platform where new and different types of services and applications are constantly emerging. In fact, many of the existing dominant applications, such as social networks, have appeared recently and been rapidly adopted by the user community. All these new applications required the implementation of novel communication protocols that present different network requirements, according to the service they deploy. All this diversity and novelty has led to an increasing need to accurately profile Internet users, by mapping their traffic to the originating application, in order to improve many network management tasks such as resource optimization, network performance, service personalization and security. However, accurately mapping traffic to its originating application is a difficult task due to the inherent complexity of existing network protocols and to several restrictions that prevent the analysis of the contents of the generated traffic. In fact, many technologies, such as traffic encryption, are widely deployed to assure and protect the confidentiality and integrity of communications over the Internet. On the other hand, many legal constraints also forbid the analysis of clients' traffic in order to protect their confidentiality and privacy. Consequently, novel traffic discrimination methodologies are necessary for accurate traffic classification and user profiling. This thesis proposes several identification methodologies for accurate Internet traffic profiling while coping with the different restrictions mentioned and with existing encryption techniques. By analyzing the several frequency components present in the captured traffic and inferring the presence of different network- and user-related events, the proposed approaches are able to create a profile for each of the analyzed Internet applications. The use of several probabilistic models allows the accurate association of the analyzed traffic with the corresponding application. Several enhancements are also proposed to allow the identification of hidden illicit patterns and the real-time classification of captured traffic. In addition, a new network management paradigm for wired and wireless networks is proposed. The analysis of layer-2 traffic metrics and the different frequency components present in the captured traffic allows efficient user profiling in terms of the web applications used. Finally, some usage scenarios for these methodologies are presented and discussed
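
    A toy sketch of the frequency-based profiling idea, under assumed parameters (bin width, number of spectral coefficients, a simple per-application Gaussian model); the thesis's actual features and probabilistic models are not reproduced here.

        # Illustrative sketch: frequency-domain traffic profiling. Packet
        # arrivals are binned into a time series, an FFT magnitude spectrum
        # is taken, and per-application Gaussian profiles score new flows.
        import numpy as np

        BIN_MS, N_COEF = 10, 32   # assumed bin width and spectrum length

        def spectrum(timestamps_ms):
            """Magnitude spectrum of the binned packet-arrival process."""
            bins = np.bincount((np.asarray(timestamps_ms) // BIN_MS).astype(int))
            mag = np.abs(np.fft.rfft(bins, n=2 * N_COEF))[:N_COEF]
            return mag / (mag.sum() or 1.0)   # normalize for comparability

        def fit_profile(flows):
            """Application profile: mean/std of spectra over training flows."""
            specs = np.stack([spectrum(f) for f in flows])
            return specs.mean(axis=0), specs.std(axis=0) + 1e-6

        def log_likelihood(profile, flow):
            mean, std = profile
            z = (spectrum(flow) - mean) / std
            return -0.5 * np.sum(z ** 2 + np.log(2 * np.pi * std ** 2))

        # Usage: pick the application whose profile best explains the flow.
        # profiles = {"voip": fit_profile(voip_flows), "p2p": fit_profile(p2p_flows)}
        # best = max(profiles, key=lambda app: log_likelihood(profiles[app], flow))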

    Your WiFi Is Leaking: Inferring Private User Information Despite Encryption

    This thesis describes how wireless networks can inadvertently leak and broadcast users' personal information despite the correct use of encryption. Users would likely assume that their activities (for example, the program or app they are using) and personal information (including age, religion, sexuality and gender) would remain confidential when using an encrypted network. However, we demonstrate how the analysis of encrypted traffic patterns can allow an observer to infer potentially sensitive data remotely, passively, undetectably, and without any network credentials. Since encrypted WiFi traffic cannot be read directly, the limited side-channel data that remains available is processed instead. An investigation into what information is available and how it can be represented showed that comparing various permutations of timing and frame-size information is sufficient to distinguish specific user activities. Classifiers constructed via machine learning (Random Forests), using this side-channel information represented as histograms, allow the detection of user activity despite WiFi encryption. Studies showed that Skype voice traffic could be identified despite being interleaved with other activities. A subsequent study demonstrated that mobile apps could be individually detected and, concerningly, used to infer potentially sensitive information about users due to their personalised nature. Furthermore, a full prototype system was developed and used to demonstrate that this analysis can be performed in real time using low-cost commodity hardware in real-world scenarios. Avenues for improvement and the limitations of the approach are identified, and potential applications for this work are considered. Strategies to prevent these leaks are discussed, and the effort required for an observer to present a practical privacy threat to the everyday WiFi user is examined
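
    A minimal sketch of the histogram-plus-Random-Forest pipeline the abstract describes, assuming hypothetical bin edges, window labels, and feature choices; only the general technique (histograms of frame sizes and inter-arrival times fed to a Random Forest) comes from the source.

        # Illustrative sketch: inferring activities from encrypted WiFi
        # metadata. Features are histograms of frame sizes and inter-arrival
        # gaps per traffic window; bin edges and labels are assumptions.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        SIZE_BINS = np.linspace(0, 1500, 31)   # frame sizes in bytes
        GAP_BINS = np.logspace(-4, 1, 31)      # inter-arrival gaps in seconds

        def window_features(sizes, times):
            """One feature vector per window: two concatenated histograms."""
            h_size, _ = np.histogram(sizes, bins=SIZE_BINS, density=True)
            h_gap, _ = np.histogram(np.diff(times), bins=GAP_BINS, density=True)
            return np.concatenate([h_size, h_gap])

        # Training on windows labelled with the activity running at capture
        # time, e.g. "skype", "browsing", "video":
        # X = np.stack([window_features(s, t) for s, t in windows])
        # clf = RandomForestClassifier(n_estimators=100).fit(X, labels)
        # Prediction on a new encrypted capture:
        # activity = clf.predict(window_features(sizes, times).reshape(1, -1))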

    Quadri-dimensional approach for data analytics in mobile networks

    The telecommunication market is growing at a very fast pace, with new technologies evolving to support high-speed throughput and a wide range of services and applications in mobile networks. This has led communication service providers (CSPs) to shift their focus from monitoring network elements towards monitoring services and subscriber satisfaction, by introducing service quality management (SQM) and customer experience management (CEM). Both require fast responses to reduce the time to find and solve network problems, to ensure efficiency and proactive maintenance, and to improve the quality of service (QoS) and quality of experience (QoE) of subscribers. While both SQM and CEM demand information from multiple interfaces, managing multiple data sources adds an extra layer of complexity to data collection. Although several studies have been conducted on data analytics in mobile networks, most did not consider analytics based on the four dimensions involved in the mobile network environment, namely the subscriber, the handset, the service and the network element, with correlation across multiple interfaces. The main objective of this research was to develop mobile network analytics models applied to the 3G packet-switched domain, analysing data from the radio network (Iub interface) and the core network (Gn interface), to provide a fast root cause analysis (RCA) approach that considers these four dimensions. This was achieved using recent computer engineering advancements, namely Big Data platforms and data mining techniques through machine learning algorithms.
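
    A toy sketch of the multi-interface correlation step, assuming hypothetical column names (imsi, ts, rscp_dbm) and a nearest-timestamp join; the thesis's actual schema, KPIs and Big Data tooling are not reproduced.

        # Illustrative sketch: correlating radio (Iub) and core (Gn) records
        # per subscriber to steer root cause analysis. Columns and thresholds
        # are assumptions for the sketch.
        import pandas as pd

        def correlate(iub: pd.DataFrame, gn: pd.DataFrame) -> pd.DataFrame:
            """Align the two interfaces on subscriber and nearest timestamp."""
            iub = iub.sort_values("ts")
            gn = gn.sort_values("ts")
            merged = pd.merge_asof(
                gn, iub, on="ts", by="imsi",      # same subscriber,
                tolerance=pd.Timedelta("5s"),     # events within 5 seconds
                direction="nearest",
            )
            # A degraded service KPI alongside a healthy radio KPI points the
            # RCA at the core/service side, and vice versa.
            merged["rca_hint"] = merged.apply(
                lambda r: "radio" if r.get("rscp_dbm", 0) < -100 else "core/service",
                axis=1,
            )
            return merged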

    Static Web content distribution and request routing in a P2P overlay

    Collaboration over the Internet has become a cornerstone of modern computing, as the essence of information processing and content management has shifted to networked and Web-based systems. As a result, effective and reliable access to networked resources has become a critical commodity in any modern infrastructure. In order to cope with the limitations of the traditional client-server networking model, most popular Web-based services employ separate Content Delivery Networks (CDNs) to distribute the server-side resource consumption. Since Web applications are often latency-critical, CDNs are additionally adopted for optimizing the content-delivery latencies perceived by Web clients. Because of this prevalent connection model, Web content delivery has grown into a notable industry. The rapid growth in the number of mobile devices further adds to the resources required from the originating server, as content is also accessible on the go. While the Web has become one of the most utilized sources of information and digital content, the openness of the Internet is simultaneously being reduced by organizations and governments preventing access to undesired resources. Access to information may be regulated or altered to suit political interests or organizational benefits, conflicting with the initial design principle of an unrestricted and independent information network. This thesis contributes to the development of a more efficient and open Internet by combining a feasibility study and a preliminary design of a peer-to-peer based Web content distribution and request routing mechanism. The suggested design addresses both the challenges related to the effectiveness of the current client-server networking model and the openness of information distributed over the Internet. Based on the properties of existing peer-to-peer implementations, the suggested overlay design is intended to provide low-latency access to any Web content without sacrificing end-user privacy. The overlay is additionally designed to increase the cost of censorship by forcing a successful blockade to isolate the censored network from the rest of the Internet
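
    A minimal sketch of one common basis for such peer-to-peer request routing; since the thesis gives only a preliminary design, the Chord-style consistent-hash ring below is an assumed stand-in, not its actual mechanism.

        # Illustrative sketch: consistent-hash request routing. Node addresses
        # and content URLs share one hash space; a request is routed to the
        # first node clockwise of the content's hash on the ring.
        import bisect
        import hashlib

        class Ring:
            def __init__(self, nodes):
                self._points = sorted((self._h(n), n) for n in nodes)
                self._keys = [p for p, _ in self._points]

            @staticmethod
            def _h(s: str) -> int:
                return int(hashlib.sha1(s.encode()).hexdigest(), 16)

            def route(self, url: str) -> str:
                """Node responsible for a URL, with wrap-around at the top."""
                i = bisect.bisect_right(self._keys, self._h(url)) % len(self._keys)
                return self._points[i][1]

        # Usage: any peer resolves requests locally, with no central index.
        # ring = Ring(["peer-a:8080", "peer-b:8080", "peer-c:8080"])
        # holder = ring.route("http://example.org/static/logo.png")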

    Controlling P2P File-Sharing Networks Traffic

    Since the appearance of Peer-to-Peer (P2P) file-sharing networks some time ago, many Internet users have chosen this technology to share and search for programs, videos, music, documents, etc. The total number of P2P file-sharing users has risen and fallen over the last decade with the creation or demise of some well-known P2P file-sharing systems. P2P file-sharing traffic currently overloads some data networks and is a major headache for network administrators because this kind of traffic is difficult to control (mainly because some P2P file-sharing networks encrypt their messages). This paper deals with the analysis, taxonomy and characterization of eight public P2P file-sharing networks: Gnutella, Freenet, Soulseek, BitTorrent, OpenNap, eDonkey, MP2P and FastTrack. These eight popular networks were selected because of their differing working architectures. We then show the number of users, the number of files, and the file sizes inside these file-sharing networks. Finally, several network configurations are presented for controlling P2P file-sharing traffic in the network.
    García Pineda, M.; Hammoumi, M.; Canovas Solbes, A.; Lloret, J. (2011). Controlling P2P File-Sharing Networks Traffic. Network Protocols and Algorithms, 3(4), 54-92. doi:10.5296/npa.v3i4.1365
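
    One classic control knob such network configurations rely on is rate-limiting flows identified as P2P; a minimal token-bucket sketch follows, where the rate, burst size, and the flow-classification step are assumptions rather than the paper's configurations.

        # Illustrative sketch: rate-limiting flows classified as P2P with a
        # token bucket. Rate and burst values are assumed examples.
        import time

        class TokenBucket:
            def __init__(self, rate_bps: float, burst_bytes: float):
                self.rate, self.capacity = rate_bps / 8.0, burst_bytes
                self.tokens, self.last = burst_bytes, time.monotonic()

            def allow(self, packet_len: int) -> bool:
                """Admit the packet if enough byte-tokens have accumulated."""
                now = time.monotonic()
                self.tokens = min(self.capacity,
                                  self.tokens + (now - self.last) * self.rate)
                self.last = now
                if packet_len <= self.tokens:
                    self.tokens -= packet_len
                    return True
                return False   # drop or delay the P2P packet

        # Usage: one bucket per P2P flow, e.g. 1 Mbit/s with a 64 KiB burst.
        # bucket = TokenBucket(rate_bps=1_000_000, burst_bytes=64 * 1024)
        # forward(pkt) if bucket.allow(len(pkt)) else drop(pkt)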

    Vertaisverkkopohjainen luotettavuus multicast-sessioihin (Peer-to-Peer Based Reliability for Multicast Sessions)

    As storage and network capacities keep growing, there is an increasing need to distribute large amounts of data through networks. At the moment, there are several alternatives that try to solve the problems of large content distribution. Unfortunately, none of them is optimal in terms of scalability and the amount of traffic generated. We introduce a protocol that tries to optimize these two factors by combining two existing solutions, IP multicast and peer-to-peer networking. IP multicast is used to minimize the traffic generated by the protocol while retaining good scalability. However, since IP multicast is not reliable, a peer-to-peer approach is used to provide this functionality. Our experiments show that merging these mechanisms is feasible and provides good performance in terms of distribution time and resources used.
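
    A toy sketch of how peer-assisted reliability on top of unreliable IP multicast is often built, assuming a hypothetical peer.fetch(seq) API and gap-detection scheme; the thesis's actual protocol messages are not reproduced.

        # Illustrative sketch: peer-assisted repair over unreliable IP
        # multicast. Receivers detect sequence-number gaps in the multicast
        # stream and pull missing chunks from peers instead of the sender.
        class Receiver:
            def __init__(self, peers):
                self.chunks = {}      # seq -> payload received via multicast
                self.peers = peers    # peer objects offering fetch(seq)
                self.expected = 0     # next in-order sequence number

            def on_multicast(self, seq: int, payload: bytes):
                self.chunks[seq] = payload
                self._repair_gaps(up_to=seq)

            def _repair_gaps(self, up_to: int):
                # Any earlier sequence number that never arrived is
                # requested from peers rather than from the sender.
                for seq in range(self.expected, up_to):
                    if seq not in self.chunks:
                        self.chunks[seq] = self._fetch_from_peers(seq)
                self.expected = max(self.expected, up_to + 1)

            def _fetch_from_peers(self, seq: int) -> bytes:
                for peer in self.peers:
                    payload = peer.fetch(seq)   # assumed peer API
                    if payload is not None:
                        return payload
                raise LookupError(f"chunk {seq} unavailable from all peers")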