
    Methodologies for traffic characterization in communication networks

    Doctoral thesis in methodologies for traffic characterization in communication networks. Keywords: Internet Traffic, Internet Applications, Internet Attacks, Traffic Profiling, Multi-Scale Analysis. Abstract: Nowadays, the Internet can be seen as an ever-changing platform where new and different types of services and applications are constantly emerging. In fact, many of the existing dominant applications, such as social networks, have appeared recently and been rapidly adopted by the user community. All these new applications required the implementation of novel communication protocols that present different network requirements, according to the service they deploy. All this diversity and novelty has led to an increasing need to accurately profile Internet users, by mapping their traffic to the originating application, in order to improve many network management tasks such as resource optimization, network performance, service personalization and security. However, accurately mapping traffic to its originating application is a difficult task due to the inherent complexity of existing network protocols and to several restrictions that prevent the analysis of the contents of the generated traffic. In fact, many technologies, such as traffic encryption, are widely deployed to assure and protect the confidentiality and integrity of communications over the Internet. On the other hand, many legal constraints also forbid the analysis of clients' traffic in order to protect their confidentiality and privacy. Consequently, novel traffic discrimination methodologies are necessary for accurate traffic classification and user profiling. This thesis proposes several identification methodologies for accurate Internet traffic profiling that cope with the mentioned restrictions and with existing encryption techniques. By analyzing the several frequency components present in the captured traffic and inferring the presence of different network and user related events, the proposed approaches are able to create a profile for each of the analyzed Internet applications. The use of several probabilistic models allows the accurate association of the analyzed traffic with the corresponding application. Several enhancements are also proposed to allow the identification of hidden illicit patterns and the real-time classification of captured traffic. In addition, a new network management paradigm for wired and wireless networks is proposed: the analysis of layer 2 traffic metrics and of the different frequency components present in the captured traffic allows efficient user profiling in terms of the used web application. Finally, some usage scenarios for these methodologies are presented and discussed.
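
To make the frequency-based idea concrete, here is a minimal sketch (not taken from the thesis; all names, bin sizes and flows are illustrative assumptions): packet arrivals are binned into a counting process, its magnitude spectrum serves as the application's signature, and a flow is assigned to the closest stored profile.

```python
import numpy as np

def spectral_signature(timestamps, bin_s=0.01, n_bins=4096):
    """Bin packet arrival times (seconds) into a counting process and
    return its normalized magnitude spectrum (mean removed)."""
    t = np.asarray(timestamps, dtype=float)
    edges = np.arange(n_bins + 1) * bin_s
    counts, _ = np.histogram(t - t.min(), bins=edges)
    spectrum = np.abs(np.fft.rfft(counts - counts.mean()))
    return spectrum / (np.linalg.norm(spectrum) + 1e-12)

def classify_flow(timestamps, profiles):
    """Assign a flow to the application whose stored spectral profile
    has the highest cosine similarity with the observed signature."""
    sig = spectral_signature(timestamps)
    return max(profiles, key=lambda app: float(sig @ profiles[app]))

# Illustrative usage: profiles would be learned from labeled traffic.
rng = np.random.default_rng(0)
profiles = {"voip": spectral_signature(np.arange(0, 40, 0.02)),           # periodic
            "web": spectral_signature(rng.exponential(0.05, 2000).cumsum())}
print(classify_flow(np.arange(0, 40, 0.02) + rng.normal(0, 1e-3, 2000), profiles))
```

Note that the sketch uses only packet timing, never payload, which is consistent with the thesis's goal of profiling encrypted traffic without content inspection.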

    Computational inference and control of quality in multimedia services

    Quality is the degree of excellence we expect of a service or a product. It is also one of the key factors that determine its value. For multimedia services, understanding the experienced quality means understanding how the delivered fidelity, precision and reliability correspond to the users' expectations. Yet the quality of multimedia services is inextricably linked to the underlying technology. It is developments in video recording, compression and transport, as well as display technologies, that enable high quality multimedia services to become ubiquitous. The constant evolution of these technologies delivers a steady increase in performance, but also a growing level of complexity. As new technologies stack on top of each other, the interactions between them and their components become more intricate and obscure. In this environment, optimizing the delivered quality of multimedia services becomes increasingly challenging. The factors that affect the experienced quality, or Quality of Experience (QoE), tend to have complex non-linear relationships. The subjectively perceived QoE is hard to measure directly and continuously evolves with the user's expectations. Faced with the difficulty of designing an expert system for QoE management that relies on painstaking measurements and intricate heuristics, we turn to an approach based on learning, or inference. The set of solutions presented in this work rely on computational intelligence techniques that do inference over the large set of signals coming from the system to deliver QoE models based on user feedback. We furthermore present solutions for inference of optimized control in systems with no guarantees for resource availability. This approach offers the opportunity to be more accurate in assessing the perceived quality, to incorporate more factors and to adapt as technology and user expectations evolve. In a similar fashion, the inferred control strategies can uncover more intricate patterns coming from the sensors and therefore implement farther-reaching decisions. As in natural systems, this continuous adaptation and learning makes these systems more robust to perturbations in the environment, gives them longer-lasting accuracy and makes them more efficient in dealing with increased complexity. Overcoming this increasing complexity and diversity is crucial for addressing the challenges of future multimedia systems. Through experiments and simulations, this work demonstrates that adopting an approach of learning can improve subjective and objective QoE estimation, and enable the implementation of efficient and scalable QoE management as well as efficient control mechanisms.
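
As a minimal illustration of the learning-based direction described above (not the thesis's actual model), one can train a regressor that maps observed system signals to user-reported quality scores, then use it to estimate QoE for new sessions. Feature names and numbers below are invented for the sketch.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical log: per-session system signals paired with user feedback
# (Mean Opinion Scores on a 1-5 scale). Columns:
# [bitrate_kbps, packet_loss_pct, rebuffer_events, startup_delay_s]
X = np.array([[3500, 0.1, 0, 1.2],
              [ 800, 2.5, 3, 4.0],
              [2000, 0.8, 1, 2.1],
              [5000, 0.0, 0, 0.8]])
y = np.array([4.6, 1.8, 3.2, 4.8])

# A non-linear learner captures complex relationships between QoE factors
# without hand-crafted heuristics.
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Estimated QoE for a new session; the model can be retrained as user
# expectations and technology evolve, which is the point of the approach.
print(model.predict([[1500, 1.0, 2, 3.0]]))
```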

    Large-scale sensor-rich video management and delivery

    Ph.D. (Doctor of Philosophy)

    Measurements and analysis of online social networks

    Online Social Networks (OSNs) have become the most used Internet applications, attracting hundreds of millions of active users every day. The large amount of valuable information in OSNs (never before available) has attracted the research community to design sophisticated techniques to collect, process, interpret and apply these data across a wide range of disciplines including Sociology, Marketing, Computer Science, etc. This thesis presents a series of contributions to this incipient area. First, we present a comprehensive framework to perform large scale measurements in OSNs. To this end, the tools and strategies followed to capture representative datasets are described. Furthermore, we present the lessons learned during the crawling process in order to help the reader in a future measurement campaign. Second, using the previous datasets, this thesis addresses two fundamental aspects that are critical to a clear understanding of the Social Media ecosystem. On the one hand, we characterize the birth and growth of OSNs. In particular, we perform a deep study of a second generation OSN, Google+ (an OSN released by Google in 2011), and compare its growth with that of first generation OSNs such as Twitter. On the other hand, we characterize information propagation in OSNs in several manners. First, we use Twitter to perform a geographical analysis of information propagation. Furthermore, we carefully analyze information propagation in Google+. In particular, we analyze information propagation trees as well as information propagation forests, which capture the propagation of a single piece of content through multiple trees. To the best of our knowledge, no previous study has addressed this issue. Finally, the last contribution of this thesis focuses on the analysis of the load received by an OSN system such as Twitter. The conducted research led to the following four main findings: (i) Second generation OSNs are expected to grow much faster than the corresponding first generation OSNs; however, they struggle to get users actively engaged in the system. This is the case for G+, which is growing at an impressive rate of 350K new registered users per day; yet a large fraction (83%) of its users have never been active, and those that present activity are typically significantly less engaged in the system than users of Facebook or Twitter. (ii) Information propagates faster, but following shorter paths, in Twitter than in G+. This is a consequence of the way information is shown in each system: sequential-based systems such as Twitter force short-term conversations among their users, whereas selective-based systems such as those used in G+ or Facebook choose which content to show to each user based on their preferences, volume of interactions with other users, etc., which helps to prolong the lifespan of conversations in the OSN. (iii) Our analysis of the geographical propagation of information in Twitter reveals that users tend to send tweets from a single geographical location. Furthermore, the level of locality associated with social relationships varies across countries, so that for some countries, like Brazil, information is more likely to remain local than for others, such as Australia. (iv) Our analysis of the load on the Twitter system indicates that the arrival process of tweets follows a model similar to a Gaussian with a noticeable day-night pattern.
In short, the work presented in this thesis advances our knowledge of the Social Media ecosystem in essential directions such as the formation and growth of OSNs and the propagation of information in these systems. The reported findings will help to develop new services on top of OSNs.
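
As a minimal sketch of the tree/forest analysis mentioned above (illustrative only; the data layout is an assumption, not the thesis's format), reshare records can be treated as (resharer, source) edges, each original poster roots one propagation tree, and several roots for the same content form a propagation forest:

```python
from collections import defaultdict

# Hypothetical reshare log for one piece of content: (resharer, source) pairs.
# Several roots means the content spread as a forest of propagation trees.
edges = [("b", "a"), ("c", "a"), ("d", "b"), ("f", "e")]

children = defaultdict(list)
for child, parent in edges:
    children[parent].append(child)

resharers = {child for child, _ in edges}
roots = [p for p in list(children) if p not in resharers]  # original posters

def tree_metrics(root):
    """Breadth-first walk of one propagation tree: return (size, depth)."""
    size, depth, frontier = 0, 0, [root]
    while frontier:
        size += len(frontier)
        depth += 1
        frontier = [c for node in frontier for c in children[node]]
    return size, depth

for r in roots:
    size, depth = tree_metrics(r)
    print(f"tree rooted at {r}: size={size}, depth={depth}")
```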

    A common analysis framework for simulated streaming-video networks

    Distributed media streaming has been driven by the combination of improved media compression techniques and an increase in the availability of bandwidth. This increase has led to the development of various streaming distribution engines (systems/services), which currently provide the majority of the streaming media available throughout the Internet. This study aimed to analyse a range of existing commercial and open-source streaming media distribution engines, and to classify them in such a way as to define a Common Analysis Framework for Simulated Streaming-Video Networks (CAFSS-Net). This common framework was used as the basis for a simulation tool intended to aid in the development and deployment of streaming media networks and to predict the performance impacts of network configuration changes, video features (scene complexity, resolution) and general scaling. CAFSS-Net consists of six components: the server, the client(s), the network simulator, the video publishing tools, the videos and the evaluation tool-set. Test scenarios are presented consisting of different network configurations, scales and external traffic specifications. From these test scenarios, results were obtained to highlight interesting observations and to provide an overview of the different test specifications for this study. From these results, an analysis of the system was performed, yielding relationships between the videos, the different bandwidths, the different measurement tools and the different components of CAFSS-Net. Based on the analysis of the results, the implications for CAFSS-Net highlighted different achievements and proposals for future work for the different components. CAFSS-Net was able to successfully integrate all of its components to evaluate the different streaming scenarios. The streaming server, client and video components accomplished their objectives. It is noted that although the video publishing tool was able to provide the necessary compression/decompression services, the implementation of alternative compression/decompression schemes could serve as a suitable extension. The network simulator and evaluation tool-set components were also successful, but future tests (particularly in low bandwidth scenarios) are suggested in order to further improve the accuracy of the framework as a whole. CAFSS-Net is especially successful at analysing high bandwidth connections, with the results being similar to those of the physical network tests.
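
A toy sketch of what such a test-scenario sweep can look like (not part of CAFSS-Net itself; all bandwidths, bitrates and field names are invented): each scenario varies link bandwidth and external traffic, and a naive steady-state check reports which video encodings can stream without rebuffering.

```python
# Hypothetical scenario sweep in the spirit of the study's test scenarios.
videos = {"news_480p": 1.2, "sports_720p": 3.5, "movie_1080p": 6.0}  # Mbps

scenarios = [
    {"bandwidth_mbps": 2.0, "external_traffic_mbps": 0.5},
    {"bandwidth_mbps": 8.0, "external_traffic_mbps": 2.0},
]

for sc in scenarios:
    # Bandwidth left for streaming after competing external traffic.
    available = sc["bandwidth_mbps"] - sc["external_traffic_mbps"]
    playable = [name for name, rate in videos.items() if rate <= available]
    print(f"{sc}: playable={playable}")
```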

    Net Neutrality

    This book is available as open access through the Bloomsbury Open Access programme on www.bloomsburycollections.com. 'Chris Marsden maneuvers through the hype articulated by Network Neutrality advocates and opponents. He offers a clear-headed analysis of the high stakes in this debate about the Internet's future, and fearlessly refutes the misinformation and misconceptions that abound' (Professor Rob Freiden, Penn State University). Net Neutrality is a very heated and contested policy principle regarding access for content providers to the Internet end-user, and potential discrimination in that access where the end-user's ISP (or another ISP) blocks that access in part or whole. The suggestion has been that the problem can be resolved by either introducing greater competition or closely policing conditions for vertically integrated services, such as VoIP. However, that is not the whole story, and ISPs as a whole have incentives to discriminate between content for matters such as network management of spam, to secure and maintain customer experience at current levels, and for economic benefit from new Quality of Service standards. This includes offering a 'priority lane' on the network for premium content types such as video and voice services. The author considers market developments and policy responses in Europe and the United States, draws conclusions and proposes regulatory recommendations.

    Leveraging content properties to optimize distributed storage systems

    Cloud service providers, social networks and data-management companies are witnessing a tremendous increase in the amount of data they receive every day. All this data creates new opportunities to expand human knowledge in fields like healthcare, urbanism and human behavior, and to improve offered services like search, recommendation, and many others. It is not by accident that many academics, but also the public media, refer to our era as the Big Data era. But these huge opportunities come with the requirement for better data management systems that, on one hand, can safely accommodate this huge and constantly increasing volume of data and, on the other, serve it in a timely and useful manner so that applications can benefit from processing it. This document focuses on these two challenges that come with Big Data. In more detail, we study (i) backup storage systems as a means to safeguard data against a number of factors that may render them unavailable, and (ii) data placement strategies for geographically distributed storage systems, with the goal of minimizing user-perceived latencies while using storage and network resources efficiently. Throughout our study, data are placed at the centre of our design choices, as we try to leverage content properties for both placement and efficient storage.
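
One way content properties are commonly leveraged in backup storage is deduplication. Below is a minimal sketch (an assumption for illustration, not the thesis's system; production systems typically use content-defined rather than fixed-size chunking): identical chunks are stored once, and files are kept as lists of chunk fingerprints.

```python
import hashlib

class DedupStore:
    """Toy backup store: identical chunks are stored once and
    files are kept as lists of chunk fingerprints."""
    def __init__(self, chunk_size=4096):
        self.chunk_size = chunk_size
        self.chunks = {}   # fingerprint -> bytes
        self.files = {}    # name -> [fingerprints]

    def put(self, name, data: bytes):
        recipe = []
        for i in range(0, len(data), self.chunk_size):
            chunk = data[i:i + self.chunk_size]
            fp = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(fp, chunk)  # store only unseen content
            recipe.append(fp)
        self.files[name] = recipe

    def get(self, name) -> bytes:
        return b"".join(self.chunks[fp] for fp in self.files[name])

store = DedupStore()
store.put("backup_monday", b"A" * 10000)
store.put("backup_tuesday", b"A" * 10000)  # duplicate content stored once
print(len(store.chunks), "unique chunks for two 10 kB backups")
```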

    MediaSync: Handbook on Multimedia Synchronization

    This book provides an approachable overview of the most recent advances in the fascinating field of media synchronization (mediasync), gathering contributions from the most representative and influential experts. Understanding the challenges of this field in the current multi-sensory, multi-device, and multi-protocol world is not an easy task. The book revisits the foundations of mediasync, including theoretical frameworks and models, highlights ongoing research efforts, like hybrid broadband broadcast (HBB) delivery and users' perception modeling (i.e., Quality of Experience or QoE), and paves the way for the future (e.g., towards the deployment of multi-sensory and ultra-realistic experiences). Although many advances around mediasync have been devised and deployed, this area of research is receiving renewed attention to overcome the remaining challenges in the next-generation (heterogeneous and ubiquitous) media ecosystem. Given the significant advances in this research area, its current relevance and the multiple disciplines it involves, a reference book on mediasync has become necessary, and this book fills that gap. In particular, it addresses key aspects of, and reviews the most relevant contributions to, the mediasync research space from different perspectives. MediaSync: Handbook on Multimedia Synchronization is the perfect companion for scholars and practitioners who want to acquire strong knowledge of this research area, and to approach the challenges of ensuring the best mediated experiences by providing adequate synchronization between the media elements that constitute those experiences.

    Toward Automated Network Management and Operations.

    Network management plays a fundamental role in the operation and well-being of today's networks. Despite the best efforts of existing support systems and tools, management operations in large service provider and enterprise networks remain mostly manual. Due to the larger scale of modern networks, more complex network functionalities, and higher network dynamics, human operators are increasingly short-handed. As a result, network misconfigurations are frequent, and can result in violated service-level agreements and degraded user experience. In this dissertation, we develop various tools and systems to understand, automate, augment, and evaluate network management operations. Our thesis is that by introducing formal abstractions, like deterministic finite automata, Petri nets and databases, we can build new support systems that systematically capture domain knowledge, automate network management operations, enforce network-wide properties to prevent misconfigurations, and simultaneously reduce manual effort. The theme of our systems is to build a knowledge plane based on the proposed abstractions, allowing network-wide reasoning and guidance for network operations. More importantly, the proposed systems require no modification to the existing Internet infrastructure and network devices, simplifying adoption. We show that our systems improve both timeliness and correctness in performing realistic and large-scale network operations. Finally, to address the current limitations and difficulty of evaluating novel network management systems, we have designed a distributed network testing platform that relies on network and device virtualization to provide realistic environments and isolation from production networks.
    Ph.D., Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/78837/1/chenxu_1.pd
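
To illustrate the flavor of such formal abstractions (a minimal sketch under assumed state and step names, not the dissertation's actual system), a deterministic finite automaton can encode the legal ordering of a maintenance workflow and reject misordered operations before they reach the network:

```python
# Hypothetical finite-automaton check for a router-maintenance workflow.
# States and steps are illustrative domain knowledge captured as transitions.
TRANSITIONS = {
    ("in_service", "drain_traffic"): "drained",
    ("drained", "apply_config"): "configured",
    ("configured", "verify"): "verified",
    ("verified", "restore_traffic"): "in_service",
}

def run_workflow(steps, state="in_service"):
    """Reject any operation sequence the automaton does not allow,
    catching misordered steps before they cause a misconfiguration."""
    for step in steps:
        if (state, step) not in TRANSITIONS:
            raise ValueError(f"illegal step {step!r} in state {state!r}")
        state = TRANSITIONS[(state, step)]
    return state

print(run_workflow(["drain_traffic", "apply_config", "verify", "restore_traffic"]))
# run_workflow(["apply_config"]) would raise: config change on a live router.
```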