174 research outputs found
Modeling and Evaluation of Multisource Streaming Strategies in P2P VoD Systems
In recent years, multimedia content distribution has largely moved to the Internet, forcing broadcasters, operators, and service providers to upgrade their infrastructures at considerable expense. In this context, streaming solutions that rely on user devices such as set-top boxes (STBs) to offload dedicated streaming servers are particularly appropriate. In these systems, content is typically replicated and scattered over a network of STBs placed in users' homes, and the video-on-demand (VoD) service is provisioned through streaming sessions established among neighboring STBs in a peer-to-peer fashion. Until now, most research has focused on the design and optimization of content-replication mechanisms to minimize server costs, typically considering either very crude system performance indicators or asymptotic behavior. In this work, we instead propose an analytical model that complements previous work by providing fairly accurate predictions of system performance (i.e., blocking probability). Our model proves to be a highly scalable, flexible, and extensible tool that can help both designers and developers efficiently predict the effect of design choices in large-scale STB-VoD systems.
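The paper's analytical model is not reproduced in the abstract. As a purely illustrative sketch of the kind of blocking-probability computation involved, the classic Erlang-B formula gives the blocking probability of a loss system with a fixed number of servers (here imagined, hypothetically, as the concurrent upload slots of an STB neighbourhood) and a given offered load:

```python
def erlang_b(servers: int, offered_load: float) -> float:
    """Blocking probability of an M/M/n/n loss system (Erlang-B),
    computed with the numerically stable recursion
    B(k) = a*B(k-1) / (k + a*B(k-1)), B(0) = 1."""
    b = 1.0
    for k in range(1, servers + 1):
        b = offered_load * b / (k + offered_load * b)
    return b

# Hypothetical STB neighbourhood: 10 concurrent upload slots,
# offered load of 7 Erlangs of streaming requests.
print(f"blocking probability: {erlang_b(10, 7.0):.4f}")
```

Adding slots (or replicas) lowers the blocking probability, which is the trade-off such models let designers quantify before deployment.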
Thermoeconomic Analysis of CSP Air-Steam Mixed Cycles with Low Water Consumption
Abstract Starting from a state of the art of CSP plants and the ongoing research on hybridization of gas-turbine plants, the paper investigates alternative plant configurations, particularly the integration of CSP technology with mixed cycles, chosen for their low water consumption and their possible use of current CSP components. The configurations are assessed and compared through a through-life thermo-economic analysis.
QoE in Pull Based P2P-TV Systems: Overlay Topology Design Tradeoff
Abstract: This paper presents a systematic performance analysis of pull-based P2P video streaming systems for live applications, providing guidelines for the design of the overlay topology and the chunk scheduling algorithm. The contribution of the paper is threefold: 1) we propose a realistic simulative model of the system that captures the effects of access-bandwidth heterogeneity, latencies, and peculiar characteristics of the video, while still guaranteeing good scalability properties; 2) we propose a new latency/bandwidth-aware overlay topology design strategy that improves application-layer performance while reducing the stress on the underlying transport network; 3) we investigate the impact of chunk scheduling algorithms that explicitly exploit properties of encoded video. Results show that our proposal jointly improves the actual Quality of Experience of users and reduces the cost the transport network has to support.
Unravelling the Impact of Temporal and Geographical Locality in Content Caching Systems
To assess the performance of caching systems, the definition of a proper
process describing the content requests generated by users is required.
Starting from the analysis of traces of YouTube video requests collected inside
operational networks, we identify the characteristics of real traffic that need
to be represented and those that instead can be safely neglected. Based on our
observations, we introduce a simple, parsimonious traffic model, named Shot
Noise Model (SNM), that allows us to capture temporal and geographical locality
of content popularity. The SNM is sufficiently simple to be effectively
employed in both analytical and scalable simulative studies of caching systems.
We demonstrate this by analytically characterizing the performance of the LRU
caching policy under the SNM, for both a single cache and a network of caches.
With respect to the standard Independent Reference Model (IRM), some
paradigmatic shifts, concerning the impact of various traffic characteristics
on cache performance, clearly emerge from our results.
Comment: 14 pages, 11 figures, 2 appendices.
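The paper's exact shot shape and parameters are not stated in the abstract. A minimal sketch, assuming rectangular popularity pulses, of how a shot-noise request trace can be generated and replayed against an LRU cache:

```python
import random
from collections import OrderedDict

def snm_trace(n_contents, horizon, mean_volume, lifespan, seed=0):
    """Generate a request trace under a simplified Shot Noise Model:
    each content 'switches on' at a uniform random time, then receives
    Poisson requests at constant rate for a finite lifespan (a
    rectangular popularity pulse; the paper allows other shapes)."""
    rng = random.Random(seed)
    reqs = []
    for c in range(n_contents):
        t_on = rng.uniform(0, horizon)
        rate = mean_volume / lifespan
        t = t_on
        while True:
            t += rng.expovariate(rate)
            if t > t_on + lifespan or t > horizon:
                break
            reqs.append((t, c))
    reqs.sort()
    return [c for _, c in reqs]

def lru_hit_ratio(trace, capacity):
    """Hit ratio of an LRU cache replayed over the trace."""
    cache, hits = OrderedDict(), 0
    for c in trace:
        if c in cache:
            hits += 1
            cache.move_to_end(c)   # mark as most recently used
        else:
            cache[c] = True
            if len(cache) > capacity:
                cache.popitem(last=False)  # evict least recently used
    return hits / len(trace)

trace = snm_trace(n_contents=500, horizon=1000.0, mean_volume=20, lifespan=50.0)
print(f"LRU hit ratio: {lru_hit_ratio(trace, 50):.3f}")
```

Because each content is requested only during its finite lifespan, the trace exhibits the temporal locality that the IRM (which draws every request from one fixed popularity law) cannot capture.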
Temporal Locality in Today's Content Caching: Why it Matters and How to Model it
The dimensioning of caching systems represents a difficult task in the design
of infrastructures for content distribution in the current Internet. This paper
addresses the problem of defining a realistic arrival process for the content
requests generated by users, due to its critical importance for both analytical
and simulative evaluations of the performance of caching systems. First, with
the aid of YouTube traces collected inside operational residential networks, we
identify the characteristics of real traffic that need to be considered or can
be safely neglected in order to accurately predict the performance of a cache.
Second, we propose a new parsimonious traffic model, named the Shot Noise Model
(SNM), that natively captures the dynamics of content popularity, whilst still
being simple enough to be employed effectively in both analytical and scalable
simulative studies of caching systems.
Finally, our results show that the SNM accounts for the temporal locality
observed in real traffic far better than existing approaches.
Comment: 7 pages, 7 figures. Accepted for publication in ACM Computer
Communication Review.
Method for detecting web tracking services
A method for detecting web tracking services during browsing activity performed by clients having associated client identifiers. The method comprises the steps of: extracting key-value pairs contained in navigation data; looking for a one-to-one correspondence between said client identifiers and the values carried by each key; and selecting the keys for which such a client-value one-to-one correspondence is observed for at least a predetermined number of clients. The selected keys identify the associated services as services performing tracking activities.
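The claim above can be sketched directly: a key is flagged as tracking when each observed client always sends the same value for it and no two clients share a value. The data layout and threshold parameter below are illustrative assumptions, not taken from the patent:

```python
from collections import defaultdict

def find_tracking_keys(observations, min_clients=3):
    """observations: iterable of (client_id, key, value) tuples extracted
    from navigation data (e.g. URL query strings). A key is flagged when
    its values are in one-to-one correspondence with client ids for at
    least 'min_clients' clients: each client always sends the same value,
    and no two clients share one."""
    per_key = defaultdict(lambda: defaultdict(set))  # key -> client -> values
    for client, key, value in observations:
        per_key[key][client].add(value)
    tracking = set()
    for key, per_client in per_key.items():
        # Keep only clients that consistently send a single value.
        consistent = {c: next(iter(vs))
                      for c, vs in per_client.items() if len(vs) == 1}
        # One-to-one: as many distinct values as clients.
        if (len(consistent) >= min_clients
                and len(set(consistent.values())) == len(consistent)):
            tracking.add(key)
    return tracking

obs = [
    ("c1", "uid", "AAA"), ("c1", "uid", "AAA"), ("c2", "uid", "BBB"),
    ("c3", "uid", "CCC"),                        # 'uid' identifies each client
    ("c1", "lang", "en"), ("c2", "lang", "en"),  # 'lang' shared -> not tracking
    ("c3", "lang", "it"),
]
print(find_tracking_keys(obs))  # → {'uid'}
```

The `min_clients` threshold plays the role of the claim's "predetermined number of clients", guarding against keys that only coincidentally look unique in small samples.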
Unveiling Web Fingerprinting in the Wild Via Code Mining and Machine Learning
Abstract
Fueled by advertising companies' need to accurately track users and their online habits, the practice of web fingerprinting has grown in recent years, with severe implications for users' privacy. In this paper, we design, engineer, and evaluate a methodology that combines the analysis of JavaScript code with machine learning for the automatic detection of web fingerprinters.
We apply our methodology to a dataset of more than 400,000 JavaScript files accessed by about 1,000 volunteers during a one-month-long experiment, to observe the adoption of fingerprinting in a real scenario. We compare approaches based on static and dynamic code analysis for automatically detecting fingerprinters and show that they provide different, complementary angles. This demonstrates that studies based on either static or dynamic code analysis alone give only a partial view of actual fingerprinting usage on the web. To the best of our knowledge, we are the first to perform this comparison with respect to fingerprinting.
Our approach achieves 94% accuracy with short decision times. With it we spot more than 840 fingerprinting services, of which 695 are unknown to popular tracker blockers. These include new actual trackers as well as services that use fingerprinting for purposes other than tracking, such as anti-fraud and bot recognition.
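The paper's feature set and trained model are not reproduced in the abstract. As a rough sketch of the static-analysis side only, one can count references to browser APIs commonly abused for fingerprinting and flag scripts that touch several distinct surfaces; the API list and threshold below are hypothetical stand-ins for the real classifier:

```python
import re

# Hypothetical feature set: browser APIs commonly abused for
# fingerprinting (illustrative, not the paper's actual features).
FP_APIS = [
    "canvas", "toDataURL", "getImageData", "AudioContext",
    "navigator.plugins", "navigator.userAgent", "screen.width",
    "WebGLRenderingContext", "getTimezoneOffset",
]

def static_features(js_source: str) -> dict:
    """Count fingerprinting-related API references in JavaScript source."""
    return {api: len(re.findall(re.escape(api), js_source)) for api in FP_APIS}

def looks_like_fingerprinter(js_source: str, min_distinct=3) -> bool:
    """Crude stand-in for a trained classifier: flag scripts that touch
    several distinct fingerprinting surfaces."""
    feats = static_features(js_source)
    return sum(1 for v in feats.values() if v > 0) >= min_distinct

script = """
var c = document.createElement('canvas');
var d = c.toDataURL();
var ua = navigator.userAgent, w = screen.width;
var tz = new Date().getTimezoneOffset();
"""
print(looks_like_fingerprinter(script))  # → True
```

Static counting alone misses obfuscated or dynamically assembled calls, which is exactly why the paper pairs it with dynamic analysis and a learned model rather than fixed thresholds.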
Using Passive Measurements to Demystify Online Trackers
The Internet revolution has led to the rise of trackers: online tracking services that shadow users' browsing activity. Despite trackers' pervasiveness, few users install privacy-enhancing plug-ins
- …