Quality of experience and access network traffic management of HTTP adaptive video streaming
The thesis focuses on the Quality of Experience (QoE) of HTTP adaptive video streaming (HAS) and on traffic management in access networks to improve the QoE of HAS. First, the QoE impact of adaptation parameters and of the time spent on each quality layer was investigated in subjective crowdsourcing studies. The results were used to compute a QoE-optimal adaptation strategy for given video and network conditions, which allows video service providers to develop and benchmark improved adaptation logics for HAS. Furthermore, the thesis investigated concepts to monitor video QoE on the application and network layers, which network providers can use in the QoE-aware traffic management cycle. Moreover, an analytic and simulative performance evaluation of QoE-aware traffic management on a bottleneck link was conducted. Finally, the thesis investigated socially-aware traffic management for HAS via Wi-Fi offloading of mobile HAS flows. A model for the distribution of public Wi-Fi hotspots and a platform for socially-aware traffic management on private home routers were presented, and a simulative performance evaluation investigated the impact of Wi-Fi offloading on the QoE and energy consumption of mobile HAS.
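An adaptation logic of the kind the thesis benchmarks could, for illustration, be approximated by a simple buffer- and throughput-based heuristic. The bitrate ladder, buffer threshold, and safety margin below are hypothetical examples, not the QoE-optimal strategy computed in the thesis:

```python
# Illustrative sketch of a HAS client adaptation logic (hypothetical
# parameters; the thesis computes a QoE-optimal strategy instead).

BITRATES_KBPS = [400, 1000, 2500, 5000]  # hypothetical bitrate ladder

def select_quality(buffer_s: float, throughput_kbps: float) -> int:
    """Pick the index of the quality layer for the next segment.

    Rule of thumb: avoid stalling (the dominant QoE impairment) first,
    then pick the highest bitrate the measured throughput sustains.
    """
    if buffer_s < 5.0:        # low buffer: prioritise stall avoidance
        return 0
    safety = 0.8              # only commit 80% of the measured throughput
    sustainable = throughput_kbps * safety
    best = 0
    for i, rate in enumerate(BITRATES_KBPS):
        if rate <= sustainable:
            best = i          # highest layer that still fits
    return best

# Example: 12 s of buffer and 3.5 Mbit/s measured throughput
layer = select_quality(buffer_s=12.0, throughput_kbps=3500)
print(BITRATES_KBPS[layer])  # 2500: highest rate within the 80% margin
```

A QoE-optimal strategy would additionally penalise frequent layer switches, which the crowdsourcing studies identify as a QoE factor; this sketch only captures the stall-avoidance and throughput-matching aspects.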
The Informal Screen Media Economy of Ukraine
This research explores informal film translation (voice-over and subtitling) and distribution (pirate streaming and torrenting) practices in Ukraine, which together comprise what I call the informal screen media economy of Ukraine. The study engages with wider debates about the distinct reasons media piracy exists in non-Western economies. There is already a considerable body of research on piracy outside the traditional anti-piracy discourse, one that recognises that informal media are neither unequivocally destructive nor necessarily marginal, particularly in non-Western countries. Yet there remain gaps in the range of geographies and the specific types of pirate practices being studied. Furthermore, academics often insufficiently address the intricate conditions of the context within which a given pirate activity is undertaken. Finally, whereas many researchers talk about pirates, considerably fewer talk to them. This project sets out to address these gaps.
Specifically, I examine the distinct practicalities of informal screen media practices in Ukraine through netnographic observations of pirate sites and in-depth interviews with Ukrainian informal screen media practitioners. I explore their notably diverse motivations for engaging in these activities and how they negotiate their practices within the complex economic, cultural, and regulatory context of Ukraine. I find that, contrary to common perceptions, Ukrainian pirates do not oppose copyright law but operate largely within and around it. Instead, a more important factor in Ukrainian piracy is the economics of the Ukrainian language. This is reflected in the language exclusivity inherent to most Ukrainian pirate distribution platforms, as well as in the motives of some informal translators, for whom the practice is a form of language activism. Overall, I argue for a more holistic approach to researching the informal space of the media economy, especially in non-Western contexts, one that recognises the heterogeneity of this space and accordingly explores the intricate factors behind its existence. In addition, the project offers a methodological contribution by providing a detailed reflection on the use of ethnographic methods to study a pirate economy in a non-Western, non-anglophone country.
Fringe platforms: An analysis of contesting alternatives to the mainstream social media platforms in a platformized public sphere
Social media companies are ubiquitous in our social lives and public debate. They provide spaces for discussion and grant us access to journalism. In his 1962 Strukturwandel der Öffentlichkeit, Jürgen Habermas described how the public sphere was transformed through the introduction of modern communication systems. With the advent of social media platforms, the public sphere has been transformed again, through 'platformization': the process by which Big Tech companies infiltrate the infrastructures, economic processes, and governmental frameworks of entire public sectors, structuring them around their own practices and logics. This dissertation studies the contemporary platformized public sphere not by focusing on its center but by looking at the edges of the platform ecology, where radical or counter-platform technologies are situated. I do this through the concept of 'fringe platforms', defined as alternative platform services that are established as an explicit critique of the ideological premises and practices of mainstream platform services, and that strive to shift the norms of the platform ecology they contest by offering an ideologically different technology. One such platform is the alt-right microblogging service Gab.com, which was subjected to a process of 'deplatformization' in 2018, when its user base was implicated in white supremacist terrorism. Deplatformization refers to tech companies' efforts to reduce toxic content by pushing controversial platforms and their communities to the edges of the ecosystem, denying them access to the basic infrastructural services required to function online. By studying Gab through three case studies, this dissertation poses the following research questions: What is the role of fringe social media platforms in a platformized public sphere? What hierarchies and shifts in power do they signify? And how can they inform us about the platform ecosystem?
In the first case study, I explore Gab as an ecosystem and conclude that the study of fringe platforms requires giving a more explicit role in the analysis to a platform's self-positioning and narrative, as well as shifting the focus from the platform as an ecosystem to a lens that takes into account the (infra)structural consequences of the platform as part of an ecosystem of services. In the second and third case studies, I follow this conclusion and examine Gab as part of the platform ecosystem, shifting the analytical lens to the power dynamics and infrastructures of the platformized public sphere. There, I conclude that deplatformization demonstrates how the power and influence of private technology platforms reach far beyond their own boundaries, revealing platform power as infrastructural and rule-setting power. In the concluding chapter, I argue that this fringe lens is useful not only for the analysis of fringe platforms but for the platformized public sphere as a whole: it makes the structures and infrastructures of the platformized public sphere visible; highlights power and discourse; focuses on dynamics, conflict, and breakdown; and incorporates the dominant and democratically productive as well as the marginal and illiberal in its analyses.
Web 3.0: The Future of Internet
With the rapid growth of the Internet, daily life has become deeply bound to it. To take advantage of the massive amounts of data and information on the Internet, the Web architecture is continuously being reinvented and upgraded. From the static, informative character of Web 1.0 to the dynamic, interactive features of Web 2.0, scholars and engineers have worked hard to make the online world more open, inclusive, and equal. Indeed, the next generation of Web evolution, Web 3.0, is already arriving and shaping our lives. Web 3.0 is a decentralized Web architecture that is more intelligent and safer than its predecessors: by thoroughly reconstructing the Internet and IT infrastructure, it greatly reduces the risks posed by monopolists and criminals. In short, Web 3.0 addresses web data ownership through distributed technologies and optimizes the online world from the perspectives of economy, culture, and technology, promoting novel content production methods, organizational structures, and economic forms. However, Web 3.0 is not yet mature and remains disputed. This paper therefore presents a comprehensive survey of Web 3.0, with a focus on current technologies, challenges, opportunities, and outlook. The article first gives a brief overview of the history of the World Wide Web and of the differences among Web 1.0, Web 2.0, Web 3.0, and Web3. Then, several technical implementations of Web 3.0 are illustrated in detail. We discuss the revolution and benefits that Web 3.0 brings. Finally, we explore several challenges and open issues in this promising area.
Comment: ACM Web Conference 202
Evaluating Copyright Protection in the Data-Driven Era: Centering on Motion Picture's Past and Future
Since the 1910s, Hollywood has measured audience preferences with rough, industry-created methods. In the 1940s, scientific audience research led by George Gallup began to conduct film audience surveys with traditional statistical and psychological methods; however, the quantity, quality, and speed of such research were limited. Things changed dramatically in the Internet age. The prevalence of digital data increases the immediacy, convenience, breadth, and depth with which audience and content data can be collected. Advanced data and AI technologies have also allowed machines to provide filmmakers with ideas or even to make human-like expressions. This brings new copyright challenges in the data-driven era.
Massive amounts of text and data are the premise of text and data mining (TDM), as well as the admission ticket to access machine learning technologies. Given the high and uncertain copyright violation risks in the data-driven creation process, whoever controls the copyrighted film materials can monopolize the data and AI technologies to create motion pictures in the data-driven era. Considering that copyright shall not be the gatekeeper to new technological uses that do not impair the original uses of copyrighted works in the existing markets, this study proposes to create a TDM and model training limitations or exceptions to copyrights and recommends the Singapore legislative model.
Motion pictures, as public entertainment media, have inherently limited creative choices, and identifying the human original expression in data-driven works is challenging. This study proposes establishing a voluntarily negotiated license institution, backed by a compulsory license, to enable other filmmakers to reuse film materials in new motion pictures. The degree of human original authorship in the film material, certified by film artists' guilds, should be a crucial factor in setting the compulsory license's royalty rate and terms, so as to encourage the retention of human artists. This study argues that international and domestic policymakers should enjoy broad discretion in qualifying copyright protection for data-driven works, because data-driven work is a new category of work. It would be too late to wait until ubiquitous data-driven works block human creative freedom and floods of copyright litigation over data-driven works overwhelm the judicial system.
Analysing the Design of Privacy-Preserving Data-Sharing Architecture
Privacy has become an essential software quality to consider in a software system. Privacy practices should be adopted from the early stages of the system design to safeguard personal data from privacy violations. Privacy patterns are proposed in industry and academia as reusable design solutions to address different privacy issues.
However, the diverse types and granularity of these patterns make them difficult for practitioners to select and adopt in an architecture. First, the fragmented information about system actors in the patterns does not align with the regulatory entities and the interactions between them. Second, the privacy patterns lack the architectural perspectives that could help weave them into concrete software designs. Third, descriptions of the consequences of applying the patterns do not cover their impact on software quality attributes.
This thesis aims to provide guidance to software architects and practitioners for considering and applying privacy patterns in their design, by adding new perspectives to the existing patterns. First, the research provides an analysis of the relationships between regulatory entities and their responsibility in adopting the patterns in a software design. Then, the research reports studies that were conducted using architectural-level modelling-based approaches, to analyse the architectural views of privacy patterns. The analyses aim to improve understanding of how privacy patterns are applied in software designs and how such a design affects software quality attributes, including privacy, performance, and modifiability.
Finally, in an effort to harmonise and unite the extended views of privacy patterns that relate closely to system architecture, this research proposes an enhanced pattern catalogue and a systematic privacy-by-design (PbD) pattern-selection model that aim to guide software architects in pattern selection during software design. The enhanced pattern catalogue offers consolidated information on the extended view of privacy patterns, and the selection model provides a structured way for practitioners to know when and how to use the catalogue in the system-design process. Two industry case studies are used to evaluate the proposed pattern catalogue and selection model. The findings demonstrate that the proposed frameworks are applicable to different types of data-sharing software systems and usable in supporting pattern-selection decisions during privacy design.
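A pattern-selection step of the kind the thesis systematises could, purely for illustration, be sketched as a rule-based lookup that filters patterns by the privacy issue they address and ranks them by their impact on a quality attribute the architect cares about. The catalogue entries, issue labels, and impact scores below are hypothetical, not the thesis's actual catalogue or model:

```python
# Hypothetical sketch of rule-based privacy-pattern selection; the
# pattern names, issues, and impact scores are invented for illustration.

CATALOGUE = [
    {"name": "Encryption at Rest",     "issue": "disclosure",
     "impact": {"performance": -1, "modifiability": 0}},
    {"name": "Access Control Gateway", "issue": "disclosure",
     "impact": {"performance": 0, "modifiability": -1}},
    {"name": "Data Minimisation",      "issue": "over-collection",
     "impact": {"performance": 1, "modifiability": 0}},
]

def select_patterns(issue: str, critical_attribute: str) -> list:
    """Return candidate patterns for an issue, least harmful to the
    critical quality attribute first."""
    candidates = [p for p in CATALOGUE if p["issue"] == issue]
    return sorted(candidates,
                  key=lambda p: p["impact"][critical_attribute],
                  reverse=True)

# Example: a performance-critical system facing a disclosure risk
best = select_patterns("disclosure", "performance")
print(best[0]["name"])  # Access Control Gateway (smaller performance cost)
```

The thesis's actual model is richer (it also aligns patterns with regulatory entities and architectural views); this sketch only illustrates the trade-off between addressing a privacy issue and preserving other quality attributes.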
Survey of Models and Architectures to Ensure Linked Data Access
Mobile access to the Web of Data is currently a real challenge in developing countries, which are characterized by limited Internet connectivity and a high penetration of mobile devices with limited resources (such as cache and memory). In this paper, we survey and compare proposed solutions (models and architectures) that could contribute to solving the problem of mobile access to the Web of Data under intermittent Internet access. These solutions are discussed in relation to the underlying network architectures and data models they assume. We present a conceptual study of peer-to-peer solutions based on gossip protocols designed to build connected overlay networks. In addition, we provide a detailed analysis of client-server and data-replication systems generally designed to ensure the local availability of data. We conclude with recommendations for achieving a connected architecture that provides mobile contributors with local access to the Web of Data.
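The gossip-based overlay construction discussed in the survey can be illustrated with a minimal peer-sampling step: two peers exchange random subsets of their partial views and merge the result. The view size and sample size below are hypothetical; real protocols add entry ages, timeouts, and churn handling:

```python
import random

# Minimal sketch of one gossip peer-sampling exchange (hypothetical
# parameters; real peer-sampling protocols are considerably richer).

VIEW_SIZE = 4  # each peer keeps only a small partial view of the network

def gossip_exchange(view_a: set, view_b: set, a: str, b: str) -> tuple:
    """Peers a and b swap random samples of their views and merge them."""
    sample_a = set(random.sample(sorted(view_a), min(2, len(view_a))))
    sample_b = set(random.sample(sorted(view_b), min(2, len(view_b))))
    # merge the received sample, never keep a link to yourself,
    # then truncate back to the bounded view size
    new_a = (view_a | sample_b) - {a}
    new_b = (view_b | sample_a) - {b}
    return (set(random.sample(sorted(new_a), min(VIEW_SIZE, len(new_a)))),
            set(random.sample(sorted(new_b), min(VIEW_SIZE, len(new_b)))))

# Example: two peers with disjoint views learn about each other's neighbours
va, vb = gossip_exchange({"p1", "p2", "p3"}, {"p4", "p5", "p6"}, "a", "b")
print(len(va) <= VIEW_SIZE and len(vb) <= VIEW_SIZE)  # True
```

Repeating such exchanges between random peer pairs keeps the overlay connected with high probability while every node stores only a constant-size view, which is what makes the approach attractive for resource-limited mobile devices.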
TOWARDS RELIABLE CIRCUMVENTION OF INTERNET CENSORSHIP
The Internet plays a crucial role in today's social and political movements by facilitating the free circulation of speech, information, and ideas; democracy and human rights throughout the world critically depend on preserving and bolstering the Internet's openness. Consequently, repressive regimes, totalitarian governments, and corrupt corporations regulate, monitor, and restrict access to the Internet, which is broadly known as Internet censorship. As most countries improve their Internet infrastructure, they can implement more advanced censoring techniques, and advances in applying machine learning to network traffic analysis have enabled more sophisticated Internet censorship. In this thesis, we take a close look at the main pillars of Internet censorship and introduce new defenses and attacks in the Internet censorship literature.
Internet censorship techniques inspect users' communications and can interrupt a connection to prevent a user from communicating with a specific entity. Traffic analysis is one of the main techniques used to infer information from Internet communications. A major challenge for traffic analysis mechanisms is scaling to today's exploding volumes of network traffic: they impose high storage, communication, and computation overheads. We address this scalability issue by introducing a new direction for traffic analysis, which we call compressive traffic analysis. Moreover, we show that, unfortunately, traffic analysis attacks can be conducted on anonymity systems with drastically higher accuracy than before by leveraging emerging learning mechanisms. In particular, we design a system, called DeepCorr, that outperforms the state of the art by significant margins in correlating network connections. DeepCorr leverages an advanced deep learning architecture to learn a flow correlation function tailored to complex networks. To analyze the weaknesses of such approaches, we also show that an adversary can defeat deep-neural-network-based traffic analysis techniques by applying statistically undetectable adversarial perturbations to the patterns of live network traffic.
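The flow correlation task can be illustrated with a deliberately simple hand-crafted statistic: compare the inter-packet delay (IPD) vectors of two flows with Pearson correlation. This is a toy baseline of the kind deep-learning approaches replace with a learned function, not the thesis's actual method; the IPD values and threshold are invented:

```python
import math

# Toy illustration of flow correlation: compare inter-packet delay (IPD)
# vectors of two flows with Pearson correlation. Learned approaches
# replace this hand-crafted statistic with a trained network.

def pearson(x: list, y: list) -> float:
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def correlated(ipds_in: list, ipds_out: list, threshold: float = 0.9) -> bool:
    """Flag two flows as endpoints of the same connection if their
    timing patterns correlate strongly (hypothetical threshold)."""
    return pearson(ipds_in, ipds_out) >= threshold

# An entry flow and its slightly jittered exit flow correlate strongly,
# while an unrelated flow does not.
entry = [0.10, 0.30, 0.05, 0.50, 0.20]
exit_ = [0.11, 0.29, 0.06, 0.52, 0.19]   # same flow + network jitter
other = [0.40, 0.08, 0.33, 0.12, 0.45]   # unrelated flow
print(correlated(entry, exit_))  # True
print(correlated(entry, other))  # False
```

Such simple statistics break down under heavy jitter, packet drops, and short flows, which is precisely why learned correlation functions achieve drastically higher accuracy, and why adversarial perturbations to traffic patterns are an effective countermeasure.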
We also design techniques to circumvent Internet censorship. Decoy routing is an emerging approach to censorship circumvention in which circumvention is implemented with help from a number of volunteer Internet autonomous systems, called decoy ASes. We propose a new architecture for decoy routing that, by design, is significantly more resistant to rerouting attacks than all previous designs. Unlike previous designs, our architecture operates decoy routers only on the downstream traffic of censored users; we therefore call it downstream-only decoy routing. As we demonstrate through Internet-scale BGP simulations, downstream-only decoy routing offers significantly stronger resistance to rerouting attacks, intuitively because a (censoring) ISP has much less control over the downstream BGP routes of its traffic. We then propose game-theoretic approaches to model the arms race between censors and censorship circumvention tools, which allows us to analyze the effect of different parameters and censoring behaviors on the performance of circumvention tools. We apply our methods to two fundamental problems in Internet censorship.
Finally, to bring our ideas to practice, we designed a new censorship circumvention tool called \name. \name aims to increase the collateral damage of censorship by employing a 'mass' of normal Internet users, from both censored and uncensored areas, to serve as circumvention proxies.