
    Distributed Spatial Data Sharing: a new era in sharing spatial data

    The advancements in information and communications technology, including the widespread adoption of GPS-based sensors, improvements in computational data processing, and satellite imagery, have resulted in new data sources, stakeholders, and methods of producing, using, and sharing spatial data. Every day, vast amounts of data are produced by individuals interacting with digital content and by automated and semi-automated sensors deployed across the environment, and a growing portion of this information embeds geographic information directly or indirectly. The widespread use of automated smart sensors and the increased variety of georeferenced media have turned individuals into data collectors, raising new social concerns around individual geoprivacy and data ownership. These changes require new approaches to managing, sharing, and processing geographic data. Distributed data-sharing technologies may address some of these challenges by moving from centralized control and ownership of data to a more distributed system, in which individuals are responsible for gathering data, storing it, and controlling access to it. Stepping into this new era of distributed spatial data sharing requires preparation, including tools and algorithms for working efficiently with spatial data in this environment.

    Peer-to-peer (P2P) networks have become very popular for storing and sharing information in a decentralized manner, but they lack methods to process spatio-temporal queries. In the first chapter of this research, we propose a new spatio-temporal multi-level tree structure, the Distributed Spatio-Temporal Tree (DSTree), which aims to address this problem. DSTree is capable of answering a range of spatio-temporal queries. We also propose a framework that uses a blockchain to share a DSTree on the distributed network, so that each user can replicate, query, or update it. Next, we propose a dynamic k-anonymity algorithm to address geoprivacy concerns on distributed platforms. Individual, dynamic control of geoprivacy is one of the primary purposes of the framework introduced in this research. Sharing data within and between organizations can be enhanced by the greater trust and transparency offered by distributed or decentralized technologies. Rather than depending on a central authority to manage geographic data, a decentralized framework provides fine-grained and transparent sharing: users control the precision of the spatial data they share with others, are not limited to third-party algorithms deciding their privacy level, and are not restricted to binary levels of location sharing.

    As mentioned earlier, individuals and communities can benefit from distributed spatial data sharing. In the last chapter of this work, we develop an image-sharing platform, known as the harvester safety application, for the Kakisa Indigenous community in northern Canada. In this project, we investigate the potential of a Distributed Spatial Data Sharing (DSDS) infrastructure for small-scale data-sharing needs in Indigenous communities, explore the potential use cases and challenges, and propose a DSDS architecture that allows users in small communities to share and query their data. Given the current availability of distributed tools, the sustainable development of such applications needs accessible technology: easy-to-use tools for applying distributed technologies to community-scale spatial data sharing. In conclusion, distributed technology is in its early stages and requires easy-to-use tools, methods, and algorithms to handle, share, and query geographic information. Once these are developed, it will be possible to compare DSDS against other data systems and thereby evaluate the practical benefit of such systems. A distributed data-sharing platform needs a standard framework for sharing data between different entities; just as in the first decades of the web, these tools need regulation and standards. Such standards can benefit individuals and small communities in the current chaotic spatial data-sharing environment controlled by central bodies.
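    As a rough illustration of the kind of structure described above, the sketch below shows a node of a spatio-temporal tree answering range queries, plus a coordinate-generalization helper of the sort a user-controlled precision setting could build on. This is a minimal Python sketch under assumed names and layout (Node, overlaps, generalize); it is not the thesis's DSTree implementation, which also involves multi-level structure and blockchain-based replication.

        from dataclasses import dataclass, field

        # Minimal sketch of a spatio-temporal index node (illustrative, not DSTree's API).
        @dataclass
        class Node:
            bbox: tuple                                  # (min_x, min_y, max_x, max_y)
            span: tuple                                  # (t_start, t_end)
            children: list = field(default_factory=list)
            entries: list = field(default_factory=list)  # (x, y, t, payload) tuples

            def overlaps(self, qb, qt):
                return not (qb[2] < self.bbox[0] or qb[0] > self.bbox[2] or
                            qb[3] < self.bbox[1] or qb[1] > self.bbox[3] or
                            qt[1] < self.span[0] or qt[0] > self.span[1])

            def query(self, qb, qt, out):
                """Collect payloads inside spatial box qb and time window qt."""
                if not self.overlaps(qb, qt):
                    return out
                out += [p for (x, y, t, p) in self.entries
                        if qb[0] <= x <= qb[2] and qb[1] <= y <= qb[3]
                        and qt[0] <= t <= qt[1]]
                for child in self.children:
                    child.query(qb, qt, out)
                return out

        def generalize(x, y, cell):
            """Snap a point to a coarser grid before sharing it, so the data owner,
            not a third party, chooses the precision of the disclosed location."""
            return (round(x / cell) * cell, round(y / cell) * cell)

        root = Node((0, 0, 100, 100), (0, 1000))
        root.entries.append((12.3, 45.6, 500, "photo-1"))
        print(root.query((10, 40, 20, 50), (0, 600), []))  # -> ['photo-1']
        print(generalize(12.34, 45.67, 5.0))               # -> (10.0, 45.0)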

    Copyright Content Moderation in the EU: An Interdisciplinary Mapping Analysis

    This report is part of the reCreating Europe project and describes the results of the research carried out in the context of Work Package 6 on the mapping of the EU legal framework and intermediaries’ practices on copyright content moderation. The report addresses the following main research question: how can we map the impact on access to culture in the Digital Single Market of content moderation of copyright-protected content on online platforms? The report consists of six chapters. After a brief introduction in Chapter 1, Chapter 2 develops a conceptual framework and an interdisciplinary methodological approach to examine copyright content moderation on online platforms and its potential impact on access to culture. The analysis clarifies our terminology, distinguishes between platform “governance” and “regulation”, elucidates the concept of “online platform”, and positions our research in the context of regulation “of”, “by” and “on” platforms.

    Chapter 3 carries out a legal mapping of the topic of this report at EU level. Our focus here is the legal regime of art. 17 of the Copyright in the Digital Single Market Directive (CDSMD). We first provide some context on the legal regime that preceded the CDSMD. We then briefly explain the legislative process leading to the adoption of the Directive, followed by a snapshot of the legal regime, including remarks on the European Commission’s stakeholder consultations and Guidance on art. 17, and on the action for annulment of art. 17 brought by the Polish government in Case C-401/19. This is followed by a detailed analysis of art. 17, with an emphasis on its liability regime and its rules with implications for copyright content moderation by OCSSPs. The chapter closes with an examination of the interface between art. 17 CDSMD and the Digital Services Act (DSA), whose final version was agreed in the concluding stages of this report.

    Chapter 4 analyses the findings of our comparative legal research at national level. The findings are based on two legal questionnaires answered by national experts in ten Member States, before and after the implementation due date of the CDSMD. The phase-one questionnaire focused on the status quo in this field of law; the phase-two questionnaire was dedicated to the national implementations of art. 17 CDSMD and the consequences of such implementation. The collected data highlight both the similarities and, in some cases, the remarkable differences in the Member States’ legal systems before and after art. 17 CDSMD, which cast doubt on the effectiveness of the provision for EU harmonisation in this field.

    Chapter 5 uses qualitative methods to map the copyright content moderation structures of key social media platforms, with a focus on their Terms and Conditions and automated systems. The chapter first presents empirical findings on the kinds of public documents and rules adopted by a sample of 15 platforms, categorised as mainstream (Facebook, YouTube, Instagram, Twitter, SoundCloud), alternative (Diaspora, Mastodon, DTube, Pixelfed, Audius) and specialised (Vimeo, Twitch, Pornhub, FanFiction, Dribbble). It also provides an in-depth longitudinal examination of how the copyright content moderation rules of six case studies (Facebook, SoundCloud, Pornhub, FanFiction, Diaspora, and DTube) have changed since these platforms’ launch, as well as a comparison between three automated copyright content moderation systems: Content ID (YouTube), Audible Magic (several platforms), and Rights Manager (Meta/Facebook), with a thorough description of the latter. The chapter then suggests that two dual processes mark the evolution of platforms’ copyright content moderation structures: (1) over time, these structures became more complex (more rules, spread over more types of documents) and more opaque (harder to access and understand); and (2) control over copyright content moderation tilted strongly towards the platforms themselves, a development that helped concentrate power in the hands of both platforms and large rights holders, at the expense of ordinary users and creators. While not equally true of all the platforms we analysed, complexification/opacification and platformisation/concentration seem to be among the clearest developments in the recent history of private regulation of copyright content moderation. Finally, Chapter 6 concludes with a summary of our analysis and recommendations for future policy actions.

    Cloud-edge hybrid applications

    Many modern applications are designed to provide interactions among users, including multi-user games, social networks, and collaborative tools. Users expect application response times in the order of milliseconds, to foster interaction and interactivity. The design of these applications typically adopts a client-server model, where all interactions are mediated by a centralized component. This approach introduces availability and fault-tolerance issues, which can be mitigated by replicating the server component and even by relying on geo-replicated solutions in cloud computing infrastructures. Even then, the client-server communication model leads to unnecessary latency penalties for geographically close clients and high operational costs for the application provider.

    This dissertation proposes a cloud-edge hybrid model with secure and efficient propagation and consistency mechanisms. This model combines client-side replication and client-to-client propagation to provide low latency and minimize the dependency on the server infrastructure, fostering availability and fault tolerance. To realize this model, this work makes the following key contributions. First, the cloud-edge hybrid model is materialized in a system design where clients maintain replicas of the data and synchronize in a peer-to-peer fashion, and servers assist the clients’ operation. We study how to bring most of the application logic to the client side, using the centralized service primarily for durability, access control, discovery, and overcoming internetwork limitations. Second, we define protocols for weakly consistent data replication, including a novel CRDT model (∆-CRDTs). We provide a study on partial replication, exploring the challenges and fundamental limitations of providing causal consistency and the difficulty of supporting client-side replicas due to their ephemeral nature. Third, we study how client misbehaviour can impact the guarantees of causal consistency. We propose new secure weak consistency models for insecure settings, and algorithms to enforce such consistency models.

    The experimental evaluation of our contributions shows their specific benefits and limitations compared with the state of the art. In general, the cloud-edge hybrid model leads to faster application response times, lower client-to-client latency, higher system scalability (as fewer clients need to connect to servers at the same time), the possibility to work offline or disconnected from the server, and reduced server bandwidth usage. In summary, we propose a hybrid of cloud and edge that provides lower user-to-user latency, availability under server disconnections, and improved server scalability, while being efficient, reliable, and secure.
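    To make the ∆-CRDT idea concrete, here is a minimal sketch of a delta-state grow-only set in Python (the class and method names are illustrative, not the dissertation's API): mutations return small deltas that peers ship to one another instead of full state, and merging is an idempotent, commutative join, so replicas converge regardless of delivery order or duplication.

        class DeltaGSet:
            """Delta-state grow-only set: a toy stand-in for the ∆-CRDT model."""

            def __init__(self):
                self.state = set()

            def add(self, element):
                # Mutate locally and return a small delta to propagate to peers.
                delta = {element} - self.state
                self.state |= delta
                return delta

            def merge(self, delta):
                # Join is set union: commutative, associative, idempotent.
                self.state |= delta

        # Peers exchange deltas client-to-client instead of full state.
        a, b = DeltaGSet(), DeltaGSet()
        d = a.add("photo-42")
        b.merge(d)
        b.merge(d)                     # duplicate delivery is harmless
        assert a.state == b.state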

    Music and Digital Media: A planetary anthropology

    Anthropology has neglected the study of music. Music and Digital Media shows how and why this should be redressed. It does so by enabling music to expand the horizons of digital anthropology, demonstrating how the field can build interdisciplinary links to music and sound studies, digital/media studies, and science and technology studies. Music and Digital Media is the first comparative ethnographic study of the impact of digital media on music worldwide. It offers a radical and lucid new theoretical framework for understanding digital media through music, showing that music is today where the promises and problems of the digital assume clamouring audibility. The book contains ten chapters, eight of which present comprehensive original ethnographies; they are bookended by an authoritative introduction and a comparative postlude. Five chapters address popular, folk, art, and crossover musics in the global South and North, including Kenya, Argentina, India, Canada, and the UK. Three chapters bring the digital experimentally to the fore, presenting pioneering ethnographies of an extra-legal peer-to-peer site and the streaming platform Spotify, a series of prominent internet-mediated music genres, and the first ethnography of a global software package, the interactive music platform Max. The book is unique in bringing ethnographic research on popular, folk, art, and crossover musics from the global North and South into a comparative framework on a large scale, and it creates an innovative new paradigm for comparative anthropology. It shows how music enlarges anthropology while demanding to be understood with reference to classic themes of anthropological theory.

    Praise for Music and Digital Media: ‘Music and Digital Media is a groundbreaking update to our understandings of sound, media, digitization, and music. Truly transdisciplinary and transnational in scope, it innovates methodologically through new models for collaboration, multi-sited ethnography, and comparative work. It also offers an important defense of—and advancement of—theories of mediation.’ Jonathan Sterne, Communication Studies and Art History, McGill University. ‘Music and Digital Media is a nuanced exploration of the burgeoning digital music scene across both the global North and the global South. Ethnographically rich and theoretically sophisticated, this collection will become the new standard for this field.’ Anna Tsing, Anthropology, University of California at Santa Cruz. ‘The global drama of music’s digitisation elicits extreme responses – from catastrophe to piratical opportunism – but between them lie more nuanced perspectives. This timely, absolutely necessary collection applies anthropological understanding to a deliriously immersive field, bringing welcome clarity to complex processes whose impact is felt far beyond what we call music.’ David Toop, London College of Communication, musician and writer. ‘Spanning continents and academic disciplines, the rich ethnographies contained in Music and Digital Media make it obligatory reading for anyone wishing to understand the complex, contradictory, and momentous effects that digitization is having on musical cultures.’ Eric Drott, Music, University of Texas, Austin. ‘This superb collection, with an authoritative overview as its introduction, represents the state of the art in studies of the digitalisation of music. It is also a testament to what anthropology at its reflexive best can offer the rest of the social sciences and humanities.’ David Hesmondhalgh, Media and Communication, University of Leeds. ‘This exciting volume forges new ground in the study of local conditions, institutions, and sounds of digital music in the Global South and North. The book’s planetary scope and its commitment to the “messiness” of ethnographic sites and concepts amplify emergent configurations and meanings of music, the digital, and the aesthetic.’ Marina Peterson, Anthropology, University of Texas, Austin.

    Online learning on the programmable dataplane

    This thesis makes the case for managing computer networks with data-driven methods (automated statistical inference and control based on measurement data and runtime observations) and argues for their tight integration with programmable dataplane hardware, so that management decisions are made faster and from more precise data. Optimisation, defence, and measurement of networked infrastructure are each challenging tasks in their own right, and are currently dominated by hand-crafted heuristic methods. These become harder to reason about and deploy as networks scale in rates and number of forwarding elements, and their design requires expert knowledge and care around unexpected protocol interactions. This makes tailored, per-deployment or per-workload solutions infeasible to develop. Recent advances in machine learning offer capable function approximation and closed-loop control that suit many of these tasks. New programmable dataplane hardware enables more agility in the network: runtime reprogrammability, precise traffic measurement, and low-latency on-path processing. The synthesis of these two developments allows complex decisions to be made on previously unusable state, and made quicker by offloading inference to the network.

    To justify this argument, I advance the state of the art in data-driven defence of networks, in novel dataplane-friendly online reinforcement learning algorithms, and in in-network data reduction to allow classification of switch-scale data. Each requires co-design that is aware of the network and of the failure modes of systems and carried traffic. To make online learning possible in the dataplane, I use fixed-point arithmetic and modify classical (non-neural) approaches to take advantage of the SmartNIC compute model and make use of rich device-local state. I show that data-driven solutions still require great care to design correctly, but with the right domain expertise they can improve on pathological cases in DDoS defence, such as protecting legitimate UDP traffic. In-network aggregation into histograms is shown to enable accurate classification from fine temporal effects, and allows hosts to scale such classification to far larger flow counts and traffic volumes. Moving reinforcement learning to the dataplane is shown to offer substantial benefits in state-action latency and online learning throughput versus host machines, allowing policies to react faster to fine-grained network events. The dataplane environment is key to making reactive online learning feasible; to port further algorithms and learnt functions, I collate and analyse the strengths of current and future hardware designs, as well as of individual algorithms.
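    As a hedged illustration of fixed-point online learning of the kind described above, the sketch below implements a classical Q-learning update in Q16.16 fixed point using only integer operations, the style of arithmetic a SmartNIC-like dataplane can execute where floating point is unavailable. The constants and table layout are assumptions for the example, not the thesis's code.

        FRAC_BITS = 16
        ONE = 1 << FRAC_BITS              # 1.0 in Q16.16 fixed point

        def to_fp(x):
            return int(x * ONE)

        def fp_mul(a, b):
            return (a * b) >> FRAC_BITS   # integer-only multiply

        ALPHA = to_fp(0.1)                # learning rate
        GAMMA = to_fp(0.9)                # discount factor

        def q_update(q, state, action, reward_fp, next_state):
            """One classical (non-neural) Q-learning step, integers only."""
            best_next = max(q[next_state])
            td_target = reward_fp + fp_mul(GAMMA, best_next)
            td_error = td_target - q[state][action]
            q[state][action] += fp_mul(ALPHA, td_error)

        # Example: 2 states x 2 actions; reward 1.0 for action 1 in state 0.
        q = [[0, 0], [0, 0]]
        q_update(q, 0, 1, to_fp(1.0), 1)
        print(q[0][1] / ONE)              # ~0.1 after one update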

    Intelligent Circuits and Systems

    ICICS-2020 is the third conference initiated by the School of Electronics and Electrical Engineering at Lovely Professional University. It explored recent innovations by researchers working on the development of smart and green technologies in the fields of energy, electronics, communications, computers, and control. ICICS enables innovators to identify new opportunities for the social and economic benefit of society. The conference bridges the gap between academia, R&D institutions, social visionaries, and experts from all strata of society, who present their ongoing research activities and foster research relations. It provides opportunities for the exchange of new ideas, applications, and experiences in the field of smart technologies, and for finding global partners for future collaboration. ICICS-2020 was conducted in two broad tracks: Intelligent Circuits & Intelligent Systems, and Emerging Technologies in Electrical Engineering.

    The Graduated Response


    Development of a system compliant with the Application-Layer Traffic Optimization Protocol

    Integrated master's dissertation in Informatics Engineering. With the ever-increasing Internet usage that accompanies the start of the new decade, optimizing this world-scale network of computers becomes a high priority in a technological sphere where the number of users keeps rising, as do the Quality of Service (QoS) demands of applications in domains such as media streaming and virtual reality. In the face of rising traffic and stricter application demands, a better understanding of how Internet Service Providers (ISPs) should manage their assets is needed. An important concern is how applications utilize the underlying network infrastructure over which they reside. Most of these applications act with little regard for ISP preferences, as exemplified by their lack of effort to achieve traffic locality, a feature that network administrators would prefer and that could also improve application performance. However, even a best-effort attempt by applications to cooperate will hardly succeed if ISP policies are not clearly communicated to them. A system that bridges the interests of the two layers therefore has much potential to help achieve a mutually beneficial scenario.

    The main focus of this thesis is the Application-Layer Traffic Optimization (ALTO) working group, formed by the Internet Engineering Task Force (IETF) to explore standardizations for network information retrieval. This group specified a request-response protocol in which authoritative entities provide resources containing network status information and administrative preferences. This sharing of infrastructural insight is intended to enable a cooperative environment between the network overlay and underlay during application operations, obtaining better use of infrastructural resources and the consequent minimization of the associated operational costs. This work gives an overview of the historical network tussle between applications and service providers, presents the ALTO working group's project as a solution, implements an extended system built upon its ideas, and finally verifies the developed system's efficiency, in a simulation, compared with classical alternatives.
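    To make the overlay/underlay cooperation concrete, here is a small sketch of how a client might rank candidate peers using an ALTO-style cost map. The JSON shape loosely follows the cost map resource of RFC 7285, but the values, the pid_of() lookup, and the ranking function are illustrative assumptions rather than part of the developed system.

        import json

        # A made-up ALTO-style cost map: routing cost between provider-defined PIDs.
        cost_map = json.loads("""
        {
          "meta": {"cost-type": {"cost-mode": "numerical", "cost-metric": "routingcost"}},
          "cost-map": {
            "PID1": {"PID1": 1, "PID2": 5, "PID3": 10},
            "PID2": {"PID1": 5, "PID2": 1, "PID3": 8}
          }
        }
        """)

        def pid_of(ip):
            # Stand-in: a real client resolves IPs to PIDs via the ALTO network map.
            return {"10.0.0.5": "PID1", "192.0.2.7": "PID2", "198.51.100.9": "PID3"}[ip]

        def rank_peers(my_ip, candidates):
            """Prefer peers the ISP marks as cheaper to reach (lower routing cost)."""
            costs = cost_map["cost-map"][pid_of(my_ip)]
            return sorted(candidates, key=lambda ip: costs[pid_of(ip)])

        print(rank_peers("10.0.0.5", ["198.51.100.9", "192.0.2.7"]))
        # -> ['192.0.2.7', '198.51.100.9']  (ISP-local peer ranked first)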

    Optimizing whole programs for code size

    Reducing code size has benefits at every scale. It can help fit embedded software into strictly limited storage space, reduce mobile app download time, and improve the cache usage of supercomputer software. There are many optimizations available that reduce code size, but research has often neglected this goal in favor of speed, and some recently developed compiler techniques have not yet been applied for size reduction. My work shows that newly practical compiler techniques can be used to develop novel code size optimizations. These optimizations complement each other, and other existing methods, in minimizing code size. I introduce two new optimizations, Guided Linking and Semantic Outlining, and also present a comparison framework for code size reduction methods that explains how and when my new optimizations work well with other, existing optimizations.

    Guided Linking builds on recent work that optimizes multiple programs and shared libraries together. It links an arbitrary set of programs and libraries into a single module. The module can then be optimized with arbitrary existing link-time optimizations, without changes to the optimization code, allowing them to work across program and library boundaries; for example, a library function can be inlined into a plugin module. I also demonstrate that deduplicating functions in the merged module can significantly reduce code size in some cases. Guided Linking ensures that all necessary dynamic linker behavior, such as plugin loading, still works correctly; it relies on developer-provided constraints to indicate which behavior must be preserved. Guided Linking can achieve a 13% to 57% size reduction in some scenarios, and can speed up the Python interpreter by 9%.

    Semantic Outlining relies on the use of automated theorem provers to check semantic equivalence of pieces of code, which has only recently become feasible to perform at scale. It extends outlining, an established technique for deduplicating structurally equivalent pieces of code, to work on code pieces that are semantically equivalent even if their structure is completely different.

    My comparison framework covers a large number of different code size reduction methods from the literature, in addition to my new methods. It describes several different aspects by which each method can be compared; in particular, there are multiple types of redundancy in program code that can be exploited to reduce code size, and methods that exploit different types of redundancy are likely to work well in combination with each other. This explains why Guided Linking and Semantic Outlining can be effective when used together, along with some kinds of existing optimizations.
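    As a toy illustration of the equivalence checking that Semantic Outlining depends on (not the thesis's actual tooling), the sketch below uses the Z3 theorem prover to show that two structurally different fragments compute the same function over 32-bit integers, which is the precondition for replacing both with calls to one outlined copy. It assumes the z3-solver package is installed.

        from z3 import BitVec, prove

        x = BitVec("x", 32)

        # Fragment A multiplies by 8 with a shift; fragment B uses repeated addition.
        # Structurally different, semantically identical over 32-bit arithmetic.
        frag_a = x << 3
        frag_b = x + x + x + x + x + x + x + x

        # If the prover finds no counterexample, both fragments can be outlined
        # into a single shared function, shrinking total code size.
        prove(frag_a == frag_b)   # prints: proved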
    • 

    corecore