27 research outputs found

    Incentive-driven QoS in peer-to-peer overlays

    Get PDF
    A well-known problem in peer-to-peer overlays is that no single entity has control over the software, hardware and configuration of peers. Each peer can therefore selfishly adapt its behaviour to maximise its benefit from the overlay. This thesis is concerned with the modelling and design of incentive mechanisms for QoS overlays: resource allocation protocols that provide strategic peers with participation incentives while at the same time optimising the performance of the peer-to-peer distribution overlay. The contributions of this thesis are as follows. First, we present PledgeRoute, a novel contribution accounting system that can be used, along with a set of reciprocity policies, as an incentive mechanism to encourage peers to contribute resources even when users are not actively consuming overlay services. This mechanism uses a decentralised credit network, is resilient to Sybil attacks, and allows peers to achieve time- and space-deferred contribution reciprocity. Then, we present a novel QoS-aware resource allocation model based on Vickrey auctions that uses PledgeRoute as a substrate. It acts as an incentive mechanism by providing efficient overlay construction while allocating increasing service quality to those peers that contribute more to the network. The model is then applied to lag-sensitive chunk swarming, and some of its properties are explored for different peer delay distributions. When considering QoS overlays deployed over the best-effort Internet, the quality received by a client cannot be attributed entirely to either its serving peer or the intervening network between them. By drawing parallels between this situation and well-known hidden-action situations in microeconomics, we propose a novel scheme to ensure adherence to advertised QoS levels. We then apply it to delay-sensitive chunk distribution overlays and present the optimal contract payments required, along with a method for QoS contract enforcement through reciprocative strategies. We also present a probabilistic model for application-layer delay as a function of the prevailing network conditions. Finally, we address the incentives of managed overlays and the prediction of their behaviour. We propose two novel models of multihoming managed overlay incentives in which overlays can freely allocate their traffic flows between different ISPs. One is obtained by optimising an overlay utility function with desired properties, while the other is designed for data-driven least-squares fitting of the cross elasticity of demand. This last model is then used to solve for ISP profit maximisation.
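
    The Vickrey (second-price, sealed-bid) mechanism underlying the allocation model can be illustrated with a minimal sketch. The Python fragment below is not from the thesis: the Bid structure, peer names and single-slot setting are illustrative assumptions, whereas the thesis's model auctions graded QoS levels over a PledgeRoute credit substrate.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Bid:
        peer_id: str
        amount: float  # contribution credits offered (e.g. as accounted by PledgeRoute)

    def vickrey_winner(bids: list[Bid]) -> Optional[tuple[str, float]]:
        """Return (winning peer, price paid): the highest bidder wins but
        pays only the second-highest bid."""
        if not bids:
            return None
        ranked = sorted(bids, key=lambda b: b.amount, reverse=True)
        price = ranked[1].amount if len(ranked) > 1 else 0.0
        return ranked[0].peer_id, price

    # Example: three peers bid credits for the next service slot.
    bids = [Bid("peer_a", 5.0), Bid("peer_b", 8.0), Bid("peer_c", 3.0)]
    print(vickrey_winner(bids))  # -> ('peer_b', 5.0)

    Because the winner pays the second-highest bid rather than its own, truthful bidding in contribution credits is a dominant strategy, which is what makes this auction family attractive as an incentive mechanism.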

    Building a scalable global data processing pipeline for large astronomical photometric datasets

    Get PDF
    Astronomical photometry is the science of measuring the flux of a celestial object. Since its introduction in the 1970s, the CCD has been the principal method of measuring flux to calculate the apparent magnitude of an object. Each CCD image taken must go through a process of cleaning and calibration prior to its use. As the number of research telescopes increases, the overall computing resources required for image processing also increase, and as data archives grow to Petabytes in size, image processing approaches must evolve to keep exceeding the data capture rate. Existing processing techniques are primarily sequential in nature, requiring increasingly powerful servers, faster disks and faster networks to process data. Existing High Performance Computing solutions involving high-capacity data centres are complex in design and expensive to maintain, while providing resources primarily to high-profile science projects. This research describes three distributed pipeline architectures: a virtualised cloud-based IRAF; the Astronomical Compute Node (ACN), a private cloud-based pipeline; and NIMBUS, a globally distributed system. The ACN pipeline processed data at a rate of 4 Terabytes per day, demonstrating data compression and upload to a central cloud storage service at a rate faster than data generation. The primary contribution of this research, however, is NIMBUS, which is rapidly scalable, resilient to failure and capable of processing CCD image data at a rate of hundreds of Terabytes per day. This pipeline is implemented using a decentralised web queue to control the compression of data, the uploading of data to distributed web servers, and the creation of web messages identifying the location of the data. Using distributed web queue messages, images are downloaded by computing resources distributed around the globe. Rigorous experimental evidence is presented verifying the horizontal scalability of the system, which has demonstrated a processing rate of 192 Terabytes per day with clear indications that higher processing rates are possible. Comment: PhD thesis, Dublin Institute of Technology
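
    As a rough illustration of the message flow described above (not the actual NIMBUS code), the Python sketch below uses an in-process queue.Queue to stand in for the decentralised web queue; the storage URL, message fields and function names are all hypothetical.

    import gzip
    import queue

    web_queue = queue.Queue()  # in-process stand-in for the decentralised web queue

    def publish_image(raw_ccd_bytes: bytes, image_id: str) -> None:
        """Producer side: compress a CCD frame, upload it, queue its location."""
        compressed = gzip.compress(raw_ccd_bytes)           # compress before upload
        url = f"https://example-store/{image_id}.fits.gz"   # hypothetical storage URL
        # (the upload of `compressed` to `url` is elided in this sketch)
        web_queue.put({"image_id": image_id, "url": url, "size": len(compressed)})

    def worker() -> None:
        """Consumer side: a distributed compute node pulls a message,
        downloads the named file and runs cleaning/calibration on it."""
        msg = web_queue.get()
        print(f"processing {msg['image_id']} from {msg['url']} ({msg['size']} bytes)")
        web_queue.task_done()

    publish_image(b"\x00" * 1024, "m31_frame_0001")
    worker()

    Decoupling producers and consumers through location-bearing messages is what lets such a system scale horizontally: any node, anywhere, can pull the next message and fetch the data it names.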

    Photomediations: A Reader

    Get PDF
    Photomediations: A Reader offers a radically different way of understanding photography. The concept of photomediations that unites the twenty scholarly and curatorial essays collected here cuts across the traditional classification of photography as suspended between art and social practice in order to capture the dynamism of the photographic medium today. It also explores photography’s kinship with other media – and with us, humans, as media. The term ‘photomediations’ brings together the hybrid ontology of ‘photomedia’ and the fluid dynamism of ‘mediation’. The framework of photomediations adopts a process- and time-based approach to images by tracing the technological, biological, cultural, social and political flows of data that produce photographic objects. Photomediations: A Reader is part of a larger editorial and curatorial project called Photomediations: An Open Book, whose goal is to redesign a coffee-table book as an online experience. A version of this Reader also exists online in an open ‘living’ format, which means it can be altered, added to, mashed-up, re-versioned and customized. The Reader is published in collaboration with Europeana Space, and in association with Jonathan Shaw, Ross Varney and Michael Wamposzyc.

    'The Cinematograph as an Agent of History'

    Get PDF
