
    Taxonomy of P2P Applications

    Get PDF
    Peer-to-peer (P2P) networks have gained immense popularity in recent years, and the number of services they provide continues to rise. Where P2P networks were formerly known mainly as file-sharing networks, P2P is now also used for services such as VoIP and IPTV. With so many different P2P applications and services, the need for a taxonomy framework grows. This paper describes the available P2P applications grouped by the services they provide, and proposes a taxonomy framework to classify both old and recent P2P applications based on their characteristics.
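
    The paper's actual framework is not reproduced in this listing; as a rough sketch of what a service-based classification could look like in code, the following Python fragment uses illustrative categories and characteristics (the names and fields here are assumptions, not the paper's taxonomy).

```python
from dataclasses import dataclass
from enum import Enum, auto

# Illustrative service categories; the paper's actual taxonomy may differ.
class P2PService(Enum):
    FILE_SHARING = auto()
    VOIP = auto()
    IPTV = auto()
    STORAGE = auto()

@dataclass
class P2PApplication:
    name: str
    service: P2PService
    structured_overlay: bool   # e.g. DHT-based vs. unstructured
    centralized_index: bool    # hybrid designs keep a central index

# Example classifications (characteristics simplified for illustration).
apps = [
    P2PApplication("BitTorrent", P2PService.FILE_SHARING, False, True),
    P2PApplication("Skype (classic)", P2PService.VOIP, False, True),
]

for app in apps:
    print(f"{app.name}: {app.service.name}")
```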

    Repository Replication Using NNTP and SMTP

    Full text link
    We present the results of a feasibility study using shared, existing, network-accessible infrastructure for repository replication. We investigate how dissemination of repository contents can be "piggybacked" on top of existing email and Usenet traffic. Long-term persistence of the replicated repository may be achieved thanks to current policies and procedures which ensure that mail messages and news posts remain retrievable for evidentiary and other legal purposes for many years after their creation date. While this approach does not address the preservation issues of migration and emulation, it does provide a simple method of refreshing content with unknown partners. Comment: This revised version has 24 figures and a more detailed discussion of the experiments conducted by us.
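
    As a sketch of the piggybacking idea, the fragment below wraps a repository record in an ordinary email using Python's standard library. The addresses, relay host, and subject convention are hypothetical, and the paper's actual message format and routing policy are not reproduced here.

```python
import smtplib
from email.message import EmailMessage

# A minimal sketch of "piggybacking" a repository record on ordinary
# email traffic; the replica address and relay are hypothetical.
def replicate_record(record_id: str, metadata: bytes, payload: bytes) -> None:
    msg = EmailMessage()
    msg["From"] = "repository@source.example"     # hypothetical sender
    msg["To"] = "archive@replica.example"         # hypothetical replica
    msg["Subject"] = f"REPO-REPLICATION {record_id}"
    msg.set_content(metadata.decode("utf-8", errors="replace"))
    # Attach the repository object itself; mail retention policies then
    # give the copy its long-term persistence.
    msg.add_attachment(payload, maintype="application",
                       subtype="octet-stream", filename=f"{record_id}.bin")
    with smtplib.SMTP("smtp.source.example") as smtp:  # hypothetical relay
        smtp.send_message(msg)
```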

    VXA: A Virtual Architecture for Durable Compressed Archives

    Full text link
    Data compression algorithms change frequently, and obsolete decoders do not always run on new hardware and operating systems, threatening the long-term usability of content archived using those algorithms. Re-encoding content into new formats is cumbersome, and highly undesirable when lossy compression is involved. Processor architectures, in contrast, have remained comparatively stable over recent decades. VXA, an archival storage system designed around this observation, archives executable decoders along with the encoded content it stores. VXA decoders run in a specialized virtual machine that implements an OS-independent execution environment based on the standard x86 architecture. The VXA virtual machine strictly limits access to host system services, making decoders safe to run even if an archive contains malicious code. VXA's adoption of a "native" processor architecture instead of type-safe language technology allows reuse of existing "hand-optimized" decoders in C and assembly language, and permits decoders access to performance-enhancing architecture features such as vector processing instructions. The performance cost of VXA's virtualization is typically less than 15% compared with the same decoders running natively. The storage cost of archived decoders, typically 30-130 KB each, can be amortized across many archived files sharing the same compression method. Comment: 14 pages, 7 figures, 2 tables.
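
    The following is a minimal sketch of the layout idea described above, pairing stored files with shared, archived decoders; the class and field names are illustrative, not VXA's actual on-disk format.

```python
import hashlib
from dataclasses import dataclass, field

# A sketch of the core idea: archived files reference a decoder that is
# itself stored in the archive, rather than assuming one exists on the
# reading system. Names and layout are illustrative, not VXA's format.
@dataclass
class VXAStyleArchive:
    decoders: dict = field(default_factory=dict)  # decoder_id -> x86 executable bytes
    entries: list = field(default_factory=list)   # (filename, decoder_id, encoded bytes)

    def add_decoder(self, executable: bytes) -> str:
        decoder_id = hashlib.sha256(executable).hexdigest()
        # Many files can share one decoder, amortizing its ~30-130 KB cost.
        self.decoders.setdefault(decoder_id, executable)
        return decoder_id

    def add_file(self, name: str, decoder_id: str, encoded: bytes) -> None:
        self.entries.append((name, decoder_id, encoded))
```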

    Requirements for migration of NSSD code systems from LTSS to NLTSS

    Get PDF
    The purpose of this document is to address the requirements necessary for a successful conversion of the Nuclear Design (ND) application code systems to the NLTSS environment. The ND application code system community can be characterized as large-scale scientific computation carried out on supercomputers. NLTSS is a distributed operating system being developed at LLNL to replace the LTSS system currently in use. The implications of the change are examined, including a description of the computational environment and users in ND. The discussion then turns to requirements, first in a general way and then in specific terms, including a proposal for managing the transition.

    Information Sharing in Large-Scale Distributed Systems

    Get PDF
    In distributed computing systems, information sharing is ensured through replication. Maintaining consistency between replicas runs into several problems, in particular the impossibility of consensus. These difficulties are circumvented by optimistic consistency, which lets replicas diverge and reconciles them a posteriori.
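
    As an illustration of the optimistic approach described above, the sketch below lets two replicas accept conflicting writes locally and then reconciles them a posteriori with a last-writer-wins rule. That rule is one common reconciliation choice, used here for brevity; it is not necessarily the one studied in this work.

```python
from dataclasses import dataclass

# A minimal sketch of optimistic replication: each replica accepts writes
# locally (so replicas diverge), and reconciliation happens afterwards.
@dataclass
class Replica:
    store: dict  # key -> (timestamp, value)

    def write(self, key, value, timestamp):
        self.store[key] = (timestamp, value)  # no coordination: optimistic

    def reconcile(self, other: "Replica") -> None:
        # A posteriori merge: keep the most recent write for each key.
        for key, (ts, val) in other.store.items():
            if key not in self.store or self.store[key][0] < ts:
                self.store[key] = (ts, val)

a, b = Replica({}), Replica({})
a.write("x", "from-a", 1)
b.write("x", "from-b", 2)   # concurrent, conflicting write
a.reconcile(b)
print(a.store["x"])          # (2, 'from-b')
```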

    Reclaiming the Narrative of a Generation: The Representation of Argentina’s Last Dictatorship Through Cinema

    Get PDF
    Over 453 films have been made about the last dictatorship in Argentina (1976-1983), known worldwide as the Dirty War. The period is characterized by the vile human rights abuses committed by the military junta against those who opposed the government, leading to the disappearance of 30,000 people, many of whom left children behind. These children were often forced to give up their childhood and grow up early because of their parents' militancy. In the national story of the dictatorship, these children's stories and experiences have often been forgotten. This thesis investigates the portrayal of the last Argentine dictatorship through cinema from the perspective of children who grew up during the dictatorship, often children of the disappeared. The films in question often focus on the recreation of identity, the disappeared parents, and the loss of childhood innocence. Through fiction and documentary film, these filmmakers use a self-reflexive process to recreate their identity and self-represent their own stories, rather than fitting into the narratives forced upon them.

    The Representation of the Last Dictatorship in Argentine Cinema

    Get PDF

    Analysis of Failure Correlation in Peer-to-Peer Storage Systems

    Get PDF
    In this paper, we propose and study analytical models of self-repairing peer-to-peer storage systems subject to failures. A failure corresponds to the simultaneous loss of multiple data blocks due to the definitive loss of a peer (or a disk crash). In the systems we consider, such failures happen continuously, hence the need for a self-repairing mechanism (data are written once and must be preserved indefinitely). We show that, whereas stochastic models of independent failures similar to those found in the literature give a correct approximation of the average behavior of real systems, they fail to capture their variations (e.g. in bandwidth needs). We propose to solve this problem with a new stochastic model based on a fluid approximation, and we characterize the behavior of the system under this model (expectation and standard deviation). The new model is validated by comparing its theoretical behavior with computer simulations.
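
    A toy Monte Carlo can illustrate the paper's core observation (the parameters below are arbitrary, and this is not the paper's fluid model): with the same expected loss rate, correlated peer-level failures produce much larger fluctuations in per-step repair load than independent block-level failures.

```python
import random
import statistics

PEERS = 50
BLOCKS_PER_PEER = 100
P = 0.01      # per-step failure probability (per peer, or per block)
STEPS = 500

random.seed(0)

# Independent model: every block fails on its own with probability P.
independent = [
    sum(random.random() < P for _ in range(PEERS * BLOCKS_PER_PEER))
    for _ in range(STEPS)
]

# Correlated model: a peer crash loses all of its blocks at once.
correlated = [
    sum(BLOCKS_PER_PEER for _ in range(PEERS) if random.random() < P)
    for _ in range(STEPS)
]

# Both models lose ~PEERS * BLOCKS_PER_PEER * P blocks per step on
# average, but the correlated model's standard deviation is far larger.
for name, losses in [("independent", independent), ("peer-correlated", correlated)]:
    print(f"{name}: mean={statistics.mean(losses):.1f} "
          f"stdev={statistics.stdev(losses):.1f}")
```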