
    Analysis of X.500 Distributed Directory Refresh Strategies

    Distributed database directory refresh strategies, commonly recommended for the X.500 standard, are defined and analytically modeled for variations on push/pull and total/differential refresh under idealized asynchronous control conditions. The models are implemented in a HyperCard-based tool called DirMod (for "directory model"). Experimental results show important elapsed-time performance tradeoffs among the different strategies, and live test data contribute to the verification of the models.
    http://deepblue.lib.umich.edu/bitstream/2027.42/107872/1/citi-tr-90-6.pd
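    As a rough illustration of the total/differential tradeoff mentioned above (not the DirMod analytical model from the report), the Python sketch below computes elapsed refresh time under a toy cost model in which transfer cost is proportional to the number of entries sent. All parameter names and cost values are assumptions made for this example.

        # Illustrative sketch only: a toy elapsed-time model for directory refresh,
        # not the analytical model implemented in DirMod. The cost formula and all
        # parameters below are assumptions made for illustration.

        def refresh_time(entries, changed_fraction, entry_cost, setup_cost, differential):
            """Elapsed time to refresh one replica.

            A total refresh transfers every entry; a differential refresh transfers
            only the entries changed since the last refresh, at the price of an
            extra setup cost for change detection.
            """
            if differential:
                return setup_cost + changed_fraction * entries * entry_cost
            return setup_cost + entries * entry_cost


        if __name__ == "__main__":
            # Compare strategies for a 100,000-entry directory where 2% of entries changed.
            for diff in (False, True):
                t = refresh_time(entries=100_000, changed_fraction=0.02,
                                 entry_cost=0.001, setup_cost=5.0, differential=diff)
                print("differential" if diff else "total", f"{t:.1f} s")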

    Security features using a distributed file system

    Master's dissertation in Information Security, presented to the Universidade de Lisboa through the Faculdade de Ciências, 2011. Sensitive information, such as firewall logs or data from intrusion detection systems, has to be stored for long periods of time for legal reasons or for later forensic analysis. With the growth of the sources generating this type of data within a company, it is imperative to find a solution that meets the requirements of scalability, security, availability, performance and low maintenance while keeping costs under control. Following this need, this project analyzes several distributed file systems in order to find a solution that meets both the performance and security requirements of an internal application of Portugal Telecom. To validate the solution, the project includes the design of a prototype that simulates the execution environment of that application.

    Directory-Enabled Networking Design Reference


    Exploring heterogeneity of unreliable machines for p2p backup

    A p2p architecture is a viable option for enterprise backup. In contrast to dedicated backup servers, nowadays the standard solution, making backups directly on an organization's workstations should be cheaper (as existing hardware is used), more efficient (as there is no single bottleneck server) and more reliable (as the machines are geographically dispersed). We present the architecture of a p2p backup system that uses pairwise replication contracts between a data owner and a replicator. In contrast to standard p2p storage systems that use a DHT directly, the contracts allow our system to optimize replica placement according to a specific optimization strategy, and thus to take advantage of the heterogeneity of the machines and the network. Such optimization is particularly appealing in the context of backup: replicas can be geographically dispersed, the load sent over the network can be minimized, or the optimization goal can be to minimize the backup/restore time. However, managing the contracts, keeping them consistent and adjusting them in response to a dynamically changing environment is challenging. We built a scientific prototype and ran experiments on 150 workstations in the university's computer laboratories and, separately, on 50 PlanetLab nodes. We found that the main factor affecting the quality of the system is the availability of the machines. Yet, our main conclusion is that it is possible to build an efficient and reliable backup system on highly unreliable machines (our computers had just 13% average availability).
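    As a rough illustration of the pairwise-contract idea (not the paper's prototype), the Python sketch below represents a contract as a simple record and places replicas greedily, preferring highly available machines in distinct locations. All class names, fields and numbers are assumptions made for this example.

        # Illustrative sketch, not the paper's prototype: a pairwise replication
        # "contract" record and a greedy replica-placement strategy that prefers
        # highly available machines in distinct locations. All names and values
        # below are assumptions made for this example.
        from dataclasses import dataclass

        @dataclass
        class Machine:
            name: str
            availability: float   # observed fraction of time online, 0..1
            location: str         # e.g. a building or site identifier

        @dataclass
        class Contract:
            owner: str
            replicator: str        # machine that agrees to store the owner's data

        def place_replicas(owner, machines, replicas=3):
            """Greedy placement: pick the most available machine per location."""
            chosen, used_locations = [], set()
            for m in sorted(machines, key=lambda m: m.availability, reverse=True):
                if m.name == owner or m.location in used_locations:
                    continue
                chosen.append(Contract(owner=owner, replicator=m.name))
                used_locations.add(m.location)
                if len(chosen) == replicas:
                    break
            return chosen

        if __name__ == "__main__":
            lab = [Machine("pc-01", 0.13, "lab-a"), Machine("pc-14", 0.42, "lab-a"),
                   Machine("pc-22", 0.35, "lab-b"), Machine("pc-30", 0.60, "lab-c")]
            for c in place_replicas("pc-01", lab):
                print(c)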

    Building a flexible web caching system.

    Web caching is a technology that has been shown to improve traffic on the Internet. Finding out how to implement a Web caching architecture that assures improvements is not an easy task. The problem is more difficult when we are interested in deploying a distributed and cooperative Web caching system. We have found that some cooperative Web caching architectures can become unviable when the network environment changes. This situation suggests that a cooperative Web caching system could end up providing worse access to Web objects. In this paper, however, we present an architecture that combines the best of several Web caching configurations that we have previously analyzed. Our architecture gives basic ideas for implementing a cooperative Web caching system using groups of HTTP proxy servers, which can improve access to remote Web objects regardless of the changes that might occur in the network environment (changes that could produce modifications in Web object validation policies and/or types of caching communication).
    Peer Reviewed
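    As a rough illustration of cooperation among a group of HTTP proxy caches (not the architecture proposed in the paper), the Python sketch below resolves a request against the local cache first, then sibling proxies in the same group, and only then the origin server. All class and method names are assumptions made for this example.

        # Illustrative sketch of a cooperative cache lookup, not the paper's design:
        # check the local cache, then ask sibling proxies in the same group, and
        # only then fetch from the origin server.

        class CachingProxy:
            def __init__(self, name, siblings=None):
                self.name = name
                self.cache = {}                 # URL -> response body
                self.siblings = siblings or []  # other proxies in the same group

            def lookup_local(self, url):
                return self.cache.get(url)

            def fetch_from_origin(self, url):
                # Stand-in for a real HTTP request to the origin server.
                return f"<body of {url}>"

            def get(self, url):
                body = self.lookup_local(url)
                if body is not None:
                    return body                      # local hit
                for sibling in self.siblings:        # cooperative lookup
                    body = sibling.lookup_local(url)
                    if body is not None:
                        self.cache[url] = body       # remote hit, cache locally
                        return body
                body = self.fetch_from_origin(url)   # group miss
                self.cache[url] = body
                return body

        if __name__ == "__main__":
            a = CachingProxy("proxy-a")
            b = CachingProxy("proxy-b", siblings=[a])
            a.cache["http://example.org/"] = "<cached copy>"
            print(b.get("http://example.org/"))      # served by sibling proxy-a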

    Naming, Migration, and Replication for NFSv4

    In this paper, we discuss a global name space for NFSv4 and mechanisms for transparent migration and replication. By convention, any file or directory name beginning with /nfs on an NFS client is part of this shared global name space. Our system supports file system migration and replication through DNS resolution, provides directory migration and replication using built-in NFSv4 mechanisms, and supports read/write replication with precise consistency guarantees, a small performance penalty, and good scaling. We implement these features with small extensions to the published NFSv4 protocol, and demonstrate a practical way to enhance the network transparency and administrability of NFSv4 in wide area networks.
    http://deepblue.lib.umich.edu/bitstream/2027.42/107939/1/citi-tr-06-1.pd
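    As a rough illustration of how a client might map a /nfs global name to a server through DNS (one plausible reading of the abstract, not the published protocol extensions), the Python sketch below splits /nfs/<domain>/<path> into a domain and a remaining path and looks up an SRV record for the domain. The _nfs4._tcp record name and the dnspython dependency are assumptions made for this example.

        # Illustrative sketch only: one way a client could map a name under the
        # /nfs global name space to a file server via DNS. The SRV record name
        # (_nfs4._tcp) and the /nfs/<domain>/<path> layout are assumptions for
        # this example. Requires the third-party dnspython package.
        import dns.resolver

        def split_global_name(path):
            """Split '/nfs/citi.umich.edu/proj/x' into ('citi.umich.edu', '/proj/x')."""
            parts = path.split("/")
            if len(parts) < 3 or parts[1] != "nfs":
                raise ValueError("not a global /nfs name: " + path)
            return parts[2], "/" + "/".join(parts[3:])

        def locate_servers(domain):
            """Return candidate (host, port) pairs for the domain's NFS service."""
            answers = dns.resolver.resolve("_nfs4._tcp." + domain, "SRV")
            return [(r.target.to_text().rstrip("."), r.port) for r in answers]

        if __name__ == "__main__":
            domain, subpath = split_global_name("/nfs/citi.umich.edu/projects/nfsv4")
            print(domain, subpath)
            # locate_servers(domain) would then pick a replica to mount.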

    Optimizations for Energy-Aware, High-Performance and Reliable Distributed Storage Systems

    With the decreasing cost and widespread use of commodity hard drives, it has become possible to create very large-scale storage systems at lower expense. However, as we approach exabyte-scale storage systems, maintaining important features such as energy-efficiency, performance, reliability and usability becomes increasingly difficult. Despite the decreasing cost of storage systems, the energy consumption of these systems still needs to be addressed in order to retain cost-effectiveness. Any improvements in a storage system can be outweighed by high energy costs. On the other hand, large-scale storage systems can benefit from object storage features for improved performance and usability. One area of concern is the metadata performance bottleneck of applications reading large directories or creating a large number of files. Similarly, computation on big data, where data needs to be transferred between compute and storage clusters, adversely affects I/O performance. As storage systems become more complex and larger, transferring data between remote compute and storage tiers becomes impractical. Furthermore, storage systems typically implement reliability at the file system or client level, which might not always be practical in terms of performance. Lastly, object storage features are usually tailored to specific use cases, which makes it harder to use them in other contexts. In this thesis, we present several approaches to enhance the energy-efficiency, performance, reliability and usability of large-scale storage systems. To begin with, we improve the energy-efficiency of storage systems by moving I/O load to a subset of the storage nodes with energy-aware node allocation methods and turning off the unused nodes, while preserving load balance on demand. To address the metadata performance issue associated with large create workloads and directory reads, we represent directories with object storage collections and implement lazy creation of objects. Similarly, in-situ computation on large-scale data is enabled by using object storage features to integrate a computational framework with the existing object storage layer, eliminating the need to transfer data between compute and storage silos. We then present parity-based redundancy using object storage features to achieve reliability with less performance impact. Finally, unified storage brings together the object storage features to meet the needs of distinct use cases, such as cloud storage, big data or high-performance computing, to alleviate the unnecessary fragmentation of storage resources. We evaluate each proposed approach thoroughly and validate its effectiveness in terms of improving the energy-efficiency, performance, reliability and usability of a large-scale storage system.
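    As a rough illustration of the energy-aware node allocation idea (not the thesis implementation), the Python sketch below keeps just enough storage nodes active to serve the offered I/O load with some headroom and marks the rest as power-down candidates. The capacity model and the headroom factor are assumptions made for this example.

        # Illustrative sketch, not the thesis implementation: a greedy, energy-aware
        # allocation that keeps only as many storage nodes active as the offered
        # I/O load requires, and marks the rest as candidates for power-down.
        # The capacity model and headroom factor are assumptions for this example.

        def allocate_active_nodes(nodes, offered_load, headroom=1.25):
            """nodes: dict of node name -> I/O capacity (same units as offered_load).

            Returns (active, idle): nodes kept on to serve offered_load * headroom,
            and nodes that may be turned off. Larger nodes are preferred so that
            the fewest machines stay powered on.
            """
            target = offered_load * headroom
            active, capacity = [], 0.0
            for name, cap in sorted(nodes.items(), key=lambda kv: kv[1], reverse=True):
                if capacity >= target:
                    break
                active.append(name)
                capacity += cap
            idle = [n for n in nodes if n not in active]
            return active, idle

        if __name__ == "__main__":
            cluster = {"oss-1": 400.0, "oss-2": 400.0, "oss-3": 250.0, "oss-4": 250.0}
            active, idle = allocate_active_nodes(cluster, offered_load=500.0)
            print("active:", active, "power-down candidates:", idle)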