What determines cell size?
First paragraph (this article has no abstract): For well over 100 years, cell biologists have been wondering what determines the size of cells. In modern times, we know all of the molecules that control the cell cycle and cell division, but we still do not understand how cell size is determined. To check whether modern cell biology has made any inroads on this age-old question, BMC Biology asked several heavyweights in the field to tell us how they think cell size is controlled, drawing on a range of different cell types. The essays in this collection address two related questions: why does cell size matter, and how do cells control it?
PRIMS: Making NVRAM Suitable for Extremely Reliable Storage
Non-volatile byte-addressable memories are becoming more common, and are increasingly used for critical data that must not be lost. However, existing NVRAM-based file systems do not include features that guard against file system corruption or NVRAM corruption. Furthermore, most file systems check consistency only after the system has already crashed. We are designing PRIMS to address these problems by providing file storage that can survive multiple errors in NVRAM, whether caused by errant operating system writes or by media corruption. PRIMS uses an erasure-encoded log structure to store persistent metadata, making it possible to periodically verify the correctness of file system operations while achieving throughput an order of magnitude higher than page protection for small writes. It also checks integrity on every operation and performs on-line scans of the entire NVRAM to ensure that the file system is consistent. If errors are found, PRIMS can correct them using file system logs and extensive error correction information. While PRIMS is designed for reliability, we expect it to have excellent performance, thanks to the ability to do word-aligned reads and writes in NVRAM.
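As a rough illustration of the erasure-encoded log idea, the following sketch appends fixed-size records with per-record checksums and XOR parity groups, and repairs a single corrupt record per group during a periodic scrub. The record format, group size, and simple XOR parity scheme are assumptions for illustration, not the PRIMS design.

```python
# Minimal sketch of an erasure-encoded metadata log (illustrative only).
import zlib

RECORD_SIZE = 64   # fixed-size log records (assumed)
GROUP_SIZE = 4     # records per parity group (assumed)

def _xor(blocks):
    out = bytearray(RECORD_SIZE)
    for b in blocks:
        for i, byte in enumerate(b):
            out[i] ^= byte
    return bytes(out)

class ErasureLog:
    def __init__(self):
        self.records = []   # (record, crc) pairs; stands in for NVRAM
        self.parity = []    # one XOR parity block per full group

    def append(self, payload: bytes):
        rec = payload.ljust(RECORD_SIZE, b"\0")[:RECORD_SIZE]
        # a checksum per record makes corruption detectable later
        self.records.append((rec, zlib.crc32(rec)))
        if len(self.records) % GROUP_SIZE == 0:
            group = [r for r, _ in self.records[-GROUP_SIZE:]]
            # XOR parity can rebuild any single lost record in the group
            self.parity.append(_xor(group))

    def scrub(self):
        """Periodic on-line scan: verify checksums, repairing a single
        corrupt record per group from parity."""
        for gi, parity in enumerate(self.parity):
            base = gi * GROUP_SIZE
            group = self.records[base:base + GROUP_SIZE]
            bad = [i for i, (r, crc) in enumerate(group)
                   if zlib.crc32(r) != crc]
            if len(bad) == 1:
                survivors = [r for i, (r, _) in enumerate(group)
                             if i != bad[0]]
                fixed = _xor(survivors + [parity])
                self.records[base + bad[0]] = (fixed, zlib.crc32(fixed))
```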
POTSHARDS: secure long-term storage without encryption
Users are storing ever-increasing amounts of information digitally, driven by many factors including government regulations and the public’s desire to digitally record their personal histories. Unfortunately, many of the security mechanisms that modern systems rely upon, such as encryption, are poorly suited for storing data for indefinitely long periods of time: it is very difficult to manage keys and update cryptosystems to provide secrecy through encryption over periods of decades. Worse, an adversary who can compromise an archive need only wait for cryptanalysis techniques to catch up to the encryption algorithm used at the time of the compromise in order to obtain “secure” data. To address these concerns, we have developed POTSHARDS, an archival storage system that provides long-term security for data with very long lifetimes without using encryption. Secrecy is achieved by using provably secure secret splitting and spreading the resulting shares across separately-managed archives. Providing availability and data recovery in such a system can be difficult; thus, we use a new technique, approximate pointers, in conjunction with secure distributed RAID techniques to provide availability and reliability across independent archives. To validate our design, we developed a prototype POTSHARDS implementation, which has demonstrated “normal” storage and retrieval of user data using indexes, the recovery of user data using only the pieces a user has stored across the archives, and the reconstruction of an entire failed archive.
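The core primitive here, provably secure secret splitting, can be illustrated with a simple n-of-n XOR split: n-1 shares are uniformly random, so any subset short of all n reveals nothing about the data. POTSHARDS itself layers approximate pointers and distributed RAID on top of splitting; this sketch shows only the splitting primitive, not the system's actual scheme.

```python
# n-of-n XOR secret splitting: information-theoretically secure because
# every proper subset of shares is indistinguishable from random bytes.
import secrets

def split(data: bytes, n: int) -> list[bytes]:
    """Split data into n shares; all n are required to reconstruct."""
    shares = [secrets.token_bytes(len(data)) for _ in range(n - 1)]
    last = data
    for s in shares:
        last = bytes(a ^ b for a, b in zip(last, s))
    return shares + [last]

def combine(shares: list[bytes]) -> bytes:
    out = shares[0]
    for s in shares[1:]:
        out = bytes(a ^ b for a, b in zip(out, s))
    return out

assert combine(split(b"archival record", 3)) == b"archival record"
```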
Reliability mechanisms for file systems using non-volatile memory as a metadata store
Portable systems such as cell phones and portable media players commonly use non-volatile RAM (NVRAM) to hold all of their data and metadata, and larger systems can store metadata in NVRAM to increase file system performance by reducing synchronization and transfer overhead between disk and memory data structures. Unfortunately, wayward writes from buggy software and random bit flips may result in an unreliable persistent store. We introduce two orthogonal and complementary approaches to reliably storing file system structures in NVRAM. First, we reinforce hardware and operating system memory consistency by employing page-level write protection and error correcting codes. Second, we perform on-line consistency checking of the file system structures by replaying logged file system transactions on copied data structures; a structure is consistent if the replayed copy matches its live counterpart. Our experiments show that the protection mechanisms can increase fault tolerance by six orders of magnitude while incurring an acceptable amount of overhead on writes to NVRAM. Since NVRAM is much faster and consumes far less power than disk-based storage, the added overhead of error checking leaves an NVRAM-based system both faster and more reliable than a disk-based system. Additionally, our techniques can be implemented on systems lacking hardware support for memory management, allowing them to be used on low-end and embedded systems without an MMU.
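The replay-based check described above can be sketched as follows: apply the logged transactions to a copy of an earlier snapshot and compare the result against the live structure. The dictionary structure and the (op, key, value) log format are illustrative assumptions, not the paper's actual metadata layout.

```python
# Consistency by replay: the live structure is consistent iff replaying
# the transaction log over a snapshot reproduces it exactly.
import copy

def check_consistency(live: dict, snapshot: dict, log: list) -> bool:
    replayed = copy.deepcopy(snapshot)
    for op, key, value in log:
        if op == "set":
            replayed[key] = value
        elif op == "del":
            replayed.pop(key, None)
    return replayed == live

# Example: a wayward write to the live structure is caught by replay.
snap = {"inode1": "dirblock"}
log = [("set", "inode2", "blockmap", )]
live = {"inode1": "dirblock", "inode2": "blockmap", "inode3": "stray"}
assert not check_consistency(live, snap, [(op, k, v) for op, k, v in log])
```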
Analysis and Construction of Galois Fields for Efficient Storage Reliability
Software-based Galois field implementations are used in the reliability and security components of many storage systems. Unfortunately, multiplication and division operations over Galois fields are expensive compared to addition. To accelerate multiplication and division, most software Galois field implementations use pre-computed look-up tables, accepting the memory overhead associated with optimizing these operations. However, the amount of available memory constrains the size of a Galois field and leads to inconsistent performance across architectures. This is especially problematic in environments with limited memory, such as sensor networks. In this paper, we first analyze existing table-based implementations and optimization techniques for GF(2^l) multiplication and division. Next, we propose the use of techniques that perform multiplication and division in an extension of GF(2^l), where the actual multiplications and divisions are performed in a smaller field and combined. This approach allows different applications to share Galois field multiplication tables, regardless of the field size, while drastically lowering memory consumption. We evaluated multiple such approaches in terms of basic operation performance and memory consumption. We then evaluated different approaches for their suitability in common Galois field applications. Our experiments showed that the relative performance of each approach varies with processor architecture, and that CPU, memory limitations, and field size must be considered when selecting an appropriate Galois field implementation. In particular, the use of extension fields is often faster and less memory-intensive than comparable approaches using standard algorithms for GF(2^l).
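For context, the table-based baseline the paper analyzes looks roughly like the following log/antilog sketch for GF(2^8): one pass builds the tables from a generator, after which each multiplication or division costs two lookups and an add. The field polynomial 0x11B and generator 3 are the common AES-field choices, used here only as an example; the paper's contribution is to replace large per-field tables with arithmetic in an extension of a smaller field, which is not shown here.

```python
# Log/antilog tables for GF(2^8) with polynomial 0x11B and generator 3.
EXP = [0] * 512
LOG = [0] * 256

x = 1
for i in range(255):
    EXP[i] = x
    LOG[x] = i
    # multiply x by the generator 3: x*2 (with reduction) XOR x
    x ^= (x << 1) ^ (0x11B if x & 0x80 else 0)
EXP[255:510] = EXP[0:255]   # doubled table avoids a mod-255 per multiply

def gf_mul(a: int, b: int) -> int:
    if a == 0 or b == 0:
        return 0
    return EXP[LOG[a] + LOG[b]]

def gf_div(a: int, b: int) -> int:
    if b == 0:
        raise ZeroDivisionError("division by zero in GF(2^8)")
    return 0 if a == 0 else EXP[LOG[a] - LOG[b] + 255]
```

Note that the tables above already cost 768 entries for a single 8-bit field; for GF(2^16) the same scheme needs 128 K-entry tables, which is the memory pressure that motivates the extension-field approach.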