
    Write-rationing garbage collection for hybrid memories

    Emerging Non-Volatile Memory (NVM) technologies offer high capacity and energy efficiency compared to DRAM, but suffer from limited write endurance and longer latencies. Prior work seeks the best of both technologies by combining DRAM and NVM in hybrid memories to attain low latency, high capacity, energy efficiency, and durability. Coarse-grained hardware and OS optimizations then spread writes out (wear-leveling) and place highly mutated pages in DRAM to extend NVM lifetimes. Unfortunately, even with these coarse-grained methods, popular Java applications exhibit impractical NVM lifetimes of 4 years or less. This paper shows how to make hybrid memories practical, without changing the programming model, by enhancing garbage collection in managed language runtimes. We find that object write behaviors offer two opportunities: (1) 70% of writes occur to newly allocated objects, and (2) 2% of objects capture 81% of writes to mature objects. We introduce write-rationing garbage collectors that exploit these fine-grained behaviors. They extend NVM lifetimes by placing highly mutated objects in DRAM and read-mostly objects in NVM. We implement two such systems. (1) Kingsguard-nursery places new allocations in DRAM and survivors in NVM, reducing NVM writes by 5x versus NVM-only with wear-leveling. (2) Kingsguard-writers (KG-W) places nursery objects in DRAM and survivors in a DRAM observer space. It monitors all mature-object writes and moves unwritten mature objects from DRAM to NVM. Because most mature objects are unwritten, KG-W exploits NVM capacity while increasing NVM lifetimes by 11x. It reduces the energy-delay product by 32% over DRAM-only and 29% over NVM-only. This work opens up new avenues for making hybrid memories practical.
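
    As a rough illustration of the observer-space idea behind KG-W, the sketch below models the placement policy in plain Java. All names here (the space labels, the `writesSinceLastGC` counter, the collection routines) are hypothetical simplifications: the real collectors run inside a managed runtime and observe writes through GC write barriers, not application-level counters.

```java
import java.util.List;

// Toy model of write-rationing placement (hypothetical names throughout).
enum Space { DRAM_NURSERY, DRAM_OBSERVER, NVM_MATURE }

class Obj {
    Space space = Space.DRAM_NURSERY;
    int writesSinceLastGC = 0;

    void write() { writesSinceLastGC++; }  // stands in for a write barrier
}

public class KingsguardWritersSketch {
    // Nursery collection: survivors move to the DRAM observer space,
    // where their mature-object writes are monitored.
    static void nurseryCollect(List<Obj> survivors) {
        for (Obj o : survivors) {
            o.space = Space.DRAM_OBSERVER;
            o.writesSinceLastGC = 0;
        }
    }

    // Mature collection: objects never written while under observation are
    // presumed read-mostly and migrate to NVM; written ones stay in DRAM.
    static void matureCollect(List<Obj> observed) {
        for (Obj o : observed) {
            if (o.space == Space.DRAM_OBSERVER && o.writesSinceLastGC == 0) {
                o.space = Space.NVM_MATURE;
            }
            o.writesSinceLastGC = 0;
        }
    }

    public static void main(String[] args) {
        Obj hot = new Obj(), cold = new Obj();
        nurseryCollect(List.of(hot, cold));
        hot.write();                        // only `hot` is mutated
        matureCollect(List.of(hot, cold));
        System.out.println("hot:  " + hot.space);   // DRAM_OBSERVER
        System.out.println("cold: " + cold.space);  // NVM_MATURE
    }
}
```

    The point of the sketch is the asymmetry the paper exploits: since most mature objects are never written, the default migration path leads to NVM, and DRAM is rationed to the small minority of objects that the observer space catches writing.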

    Crystal gazer: profile-driven write-rationing garbage collection for hybrid memories

    Non-volatile memories (NVM) offer greater capacity than DRAM but suffer from high latency and low write endurance. Hybrid memories combine DRAM and NVM to form scalable memory systems with the promise of high capacity, low energy consumption, and high endurance. Automatically managing hybrid NVM-DRAM memories to achieve their promise without changing user applications or their programming models remains an open question. This paper uses garbage collection in managed languages to exploit NVM capacity while preventing NVM wear-out in hybrid memories, with no changes to the programming model. We introduce profile-driven write-rationing garbage collection. Allocation sites that produce frequently written objects are predicted based on previous program executions. Objects are initially allocated in a DRAM nursery space. The collector copies surviving nursery objects from highly written sites to a mature DRAM space and read-mostly objects to a mature NVM space. Write-intensity prediction for 15 Java benchmarks accurately places objects in the correct space, eliminating the expensive object monitoring required by prior write-rationing garbage collectors. Furthermore, our technique exposes a Pareto tradeoff between DRAM usage and NVM lifetime, unlike prior work. Experimental results on NUMA hardware that emulates hybrid NVM-DRAM memory demonstrate that profile-driven write-rationing garbage collection reduces the number of writes to NVM compared to prior work to extend its lifetime, maximizes the use of NVM for its capacity, and achieves good performance.
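
    To make the profile-driven placement concrete, here is a minimal sketch assuming a flat profile keyed by allocation-site identifiers. The site names, threshold, and space labels are invented for illustration; they are not the paper's implementation, which operates inside the runtime's collector.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: allocation sites whose objects were frequently
// written in a prior profiling run send their nursery survivors to mature
// DRAM; survivors from all other sites go to mature NVM.
public class CrystalGazerSketch {
    private final Map<String, Long> writesPerSite;  // from a profiling run
    private final long hotThreshold;

    CrystalGazerSketch(Map<String, Long> profile, long hotThreshold) {
        this.writesPerSite = profile;
        this.hotThreshold = hotThreshold;
    }

    /** Decide the mature space for a nursery survivor allocated at `site`. */
    String matureSpaceFor(String site) {
        long writes = writesPerSite.getOrDefault(site, 0L);
        return writes >= hotThreshold ? "MATURE_DRAM" : "MATURE_NVM";
    }

    public static void main(String[] args) {
        Map<String, Long> profile = new HashMap<>();
        profile.put("Cache.put:42", 9_000L);   // hypothetical hot site
        profile.put("Parser.node:17", 3L);     // hypothetical cold site
        CrystalGazerSketch gc = new CrystalGazerSketch(profile, 100);
        System.out.println(gc.matureSpaceFor("Cache.put:42"));   // MATURE_DRAM
        System.out.println(gc.matureSpaceFor("Parser.node:17")); // MATURE_NVM
    }
}
```

    In this framing, the threshold is the knob behind the Pareto tradeoff the abstract mentions: raising it routes more survivors to NVM (saving DRAM at the cost of NVM writes), and lowering it does the opposite.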

    Persistent object stores

    The design and development of a type-secure persistent object store is presented as part of an architecture to support experiments in concurrency, transactions, and distribution. The persistence abstraction hides the physical properties of data from the programs that manipulate it. Consequently, a persistent object store is required to be of unbounded size, infinitely fast, and totally reliable. A range of architectural mechanisms that can be used to simulate these three features is presented. Based on a suitable selection of these mechanisms, two persistent object stores are presented. The first store is designed for use with the programming language PS-algol. Its design is then evolved to yield a more flexible layered architecture, which provides each distinct architectural mechanism as a separate layer conforming to a specified interface. The motivation for this design is two-fold. Firstly, the particular choice of layers greatly simplifies the resulting implementation; secondly, the layered design can support experimental architecture implementations. Since each layer conforms to a specified interface, it is possible to experiment with the implementation of an individual layer without affecting the implementation of the remaining layers. The layered architecture is thus a convenient vehicle for experimenting with the implementation of persistent object stores. An implementation of the layered architecture is presented, together with an example of how it may be used to support a distributed system. Finally, the architecture's ability to support a variety of storage configurations is presented.
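
    The layering idea can be sketched as a stack of narrow interfaces. The three layers and method names below are illustrative only and do not reproduce the PS-algol store's actual layer boundaries; they show why a fixed per-layer contract lets one implementation be swapped for another without disturbing the rest of the stack.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative layer contracts: each layer sees only the interface of the
// layer beneath it, so an individual layer can be replaced (e.g. for an
// architectural experiment) without touching the others.
interface BlockLayer {                 // lowest layer: raw stable storage
    byte[] readBlock(long blockNo);
    void writeBlock(long blockNo, byte[] data);
}

interface StableStoreLayer {           // adds atomic, all-or-nothing updates
    void begin();
    void commit();
    BlockLayer blocks();
}

interface ObjectLayer {                // top layer: typed persistent objects
    long persist(byte[] objectBytes);  // returns a persistent identifier
    byte[] fetch(long pid);
}

// A trivial in-memory BlockLayer, usable as a stand-in during experiments.
class MemoryBlockLayer implements BlockLayer {
    private final Map<Long, byte[]> blocks = new HashMap<>();
    public byte[] readBlock(long blockNo) { return blocks.get(blockNo); }
    public void writeBlock(long blockNo, byte[] data) { blocks.put(blockNo, data); }
}

public class LayeredStoreSketch {
    public static void main(String[] args) {
        BlockLayer layer = new MemoryBlockLayer();  // swap implementations freely
        layer.writeBlock(0, new byte[] {1, 2, 3});
        System.out.println(layer.readBlock(0).length);  // 3
    }
}
```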

    “They Collected What Was Left of the Scraps”: Food Surplus as an Opportunity and Its Legal Incentives

    For many years the problem of food security was addressed only in relation to developing countries, because people in developed nations had a relatively abundant supply of food. This is no longer true, both because of the economic crisis and because of increasing global demand for food. Food surplus in the food chain, both at the production level and in household consumption, could therefore become a resource. In this respect, legal rules (e.g., the Good Samaritan Act in the United States) may give economic agents incentives to recover food surplus. This paper examines, in a comparative way, the legal remedies provided by the United States and the European Union to address food surplus. Suggestions for further improving both systems are also provided.

    Reclaiming Waste, Remaking Communities: Persistence and Change in Delhi's Informal Garbage Economy

    Reclaiming Waste, Remaking Communities: Persistence and Change in Delhi's Informal Garbage Economy examines the unanticipated impact of expanded municipal garbage collection services in Delhi, India in the mid-2000s through public-private partnerships (PPP) that included collection trucks and incinerators. Drawing on twenty months of ethnographic research, I ask how it is that informal collectors, who rely on pedal-powered tricycle carts and their hands to extract recyclables, have survived the expansion of these formal services that threatened their livelihoods and the city's only system for recycling. Despite being heavily supported by the government, these PPP services were effectively stalled and transformed by the resilience of the collector-recyclers’ unofficial enterprise, ensuring the continuation of a recycling network. The manuscript addresses the following questions: What do economic relations look like in this context, and what kinds of moral economies configure them? How are social relations and status distinctions reproduced and transformed through transactions of garbage and money? And how does the legacy, experience, and threat of stigmatization—embodied in the idea and object of garbage and ranging in scale from individual practice to global reputation maintenance—shape transactional possibilities? Revealing how forms of economic life across multiple scales depend on caste/community relations, the navigation of caste and (post)colonial stigma, and the reproduction of status through transactions, the dissertation brings together literatures from economic sociology and anthropology, political ecology, and theories of caste/race in order to explain persistent forms of unofficial economic organization. PhD dissertation, Sociology, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/162989/1/danakorn_1.pd

    Analysis of storage performance, in memory and on input/output devices, from an execution trace

    Data storage is an essential resource for the computer industry. Storage devices must be fast and reliable to meet the growing demands of the data-driven economy. Storage technologies can be classified into two main categories: mass storage and main memory storage. Mass storage holds large amounts of data persistently. Data is saved locally on input/output devices, such as Hard Disk Drives (HDD) and Solid-State Drives (SSD), or remotely on distributed storage systems. Main memory storage temporarily holds the data needed by running programs. Main memory is characterized by its high access speed, essential to quickly supply data to the Central Processing Unit (CPU). Operating systems use several mechanisms to manage storage devices, such as disk schedulers and memory allocators. The processing time of a storage request is affected by the interaction of several subsystems, which complicates debugging. Existing tools, such as benchmarking tools, give a general idea of overall system performance but do not accurately identify the causes of poor performance. Dynamic analysis through execution tracing is a solution for the detailed runtime analysis of storage systems. Tracing collects precise data about the internal behavior of the system, which helps in detecting performance problems that are otherwise difficult to identify. The goal of this thesis is to provide a tool to analyze storage performance, in memory and on input/output devices, based on low-level trace events. The main challenges addressed by this tool are: collecting the required data using kernel and userspace tracing, limiting the overhead of tracing and the size of the generated traces, synchronizing the traces collected from different sources, providing multi-level analyses covering several aspects of storage performance, and proposing abstractions that allow users to easily understand the traces. We carefully designed and inserted the instrumentation needed for the analyses. The tracepoints provide full visibility into the system and track the lifecycle of storage requests, from creation to completion. The Linux Trace Toolkit Next Generation (LTTng), a free and low-overhead tracer, is used for data collection. This tracer is characterized by its stability and efficiency with highly parallel applications, thanks to the lock-free synchronization mechanisms used to update the contents of the trace buffers. We also contributed a patch that allows LTTng to capture the call stacks of userspace events.
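
    As a taste of the kind of analysis such a tool performs, the sketch below pairs request-issue and request-complete events to compute per-request block I/O latency. The event names mirror the Linux block-layer tracepoints (block_rq_issue / block_rq_complete), but the flat in-memory event list is a deliberate simplification of a real LTTng (CTF) trace, and the request-id field is an assumed stand-in for the fields a real trace would carry.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch: match each request's issue event to its completion event
// and report the elapsed time. Real traces arrive as timestamped streams;
// here a small hard-coded array stands in for the trace.
public class IoLatencyAnalysis {
    record Event(long timestampNs, String name, long requestId) {}

    public static void main(String[] args) {
        Event[] trace = {
            new Event(1_000, "block_rq_issue",    7),
            new Event(1_200, "block_rq_issue",    8),
            new Event(5_400, "block_rq_complete", 7),
            new Event(9_100, "block_rq_complete", 8),
        };
        Map<Long, Long> issued = new HashMap<>();  // requestId -> issue time
        for (Event e : trace) {
            if (e.name().equals("block_rq_issue")) {
                issued.put(e.requestId(), e.timestampNs());
            } else if (e.name().equals("block_rq_complete")) {
                Long start = issued.remove(e.requestId());
                if (start != null) {  // skip completions with no matching issue
                    System.out.printf("request %d latency: %d ns%n",
                                      e.requestId(), e.timestampNs() - start);
                }
            }
        }
    }
}
```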

    Design of a network filing system


    Environmental Crisis and the Paradox of Organizing

    Public organizations, including those involved in contingency planning, have tremendous influence over the ultimate scale and scope of an environmental crisis. Yet our understanding of how organizational behavior can either rein in or exacerbate crises continues to lag behind advances in technology. This Article considers the role of public organizations in the blowout of the Macondo well in the Gulf of Mexico. Its theoretical lens is the “paradox of organizing,” a frame that I suggest should be applied to interorganizational responses to low-probability, high-consequence events. The struggle to differentiate tasks and subunits and then piece them together during moments of great uncertainty can challenge and strain contingency planning, such as what is envisioned by the National Contingency Plan. Through the paradox of organizing, the organizational roots of a crisis, such as the accidental release of oil or hazardous substances, are recreated and amplified during an interorganizational response to that crisis. I discuss several dynamics that were reproduced by the response system awakened by the Deepwater Horizon oil spill. They included risk amplification and system degradation due to the structure of the response, through processes including “anarchy,” “drift,” and “fire fighting.” They also involved the task of making sense of information within the response effort, a process that erases detail, limits the use of data to detect anomalies, and encourages responders to develop their own plausible rationales for equivocal data so that they can resume interrupted tasks. These dynamics go beyond the narratives that dominate standard regulatory accounts of accidents. They point to how multiagency response can intensify the paradox of organizing.
