
    Energy Saving Techniques for Phase Change Memory (PCM)

    In recent years, the energy consumption of computing systems has increased, and a large fraction of this energy is consumed in main memory. To address this, researchers have proposed the use of non-volatile memory, such as phase change memory (PCM), which has low read latency and power and nearly zero leakage power. However, the write latency and power of PCM are very high, and these, along with PCM's limited write endurance, present significant challenges to its widespread adoption. Several architecture-level techniques have been proposed to address these challenges. In this report, we review several techniques for managing the power consumption of PCM and classify them based on their characteristics to provide insight into them. The aim of this work is to encourage researchers to propose even better techniques for improving the energy efficiency of PCM-based main memory.
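
    As one illustration of the kind of architecture-level technique surveyed here, the sketch below models a data-comparison (differential) write, a common write-reduction idea: read the stored line first and rewrite only the bits that actually changed. The energy figures and function names are illustrative assumptions, not values from the report.

# Hypothetical sketch of a data-comparison ("differential") write for PCM.
# Only cells whose value changes are rewritten; unchanged cells are skipped,
# trading one extra read for a large reduction in expensive PCM writes.

READ_ENERGY_PJ = 2.0    # assumed per-bit read energy (illustrative number)
WRITE_ENERGY_PJ = 16.0  # assumed per-bit write energy (illustrative number)

def differential_write(old_line: list[int], new_line: list[int]) -> float:
    """Write new_line over old_line bit by bit; return estimated energy in pJ."""
    assert len(old_line) == len(new_line)
    energy = READ_ENERGY_PJ * len(old_line)          # read current contents first
    changed = sum(1 for o, n in zip(old_line, new_line) if o != n)
    energy += WRITE_ENERGY_PJ * changed              # write only the flipped bits
    return energy

def naive_write(new_line: list[int]) -> float:
    """Baseline: rewrite every bit regardless of its current value."""
    return WRITE_ENERGY_PJ * len(new_line)

if __name__ == "__main__":
    old = [0, 1, 1, 0, 0, 1, 0, 1]
    new = [0, 1, 0, 0, 0, 1, 1, 1]   # only two bits differ
    print(f"naive:        {naive_write(new):.1f} pJ")
    print(f"differential: {differential_write(old, new):.1f} pJ")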

    HEC: Collaborative Research: SAM^2 Toolkit: Scalable and Adaptive Metadata Management for High-End Computing

    The increasing demand for exabyte-scale storage capacity by high-end computing applications requires a higher level of scalability and dependability than that provided by current file and storage systems. The proposal deals with file systems research for metadata management of scalable cluster-based parallel and distributed file storage systems in the HEC environment. It aims to develop a scalable and adaptive metadata management (SAM2) toolkit to extend the features of, and fully leverage the peak performance promised by, state-of-the-art cluster-based parallel and distributed file storage systems used by the high-performance computing community. There is a large body of research on scaling data movement and management; however, the need to scale the handling of the attributes of cluster-based file systems and I/O, that is, metadata, has been underestimated. An understanding of the characteristics of metadata traffic, and the application of proper load-balancing, caching, prefetching and grouping mechanisms to manage metadata accordingly, will lead to high scalability. It is anticipated that by appropriately plugging the scalable and adaptive metadata management components into state-of-the-art cluster-based parallel and distributed file storage systems, one could potentially increase the performance of applications and file systems, and help translate the promise of high peak performance of such systems into real application performance improvements. The project involves the following components: 1. Develop multi-variable forecasting models to analyze and predict file metadata access patterns. 2. Develop scalable and adaptive file name mapping schemes using the duplicative Bloom filter array technique to enforce load balance and increase scalability. 3. Develop decentralized, locality-aware metadata grouping schemes to facilitate bulk metadata operations such as prefetching. 4. Develop an adaptive cache coherence protocol using a distributed shared object model for client-side and server-side metadata caching. 5. Prototype the SAM2 components in the state-of-the-art parallel virtual file system PVFS2 and a distributed storage data caching system, set up an experimental framework for a DOE CMS Tier 2 site at the University of Nebraska-Lincoln, and conduct benchmark, evaluation and validation studies.
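
    A minimal sketch of the idea behind the duplicative Bloom filter array for file-name mapping (component 2 above); the hashing scheme, filter size, and three-server setup are assumptions for illustration rather than details from the proposal. Each metadata server advertises a Bloom filter of the names it owns, and a client tests a name against every filter to narrow down which server to contact instead of broadcasting the lookup.

# Hedged sketch: per-server Bloom filters for distributed metadata lookup.
# Each metadata server advertises a Bloom filter of the file names it owns;
# a client tests a name against every filter and contacts only the servers
# that report a (possibly false-positive) hit.

import hashlib

class BloomFilter:
    def __init__(self, size_bits: int = 4096, num_hashes: int = 3):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, key: str):
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, key: str) -> None:
        for pos in self._positions(key):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, key: str) -> bool:
        return all(self.bits[pos // 8] & (1 << (pos % 8)) for pos in self._positions(key))

# One filter per metadata server (three servers assumed for illustration).
servers = {f"mds{i}": BloomFilter() for i in range(3)}
servers["mds0"].add("/home/alice/results.dat")
servers["mds1"].add("/home/bob/thesis.tex")

def candidate_servers(path: str) -> list[str]:
    """Return the servers whose filters claim to hold the path's metadata."""
    return [name for name, bf in servers.items() if bf.might_contain(path)]

print(candidate_servers("/home/bob/thesis.tex"))   # likely ['mds1']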

    Deterministic Object Management in Large Distributed Systems

    Caching is a widely used technique to improve the scalability of distributed systems. A central issue with caching is keeping object replicas consistent with their master copies. Large distributed systems, such as the Web, typically deploy heuristic-based consistency mechanisms, which increase delay and place extra load on the servers while not guaranteeing that cached copies served to clients are up to date. Server-driven invalidation has been proposed as an approach to strong cache consistency, but it requires servers to keep track of which objects are cached by which clients. We propose an alternative approach to strong cache consistency, called MONARCH, which does not require servers to maintain per-client state. Our approach builds on a few key observations. Large and popular sites, which attract the majority of the traffic, construct their pages from distinct components with various characteristics. Components may have different content types, change characteristics, and semantics. These components are merged to produce a monolithic page, and the information about their uniqueness is lost. In our view, pages should serve as containers holding distinct objects with heterogeneous type and change characteristics while preserving the boundaries between these objects. Servers compile object characteristics and information about relationships between containers and embedded objects into explicit object management commands. Servers piggyback these commands onto existing request/response traffic so that client caches can use them to make object management decisions. The use of explicit content control commands is a deterministic, rather than heuristic, object management mechanism that gives content providers more control over their content. The deterministic object management with strong cache consistency offered by MONARCH allows content providers to make more of their content cacheable. Furthermore, MONARCH enables content providers to expose the internal structure of their pages to clients. We evaluated MONARCH using simulations with content collected from real Web sites. The results show that MONARCH provides strong cache consistency for all objects, even unpredictably changing ones, and incurs lower byte and message overhead than heuristic policies. The results also show that as the request arrival rate or the number of clients increases, the amount of server state maintained by MONARCH remains the same, while the amount of server state incurred by server invalidation mechanisms grows.
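
    The sketch below conveys only the general idea of piggybacking per-object management commands onto existing response traffic; the header name, command vocabulary, and data structures are invented for illustration and are not MONARCH's actual encoding.

# Hedged sketch of piggybacking object-management commands on a response.
# The header name and command vocabulary here are invented for illustration;
# MONARCH's actual commands and encoding are defined in the dissertation.

from dataclasses import dataclass, field

@dataclass
class Response:
    body: bytes
    headers: dict = field(default_factory=dict)

def attach_commands(resp: Response, commands: dict[str, str]) -> Response:
    """Server side: piggyback per-object commands (object-id -> action) onto the response."""
    encoded = ", ".join(f"{obj}={action}" for obj, action in commands.items())
    resp.headers["X-Object-Control"] = encoded   # hypothetical header name
    return resp

def apply_commands(cache: dict[str, bytes], resp: Response) -> None:
    """Client side: parse the piggybacked commands and update the cache."""
    for item in resp.headers.get("X-Object-Control", "").split(","):
        if not item.strip():
            continue
        obj, action = item.strip().split("=")
        if action == "invalidate":
            cache.pop(obj, None)          # drop a stale embedded object
        elif action == "pin":
            pass                          # keep the object; it changes rarely

# Usage: the container page's response invalidates one embedded object.
cache = {"logo.png": b"...", "stock-ticker": b"..."}
resp = attach_commands(Response(body=b"<html>...</html>"),
                       {"stock-ticker": "invalidate", "logo.png": "pin"})
apply_commands(cache, resp)
print(sorted(cache))   # ['logo.png']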

    Exploiting Set-Level Non-Uniformity of Capacity Demand to Enhance CMP Cooperative Caching

    As the Memory Wall remains a bottleneck for Chip Multiprocessors (CMPs), the effective management of CMP last-level caches becomes of paramount importance in minimizing expensive off-chip memory accesses. For CMPs with private last-level caches, Cooperative Caching (CC) has been proposed to enable capacity sharing among private caches by spilling an evicted block from one cache to another. But this eviction-driven CC does not necessarily improve cache performance, since it implicitly favors applications that evict many blocks regardless of their real capacity demand. The recent Dynamic Spill-Receive (DSR) paradigm improves cooperative caching by giving priority, when spilling blocks, to the applications that benefit more from extra capacity. However, the DSR paradigm only exploits the coarse-grained, application-level difference in capacity demand, making it less effective when the non-uniformity exists at a much finer granularity. This paper (i) highlights the observation of cache set-level non-uniformity of capacity demand, and (ii) presents a novel L2 cache design, named SNUG (Set-level Non-Uniformity identifier and Grouper), that exploits this fine-grained non-uniformity to further enhance the effectiveness of cooperative caching. By utilizing a per-set shadow tag array and saturating counter, SNUG can identify whether a set should spill or receive blocks; by using an index-bit flipping scheme, SNUG can group peer sets for spilling and receiving in a flexible way, capturing more opportunities for cooperative caching. We evaluate our design through extensive execution-driven simulations of quad-core CMP systems. Our results show that for 6 classes of workload combinations our SNUG cache can improve CMP throughput by up to 22.3%, with an average of 13.9%, over the baseline configuration, while the state-of-the-art DSR scheme achieves improvements of only up to 14.5%, with an average of 8.4%.
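
    A rough sketch of the set-level spill/receive classification: the counter width, threshold, and update rule are assumptions rather than the paper's exact parameters. The saturating counter is nudged toward "receiver" whenever the per-set shadow tag array indicates that extra capacity would have turned a miss into a hit, and toward "spiller" otherwise.

# Hedged sketch of SNUG-style set-level spill/receive classification.
# Counter width, threshold, and update rule are illustrative assumptions;
# the real design drives the counter from per-set shadow-tag-array outcomes.

COUNTER_MAX = 15          # 4-bit saturating counter (assumed width)
THRESHOLD = 8             # at or above: the set benefits from receiving blocks

class SetState:
    def __init__(self):
        self.counter = COUNTER_MAX // 2   # start undecided

    def on_shadow_hit(self):
        """Shadow tags hit where the real set missed: extra capacity would have helped."""
        self.counter = min(COUNTER_MAX, self.counter + 1)

    def on_shadow_miss(self):
        """Even extra capacity would not have helped this access."""
        self.counter = max(0, self.counter - 1)

    def role(self) -> str:
        return "receiver" if self.counter >= THRESHOLD else "spiller"

# Usage: a set that keeps hitting in the shadow tags becomes a receiver.
s = SetState()
for _ in range(4):
    s.on_shadow_hit()
print(s.role())   # 'receiver'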

    Ant colony optimization on runtime reconfigurable architectures


    Carbon Capture and Storage and the Clean Development Mechanism: Underlying Regulatory and Risk Management Issues

    Owing to the urgency of global warming, some countries, such as those in the EU, indicate that up to 30% of their mitigation strategy for 2050 should be based on CCS technology. The need to diversify and use different approaches within the climate change mitigation mix cannot be overstated; hence the technologies that contribute to the overall mitigation strategy must work in tandem, as each has a role to play. Contingent on this, therefore, is the need to consider different but equally important factors alongside the technologies being used, their strategic locations, and the other resources needed to bring about climate change mitigation. To this end, the search for appropriate jurisdictions with adequate regulatory and suitable geological profiles should not be undermined by restricting advanced technological climate change mitigation strategies to developed or economically/technologically advanced countries. Extending these strategies to nations that are not yet economically or technologically advanced but that have the potential and capacity, whether in terms of geology, proximity to carbon emission sources, or other viable resources, should be encouraged, given the urgency of abating climate change effects nationally and globally. Suffice it to say, such jurisdictions need to develop the right regulatory and policy frameworks in order to be fit for purpose. This thesis underscores these observations through research into different risk indicators and strategies, such as risk assessment and management; exploring the potential CCS-CDM linkage using regulatory/legal and risk indicators; identifying and analysing the regulatory and legal elements and the geological profiling vis-à-vis analogous operations in the implementation of CCS under the CDM of the Kyoto Protocol in a Non-Annex 1 country, using Nigeria as a case study; and finally pointing to tentative means of linking CCS and CDM.

    Negotiating Software: Redistributing Control at Work and on the Web

    Since the 1970s, digital technologies have increasingly determined who gets what, when, and how; and the workings of informational capitalism have concentrated control over those technologies into the hands of just a few private corporations. The normative stance of this dissertation is that control over software should be distributed and subject to processes of negotiation: consensus-based decision making oriented towards achieving collective benefits. It explores the boundaries of negotiating software by trying to effect a change in two different kinds of software using two different approaches. The first approach targets application software – the paradigmatic model of commodified, turn-key computational media – in the context of knowledge work – labour that involves the creation and distribution of information through non-routine, creative, and abstract thinking. It tries to effect change by developing negotiable software – software that embeds support for distributed control in and over its design – as an alternative to the autocratic application model. These systems successfully demonstrate the technological feasibility of this approach, but also the limitations of design as a solution to systemic power asymmetries. The second approach targets consent management platforms – pop-up interfaces on the web that capture visitors’ consent for data processing – in the context of the European Union’s data protection regulation. It tries to effect change by employing negotiation software, which is software that supports existing processes of negotiation in complex systems, i.e., regulatory oversight and the exercise of digital rights. This approach resulted in a considerable increase in data protection compliance on Danish websites, but showed that sustainable enforcement using digital tools also requires design changes to data processing technologies. Both approaches to effecting software change – making software negotiable and using software in negotiations – revealed the drawbacks of individualistic strategies. Ultimately, the capacity of the liberal subject to stand up against corporate power is limited, and more collective approaches to software negotiation need to be developed, whether by making changes to designs or by leveraging regulation.

    7th International Conference on Higher Education Advances (HEAd'21)

    Information and communication technologies, together with new teaching paradigms, are reshaping the learning environment. The International Conference on Higher Education Advances (HEAd) aims to become a forum for researchers and practitioners to exchange ideas, experiences, opinions and research results relating to the preparation of students and the organization of educational systems. Doménech I De Soria, J.; Merello Giménez, P.; Poza Plaza, EDL. (2021). 7th International Conference on Higher Education Advances (HEAd'21). Editorial Universitat Politècnica de València. https://doi.org/10.4995/HEAD21.2021.13621