1,639 research outputs found

    TANDEM: taming failures in next-generation datacenters with emerging memory

    Get PDF
    The explosive growth of online services, leading to unforeseen scales, has made modern datacenters highly prone to failures. Taming these failures hinges on fast and correct recovery that minimizes service interruptions. To be recoverable, applications must take additional measures during failure-free execution to maintain a recoverable state of their data and computation logic. However, these precautionary measures have severe implications for performance, correctness, and programmability, making recovery incredibly challenging to realize in practice. Emerging memory, particularly non-volatile memory (NVM) and disaggregated memory (DM), offers a promising opportunity to achieve fast recovery with maximum performance. However, incorporating these technologies into datacenter architecture presents significant challenges: their architectural attributes, which differ significantly from those of traditional memory devices, introduce new semantic challenges for implementing recovery, complicating correctness and programmability. Can emerging memory enable fast, performant, and correct recovery in the datacenter? This thesis aims to answer this question while addressing the associated challenges. When architecting datacenters with emerging memory, system architects face four key challenges: (1) how to guarantee correct semantics; (2) how to efficiently enforce correctness with optimal performance; (3) how to validate end-to-end correctness, including recovery; and (4) how to preserve programmer productivity (programmability). This thesis addresses these challenges through the following approaches: (a) defining precise consistency models that formally specify correct end-to-end semantics in the presence of failures (consistency models also play a crucial role in programmability); (b) developing new low-level mechanisms to efficiently enforce the prescribed models given the capabilities of emerging memory; and (c) creating robust testing frameworks to validate end-to-end correctness and recovery.

    We start our exploration with non-volatile memory (NVM), which offers fast persistence capabilities directly accessible through the processor's load-store (memory) interface. Notably, these capabilities can be leveraged to enable fast recovery for Log-Free Data Structures (LFDs) while maximizing performance. However, due to the complexity of modern cache hierarchies, data rarely persist in any particular order, jeopardizing recovery and correctness. Recovery therefore needs primitives that explicitly control the order of updates to NVM (known as persistency models). We outline the precise specification of a novel persistency model – Release Persistency (RP) – that provides a consistency guarantee for LFDs on what remains in non-volatile memory upon failure. To efficiently enforce RP, we propose a novel microarchitectural mechanism, lazy release persistence (LRP). Using standard LFD benchmarks, we show that LRP achieves fast recovery while incurring minimal performance overhead.

    We continue our discussion with memory disaggregation, which decouples memory from traditional monolithic servers and offers a promising pathway to very high availability in replicated in-memory data stores. Achieving such availability hinges on transaction protocols that can efficiently handle recovery in this setting, where compute and memory are independent. However, there is a challenge: disaggregated memory (DM) does not work with RPC-style protocols, mandating one-sided transaction protocols. Exacerbating the problem, one-sided transactions expose critical low-level ordering to architects, posing a threat to correctness. We present a highly available transaction protocol, Pandora, that is specifically designed to achieve fast recovery in disaggregated key-value stores (DKVSes). Pandora is the first one-sided transactional protocol that ensures correct, non-blocking, and fast recovery in DKVSes. Our experimental implementation demonstrates that Pandora achieves fast recovery and high availability while causing minimal disruption to services. Finally, we introduce a novel targeted litmus-testing framework – DART – to validate the end-to-end correctness of transactional protocols with recovery. Using DART's targeted testing capabilities, we found several critical bugs in Pandora, highlighting the need for robust end-to-end testing methods in the design loop to iteratively fix correctness bugs. Crucially, DART is lightweight and black-box, thereby eliminating any intervention from programmers.
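
    As a rough illustration of the ordering problem that a persistency model such as Release Persistency addresses, the toy Python sketch below (not the thesis's mechanism; ToyNVM, persist, and the list layout are invented for illustration) shows a log-free insert in which the new node's payload must become durable before the pointer that publishes it, so a crash can never expose a pointer to unpersisted data.

        # Toy model (illustrative only, not RP/LRP themselves): record the order in
        # which updates become durable and check that a log-free insert persists the
        # node's payload before the pointer that publishes it.
        class ToyNVM:
            def __init__(self):
                self.persist_order = []   # labels, in the order they became durable

            def persist(self, label):
                self.persist_order.append(label)

        class LogFreeList:
            def __init__(self, nvm):
                self.nvm = nvm
                self.head = None

            def insert(self, value):
                node = {"value": value, "next": self.head}
                self.nvm.persist(f"node({value})")   # 1) payload durable first
                self.head = node
                self.nvm.persist("head")             # 2) "release": publish afterwards

        nvm = ToyNVM()
        lst = LogFreeList(nvm)
        lst.insert(42)
        assert nvm.persist_order.index("node(42)") < nvm.persist_order.index("head")

    In the actual hardware setting this ordering would be enforced by the persistency model rather than by explicit calls; the sketch only illustrates the required order of the two persists.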

    Complementarities in human capital production: The Importance of Gene-Environment Interactions

    Get PDF

    Assessing the Role and Regulatory Impact of Digital Assets in Decentralizing Finance

    Get PDF
    This project will explore the development of decentralized finance (DeFi) markets since the first introduction of digital assets created through the application of a form of distributed ledger technology (DLT), known as blockchain, in 2008. More specifically, a qualitative inquiry into the role of digital assets in relation to traditional financial market infrastructure will be conducted in order to answer the following questions: (i) can the digital asset and decentralized financial markets examined in this thesis co-exist with traditional assets and financial markets, and, if so, (ii) are traditional or novel forms of regulation (whether financial or otherwise) needed or desirable for the digital asset and decentralized financial markets examined herein? The aim of this project will be to challenge a preliminary hypothesis that traditional and decentralized finance can be compatible, provided that governments and other centralized authorities approach market innovations as an opportunity to improve existing monetary infrastructure and the delivery of financial services (in both the public and private sectors), rather than as an existential threat. Thus, this thesis seeks to establish that, by collaborating with private markets to identify the public good to which DeFi markets contribute, the public sector can foster an appropriate environment that is both promotive and protective of the public interest without unduly stifling innovation and progress.

    Analysis and monitoring of single HaCaT cells using volumetric Raman mapping and machine learning

    Get PDF
    No explorer reached a pole without a map, no chef served a meal without tasting, and no surgeon implants untested devices. Higher-accuracy maps, more sensitive taste buds, and more rigorous tests increase confidence in positive outcomes. Biomedical manufacturing necessitates rigour, whether developing drugs or creating bioengineered tissues [1]–[4]. By designing a dynamic environment that supports mammalian cells during experiments within a Raman spectroscope, this project provides a platform that more closely replicates in vivo conditions. The platform also adds the opportunity to automate adaptation of the cell culture environment, alongside spectral monitoring of cells with machine learning and three-dimensional Raman mapping, called volumetric Raman mapping (VRM). Previous research highlighted key areas for refinement, such as a structured approach to shading Raman maps [5], [6] and the collection of VRM [7]. Refining VRM shading and collection was therefore the initial focus: k-means-directed shading for vibrational spectroscopy maps was developed in Chapter 3, and depth distortion and VRM calibration were explored in Chapter 4. “Cage” scaffolds, designed using the findings from Chapter 4, were then utilised to influence cell behaviour by varying the number of cage beams to change the scaffold porosity. Altering the porosity facilitated spectroscopic investigation of previously observed changes in cell biology in response to porous scaffolds [8]. VRM visualised changes in single human keratinocyte (HaCaT) cell morphology, providing a complementary technique to machine learning classification. This increased technical rigour justified progression, in Chapter 6, to the development of an in-situ flow chamber for Raman spectroscopy, using a psoriasis (dithranol-HaCaT) model on unfixed cells. K-means-directed shading and principal component analysis (PCA) revealed HaCaT cell adaptations aligning with previous publications [5] and earlier thesis sections. The k-means-directed Raman maps and PCA score plots verified the drug-supplying capacity of the flow chamber, justifying future investigation into VRM and machine learning for monitoring single cells within the flow chamber.
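
    As a rough sketch of what k-means-directed shading of a Raman map involves (illustrative only; the array shapes, the number of clusters, and the random placeholder spectra are assumptions, not the thesis's pipeline), per-position spectra can be clustered and the map shaded by cluster label:

        # Illustrative k-means shading of a Raman map: cluster one spectrum per map
        # position and colour the map by cluster label. Shapes and k are assumed.
        import numpy as np
        import matplotlib.pyplot as plt
        from sklearn.cluster import KMeans

        ny, nx, n_wavenumbers = 64, 64, 1024                 # hypothetical map size
        spectra = np.random.rand(ny, nx, n_wavenumbers)      # placeholder spectra

        X = spectra.reshape(-1, n_wavenumbers)               # one row per position
        labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

        plt.imshow(labels.reshape(ny, nx), cmap="viridis")   # shade by cluster
        plt.title("k-means directed shading (illustrative)")
        plt.colorbar(label="cluster")
        plt.show()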

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    Get PDF
    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    TransEdge: Supporting Efficient Read Queries Across Untrusted Edge Nodes

    Full text link
    We propose Transactional Edge (TransEdge), a distributed transaction processing system for untrusted environments such as edge computing systems. What distinguishes TransEdge is its focus on efficient support for read-only transactions. TransEdge allows reading from different partitions consistently using one round in most cases and no more than two rounds in the worst case. TransEdge's design is centered around a dependency-tracking scheme that spans its consensus and transaction processing protocols. Our performance evaluation shows that TransEdge's snapshot read-only transactions achieve a 9-24x speedup compared to current Byzantine fault-tolerant systems.
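
    Based only on the abstract (the actual TransEdge protocol is not described here), a one-round snapshot read with a second-round fallback driven by dependency tracking could look roughly like the following sketch; Partition, the version/dependency layout, and snapshot_read are all invented for illustration.

        # Speculative sketch: read each key from its partition in one round; fall
        # back to a second round only if the returned versions are not mutually
        # consistent under the dependencies each partition reports.
        class Partition:
            """Minimal stand-in for a partition holding versioned values."""
            def __init__(self):
                self.versions = {}   # key -> list of (version, value, deps)

            def write(self, key, version, value, deps=None):
                self.versions.setdefault(key, []).append((version, value, deps or {}))

            def read(self, key, at_least=0):
                ok = [v for v in self.versions[key] if v[0] >= at_least]
                version, value, deps = max(ok, key=lambda v: v[0])
                return {"key": key, "version": version, "value": value, "deps": deps}

        def consistent(replies):
            returned = {r["key"]: r["version"] for r in replies}
            return all(returned.get(k, v) >= v
                       for r in replies for k, v in r["deps"].items())

        def snapshot_read(placement, keys):
            replies = [placement[k].read(k) for k in keys]                        # round 1
            if not consistent(replies):
                need = {k: max(r["deps"].get(k, 0) for r in replies) for k in keys}
                replies = [placement[k].read(k, at_least=need[k]) for k in keys]  # round 2
            return {r["key"]: r["value"] for r in replies}

        pa, pb = Partition(), Partition()
        pa.write("x", 1, "x1"); pa.write("x", 2, "x2")
        pb.write("y", 2, "y2", deps={"x": 2})          # y2 was written after x2
        print(snapshot_read({"x": pa, "y": pb}, ["x", "y"]))   # {'x': 'x2', 'y': 'y2'}

    Whether TransEdge actually structures its reads this way is not stated in the abstract; the sketch only illustrates how dependency information can keep the common case to a single round.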

    30th European Congress on Obesity (ECO 2023)

    Get PDF
    This is the abstract book of the 30th European Congress on Obesity (ECO 2023).

    Integrating materials supply in strategic mine planning of underground coal mines

    Get PDF
    In July 2005 the Australian Coal Industry's Research Program (ACARP) commissioned Gary Gibson to identify constraints that would prevent development production rates from achieving full capacity. A "TOP 5" constraint was: "The logistics of supply transport distribution and handling of roof support consumables is an issue at older extensive mines immediately while the achievement of higher development rates will compound this issue at most mines." In 2020, Walker, Harvey, Baafi, Kiridena, and Porter were commissioned by ACARP to investigate Australian best practice and the progress made since Gibson's 2005 report. Their report, "Benchmarking study in underground coal mining logistics", found that even though logistics continues to be recognised as a critical constraint across many operations, particularly at a tactical, day-to-day level, no strategic thought had been given to logistics in underground coal mines; rather, it was always assumed that logistics could keep up with any future planned design and productivity. Consequently, without estimating the impact of any logistical constraint in a life-of-mine plan, the risk of overvaluing a mining operation is high.

    This thesis attempts to rectify this shortfall by developing a system to strategically identify logistics bottlenecks, and the impacts that mine planning parameters have on them, at any point in time throughout a life-of-mine plan. By identifying any logistics constraint as early as possible, the best opportunity to rectify the problem at the least expense is realised. At the very worst, if a logistics constraint were unsolvable, it could be understood, planned for, and reflected in the mine's ongoing financial valuations. The system developed in this thesis, using a suite of unique algorithms, is designed to "bolt onto" existing mine plans in the XPAC mine scheduling software package and to identify, at a strategic level, the number of material delivery loads required to maintain planned productivity for a mining operation. Once a constraint is identified, the system drills down to a tactical level using FlexSim discrete event simulation to confirm the predicted impact and to determine whether a tactical fix can be transferred back as a long-term solution. Most importantly, the system is designed to communicate to multiple non-technical stakeholders, through simple graphical outputs, whether there is a risk to planned production levels due to a logistics constraint.
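
    As a rough illustration of the kind of strategic-level calculation the abstract describes, the sketch below translates scheduled development metres into roof-support consumable demand and delivery trips, and flags periods where demand exceeds delivery capacity. Every rate, capacity, and period value is a made-up placeholder; this is not the thesis's XPAC-integrated system or its algorithms.

        # Hypothetical sketch: convert scheduled development metres per period into
        # delivery trips and flag periods where the logistics capacity is exceeded.
        BOLTS_PER_METRE = 8               # roof bolts per development metre (assumed)
        BOLTS_PER_POD = 300               # bolts per transport pod (assumed)
        PODS_PER_TRIP = 2                 # pods per delivery vehicle trip (assumed)
        TRIPS_AVAILABLE_PER_PERIOD = 40   # delivery capacity per period (assumed)

        def delivery_loads(dev_metres_by_period):
            """Return {period: (trips_required, constrained?)}."""
            out = {}
            for period, metres in dev_metres_by_period.items():
                bolts = metres * BOLTS_PER_METRE
                pods = -(-bolts // BOLTS_PER_POD)        # ceiling division
                trips = -(-pods // PODS_PER_TRIP)
                out[period] = (trips, trips > TRIPS_AVAILABLE_PER_PERIOD)
            return out

        schedule = {"Year 1 Q1": 1800, "Year 1 Q2": 2400, "Year 1 Q3": 3100}  # metres
        for period, (trips, constrained) in delivery_loads(schedule).items():
            flag = "LOGISTICS CONSTRAINT" if constrained else "ok"
            print(f"{period}: {trips} delivery trips required ({flag})")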