5,290 research outputs found

    Undergraduate Catalog of Studies, 2023-2024

    Get PDF

    Graduate Catalog of Studies, 2023-2024

    Get PDF

    TANDEM: taming failures in next-generation datacenters with emerging memory

    Get PDF
    The explosive growth of online services, reaching unforeseen scales, has made modern datacenters highly prone to failures. Taming these failures hinges on fast and correct recovery that minimizes service interruptions. To be recoverable, applications must take additional measures during failure-free execution to maintain a recoverable state of data and computation logic. However, these precautionary measures have severe implications for performance, correctness, and programmability, making recovery incredibly challenging to realize in practice. Emerging memory, particularly non-volatile memory (NVM) and disaggregated memory (DM), offers a promising opportunity to achieve fast recovery with maximum performance. However, incorporating these technologies into datacenter architecture presents significant challenges: their distinct architectural attributes, differing significantly from traditional memory devices, introduce new semantic challenges for implementing recovery, complicating correctness and programmability. Can emerging memory enable fast, performant, and correct recovery in the datacenter? This thesis aims to answer this question while addressing the associated challenges. When architecting datacenters with emerging memory, system architects face four key challenges: (1) how to guarantee correct semantics; (2) how to efficiently enforce correctness with optimal performance; (3) how to validate end-to-end correctness, including recovery; and (4) how to preserve programmer productivity (programmability). 
This thesis aims to address these challenges through the following approaches: (a) defining precise consistency models that formally specify correct end-to-end semantics in the presence of failures (consistency models also play a crucial role in programmability); (b) developing new low-level mechanisms to efficiently enforce the prescribed models given the capabilities of emerging memory; and (c) creating robust testing frameworks to validate end-to-end correctness and recovery. We start our exploration with non-volatile memory (NVM), which offers fast persistence capabilities directly accessible through the processor's load-store (memory) interface. Notably, these capabilities can be leveraged to enable fast recovery for Log-Free Data Structures (LFDs) while maximizing performance. However, due to the complexity of modern cache hierarchies, data hardly persist in any specific order, jeopardizing recovery and correctness. Recovery therefore needs primitives that explicitly control the order of updates to NVM (known as persistency models). We outline the precise specification of a novel persistency model, Release Persistency (RP), which provides a consistency guarantee for LFDs on what remains in non-volatile memory upon failure. To efficiently enforce RP, we propose a novel microarchitectural mechanism, lazy release persistence (LRP). Using standard LFD benchmarks, we show that LRP achieves fast recovery while incurring minimal performance overhead. We continue our discussion with memory disaggregation, which decouples memory from traditional monolithic servers, offering a promising pathway to very high availability in replicated in-memory data stores. Achieving such availability hinges on transaction protocols that can efficiently handle recovery in this setting, where compute and memory are independent. However, there is a challenge: disaggregated memory (DM) fails to work with RPC-style protocols, mandating one-sided transaction protocols. 
Exacerbating the problem, one-sided transactions expose critical low-level ordering to architects, posing a threat to correctness. We present a highly available transaction protocol, Pandora, that is specifically designed to achieve fast recovery in disaggregated key-value stores (DKVSes). Pandora is the first one-sided transactional protocol that ensures correct, non-blocking, and fast recovery in DKVSes. Our experimental evaluation demonstrates that Pandora achieves fast recovery and high availability while causing minimal disruption to services. Finally, we introduce a novel targeted litmus-testing framework, DART, to validate the end-to-end correctness of transactional protocols with recovery. Using DART's targeted testing capabilities, we found several critical bugs in Pandora, highlighting the need for robust end-to-end testing methods in the design loop to iteratively fix correctness bugs. Crucially, DART is lightweight and black-box, requiring no intervention from programmers.
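As a rough intuition for why persistency ordering matters for recovery, consider the following minimal Python sketch. It is illustrative only and not the RP/LRP mechanism from the thesis: NVM is modeled as a dictionary, and an unordered cache hierarchy is modeled as "any subset of outstanding writes may have persisted at the moment of a crash". Without ordering, a crash can persist a publish flag before its payload, leaving an unrecoverable state.

```python
import itertools

def crash_states(writes, ordered):
    """Enumerate possible post-crash NVM contents.

    writes: list of (addr, value) pairs in program order.
    If `ordered`, writes persist in program order, so a crash leaves
    some prefix in NVM. Otherwise any subset may have reached NVM,
    modeling cache-driven reordering of persists.
    """
    if ordered:
        for i in range(len(writes) + 1):
            yield dict(writes[:i])
    else:
        for k in range(len(writes) + 1):
            for subset in itertools.combinations(writes, k):
                yield dict(subset)

def recovery_ok(nvm):
    # Invariant: if the "published" flag persisted, the payload must too.
    return not nvm.get("published") or nvm.get("payload") == 42

writes = [("payload", 42), ("published", True)]

# With ordered persists, every crash state is recoverable.
assert all(recovery_ok(s) for s in crash_states(writes, ordered=True))
# Without ordering, some crash state violates the invariant.
assert not all(recovery_ok(s) for s in crash_states(writes, ordered=False))
```

A persistency model such as RP gives the programmer exactly this kind of ordering guarantee at well-defined synchronization points, so that recovery code only ever observes consistent states.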

    Colossal Trajectory Mining: A unifying approach to mine behavioral mobility patterns

    Get PDF
    Spatio-temporal mobility patterns are at the core of strategic applications such as urban planning and monitoring. Depending on the strength of the spatio-temporal constraints, different mobility patterns can be defined. While existing approaches work well for extracting groups of objects sharing fine-grained paths, the huge volume of large-scale data calls for coarse-grained solutions. In this paper, we introduce Colossal Trajectory Mining (CTM) to efficiently extract heterogeneous mobility patterns from a multidimensional space that, along with the space and time dimensions, can consider additional trajectory features (e.g., means of transport or activity) to characterize behavioral mobility patterns. The algorithm is natively designed in a distributed fashion, and the experimental evaluation shows its scalability with respect to the involved features and the cardinality of the trajectory dataset.

    Atomistic molecular simulations of Aβ-Zn conformational ensembles

    Get PDF
    The amyloid-forming Aβ peptide is able to interact with metal cations to form very stable complexes that influence fibril formation and contribute to the onset of Alzheimer's disease. Multiple structures of peptides derived from Aβ in complex with different metals have been resolved experimentally to provide an atomic-level description of the metal-protein interactions. However, Aβ is intrinsically disordered, and hence more amenable to an ensemble description. Molecular dynamics simulations can now reach the timescales needed to generate ensembles for these types of complexes. However, this requires accurate force fields both for the protein and for the protein-metal interactions. Here we use state-of-the-art methods to generate force field parameters for the Zn(II) cations in a set of complexes formed by different Aβ variants and combine them with the Amber99SB*-ILDN optimized force field. Upon comparison of NMR experiments with the simulation results, further optimized with a Bayesian/Maximum entropy approach, we provide an accurate description of the molecular ensembles for most Aβ-metal complexes. We find that the resulting conformational ensembles are more heterogeneous than the NMR models deposited in the Protein Data Bank. Financial support comes from Eusko Jaurlaritza (Basque Government) through project IT1584-22 and from the Spanish Ministry of Science and Universities through the Office of Science Research (MINECO/FEDER) via grant PID2021-127907NB-I00. DDS acknowledges the Spanish Ministry of Science and Universities for a Ramón y Cajal contract (Grant RYC-2016-19590).
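The maximum-entropy reweighting idea can be sketched in miniature. This is illustrative only, not the paper's actual Bayesian/Maximum entropy protocol, which handles many observables, experimental errors, and a confidence parameter: given per-frame values of one observable, find the minimally perturbed weights (relative to uniform) whose ensemble average matches the experimental value, by solving for a single Lagrange multiplier.

```python
import math

def maxent_reweight(obs, target, lam_lo=-50.0, lam_hi=50.0, tol=1e-9):
    """Maximum-entropy reweighting of an ensemble with one restraint:
    find normalized weights w_i proportional to exp(-lam * obs_i)
    (starting from uniform weights) such that the weighted average of
    `obs` matches `target`. Solved by bisection on the Lagrange
    multiplier `lam`, using that the average decreases in `lam`."""
    def avg(lam):
        ws = [math.exp(-lam * o) for o in obs]
        z = sum(ws)
        return sum(w * o for w, o in zip(ws, obs)) / z

    for _ in range(200):
        mid = 0.5 * (lam_lo + lam_hi)
        if avg(mid) > target:
            lam_lo = mid   # average still too high: increase lam
        else:
            lam_hi = mid
        if lam_hi - lam_lo < tol:
            break
    lam = 0.5 * (lam_lo + lam_hi)
    ws = [math.exp(-lam * o) for o in obs]
    z = sum(ws)
    return [w / z for w in ws], lam

# Toy ensemble of three frames with observable values 1, 2, 3 and an
# "experimental" target of 2.5: reweighting shifts weight to frame 3.
weights, lam = maxent_reweight([1.0, 2.0, 3.0], target=2.5)
```

In ensemble refinement this minimal perturbation property is the point: the simulated (force-field) ensemble is corrected just enough to agree with the NMR data, rather than being replaced by it.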

    Graduate Catalog of Studies, 2023-2024

    Get PDF

    Modern computing: Vision and challenges

    Get PDF
    Over the past six decades, the field of computing systems has undergone significant transformations, profoundly impacting society through developments such as the Internet and the commodification of computing. Underpinned by technological advancements, computer systems, far from being static, have continuously evolved and adapted to cover multifaceted societal niches. This has led to new paradigms such as cloud, fog, and edge computing and the Internet of Things (IoT), which offer fresh economic and creative opportunities. Nevertheless, this rapid change poses complex research challenges, especially in maximizing potential and enhancing functionality. As such, to maintain an economical level of performance that meets ever-tighter requirements, one must understand the drivers of new model emergence and expansion, and how contemporary challenges differ from past ones. To that end, this article investigates and assesses the factors influencing the evolution of computing systems, covering established systems and architectures as well as newer developments such as serverless computing, quantum computing, and on-device AI at the edge. Trends emerge when one traces the technological trajectory: the rapid obsolescence of frameworks due to business and technical constraints, a move towards specialized systems and models, and varying approaches to centralized and decentralized control. This comprehensive review of modern computing systems looks ahead to the future of research in the field, highlighting key challenges and emerging trends, and underscoring their importance in cost-effectively driving technological progress.

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    Get PDF
    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    Deep Learning-Based Modeling of 5G Core Control Plane for 5G Network Digital Twin

    Full text link
    A digital twin is a key enabler for the development and implementation of new technologies in 5G and beyond networks. However, the complex structure and diverse functions of the current 5G core network, especially its control plane, make it difficult to build the core network of the digital twin. In this paper, we propose two novel data-driven architectures for modeling the 5G control plane and implement the corresponding deep learning models, 5GC-Seq2Seq and 5GC-former, based on the vanilla Seq2Seq model and the Transformer decoder, respectively. To train and test the models, we also present a solution that converts signaling messages to and from vectors, which can be used for dataset construction. The experiments are based on 5G core network signaling data collected by the Spirent C50 network tester, covering various procedures related to registration, handover, PDU sessions, etc. Our results show that 5GC-Seq2Seq achieves over a 99.98% F1-score (the harmonic mean of precision and recall) with a relatively simple structure, while 5GC-former attains a higher than 99.998% F1-score with a more complex and highly parallel model, indicating that the proposed method reproduces the major functions of the 5G core network control plane in the digital twin with high accuracy.
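The message-vector interconversion used for dataset construction could look like the following minimal sketch. This is a hypothetical simplification, not the paper's encoding, which operates on full signaling messages rather than just message names, and the example message names are assumptions: build a vocabulary over message types, encode a procedure as a fixed-length padded integer vector, and decode it back.

```python
def build_vocab(messages):
    """Assign each distinct signaling message type an integer id,
    reserving 0 for padding."""
    vocab = {"<pad>": 0}
    for msg in messages:
        vocab.setdefault(msg, len(vocab))
    return vocab

def encode(procedure, vocab, max_len):
    """Turn a procedure (a sequence of message names) into a
    fixed-length integer vector, right-padded with 0."""
    ids = [vocab[m] for m in procedure]
    return ids + [0] * (max_len - len(ids))

def decode(vec, vocab):
    """Invert `encode`, dropping padding."""
    inv = {i: m for m, i in vocab.items()}
    return [inv[i] for i in vec if i != 0]

# A registration-like procedure (message names are illustrative).
proc = ["RegistrationRequest", "AuthenticationRequest",
        "AuthenticationResponse", "RegistrationAccept"]
vocab = build_vocab(proc)
vec = encode(proc, vocab, max_len=8)
assert decode(vec, vocab) == proc
```

With such a lossless round trip, a sequence model can be trained to predict the core network's next response vector from the request history, then decoded back into signaling messages for the digital twin.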