
    Enhancing Cloud System Runtime to Address Complex Failures

    As reliance on cloud systems intensifies in our progressively digital world, understanding and reinforcing their reliability becomes more crucial than ever. Despite impressive advances in augmenting the resilience of cloud systems, the growing incidence of complex failures now poses a substantial challenge to their availability. As cloud systems continue to scale and grow in complexity, failures not only become more elusive to detect but can also lead to more catastrophic consequences. Such failures call into question the foundational premises of conventional fault-tolerance designs, necessitating novel system designs to counteract them. This dissertation aims to enhance distributed systems’ capabilities to detect, localize, and react to complex failures at runtime. To this end, it makes contributions addressing three emerging categories of failures in cloud systems. The first part investigates partial failures and introduces OmegaGen, a tool adept at generating tailored checkers for detecting and localizing such failures. The second part grapples with silent semantic failures prevalent in cloud systems, presenting our study findings and introducing Oathkeeper, a tool that leverages past failures to infer rules and expose these silent issues. The third part explores solutions to slow failures via RESIN, a framework designed to detect, diagnose, and mitigate memory leaks in cloud-scale infrastructures, developed in collaboration with Microsoft Azure. The dissertation concludes by offering insights into future directions for the construction of reliable cloud systems.
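
    The checker idea in the first part lends itself to a small illustration. The Python sketch below is a loose rendering of the watchdog concept only, not OmegaGen's actual checker-generation algorithm, and every name, timeout, and the `probe` callable in it is invented: a checker periodically runs a cheap, sandboxed probe derived from a module's own operation and treats a hang or an exception as evidence of a partial failure localized to that module.

        # Minimal watchdog-style partial-failure checker (illustrative sketch;
        # all names and parameters are hypothetical).
        import concurrent.futures
        import time

        def make_watchdog(name, probe, timeout_s=5.0, interval_s=30.0):
            def loop():
                while True:
                    pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)
                    future = pool.submit(probe)
                    try:
                        future.result(timeout=timeout_s)
                    except concurrent.futures.TimeoutError:
                        print(f"[watchdog] {name}: probe hung; possible partial failure")
                    except Exception as exc:
                        print(f"[watchdog] {name}: probe failed: {exc}")
                    pool.shutdown(wait=False)  # never block on a hung probe
                    time.sleep(interval_s)
            return loop

        # e.g. run make_watchdog("log-flusher", logger.flush)() in a daemon thread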

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    Natural type inference

    Recently, dynamic language users have started to recognize the value of types in their code. To fulfil this need, many popular dynamic languages have adopted extensions that support type annotations. A prominent example is TypeScript, which offers a module system, classes, interfaces, and an optional type system on top of JavaScript. However, providing usable (not too verbose or complex) types via traditional type inference is more challenging in optional type systems. Motivated by this, we redefine the goal of type inference for optionally typed languages: infer the maximally natural and sound type, instead of the most general one. By maximally natural and sound, we mean a type that (1) is derivable in the type system, and (2) maximally reflects the intention of the programmer with respect to a learnt model. We formally devise a type inference problem that aids the inference of the maximally natural type. Our formulation combines information derived from two sources: (1) from algorithmic type systems, using deductive logic-based techniques; and (2) from the source code text, using inductive machine learning techniques. To tackle the formulated problem, we develop two frameworks that combine the two sources of information using mathematical optimization. The first framework casts the inference problem as a problem in numerical optimization. The second maps it into popular problems in discrete optimization: maximum satisfiability (MaxSAT) and integer linear programming (ILP). Both frameworks are built to be consistent with information derived from the different sources. Moreover, through formal proofs, we validate the soundness and completeness of the developed framework for a core lambda-calculus with named types. To assess the efficacy of the developed frameworks, we implement them in a tool named Optyper that realizes natural type inference for TypeScript. We evaluate Optyper on TypeScript programs obtained from real-world projects. The evaluation shows that, in practice, the combination of logical and natural constraints yields a large improvement in performance over either kind of information used individually. Further, our frameworks outperform state-of-the-art type inference techniques in producing natural and sound types.
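
    The weighted-constraint view admits a tiny worked example. The brute-force Python sketch below is our illustration, not Optyper; the variables, candidate types, and weights are invented. Hard constraints encode soundness from the type system, soft weighted constraints encode naturalness from a learnt model, and the chosen assignment must satisfy every hard constraint while maximizing total soft weight, which is exactly the shape of a weighted MaxSAT instance.

        # Toy weighted-constraint type inference (illustrative only).
        from itertools import product

        variables = ["x", "f_ret"]
        candidates = {"x": ["number", "string"], "f_ret": ["number", "string"]}

        # Hard (logical) constraints: every sound assignment must satisfy them.
        hard = [lambda a: a["f_ret"] == a["x"]]  # e.g. f returns x unchanged

        # Soft (natural) constraints: (weight, predicate) from a learnt model.
        soft = [(0.9, lambda a: a["x"] == "number"),      # 'x' looks numeric
                (0.4, lambda a: a["f_ret"] == "string")]  # weaker, conflicting hint

        best, best_score = None, float("-inf")
        for choice in product(*(candidates[v] for v in variables)):
            a = dict(zip(variables, choice))
            if not all(c(a) for c in hard):   # soundness is non-negotiable
                continue
            score = sum(w for w, p in soft if p(a))
            if score > best_score:
                best, best_score = a, score

        print(best)  # {'x': 'number', 'f_ret': 'number'}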

    LIPIcs, Volume 261, ICALP 2023, Complete Volume

    LIPIcs, Volume 261, ICALP 2023, Complete Volume

    Morris Catalog 2023-2025

    This document serves as an official historical record for a specific period in time. The information found is subject to change without notice. Colleges and departments make changes to their degree requirements and course descriptions frequently. More information is available at catalogs.umn.edu.

    Pre-deployment Analysis of Smart Contracts -- A Survey

    Smart contracts are programs that execute transactions involving independent parties and cryptocurrencies. As programs, smart contracts are susceptible to a wide range of errors and vulnerabilities, which can result in significant losses. Furthermore, by design, smart contract transactions are irreversible. This creates a need for methods to ensure the correctness and security of contracts before deployment. Recently, there has been substantial research into such methods, and its sheer volume makes articulating the state of the art a substantial undertaking. To address this challenge, we present a systematic review of the literature. A key feature of our presentation is to factor out the relationship between vulnerabilities and methods through properties. Specifically, we enumerate and classify smart contract vulnerabilities and methods by the properties they address. The methods considered include static analysis as well as dynamic analysis methods and machine learning algorithms that analyze smart contracts before deployment. Several patterns about the strengths of different methods emerge through this classification process.
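
    To give a flavor of the static-analysis end of this classification, the toy Python check below (our illustration, not a method from the survey) scans Solidity source text for a classic reentrancy shape, an external call that precedes a state update. Production analyzers work over ASTs, intermediate representations, or bytecode rather than regular expressions.

        # Toy property-directed static check for a reentrancy pattern.
        import re

        EXTERNAL_CALL = re.compile(r"\.call\{[^}]*\}\(|\.call\(|\.send\(|\.transfer\(")
        STATE_WRITE = re.compile(r"balances\[[^\]]+\]\s*[-+]?=")

        def possible_reentrancy(function_body: str) -> bool:
            """Flag an external call that appears before a state write."""
            call = EXTERNAL_CALL.search(function_body)
            write = STATE_WRITE.search(function_body)
            return bool(call and write and call.start() < write.start())

        vulnerable = """
            (bool ok, ) = msg.sender.call{value: amount}("");
            balances[msg.sender] -= amount;
        """
        print(possible_reentrancy(vulnerable))  # True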

    Understanding and Mitigating Flaky Software Test Cases

    A flaky test is a test case that can pass or fail without changes to the test case code or the code under test. Flaky tests are a widespread problem with serious consequences for developers and researchers alike. For developers, flaky tests lead to time wasted debugging spurious failures, tempting them to ignore future failures. While unreliable, flaky tests can still indicate genuine issues in the code under test, so ignoring them can lead to bugs being missed. The non-deterministic behaviour of flaky tests is also a major snag to continuous integration, where a single flaky test can fail an entire build. For researchers, flaky tests challenge the assumption that a test failure implies a bug, an assumption that many fundamental techniques in software engineering research rely upon, including test acceleration, mutation testing, and fault localisation. Despite increasing research interest in the topic, open problems remain. In particular, relatively little attention has been paid to the views and experiences of developers, despite a considerable body of empirical work; understanding those views is essential to guide the focus of research into areas most likely to benefit the software engineering industry. Furthermore, previous automated techniques for detecting flaky tests are typically based either on exhaustively rerunning test cases or on machine learning classifiers. The prohibitive runtime of the rerunning approach and the demonstrably poor inter-project generalisability of classifiers leave practitioners with a stark choice when it comes to automatically detecting flaky tests. In response to these challenges, I set two high-level goals for this thesis: (1) to enhance the understanding of the manifestation, causes, and impacts of flaky tests; and (2) to develop and empirically evaluate efficient automated techniques for mitigating flaky tests. In pursuit of these goals, this thesis makes five contributions: (1) a comprehensive systematic literature review of 76 published papers; (2) a literature-guided survey of 170 professional software developers; (3) a new feature set for encoding test cases in machine learning-based flaky test detection; (4) a novel approach for reducing the time cost of rerunning-based techniques for detecting flaky tests by combining them with machine learning classifiers; and (5) an automated technique that detects and classifies existing flaky tests in a project and produces reusable project-specific machine learning classifiers able to provide fast and accurate predictions for future test cases in that project.
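
    Contribution (4) can be outlined in a few lines. The Python sketch below is a hedged outline of the hybrid idea, not the thesis's implementation; `predict_flaky_proba` and `rerun` are hypothetical stand-ins for a trained classifier and a test harness. The classifier ranks test cases by predicted flakiness so that the expensive rerun budget is spent only on the most suspicious ones.

        # Hybrid flaky-test detection: classifier-guided rerunning (sketch).
        def detect_flaky(tests, predict_flaky_proba, rerun, budget=100, reruns=10):
            """Return tests observed to both pass and fail across reruns."""
            ranked = sorted(tests, key=predict_flaky_proba, reverse=True)
            flaky = []
            for test in ranked[:budget]:          # spend reruns on likely suspects
                outcomes = {rerun(test) for _ in range(reruns)}
                if outcomes == {True, False}:     # passed AND failed => flaky
                    flaky.append(test)
            return flaky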

    Jornadas Nacionales de Investigación en Ciberseguridad: proceedings of the VIII Jornadas Nacionales de Investigación en Ciberseguridad: Vigo, 21–23 June 2023

    Jornadas Nacionales de Investigación en Ciberseguridad (8ª. 2023. Vigo); atlanTTic; AMTEGA: Axencia para a Modernización Tecnolóxica de Galicia; INCIBE: Instituto Nacional de Ciberseguridad

    Blockchain Software Verification and Optimization

    In the last decade, blockchain technology has undergone a strong evolution. Its maturity and consolidation have aroused the interest of companies and businesses, making it a possible answer to various industrial needs. However, the lack of standards and tools for the development and maintenance of blockchain software leaves open challenges and room for improvement. The goal of this thesis is to tackle some of the challenges posed by blockchain technology and to design and implement analyses, processes, and architectures applicable in the real world. In particular, two topics are addressed: the verification of blockchain software and the code optimization of smart contracts. As regards verification, the thesis presents original tools and analyses able to detect statically, i.e. without code execution, issues related to non-determinism, untrusted cross-contract invocation, and numerical overflow/underflow. Moreover, an approach based on on-chain verification is investigated, proactively involving the blockchain itself in verifying code before and after its deployment. On the optimization side, the thesis describes an optimization process for code translation from the Solidity language to Takamaka, and proposes an efficient algorithm to compute snapshots of fungible and non-fungible tokens. The results of this thesis are an important first step towards improving blockchain software development, empirically demonstrating the applicability of the proposed approaches and their relevance to the industrial field.
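
    The snapshot problem admits a compact illustration. The Python sketch below is our own rendering of one standard way to make token snapshots cheap, not the algorithm developed in the thesis for Takamaka, and all names in it are invented: instead of copying every balance, each account keeps a version history, so taking a snapshot is O(1) and reading a historical balance is a binary search.

        # Cheap token snapshots via per-account version histories (sketch).
        import bisect

        class SnapshottableToken:
            def __init__(self):
                self.version = 0
                self.history = {}        # account -> ([versions], [balances])

            def set_balance(self, account, balance):
                self.version += 1
                vs, bs = self.history.setdefault(account, ([], []))
                vs.append(self.version)
                bs.append(balance)

            def snapshot(self):
                return self.version      # O(1): just remember the counter

            def balance_at(self, account, snap):
                vs, bs = self.history.get(account, ([], []))
                i = bisect.bisect_right(vs, snap) - 1
                return bs[i] if i >= 0 else 0

        token = SnapshottableToken()
        token.set_balance("alice", 100)
        snap = token.snapshot()
        token.set_balance("alice", 40)
        print(token.balance_at("alice", snap))  # 100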