
    The treewidth of smart contracts

    Smart contracts are programs that are stored and executed on the Blockchain and can receive, manage and transfer money (cryptocurrency units). Two important problems regarding smart contracts are formal analysis and compiler optimization. Formal analysis is extremely important, because smart contracts hold funds worth billions of dollars and their code is immutable after deployment. Hence, an undetected bug can cause significant financial losses. Compiler optimization is also crucial, because every action of a smart contract has to be executed by every node in the Blockchain network. Therefore, optimizations in compiling smart contracts can lead to significant savings in computation, time and energy. Two classical approaches in program analysis and compiler optimization are intraprocedural and interprocedural analysis. In intraprocedural analysis, each function is analyzed separately, while interprocedural analysis considers the entire program. In both cases, the analyses are usually reduced to graph problems over the control flow graph (CFG) of the program. These graph problems are often computationally expensive. Hence, there has been ample research on exploiting structural properties of CFGs for efficient algorithms. One such well-studied property is the treewidth, which is a measure of tree-likeness of graphs. It is known that intraprocedural CFGs of structured programs have treewidth at most 6, whereas the interprocedural treewidth cannot be bounded. This result has been used as a basis for many efficient intraprocedural analyses. In this paper, we explore the idea of exploiting the treewidth of smart contracts for formal analysis and compiler optimization. First, similar to classical programs, we show that the intraprocedural treewidth of structured Solidity and Vyper smart contracts is at most 9. 
Second, for global analysis, we prove that the interprocedural treewidth of structured smart contracts is bounded by 10 and, in sharp contrast with classical programs, treewidth-based algorithms can be easily applied for interprocedural analysis. Finally, we supplement our theoretical results with experiments using a tool we implemented for computing the treewidth of smart contracts, and show that the treewidth is much lower in practice. We use 36,764 real-world Ethereum smart contracts as benchmarks and find that they have an average treewidth of at most 3.35 in the intraprocedural case and 3.65 in the interprocedural case.
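
The treewidth bounds above are stated over tree decompositions, which must satisfy three axioms: every vertex lies in some bag, every edge is contained in some bag, and the bags holding any fixed vertex form a connected subtree. As an illustrative sketch (not the paper's tool; the CFG, bag contents and names are invented for the example), the following checks these axioms for a diamond-shaped control flow graph:

```python
from collections import defaultdict, deque

def is_valid_tree_decomposition(graph_edges, bags, tree_edges):
    """Check the three tree-decomposition axioms for an undirected graph.

    graph_edges: iterable of (u, v) vertex pairs
    bags:        dict mapping bag id -> set of vertices
    tree_edges:  iterable of (i, j) bag-id pairs forming a tree
    """
    vertices = {v for e in graph_edges for v in e}
    # Axiom 1: every vertex appears in at least one bag.
    if not vertices <= set().union(*bags.values()):
        return False
    # Axiom 2: every edge is contained in some bag.
    for u, v in graph_edges:
        if not any({u, v} <= b for b in bags.values()):
            return False
    # Axiom 3: for each vertex, the bags containing it induce a connected
    # subtree of the decomposition tree (checked by BFS restricted to them).
    adj = defaultdict(set)
    for i, j in tree_edges:
        adj[i].add(j)
        adj[j].add(i)
    for v in vertices:
        holding = {i for i, b in bags.items() if v in b}
        start = next(iter(holding))
        seen, queue = {start}, deque([start])
        while queue:
            i = queue.popleft()
            for j in adj[i] & holding:
                if j not in seen:
                    seen.add(j)
                    queue.append(j)
        if seen != holding:
            return False
    return True

def width(bags):
    # Width of a decomposition = largest bag size minus one.
    return max(len(b) for b in bags.values()) - 1

# A diamond-shaped CFG: entry -> a, entry -> b, a -> exit, b -> exit.
cfg = [("entry", "a"), ("entry", "b"), ("a", "exit"), ("b", "exit")]
bags = {0: {"entry", "a", "exit"}, 1: {"entry", "b", "exit"}}
tree = [(0, 1)]
```

The decomposition shown has width 2, in line with the small average widths the paper reports for real contracts.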

    An Efficient Algorithm for Computing Network Reliability in Small Treewidth

    We consider the classic problem of Network Reliability. A network is given together with a source vertex, one or more target vertices, and probabilities assigned to each of the edges. Each edge appears in the network with its associated probability, and the problem is to determine the probability of having at least one source-to-target path. This problem is known to be NP-hard. We present a linear-time fixed-parameter algorithm based on a parameter called treewidth, which is a measure of tree-likeness of graphs. Network Reliability was already known to be solvable in polynomial time for bounded treewidth, but there were no concrete algorithms, and the known methods used complicated structures and were not easy to implement. We provide a significantly simpler and more intuitive algorithm that is much easier to implement. We also report on an implementation of our algorithm and establish the applicability of our approach by providing experimental results on the graphs of subway and transit systems of several major cities, such as London and Tokyo. To the best of our knowledge, this is the first exact algorithm for Network Reliability that can scale to handle real-world instances of the problem.
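
For intuition about the problem itself (not the paper's treewidth-based algorithm), reliability can be computed exactly on tiny instances by enumerating all 2^m edge outcomes; the example network below is invented:

```python
from itertools import product

def reliability(edges, source, target):
    """Exact network reliability by enumerating all 2^m edge outcomes.

    edges: list of (u, v, p) where edge u-v is present with probability p.
    Exponential in the number of edges -- suitable only for tiny
    sanity-check instances.
    """
    total = 0.0
    for outcome in product([True, False], repeat=len(edges)):
        prob = 1.0
        present = []
        for (u, v, p), kept in zip(edges, outcome):
            prob *= p if kept else 1.0 - p
            if kept:
                present.append((u, v))
        # Undirected reachability check from the source.
        reached, frontier = {source}, [source]
        while frontier:
            x = frontier.pop()
            for u, v in present:
                for a, b in ((u, v), (v, u)):
                    if a == x and b not in reached:
                        reached.add(b)
                        frontier.append(b)
        if target in reached:
            total += prob
    return total

# Series-parallel example: a direct edge s-t plus a two-hop path s-a-t.
net = [("s", "t", 0.5), ("s", "a", 0.9), ("a", "t", 0.9)]
```

On this series-parallel example the answer is 1 - (1 - 0.5)(1 - 0.9*0.9) = 0.905; the enumeration is exponential in m, which is precisely the blow-up a fixed-parameter algorithm in the treewidth avoids.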

    Faster Algorithms for Quantitative Analysis of Markov Chains and Markov Decision Processes with Small Treewidth

    Discrete-time Markov Chains (MCs) and Markov Decision Processes (MDPs) are two standard formalisms in system analysis. Their main associated quantitative objectives are hitting probabilities, discounted sum, and mean payoff. Although there are many techniques for computing these objectives in general MCs/MDPs, they have not been thoroughly studied in terms of parameterized algorithms, particularly when treewidth is used as the parameter. This is in sharp contrast to qualitative objectives for MCs, MDPs and graph games, for which treewidth-based algorithms yield significant complexity improvements. In this work, we show that treewidth can also be used to obtain faster algorithms for the quantitative problems. For an MC with n states and m transitions, we show that each of the classical quantitative objectives can be computed in O((n+m)⋅t²) time, given a tree decomposition of the MC that has width t. Our results also imply a bound of O(κ⋅(n+m)⋅t²) for each objective on MDPs, where κ is the number of strategy-iteration refinements required for the given input and objective. Finally, we make an experimental evaluation of our new algorithms on low-treewidth MCs and MDPs obtained from the DaCapo benchmark suite. Our experimental results show that on MCs and MDPs with small treewidth, our algorithms outperform existing well-established methods by one or more orders of magnitude.
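
As a baseline for the hitting-probabilities objective mentioned above (a toy sketch, not the paper's treewidth-based algorithm; the chain is invented), one can run fixed-point iteration on h(s) = Σ_t P(s,t)·h(t), with h fixed to 1 on the target set:

```python
def hitting_probabilities(transitions, targets, iters=200):
    """Hitting probabilities of a target set in a discrete-time Markov chain,
    computed by fixed-point (value) iteration on
        h(s) = 1                         if s in targets
        h(s) = sum_t P(s, t) * h(t)      otherwise.

    transitions: dict state -> dict of successor -> probability.
    """
    h = {s: (1.0 if s in targets else 0.0) for s in transitions}
    for _ in range(iters):
        h = {
            s: 1.0 if s in targets else sum(p * h[t] for t, p in succ.items())
            for s, succ in transitions.items()
        }
    return h

# From state "a": reach "goal" directly (0.5) or via "b" (0.5 * 0.5).
mc = {
    "a":    {"goal": 0.5, "b": 0.5},
    "b":    {"goal": 0.5, "fail": 0.5},
    "goal": {"goal": 1.0},
    "fail": {"fail": 1.0},
}
```

Here h("a") = 0.5 + 0.5·0.5 = 0.75. Each iteration touches every transition once, i.e. O(n+m) per sweep; the treewidth-based algorithms instead exploit a tree decomposition of the chain's graph to solve the underlying linear system directly.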

    Faster Algorithms for Dynamic Algebraic Queries in Basic RSMs with Constant Treewidth

    Interprocedural analysis is at the heart of numerous applications in programming languages, such as alias analysis, constant propagation, and so on. Recursive state machines (RSMs) are standard models for interprocedural analysis. We consider a general framework with RSMs where the transitions are labeled from a semiring and path properties are algebraic with semiring operations. RSMs with algebraic path properties can model interprocedural dataflow analysis problems, the shortest path problem, the most probable path problem, and so on. The traditional algorithms for interprocedural analysis focus on path properties where the starting point is fixed as the entry point of a specific method. In this work, we consider possibly multiple queries, as required in many applications such as alias analysis. The study of multiple queries allows us to bring in an important algorithmic distinction between the resource usage of the one-time preprocessing and that of each individual query. The second aspect we consider is that the control flow graphs of most programs have constant treewidth. Our main contributions are simple and implementable algorithms that support multiple queries for algebraic path properties for RSMs that have constant treewidth. Our theoretical results show that our algorithms require a small additional one-time preprocessing but can answer subsequent queries significantly faster than the current algorithmic solutions for interprocedural dataflow analysis. We have also implemented our algorithms and evaluated their performance for on-demand interprocedural dataflow analysis on various domains, such as live-variable analysis and reaching definitions, on a standard benchmark set. Our experimental results align with our theoretical statements and show that, after a lightweight preprocessing, on-demand queries are answered much faster than with the standard existing algorithmic approaches.
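
The semiring view can be made concrete with a Floyd-Warshall-style closure parameterized by the semiring's ⊕ and ⊗ (a sketch of the algebraic-path idea on a plain graph, not on RSMs, which additionally carry call/return structure; the graphs and weights are invented):

```python
def algebraic_closure(n, edges, plus, times, zero, one):
    """All-pairs algebraic path values over a semiring (plus, times, zero, one)
    via a Floyd-Warshall-style closure. Assumes the semiring is such that the
    closure converges (true for the two examples below).

    edges: dict (u, v) -> label in the semiring, over vertices 0..n-1.
    """
    d = [[one if i == j else edges.get((i, j), zero) for j in range(n)]
         for i in range(n)]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                # Combine the best value so far with any path routed through k.
                d[i][j] = plus(d[i][j], times(d[i][k], d[k][j]))
    return d

# Tropical semiring (min, +): shortest path lengths.
w = {(0, 1): 2.0, (1, 2): 3.0, (0, 2): 10.0}
dist = algebraic_closure(3, w, min, lambda a, b: a + b, float("inf"), 0.0)

# Viterbi semiring (max, *): probability of the most probable path.
p = {(0, 1): 0.5, (1, 2): 0.5, (0, 2): 0.2}
best = algebraic_closure(3, p, max, lambda a, b: a * b, 0.0, 1.0)
```

Instantiating (⊕, ⊗) = (min, +) yields shortest paths and (max, ×) yields most probable paths, the two examples named in the abstract; dataflow lattices fit the same interface.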

    Development of a Framework for CPS Open Standards and Platforms

    This technical report describes a Framework we have developed through our research and investigations in this project, with the goal of facilitating the creation of Open Standards and Platforms for CPS, a task that addresses a critical mission for NIST. The rapid development of information technology (in terms of processing power, embedded hardware and software systems, comprehensive IT management systems, networking and Internet growth, system design environments) is producing an increasing number of applications and opening new doors. In addition, over the last decade we have entered a new era in which system complexity has increased dramatically. Complexity is increased both by the number of components included in each system and by the dependencies between those components. Increasingly, systems tend to be more software-dependent, and that is a major challenge for the engineers involved in the development of such systems. The challenge is even greater when a safety-critical system is considered, like an airplane or a passenger car. Software-intensive systems and devices have become everyday consumables. There is a need for the development of software that is provably error-free. Thanks to their multifaceted support for networking and inclusion of data and services from global networks, systems are evolving to form integrated, overarching solutions that are increasingly penetrating all areas of life and work. When software-dependent systems interact with the physical environment, we have the class of cyber-physical systems (CPS) [1, 2]. The challenge in CPS is to incorporate the inputs (and their characteristics and constraints) from the physical components in the logic of the cyber components (hardware and software). CPS are engineered systems constructed as networked interactions of physical and computational (cyber) components.
In CPS, computation and communication are deeply embedded in and interact with physical processes, and add new capabilities to physical systems. Competitive pressure and societal needs drive industry to design and deploy airplanes and cars that are more energy-efficient and safe, medical devices and systems that are more dependable, and defense systems that are more autonomous and secure. Whole industrial sectors are transformed by new product lines that are CPS-based. Modern CPS are not simply the connection of two different kinds of components engineered by means of distinct design technologies, but rather a new system category that is both physical and computational [1, 2]. Current industrial experience tells us that, in fact, we have reached the limits of our knowledge of how to combine computers and physical systems. The shortcomings range from technical limitations in the foundations of cyber-physical systems to the way we organize our industries and educate the engineers and scientists who support cyber-physical system design. If we continue to build systems using our very limited methods and tools while lacking the science and technology foundations, we will create significant risks, produce failures and suffer loss of market. Nowadays, with increasing frequency, we observe systems that cooperate to achieve a common goal, even though they were not built for that purpose. These are called systems of systems. For example, the Global Positioning System (GPS) is a system by itself. However, it needs to cooperate with other systems when the air traffic control system of systems is under consideration. The analysis and development of such systems should be done carefully because of the emergent behavior that systems exhibit when they are coupled with other systems. However, apart from the increasing complexity and the other technical challenges, there is a need to decrease time-to-market for new systems as well as the associated costs.
This specific trend and the associated requirements, which are an outcome of global competitiveness, are expected to continue and become even more stringent. If a successful contribution is to be made in shaping this change, the revolutionary potential of CPS must be recognized and incorporated into internal development processes at an early stage. To that end, interoperability and integratability of CPS are critical. In this Task we have developed a Framework to facilitate interoperability and integratability of CPS via Open Standards and Platforms. The purpose of this technical report is to introduce this Framework and its critical components, to provide various instantiations of it, and to describe initial successful applications of it in various important classes of CPS. An additional goal of publishing this technical report is to solicit feedback on the proposed Framework, and to catalyze discussions and interactions in the broader CPS technical community towards improving and strengthening this Framework. CPS integrate data and services from different systems which were developed independently and with disparate objectives, thereby enabling new functionalities and benefits. Currently there is a lack of well-defined interfaces that, on the one hand, define the standards for the form and content of the data being exchanged and, on the other hand, take account of non-functional aspects of this data, such as differing levels of data quality or reliability. A similar situation exists with respect to tools and synthesis environments, although some work has been initiated in the latter. The technological prerequisite for the design of the aforementioned various functions and value-added services of CPS is the interoperability and integratability of these systems, as well as their capability to be adapted flexibly and application-specifically, and to be extended, at the different levels of abstraction.
Depending on the objective and scope of the application, it may be necessary to integrate component functions (Embedded Systems (ES), System of Systems (SoS), CPS), to establish communication and interfaces, and to ensure the required level of quality of interaction and also of the overall system behavior. This requires cross-domain concepts for architecture, communication and compatibility at all levels. The effects of these factors on existing or yet-undeveloped systems and architectures represent a major challenge. Investigation into these factors is the objective of current national and international studies and research projects. CPS create core technological challenges for traditional system architectures, especially because of their high degree of connectivity. This is because CPS are not constructed for one specific purpose or function, but rather are open for many different services and processes, and must therefore be adaptable. In view of their evolutionary nature, they are only controllable to a limited extent. This creates new demands for greater interoperability and communication within CPS that cannot be met by current closed systems. In particular, the differences in the characteristics of embedded systems in relation to IT systems, and of services and data in networks, lead to outstanding questions regarding the form of architectures, the definition of system and communication interfaces, and the requirements for underlying CPS platforms with basic services and parallel architectures at different levels of abstraction. The technological developments underlying CPS evolution require the development of standards in the individual application domains, as well as basic infrastructure investments that cannot be borne by individual companies alone. This is particularly significant for SMEs. The development and operation of uniform platforms to migrate individual services and products will therefore be as much of a challenge as joint specification standards.
The creation of such quasi-standards, less in the traditional mold of classic industry norms and standards and more in the sense of de facto standards that become established on the basis of technological and market dominance, will become an essential part of technological and market leadership. To summarize and emphasize, the complexity of the subject in terms of the required technologies and capabilities of CPS, as well as the capabilities and competences required to develop, control and design/create innovative, usable CPS applications, demands fundamentally integrated action, interdisciplinarity (research and development, economy and society) and vertical and horizontal efforts in: the creation of open, cross-domain platforms with fundamental services (communication, networking, interoperability) and architectures (including domain-specific architectures); the complementary expansion and integration of application fields and environments with vertical experimentation platforms and correspondingly integrated interdisciplinary efforts; and the systematic enhancement with respect to methods and technologies across all involved disciplines to create innovative CPS. The aim of our research and investigations under this Task of the project was precisely to clarify these objectives and systematically develop detailed recommendations for action. Our research and investigations have identified the following essential and fundamental challenges for the modeling, design, synthesis and manufacturing of CPS: (i) The creation and demonstration of a framework for developing cross-domain integrated modeling hubs for CPS. (ii) The creation and demonstration of a framework for linking the integrated CPS modeling hub of (i) with powerful and diverse tradeoff analysis methods and tools for design exploration for CPS.
(iii) The creation of a framework for linking the integrated CPS synthesis environment of (i) and (ii) with databases of modular component and process (manufacturing) models, backwards compatible with earlier legacy systems. (iv) The creation of a framework for translating textual requirements to mathematical representations as constraints, rules and metrics involving both logical and numerical variables, and the automatic (at least to 75%) allocation of the resulting specifications to components of the CPS and of processes, in a way that allows traceability. These challenges have been listed here in order of increasing difficulty, both conceptually and in terms of arriving at implementable solutions. The order also reflects the extent to which the current state of affairs has made progress towards developing at least some initial instantiations of the desired frameworks. In this context, it is useful to compare with the advanced state of development of similar frameworks and their instantiations for the synthesis and manufacturing of complex microelectronic VLSI chips, including distributed ones, which have been available as integrated tools from several vendors for at least a decade. Regarding challenge (i), we have performed extensive work and research in this project towards developing model-based systems engineering (MBSE) procedures for the design, integration, testing and operational management of cyber-physical systems, that is, physical systems with cyber potentially embedded in every physical component. Thus, in the Framework described in this report for standards for integrated modeling hubs for CPS, MBSE methods and tools are prominent. Regarding the search for a framework for standards for CPS, this selection has the additional advantage that it is also emerging as an accepted framework for systems engineering across all industry sectors with substantial interest in CPS [3, 7].
Regarding challenge (ii), we have performed extensive work and research in this project towards developing the foundations for such an integration, and we have developed and demonstrated the first-ever integration of a powerful tradeoff analysis tool (and methodology) with our SysML-integrated system modeling environments for CPS synthesis [3, 7]. The primary applications in which we have instantiated this framework are: microgrids and power grids, wireless sensor networks (WSN) and applications to the Smart Grid, energy-efficient buildings, microrobotics and collaborative robotics, and the overarching (for all these applications) security and trust issues, including our pioneering and innovative work on compositional security systems. A key concept here is the integration of multi-criteria, multi-constraint optimization with constraint-based reasoning. Regarding challenge (iii), we have only developed the conceptual Framework, as any required instantiations will require substantial commercial-grade software development beyond the scope of this project. It is clear, however, that object-relational databases and database mediators (for both data and semantics) will have to be employed. Regarding challenge (iv), we have developed a Framework for checking and validating specifications after they have been translated to their mathematical representations as constraints and metrics with logical and numerical variables. Various multi-criteria optimization, constraint-based reasoning, model checking and automatic theorem proving tools will have to be combined. The automatic annotation of the system blocks with requirements and parameter specifications remains an open challenge. Research supported in part by Cooperative Agreement NIST 70NANB11H148 to the University of Maryland, College Park.

    Liquidity in Credit Networks with Constrained Agents

    In order to scale transaction rates for deployment across the global web, many cryptocurrencies have deployed so-called "Layer-2" networks of private payment channels. An idealized payment network behaves like a Credit Network, a model for transactions across a network of bilateral trust relationships. Credit Networks capture many aspects of traditional currencies as well as new virtual currencies and payment mechanisms. In the traditional credit network model, if an agent defaults, every other node that trusted it is vulnerable to loss. In a cryptocurrency context, trust is manufactured by capital deposits, and thus there arises a natural tradeoff between network liquidity (i.e., the fraction of transactions that succeed) and the cost of capital deposits. In this paper, we introduce constraints that bound the total amount of loss that the rest of the network can suffer if an agent (or a set of agents) were to default; equivalently, we study how the network changes if agents can support limited solvency guarantees. We show that these constraints preserve the analytical structure of a credit network. Furthermore, we show that aggregate borrowing constraints greatly simplify the network structure and, in the payment network context, achieve the optimal tradeoff between liquidity and the amount of escrowed capital. Comment: To be published in TheWebConf 202
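
The basic credit-network mechanics can be sketched as follows (a toy model without the paper's default-loss constraints; agent names and credit limits are invented): a payment shifts credit along a chain of trust, consuming credit in the direction of payment and freeing it in the reverse direction.

```python
from collections import deque

def route_payment(credit, payer, payee, amount):
    """Attempt a payment in a toy credit network.

    credit: dict (u, v) -> remaining credit that v extends to u, i.e. how
    much u may still owe v. The payment needs a chain of trust in which
    every hop (u, v) has credit[(u, v)] >= amount.
    """
    # Breadth-first search for a chain with enough residual credit per hop.
    parent, queue = {payer: None}, deque([payer])
    while queue:
        u = queue.popleft()
        if u == payee:
            break
        for (a, b), c in credit.items():
            if a == u and b not in parent and c >= amount:
                parent[b] = u
                queue.append(b)
    if payee not in parent:
        return False  # transaction fails: no sufficiently liquid chain
    # Shift credit along the chain: each hop uses up trust in the payment
    # direction and frees the same amount in the reverse direction.
    v = payee
    while parent[v] is not None:
        u = parent[v]
        credit[(u, v)] -= amount
        credit[(v, u)] = credit.get((v, u), 0) + amount
        v = u
    return True

# Bob extends Alice 10 units of credit; Carol extends Bob 10 units.
net = {("alice", "bob"): 10, ("bob", "carol"): 10}
```

For instance, after route_payment(net, "alice", "carol", 6) succeeds, only 4 units of Alice-to-Bob credit remain, so a second payment of 6 fails: exactly the liquidity phenomenon (fraction of succeeding transactions) that the paper quantifies.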

    19th SC@RUG 2022 proceedings 2021-2022
