
    Summary-based inference of quantitative bounds of live heap objects

    This article presents a symbolic static analysis for computing parametric upper bounds on the number of simultaneously live objects in sequential Java-like programs. Inferring the peak amount of irreclaimable objects is the cornerstone for analyzing the potential heap-memory consumption of stand-alone applications or libraries. The analysis builds method-level summaries quantifying the peak number of live objects and the number of escaping objects. Summaries are built by resorting to the summaries of their callees. The usability, scalability and precision of the technique are validated by successfully predicting the object heap usage of a medium-size, real-life application which is significantly larger than other previously reported case studies.
    Authors: Braberman, Victor Adrian (Universidad de Buenos Aires, Facultad de Ciencias Exactas y Naturales, Departamento de Computación, Argentina; Consejo Nacional de Investigaciones Científicas y Técnicas, Argentina); Garbervetsky, Diego David (Universidad de Buenos Aires, Facultad de Ciencias Exactas y Naturales, Departamento de Computación, Argentina; Consejo Nacional de Investigaciones Científicas y Técnicas, Argentina); Hym, Samuel (Université Lille 3, France); Yovine, Sergio Fabian (Universidad de Buenos Aires, Facultad de Ciencias Exactas y Naturales, Departamento de Computación, Argentina; Consejo Nacional de Investigaciones Científicas y Técnicas, Argentina)
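
    As a rough illustration of the compositional idea (a minimal Python sketch under assumed names, not the authors' analysis, and using concrete counts where the real technique infers symbolic polynomials): each method summary records the peak number of simultaneously live objects and the number of objects that escape to the caller, and a caller's summary is assembled from its own allocations plus the summaries of its callees.

        # Hypothetical sketch of composing per-method memory summaries.
        # peak   = maximum number of simultaneously live objects during the call
        # escape = objects that outlive the call (returned or stored elsewhere)
        from dataclasses import dataclass

        @dataclass
        class Summary:
            peak: int
            escape: int

        def summarize(own_allocs: int, callees: list[Summary]) -> Summary:
            """Build a caller's summary from its own allocations and its callees."""
            captured = 0          # callee objects the caller has kept alive so far
            peak = own_allocs
            for s in callees:
                # While a callee runs, its peak is live on top of what the caller holds.
                peak = max(peak, own_allocs + captured + s.peak)
                captured += s.escape
            # For the sketch, assume nothing escapes from the caller itself.
            return Summary(peak=peak, escape=0)

        leaf = Summary(peak=3, escape=1)
        print(summarize(2, [leaf, leaf]))   # Summary(peak=6, escape=0)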

    Soft Contract Verification

    Behavioral software contracts are a widely used mechanism for governing the flow of values between components. However, run-time monitoring and enforcement of contracts impose significant overhead and delay the discovery of faulty components until run time. To overcome these issues, we present soft contract verification, which aims to statically prove either complete or partial contract correctness of components written in an untyped, higher-order language with first-class contracts. Our approach uses higher-order symbolic execution, leveraging contracts as a source of symbolic values, including unknown behavioral values, and employs an updatable heap of contract invariants to reason about flow-sensitive facts. We prove that the symbolic execution soundly approximates the dynamic semantics and that verified programs can't be blamed. The approach is able to analyze first-class contracts, recursive data structures, unknown functions, and control-flow-sensitive refinements of values, which are all idiomatic in dynamic languages. It makes effective use of an off-the-shelf solver to decide problems without heavy encodings. The approach is competitive with a wide range of existing tools---including type systems, flow analyzers, and model checkers---on their own benchmarks.
    Comment: ICFP '14, September 1-6, 2014, Gothenburg, Sweden
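
    The flow-sensitive refinement that the abstract describes can be pictured with a toy sketch (Python instead of the paper's untyped higher-order setting; all names are assumptions): a symbolic value remembers which contracts it is already known to satisfy, so a repeated check can be discharged statically instead of being monitored at run time.

        # Toy illustration (assumed API): a symbolic value accumulates the contracts
        # it is known to satisfy, so later identical checks can be proved away.
        class SymbolicValue:
            def __init__(self, name):
                self.name = name
                self.known = set()      # contracts this value is known to satisfy

        def check(value, contract_name):
            """Discharge the check if the contract is already known, else monitor."""
            if contract_name in value.known:
                return "verified statically"
            value.known.add(contract_name)   # flow-sensitive refinement of the value
            return "needs run-time monitor"

        x = SymbolicValue("x")
        print(check(x, "int?"))   # needs run-time monitor (first encounter)
        print(check(x, "int?"))   # verified statically (already refined by int?)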

    Polynomial Size Analysis of First-Order Shapely Functions

    We present a size-aware type system for first-order shapely function definitions. Here, a function definition is called shapely when the size of the result is determined exactly by a polynomial in the sizes of the arguments. Examples of shapely function definitions include implementations of matrix multiplication and the Cartesian product of two lists. The type system is proved to be sound w.r.t. the operational semantics of the language. The type checking problem is shown to be undecidable in general. We define a natural syntactic restriction under which type checking becomes decidable, even though size polynomials are not necessarily linear or monotonic. Furthermore, we show that the type-inference problem is at least semi-decidable (under this restriction). We have implemented a procedure that combines run-time testing and type checking to automatically obtain size dependencies. It terminates on total typable function definitions.
    Comment: 35 pages, 1 figure
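
    To make the notion concrete (a hedged Python illustration rather than the paper's first-order language): the Cartesian product mentioned above is shapely because the size of its result is exactly the polynomial |xs| * |ys| in the sizes of its inputs, whatever the element values are.

        # Cartesian product: a "shapely" function whose output size is exactly
        # the polynomial len(xs) * len(ys), independent of the element values.
        def cartesian(xs, ys):
            return [(x, y) for x in xs for y in ys]

        xs, ys = [1, 2, 3], ["a", "b"]
        assert len(cartesian(xs, ys)) == len(xs) * len(ys)   # 3 * 2 == 6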

    Combining Static and Dynamic Contract Checking for Curry

    Static type systems are usually not sufficient to express all requirements on function calls. Hence, contracts with pre- and postconditions can be used to express more complex constraints on operations. Contracts can be checked at run time to ensure that operations are only invoked with reasonable arguments and return the intended results. Although such dynamic contract checking provides more reliable program execution, it requires execution time and could lead to program crashes that might be detected with more advanced methods at compile time. To improve this situation for declarative languages, we present an approach to combine static and dynamic contract checking for the functional logic language Curry. Based on a formal model of contract checking for functional logic programming, we propose an automatic method to verify contracts at compile time. If a contract is successfully verified, its dynamic checking can be omitted. This method decreases execution time without degrading reliable program execution. In the best case, when all contracts are statically verified, it provides trust in the software, since crashes due to contract violations cannot occur during program execution.
    Comment: Pre-proceedings paper presented at the 27th International Symposium on Logic-Based Program Synthesis and Transformation (LOPSTR 2017), Namur, Belgium, 10-12 October 2017 (arXiv:1708.07854)
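
    The general pattern can be sketched as follows (in Python rather than Curry; the decorator and flag are assumptions, not the paper's tooling): a contract wraps an operation with pre- and postcondition checks, and a contract that the verifier has discharged at compile time is simply not monitored at run time.

        # Sketch of combined static/dynamic contract checking (assumed names).
        # A contract proved at compile time skips its run-time checks entirely.
        def with_contract(pre, post, statically_verified=False):
            def wrap(f):
                def checked(*args):
                    if not statically_verified:
                        assert pre(*args), "precondition violated"
                    result = f(*args)
                    if not statically_verified:
                        assert post(result, *args), "postcondition violated"
                    return result
                return checked
            return wrap

        @with_contract(pre=lambda n: n >= 0,
                       post=lambda r, n: r >= 1,
                       statically_verified=True)   # proved, so no run-time cost
        def factorial(n):
            return 1 if n == 0 else n * factorial(n - 1)

        print(factorial(5))   # 120, executed without dynamic contract checks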

    Symbolic and analytic techniques for resource analysis of Java bytecode

    Recent work in resource analysis has translated the idea of amortised resource analysis to imperative languages using a program logic that allows mixing of assertions about heap shapes, in the tradition of separation logic, and assertions about consumable resources. Separately, polyhedral methods have been used to calculate bounds on the numbers of iterations in loop-based programs. We are attempting to combine these ideas to deal with Java programs involving both data structures and loops, focusing on the bytecode level rather than on source code.
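
    As a rough sketch of how the two ingredients could fit together (illustrative Python, not the authors' program logic): a polyhedral-style analysis supplies a symbolic bound on the number of loop iterations, and multiplying it by the per-iteration allocation cost gives a bound on the heap consumed by the loop.

        # Illustrative only: combine a loop-iteration bound (as polyhedral methods
        # would provide) with a per-iteration allocation count to bound heap usage.
        def loop_heap_bound(iteration_bound: str, allocs_per_iteration: int) -> str:
            """Symbolic upper bound on objects allocated by one loop."""
            return f"{allocs_per_iteration} * ({iteration_bound})"

        # e.g. for:  for (int i = 0; i < n; i++) { list.add(new Item()); }
        print(loop_heap_bound("n", 1))        # 1 * (n)
        print(loop_heap_bound("n * m", 2))    # nested loops: 2 * (n * m)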

    Avoiding the Global Sort: A Faster Contour Tree Algorithm

    We revisit the classical problem of computing the \emph{contour tree} of a scalar field $f:\mathbb{M} \to \mathbb{R}$, where $\mathbb{M}$ is a triangulated simplicial mesh in $\mathbb{R}^d$. The contour tree is a fundamental topological structure that tracks the evolution of level sets of $f$ and has numerous applications in data analysis and visualization. All existing algorithms begin with a global sort of at least all critical values of $f$, which can require (roughly) $\Omega(n \log n)$ time. Existing lower bounds show that there are pathological instances where this sort is required. We present the first algorithm whose time complexity depends on the contour tree structure, and which avoids the global sort for non-pathological inputs. If $C$ denotes the set of critical points in $\mathbb{M}$, the running time is roughly $O(\sum_{v \in C} \log \ell_v)$, where $\ell_v$ is the depth of $v$ in the contour tree. This matches all existing upper bounds, but is a significant improvement when the contour tree is short and fat. Specifically, our approach ensures that any comparison made is between nodes in the same descending path in the contour tree, allowing us to argue strong optimality properties of our algorithm. Our algorithm requires several novel ideas: partitioning $\mathbb{M}$ into well-behaved portions, a local growing procedure to iteratively build contour trees, and the use of heavy path decompositions for the time complexity analysis.
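
    A small numerical illustration (not the paper's algorithm; the tree shape is an assumption) of why the structure-sensitive bound wins on "short and fat" contour trees: when almost every critical point sits at small depth, the sum of log depths is far below the n log n cost of a global sort.

        import math

        # Compare the global-sort cost ~ n*log2(n) with the structure-sensitive
        # bound ~ sum of log2(depth) for a hypothetical short-and-fat tree in
        # which nearly every node has depth 2.
        n = 1_000_000
        depths = [2] * (n - 1) + [1]

        global_sort = n * math.log2(n)
        structure_bound = sum(math.log2(d) for d in depths)

        print(f"n log n        ~ {global_sort:,.0f}")       # ~ 19,931,569
        print(f"sum log depth  ~ {structure_bound:,.0f}")   # ~ 999,999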

    Modelling manure NPK flows in organic farming systems to minimise nitrate leaching, ammonia volatilization and nitrous oxide emissions (OF0197)

    Manure is an important source of organic matter and nutrients in organic farming systems, principally nitrogen (N), phosphorus (P) and potassium (K). Careful management is required during storage, handling and land-spreading to (a) ensure the most efficient use of the nutrients in the farming system and (b) limit emissions of nitrate (NO3), ammonia (NH3), nitrous oxide (N2O), methane (CH4) and P to the wider environment. With a likely increase in the organically farmed area, information is needed on best practices for manure management in organic systems to minimise the environmental impacts of these systems.
    The aim was that the software would calculate the NPK fluxes associated with each aspect of the livestock system, and provide options to explore the impact of management change at key stages in the manure management process. The end point was to be a working prototype model/decision support system (DSS) which could be demonstrated to a group of organic farmers and used for discussion of the NPK flows in their systems. Most of the effort in this short-term project was spent on three aspects:
    1. Developing databases and the underlying model calculations.
    2. Developing the software for the prototype system.
    3. Limited validation of the output.
    The two main challenges in the project were (a) allowing a quick and easy representation of the manure management system, which is often complex, and (b) being able to represent complex interactions simply but robustly. The Manure Model (MANMOD) DSS was developed to allow an iconographic model representation of an individual farm's manure management system to be readily constructed from a library of system components using a 'drag and drop' operation. This allows the user to construct a diagram of connected components or ‘nodes’ (e.g. manure source, housing system, storage system) which direct and limit the flow pathway of nutrients through the farming system. Each component or node represents a key stage of the system. Once the system has been constructed, pressing the calculation button computes the following quantities for each component of the system: output (i.e. the amounts of N, P and K that will be transferred from that component of the system to the next); balance (i.e. the amount residing in that component of the system); and losses (gaseous and ‘leachate’), as sketched below.
    Workshops were held at the start and end of the project. The following observations were made as a result of this exercise:
    - The approach is a relatively quick and simple way of constructing manure management systems. However, it is still quite complex, given the complexity of many management systems.
    - It may be a better tool for advisers, who can use it for several clients and become more familiar with it, than for a farmer who might use it as a one-off during planning.
    - Even at its simplest, some detailed information is required, and in units that the farmer may not be familiar with, for example the washdown volume for the hardstanding or the amount of straw (kg/animal/month). However, this is not really a reason for not pursuing this information if it will provide an improvement in management.
    - One valuable feature is the option to test scenarios. However, this relies on the model being sufficiently refined to represent fairly how the system responds to changes.
    The aim of the project was to produce a prototype system. We have done this, but because of the complexity of the systems we are trying to represent, we recognise that much more detailed validation of the model is required before it can be disseminated. There are now several Defra-funded studies that could be used in the next phase of the work. (A more detailed summary is available at the start of the main report.)
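
    The per-node calculation described above can be pictured with a minimal sketch (assumed structure and illustrative fractions, not the MANMOD implementation): each component loses a fraction of the N, P and K it receives, retains a fraction as its balance, and transfers the remainder to the next node in the chain.

        # Minimal sketch (not MANMOD itself) of a node-based NPK mass balance:
        # each component loses a fraction of each nutrient, retains a fraction,
        # and transfers the rest to the next component in the chain.
        def run_chain(inputs, nodes):
            flow = dict(inputs)                   # kg of N, P, K entering the chain
            report = []
            for name, loss_frac, retain_frac in nodes:
                losses  = {k: v * loss_frac[k]   for k, v in flow.items()}
                balance = {k: v * retain_frac[k] for k, v in flow.items()}
                output  = {k: flow[k] - losses[k] - balance[k] for k in flow}
                report.append((name, output, balance, losses))
                flow = output                     # output feeds the next node
            return report

        # Illustrative fractions only, e.g. housing loses 10% of N as ammonia.
        nodes = [
            ("housing", {"N": 0.10, "P": 0.0, "K": 0.0},
                        {"N": 0.05, "P": 0.05, "K": 0.05}),
            ("storage", {"N": 0.20, "P": 0.0, "K": 0.02},
                        {"N": 0.00, "P": 0.00, "K": 0.00}),
        ]
        for name, out, bal, loss in run_chain({"N": 100.0, "P": 30.0, "K": 80.0}, nodes):
            print(name, "output:", out, "balance:", bal, "losses:", loss)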