109,567 research outputs found

    Safety Analysis versus Type Inference

    Safety analysis is an algorithm for determining if a term in the untyped lambda calculus with constants is safe, i.e., if it does not cause an error during evaluation. This ambition is also shared by algorithms for type inference. Safety analysis and type inference are based on rather different perspectives, however: safety analysis is based on closure analysis, whereas type inference attempts to assign a type to all subterms. In this paper we prove that safety analysis is sound, relative to both a strict and a lazy operational semantics, and superior to type inference, in the sense that it accepts strictly more safe lambda terms. The latter result may indicate the relative potentials of static program analyses based on closure analysis and type inference, respectively.
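    To make the gap between the two analyses concrete: self-application is the classic example of a term that evaluates safely but is rejected by simple type inference, because x in x x would need a type a with a = a -> b, which fails the occurs check. A minimal sketch in Python, which is untyped in the same sense as the lambda calculus with constants (the example term is ours, not one from the paper):

        # (lambda x. x x) applied to the identity: runs without error, yet
        # simple type inference rejects it, since x would need a type a
        # satisfying a = a -> b, which fails the occurs check.
        self_apply = lambda x: x(x)
        identity = lambda y: y

        print(self_apply(identity)(42))  # prints 42; evaluation is safe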

    Inferring Types to Eliminate Ownership Checks in an Intentional JavaScript Compiler

    Concurrent programs are notoriously difficult to develop due to the non-deterministic nature of thread scheduling, so a programming language that makes such development easier is desirable. Tscript is such a system: an extension of JavaScript that provides multithreading support along with intent specification. Intents allow a programmer to specify how parts of the program interact in a multithreaded context. However, enforcing intents requires run-time memory checks, which can be inefficient. This thesis implements an optimization in the Tscript compiler that addresses this inefficiency through static analysis. Our approach utilizes both type inference and dataflow analysis to eliminate unnecessary run-time checks.
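    The abstract does not spell out the optimization itself, but the general shape of such a pass is standard: a forward analysis tracks which references have already been ownership-checked along an execution path, so later, dominated checks can be dropped. A hypothetical sketch over a toy IR (the operation names and IR are invented for illustration; Tscript's actual intent checks may differ):

        # Toy basic-block IR: ("check", v) verifies thread ownership of v at
        # run time, ("use", v) reads v, ("assign", v) rebinds v to a value
        # that may belong to another thread.
        def eliminate_redundant_checks(block):
            checked = set()   # references proven owned at this program point
            out = []
            for op, var in block:
                if op == "check":
                    if var in checked:
                        continue          # dominated by an earlier check: drop
                    checked.add(var)
                elif op == "assign":
                    checked.discard(var)  # new value may have a new owner
                out.append((op, var))
            return out

        block = [("check", "p"), ("use", "p"), ("check", "p"),
                 ("assign", "p"), ("check", "p")]
        print(eliminate_redundant_checks(block))  # second check on p removed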

    On the Statistical Modeling and Analysis of Repairable Systems

    We review basic modeling approaches for failure and maintenance data from repairable systems. In particular we consider imperfect repair models, defined in terms of virtual age processes, and the trend-renewal process, which extends both the nonhomogeneous Poisson process and the renewal process. For the case where several systems of the same kind are observed, we show how observed covariates and unobserved heterogeneity can be included in the models. We also consider various approaches to trend testing. Modern reliability databases usually contain information on the type of failure, the type of maintenance, and so forth, in addition to the failure times themselves. Building on recent literature, we present a framework where the observed events are modeled as marked point processes, with marks labeling the types of events. Throughout the paper the emphasis is more on modeling than on statistical inference. Comment: Published at http://dx.doi.org/10.1214/088342306000000448 in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org).
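    A brief illustration of the trend-renewal construction: given a cumulative intensity Lambda(t), the transformed failure times Lambda(T_1), Lambda(T_2), ... form an ordinary renewal process, so simulation amounts to drawing renewal gaps and inverting Lambda. A minimal sketch with a power-law trend Lambda(t) = a * t**b (the parameter values and function name are ours, chosen for illustration):

        import numpy as np

        def simulate_trp(n, a=1.0, b=1.5, seed=0):
            # Trend-renewal process with trend Lambda(t) = a * t**b: the
            # transformed times Lambda(T_i) form a renewal process, so we
            # draw renewal gaps and invert Lambda. Exponential gaps give a
            # nonhomogeneous Poisson process; swapping in, e.g., Weibull
            # gaps gives a genuine trend-renewal process.
            rng = np.random.default_rng(seed)
            gaps = rng.exponential(size=n)   # gaps on the transformed scale
            s = np.cumsum(gaps)              # renewal epochs Lambda(T_i)
            return (s / a) ** (1.0 / b)      # failure times T_i

        print(simulate_trp(5))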

    Long-Term On-Board Prediction of People in Traffic Scenes under Uncertainty

    Progress towards advanced systems for assisted and autonomous driving is leveraging recent advances in recognition and segmentation methods. Yet we still face challenges in bringing reliable driving to inner cities, which are composed of highly dynamic scenes observed from a moving platform at considerable speed. Anticipation becomes a key element for reacting in time and preventing accidents. In this paper we argue that it is necessary to predict at least one second into the future, and we thus propose a new model that jointly predicts ego motion and people trajectories over such large time horizons. We pay particular attention to modeling the uncertainty of our estimates, which arises from the non-deterministic nature of natural traffic scenes. Our experimental results show that it is indeed possible to predict people trajectories at the desired time horizons and that our uncertainty estimates are informative of the prediction error. We also show that both sequence modeling of trajectories and our novel method of long-term odometry prediction are essential for best performance. Comment: CVPR 201
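    The abstract does not specify the loss, but a standard way to obtain uncertainty estimates of this kind is heteroscedastic regression: the model predicts a mean and a log-variance for each future position and is trained with the Gaussian negative log-likelihood, so the learned variance tracks the expected error. A minimal sketch (array names and shapes are our assumptions, not the paper's):

        import numpy as np

        def gaussian_nll(mu, log_var, target):
            # Negative log-likelihood of target under N(mu, exp(log_var)).
            # Minimizing it encourages log_var to grow exactly where the
            # error (target - mu) is large, making the predicted variance
            # informative of the prediction error.
            return 0.5 * (log_var + (target - mu) ** 2 / np.exp(log_var)).mean()

        # e.g. 10 future (x, y) positions predicted by some model
        mu      = np.zeros((10, 2))
        log_var = np.zeros((10, 2))
        target  = np.ones((10, 2))
        print(gaussian_nll(mu, log_var, target))  # 0.5 * (0 + 1) = 0.5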

    Type Checking and Whole-program Inference for Value Range Analysis

    Value range analysis is important in many software domains for ensuring the safety and reliability of a program and is a crucial facet of software development. The resulting information can be used in optimizations such as redundancy elimination, dead-code elimination, and instruction selection, and to improve the safety of programs. This thesis explores the use of static analysis with type systems for value range analysis. Properly formalized type systems can provide mathematical guarantees for the correctness of a program at compile time. This thesis presents (1) a novel type system, the Narrowing and Widening Checker, (2) a whole-program type inference, the Value Inference for Integral Values, (3) a units-of-measurement type system, PUnits, and (4) an improved algorithm for statically analyzing the data flow of programs.

    The Narrowing and Widening Checker is a type system that prevents loss of information during narrowing conversions of primitive integral data types (see the sketch below) and automatically distinguishes the signedness of variables to eliminate the ambiguity of a widening conversion from types byte and short to type int. This additional type system ensures soundness in programs by restricting operations that violate the defined type rules.

    While type checking verifies whether the given type declarations are consistent with their use, type inference automatically finds the properties at each location in the program and reduces the annotation burden on the developer. The Value Inference for Integral Values is a constraint-based whole-program type inference for integral analysis. It supports the relevant type qualifiers used by the Narrowing and Widening type system and reduces the annotation burden when using the Narrowing and Widening Checker. Value Inference can infer types in two modes: (1) ensure that a valid integral typing exists, and (2) annotate a program with precise and relevant types. Annotation mode allows human inspection and is essential, since having a valid typing does not guarantee that the inferred specification expresses design intent.

    PUnits is a type system for expressive units-of-measurement types and a precise, whole-program inference approach for these types. This thesis presents a new type qualifier for this type system to handle cases where the method return and method parameter types are context-sensitive to the method receiver type. This thesis also discusses related work and the benefits and trade-offs of using PUnits versus existing Java unit libraries, and demonstrates how PUnits can enable Java developers to reap the performance benefits of using primitive types instead of abstract data types for unit-wise consistent scientific computations in real-world projects.

    The Dataflow Framework is a data-flow analysis for Java used to evaluate the values at each program location. Data-flow analysis is a terminating but imprecise abstract interpretation of a program, and many false positives are issued by the Narrowing and Widening Checker due to this imprecision. Three improvements to the algorithm in the framework are presented to increase the precision of the analysis: (1) implementing a dead-branch analysis, (2) proposing a path-sensitive analysis, and (3) discussing how loop precision can be improved.
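    To make the narrowing problem concrete: Java's (byte) cast keeps only the low 8 bits of a value and reinterprets them as signed, silently losing information whenever the value does not fit. A minimal sketch, written in Python to match the other examples in this listing (the checker itself targets Java; the function below is our illustration, not code from the thesis):

        def narrow_to_byte(x):
            # Emulate Java's (byte) cast: keep the low 8 bits,
            # then reinterpret the result as a signed 8-bit value.
            b = x & 0xFF
            return b - 0x100 if b >= 0x80 else b

        print(narrow_to_byte(42))   # 42  -> fits in a byte, conversion is safe
        print(narrow_to_byte(200))  # -56 -> information silently lost

    The Narrowing and Widening Checker's goal is to reject the second kind of conversion at compile time.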
    The Narrowing and Widening Checker is evaluated on 22 of the Apache Commons projects, with a total of 224k lines of code. Of these projects, 18 failed with 717 errors. The Value Inference for Integral Values is evaluated on these 18 projects: 5 are successfully evaluated to SAT, with Value Inference inferring 10,639 annotations. The 13 projects evaluated to UNSAT are manually examined, and all of them contain a real narrowing error. Manual annotations are added to 5 of these projects to resolve the reported errors. In these 5 projects, the Narrowing and Widening Checker detects 69 real errors and 26 false positives, a false-positive rate of 37.7%. The type system performs adequately, with a compilation-time overhead of 5.188x for the Narrowing and Widening Checker and 24.43x for the Value Inference. These projects are then evaluated with the addition of dead-branch analysis to the framework; the additional evaluation time is negligible. This performance is suitable for use in a real-world software development environment.

    All the presented type systems build on techniques from type-qualifier systems and constraint-based type inference. Our implementation and evaluation of these type systems show that these techniques are necessary and effective in ensuring the correctness of real-world programs.
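    Of the three Dataflow Framework improvements, dead-branch analysis is the simplest to picture: branches whose condition is statically known to be constant are pruned, so infeasible paths no longer contribute imprecision (and hence false positives) to the analysis. A toy version over Python ASTs (the thesis implements this for Java inside the Dataflow Framework; this sketch only illustrates the idea):

        import ast

        class DeadBranchPruner(ast.NodeTransformer):
            # Fold `if` statements whose test is a compile-time constant,
            # keeping only the arm that can actually execute.
            def visit_If(self, node):
                self.generic_visit(node)
                if isinstance(node.test, ast.Constant):
                    live = node.body if node.test.value else node.orelse
                    return live or ast.Pass()
                return node

        src = "if False:\n    risky()\nelse:\n    safe()"
        tree = ast.fix_missing_locations(DeadBranchPruner().visit(ast.parse(src)))
        print(ast.unparse(tree))  # -> safe()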

    Evaluating the Expertise of Experts

    Professor Shrader-Frechette maintains that a rigid distinction between risk assessment and risk management is unwise. Concerned about procedural fairness, she argues that the public should have a voice in both.