
    Abstract State Machines 1988-1998: Commented ASM Bibliography

    An annotated bibliography of papers which deal with or use Abstract State Machines (ASMs), as of January 1998. Also maintained as a BibTeX file at http://www.eecs.umich.edu/gasm

    Mechanized semantics

    The goal of this lecture is to show how modern theorem provers, in this case the Coq proof assistant, can be used to mechanize the specification of programming languages and their semantics, and to reason over individual programs and over generic program transformations, as typically found in compilers. The topics covered include: operational semantics (small-step, big-step, definitional interpreters); a simple form of denotational semantics; axiomatic semantics and Hoare logic; generation of verification conditions, with application to program proof; compilation to virtual machine code and its proof of correctness; and an example of an optimizing program transformation (dead code elimination) and its proof of correctness.
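    Since the lecture's development is carried out in Coq, any transcription here is only a sketch. The following OCaml fragment (the toy arithmetic syntax and all names are chosen for this page, not taken from the lecture) illustrates the contrast between the small-step and big-step styles listed above:

```ocaml
(* Illustrative sketch (the lecture itself works in Coq): a toy
   expression language with both a big-step (definitional
   interpreter) and a small-step semantics, so the two styles
   named in the abstract can be compared side by side. *)

type expr =
  | Const of int
  | Plus of expr * expr

(* Big-step / definitional interpreter: evaluate in one go. *)
let rec eval = function
  | Const n -> n
  | Plus (a, b) -> eval a + eval b

(* Small-step: contract one redex; None means the term is a value. *)
let rec step = function
  | Const _ -> None
  | Plus (Const a, Const b) -> Some (Const (a + b))
  | Plus (Const a, b) ->
      (match step b with Some b' -> Some (Plus (Const a, b')) | None -> None)
  | Plus (a, b) ->
      (match step a with Some a' -> Some (Plus (a', b)) | None -> None)

(* Reflexive-transitive closure of step; agrees with eval. *)
let rec run e = match step e with None -> e | Some e' -> run e'

let () =
  let e = Plus (Const 1, Plus (Const 2, Const 3)) in
  assert (run e = Const (eval e));
  print_int (eval e)   (* 6 *)
```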

    Semi-Weakly Supervised Learning for Label-efficient Semantic Segmentation in Expert-driven Domains

    With the help of deep learning, semantic segmentation systems have achieved impressive results, but they rest on supervised learning, which is limited by the availability of costly, pixel-wise annotated images. When the performance of these segmentation systems is examined in contexts where hardly any annotations exist, they fall short of the high expectations raised by their performance in annotation-rich scenarios. This dilemma weighs especially heavily when the annotations must be produced by extensively trained personnel, e.g. physicians, process experts, or scientists. Bringing well-performing segmentation models into these annotation-scarce, expert-driven domains requires new solutions. To this end, we first investigate how poorly current segmentation models cope with extremely annotation-scarce scenarios in expert-driven imaging domains. This leads directly to the question of whether the costly pixel-wise annotation with which segmentation models are usually trained can be bypassed entirely, or whether, conversely, it can serve as a cost-effective impulse to get segmentation going when used sparingly. We then address the question of whether different kinds of annotations, weak and pixel-wise annotations with differing costs, can be used jointly to make the annotation process more flexible. Expert-driven domains often suffer not only from a lack of annotations but also exhibit entirely different image characteristics, for example volumetric image data. The transition from 2D to 3D semantic segmentation leads to voxel-wise annotation processes, which multiplies the time required for annotation by the additional dimension. To arrive at a more manageable annotation process, we investigate training strategies for segmentation models that require only cheaper, partial annotations or raw, unannotated volumes. This change in the type of supervision during training makes the application of volume segmentation in expert-driven domains more realistic, since annotation costs are drastically reduced and annotators are freed from annotating whole volumes, which by their nature would also contain many visually redundant regions. Finally, we ask whether it is possible to free the annotation experts from the strict requirement of delivering a single, specific annotation type, and to develop a training strategy that works with a broad variety of semantic information. Such a method was developed, and our extensive experimental evaluation brings to light interesting properties of different annotation-type mixes with respect to their segmentation performance. Our investigations led to new research directions in semi-weakly supervised segmentation, to novel, more annotation-efficient methods and training strategies, and to experimental insights for improving annotation processes by making them annotation-efficient, expert-centered, and flexible.

    Book reports


    Public evaluation of large projects: variational inequalities, bilevel programming and complementarity. A survey

    The evaluation of large projects raises well-known difficulties because, by definition, they modify the current price system; their public evaluation presents additional difficulties because they also modify the shadow prices that would exist without the project. This paper first analyzes the basic methodologies applied until the late 1980s, based on the integration of projects into optimization models or, alternatively, on iterative procedures with information exchange between two organizational levels. The newer methodologies applied afterwards are based on variational inequalities, bilevel programming, and linear or nonlinear complementarity. Their foundations and their different applications to project evaluation are explored. In fact, these new tools are closely related to one another and can treat more complex cases involving, for example, the reaction of agents to policies or the existence of multiple agents in an environment characterized by common functions representing demands or constraints on polluting emissions.
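    For readers unfamiliar with the first of these tools, the standard finite-dimensional variational-inequality formulation (the notation $F$ and $K$ is textbook convention, not quoted from the survey) reads:

```latex
% Standard finite-dimensional variational inequality VI(F, K):
% find a point of the feasible set K at which no feasible
% direction decreases F.
\[
  \text{find } x^{\ast} \in K \subseteq \mathbb{R}^{n}
  \quad\text{such that}\quad
  \langle F(x^{\ast}),\, x - x^{\ast} \rangle \;\ge\; 0
  \qquad \forall\, x \in K .
\]
```

    Bilevel programs then arise when such an equilibrium condition appears as a constraint of an upper-level planning problem, which is how the survey connects the two tools.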

    Automatic autoprojection of recursive equations with global variables and abstract data types

    Self-applicable partial evaluation has been implemented for half a decade now, but many problems remain open. This paper addresses and solves the problems of automating call unfolding, having an open-ended set of operators, and processing global variables updated by side effects. The problems of computation duplication and termination of residual programs are addressed and solved: residual programs never duplicate computations of the source program; residual programs do not terminate more often than source programs.

    This paper describes the automatic autoprojector (self-applicable partial evaluator) Similix; it handles programs with user-defined primitive abstract data type operators which may process global variables. Abstract data types make it possible to hide actual representations of data and prevent specializing operators over these representations. The formally sound treatment of global variables makes Similix fit well in an applicative order programming environment.

    We present a new method for automatic call unfolding which is simpler, faster, and sometimes more effective than existing methods: it requires neither recursion analysis of the source program, nor call graph analysis of the residual program.

    To avoid duplicating computations and to preserve termination properties, we introduce an abstract interpretation of the source program, abstract occurrence counting analysis, which is performed during preprocessing. We express it formally and simplify it.
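    Similix specializes Scheme programs and its machinery is not reproduced in this abstract; as a language-neutral sketch of what call unfolding achieves, the following OCaml toy specializer (all names invented for illustration) unfolds the recursion of a power function whose exponent is static, leaving a residual expression over the dynamic base only:

```ocaml
(* A minimal illustration of partial evaluation: given the static
   exponent n, unfold the recursion of power at specialization
   time. This is a toy sketch, not Similix itself. *)

type expr =
  | Var                    (* the dynamic base x *)
  | Const of int
  | Mul of expr * expr

(* The general (unspecialized) function. *)
let rec power x n = if n = 0 then 1 else x * power x (n - 1)

(* Specializer: n is static, so the recursion disappears,
   leaving only multiplications over the dynamic variable. *)
let rec specialize_power n : expr =
  if n = 0 then Const 1 else Mul (Var, specialize_power (n - 1))

(* Residual-program interpreter, for checking the specialization. *)
let rec eval e x =
  match e with
  | Var -> x
  | Const c -> c
  | Mul (a, b) -> eval a x * eval b x

let () =
  let residual = specialize_power 3 in   (* x * (x * (x * 1)) *)
  assert (eval residual 5 = power 5 3);  (* 125 on both sides *)
  print_int (eval residual 5)
```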

    Types with potential: polynomial resource bounds via automatic amortized analysis

    A primary feature of a computer program is its quantitative performance characteristics: the amount of resources such as time, memory, and power the program needs to perform its task. Concrete resource bounds for specific hardware have many important applications in software development, but their manual determination is tedious and error-prone. This dissertation studies the problem of automatically determining concrete worst-case bounds on the quantitative resource consumption of functional programs. Traditionally, automatic resource analyses are based on recurrence relations. The difficulty of both extracting and solving recurrence relations has led to the development of type-based resource analyses that are compositional, modular, and formally verifiable. However, existing automatic analyses based on amortization or sized types can only compute bounds that are linear in the sizes of the arguments of a function. This work presents a novel type system that derives polynomial bounds from first-order functional programs. As pioneered by Hofmann and Jost for linear bounds, it relies on the potential method of amortized analysis. Types are annotated with multivariate resource polynomials, a rich class of functions that generalize non-negative linear combinations of binomial coefficients. The main theorem states that type derivations establish resource bounds that are sound with respect to the resource consumption of programs, which is formalized by a big-step operational semantics. Simple local type rules allow for an efficient inference algorithm for the type annotations which relies on linear constraint solving only. This gives rise to an analysis system that is fully automatic if a maximal degree of the bounding polynomials is given. The analysis is generic in the resource of interest and can derive bounds on time and space usage. The bounds are naturally closed under composition and eventually summarized in closed, easily understood formulas. The practicability of this automatic amortized analysis is verified with a publicly available implementation and a reproducible experimental evaluation. The experiments with a wide range of examples from functional programming show that the inference of the bounds only takes a couple of seconds in most cases. The derived heap-space and evaluation-step bounds are compared with the measured worst-case behavior of the programs. Most bounds are asymptotically tight, and the constant factors are close or even identical to the optimal ones. For the first time we are able to automatically and precisely analyze the resource consumption of involved programs such as quick sort for lists of lists, longest common subsequence via dynamic programming, and multiplication of a list of matrices with different, fitting dimensions.
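    A hedged sketch of the univariate case of these annotations (the multivariate generalization is the dissertation's contribution; the symbols below follow the common Hofmann-Jost presentation rather than being quoted from this work):

```latex
% Univariate resource polynomials: the potential of a data
% structure of size n is a non-negative linear combination of
% binomial coefficients, with coefficients q_i inferred by
% linear constraint solving.
\[
  \Phi(n) \;=\; \sum_{i=0}^{k} q_i \binom{n}{i},
  \qquad q_i \ge 0 .
\]
% Soundness (informally): the annotated type derivation
% guarantees that the actual cost of an evaluation is covered by
% the potential it consumes:
\[
  \mathrm{cost}(e) \;\le\; \Phi_{\text{before}} - \Phi_{\text{after}} .
\]
```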

    A game-based approach towards human-augmented image annotation.

    Image annotation is a difficult task to achieve in an automated way. In this thesis, a human-augmented approach to tackle this problem is discussed and suitable strategies are derived to solve it. The proposed technique is inspired by human-based computation, in what is called "human-augmented" processing, to overcome the limitations of fully automated technology in closing the semantic gap. The approach aims to exploit what millions of individual gamers are keen to do, i.e. enjoy computer games, while annotating media. In this thesis, the image annotation problem is tackled by a game-based framework. This approach combines image processing and a game-theoretic model to gather media annotations. Although the proposed model behaves like a single-player game, the underlying approach is designed as a two-player model which exploits both the current player's contribution to the game and the contributions of previously recorded players to improve annotation accuracy. In addition, the proposed framework is designed to predict the player's intention through Markovian and sequential sampling inferences in order to detect cheating and improve annotation performance. Finally, the proposed techniques are comprehensively evaluated on three different image datasets and selected representative results are reported.
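    The abstract does not spell out the inference procedure; as an illustrative, assumption-laden sketch, sequential cheat detection over a stream of agree/disagree observations against the recorded consensus could be phrased as a Wald sequential probability ratio test, here in OCaml (all rates, thresholds, and names are invented for the example, not taken from the thesis):

```ocaml
(* Hedged sketch of sequential cheat detection: a Wald sequential
   probability ratio test (SPRT) over agree/disagree observations.
   Every constant here is an illustrative assumption. *)

let p_honest = 0.8   (* assumed agreement rate of an honest player *)
let p_cheat  = 0.4   (* assumed agreement rate of a cheating player *)

(* Log-likelihood-ratio contribution of one observation. *)
let llr agrees =
  if agrees then log (p_cheat /. p_honest)
  else log ((1. -. p_cheat) /. (1. -. p_honest))

type verdict = Honest | Cheating | Undecided

(* Run the SPRT with thresholds derived from error rates alpha/beta. *)
let sprt ?(alpha = 0.05) ?(beta = 0.05) observations =
  let upper = log ((1. -. beta) /. alpha)      (* accept "cheating" *)
  and lower = log (beta /. (1. -. alpha)) in   (* accept "honest" *)
  let rec go s = function
    | [] -> Undecided
    | obs :: rest ->
        let s = s +. llr obs in
        if s >= upper then Cheating
        else if s <= lower then Honest
        else go s rest
  in
  go 0.0 observations

let () =
  (* A player who mostly disagrees with the consensus gets flagged. *)
  match sprt [false; false; true; false; false; false] with
  | Cheating  -> print_endline "flagged for review"
  | Honest    -> print_endline "consistent with consensus"
  | Undecided -> print_endline "need more observations"
```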