
    Search Techniques for Code Generation

    This dissertation explores techniques that synthesize program fragments and generate test inputs. Their main goal is to improve and support automation in program synthesis and test-input generation, which matters because performing these processes manually is tedious, time-consuming, and error-prone. The main challenge these techniques face is exploring the search space efficiently and scalably.

    In the first part of the dissertation, we present InSynth and PolySynth, tools that interactively synthesize code fragments. They take as input a partial program and automatically extract type information, the desired type, and the set of visible declarations. From this input they synthesize a ranked list of expressions of the desired type, presented to the developer much like code completions in modern IDEs. InSynth is the first tool to use a complete algorithm to generate expressions with first-class and higher-order functions. We present the theoretical foundation of the InSynth problem, based on type inhabitation, and the type-based backward search algorithm that solves it. PolySynth uses a type-driven, resolution-based algorithm that handles polymorphic types (generics). Both tools are unique in that their algorithms operate on corpus statistics, which steer the search-space exploration toward the most relevant solutions.

    In the second part of the dissertation, we present anyCode, a tool that synthesizes expressions from natural-language input. It accepts English words or Java programming-language constructs, letting a developer encode her intuition about the desired expression with words, or with an expression that approximates the desired structure. Thanks to this flexibility, anyCode can also repair broken expressions. It analyzes the input with a pipeline of natural-language and related-word tools, which helps it identify the most relevant components and shrink the search space. To further reduce the search space and produce the most relevant expressions, anyCode uses two statistical models: a unigram model and a probabilistic context-free grammar.

    Finally, in the last part of the dissertation, we present UDITA, a Java-like language with support for non-determinism that lets a user describe test-generation programs. These programs run on top of Java PathFinder (JPF), a popular explicit-state model checker with a built-in backtracking mechanism and support for non-determinism; JPF executes them to generate test inputs. The first benefit of UDITA is that non-determinism lets a user describe many test inputs as easily as a single one. The second is flexibility: a user can describe test-generation programs by arbitrarily combining filters and generators. UDITA reduces the size of the search space with an algorithm that avoids generating isomorphic complex structures and delays non-deterministic choices.
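
    To make the UDITA part concrete, here is a minimal sketch of a test-generation program with non-deterministic choice, written in Scala rather than UDITA's Java-like syntax; `choose` and the tree type are illustrative stand-ins (not UDITA's actual API), with plain enumeration playing the role of JPF's backtracking.

```scala
// A minimal sketch, assuming a UDITA-like choose/generate/filter style.
sealed trait Tree
case object Leaf extends Tree
case class Node(left: Tree, right: Tree) extends Tree

object Generator {
  // Non-deterministic choice, modeled here as enumerating all branches.
  def choose[A](alts: List[A]): List[A] = alts

  // One short program describes *all* trees with `size` inner nodes,
  // the way a UDITA generator describes many test inputs at once.
  def trees(size: Int): List[Tree] =
    if (size == 0) List(Leaf)
    else
      for {
        leftSize <- choose((0 until size).toList)
        l        <- trees(leftSize)
        r        <- trees(size - 1 - leftSize)
      } yield Node(l, r)
}

// Generator.trees(2) yields both shapes of two-node trees; a filter
// (say, a binary-search-tree invariant) can then be composed on top,
// mirroring UDITA's freedom to mix generators and filters.
```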

    Hoogle⋆: Constants and λ-abstractions in Petri-net-based Synthesis using Symbolic Execution

    Type-directed component-based program synthesis is the task of automatically building a function from applications of available components, such that its type matches a given goal type. Existing approaches to component-based synthesis, based on classical proof search, cannot deal with large sets of components. Recently, Hoogle+, a component-based synthesizer for Haskell, overcame this issue by reducing the search problem to a Petri-net reachability problem. However, Hoogle+ can synthesize neither constants nor λ-abstractions, which limits the problems it can solve. We present Hoogle⋆, an extension to Hoogle+ that brings constants and λ-abstractions into the search space, in two independent steps. First, we introduce the notion of a wildcard component, a component that matches all types. This enables the algorithm to produce incomplete functions, i.e., functions containing occurrences of the wildcard component. Second, we complete those functions by replacing each occurrence with a constant or a custom-defined λ-abstraction. We find constants by means of an inference algorithm: a new unification algorithm based on symbolic execution that uses the input-output examples supplied by the user to compute substitutions for the occurrences of the wildcard. Compared to Hoogle+, Hoogle⋆ can solve more kinds of problems, especially those that require generating constants and λ-abstractions, without performance degradation.
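
    As a rough Scala illustration of the wildcard idea (not Hoogle⋆'s actual Haskell implementation), the sketch below represents an incomplete function with a hole and infers a constant for it from input-output examples; the deliberately simple arithmetic solver stands in for the paper's symbolic-execution-based unification, and none of the names are the tool's real API.

```scala
// A rough sketch under strong simplifying assumptions.
sealed trait Expr
case object Input extends Expr   // the synthesized function's argument
case object Hole  extends Expr   // wildcard: to be filled with a constant
case class Add(l: Expr, r: Expr) extends Expr

object Complete {
  def eval(e: Expr, x: Int, hole: Int): Int = e match {
    case Input     => x
    case Hole      => hole
    case Add(l, r) => eval(l, x, hole) + eval(r, x, hole)
  }

  // For this Add-only language, out = eval(e, x, 0) + k * (#holes),
  // so each input-output example determines the constant k directly.
  def solve(e: Expr, examples: List[(Int, Int)]): Option[Int] = {
    def holes(t: Expr): Int = t match {
      case Hole      => 1
      case Add(l, r) => holes(l) + holes(r)
      case _         => 0
    }
    val n = holes(e)
    if (n == 0) None
    else
      examples.map { case (x, out) => out - eval(e, x, 0) }.distinct match {
        case List(d) if d % n == 0 => Some(d / n)
        case _                     => None // examples disagree: no constant fits
      }
  }
}

// Complete.solve(Add(Input, Hole), List((1, 4), (10, 13))) == Some(3):
// the incomplete function \x -> x + ?? is completed to \x -> x + 3.
```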

    Study to design and develop remote manipulator system

    Human performance in remote manipulation tasks is modeled using automated procedures in which computers analyze and count motions during a manipulation task. Performance is monitored by an on-line computer capable of measuring the joint angles of both master and slave and, in some cases, the trajectory and velocity of the hand itself. In this way the operator's strategies under different transmission delays, displays, tasks, and manipulators can be analyzed in detail for comparison. Progress toward a set of standard tasks and difficulty measures for evaluating manipulator performance is also described.

    Graceful Language Extensions and Interfaces

    Grace is a programming language under development aimed at education. Grace is object-oriented, imperative, and block-structured, and intended for use in first- and second-year object-oriented programming courses. We present a number of language features we have designed for Grace and implemented in our self-hosted compiler. We describe the design of a pattern-matching system with object-oriented structure and minimal extension to the language. We give a design for an object-based module system, which we use to build dialects: a means of extending and restricting the language available to the programmer, and of implementing domain-specific languages. We show a visual programming interface that melds visual editing (à la Scratch) with textual editing and uses our dialect system, and we report the results of a user experiment we performed to evaluate the usability of our interface.
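
    A rough analogue of the object-oriented pattern-matching design, sketched in Scala rather than Grace and not Grace's actual library: patterns are ordinary objects with a matching method plus combinators, so the language itself needs almost no new syntax. All names below are illustrative.

```scala
// Patterns as first-class objects with match-and-combine behavior.
trait Pattern[A] { self =>
  def matches(value: A): Option[A]

  // Alternation and conjunction, built from objects alone.
  def |(other: Pattern[A]): Pattern[A] = new Pattern[A] {
    def matches(v: A): Option[A] = self.matches(v).orElse(other.matches(v))
  }
  def &(other: Pattern[A]): Pattern[A] = new Pattern[A] {
    def matches(v: A): Option[A] = self.matches(v).flatMap(other.matches)
  }
}

object Pattern {
  def apply[A](p: A => Boolean): Pattern[A] = new Pattern[A] {
    def matches(v: A): Option[A] = if (p(v)) Some(v) else None
  }
}

// Because patterns are first-class values, user code can name, pass,
// and compose them:
val positive = Pattern[Int](_ > 0)
val even     = Pattern[Int](_ % 2 == 0)
val posEven  = positive & even   // posEven.matches(4) == Some(4)
```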

    Material windows and working stations. The discourse networks behind skeuomorphic interface in Pathfinder: Kingmaker

    The article probes the intermedial structure of the skeuomorphic interface in Pathfinder: Kingmaker. The author indicates that intermedia research in game studies is often diachronically limited, focusing on material and semiotic interactions between “old” and “new” media. He proposes to open the field onto historically aware discursive analysis and bases his method on Friedrich Kittler’s notion of “discourse networks”. This allows him to inspect the game in relation to technologically founded networks that embody or bring to life specific modes of thought and experience. His analysis shows that the interface design draws on navigation devices of the Late Medieval and Early Renaissance periods, Alberti’s window as an object through which narrative spaces become visible, isometric modes of objective thinking, industrial and cybernetic notions of control, and the Xerox invention of the computer as a working environment.

    The George-Anne


    Contributions for improving debugging of kernel-level services in a monolithic operating system

    Despite an overwhelming amount of research on the quality of system software, operating systems are still plagued with reliability issues, mainly caused by defects in kernel-level services such as device drivers and file systems. Studies have shown that each release of the Linux kernel contains between 600 and 700 faults, and that the propensity of device drivers to contain errors is up to seven times higher than any other part of the kernel. These numbers suggest that kernel-level service code is not sufficiently tested and that many faults remain unnoticed or are hard to fix by non-expert programmers, who account for the majority of service developers. This thesis proposes a new approach to the debugging and testing of kernel-level services, focused on the interaction between the services and the core kernel. The approach tackles the issue of safety holes in the implementation of kernel API functions: weaknesses that can turn into runtime faults when the functions are used in service code by developers with limited knowledge of the intricacies of kernel code. For Linux, we instantiated this approach as Diagnosys, an automated tool that relies on static analysis of kernel code to identify, categorize, and expose the different safety holes of API functions. We implemented Diagnosys for Linux 2.6.32 and showed its benefits in supporting developers in their testing and debugging tasks.
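
    Diagnosys operates on C kernel code; the Scala sketch below only mirrors the shape of a generated debugging interface of this kind: wrap an API entry point with a check for a known safety hole (here, a null argument) and report misuse at the service/API boundary. Every name in the sketch is invented for the example.

```scala
// A minimal sketch, assuming a wrapper-per-API-function design.
object DebugInterface {
  final case class SafetyViolation(api: String, detail: String)
      extends RuntimeException(s"$api: $detail")

  def checked[A <: AnyRef, B](api: String)(f: A => B): A => B = { arg =>
    if (arg == null) {
      // Log with the offending API name so the fault is attributed at
      // the boundary instead of crashing deep inside the callee.
      System.err.println(s"[debug-interface] $api called with null argument")
      throw SafetyViolation(api, "null argument would be dereferenced")
    }
    f(arg)
  }
}

// A service developer calls through the wrapper; a latent null-pointer
// fault surfaces as an explicit, attributed error:
val safeRegister: String => Int =
  DebugInterface.checked("register_driver")((name: String) => name.length)
// safeRegister("mydrv") == 5; safeRegister(null) logs and throws.
```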

    User modelling approach to computer based advice generation


    Interactive Code Generation

    This thesis presents two approaches to code generation (synthesis), along with a discussion of related and influential work, its ideas, and its relation to these approaches. The common goal of both approaches is to assist developers efficiently and effectively by synthesizing code for them, saving their effort. The two approaches differ in the nature of the synthesis problem they solve: they target different scenarios, apply different sets of techniques, and can even be complementary.

    The first approach relies on the typing information of a program to define and drive the synthesis process; the main requirement is that synthesized solutions have the desired type. This aids developers in many scenarios of modern software development, which typically involves composing functionality from existing libraries that expose rich APIs. Our observation is that the developer usually does not know the right combination of API calls but does know the type of the desired expression. Starting from the well-known type inhabitation problem, we introduce a succinct representation for type judgements that significantly speeds up the search for type inhabitants. Our method finds multiple solutions and ranks them before offering them to the developer. We implemented this approach as a plugin for the Eclipse IDE for Scala. Our evaluation shows that it goes beyond available related techniques and can be very useful in practical software development.

    In the second approach, synthesis is driven by an explicit (formal) specification of the code: the developer specifies a program by formally describing its behavior, rather than writing the code, and the actual implementation is synthesized automatically. The practical value of such synthesis is immediately clear, since the problem is hard in general. The approach combines existing tools for code generation, verification, and testing within the synthesis process, and applies techniques for speeding it up. We also modified the type-driven synthesis to produce expressions lazily, on demand, searching for solutions incrementally. Evaluation on several examples shows that the implementation can be effective and useful in practice, while the approach still leaves ample room for improvement.
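
    A minimal sketch of the first approach's core idea, over a toy type language and with illustrative names and weights: a backward, type-directed search for inhabitants of a goal type, ranked by corpus-derived weights. The thesis's lazy variant would stream such candidates on demand instead of computing them eagerly.

```scala
// Backward type-directed search with statistics-based ranking
// (lower weight = offered first). All names/weights are assumptions.
sealed trait Type
case class Base(name: String) extends Type
case class Arrow(args: List[Type], result: Type) extends Type

case class Decl(name: String, tpe: Type, weight: Double)

object Synth {
  def inhabit(goal: Type, decls: List[Decl], depth: Int): List[(String, Double)] =
    if (depth == 0) Nil
    else {
      val found = decls.flatMap {
        // A declaration of exactly the goal type is itself a solution.
        case Decl(n, t, w) if t == goal => List((n, w))
        // A function returning the goal type: recursively synthesize
        // each argument, then combine into an application.
        case Decl(n, Arrow(args, res), w) if res == goal =>
          val argSets = args.map(a => inhabit(a, decls, depth - 1))
          if (argSets.exists(_.isEmpty)) Nil
          else {
            val combos = argSets.foldLeft(List((List.empty[String], 0.0))) {
              (acc, opts) =>
                for ((done, cw) <- acc; (arg, aw) <- opts)
                  yield (done :+ arg, cw + aw)
            }
            combos.map { case (as, cw) => (s"$n(${as.mkString(", ")})", w + cw) }
          }
        case _ => Nil
      }
      found.sortBy(_._2)   // rank: statistically likelier expressions first
    }
}

// With declarations for a sessionFactory value and an openSession
// function, inhabit(Base("Session"), decls, 3) returns
// ("openSession(sessionFactory)", ...) as the top-ranked candidate.
```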