
    Symbolic Implementation of Connectors in BIP

    BIP is a component framework for constructing systems by superposing three layers of modeling: Behavior, Interaction, and Priority. Behavior is represented by labeled transition systems communicating through ports. Interactions are sets of ports. Synchronization between components is possible through the interactions specified by a set of connectors. When several interactions are possible, priorities restrict the non-determinism by choosing an interaction that is maximal according to a given strict partial order. The BIP component framework has been implemented in a language and a tool-set. The execution of a BIP program is driven by a dedicated engine, which has access to the set of connectors and the priority model of the program. A key performance issue is the computation of the set of possible interactions of the BIP program from a given state. Currently, the choice of the interaction to be executed involves a costly exploration of enumerative representations for connectors, which leads to a considerable overhead in execution times. In this paper, we propose a symbolic implementation of the execution model of BIP that drastically reduces this overhead. The symbolic implementation is based on computing Boolean representations for components, connectors, and priorities with an existing BDD package.
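    The idea can be illustrated with a small sketch (not the paper's actual encoding): ports become Boolean variables, a connector becomes a Boolean formula over them, and the feasible interactions are its satisfying assignments restricted to the currently enabled ports. All names below (`feasible_interactions`, the example connectors) are hypothetical, and the explicit enumeration merely stands in for the BDD package used in the paper.

```python
from itertools import product

# Illustrative sketch: ports are Boolean variables and a connector is a
# predicate over a port->bool assignment; an interaction is a satisfying
# assignment, read as the set of ports that fire together. A real
# implementation would manipulate BDDs instead of enumerating assignments.

def feasible_interactions(ports, connector, enabled):
    """Yield interactions allowed by `connector` and supported by the
    currently `enabled` ports."""
    for bits in product([False, True], repeat=len(ports)):
        assignment = dict(zip(ports, bits))
        chosen = {p for p, b in assignment.items() if b}
        if chosen and connector(assignment) and chosen <= enabled:
            yield frozenset(chosen)

ports = ["p", "q", "r", "s"]
# Rendezvous: p and q fire atomically, nothing else participates.
rendezvous = lambda a: a["p"] and a["q"] and not a["r"] and not a["s"]
# Broadcast-like: s fires, optionally together with r.
broadcast = lambda a: a["s"] and not a["p"] and not a["q"]

enabled = {"p", "q", "s"}
print([sorted(i) for i in feasible_interactions(ports, rendezvous, enabled)])
# [['p', 'q']]
print([sorted(i) for i in feasible_interactions(ports, broadcast, enabled)])
# [['s']]  ({'r', 's'} is ruled out because r is not enabled)
```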

    Overfitting in Synthesis: Theory and Practice (Extended Version)

    In syntax-guided synthesis (SyGuS), a synthesizer's goal is to automatically generate a program belonging to a grammar of possible implementations that meets a logical specification. We investigate a common limitation across state-of-the-art SyGuS tools that perform counterexample-guided inductive synthesis (CEGIS). We empirically observe that as the expressiveness of the provided grammar increases, the performance of these tools degrades significantly. We claim that this degradation is not only due to a larger search space, but also due to overfitting. We formally define this phenomenon and prove no-free-lunch theorems for SyGuS, which reveal a fundamental tradeoff between synthesizer performance and grammar expressiveness. A standard approach to mitigate overfitting in machine learning is to run multiple learners with varying expressiveness in parallel. We demonstrate that this insight can immediately benefit existing SyGuS tools. We also propose a novel single-threaded technique called hybrid enumeration that interleaves different grammars and outperforms the winner of the 2018 SyGuS competition (Inv track), solving more problems and achieving a 5× mean speedup.
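    As a rough sketch of the multiple-learners insight (not the paper's hybrid enumeration algorithm itself, which interleaves grammars more carefully), one can round-robin candidate streams drawn from grammars of increasing expressiveness and return the first candidate consistent with the examples. All names below are illustrative.

```python
from itertools import count

# Round-robin over enumerators for grammars of increasing expressiveness;
# return the first candidate consistent with all input/output examples.

def interleaved_synthesis(grammars, examples):
    """`grammars`: generator functions yielding candidate unary functions,
    ordered from least to most expressive."""
    streams = [g() for g in grammars]
    while streams:
        for s in list(streams):
            try:
                cand = next(s)
            except StopIteration:
                streams.remove(s)  # this grammar is exhausted
                continue
            if all(cand(x) == y for x, y in examples):
                return cand
    return None

# Toy grammars: constant functions, then linear functions a*x + b.
def constants():
    for c in count():
        yield lambda x, c=c: c

def linear():
    for a in count():
        for b in range(a + 1):
            yield lambda x, a=a, b=b: a * x + b

f = interleaved_synthesis([constants, linear], [(0, 1), (1, 3)])
print(f(2))  # 5, since the synthesized function is f(x) = 2*x + 1
```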

    Synthesis Of Distributed Protocols From Scenarios And Specifications

    Distributed protocols, typically expressed as stateful agents communicating asynchronously over buffered communication channels, are difficult to design correctly. This difficulty has spurred decades of research in the area of automated model-checking algorithms. In turn, practical implementations of model-checking algorithms have enabled protocol developers to prove the correctness of such distributed protocols. However, model-checking techniques are only marginally useful during the actual development of such protocols, typically serving as a debugging aid once a reasonably complete version of the protocol has already been developed. The development process itself is often tedious and requires the designer to reason about complex interactions arising from the concurrency and asynchrony inherent to such protocols. In this dissertation, we describe program synthesis techniques that can be applied as an enabling technology to ease the task of developing such protocols. Specifically, the programmer provides a natural but incomplete description of the protocol in an intuitive representation, such as scenarios or an incomplete protocol. This description specifies the behavior of the protocol in the common cases. The programmer also specifies a set of high-level formal requirements that a correct protocol is expected to satisfy. These requirements can include safety requirements as well as liveness requirements in the form of Linear Temporal Logic (LTL) formulas. We describe techniques to synthesize a correct protocol that is consistent with the common-case behavior specified by the programmer and also satisfies the high-level safety and liveness requirements. We also describe techniques for program synthesis in general, which serve to enable the solutions to distributed protocol synthesis that this dissertation explores.
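    The flavour of the problem can be conveyed with a heavily simplified sketch (not the dissertation's algorithms, which handle asynchronous agents, scenarios, and LTL liveness): the designer fixes the common-case transitions, leaves some successors open as holes, and a brute-force search fills the holes so that no reachable state violates a safety predicate. The state space and all names here are hypothetical.

```python
from itertools import product

# Toy sketch: complete a partial transition table so that every state
# reachable from `init` satisfies `safe`. Real protocol synthesis works on
# communicating asynchronous processes and checks LTL requirements; this
# brute-force loop over a single process only conveys the search structure.

STATES = ["idle", "trying", "critical", "error"]

def reachable(table, init):
    seen, frontier = {init}, [init]
    while frontier:
        s = frontier.pop()
        t = table.get(s)
        if t is not None and t not in seen:
            seen.add(t)
            frontier.append(t)
    return seen

def complete(fixed, holes, safe, init):
    """Try every assignment of hole-state -> successor; return the first
    completed table whose reachable states are all safe."""
    for choice in product(STATES, repeat=len(holes)):
        table = dict(fixed)
        table.update(zip(holes, choice))
        if all(safe(s) for s in reachable(table, init)):
            return table
    return None

fixed = {"idle": "trying"}      # common-case behaviour given by the designer
holes = ["trying", "critical"]  # successors left unspecified
safe = lambda s: s != "error"   # stand-in safety requirement
print(complete(fixed, holes, safe, "idle"))
# {'idle': 'trying', 'trying': 'idle', 'critical': 'idle'}
```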

    Machine learning for function synthesis

    Function synthesis is the process of automatically constructing functions that satisfy a given specification. The space of functions as well as the format of the specifications vary greatly with each area of application. In this thesis, we consider synthesis in the context of satisfiability modulo theories. Within this domain, the goal is to synthesise mathematical expressions that adhere to abstract logical formulas. These types of synthesis problems find many applications in the field of computer-aided verification. One of the main challenges of function synthesis arises from the combinatorial explosion in the number of potential candidates within a certain size. The hypothesis of this thesis is that machine learning methods can be applied to make function synthesis more tractable. The first contribution of this thesis is a Monte Carlo-based search method for function synthesis. The search algorithm uses machine-learned heuristics to guide the search. This is part of a reinforcement learning loop that trains the machine learning models with data generated from previous search attempts. To increase the set of benchmark problems for training and testing synthesis methods, we also present a technique for generating synthesis problems from pre-existing satisfiability modulo theories problems. We implement the Monte Carlo-based synthesis algorithm and evaluate it on standard synthesis benchmarks as well as our newly generated benchmarks. An experimental evaluation shows that the learned heuristics greatly improve on the baseline without trained models. Furthermore, the machine-learned guidance demonstrates performance comparable to CVC5 and, in some experiments, even surpasses it. Next, this thesis explores the application of machine learning to more restricted function synthesis domains. We hypothesise that narrowing the scope enables the use of machine learning techniques that are not possible in the general setting. We test this hypothesis by considering the problem of ranking function synthesis. Ranking functions are used in program analysis to prove termination of programs by mapping consecutive program states to decreasing elements of a well-founded set. The second contribution of this dissertation is a novel technique for synthesising ranking functions using neural networks. The key insight is that instead of synthesising a mathematical expression that represents a ranking function, we can train a neural network to act as a ranking function. Hence, the synthesis procedure is replaced by neural network training. We introduce Neural Termination Analysis as a framework that leverages this idea. We train neural networks from sampled execution traces of the program we want to prove terminating, and we enforce the synthesis specifications of ranking functions through the loss function and network design. After training, we use symbolic reasoning to formally verify that the resulting function is indeed a correct ranking function for the target program. We demonstrate that our method succeeds in synthesising ranking functions for programs that are beyond the reach of state-of-the-art tools, including programs with disjunctions and non-linear expressions in the loop guards.
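    The core training idea can be sketched in a few lines (hypothetical network design and hyperparameters; the thesis's actual architecture, loss, and symbolic verification step are not reproduced here): from consecutive states (s, s') in sampled traces, minimise the hinge loss max(0, f(s') - f(s) + 1) so that the learned f decreases by at least 1 on every step.

```python
import numpy as np

# Minimal sketch: train a one-hidden-layer network f so that f(s') <= f(s) - 1
# for every consecutive pair (s, s') in the sampled traces, via the hinge
# loss max(0, f(s') - f(s) + 1). Boundedness from below and the symbolic
# verification step performed in the thesis are omitted.

rng = np.random.default_rng(0)
w1, b1 = rng.normal(size=8), np.zeros(8)   # hidden layer (8 ReLU units)
w2 = rng.normal(size=8)                    # output weights

def hidden(x):
    return np.maximum(0.0, w1 * x + b1)

def f(x):
    return w2 @ hidden(x)

# Traces of the terminating loop `while x > 0: x -= 1`; a ranking function
# must decrease along each step, so here f must increase with x.
traces = [list(range(n, -1, -1)) for n in (3, 7, 10)]
pairs = [(s, t) for tr in traces for s, t in zip(tr, tr[1:])]

lr = 1e-2
for _ in range(2000):
    for s, t in pairs:
        if f(t) - f(s) + 1.0 <= 0.0:
            continue                        # pair already satisfies the spec
        hs, ht = hidden(s), hidden(t)
        # subgradient of the hinge loss w.r.t. each parameter vector
        w1 -= lr * ((ht > 0) * w2 * t - (hs > 0) * w2 * s)
        b1 -= lr * ((ht > 0) * w2 - (hs > 0) * w2)
        w2 -= lr * (ht - hs)

print([round(f(x), 2) for x in range(4)])
# typically an increasing sequence, i.e. f(0) < f(1) < f(2) < f(3)
```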

    Type-driven Synthesis of Evolving Data Models

    Modern commercial software is often framed under the umbrella of data-centric applications. Data-centric applications define data as their main and permanent asset. These applications use a single data model for application functionality, data management, and analytical activities, and this model is built before the applications themselves. Moreover, since applications, in contrast to data, are temporary, the data schema must continuously evolve to accommodate new functionality. In this sense, the continuously evolving (rich) feature set expected of state-of-the-art applications is intrinsically bound not only by the amount of available data but also by its structure, its internal dependencies, and the ability to transparently and uniformly grow and evolve data representations and their properties on the fly. The GOLEM project aims to produce new methods of program automation integrated into the development of data-centric applications in low-code frameworks. In this context, one of the key targets for automation is the data layer itself, encompassing the data layout and its integrity constraints, as well as validation and access-control rules. The aim of this dissertation, which is part of GOLEM, is to develop a synthesis framework that, based on high-level specifications, correctly defines and evolves a rich data layer by means of high-level operations. The framework comprises a specification language for expressing richly typed specifications, a target language that is the goal of synthesis, and a type-directed synthesis procedure based on proof-search concepts. The range of real database operations the framework is able to synthesize is demonstrated through a case study. In a component-based synthesis style, with an extensible library of base operations on database tables (specified using the target language) in context, the case study shows that the synthesis framework can express and solve a wide variety of data-schema creation and evolution problems.
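    A minimal sketch of the component-based, type-directed flavour of the approach (hypothetical names; the actual framework uses richly typed specifications and integrity constraints rather than this structural check): given a goal schema and a library of operations on tables, search for a sequence of operations that evolves the current schema into the goal.

```python
from collections import deque

# Illustrative sketch of component-based synthesis over a small library of
# schema operations: breadth-first search for a sequence of operations that
# turns the initial schema into the goal schema. Argument candidates are
# drawn from the goal, a crude stand-in for type-directed proof search.

def create_table(schema, table):
    return None if table in schema else {**schema, table: frozenset()}

def add_column(schema, table, column):
    if table not in schema or column in schema[table]:
        return None
    return {**schema, table: schema[table] | {column}}

LIBRARY = [
    ("create_table", create_table,
     lambda s, g: [(t,) for t in g if t not in s]),
    ("add_column", add_column,
     lambda s, g: [(t, c) for t in s for c in g.get(t, frozenset()) - s[t]]),
]

def key(schema):
    return frozenset(schema.items())

def synthesize(initial, goal):
    frontier = deque([(initial, [])])
    seen = {key(initial)}
    while frontier:
        schema, plan = frontier.popleft()
        if schema == goal:
            return plan
        for name, op, candidates in LIBRARY:
            for args in candidates(schema, goal):
                nxt = op(schema, *args)
                if nxt is not None and key(nxt) not in seen:
                    seen.add(key(nxt))
                    frontier.append((nxt, plan + [(name, *args)]))
    return None

# Evolve an empty schema into a 'users' table with two columns.
print(synthesize({}, {"users": frozenset({"id", "email"})}))
# [('create_table', 'users'), ('add_column', 'users', 'id'),
#  ('add_column', 'users', 'email')]  (the add_column order may vary)
```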