
    How to Handle Assumptions in Synthesis

    The increased interest in reactive synthesis over the last decade has led to many improved solutions but also to many new questions. In this paper, we discuss the question of how to deal with assumptions on environment behavior. We present four goals that we think should be met and review several different possibilities that have been proposed. We argue that each of them falls short in at least one aspect. Comment: In Proceedings SYNT 2014, arXiv:1407.493
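
    For orientation, specifications with environment assumptions are typically written as an implication from assumptions to guarantees; in the GR(1) fragment this takes the following shape (a sketch in standard notation, not taken from the paper):

        \[
          \varphi \;=\; \big(\theta_e \wedge \mathbf{G}\,\psi_e \wedge \textstyle\bigwedge_i \mathbf{GF}\, J^e_i\big) \;\rightarrow\; \big(\theta_s \wedge \mathbf{G}\,\psi_s \wedge \textstyle\bigwedge_j \mathbf{GF}\, J^s_j\big)
        \]

    One well-known pitfall of this reading is that a system may satisfy the implication vacuously by forcing the environment to violate its own assumptions, which is exactly the kind of behavior that goals for assumption handling aim to rule out.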

    Synthesizing Robust Systems with RATSY

    Specifications for reactive systems often consist of environment assumptions and system guarantees. An implementation should not only be correct, but also robust in the sense that it behaves reasonably even when the assumptions are (temporarily) violated. We present an extension of the requirements analysis and synthesis tool RATSY that is able to synthesize robust systems from GR(1) specifications, i.e., systems in which a finite number of safety assumption violations is guaranteed to induce only a finite number of safety guarantee violations. We show how the specification can be turned into a two-pair Streett game, and how a winning strategy corresponding to a correct and robust implementation can be computed. Finally, we provide some experimental results. Comment: In Proceedings SYNT 2012, arXiv:1207.055
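
    To see where a Streett condition of this kind comes from, write viol_A and viol_G for steps at which a safety assumption or a safety guarantee is violated. The robustness requirement that finitely many assumption violations induce only finitely many guarantee violations can then be rendered as follows (our sketch; the paper's construction differs in detail):

        \[
          \mathbf{FG}\,\neg viol_A \;\rightarrow\; \mathbf{FG}\,\neg viol_G \;\equiv\; \mathbf{GF}\, viol_G \;\rightarrow\; \mathbf{GF}\, viol_A
        \]

    The right-hand side is exactly a Streett pair: infinitely many guarantee violations require infinitely many assumption violations. A second pair accounts for correctness with respect to the underlying GR(1) specification, yielding the two-pair Streett game mentioned above.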

    Designing reliable cyber-physical systems: overview associated to the special session at FDL’16

    Cyber-physical systems (CPS), which consist of a cyber part (a computing system) and a physical part (the system in the physical environment), as well as the respective interfaces between those parts, are omnipresent in our daily lives. The application in the physical environment drives the overall requirements that must be respected when designing the computing system. Here, reliability is a core aspect, where some of the most pressing design challenges are:
    • monitoring failures throughout the computing system,
    • determining the impact of failures on the application constraints, and
    • ensuring correctness of the computing system with respect to application-driven requirements rooted in the physical environment.
    This paper provides an overview of techniques discussed in the special session to tackle these challenges throughout the stack of layers of the computing system while tightly coupling the design methodology to the physical requirements.

    The Second Reactive Synthesis Competition (SYNTCOMP 2015)

    We report on the design and results of the second reactive synthesis competition (SYNTCOMP 2015). We describe our extended benchmark library, with 6 completely new sets of benchmarks and additional challenging instances for 4 of the benchmark sets that were already used in SYNTCOMP 2014. To enhance the analysis of experimental results, we introduce an extension of our benchmark format with meta-information, including a difficulty rating and a reference size for solutions. Tools are evaluated on a set of 250 benchmarks, selected to provide good coverage of benchmarks from all classes and difficulties. We report on changes to the evaluation scheme and the experimental setup. Finally, we describe the entrants into SYNTCOMP 2015, as well as the results of our experimental evaluation. In our analysis, we emphasize progress relative to the tools that participated last year. Comment: In Proceedings SYNT 2015, arXiv:1602.0078
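
    For context, benchmarks in this track are AIGER circuits in which controllable inputs are marked by the controllable_ prefix in the symbol table, and meta-information of the kind described above can live in the file's comment section. A minimal sketch of such a file follows; the key names DIFFICULTY and REFERENCE_SIZE are our placeholders, not the competition's documented syntax:

        aag 4 2 1 1 1
        2
        4
        6 8
        6
        8 2 4
        i0 request
        i1 controllable_grant
        l0 err_latch
        o0 err
        c
        hypothetical meta-information keys:
        DIFFICULTY : 2
        REFERENCE_SIZE : 9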

    The 3rd Reactive Synthesis Competition (SYNTCOMP 2016): Benchmarks, Participants & Results

    We report on the benchmarks, participants and results of the third reactive synthesis competition (SYNTCOMP 2016). The benchmark library of SYNTCOMP 2016 has been extended with benchmarks in the new LTL-based temporal logic synthesis format (TLSF), and with 2 new sets of benchmarks for the existing AIGER-based format for safety specifications. The participants of SYNTCOMP 2016 can be separated according to these two classes of specifications, and we give an overview of the 6 tools that entered the competition in the AIGER-based track, and the 3 participants that entered the TLSF-based track. We briefly describe the benchmark selection, evaluation scheme and the experimental setup of SYNTCOMP 2016. Finally, we present and analyze the results of our experimental evaluation, including a comparison to participants of previous competitions and a legacy tool. Comment: In Proceedings SYNT 2016, arXiv:1611.0717
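
    To give a flavor of the new format, the following is a minimal TLSF specification of a two-client arbiter (our own toy example written in the published TLSF syntax, not one of the competition benchmarks):

        INFO {
          TITLE:       "Simple two-client arbiter"
          DESCRIPTION: "Mutual exclusion plus eventual grants"
          SEMANTICS:   Mealy
          TARGET:      Mealy
        }
        MAIN {
          INPUTS {
            r0;
            r1;
          }
          OUTPUTS {
            g0;
            g1;
          }
          GUARANTEES {
            G (!g0 || !g1);
            G (r0 -> F g0);
            G (r1 -> F g1);
          }
        }

    The INFO section carries meta-data and the intended semantics, while MAIN separates uncontrollable inputs from controllable outputs and states the temporal guarantees; assumptions on the environment would go into an additional ASSUMPTIONS block.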