The Reactive Synthesis Competition: SYNTCOMP 2016 and Beyond
We report on the design of the third reactive synthesis competition (SYNTCOMP
2016), including a major extension of the competition to specifications in full
linear temporal logic. We give a brief overview of the synthesis problem as
considered in SYNTCOMP, and present the rules of the competition in 2016, as
well as the ideas behind our design choices. Furthermore, we evaluate the
recent changes to the competition based on the experiences with SYNTCOMP 2016.
Finally, we give an outlook on further changes and extensions of the
competition that are planned for the future.
Comment: In Proceedings SYNT 2016, arXiv:1611.0717
The 4th Reactive Synthesis Competition (SYNTCOMP 2017): Benchmarks, Participants & Results
We report on the fourth reactive synthesis competition (SYNTCOMP 2017). We
introduce two new benchmark classes that have been added to the SYNTCOMP
library, and briefly describe the benchmark selection, evaluation scheme and
the experimental setup of SYNTCOMP 2017. We present the participants of
SYNTCOMP 2017, with a focus on changes with respect to the previous years and
on the two completely new tools that have entered the competition. Finally, we
present and analyze the results of our experimental evaluation, including a
ranking of tools with respect to quantity and quality of solutions.
Comment: In Proceedings SYNT 2017, arXiv:1711.10224. arXiv admin note: text
overlap with arXiv:1609.0050
The 3rd Reactive Synthesis Competition (SYNTCOMP 2016): Benchmarks, Participants & Results
We report on the benchmarks, participants and results of the third reactive
synthesis competition (SYNTCOMP 2016). The benchmark library of SYNTCOMP 2016
has been extended with benchmarks in the new LTL-based temporal logic synthesis
format (TLSF), and with two new sets of benchmarks for the existing AIGER-based
format for safety specifications. The participants of SYNTCOMP 2016 can be
separated according to these two classes of specifications, and we give an
overview of the six tools that entered the competition in the AIGER-based track
and the three participants that entered the TLSF-based track. We briefly describe the
benchmark selection, evaluation scheme and the experimental setup of SYNTCOMP
2016. Finally, we present and analyze the results of our experimental
evaluation, including a comparison to participants of previous competitions and
a legacy tool.
Comment: In Proceedings SYNT 2016, arXiv:1611.0717
The Second Reactive Synthesis Competition (SYNTCOMP 2015)
We report on the design and results of the second reactive synthesis
competition (SYNTCOMP 2015). We describe our extended benchmark library, with
six completely new sets of benchmarks and additional challenging instances for
four of the benchmark sets that were already used in SYNTCOMP 2014. To enhance the
analysis of experimental results, we introduce an extension of our benchmark
format with meta-information, including a difficulty rating and a reference
size for solutions. Tools are evaluated on a set of 250 benchmarks, selected to
provide a good coverage of benchmarks from all classes and difficulties. We
report on changes of the evaluation scheme and the experimental setup. Finally,
we describe the entrants into SYNTCOMP 2015, as well as the results of our
experimental evaluation. In our analysis, we emphasize the progress made by
tools that also participated in the previous year.
Comment: In Proceedings SYNT 2015, arXiv:1602.0078
EDACC - an advanced platform for the experiment design, administration and analysis of empirical algorithms
The design, execution and analysis of experiments with heuristic algorithms can be a very time-consuming task in the development of an algorithm, and many practical problems have to be solved along the way. To speed up this process, we have designed and implemented a framework called EDACC, which supports all of the tasks that arise in experimentation with algorithms. A graphical user interface, backed by a database, facilitates the archiving and management of solvers and problem instances. It also enables the creation of complex experiments and the generation of the computation jobs needed to perform them. Running the jobs on an arbitrary computer system (or computer cluster or grid) is handled by a compute client designed to maximize computation throughput. Running jobs can be monitored in real time with the GUI or with a web frontend, both of which provide a wide variety of descriptive statistics and statistical tests for analyzing the results. The web frontend also provides all the tools needed for the organization and execution of solver competitions.