    A Fast Algorithm Finding the Shortest Reset Words

    In this paper we present a new fast algorithm finding minimal reset words for finite synchronizing automata. The problem is known to be computationally hard, and our algorithm is exponential. Yet, it is faster than the algorithms used so far and it works well in practice. The main idea is to use a bidirectional BFS and radix (Patricia) tries to store and compare the resulting subsets. We give both theoretical and practical arguments showing that the branching factor is reduced efficiently. As a practical test, we perform an experimental study of the length of the shortest reset word for random automata with $n$ states and 2 input letters. We follow Skvortsov and Tipikin, who have performed such a study using a SAT solver and considering automata with up to $n=100$ states. With our algorithm we are able to consider a much larger sample of automata with up to $n=300$ states. In particular, we obtain a new, more precise estimate of the expected length of the shortest reset word, $\approx 2.5\sqrt{n-5}$. Comment: COCOON 2013. The final publication is available at http://link.springer.com/chapter/10.1007%2F978-3-642-38768-5_1
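
    To make the problem the paper accelerates concrete, here is a minimal sketch (our own illustrative code, not the authors') of the naive exact search: a breadth-first search over subsets of states encoded as bitmasks, starting from the full state set and stopping at the first singleton. The bidirectional search and the Patricia-trie storage that give the paper its speed are deliberately not reproduced.

```python
from collections import deque

def shortest_reset_word(n, delta):
    """Naive exact search: BFS over subsets of states, encoded as bitmasks.

    n     -- number of states (0 .. n-1)
    delta -- delta[a][p] = state reached from p on input letter a
    Returns a shortest reset word as a list of letters, or None if the
    automaton is not synchronizing.
    """
    full = (1 << n) - 1
    seen = {full: []}
    queue = deque([full])
    while queue:
        subset = queue.popleft()
        word = seen[subset]
        if subset & (subset - 1) == 0:           # singleton reached
            return word
        for a, d in enumerate(delta):
            image, s = 0, subset
            while s:                             # image of the subset under letter a
                p = (s & -s).bit_length() - 1    # extract one state
                image |= 1 << d[p]
                s &= s - 1
            if image not in seen:
                seen[image] = word + [a]
                queue.append(image)
    return None

# Cerny automaton C_4: letter 0 cycles the states, letter 1 maps state 0 to 1.
# Its shortest reset word has length (4 - 1)^2 = 9.
print(shortest_reset_word(4, [[1, 2, 3, 0], [1, 1, 2, 3]]))
```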

    Synchronizing heuristics for weakly connected automata with various topologies

    Since the problem of finding a shortest synchronizing sequence for an automaton is known to be NP-hard, heuristic algorithms are used to find synchronizing sequences. There are several heuristic algorithms in the literature for this purpose. However, even the most efficient heuristic algorithm in the literature has a quadratic complexity in terms of the number of states of the automaton, and therefore can only scale up to a couple of thousand states. It was also shown before that if an automaton is not strongly connected, then these heuristic algorithms can be used on each strongly connected component separately. This approach speeds up these heuristic algorithms and allows them to scale easily to a much larger number of states. In this paper, we investigate the effect of the topology of the automaton on the performance increase obtained by these heuristic algorithms. To this end, we consider various topologies and provide an extensive experimental study of the performance increase obtained on the existing heuristic algorithms. Depending on the size and the number of components, we obtain speed-ups as high as 10,000x and more.
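
    As a rough illustration of the decomposition this line of work builds on, the sketch below (our own code, with an assumed transition encoding) computes the strongly connected components of an automaton's transition graph with Tarjan's algorithm; the heuristics would then be run on each component separately and the per-component words combined, a step specific to the cited approach and not reproduced here.

```python
import sys
from collections import defaultdict

def strongly_connected_components(n, delta):
    """Tarjan's algorithm on the transition graph of an automaton.

    n     -- number of states, delta[a][p] -- successor of p on letter a.
    Returns a list of components (each a list of states).
    """
    succ = defaultdict(set)
    for d in delta:
        for p, q in enumerate(d):
            succ[p].add(q)

    sys.setrecursionlimit(max(10000, 2 * n + 100))   # recursive sketch; fine for modest n
    index, low, on_stack = {}, {}, set()
    stack, comps, counter = [], [], [0]

    def visit(v):
        index[v] = low[v] = counter[0]
        counter[0] += 1
        stack.append(v)
        on_stack.add(v)
        for w in succ[v]:
            if w not in index:
                visit(w)
                low[v] = min(low[v], low[w])
            elif w in on_stack:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:                       # v is the root of a component
            comp = []
            while True:
                w = stack.pop()
                on_stack.discard(w)
                comp.append(w)
                if w == v:
                    break
            comps.append(comp)

    for v in range(n):
        if v not in index:
            visit(v)
    return comps
```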

    Boosting expensive synchronizing heuristics

    For automata, synchronization, the problem of bringing an automaton to a particular state regardless of its initial state, is important. It has several applications in practice and is related to a fifty-year-old conjecture on the length of the shortest synchronizing word. Although using shorter words increases the effectiveness in practice, finding a shortest one (which is not necessarily unique) is NP-hard. For this reason, there exist various heuristics in the literature. However, high-quality heuristics such as SynchroP, which produce relatively shorter sequences, are very expensive and can take hours when the automaton has tens of thousands of states. The SynchroP heuristic has frequently been used as a benchmark to evaluate the performance of new heuristics. In this work, we first improve the runtime of SynchroP and its variants by using algorithmic techniques. We then focus on adapting SynchroP for many-core architectures and, overall, obtain more than a 1000× speedup on GPUs compared to the naive sequential implementation. We also propose two SynchroP variants and evaluate their performance.
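
    To make the expensive part of the heuristic concrete: SynchroP-style heuristics, as we understand them, score a candidate set of active states by summing, over all pairs in the set, the length of a shortest word merging that pair, which requires a BFS over the pair automaton. The sketch below (illustrative names and encoding, not the authors' code) computes that distance table and the set cost naively; the algorithmic and GPU optimisations the paper reports are precisely about avoiding this naive recomputation and are not shown.

```python
from collections import deque
from itertools import combinations

def pair_distances(n, delta):
    """dist[p][q] = length of a shortest word mapping states p and q to the same state.

    Computed by a BFS on the pair automaton, backwards from the already-merged
    (diagonal) pairs, using reverse transitions.
    """
    k = len(delta)
    rev = [[[] for _ in range(n)] for _ in range(k)]   # rev[a][q] = preimage of q under a
    for a in range(k):
        for p in range(n):
            rev[a][delta[a][p]].append(p)

    INF = float("inf")
    dist = [[INF] * n for _ in range(n)]
    queue = deque()
    for s in range(n):
        dist[s][s] = 0
        queue.append((s, s))
    while queue:
        p, q = queue.popleft()
        for a in range(k):
            for p2 in rev[a][p]:
                for q2 in rev[a][q]:
                    if dist[p2][q2] == INF:
                        dist[p2][q2] = dist[q2][p2] = dist[p][q] + 1
                        queue.append((p2, q2))
    return dist

def set_cost(active, dist):
    """SynchroP-style cost of a set of active states: sum of pairwise merging distances."""
    return sum(dist[p][q] for p, q in combinations(sorted(active), 2))
```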

    Using synchronizing heuristics to construct homing sequences

    Computing a shortest synchronizing sequence of an automaton is an NP-hard problem. There are well-known heuristics to find short synchronizing sequences. Finding a shortest homing sequence is also an NP-hard problem. Unlike the existing heuristics for finding synchronizing sequences, homing heuristics are not widely studied. In this paper, we establish a relation between synchronizing and homing sequences by constructing an automaton called a homing automaton. By applying synchronizing heuristics to this automaton we obtain short homing sequences. Furthermore, we adapt some of the synchronizing heuristics to construct homing sequences.
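
    For reference, a homing sequence of a Mealy machine is an input word after which the final state can be deduced from the observed output sequence. The small checker below (our own illustrative encoding of transitions and outputs) only validates a candidate word against this textbook definition; it does not reproduce the paper's homing-automaton construction.

```python
from collections import defaultdict

def is_homing_sequence(word, delta, lam):
    """Check the textbook homing-sequence property for a Mealy machine.

    delta[a][s] -- next state and lam[a][s] -- output produced when letter a is
    applied in state s.  `word` is homing if any two initial states producing
    the same output sequence also end in the same final state.
    """
    n = len(delta[0])
    finals_by_output = defaultdict(set)
    for start in range(n):
        state, outputs = start, []
        for a in word:
            outputs.append(lam[a][state])
            state = delta[a][state]
        finals_by_output[tuple(outputs)].add(state)
    return all(len(finals) == 1 for finals in finals_by_output.values())
```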

    An Improved Algorithm for Finding the Shortest Synchronizing Words

    A synchronizing word of a deterministic finite complete automaton is a word whose action maps every state to a single one. Finding a shortest or a short synchronizing word is a central computational problem in the theory of synchronizing automata and is applied in other areas such as model-based testing and the theory of codes. Because the problem of finding a shortest synchronizing word is computationally hard, only exponential exact algorithms are known. We redesign the previously fastest known exact algorithm, based on the bidirectional breadth-first search, and improve it with respect to time and space in a practical sense. We develop new algorithmic enhancements and adapt the algorithm to multithreaded and GPU computing. Our experiments show that the new algorithm is multiple times faster than the previously fastest one and that its advantage grows quickly with the hardness of the problem instance. Given a modest time limit, we compute the lengths of the shortest synchronizing words for random binary automata with up to 570 states, significantly beating the previous record. We refine the experimental estimate of the average reset threshold of these automata. Finally, we develop a general computational package devoted to the problem, which includes an efficient and practical implementation of our algorithm together with several well-known heuristics. Comment: Full version of ESA 2022 paper.
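
    To give a flavour of the bidirectional scheme: the forward direction explores images of the full state set (as in the sketch after the first abstract above), while the backward direction explores preimages of the singleton sets; a reset word is certified as soon as some forward image is contained in some backward set. The sketch below (our own naive code, with bitmask-encoded subsets) shows the backward expansion and the inclusion test; the paper's contribution lies in how these subsets are stored (radix tries), pruned, and parallelized, which is not reproduced.

```python
def preimage(subset, d, n):
    """Preimage of a bitmask-encoded set of states under one letter (transition list d)."""
    return sum(1 << p for p in range(n) if (subset >> d[p]) & 1)

def backward_layers(n, delta, depth):
    """Layer k holds the full preimages of the singletons under all words of length k."""
    layer = {1 << q for q in range(n)}            # length 0: the singletons themselves
    layers = [set(layer)]
    for _ in range(depth):
        layer = {preimage(b, d, n) for b in layer for d in delta}
        layers.append(layer)
    return layers

def meets(forward_images, backward):
    """A forward image F contained in a backward set B certifies a reset word:
    the word that produced F followed by the word associated with B."""
    return any(f & ~b == 0
               for f in forward_images
               for layer in backward
               for b in layer)
```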

    Fitness landscape of the cellular automata majority problem: View from the Olympus

    In this paper we study cellular automata (CAs) that perform the computational Majority task. This task is a good example of the phenomenon of emergence in complex systems. We are interested in the reasons that make this particular fitness landscape a difficult one. The first goal is to study the landscape as such, ideally independently of the actual heuristics used to search the space. A second goal is to understand the features a good search technique for this particular problem space should possess. We statistically quantify in various ways the degree of difficulty of searching this landscape. Due to neutrality, investigations based on sampling techniques over the whole landscape are difficult to conduct. We therefore explore the landscape from the top. Although it has been proved that no CA can perform the task perfectly, several efficient CAs for this task have been found. Exploiting similarities between these CAs and symmetries in the landscape, we define the Olympus landscape, which is regarded as the "heavenly home" of the best local optima known (blok). Then we measure several properties of this subspace. Although it is easier to find relevant CAs in this subspace than in the overall landscape, there are structural reasons that prevent a searcher from finding overfitted CAs in the Olympus. Finally, we study the dynamics and performance of genetic algorithms on the Olympus in order to confirm our analysis and to find efficient CAs for the Majority problem with low computational cost.
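
    For readers unfamiliar with the task: a candidate solution is a binary lookup table for a one-dimensional, two-state CA (conventionally radius 3, hence 128 rule bits), and its fitness is the fraction of random initial configurations it classifies correctly, i.e. drives to all ones when ones are in the majority and to all zeros otherwise. The sketch below evaluates such a rule under the lattice size (149 cells) and step budget commonly used in this literature; the paper's exact sampling of initial configurations may differ, and the function name is an illustrative assumption.

```python
import random

def majority_fitness(rule_bits, n_cells=149, radius=3, steps=320, trials=100, seed=0):
    """Fraction of random initial configurations classified correctly.

    rule_bits -- lookup table of length 2**(2*radius + 1); the neighbourhood,
                 read left to right as a binary number, indexes the new value.
    """
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        cells = [rng.randint(0, 1) for _ in range(n_cells)]
        target = 1 if 2 * sum(cells) > n_cells else 0      # n_cells is odd: no ties
        for _ in range(steps):
            nxt = []
            for i in range(n_cells):
                idx = 0
                for j in range(i - radius, i + radius + 1):
                    idx = (idx << 1) | cells[j % n_cells]  # periodic boundary
                nxt.append(rule_bits[idx])
            if nxt == cells:                               # fixed point: stop early
                break
            cells = nxt
        if all(c == target for c in cells):
            correct += 1
    return correct / trials
```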

    Algorithmic optimization and parallelization of Eppstein's synchronizing heuristic

    Testing is the most expensive and time-consuming phase in the development of complex systems. Model-based testing is an approach that can be used to automate the generation of high-quality test suites, which is the most challenging part of testing. Formal models, such as finite state machines or automata, have been used as specifications from which the test suites can be automatically generated. The tests are applied after the system is synchronized to a particular state, which can be accomplished by using a synchronizing word. Computing a shortest synchronizing word is of interest for practical purposes, e.g., for a shorter testing time. However, computing a shortest synchronizing word is an NP-hard problem. Therefore, heuristics are used to compute short synchronizing words. GREEDY is one of the fastest synchronizing heuristics currently known. In this thesis, we present approaches to accelerate the GREEDY algorithm. First, we focus on the parallelization of GREEDY. Second, we propose a lazy execution of the preprocessing phase of the algorithm, postponing the preparation of the required information until it is to be used in the reset word generation phase. We also suggest other algorithmic enhancements for the implementation of the heuristic. Our experimental results show that, depending on the automaton size, GREEDY can be made 500x faster. The suggested improvements become more effective as the size of the automaton increases.
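
    A compact sketch of the GREEDY (Eppstein-style) heuristic the thesis optimises, under our own encoding assumptions: precompute, for every pair of states, the first letter of a shortest merging word via a BFS on the pair automaton, then repeatedly merge a cheapest pair of the current active set. The parallelization and lazy preprocessing described in the thesis target exactly this precomputation and are not shown here.

```python
from collections import deque

def greedy_synchronizing_word(n, delta):
    """Eppstein-style GREEDY heuristic (minimal sketch).

    delta[a][p] -- successor of state p on letter a.  Returns a (not
    necessarily shortest) reset word as a list of letters, or None if the
    automaton is not synchronizing.
    """
    k = len(delta)
    rev = [[[] for _ in range(n)] for _ in range(k)]
    for a in range(k):
        for p in range(n):
            rev[a][delta[a][p]].append(p)

    # BFS on the pair automaton: shortest merging distance and the first letter
    # of one shortest merging word, for every pair of states.
    INF = float("inf")
    dist = [[INF] * n for _ in range(n)]
    first = [[None] * n for _ in range(n)]
    queue = deque()
    for s in range(n):
        dist[s][s] = 0
        queue.append((s, s))
    while queue:
        p, q = queue.popleft()
        for a in range(k):
            for p2 in rev[a][p]:
                for q2 in rev[a][q]:
                    if p2 != q2 and dist[p2][q2] == INF:
                        dist[p2][q2] = dist[q2][p2] = dist[p][q] + 1
                        first[p2][q2] = first[q2][p2] = a
                        queue.append((p2, q2))

    active, word = set(range(n)), []
    while len(active) > 1:
        p, q = min(((p, q) for p in active for q in active if p < q),
                   key=lambda pq: dist[pq[0]][pq[1]])
        if dist[p][q] == INF:
            return None                      # some pair can never be merged
        while p != q:                        # replay that pair's shortest merging word
            a = first[p][q]
            word.append(a)
            active = {delta[a][s] for s in active}
            p, q = delta[a][p], delta[a][q]
    return word
```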