
    Forgetting 1-Limited Automata

    We introduce and investigate forgetting 1-limited automata, which are single-tape Turing machines that, when visiting a cell for the first time, replace the input symbol in it with a fixed symbol, thus forgetting the original contents. These devices have the same computational power as finite automata: they characterize the class of regular languages. We study the size cost of converting forgetting 1-limited automata, in both the nondeterministic and the deterministic case, into equivalent one-way nondeterministic and deterministic automata, providing optimal bounds in terms of exponential or superpolynomial functions. We also discuss the size relationships with two-way finite automata. In this respect, we prove the existence of a language for which forgetting 1-limited automata are exponentially larger than equivalent minimal deterministic two-way automata.
    Comment: In Proceedings NCMA 2023, arXiv:2309.0733
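
    The device description above can be made concrete with a small simulator. The following is a minimal Python sketch under assumed conventions (explicit endmarkers, acceptance by moving past the right endmarker in an accepting state); the encoding and names are illustrative, not the paper's formal definition.

        # Illustrative simulator for a deterministic forgetting 1-limited automaton
        # (hypothetical encoding, not the paper's definition).
        # delta maps (state, symbol) -> (new_state, direction), direction in {-1, +1}.
        # On the first visit to an input cell the original symbol is read and then
        # overwritten with the fixed symbol FORGOTTEN; later visits only see FORGOTTEN.

        FORGOTTEN = "#"

        def run_forgetting_1la(delta, start, accepting, word, max_steps=10_000):
            tape = ["<"] + list(word) + [">"]   # '<' and '>' act as endmarkers
            visited = [False] * len(tape)
            state, pos = start, 1               # head starts on the first input cell
            for _ in range(max_steps):
                sym = tape[pos]
                if 1 <= pos <= len(word) and not visited[pos]:
                    visited[pos] = True
                    tape[pos] = FORGOTTEN       # forget the original content
                if (state, sym) not in delta:
                    return False                # no move defined: reject
                state, move = delta[(state, sym)]
                pos += move
                if pos >= len(tape):            # moved past the right endmarker
                    return state in accepting
                if pos < 0:
                    return False
            return False

        # Example: a two-state machine that scans the input once and accepts
        # words of even length (a regular language, as the characterization predicts).
        delta = {
            ("even", "a"): ("odd", +1), ("odd", "a"): ("even", +1),
            ("even", "b"): ("odd", +1), ("odd", "b"): ("even", +1),
            ("even", ">"): ("even", +1), ("odd", ">"): ("odd", +1),
        }
        print(run_forgetting_1la(delta, "even", {"even"}, "abab"))  # True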

    A Modular Formalization of Reversibility for Concurrent Models and Languages

    Causal-consistent reversibility is the reference notion of reversibility for concurrency. We introduce a modular framework for defining causal-consistent reversible extensions of concurrent models and languages. We show how our framework can be used to define reversible extensions of formalisms as different as CCS and concurrent X-machines. The generality of the approach allows for the reuse of theories and techniques in different settings.
    Comment: In Proceedings ICE 2016, arXiv:1608.0313
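
    To give a flavour of causal-consistent undoing, here is a toy Python sketch: executed events record their causal predecessors, and an event may be undone only when nothing that depends on it is still present. The History class and its interface are invented for illustration and are not the framework defined in the paper.

        # Toy illustration of causal-consistent undo (not the paper's framework).
        # Each executed event records the set of earlier events it causally depends on.
        # An event may be undone only if no still-present event depends on it, so
        # histories are unwound respecting causality rather than strict temporal order.

        class History:
            def __init__(self):
                self.events = {}            # event id -> set of ids it depends on

            def do(self, eid, causes=()):
                assert all(c in self.events for c in causes)
                self.events[eid] = set(causes)

            def can_undo(self, eid):
                return eid in self.events and all(
                    eid not in deps for deps in self.events.values())

            def undo(self, eid):
                if not self.can_undo(eid):
                    raise ValueError(f"{eid} still has causal dependants")
                del self.events[eid]

        h = History()
        h.do("a")                  # independent actions on two "threads"
        h.do("b")
        h.do("c", causes=["a"])    # c causally depends on a
        print(h.can_undo("a"))     # False: c still depends on a
        print(h.can_undo("b"))     # True: b is concurrent with a and c
        h.undo("c"); h.undo("a")   # undo in reverse causal order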

    Infinite games with finite knowledge gaps

    Infinite games where several players seek to coordinate under imperfect information are deemed to be undecidable, unless the information is hierarchically ordered among the players. We identify a class of games for which joint winning strategies can be constructed effectively without restricting the direction of information flow. Instead, our condition requires that the players attain common knowledge about the actual state of the game over and over again along every play. We show that it is decidable whether a given game satisfies the condition, and prove tight complexity bounds for the strategy synthesis problem under ω-regular winning conditions given by parity automata.
    Comment: 39 pages; 2nd revision; submitted to Information and Computation
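
    As background on the winning conditions mentioned above, the Python sketch below evaluates a (max-)parity condition on an ultimately periodic play; the priority map and the max-parity convention are assumptions for illustration, and this is not the synthesis procedure studied in the paper.

        # Background illustration (not the paper's synthesis procedure): evaluating a
        # max-parity winning condition on an ultimately periodic play prefix.cycle^omega.
        # Only the cycle matters: the finite prefix does not affect which priorities
        # occur infinitely often, and the play is winning iff the highest such priority is even.

        def parity_wins(priorities, prefix, cycle):
            """priorities maps a state to its priority; prefix/cycle are lists of states."""
            assert cycle, "the repeated part of the play must be non-empty"
            return max(priorities[s] for s in cycle) % 2 == 0

        priorities = {"q0": 1, "q1": 2, "q2": 3}
        print(parity_wins(priorities, ["q0"], ["q1", "q0"]))        # True: max priority on the cycle is 2
        print(parity_wins(priorities, ["q0", "q1"], ["q2", "q1"]))  # False: max priority on the cycle is 3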

    Generating Strong Diversity of Opinions: Agent Models of Continuous Opinion Dynamics

    Opinion dynamics is the study of how opinions in a group of individuals change over time. A goal of opinion dynamics modelers has long been to find a social science-based model that generates strong diversity: smooth, stable, possibly multi-modal distributions of opinions. This research lays the foundations for and develops such a model. First, a taxonomy is developed to precisely describe agent schedules in an opinion dynamics model. The importance of scheduling is shown with applications to generalized forms of two models. Next, the meta-contrast influence field (MIF) model is defined. It is rooted in self-categorization theory and improves on the existing meta-contrast model by providing a properly scaled, continuous influence basis. Finally, the MIF-Local Repulsion (MIF-LR) model is developed and presented. This augments the MIF model with a formulation of uniqueness theory. The MIF-LR model generates strong diversity. An application of the model shows that partisan polarization can be explained by increased non-local social ties enabled by communications technology.
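
    For readers unfamiliar with continuous opinion dynamics, the toy Python sketch below shows a generic update rule with attraction inside a confidence bound and weak repulsion outside it; the rule and all parameters are invented for illustration and are not the MIF or MIF-LR model developed in this research.

        # Generic continuous opinion dynamics sketch: attraction toward nearby opinions,
        # weak repulsion from distant ones.  An illustrative toy model only; the update
        # rule and the parameters eps, mu, rho are made up for this example.

        import random

        def step(opinions, eps=0.2, mu=0.3, rho=0.02):
            i, j = random.sample(range(len(opinions)), 2)
            diff = opinions[j] - opinions[i]
            if abs(diff) <= eps:
                opinions[i] += mu * diff          # attraction: move toward close peers
            else:
                opinions[i] -= rho * diff         # repulsion: move away from distant peers
            opinions[i] = min(1.0, max(0.0, opinions[i]))  # keep opinions in [0, 1]

        opinions = [random.random() for _ in range(200)]
        for _ in range(50_000):
            step(opinions)
        # After many updates the population typically settles into several clusters,
        # one way to read "strong diversity" as a multi-modal opinion distribution.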

    On state-alternating context-free grammars

    State-alternating context-free grammars are introduced, and the language classes obtained from them are compared to the classes of the Chomsky hierarchy as well as to some well-known complexity classes. In particular, state-alternating context-free grammars are compared to alternating context-free grammars (Theoret. Comput. Sci. 67 (1989) 75–85) and to alternating pushdown automata. Further, various derivation strategies are considered, and their influence on the expressive power of (state-)alternating context-free grammars is investigated.

    Incremental construction of LSTM recurrent neural network

    Long Short-Term Memory (LSTM) is a recurrent neural network that uses structures called memory blocks to allow the network to remember significant events that lie far back in the input sequence, which lets it solve long time lag tasks where other RNN approaches fail. Throughout this work we have performed experiments using LSTM networks extended with growing abilities, which we call GLSTM. Four methods of training growing LSTMs have been compared. These methods include cascade and fully connected hidden layers as well as two different levels of freezing previous weights in the cascade case. GLSTM has been applied to a forecasting problem in a biomedical domain, where the input/output behavior of five controllers of the Central Nervous System has to be modelled. We have compared growing LSTM results against other neural network approaches and against our work applying conventional LSTM to the task at hand.
    Postprint (published version)
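
    One plausible way to realize the cascade-with-freezing idea is sketched below in Python with PyTorch: a new LSTM layer is appended on top of the existing stack and earlier weights are frozen so that only the new parts are trained. The class, layer sizes, and growth policy are assumptions for illustration, not the exact GLSTM procedure used in this work.

        # Illustrative sketch of growing an LSTM by adding a cascade layer and
        # freezing previously trained weights; not the exact GLSTM method above.

        import torch
        import torch.nn as nn

        class GrowingLSTM(nn.Module):
            def __init__(self, input_size, hidden_size, output_size):
                super().__init__()
                self.layers = nn.ModuleList(
                    [nn.LSTM(input_size, hidden_size, batch_first=True)])
                self.readout = nn.Linear(hidden_size, output_size)

            def grow(self, hidden_size):
                # Freeze everything trained so far (one of the strategies compared
                # in the work is to keep earlier cascade weights fixed).
                for p in self.layers.parameters():
                    p.requires_grad = False
                prev_size = self.layers[-1].hidden_size
                self.layers.append(nn.LSTM(prev_size, hidden_size, batch_first=True))
                self.readout = nn.Linear(hidden_size, self.readout.out_features)

            def forward(self, x):
                for lstm in self.layers:        # cascade: each layer feeds the next
                    x, _ = lstm(x)
                return self.readout(x[:, -1])   # predict from the last time step

        model = GrowingLSTM(input_size=5, hidden_size=16, output_size=5)
        # ... train, then add capacity and train only the new parts:
        model.grow(hidden_size=16)
        trainable = [p for p in model.parameters() if p.requires_grad]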