Forgetting 1-Limited Automata
We introduce and investigate forgetting 1-limited automata, which are
single-tape Turing machines that, when visiting a cell for the first time,
replace the input symbol in it by a fixed symbol, so forgetting the original
contents. These devices have the same computational power as finite automata,
namely they characterize the class of regular languages. We study the cost in
size of the conversions of forgetting 1-limited automata, in both
nondeterministic and deterministic cases, into equivalent one-way
nondeterministic and deterministic automata, providing optimal bounds in terms
of exponential or superpolynomial functions. We also discuss the size
relationships with two-way finite automata. In this respect, we prove the
existence of a language for which forgetting 1-limited automata are
exponentially larger than equivalent minimal deterministic two-way automata.
Comment: In Proceedings NCMA 2023, arXiv:2309.0733
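As a point of reference (not a construction from the paper), the deterministic variant of such a device can be simulated directly. The sketch below assumes a transition function given as a Python dict and uses `mark` for the fixed symbol written on first visits; acceptance by falling off the right end of the tape is a simplifying assumption of this toy encoding.

```python
def run_forgetting_1la(word, delta, start, accepting, mark="X", max_steps=10_000):
    """Simulate a deterministic forgetting 1-limited automaton (toy sketch).

    delta maps (state, symbol) -> (new_state, move), move in {-1, +1}.
    On the first visit to a cell, the input symbol is replaced by the
    fixed symbol `mark`, so later visits read only the marker.
    """
    tape = list(word)
    visited = [False] * len(tape)
    state, head = start, 0
    for _ in range(max_steps):
        if head < 0:
            return False                # fell off the left end: reject
        if head >= len(tape):
            return state in accepting   # past the right end: accept iff state accepting
        sym = tape[head]
        if not visited[head]:
            tape[head] = mark           # forget the original content after first read
            visited[head] = True
        if (state, sym) not in delta:
            return False                # undefined move: reject
        state, move = delta[(state, sym)]
        head += move
    return False                        # step bound exceeded (loop safeguard)

# One-way example: accept words over {a, b} ending in 'b'
delta = {(q, s): ("saw_" + s, +1)
         for q in ("q0", "saw_a", "saw_b")
         for s in ("a", "b")}
run_forgetting_1la("aab", delta, "q0", {"saw_b"})   # True
run_forgetting_1la("aba", delta, "q0", {"saw_b"})   # False
```

Since the machine here never revisits a cell, the forgetting rewrites are harmless, matching the fact that such devices recognize exactly the regular languages.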
A Modular Formalization of Reversibility for Concurrent Models and Languages
Causal-consistent reversibility is the reference notion of reversibility for
concurrency. We introduce a modular framework for defining causal-consistent
reversible extensions of concurrent models and languages. We show how our
framework can be used to define reversible extensions of formalisms as
different as CCS and concurrent X-machines. The generality of the approach
allows for the reuse of theories and techniques in different settings.
Comment: In Proceedings ICE 2016, arXiv:1608.0313
Considerations in designing a cybernetic simple 'learning' model; and an overview of the problem of modelling learning
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. Learning is viewed as a central feature of living systems and must be manifested in any artifact that claims to exhibit general intelligence. The central aims of the thesis are twofold: (1) to review and critically assess the empirical and theoretical aspects of learning as addressed in a multitude of disciplines, with the aim of extracting fundamental features and elements; (2) to develop a more systematic approach to the cybernetic modelling of learning than has been achieved hitherto. In pursuit of aim (1), the following discussions are included: historical and philosophical backgrounds; natural learning, in both its physiological and psychological aspects; and hierarchies of learning identified in the evolutionary, functional and developmental senses. An extensive section on the general problem of modelling learning, and the formal tools for it, is included as a link between aims (1) and (2). Following this, a systematic and historically oriented study of cybernetic and other related approaches to the modelling of learning is presented. This leads to the development of a state-of-the-art general-purpose experimental cybernetic learning model. The programming and use of this model are also fully described, including an elaborate scheme for the manifestation of simple learning.
Infinite games with finite knowledge gaps
Infinite games where several players seek to coordinate under imperfect
information are deemed to be undecidable, unless the information is
hierarchically ordered among the players.
We identify a class of games for which joint winning strategies can be
constructed effectively without restricting the direction of information flow.
Instead, our condition requires that the players attain common knowledge about
the actual state of the game over and over again along every play.
We show that it is decidable whether a given game satisfies the condition,
and prove tight complexity bounds for the strategy synthesis problem under
ω-regular winning conditions given by parity automata.
Comment: 39 pages; 2nd revision; submitted to Information and Computation
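The parity condition the abstract refers to is easy to state concretely for ultimately periodic (lasso-shaped) plays. A minimal sketch, assuming priorities are assigned per state and the even player wins when the highest priority seen infinitely often is even:

```python
def parity_winning(prefix, cycle, priority):
    """Decide a parity condition on the ultimately periodic play
    prefix . cycle^omega: the priorities occurring infinitely often
    are exactly those on the cycle, so the play is winning (for the
    even player) iff the maximum priority on the cycle is even."""
    assert cycle, "cycle must be non-empty"
    return max(priority[s] for s in cycle) % 2 == 0

priority = {"a": 1, "b": 2, "c": 3}
parity_winning(["a"], ["b", "a"], priority)   # True:  max {2, 1} is even
parity_winning([], ["c", "b"], priority)      # False: max {3, 2} is odd
```

The prefix is irrelevant to the verdict, which reflects that parity conditions only constrain the infinite behaviour of a play.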
Generating Strong Diversity of Opinions: Agent Models of Continuous Opinion Dynamics
Opinion dynamics is the study of how opinions in a group of individuals change over time. A goal of opinion dynamics modelers has long been to find a social science-based model that generates strong diversity -- smooth, stable, possibly multi-modal distributions of opinions. This research lays the foundations for and develops such a model. First, a taxonomy is developed to precisely describe agent schedules in an opinion dynamics model. The importance of scheduling is shown with applications to generalized forms of two models. Next, the meta-contrast influence field (MIF) model is defined. It is rooted in self-categorization theory and improves on the existing meta-contrast model by providing a properly scaled, continuous influence basis. Finally, the MIF-Local Repulsion (MIF-LR) model is developed and presented. This augments the MIF model with a formulation of uniqueness theory. The MIF-LR model generates strong diversity. An application of the model shows that partisan polarization can be explained by increased non-local social ties enabled by communications technology.
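The MIF and MIF-LR models themselves are not specified in the abstract. For orientation, a minimal bounded-confidence update from the same family of continuous opinion dynamics (Deffuant-Weisbuch style, not the authors' model) can be sketched as:

```python
import random

def deffuant_step(opinions, eps=0.2, mu=0.5):
    """One pairwise bounded-confidence update (Deffuant-Weisbuch style).

    Two random agents compromise only if their opinions differ by less
    than the confidence bound `eps`; `mu` is the convergence rate.
    Each agent moves toward the other, so the mean opinion is conserved.
    """
    i, j = random.sample(range(len(opinions)), 2)
    if abs(opinions[i] - opinions[j]) < eps:
        shift = mu * (opinions[j] - opinions[i])
        opinions[i] += shift
        opinions[j] -= shift
    return opinions

random.seed(0)
ops = [random.random() for _ in range(100)]
for _ in range(5000):
    deffuant_step(ops)
# opinions settle into clusters separated by more than eps
```

Models of this kind produce discrete clusters rather than the smooth multi-modal distributions the abstract calls strong diversity, which is precisely the limitation the MIF-LR model is built to overcome.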
On state-alternating context-free grammars
State-alternating context-free grammars are introduced, and the language classes obtained from them are compared to the classes of the Chomsky hierarchy as well as to some well-known complexity classes. In particular, state-alternating context-free grammars are compared to alternating context-free grammars (Theoret. Comput. Sci. 67 (1989) 75–85) and to alternating pushdown automata. Further, various derivation strategies are considered, and their influence on the expressive power of (state-) alternating context-free grammars is investigated.
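To fix terminology, a leftmost derivation step for an ordinary context-free grammar (the baseline the alternating variants extend) can be sketched as follows; modelling nonterminals as uppercase letters is an assumption of this toy encoding.

```python
def leftmost_step(sentential, productions):
    """Expand the leftmost nonterminal (an uppercase letter) in every
    possible way, returning the list of successor sentential forms."""
    for i, ch in enumerate(sentential):
        if ch.isupper():            # found the leftmost nonterminal
            return [sentential[:i] + rhs + sentential[i + 1:]
                    for rhs in productions.get(ch, [])]
    return []                       # purely terminal string: no step possible

# S -> aSb | ab  generates { a^n b^n : n >= 1 }
prods = {"S": ["aSb", "ab"]}
leftmost_step("S", prods)       # ['aSb', 'ab']
leftmost_step("aSb", prods)     # ['aaSbb', 'aabb']
```

Fixing a derivation strategy such as leftmost is harmless for plain context-free grammars, but, as the abstract notes, the chosen strategy can change the expressive power once alternation is added.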
Incremental construction of LSTM recurrent neural network
Long Short-Term Memory (LSTM) is a recurrent neural network that
uses structures called memory blocks to allow the net to remember
significant events distant in the past of the input sequence, in
order to solve long-time-lag tasks where other RNN approaches fail.
Throughout this work we have performed experiments using LSTM
networks extended with growing abilities, which we call GLSTM.
Four methods of training growing LSTMs have been compared. These
methods include cascade and fully connected hidden layers, as well
as two different levels of freezing previous weights in the
cascade case. GLSTM has been applied to a forecasting problem in a
biomedical domain, where the input/output behaviour of five
controllers of the Central Nervous System has to be modelled. We
have compared growing LSTM results against other neural network
approaches, and against our previous work applying conventional
LSTM to the task at hand.
Postprint (published version)
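The GLSTM architecture is not detailed in the abstract; the cascade-growth and weight-freezing ideas it mentions can be illustrated with a toy recurrent network (a stand-in for the authors' LSTM blocks, with hypothetical class names):

```python
import numpy as np

class RecurrentLayer:
    """Toy Elman-style recurrent layer (a stand-in for an LSTM block)."""
    def __init__(self, in_dim, hidden, rng):
        self.W_in = rng.standard_normal((hidden, in_dim)) * 0.1
        self.W_rec = rng.standard_normal((hidden, hidden)) * 0.1
        self.trainable = True       # frozen layers keep their weights fixed

    def step(self, x, h):
        return np.tanh(self.W_in @ x + self.W_rec @ h)

class CascadeNet:
    """Cascade-grown network: each new layer sees the original input
    plus the outputs of all previously added layers."""
    def __init__(self, in_dim, hidden, rng):
        self.in_dim, self.hidden, self.rng = in_dim, hidden, rng
        self.layers = [RecurrentLayer(in_dim, hidden, rng)]

    def grow(self, freeze_previous=True):
        # One of the abstract's freezing levels: fix all earlier layers
        # so only the newly added layer is trained.
        if freeze_previous:
            for layer in self.layers:
                layer.trainable = False
        new_in = self.in_dim + self.hidden * len(self.layers)
        self.layers.append(RecurrentLayer(new_in, self.hidden, self.rng))

    def forward(self, x):
        states = [np.zeros(self.hidden) for _ in self.layers]
        feats, outs = x, []
        for k, layer in enumerate(self.layers):
            states[k] = layer.step(feats, states[k])
            outs.append(states[k])
            feats = np.concatenate([x] + outs)  # cascade wiring
        return states[-1]           # top layer's state is the net output
```

The training loop itself is omitted; the sketch only shows how cascade wiring enlarges each new layer's input and how freezing marks earlier layers as untrainable.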