87 research outputs found
Fuzzy multiset finite automata with output
Fuzzy multiset finite automata with output are a fuzzy version of finite automata (with output) that work over multisets. This paper introduces Mealy-like, Moore-like, and compact fuzzy multiset finite automata. Their mutual transformations are described to prove that their behaviours are equivalent. Furthermore, various variants of reduced fuzzy multiset finite automata are studied, where the reductions aim to decrease the number of fuzzy components (such as the fuzzy initial distribution, fuzzy transition relation, or fuzzy output relation) of the automata. The research confirms that all fuzzy multiset finite automata with output can be reduced without changing their behaviour.
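The fuzzy-automaton semantics referred to above can be illustrated with a minimal sketch. This is the standard max-min composition for a fuzzy Mealy-style machine over ordinary strings; the paper's automata work over multisets, which this toy example does not model, and all names and degrees below are invented for illustration:

```python
def fuzzy_mealy_degree(init, trans, out, io_pairs):
    """Degree (in [0, 1]) to which the machine maps the input word to the
    output word, using max-min composition.
    init:  {state: membership degree}
    trans: {(state, input_symbol, next_state): degree}
    out:   {(state, input_symbol, output_symbol): degree}  # Mealy-style
    io_pairs: list of (input_symbol, output_symbol) pairs
    """
    current = dict(init)  # best degree of reaching each state so far
    for a, b in io_pairs:
        nxt = {}
        for s, d in current.items():
            od = out.get((s, a, b), 0.0)    # degree of emitting b on (s, a)
            if od == 0.0:
                continue
            for (s1, a1, s2), td in trans.items():
                if s1 == s and a1 == a:
                    nd = min(d, td, od)     # min along a single run ...
                    if nd > nxt.get(s2, 0.0):
                        nxt[s2] = nd        # ... max over all runs
        current = nxt
    return max(current.values(), default=0.0)
```

For example, with `init = {"p": 1.0}`, a transition `("p", "x", "q")` of degree 0.8 and an output `("p", "x", "y")` of degree 0.6, the behaviour on `[("x", "y")]` is `min(1.0, 0.8, 0.6) = 0.6`.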
Probabilistic Black-Box Checking via Active MDP Learning
We introduce a novel methodology for testing stochastic black-box systems,
frequently encountered in embedded systems. Our approach enhances the
established black-box checking (BBC) technique to address stochastic behavior.
Traditional BBC primarily involves iteratively identifying an input that
breaches the system's specifications by executing the following three phases:
the learning phase to construct an automaton approximating the black box's
behavior, the synthesis phase to identify a candidate counterexample from the
learned automaton, and the validation phase to validate the obtained candidate
counterexample and the learned automaton against the original black-box system.
Our method, ProbBBC, refines the conventional BBC approach by (1) employing an
active Markov Decision Process (MDP) learning method during the learning phase,
(2) incorporating probabilistic model checking in the synthesis phase, and (3)
applying statistical hypothesis testing in the validation phase. ProbBBC
uniquely integrates these techniques rather than merely substituting each
method in the traditional BBC; for instance, the statistical hypothesis testing
and the MDP learning procedure exchange information regarding the black-box
system's observations with one another. The experimental results suggest that
ProbBBC outperforms an existing method, especially for systems with limited
observation.
Comment: Accepted to EMSOFT 202
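The three-phase loop described above can be sketched in a few lines. The learner, synthesizer, and validator here are toy stand-ins (in ProbBBC they would be an active MDP learner, a probabilistic model checker, and a statistical hypothesis test, respectively); all function names are invented for illustration:

```python
def learn_model(observations):
    """Learning phase: build a (here trivial) model from (input, output) pairs."""
    return list(observations)

def synthesize_counterexample(model, spec):
    """Synthesis phase: look in the learned model for an input whose
    recorded output violates the specification."""
    for i, o in model:
        if not spec(i, o):
            return i
    return None

def validate(system, candidate, spec, runs=20):
    """Validation phase: re-execute the candidate on the real system;
    for a stochastic system this is where hypothesis testing would go."""
    return any(not spec(candidate, system(candidate)) for _ in range(runs))

def black_box_checking(system, spec, initial_inputs, max_rounds=10):
    observations = [(i, system(i)) for i in initial_inputs]
    for _ in range(max_rounds):
        model = learn_model(observations)
        candidate = synthesize_counterexample(model, spec)
        if candidate is None:
            return None                      # no violation found in the model
        if validate(system, candidate, spec):
            return candidate                 # confirmed counterexample
        observations.append((candidate, system(candidate)))  # refine model
    return None
```

The key structural point is the feedback edge: a candidate that fails validation is fed back into the learner's observations, which is where ProbBBC's tighter integration of hypothesis testing and MDP learning lives.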
Learning Probabilistic Finite State Automata For Opponent Modelling
Artificial Intelligence (AI) is the branch of Computer Science that
tries to imbue software systems with intelligent behaviour. In the early years of
the field, those systems were limited to big computing units on which researchers
built expert systems that exhibited some kind of intelligence. But with the
advent of different kinds of networks, the most prominent of which is
the Internet, the field naturally became interested in Distributed Artificial Intelligence
(DAI).
The field thus moved from monolithic software architectures for its AI
systems to architectures where several pieces of software were trying to solve a
problem or had interests of their own. Those pieces of software were called
Agents, and the architectures that allowed the interoperation of multiple agents
were called Multi-Agent Systems (MAS). The term agent acts as a metaphor
for software systems that are embodied in a given environment
and that behave or react intelligently to events in that environment.
The AI mainstream was initially interested in systems that could be taught
to behave depending on the inputs perceived. However, this rapidly proved
ineffective because the human expert acted as the knowledge bottleneck
for distilling useful and efficient rules. That was in the best cases; in the worst cases,
the task of enumerating the rules was difficult or plainly not affordable. This
sparked interest in another subfield, Machine Learning, and its counterpart
in a MAS, Distributed Machine Learning. If you cannot code all the scenario
combinations, code into the agent the rules that allow it to learn from the
environment and the actions performed.
With this framework in mind, the applications are endless. Agents can be used
to trade bonds or other financial derivatives without human intervention, or
they can be embedded in robotic hardware and learn unseen map configurations
in distant locations such as distant planets. Agents are not restricted to
interactions with humans or the environment; they can also interact with other
agents. For instance, agents can negotiate the quality of service of a channel before establishing a communication, or they can share information
about the environment in a cooperative setting such as robot soccer.
But some shortcomings emerge in a MAS architecture. The
one related to this thesis is that partitioning the task at hand among agents
usually entails that each agent has less memory or computing power. It is not
economically feasible to replicate the big computing unit in each separate
agent in our system. Thus we should think of our agents as
computationally bounded, that is, as having a limited amount of computing
power with which to learn from the environment. This has serious implications for the
algorithms that are commonly used for learning in these settings.
The classical approach for learning in a MAS is to use some variation
of a Reinforcement Learning (RL) algorithm [BT96, SB98]. The main idea
behind those algorithms is that the agent maintains a table with the perceived
value of each action/state pair and, through multiple iterations, obtains a
set of decision rules that allows it to take the best action in a given environment.
This approach has several flaws when the current action depends on a single
observation seen in the past (for instance, a warning sign that a robot perceives).
Several techniques have been proposed to alleviate those shortcomings.
For instance, to avoid the combinatorial explosion of states and actions, instead
of storing a table with the value of the pairs, an approximating function such as a
neural network can be used. And for events in the past, we can extend
the state definition of the environment, creating compound states that correspond
to the N-tuple (state_N, state_{N−1}, . . . , state_{N−t}).
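The table-based scheme and the history-tuple extension just described can be illustrated with a short sketch. This is generic tabular Q-learning where the "state" is the tuple of the last few raw observations; it is not code from the thesis, and the toy environment and all names are invented for illustration:

```python
import random
from collections import defaultdict, deque

class WarningEnv:
    """Toy episodic environment: step 0 shows a 'warning' observation (0 or 1);
    on the second step the agent is rewarded only if its action repeats that
    earlier warning. A memoryless agent seeing only the current observation
    cannot solve it, which is exactly the flaw discussed in the text."""
    def __init__(self, seed=0):
        self.rng = random.Random(seed)
    def reset(self):
        self.t = 0
        self.warning = self.rng.randint(0, 1)
        return self.warning            # observation at step 0: the warning
    def step(self, action):
        if self.t == 0:
            self.t = 1
            return 0, 0.0, False       # neutral observation, no reward yet
        # second step: reward only if the action matches the earlier warning
        return 0, (1.0 if action == self.warning else 0.0), True

def train(env, actions=(0, 1), episodes=500, history=2,
          alpha=0.5, gamma=0.9, epsilon=0.2, seed=1):
    """Tabular Q-learning over an augmented state: the N-tuple of the
    last `history` observations, as in the text."""
    rng = random.Random(seed)
    q = defaultdict(float)             # q[(augmented_state, action)]
    for _ in range(episodes):
        window = deque([-1] * history, maxlen=history)  # -1 pads empty history
        window.append(env.reset())
        state, done = tuple(window), False
        while not done:
            if rng.random() < epsilon:              # epsilon-greedy exploration
                a = rng.choice(actions)
            else:
                a = max(actions, key=lambda x: q[(state, x)])
            obs, reward, done = env.step(a)
            window.append(obs)
            nxt = tuple(window)
            target = reward if done else reward + gamma * max(
                q[(nxt, x)] for x in actions)
            q[(state, a)] += alpha * (target - q[(state, a)])
            state = nxt
    return q
```

After training, the entry for the augmented state that still contains the warning (e.g. `(1, 0)`) favours the matching action, whereas a single-observation state `(0,)` would be ambiguous between the two warnings.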
A model learning based testing approach for kernel P systems
Kernel P systems have been introduced as a unifying formalism allowing one to specify, simulate and analyse various problems. Several applications of this model have been considered and a powerful tool built in order to support their development and analysis. Testing represents an important aspect of the analysis and correctness of any system. In this paper we introduce for the first time a bounded test generation approach for kernel P systems by considering bounded input sequences. A learning algorithm for kernel P systems is based on learning X-machine models that are equivalent to these systems for sequences of steps up to a certain limit, ℓ. The Lℓ learning algorithm is used.
The testing approach is then devised from the inferred X-machines. The method is applied to a case study illustrating the key parts of the approach.
This research was supported by the European Regional Development Fund, Competitiveness Operational Program 2014-2020 through project IDBC (code SMIS 2014+: 121512). Raluca Lefticaru, Savas Konur and Marian Gheorghe have been partially supported by the Royal Society grant IES\R3\213176, 2022-2024. The work of Savas Konur is also supported by EPSRC (EP/R043787/1).
06361 Abstracts Collection -- Computing Media Languages for Space-Oriented Computation
From 03.09.06 to 08.09.06, the Dagstuhl Seminar 06361 "Computing Media and Languages for Space-Oriented Computation" was held
in the International Conference and Research Center (IBFI),
Schloss Dagstuhl.
During the seminar, several participants presented their current
research, and ongoing work and open problems were discussed. Abstracts of
the presentations given during the seminar as well as abstracts of
seminar results and ideas are put together in this paper. The first section
describes the seminar topics and goals in general.
Links to extended abstracts or full papers are provided, if available.
On security analysis of periodic systems: expressiveness and complexity
The development of automated technological systems has seen an increase in interconnectivity among their components. This includes the Internet of Things (IoT) and Industry 4.0 (I4.0) and the underlying communication between sensors and controllers. This paper is a step toward a formal framework for specifying such systems and analyzing their underlying properties, including safety and security. We introduce automata systems (AS) motivated by I4.0 applications. We identify various subclasses of AS that reflect different types of requirements on I4.0. We investigate the complexity of the problem of functional correctness of these systems as well as their vulnerability to attacks. We model the presence of various levels of threats to the system by proposing a range of intruder models, based on the number of actions intruders can use.