Opacity with Orwellian Observers and Intransitive Non-interference
Opacity is a general behavioural security scheme flexible enough to account
for several specific properties. Some secret set of behaviors of a system is
opaque if a passive attacker can never tell whether the observed behavior is a
secret one or not. Instead of considering static observability, where the set
of observable events is fixed offline, or dynamic observability, where the set
of observable events changes over time depending on the history of the trace,
we consider Orwellian partial observability, where unobservable events are not
revealed unless a downgrading event occurs in the future of the trace. We show
how to verify that a regular secret is opaque for a regular language L w.r.t.
an Orwellian projection, whereas the problem has been proved undecidable even
for a regular language L w.r.t. a general Orwellian observation function.
We finally illustrate the relevance of our results by proving the equivalence
between the opacity of regular secrets w.r.t. Orwellian projections and the
intransitive non-interference property.
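The abstract's notion of opacity can be sketched on finite trace sets. The code below is a minimal illustration, not the paper's automata-based verification: the traces, secret set, observable alphabet, and downgrading event are all illustrative assumptions.

```python
# Minimal sketch: opacity of a secret trace set under an Orwellian
# projection. All concrete traces and alphabets here are illustrative.

def orwellian(trace, observable, downgrade):
    """Orwellian projection: an unobservable event is revealed only if
    a downgrading event occurs later in the trace."""
    return tuple(
        e for i, e in enumerate(trace)
        if e in observable or any(d in trace[i + 1:] for d in downgrade)
    )

def is_opaque(traces, secret, observe):
    """The secret is opaque iff every secret trace is observationally
    indistinguishable from some non-secret trace."""
    non_secret_obs = {observe(t) for t in traces if t not in secret}
    return all(observe(t) in non_secret_obs for t in traces if t in secret)

# "s" is the secret event (unobservable), "d" the downgrading event.
observe = lambda t: orwellian(t, observable={"a", "b"}, downgrade={"d"})
traces = {("a", "s", "b"), ("a", "b"), ("a", "s")}
print(is_opaque(traces, secret={("a", "s", "b")}, observe=observe))  # True
```

Here the secret trace projects to the same observation as the non-secret trace ("a", "b"), so a passive attacker cannot distinguish them; inserting a downgrading event after "s" would reveal it and break opacity.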
Efficient Model Learning for Human-Robot Collaborative Tasks
We present a framework for learning human user models from joint-action
demonstrations that enables the robot to compute a robust policy for a
collaborative task with a human. The learning takes place completely
automatically, without any human intervention. First, we describe the
clustering of demonstrated action sequences into different human types using an
unsupervised learning algorithm. These demonstrated sequences are also used by
the robot to learn a reward function that is representative for each type,
through the employment of an inverse reinforcement learning algorithm. The
learned model is then used as part of a Mixed Observability Markov Decision
Process formulation, wherein the human type is a partially observable variable.
With this framework, we can infer, either offline or online, the human type of
a new user that was not included in the training set, and can compute a policy
for the robot that will be aligned to the preference of this new user and will
be robust to deviations of the human actions from prior demonstrations.
Finally, we validate the approach using data collected in human subject
experiments, and conduct proof-of-concept demonstrations in which a person
performs a collaborative task with a small industrial robot.
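The online inference of the human type, the partially observable variable in the abstract's MOMDP formulation, amounts to a Bayesian belief update over types. The sketch below assumes type-conditioned action likelihoods as a stand-in for the models learned from clustered demonstrations; the type names and probabilities are invented for illustration.

```python
# Minimal sketch: online inference of a hidden "human type" via a
# Bayesian belief update. The likelihood table is an illustrative
# stand-in for type models learned from clustered demonstrations.

def update_belief(belief, action, likelihood):
    """Bayes update: P(type | action) is proportional to P(action | type) * P(type)."""
    posterior = {t: likelihood[t].get(action, 1e-9) * p for t, p in belief.items()}
    z = sum(posterior.values())
    return {t: p / z for t, p in posterior.items()}

likelihood = {
    "cautious": {"wait": 0.7, "reach": 0.3},
    "eager":    {"wait": 0.2, "reach": 0.8},
}
belief = {"cautious": 0.5, "eager": 0.5}  # uniform prior over types
for action in ["reach", "reach"]:         # observed human actions
    belief = update_belief(belief, action, likelihood)
print(max(belief, key=belief.get))  # eager
```

The robot can then select its policy against the maintained belief rather than a point estimate, which is what makes the resulting behavior robust to deviations from the demonstrations.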
Synthesis of Covert Actuator Attackers for Free
In this paper, we shall formulate and address a problem of covert actuator
attacker synthesis for cyber-physical systems that are modelled by
discrete-event systems. We assume the actuator attacker partially observes the
execution of the closed-loop system and is able to modify each control command
issued by the supervisor on a specified attackable subset of controllable
events. We provide straightforward but in general exponential-time reductions,
due to the use of the subset construction procedure, from the covert actuator
attacker synthesis problems to the Ramadge-Wonham supervisor synthesis
problems. It then follows that it is possible to use the many techniques and
tools already developed for solving the supervisor synthesis problem to solve
the covert actuator attacker synthesis problem for free. In particular, we show
that, if the attacker cannot attack events that are unobservable to the
supervisor, then
the reductions can be carried out in polynomial time. We also provide a brief
discussion on some other conditions under which the exponential blowup in state
size can be avoided. Finally, we show how the reduction based synthesis
procedure can be extended for the synthesis of successful covert actuator
attackers that also eavesdrop on the control commands issued by the supervisor.
Comment: The paper has been accepted for the journal Discrete Event Dynamic
Systems.
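The subset construction behind the abstract's exponential-time reduction is the standard observer construction: the attacker tracks the set of plant states consistent with its partial observation. The sketch below is a generic determinization over an illustrative transition relation, not the paper's full reduction to supervisor synthesis.

```python
# Minimal sketch: the observer (subset) construction over a partially
# observed transition system. The concrete transition relation and
# observable alphabet in the usage example are illustrative assumptions.

def observer(init, delta, observable):
    """Build the observer automaton: each state is the set of plant
    states the observer considers possible after an observation."""
    def unobs_closure(states):
        # Saturate with states reachable via unobservable events.
        stack, closure = list(states), set(states)
        while stack:
            q = stack.pop()
            for (p, e), succs in delta.items():
                if p == q and e not in observable:
                    for s in succs - closure:
                        closure.add(s)
                        stack.append(s)
        return frozenset(closure)

    start = unobs_closure({init})
    obs_states, trans, work = {start}, {}, [start]
    while work:
        current = work.pop()
        for e in observable:
            target = unobs_closure(
                {s for q in current for s in delta.get((q, e), set())}
            )
            if target:
                trans[(current, e)] = target
                if target not in obs_states:
                    obs_states.add(target)
                    work.append(target)
    return start, obs_states, trans

# "u" is unobservable, "a" observable.
start, states, trans = observer(
    init=0,
    delta={(0, "u"): {1}, (0, "a"): {2}, (1, "a"): {2}},
    observable={"a"},
)
print(sorted(map(sorted, states)))  # [[0, 1], [2]]
```

Since observer states are subsets of plant states, the construction is exponential in the worst case, which is exactly the blowup the paper's polynomial-time special cases avoid.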