Probabilistic Opacity in Refinement-Based Modeling
Given a probabilistic transition system (PTS) A, partially observed by
an attacker, and an ω-regular predicate φ over the traces of
A, measuring the disclosure of the secret φ in A means
computing the probability that an attacker who observes a run of A can
ascertain that its trace belongs to φ. In the context of refinement, we
consider specifications given as Interval-valued Discrete Time Markov Chains
(IDTMCs), which are underspecified Markov chains where probabilities on edges
are only required to belong to intervals. Scheduling an IDTMC S produces
a concrete implementation as a PTS, and we define the worst-case disclosure of
the secret φ in S as the maximal disclosure of φ over all
PTSs thus produced. We compute this value for a subclass of IDTMCs and we prove
that refinement can only improve the opacity of implementations.
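As a concrete illustration of the disclosure measure, here is a minimal Python sketch on a toy acyclic PTS: runs are grouped by what the attacker observes, and disclosure is the probability mass of the observation classes whose runs all satisfy the secret. The states, labels and hard-coded system are illustrative assumptions, and the IDTMC scheduling step from the abstract is not modeled.

```python
from collections import defaultdict

# Toy PTS: state -> list of (probability, label, next state); 'done' is absorbing.
PTS = {
    "s0": [(0.5, "a", "s1"), (0.5, "b", "s2")],
    "s1": [(1.0, "h", "done")],   # 'h': hidden action belonging to the secret
    "s2": [(1.0, "n", "done")],   # 'n': hidden innocuous action
    "done": [],
}
OBSERVABLE = {"a", "b"}           # the attacker only sees these labels

def runs(state, prob=1.0, trace=()):
    """Enumerate (probability, trace) pairs of maximal runs from `state`."""
    if not PTS[state]:
        yield prob, trace
        return
    for p, label, nxt in PTS[state]:
        yield from runs(nxt, prob * p, trace + (label,))

def observation(trace):
    return tuple(label for label in trace if label in OBSERVABLE)

def disclosure(secret):
    """Probability that the observation alone proves the trace is in `secret`."""
    by_obs = defaultdict(list)
    for p, trace in runs("s0"):
        by_obs[observation(trace)].append((p, trace in secret))
    return sum(p
               for group in by_obs.values() if all(flag for _, flag in group)
               for p, _ in group)

# Secret predicate: all traces containing the hidden action 'h'.
secret = {trace for _, trace in runs("s0") if "h" in trace}
print(disclosure(secret))   # 0.5: observing 'a' already proves the secret
```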
Uniform Strategies
We consider turn-based game arenas for which we investigate uniformity
properties of strategies. These properties involve bundles of plays that arise
from some semantic motive. Typically, we can represent constraints on allowed
strategies, such as being observation-based. We propose a formal language to
specify uniformity properties and demonstrate its relevance by rephrasing
various known problems from the literature. Note that the ability to correlate
different plays cannot be achieved by any branching-time logic unless it is
equipped with an additional modality, called R in this contribution. We also
study an automated procedure to synthesize strategies subject to a uniformity
property, which strictly extends existing results based on, say, standard
temporal logics. We exhibit a generic solution for the synthesis problem
provided the bundles of plays rely on any binary relation definable by a
finite-state transducer. This solution yields a non-elementary procedure.
Comment: (2012
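As one small illustration of such a uniformity property, the Python sketch below (with made-up positions and moves) checks that a strategy is observation-based: it must prescribe the same move on any two histories the player cannot distinguish. Observational equivalence is only one instance of the binary relations on plays covered by the abstract (any relation definable by a finite-state transducer); the transducer machinery itself is not modeled here.

```python
from itertools import combinations

def obs(history, observation_map):
    """Project a history (sequence of positions) onto what the player observes."""
    return tuple(observation_map[pos] for pos in history)

def is_uniform(strategy, histories, observation_map):
    """Observation-based: same prescribed move on indistinguishable histories."""
    for h1, h2 in combinations(histories, 2):
        same_obs = obs(h1, observation_map) == obs(h2, observation_map)
        if same_obs and strategy[h1] != strategy[h2]:
            return False
    return True

# Positions p1 and p2 look identical to the player (same observation 'o').
observation_map = {"p0": "start", "p1": "o", "p2": "o"}
histories = [("p0", "p1"), ("p0", "p2")]

strategy_bad = {("p0", "p1"): "left", ("p0", "p2"): "right"}
strategy_ok  = {("p0", "p1"): "left", ("p0", "p2"): "left"}
print(is_uniform(strategy_bad, histories, observation_map))  # False
print(is_uniform(strategy_ok, histories, observation_map))   # True
```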
Configuring Timing Parameters to Ensure Execution-Time Opacity in Timed Automata
Timing information leakage occurs whenever an attacker successfully deduces
confidential internal information by observing some timed information such as
events with timestamps. Timed automata are an extension of finite-state
automata with a set of clocks that evolve linearly and can be tested or
reset, making this formalism able to reason about systems involving concurrency
and timing constraints. In this paper, we summarize a recent line of work
using timed automata as the input formalism, in which we assume that the
attacker has access (only) to the system execution time. First, we address the
following execution-time opacity problem: given a timed system modeled by a
timed automaton, given a secret location and a final location, synthesize the
execution times from the initial location to the final location for which one
cannot deduce whether the secret location was visited. This means that for any
such execution time, the system is opaque: either the final location is not
reachable, or it is reachable with that execution time for both a run visiting
and a run not visiting the secret location. We also address the full
execution-time opacity problem, asking whether the system is opaque for all
execution times; we further study a weak counterpart. Second, we add timing
parameters, which are a way to configure a system: we identify a subclass of
parametric timed automata with some decidability results. In addition, we
devise a semi-algorithm for synthesizing timing parameter valuations
guaranteeing that the resulting system is opaque. Third, we report on problems
when the secret itself has an expiration date, thus defining expiring
execution-time opacity problems. We finally show that our method can also apply
to program analysis with configurable internal timings.
Comment: In Proceedings TiCSA 2023, arXiv:2310.18720. This invited paper
mainly summarizes results on opacity from two recent works published in ToSEM
(2022) and at ICECCS 2023, providing unified notations and concept names for
the sake of consistency. In addition, we prove a few original results absent
from these works.
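To make the duration-based definition concrete, the following Python sketch works on a hypothetical discrete-time abstraction (integer delays and hard-coded locations rather than real timed automata): it collects the total execution times of runs that do and do not visit the secret location, and keeps the times witnessed by both kinds of runs, which are exactly the opaque execution times among the reachable ones.

```python
GRAPH = {  # location -> list of (delay, next location); delays are integers here
    "init":   [(1, "secret"), (2, "bypass")],
    "secret": [(2, "final")],
    "bypass": [(1, "final")],
    "final":  [],
}

def durations(loc, visited_secret=False, elapsed=0):
    """Yield (total time, visited_secret) for every run from `loc` to 'final'."""
    if loc == "final":
        yield elapsed, visited_secret
        return
    for delay, nxt in GRAPH[loc]:
        yield from durations(nxt, visited_secret or nxt == "secret",
                             elapsed + delay)

secret_times    = {t for t, s in durations("init") if s}
nonsecret_times = {t for t, s in durations("init") if not s}

opaque_times = secret_times & nonsecret_times      # opaque execution times
fully_opaque = secret_times == nonsecret_times     # opaque for every reachable time
print(opaque_times, fully_opaque)                  # {3} True
```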
Parametric timed model checking for guaranteeing timed opacity
Information leakage can have dramatic consequences on systems security. Among
harmful information leaks, timing information leakage is the ability of an
attacker to deduce internal information depending on the system execution time.
We address the following problem: given a timed system, synthesize the
execution times for which one cannot deduce whether the system performed some
secret behavior. We solve this problem in the setting of timed automata (TAs).
We first provide a general solution, and then extend the problem to parametric
TAs, by synthesizing internal timings making the TA secure. We study
decidability, devise algorithms, and show that our method can also apply to
program analysis.
Comment: This is the author (and extended) version of the manuscript of the
same name published in the proceedings of ATVA 2019. This work is partially
supported by the ANR national research program PACS (ANR-14-CE28-0002), the
ANR-NRF research program (ProMiS) and by ERATO HASUO Metamathematics for
Systems Design Project (No. JPMJER1603), JS
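The parameter-synthesis idea can be pictured with the toy Python sketch below (a hypothetical two-branch system with a single integer timing parameter p): a candidate valuation is kept only if the possible end-to-end times coincide whether or not the secret branch is taken, so the attacker learns nothing from the execution time. Real parametric timed automata require symbolic techniques; a finite scan is used here purely for illustration.

```python
def end_to_end_times(p):
    """Times (via the secret branch, via the public branch) for a toy system
    whose secret branch takes `p` extra time units of padding."""
    secret_times = {1 + p}        # secret branch: setup (1) + padded work (p)
    public_times = {3}            # public branch: fixed 3 time units
    return secret_times, public_times

def opaque(p):
    secret_times, public_times = end_to_end_times(p)
    return secret_times == public_times   # branches indistinguishable by timing

good_valuations = [p for p in range(0, 10) if opaque(p)]
print(good_valuations)   # [2]: padding the secret branch to 3 time units works
```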