Minimizing the Information Leakage Regarding High-Level Task Specifications
We consider a scenario in which an autonomous agent carries out a mission in
a stochastic environment while passively observed by an adversary. For the
agent, minimizing the information leaked to the adversary regarding its
high-level specification is critical in creating an informational advantage. We
express the specification of the agent as a parametric linear temporal logic
formula, measure the information leakage by the adversary's confidence in the
agent's mission specification, and propose algorithms to synthesize a policy
for the agent that minimizes the information leakage to the adversary. In the
scenario considered, the adversary aims to infer the agent's specification
from a set of candidate specifications, each with an associated
likelihood. The agent's objective is to synthesize a policy that
maximizes the entropy of the adversary's likelihood distribution while
satisfying its specification. We propose two approaches to solve the resulting
synthesis problem. The first approach computes the exact satisfaction
probabilities for each candidate specification, whereas the second approach
utilizes the Fréchet inequalities to approximate them. For each approach, we
formulate a mixed-integer program with a quasiconcave objective function. We
solve the problem using a bisection algorithm. Finally, we compare the
performance of both approaches in numerical simulations.

Comment: 8 pages, 4 figures, 2 tables