2 research outputs found
Counterfactual Explanation Policies in RL
As Reinforcement Learning (RL) agents are increasingly employed in diverse
decision-making problems driven by reward preferences, it becomes important to
ensure that the policies these agents learn, which map observations to a
probability distribution over possible actions, are explainable. However,
there is little work on systematically understanding these complex policies
in a contrastive manner, i.e., on identifying the minimal changes to a policy
that would improve or worsen its performance to a desired level. In this work, we
present COUNTERPOL, the first framework to analyze RL policies using
counterfactual explanations in the form of minimal changes to the policy that
lead to the desired outcome. We do so by adapting counterfactual explanation
techniques from supervised learning to RL, with the target outcome specified
as a desired return. We establish a theoretical connection between COUNTERPOL and widely
used trust region-based policy optimization methods in RL. Extensive empirical
analysis shows the efficacy of COUNTERPOL in generating explanations for
(un)learning skills while keeping close to the original policy. Our results on
five different RL environments with diverse state and action spaces demonstrate
the utility of counterfactual explanations, paving the way for new frontiers in
designing and developing counterfactual policies.
Comment: ICML Workshop on Counterfactuals in Minds and Machines, 202
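The core idea, finding a small change to a policy whose return reaches a desired level while staying close to the original, can be illustrated with a toy sketch. Everything here (the two-armed bandit setup, the squared proximity penalty standing in for a trust-region term, and all function names) is our own illustrative assumption, not the paper's implementation:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def expected_return(theta, rewards):
    # Expected one-step return of a softmax policy over bandit arms.
    return softmax(theta) @ rewards

def counterfactual_policy(theta0, rewards, target, lam=0.01, lr=0.5, steps=2000):
    """Search for a small perturbation of theta0 whose expected return
    approaches `target`, while a squared-distance penalty keeps the new
    policy close to the original (a crude stand-in for a trust region)."""
    theta0 = np.asarray(theta0, dtype=float)

    def loss(t):
        gap = expected_return(t, rewards) - target
        return gap ** 2 + lam * np.sum((t - theta0) ** 2)

    theta, eps = theta0.copy(), 1e-5
    for _ in range(steps):
        # Numerical gradient of the counterfactual loss.
        grad = np.array([(loss(theta + eps * e) - loss(theta - eps * e)) / (2 * eps)
                         for e in np.eye(len(theta))])
        theta -= lr * grad
    return theta
```

For example, a uniform policy over arms with rewards (1.0, 0.0) has expected return 0.5; asking for a target return of 0.8 yields a nearby policy whose return rises toward that level, with the proximity penalty trading off exactness against closeness to the original.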
Are Chatbots Ready for Privacy-Sensitive Applications? An Investigation into Input Regurgitation and Prompt-Induced Sanitization
LLM-powered chatbots are becoming widely adopted in applications such as
healthcare, personal assistants, industry hiring decisions, etc. In many of
these cases, chatbots are fed sensitive, personal information in their prompts,
as samples for in-context learning, retrieved records from a database, or as
part of the conversation. Information provided in the prompt can appear
directly in the output, which has privacy ramifications when that information
is sensitive. In this paper, we therefore aim to understand the
input copying and regurgitation capabilities of these models during inference
and how they can be directly instructed to limit this copying by complying with
regulations such as HIPAA and GDPR, drawing on their internal knowledge of these regulations.
More specifically, we find that when ChatGPT is prompted to summarize the
cover letters of 100 candidates, it retains personally identifiable
information (PII) verbatim in 57.4% of cases, and this retention is
non-uniform across subgroups of people defined by attributes such as
gender identity. We then probe ChatGPT's perception of privacy-related policies
and privatization mechanisms by directly instructing it to provide compliant
outputs, and observe a significant omission of PII from the resulting output.
Comment: 12 pages, 9 figures, and 4 tables
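A verbatim-retention figure like the 57.4% reported above can be computed with simple string matching over model outputs. The following is a minimal sketch of such a metric; the function name, the record layout, and the case-insensitive substring check are our own hypothetical choices, not the paper's code:

```python
def pii_retention_rate(records, summaries):
    """Fraction of summaries that contain at least one of the
    corresponding input record's PII fields verbatim
    (matched case-insensitively)."""
    leaked = 0
    for rec, summary in zip(records, summaries):
        text = summary.lower()
        # A summary counts as leaking if any PII string survives verbatim.
        if any(field.lower() in text for field in rec["pii"]):
            leaked += 1
    return leaked / len(summaries)
```

For instance, if one of two generated summaries repeats a candidate's name verbatim, the retention rate is 0.5; a real evaluation would also need normalization for formatting differences (phone-number punctuation, reordered names) that exact matching misses.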