Explanations and Transparency in Collaborative Workflows

Abstract

We pursue an investigation of data-driven collaborative workflows. In the model, peers can access and update local data, causing side-effects on other peers' data. In this paper, we study means of explaining to a peer her local view of a global run, both at runtime and statically. We consider the notion of a "scenario for a given peer", that is, a subrun observationally equivalent to the original run for that peer. Because such a scenario can sometimes differ significantly from what happens in the actual run, thus providing a misleading explanation, we introduce and study a faithfulness requirement that ensures closer adherence to the global run. We show that there is a unique minimal faithful scenario, which explains what is happening in the global run by extracting only the portion relevant to the peer. With regard to static explanations, we consider the problem of synthesizing, for each peer, a "view program" whose runs generate exactly the peer's observations of the global runs. Assuming some conditions desirable in their own right, namely transparency and boundedness, we show that such a view program exists and can be synthesized. As an added benefit, the view program rules provide provenance information for the updates observed by the peer.