Inferring the effects of interventions within complex systems is a fundamental
problem in statistics. A widely studied approach employs structural causal
models that postulate noisy functional relations among a set of interacting
variables. The underlying causal structure is then naturally represented by a
directed graph whose edges indicate direct causal dependencies. In a recent
line of work, additional assumptions on the causal models have been shown to
render this causal graph identifiable from observational data alone. One
example is the assumption of linear causal relations with equal error variances
that we will take up in this work. When the graph structure is known, classical
methods may be used for calculating estimates and confidence intervals for
causal effects. However, in many applications, expert knowledge that provides
an a priori valid causal structure is not available. In the absence of such
knowledge, a commonly used two-step approach first learns a graph from the data
and then treats this graph as known in subsequent inference. The resulting
confidence intervals, however, are overly optimistic because they fail to
account for the data-driven model choice. We
argue that to draw reliable conclusions, it is necessary to incorporate the
remaining uncertainty about the underlying causal structure in confidence
statements about causal effects. To address this issue, we present a framework
based on test inversion that yields confidence regions for total causal effects
capturing both sources of uncertainty: the causal structure and the numerical
size of nonzero effects.
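For concreteness, the following display is a minimal sketch of the setting described above; the notation ($B$, $\varepsilon$, $\tau_{jk}$) and the form of the test-inversion region are illustrative choices on our part rather than definitions quoted from the paper.

A linear structural causal model over $X = (X_1, \dots, X_p)^\top$ with causal graph $G$ posits
\[
  X = B X + \varepsilon, \qquad B_{kj} \neq 0 \text{ only if } j \to k \text{ in } G, \qquad
  \operatorname{Var}(\varepsilon_1) = \dots = \operatorname{Var}(\varepsilon_p) = \sigma^2 ,
\]
so that $X = (I - B)^{-1}\varepsilon$ and the total causal effect of $X_j$ on $X_k$ can be written as
$\tau_{jk} = \bigl[(I - B)^{-1}\bigr]_{kj}$. A test-inversion confidence region then collects all
effect values that are not rejected,
\[
  C_{1-\alpha}(j \to k) \;=\; \bigl\{\, c \in \mathbb{R} \;:\; H_0\colon \tau_{jk} = c \text{ is not rejected at level } \alpha \,\bigr\},
\]
with coverage inherited from the level of the underlying tests.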