9 research outputs found

    Neural Score Matching for High-Dimensional Causal Inference

    Traditional methods for matching in causal inference are impractical for high-dimensional datasets. They suffer from the curse of dimensionality: exact matching and coarsened exact matching find exponentially fewer matches as the input dimension grows, and propensity score matching may match highly unrelated units together. To overcome this problem, we develop theoretical results which motivate the use of neural networks to obtain non-trivial, multivariate balancing scores of a chosen level of coarseness, in contrast to the classical, scalar propensity score. We leverage these balancing scores to perform matching for high-dimensional causal inference and call this procedure neural score matching. We show that our method is competitive against other matching approaches on semi-synthetic high-dimensional datasets, both in terms of treatment effect estimation and reducing imbalance.
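    The matching step the abstract describes can be illustrated generically: once each unit has a score (here a plain list of numbers standing in for the learned neural balancing score; all names are illustrative, not the authors' implementation), each treated unit is paired with the control unit whose score is closest, and treated-minus-control outcome differences are averaged. A minimal sketch:

```python
def match_on_score(scores, treated):
    """1-NN matching with replacement: pair each treated unit with the
    control unit whose score is closest."""
    controls = [i for i, t in enumerate(treated) if not t]
    pairs = []
    for i, t in enumerate(treated):
        if t:
            j = min(controls, key=lambda c: abs(scores[c] - scores[i]))
            pairs.append((i, j))
    return pairs

def att_estimate(outcomes, pairs):
    """Average treatment effect on the treated, from matched pairs."""
    return sum(outcomes[i] - outcomes[j] for i, j in pairs) / len(pairs)

# Toy data: two treated units, two controls.
scores = [0.9, 0.1, 0.85, 0.15]
treated = [True, False, True, False]
outcomes = [5.0, 1.0, 4.0, 2.0]
pairs = match_on_score(scores, treated)   # [(0, 3), (2, 3)]
att = att_estimate(outcomes, pairs)       # 2.5
```

    Real matching pipelines also check post-match covariate balance; this sketch only shows the pairing and estimation mechanics.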

    PWSHAP: A Path-Wise Explanation Model for Targeted Variables

    Predictive black-box models can exhibit high accuracy, but their opaque nature hinders their uptake in safety-critical deployment environments. Explanation methods (XAI) can provide confidence for decision-making through increased transparency. However, existing XAI methods are not tailored towards models in sensitive domains where one predictor is of special interest, such as a treatment effect in a clinical model, or ethnicity in policy models. We introduce Path-Wise Shapley effects (PWSHAP), a framework for assessing the targeted effect of a binary (e.g. treatment) variable from a complex outcome model. Our approach augments the predictive model with a user-defined directed acyclic graph (DAG). The method then uses the graph alongside on-manifold Shapley values to identify effects along causal pathways whilst maintaining robustness to adversarial attacks. We establish error bounds for the identified path-wise Shapley effects and for Shapley values. We show PWSHAP can perform local bias and mediation analyses with faithfulness to the model. Further, if the targeted variable is randomised we can quantify local effect modification. We demonstrate the resolution, interpretability and true locality of our approach on examples and a real-world experiment.
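    PWSHAP builds on Shapley values. For a small number of features, the exact (exponential-time) Shapley attribution that such methods refine can be written out directly. The sketch below shows plain Shapley values with a fixed baseline, not the on-manifold or path-wise variants the paper develops; all names are illustrative:

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values of f at point x relative to a baseline.
    Features outside a coalition are set to their baseline value."""
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for S in combinations(others, size):
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                with_i = [x[j] if j in S or j == i else baseline[j] for j in range(n)]
                without_i = [x[j] if j in S else baseline[j] for j in range(n)]
                phi[i] += weight * (f(with_i) - f(without_i))
    return phi

# Additive term plus a pairwise interaction; the interaction's credit
# is split equally between features 1 and 2.
f = lambda v: v[0] + 2.0 * v[1] * v[2]
phi = shapley_values(f, [1.0, 1.0, 1.0], [0.0, 0.0, 0.0])  # [1.0, 1.0, 1.0]
```

    The values sum to f(x) - f(baseline) (the efficiency property), which is the accounting identity that path-wise decompositions also preserve.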


    scverse/scvi-tools: scvi-tools 1.1.0-rc.1

    No full text
    See the release notes for all changes: https://docs.scvi-tools.org/en/stable/release_notes/index.html

    This release is available via PyPI:

        pip install scvi-tools

    Conda availability will follow (typically within 2 days). Please report any issues on GitHub: https://github.com/scverse/scvi-tools

    scverse/scvi-tools: scvi-tools 1.1.0-rc.2

    No full text
    See the release notes for all changes: https://docs.scvi-tools.org/en/stable/release_notes/index.html

    This release is available via PyPI:

        pip install scvi-tools

    Conda availability will follow (typically within 2 days). Please report any issues on GitHub: https://github.com/scverse/scvi-tools
