Continual Causal Effect Estimation: Challenges and Opportunities
A further understanding of cause and effect within observational data is
critical across many domains, such as economics, health care, public policy,
web mining, online advertising, and marketing campaigns. Although significant
advances have been made to overcome the challenges in causal effect estimation
with observational data, such as missing counterfactual outcomes and selection
bias between treatment and control groups, the existing methods mainly focus on
source-specific and stationary observational data. Such learning strategies
assume that all observational data are already available during the training
phase and from only one source. This practical concern about data accessibility is ubiquitous in academic and industrial applications. Consequently, in the era of big data, we face new challenges in causal inference with observational data: extensibility to incrementally available observational data, adaptability to the additional domain adaptation problem beyond the imbalance between treatment and control groups, and accessibility to enormous amounts of data. In this position paper, we
formally define the problem of continual treatment effect estimation, describe
its research challenges, and then present possible solutions to this problem.
Moreover, we will discuss future research directions on this topic.
Comment: The 37th AAAI Conference on Artificial Intelligence, Continual Causality Bridge Program
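To make the continual setting concrete, here is a minimal sketch of treatment effect estimation over incrementally arriving observational batches: an inverse-propensity-weighted (IPW) ATE estimate whose propensity model is updated online. This illustrates the problem setting only, not the paper's proposed solution; the data-generating process and every parameter below are made up for the demo.

```python
# Sketch: continual ATE estimation over incrementally arriving batches.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

# Propensity model that supports incremental updates via partial_fit.
propensity = SGDClassifier(loss="log_loss", random_state=0)

# Running sums for an inverse-propensity-weighted ATE estimate.
treated_sum, treated_w = 0.0, 0.0
control_sum, control_w = 0.0, 0.0

for step in range(5):  # each iteration simulates a newly arriving batch
    X = rng.normal(size=(200, 3))
    t = (rng.random(200) < 1 / (1 + np.exp(-X[:, 0]))).astype(int)  # selection bias
    y = 2.0 * t + X[:, 0] + rng.normal(size=200)                    # true ATE = 2

    propensity.partial_fit(X, t, classes=[0, 1])
    e = np.clip(propensity.predict_proba(X)[:, 1], 0.05, 0.95)

    treated_sum += np.sum(t * y / e)
    treated_w += np.sum(t / e)
    control_sum += np.sum((1 - t) * y / (1 - e))
    control_w += np.sum((1 - t) / (1 - e))

    ate = treated_sum / treated_w - control_sum / control_w
    print(f"batch {step}: running IPW ATE ~ {ate:.3f}")
```

The point of the sketch is that nothing requires all data up front: both the propensity model and the weighted outcome sums are updated batch by batch, which is the extensibility property the abstract highlights.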
Fair Attribute Completion on Graph with Missing Attributes
Tackling unfairness in graph learning models is a challenging task, as the
unfairness issues on graphs involve both attributes and topological structures.
Existing work on fair graph learning simply assumes that attributes of all
nodes are available for model training and then makes fair predictions. In
practice, however, the attributes of some nodes might not be accessible due to
missing data or privacy concerns, which makes fair graph learning even more
challenging. In this paper, we propose FairAC, a fair attribute completion
method, to complement missing information and learn fair node embeddings for
graphs with missing attributes. FairAC adopts an attention mechanism to handle missing attributes while mitigating two types of unfairness: feature unfairness arising from the attributes themselves and topological unfairness introduced by attribute completion. FairAC works on various types of homogeneous graphs and generates fair embeddings for them, and thus can be applied to most downstream tasks to improve their fairness performance. To the best of our knowledge, FairAC is the first method that jointly addresses the graph attribute completion and graph unfairness problems. Experimental results on benchmark
datasets show that our method achieves better fairness performance with less
sacrifice in accuracy, compared with the state-of-the-art methods of fair graph
learning. Code is available at: https://github.com/donglgcn/FairAC
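As a rough illustration of the attention-based completion idea (in the spirit of FairAC, but not its actual architecture), the toy sketch below reconstructs a node's missing attributes as an attention-weighted sum of its neighbors' attributes. All shapes, the projection layers, and the neighbor-mean query are illustrative assumptions; see the repository above for the real implementation.

```python
# Toy sketch: attention-weighted attribute completion from graph neighbors.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
feat_dim, hid_dim = 8, 16
X = torch.randn(6, feat_dim)  # attributes of 6 nodes (node 0's are "missing")

W_q = torch.nn.Linear(feat_dim, hid_dim, bias=False)  # query projection
W_k = torch.nn.Linear(feat_dim, hid_dim, bias=False)  # key projection

def complete_attributes(neighbors: torch.Tensor) -> torch.Tensor:
    """Reconstruct a node's attributes as an attention-weighted sum of its
    neighbors' attributes. The neighbor mean serves as a stand-in query
    because the target node's own attributes are unavailable (an
    illustrative choice, not FairAC's exact design)."""
    neigh = X[neighbors]                    # (k, feat_dim) neighbor attributes
    query = W_q(neigh.mean(dim=0))          # (hid_dim,)
    keys = W_k(neigh)                       # (k, hid_dim)
    scores = keys @ query / hid_dim ** 0.5  # scaled dot-product scores
    alpha = F.softmax(scores, dim=0)        # attention weights over neighbors
    return alpha @ neigh                    # (feat_dim,) completed attributes

x0_hat = complete_attributes(torch.tensor([1, 2, 3]))  # node 0's neighbors
print(x0_hat.shape)  # torch.Size([8])
```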
How People Perceive The Dynamic Zero-COVID Policy: A Retrospective Analysis From The Perspective of Appraisal Theory
The Dynamic Zero-COVID Policy in China spanned three years, and diverse emotional responses were observed at different times. In this paper, we
retrospectively analyzed public sentiments and perceptions of the policy,
especially regarding how they evolved over time, and how they related to
people's lived experiences. Through sentiment analysis of 2,358 collected Weibo
posts, we identified four representative points, i.e., policy initialization,
sharp sentiment change, lowest sentiment score, and policy termination, for an
in-depth discourse analysis through the lens of appraisal theory. In the end,
we reflected on the evolving public sentiments toward the Dynamic Zero-COVID
Policy and proposed implications for effective epidemic prevention and control
measures for future crises.
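For readers unfamiliar with this kind of timeline analysis, the sketch below shows one way to locate "sharp sentiment change" and "lowest sentiment score" points: score each post, average per period, and inspect the score series. The lexicon scorer and the posts are purely illustrative; the paper's actual sentiment model and Weibo data are not reproduced here.

```python
# Toy sketch: per-period sentiment scores and representative points.
import numpy as np

POSITIVE = {"safe", "hope", "recover"}
NEGATIVE = {"lockdown", "fear", "shortage"}

def toy_sentiment(post: str) -> float:
    """Crude lexicon score: positive word count minus negative word count."""
    words = post.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

# Posts grouped by period (e.g., month); contents are made up for the demo.
timeline = [
    ["hope we stay safe", "recover soon"],
    ["fear of lockdown", "shortage everywhere"],
    ["lockdown fear shortage", "fear"],
    ["recover and hope", "safe again"],
]
scores = np.array([np.mean([toy_sentiment(p) for p in posts]) for posts in timeline])

sharpest_change = int(np.argmax(np.abs(np.diff(scores)))) + 1  # period after largest jump
lowest = int(np.argmin(scores))                                # lowest sentiment period
print(f"period scores: {scores}")
print(f"sharpest change at period {sharpest_change}; lowest at period {lowest}")
```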
Task-Driven Causal Feature Distillation: Towards Trustworthy Risk Prediction
Artificial intelligence's tremendous recent successes in many areas have sparked great interest in its potential for trustworthy and interpretable risk prediction. However, most models lack causal reasoning and
struggle with class imbalance, leading to poor precision and recall. To address
this, we propose a Task-Driven Causal Feature Distillation model (TDCFD) to
transform original feature values into causal feature attributions for the
specific risk prediction task. The causal feature attribution describes how much the value of each feature contributes to the risk prediction result. After causal feature distillation, a deep neural network is applied
to produce trustworthy prediction results with causal interpretability and high
precision/recall. We evaluate the performance of our TDCFD method on several
synthetic and real datasets, and the results demonstrate its superiority over
the state-of-the-art methods regarding precision, recall, interpretability, and
causality.
Comment: Proceedings of the 2024 AAAI Conference on Artificial Intelligence
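As a hedged illustration of the two-stage pipeline the abstract outlines (raw feature values, then per-feature attributions, then a neural network), the sketch below uses a simple prediction-shift attribution as a stand-in for the paper's causal distillation procedure; the `attributions` rule and every modeling choice here are assumptions for the demo, not TDCFD itself.

```python
# Sketch: attribution-style feature transform followed by a neural classifier.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=6, weights=[0.9, 0.1],
                           random_state=0)  # imbalanced risk labels

base = GradientBoostingClassifier(random_state=0).fit(X, y)

def attributions(X: np.ndarray) -> np.ndarray:
    """Per-feature scores: how much the predicted risk shifts when the
    feature is replaced by its population mean (illustrative only)."""
    p = base.predict_proba(X)[:, 1]
    out = np.empty_like(X)
    for j in range(X.shape[1]):
        X_ref = X.copy()
        X_ref[:, j] = X[:, j].mean()          # reset feature j to a baseline
        out[:, j] = p - base.predict_proba(X_ref)[:, 1]
    return out

A = attributions(X)  # "distilled" features replace raw values
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                    random_state=0).fit(A, y)
print(f"train accuracy on attribution features: {clf.score(A, y):.3f}")
```

The design point mirrored here is that the downstream network never sees raw feature values, only per-feature contribution scores, which is what gives the final predictions their feature-level interpretability.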