FUNCK: Information Funnels and Bottlenecks for Invariant Representation Learning
Learning invariant representations that remain useful for a downstream task
is still a key challenge in machine learning. We investigate a set of related
information funnels and bottleneck problems that claim to learn invariant
representations from the data. We also propose a new element to this family of
information-theoretic objectives: The Conditional Privacy Funnel with Side
Information, which we investigate in fully and semi-supervised settings. Given
the generally intractable objectives, we derive tractable approximations using
amortized variational inference parameterized by neural networks and study the
intrinsic trade-offs of these objectives. We evaluate the proposed approach
empirically and show that, with only a few labels, it is possible to learn fair
classifiers and generate useful representations that are approximately
invariant to unwanted sources of variation. Furthermore, we provide insights
into the applicability of these methods in real-world scenarios with ordinary
tabular datasets when data is scarce.
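For a Gaussian encoder, the amortized variational approximations described above typically trade a task term against a closed-form compression term. A minimal, generic sketch of such a variational bottleneck loss (function names are illustrative; this is not the paper's exact CPFSI objective):

```python
import numpy as np

def gaussian_kl(mu, log_var):
    # KL( N(mu, diag(exp(log_var))) || N(0, I) ): the compression term
    # that upper-bounds I(X; Z) in variational bottleneck objectives.
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var, axis=-1)

def bottleneck_loss(task_nll, mu, log_var, beta):
    # Trade task performance (negative log-likelihood) against
    # compression of the representation, weighted by beta.
    return task_nll + beta * np.mean(gaussian_kl(mu, log_var))
```

Varying `beta` sweeps out the intrinsic trade-off between informativeness and invariance that the abstract refers to.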
Federated Fairness without Access to Sensitive Groups
Current approaches to group fairness in federated learning assume the
existence of predefined and labeled sensitive groups during training. However,
due to factors ranging from emerging regulations to dynamics and
location-dependency of protected groups, this assumption may be unsuitable in
many real-world scenarios. In this work, we propose a new approach to guarantee
group fairness that does not rely on any predefined definition of sensitive
groups or additional labels. Our objective allows the federation to learn a
Pareto-efficient global model that ensures worst-case group fairness and
enables, via a single hyperparameter, trade-offs between fairness and utility,
subject only to a group-size constraint. This implies that any sufficiently
large subset of the population is guaranteed to receive at least a minimum
level of utility performance from the model. The proposed objective encompasses
existing approaches as special cases, such as empirical risk minimization and
subgroup robustness objectives from centralized machine learning. We provide an
algorithm to solve this problem in the federated setting that enjoys
convergence and excess-risk guarantees. Our empirical results indicate that the
proposed approach effectively improves performance for the worst-performing
group, when one is present, without unnecessarily hurting average performance;
it performs on par with or better than relevant baselines and yields a large
set of solutions with different fairness-utility trade-offs.
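A group-size-constrained worst-case objective of this kind is closely related to CVaR-style subgroup robustness: bounding the loss of every subgroup of relative size at least `alpha` amounts to controlling the average loss over the worst `alpha`-fraction of the population. A minimal sketch under that assumption (not the paper's exact federated algorithm):

```python
import numpy as np

def worst_case_group_loss(losses, alpha):
    """Average loss over the worst-performing alpha-fraction of clients.

    Minimizing this bounds the loss of every subgroup whose relative
    size is at least alpha, without requiring group labels.
    """
    losses = np.asarray(losses, dtype=float)
    k = max(1, int(np.ceil(alpha * len(losses))))
    return float(np.sort(losses)[-k:].mean())  # mean of the k largest losses
```

With `alpha = 1` this recovers empirical risk minimization; smaller values of `alpha` focus the model on the hardest subpopulation, mirroring the single-hyperparameter fairness-utility trade-off the abstract describes.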
The SPATIAL Architecture: Design and Development Experiences from Gauging and Monitoring the AI Inference Capabilities of Modern Applications
Despite its enormous economic and societal impact, a lack of human-perceived control and safety is redefining the design and development of emerging AI-based technologies. New regulatory requirements mandate increased human control and oversight of AI, transforming the development practices and responsibilities of individuals interacting with AI. In this paper, we present the SPATIAL architecture, a system that augments modern applications with capabilities to gauge and monitor trustworthy properties of AI inference capabilities. To design SPATIAL, we first explore the evolution of modern system architectures and how AI components and pipelines are integrated. With this information, we then develop a proof-of-concept architecture that analyzes AI models in a human-in-the-loop manner. SPATIAL provides an AI dashboard that allows individuals interacting with applications to obtain quantifiable insights about the AI decision process. This information is then used by human operators to comprehend possible issues that influence the performance of AI models and to adjust or counter them. Through rigorous benchmarks and experiments in real-world industrial applications, we demonstrate that SPATIAL can easily augment modern applications with metrics to gauge and monitor trustworthiness; however, this in turn increases the complexity of developing and maintaining systems implementing AI. Our work highlights lessons learned and experiences from augmenting modern applications with mechanisms that support regulatory compliance of AI. In addition, we present a roadmap of ongoing challenges that require attention to achieve robust trustworthiness analysis of AI and greater engagement of human oversight.
FERI: A Multitask-based Fairness Achieving Algorithm with Applications to Fair Organ Transplantation
Liver transplantation often faces fairness challenges across subgroups
defined by sensitive attributes like age group, gender, and race/ethnicity.
Machine learning models for outcome prediction can introduce additional biases.
To address these, we introduce the Fairness through the Equitable Rate of
Improvement in Multitask Learning (FERI) algorithm for fair prediction of
graft-failure risk in liver transplant patients. FERI constrains subgroup loss
by balancing learning rates and preventing subgroup dominance in the training
process. Our experiments show that FERI maintains high predictive accuracy with
AUROC and AUPRC comparable to baseline models. More importantly, FERI
demonstrates an ability to improve fairness without sacrificing accuracy.
Specifically, for gender, FERI reduces the demographic parity disparity by
71.74%, and for the age group, it decreases the equalized odds disparity by
40.46%. Therefore, the FERI algorithm advances fairness-aware predictive
modeling in healthcare and provides an invaluable tool for equitable healthcare
systems.
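The demographic parity disparity reported above has a standard definition: the gap in positive-prediction rates across sensitive groups. A minimal sketch of the metric (the function name is illustrative; FERI's training procedure itself is not shown here):

```python
import numpy as np

def demographic_parity_disparity(y_pred, groups):
    # Gap between the highest and lowest positive-prediction rate
    # across sensitive groups (0 = perfect demographic parity).
    y_pred = np.asarray(y_pred)
    groups = np.asarray(groups)
    rates = [y_pred[groups == g].mean() for g in np.unique(groups)]
    return float(max(rates) - min(rates))
```

A 71.74% reduction, as reported for gender, means this gap shrinks to roughly a quarter of its baseline value.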