16 research outputs found
Submodular Welfare Maximization
An overview of different variants of the submodular welfare maximization
problem in combinatorial auctions. In particular, I studied the existing
algorithmic and game-theoretic results for the submodular welfare maximization
problem and its applications in other areas such as social networks.
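A minimal sketch of the standard algorithmic baseline for this problem: greedily assigning each item to the bidder with the largest marginal gain, which is a classic 1/2-approximation for monotone submodular valuations. The coverage valuations below are my own toy example (coverage functions are monotone submodular), not from any specific result surveyed here.

```python
def greedy_welfare(items, bidders, value):
    """Assign each item to the bidder with the largest marginal gain.

    For monotone submodular valuations this greedy rule is the classic
    1/2-approximation to the optimal welfare.
    """
    alloc = {b: set() for b in bidders}
    for item in items:
        best = max(bidders,
                   key=lambda b: value(b, alloc[b] | {item}) - value(b, alloc[b]))
        alloc[best].add(item)
    return alloc

# Toy coverage valuations: each item covers some ground elements, and a
# bidder's value for a bundle is the number of elements it covers.
coverage = {"i1": {1, 2}, "i2": {2, 3}, "i3": {4}}

def value(bidder, bundle):
    covered = set().union(*(coverage[i] for i in bundle)) if bundle else set()
    return len(covered)

alloc = greedy_welfare(["i1", "i2", "i3"], ["a", "b"], value)
welfare = sum(value(b, s) for b, s in alloc.items())
```

Here bidder "b" picks up "i2" because its marginal gain there is larger once "a" already holds the overlapping "i1".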
Usability of Humanly Computable Passwords
Reusing passwords across multiple websites is a common practice that
compromises security. Recently, Blum and Vempala have proposed password
strategies to help people calculate, in their heads, passwords for different
sites without dependence on third-party tools or external devices. Thus far,
the security and efficiency of these "mental algorithms" have been analyzed only
theoretically. But are such methods usable? We present the first usability
study of humanly computable password strategies, involving a learning phase (to
learn a password strategy), then a rehearsal phase (to login to a few
websites), and multiple follow-up tests. In our user study, with training,
participants were able to calculate a deterministic eight-character password
for an arbitrary new website in under 20 seconds.
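To make the flavor of a "mental algorithm" concrete, here is a purely hypothetical illustration in the same spirit, not the Blum-Vempala scheme itself: the user memorizes a secret letter-to-digit map once, then derives a deterministic per-site password with simple arithmetic that could be done in the head.

```python
import string, random

# Stand-in for the user's one-time secret setup: a memorized random map
# from letters to digits. (Illustrative only; not the actual scheme
# studied in the paper.)
rng = random.Random(0)
f = {c: rng.randrange(10) for c in string.ascii_lowercase}

def mental_password(site: str, length: int = 8) -> str:
    """Derive a deterministic password from a site name via f and a
    running sum mod 10 -- each step is a small mental computation."""
    letters = [c for c in site.lower() if c in f]
    chars, acc = [], 0
    for i in range(length):
        c = letters[i % len(letters)]
        acc = (acc + f[c]) % 10                      # easy running sum
        chars.append(string.ascii_lowercase[(f[c] * 3 + acc) % 26])
    return "".join(chars)
```

The point of the illustration is only that the output is deterministic, site-dependent, and fixed-length; the security analysis of real human-computable schemes is far more involved.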
Collective Counterfactual Explanations via Optimal Transport
Counterfactual explanations provide individuals with cost-optimal actions
that can alter their labels to desired classes. However, when a substantial
number of individuals seek such state modifications, these individual-centric
methods can lead to new competition and unanticipated costs. Furthermore, these
recommendations,
disregarding the underlying data distribution, may suggest actions that users
perceive as outliers. To address these issues, our work proposes a collective
approach for formulating counterfactual explanations, with an emphasis on
utilizing the current density of the individuals to inform the recommended
actions. Our problem is naturally cast as an optimal transport problem.
Leveraging the extensive literature on optimal transport, we illustrate how
this collective method improves upon the desiderata of classical counterfactual
explanations. We support our proposal with numerical simulations, illustrating
the effectiveness of the proposed approach and its relation to classic methods.
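A minimal numerical sketch of the collective idea (the setup and names are my own assumptions, not the paper's formulation): individuals in the undesired class are matched to feasible target states in the desired class by a discrete optimal transport problem, here a min-cost assignment with uniform masses, so recommendations account for everyone at once rather than moving each individual greedily.

```python
from itertools import permutations

def assignment_ot(sources, targets, cost):
    """Brute-force min-cost assignment: discrete OT with uniform masses.
    Only viable for tiny instances; real uses would call an OT solver."""
    best_plan, best_cost = None, float("inf")
    for perm in permutations(range(len(targets))):
        c = sum(cost(sources[i], targets[j]) for i, j in enumerate(perm))
        if c < best_cost:
            best_plan, best_cost = perm, c
    return best_plan, best_cost

sq = lambda x, y: (x - y) ** 2   # 1-D squared-distance action cost

sources = [0.0, 1.0, 2.0]        # current (rejected) states
targets = [1.5, 2.5, 3.0]        # feasible states in the desired class

plan, total = assignment_ot(sources, targets, sq)
```

In one dimension with a convex cost the optimal coupling is monotone (source i to target i here), which the brute force recovers; this is a standard structural fact about optimal transport on the line.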
Causal Fair Metric: Bridging Causality, Individual Fairness, and Adversarial Robustness
Adversarial perturbation is used to expose vulnerabilities in machine
learning models, while the concept of individual fairness aims to ensure
equitable treatment regardless of sensitive attributes. Despite their initial
differences, both concepts rely on metrics to generate similar input data
instances. These metrics should be designed to align with the data's
characteristics, especially when the data are derived from a causal structure,
and should reflect the proximity of counterfactuals. Previous attempts to
define such metrics
often lack general assumptions about data or structural causal models. In this
research, we introduce a causal fair metric formulated based on causal
structures that encompass sensitive attributes. For robustness analysis, the
concept of protected causal perturbation is presented. Additionally, we delve
into metric learning, proposing a method for metric estimation and deployment
in real-world problems. The introduced metric has applications in the fields of
adversarial training, fair learning, algorithmic recourse, and causal
reinforcement learning.
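A toy sketch of what such a metric can look like under a one-feature linear SCM (my own illustrative assumption, not the paper's construction): if the sensitive attribute a causally affects feature x via x = w*a + u, then measuring distance on the recovered exogenous term u makes an individual and their counterfactual twin (a flipped, x shifted accordingly) numerically indistinguishable.

```python
W = 2.0  # assumed causal effect of the sensitive attribute a on x

def exogenous(a: float, x: float) -> float:
    """Abduction step for the linear SCM x = W*a + u: recover u."""
    return x - W * a

def causal_fair_distance(v1, v2):
    """Distance on exogenous terms, so flipping a (and shifting x per
    the SCM) leaves the distance at zero."""
    (a1, x1), (a2, x2) = v1, v2
    return abs(exogenous(a1, x1) - exogenous(a2, x2))

# An individual and their counterfactual twin under a -> 1 - a:
person = (0.0, 1.3)                    # (a, x), exogenous u = 1.3
twin = (1.0, 1.3 + W * (1.0 - 0.0))    # flipping a shifts x by W
```

Two individuals with genuinely different exogenous terms remain at positive distance, so the metric still discriminates along non-sensitive directions.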
Causal Adversarial Perturbations for Individual Fairness and Robustness in Heterogeneous Data Spaces
As responsible AI gains importance in machine learning algorithms, properties
such as fairness, adversarial robustness, and causality have received
considerable attention in recent years. However, despite their individual
significance, there remains a critical gap in simultaneously exploring and
integrating these properties. In this paper, we propose a novel approach that
examines the relationship between individual fairness, adversarial robustness,
and structural causal models in heterogeneous data spaces, particularly when
dealing with discrete sensitive attributes. We use causal structural models and
sensitive attributes to create a fair metric and apply it to measure semantic
similarity among individuals. By introducing a novel causal adversarial
perturbation and applying adversarial training, we create a new regularizer
that combines individual fairness, causality, and robustness in the classifier.
Our method is evaluated on both real-world and synthetic datasets,
demonstrating its effectiveness in achieving an accurate classifier that
simultaneously exhibits fairness, adversarial robustness, and causal awareness.
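A rough numerical sketch of the regularizer idea; every concrete form here (linear SCM, logistic score, squared-difference penalty) is an illustrative assumption of mine, not the paper's construction. The training loss penalizes the classifier for changing its output between an individual and a causally perturbed version in which the sensitive attribute is flipped and the downstream feature is adjusted per the SCM.

```python
import math

W = 2.0  # assumed causal effect of the sensitive attribute a on x

def predict(theta, a, x):
    """Logistic score of a linear classifier over (a, x, bias)."""
    z = theta[0] * a + theta[1] * x + theta[2]
    return 1.0 / (1.0 + math.exp(-z))

def causal_twin(a, x):
    """Flip the sensitive attribute and shift x per the linear SCM."""
    a2 = 1.0 - a
    return a2, x + W * (a2 - a)

def loss(theta, data, lam=1.0):
    """Cross-entropy plus a penalty for disagreeing with the causal twin."""
    total = 0.0
    for a, x, y in data:
        p = predict(theta, a, x)
        total += -(y * math.log(p) + (1 - y) * math.log(1 - p))
        a2, x2 = causal_twin(a, x)
        total += lam * (p - predict(theta, a2, x2)) ** 2  # fairness/robustness term
    return total / len(data)

# Weights with theta_a = -W * theta_x depend only on the exogenous part
# of x, so the twin penalty vanishes for them:
theta = (-2.0, 1.0, 0.0)
data = [(0.0, 1.0, 1), (1.0, 3.0, 0)]
```

Minimizing this loss with any optimizer trades off label fit against invariance on counterfactual twins; for the twin-invariant weights above the penalty term contributes nothing, so `lam` has no effect.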