Preliminary Results from a Peer-Led, Social Network Intervention, Augmented by Artificial Intelligence to Prevent HIV among Youth Experiencing Homelessness
Each year, nearly 4 million youth experience homelessness (YEH) in the
United States, with HIV prevalence ranging from 3% to 11.5%. Peer change
agent (PCA) models for HIV prevention have been used successfully in many
populations, but there have been notable failures. In recent years, network
interventionists have suggested that these failures could be attributed to
PCA selection procedures: the change agents selected to do the PCA work can
be as important as the messages they convey. To address this concern, we
tested a new PCA intervention for YEH with three arms: (1) an arm using an
artificial intelligence (AI) planning algorithm to select PCAs, (2) a
popularity arm--the standard PCA approach--operationalized as highest degree
centrality (DC), and (3) an observation-only comparison group (OBS). PCA models
that promote HIV testing, HIV knowledge, and condom use are efficacious for
YEH. Both the AI and DC arms showed improvements over time. AI-based PCA
selection led to better outcomes and increased the speed of intervention
effects. Specifically, the changes in behavior observed in the AI arm occurred
by 1 month, but not until 3 months in the DC arm. Given the transient nature of
YEH and the high risk for HIV infection, more rapid intervention effects are
desirable.
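
To make the "popularity" (DC) selection concrete, here is a minimal sketch of
degree-centrality-based PCA selection, assuming a networkx graph; the function
name, toy network, and parameters are illustrative and not taken from the study.

    import networkx as nx

    def select_pcas_by_degree(graph: nx.Graph, k: int) -> list:
        """Return the k highest-degree-centrality nodes as PCA candidates."""
        dc = nx.degree_centrality(graph)  # degree / (n - 1) for each node
        return sorted(dc, key=dc.get, reverse=True)[:k]

    # Toy stand-in for a youth friendship network (illustrative only).
    G = nx.barabasi_albert_graph(n=60, m=2, seed=7)
    print(select_pcas_by_degree(G, k=5))  # the 5 most "popular" youths

The AI arm replaces this popularity heuristic with a planning algorithm that
plans for influence spread over the network rather than relying on raw
popularity.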
Contingency-Aware Influence Maximization: A Reinforcement Learning Approach
The influence maximization (IM) problem aims at finding a subset of seed
nodes in a social network that maximizes the spread of influence. In this
study, we focus on a sub-class of IM problems, called contingency-aware IM,
in which it is uncertain whether nodes are willing to be seeds when invited.
Such contingency-aware IM is critical for applications run by non-profit
organizations in low-resource communities (e.g., spreading awareness of
disease prevention).
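
To illustrate why the standard approach is expensive, here is a hedged sketch
of greedy contingency-aware IM with Monte Carlo spread estimation; the cascade
model, probabilities, and names are assumptions for illustration, not the
paper's exact formulation. The nested simulation loops are what make
recomputing solutions on each new network so costly, which motivates the
runtime concern raised next.

    import random
    import networkx as nx

    def simulate_spread(G, seeds, p_edge=0.1, p_accept=0.5, trials=200):
        """Estimate expected spread when invited seeds accept with p_accept."""
        total = 0
        for _ in range(trials):
            # Contingency: each invited seed joins only with some probability.
            active = {s for s in seeds if random.random() < p_accept}
            frontier = list(active)
            while frontier:  # independent-cascade style diffusion
                node = frontier.pop()
                for nbr in G.neighbors(node):
                    if nbr not in active and random.random() < p_edge:
                        active.add(nbr)
                        frontier.append(nbr)
            total += len(active)
        return total / trials

    def greedy_im(G, k):
        """Greedily add the seed with the largest marginal expected spread."""
        seeds = set()
        for _ in range(k):
            best = max((v for v in G if v not in seeds),
                       key=lambda v: simulate_spread(G, seeds | {v}))
            seeds.add(best)
        return seeds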
Despite initial successes of such applications, a major practical obstacle to
bringing the solutions to more communities is the tremendous runtime of the
greedy algorithms combined with the non-profits' lack of access to
high-performance computing (HPC) in the field: whenever a new social network
arises, the non-profits usually do not have the HPC resources to recalculate
the solutions. Motivated by this, and inspired by the line of work that uses
reinforcement learning (RL) to address combinatorial optimization on graphs,
we formalize the problem as a Markov Decision Process (MDP) and use RL to
learn an IM policy over historically seen networks that generalizes to unseen
networks with negligible runtime at test time. To fully exploit the properties
of our targeted problem, we propose two technical innovations that improve on
existing methods: state abstraction and theoretically grounded reward shaping.
Empirical results show that our method achieves influence as high as
state-of-the-art methods
for contingency-aware IM, while having negligible runtime at test time.
Comment: 11 pages; accepted for publication at UAI 2021
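
As a rough illustration of the MDP framing, the sketch below casts seed
selection as sequential decision making, reusing the simulate_spread helper
from the earlier sketch; the state abstraction and the dense (shaped) reward
shown here are illustrative guesses, not the paper's exact constructions.

    import networkx as nx

    class SeedSelectionMDP:
        """Seed selection as an MDP: actions add seeds until the budget is spent."""

        def __init__(self, G: nx.Graph, budget: int):
            self.G, self.budget = G, budget
            self.seeds: set = set()

        def state(self):
            # Crude state abstraction: summarize the partial solution with
            # cheap structural statistics instead of raw node identities,
            # so a learned policy can transfer to unseen networks.
            covered = {n for s in self.seeds for n in self.G.neighbors(s)}
            return (len(self.seeds), len(covered) / self.G.number_of_nodes())

        def step(self, node):
            # Shaped reward: the marginal gain in estimated expected spread,
            # a dense signal in place of the sparse end-of-episode spread.
            prev = simulate_spread(self.G, self.seeds)
            self.seeds.add(node)
            reward = simulate_spread(self.G, self.seeds) - prev
            done = len(self.seeds) == self.budget
            return self.state(), reward, done

An RL agent trained on many historical networks through this interface could
then pick seeds on a new network in a single forward pass, avoiding the greedy
recomputation entirely.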