Neuroprostheses show potential in restoring lost sensory function and
enhancing human capabilities, but the sensations produced by current devices
often seem unnatural or distorted. Variability in implant placement and differences
in individual perception lead to significant variation in stimulus response,
making personalized stimulus optimization a key challenge. Bayesian
optimization could be used to optimize patient-specific stimulation parameters
with limited noisy observations, but is not feasible for high-dimensional
stimuli. Alternatively, deep learning models can optimize stimulus encoding
strategies, but typically assume perfect knowledge of patient-specific
variations. Here we propose a novel, practically feasible approach that
overcomes both of these fundamental limitations. First, a deep encoder network
is trained to produce optimal stimuli for any individual patient by inverting a
forward model mapping electrical stimuli to visual percepts. Second, a
preferential Bayesian optimization strategy utilizes this encoder to optimize
patient-specific parameters for a new patient, using a minimal number of
pairwise comparisons between candidate stimuli. We demonstrate the viability of
this approach on a novel, state-of-the-art visual prosthesis model. We show
that our approach quickly learns a personalized stimulus encoder, leads to
dramatic improvements in the quality of restored vision, and is robust to noisy
patient feedback and misspecifications in the underlying forward model.
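The comparison-driven fitting loop described above can be illustrated with a toy sketch. This is not the paper's GP-based preferential Bayesian optimization or its deep encoder; it only shows the core idea of refining patient-specific parameters from noisy pairwise preferences. Here `theta_true`, `percept_quality`, and `patient_prefers` are hypothetical stand-ins for the unknown patient-specific parameters, the perceptual quality of the encoder's output, and the patient's noisy judgment between two candidate stimuli.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ground-truth patient-specific parameters (unknown in practice).
theta_true = np.array([0.7, -0.3])

def percept_quality(theta):
    # Stand-in for how good the restored percept is under parameters theta:
    # higher when theta matches the patient's true parameters.
    return -np.sum((theta - theta_true) ** 2)

def patient_prefers(theta_a, theta_b, noise=0.05):
    # Noisy pairwise comparison: True if the patient prefers the stimulus
    # encoded under theta_a over the one encoded under theta_b.
    qa = percept_quality(theta_a) + rng.normal(0, noise)
    qb = percept_quality(theta_b) + rng.normal(0, noise)
    return qa > qb

# Comparison-driven search: keep the preferred candidate, propose perturbations.
best = np.zeros(2)          # initial guess for the patient's parameters
step = 0.5                  # proposal scale
for trial in range(60):
    candidate = best + rng.normal(0, step, size=2)
    if patient_prefers(candidate, best):
        best = candidate
    step *= 0.97            # shrink proposals as the estimate sharpens

print(best)
```

A full implementation would replace this hill-climbing loop with a Gaussian-process preference model and an acquisition function choosing the most informative comparisons, but the interface to the patient, a short sequence of "which looks better, A or B?" queries, is the same.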
Overall, our results suggest that combining the strengths of deep learning and
Bayesian optimization could significantly improve the perceptual experience of
patients fitted with visual prostheses and may prove a viable solution for a
range of neuroprosthetic technologies.