Machine unlearning, the ability for a machine learning model to forget, is
becoming increasingly important to comply with data privacy regulations, as
well as to remove harmful, manipulated, or outdated information. The key
challenge lies in forgetting specific information while protecting model
performance on the remaining data. While current state-of-the-art methods
perform well, they typically require some level of retraining over the retained
data in order to protect or restore model performance. This adds computational
overhead and mandates that the training data remain available and accessible,
which may not be feasible. In contrast, other methods employ a retrain-free
paradigm; however, these approaches are prohibitively computationally expensive
and do not perform on par with their retrain-based counterparts. We present
Selective Synaptic Dampening (SSD), a novel two-step, post hoc, retrain-free
approach to machine unlearning which is fast, performant, and does not require
long-term storage of the training data. First, SSD uses the Fisher information
matrix of the training and forgetting data to select parameters that are
disproportionately important to the forget set. Second, SSD induces forgetting
by dampening these parameters proportional to their relative importance to the
forget set with respect to the wider training data. We evaluate our method
against several existing unlearning methods in a range of experiments using
ResNet18 and Vision Transformer. Results show that the performance of SSD is
competitive with retrain-based post hoc methods, demonstrating the viability of
retrain-free post hoc unlearning approaches.
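
To make the two-step procedure described above concrete, the following is a minimal PyTorch-style sketch of the general idea: estimate diagonal Fisher importances on the training and forget data, select parameters disproportionately important to the forget set, and dampen them in proportion to their relative importance. The diagonal-Fisher estimator and the threshold and dampening hyperparameters (here called alpha and lam) are illustrative assumptions, not the authors' exact implementation.

```python
# Hypothetical sketch of selective dampening; hyperparameters and the
# diagonal-Fisher estimate are illustrative assumptions.
import torch
import torch.nn.functional as F


def diagonal_fisher(model, loader, device="cpu"):
    """Estimate the diagonal of the Fisher information matrix as the
    mean squared gradient of the loss over a data loader."""
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    model.eval()
    n_batches = 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        model.zero_grad()
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                fisher[n] += p.grad.detach() ** 2
        n_batches += 1
    return {n: f / max(n_batches, 1) for n, f in fisher.items()}


@torch.no_grad()
def selective_dampening(model, fisher_train, fisher_forget, alpha=10.0, lam=1.0):
    """Step 1: select parameters whose forget-set importance exceeds
    alpha times their training-set importance.
    Step 2: dampen the selected parameters in proportion to the ratio of
    training-set to forget-set importance, capped at 1 so that no
    parameter is ever amplified."""
    for n, p in model.named_parameters():
        imp_train, imp_forget = fisher_train[n], fisher_forget[n]
        selected = imp_forget > alpha * imp_train
        scale = torch.clamp(lam * imp_train / (imp_forget + 1e-12), max=1.0)
        p.mul_(torch.where(selected, scale, torch.ones_like(scale)))
```

Because both steps are post hoc and gradient-descent-free, a sketch like this only needs one pass over the forget data (and a stored or one-off importance estimate for the training data) rather than any retraining loop.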