Unlearning algorithms aim to remove deleted data's influence from trained
models at a cost lower than full retraining. However, prior guarantees of
unlearning in the literature are flawed and do not protect the privacy of deleted
records.
records. We show that when users delete their data as a function of published
models, records in a database become interdependent. So, even retraining a
fresh model after deletion of a record doesn't ensure its privacy. Secondly,
unlearning algorithms that cache partial computations to speed up the
processing can leak deleted information over a series of releases, violating
the privacy of deleted records in the long run. To address these, we propose a
sound deletion guarantee and show that the privacy of existing records is
necessary for the privacy of deleted records. Under this notion, we design an
accurate, computationally efficient, and secure machine unlearning algorithm
based on noisy gradient descent.
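
To make the mechanism concrete, the following is a minimal sketch of unlearning via noisy gradient descent, assuming an L2-regularized logistic-regression model; the function names (`noisy_gd`, `unlearn`) and all hyperparameters are illustrative placeholders rather than the paper's exact construction. The idea is that, after a deletion request, the published model is fine-tuned on the retained records with Gaussian-noised gradient steps instead of being retrained from scratch.

```python
import numpy as np

def noisy_gd(w, X, y, steps, lr=0.1, sigma=0.05, lam=1e-2, seed=None):
    """Noisy (Langevin-style) gradient descent on an L2-regularized
    logistic loss: each update step adds Gaussian noise."""
    rng = np.random.default_rng(seed)
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))          # sigmoid predictions
        grad = X.T @ (p - y) / len(y) + lam * w      # loss gradient
        w = w - lr * grad + sigma * rng.standard_normal(w.shape)
    return w

def unlearn(w_published, X_retained, y_retained, steps=50):
    """Unlearning as continued noisy GD on the retained records only,
    starting from the published model (cheaper than full retraining)."""
    return noisy_gd(w_published, X_retained, y_retained, steps)

# Toy usage: train on synthetic data, delete one record, then unlearn.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))
y = (X[:, 0] > 0).astype(float)
w = noisy_gd(np.zeros(5), X, y, steps=200, seed=0)   # initial training
keep = np.arange(len(y)) != 17                       # drop record 17
w_unlearned = unlearn(w, X[keep], y[keep])
```

The noise injected at every step is what yields a differential-privacy-style guarantee for both existing and deleted records; without it, continued fine-tuning from the published model could still reveal the deleted record's influence.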