Existing data augmentation in self-supervised learning, while diverse, fails
to preserve the inherent structure of natural images. This results in distorted
augmented samples with compromised semantic information, ultimately impacting
downstream performance. To overcome this, we propose SASSL: Style Augmentations
for Self-Supervised Learning, a novel augmentation technique based on Neural
Style Transfer. SASSL decouples semantic and stylistic attributes in images and
applies transformations exclusively to the style while preserving content,
generating diverse samples that better retain semantics. Our technique boosts
top-1 classification accuracy on ImageNet by up to 2% compared to
established self-supervised methods like MoCo, SimCLR, and BYOL, while
achieving superior transfer learning performance across various datasets.
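
To make the core idea concrete, the sketch below illustrates style-based augmentation with a simplified AdaIN-style statistic transfer applied directly in pixel space. This is an illustrative approximation, not SASSL's actual pipeline: the paper uses Neural Style Transfer with a learned stylization network, and the function names and the blending parameter `alpha` here are assumptions for exposition only.

```python
# Minimal sketch of style-based augmentation via channel-wise statistic
# transfer (AdaIN-style). Illustrative only: SASSL's actual method uses a
# pretrained Neural Style Transfer network; here, per-channel mean/std are
# transferred directly in pixel space to show the content/style decoupling.
import torch


def adain(content: torch.Tensor, style: torch.Tensor,
          eps: float = 1e-5) -> torch.Tensor:
    """Align channel-wise mean/std of `content` to those of `style`.

    Both tensors have shape (C, H, W). Spatial structure (content) is
    preserved because only first- and second-order channel statistics change.
    """
    c_mean = content.mean(dim=(1, 2), keepdim=True)
    c_std = content.std(dim=(1, 2), keepdim=True) + eps
    s_mean = style.mean(dim=(1, 2), keepdim=True)
    s_std = style.std(dim=(1, 2), keepdim=True) + eps
    return (content - c_mean) / c_std * s_std + s_mean


def style_augment(image: torch.Tensor, style_image: torch.Tensor,
                  alpha: float = 0.5) -> torch.Tensor:
    """Produce an augmented view: stylize, then blend with the original.

    `alpha` is an assumed hyperparameter controlling stylization strength,
    so semantics are retained while appearance varies.
    """
    stylized = adain(image, style_image)
    return alpha * stylized + (1.0 - alpha) * image


if __name__ == "__main__":
    content = torch.rand(3, 224, 224)  # placeholder content image
    style = torch.rand(3, 224, 224)    # placeholder style image
    view = style_augment(content, style, alpha=0.5)
    print(view.shape)  # torch.Size([3, 224, 224])
```

In a self-supervised pipeline, such a transform would be sampled alongside standard augmentations to generate views whose style differs but whose semantic content matches the source image.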