3 research outputs found
SASSL: Enhancing Self-Supervised Learning via Neural Style Transfer
Existing data augmentation in self-supervised learning, while diverse, fails
to preserve the inherent structure of natural images. This results in distorted
augmented samples with compromised semantic information, ultimately impacting
downstream performance. To overcome this, we propose SASSL: Style Augmentations
for Self-Supervised Learning, a novel augmentation technique based on Neural
Style Transfer. SASSL decouples semantic and stylistic attributes in images and
applies transformations exclusively to the style while preserving content,
generating diverse samples that better retain semantics. Our technique boosts
top-1 classification accuracy on ImageNet by up to 2% compared to
established self-supervised methods like MoCo, SimCLR, and BYOL, while
achieving superior transfer learning performance across various datasets.
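As a rough, hedged illustration of the style/content decoupling that SASSL builds on (not the paper's Neural Style Transfer pipeline), the Python sketch below shifts per-channel image statistics toward those of a style image while leaving spatial structure, i.e. content, untouched; the function name and parameters are hypothetical.

    # Minimal sketch of statistics-based style augmentation for SSL.
    # Illustrative only: SASSL itself uses Neural Style Transfer on deep
    # features, not the raw pixel statistics used here.
    import numpy as np

    def stylize_pixel_stats(content, style, alpha=0.3):
        """Shift each channel of `content` toward the mean/std of `style`.

        content, style: float arrays of shape (H, W, C) in [0, 1].
        alpha: blending weight; 0 keeps the original image, 1 fully
        adopts the style image's channel statistics.
        """
        c_mean = content.mean(axis=(0, 1), keepdims=True)
        c_std = content.std(axis=(0, 1), keepdims=True) + 1e-6
        s_mean = style.mean(axis=(0, 1), keepdims=True)
        s_std = style.std(axis=(0, 1), keepdims=True)
        stylized = (content - c_mean) / c_std * s_std + s_mean
        return np.clip((1 - alpha) * content + alpha * stylized, 0.0, 1.0)

    # Usage: produce one style-augmented view for a two-view SSL method.
    rng = np.random.default_rng(0)
    content_img = rng.random((224, 224, 3))
    style_img = rng.random((224, 224, 3))
    view = stylize_pixel_stats(content_img, style_img, alpha=0.3)

Blending via `alpha` mirrors the broader idea of controlling augmentation strength so that stylization diversifies samples without destroying semantics.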
Augmentations vs Algorithms: What Works in Self-Supervised Learning
We study the relative effects of data augmentations, pretraining algorithms,
and model architectures in Self-Supervised Learning (SSL). While the recent
literature in this space leaves the impression that the pretraining algorithm
is of critical importance to performance, understanding its effect is
complicated by the difficulty in making objective and direct comparisons
between methods. We propose a new framework which unifies many seemingly
disparate SSL methods into a single shared template. Using this framework, we
identify aspects in which methods differ and observe that in addition to
changing the pretraining algorithm, many works also use new data augmentations
or more powerful model architectures. We compare several popular SSL methods
using our framework and find that many algorithmic additions, such as
prediction networks or new losses, have a minor impact on downstream task
performance (often less than 1%), while enhanced augmentation techniques
offer more significant performance improvements (2-4%). Our findings
challenge the premise that SSL is being driven primarily by algorithmic
improvements, and suggest instead a bitter lesson for SSL: that augmentation
diversity and data/model scale are more critical contributors to recent
advances in self-supervised learning.
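The abstract does not spell out the shared template; as a hedged sketch of the kind of two-view pipeline such a template organizes, the NumPy fragment below separates the augmentation knob from the algorithmic knobs (predictor, loss). All names, shapes, and the toy augmentation are illustrative assumptions, not the paper's framework.

    # Generic two-view SSL step: encode two augmentations of the same batch,
    # optionally apply a predictor on one branch, and score agreement.
    # Illustrative NumPy only; real methods use deep encoders and, in
    # BYOL-style setups, a stop-gradient/EMA target branch omitted here.
    import numpy as np

    rng = np.random.default_rng(0)
    W_enc = rng.normal(size=(128, 64)) * 0.1   # stand-in encoder weights
    W_pred = rng.normal(size=(64, 64)) * 0.1   # optional predictor head

    def augment(x):
        return x + 0.1 * rng.normal(size=x.shape)   # placeholder augmentation

    def encode(x):
        return np.tanh(x @ W_enc)                   # placeholder encoder

    def cosine_loss(p, z):
        p = p / np.linalg.norm(p, axis=1, keepdims=True)
        z = z / np.linalg.norm(z, axis=1, keepdims=True)
        return -(p * z).sum(axis=1).mean()

    x = rng.normal(size=(32, 128))                   # a batch of inputs
    z1, z2 = encode(augment(x)), encode(augment(x))  # two views, one encoder
    loss = cosine_loss(z1 @ W_pred, z2)              # agreement between views

In this framing, swapping `cosine_loss` for a contrastive loss or removing `W_pred` changes the pretraining algorithm, while changing `augment` changes the augmentation pipeline, which is the axis the paper finds matters more.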
Quantitative modeling of forces in electromagnetic tweezers
This paper discusses numerical simulations of the magnetic field produced by an electromagnet used to generate forces on superparamagnetic microspheres for the manipulation of single molecules or cells. Single-molecule force spectroscopy based on magnetic tweezers can be used in applications that require parallel readout of biopolymer stretching or biomolecular binding. The magnetic tweezers exert force on a surface-immobilized macromolecule by pulling a magnetic bead attached to the free end of the molecule in the direction of the field gradient. In a typical force spectroscopy experiment, the pulling forces range from subpiconewtons to tens of piconewtons. To provide such forces effectively, an understanding of the source of the magnetic field is required as the first step in the design of force spectroscopy systems. In this study, we use a numerical technique, the method of auxiliary sources, to investigate the influence of electromagnet geometry and the material parameters of the magnetic core on the magnetic forces pulling the target beads in the area of interest. The close proximity of the area of interest to the magnet body results in deviations from intuitive relations between magnet size and pulling force, as well as in the force decay with distance. We discuss the benefits and drawbacks of various geometric modifications affecting the magnitude and spatial distribution of forces achievable with an electromagnet.
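As a back-of-the-envelope companion to the abstract, the sketch below estimates the pulling force on a magnetically saturated bead from F ≈ m_sat |dB/dz|; the exponential field profile and the bead moment are assumed illustrative values, not simulation results from the paper.

    # Force on a saturated superparamagnetic bead along the tweezers axis.
    # Assumed numbers: m_sat is typical for a ~1 um bead; the field profile
    # is a stand-in for a simulated electromagnet field, not real data.
    import numpy as np

    m_sat = 1.4e-14                   # bead moment at saturation, A*m^2
    z = np.linspace(0.0, 2e-3, 2001)  # distance from the pole tip, m
    B = 0.3 * np.exp(-z / 5e-4)       # illustrative field magnitude, T
    dBdz = np.gradient(B, z)          # numerical field gradient, T/m
    F = m_sat * np.abs(dBdz)          # pulling force toward the tip, N

    i = np.searchsorted(z, 1e-4)      # index closest to z = 100 um
    print(f"force at 100 um: {F[i] * 1e12:.1f} pN")

With these assumed values the force comes out at a few piconewtons, consistent with the subpiconewton-to-tens-of-piconewtons range the abstract cites, and it decays with distance through the field-gradient term.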
This paper discusses numerical simulations of the magnetic field produced by an electromagnet for generation of forces on superparamagnetic microspheres used in manipulation of single molecules or cells. Single molecule force spectroscopy based on magnetic tweezers can be used in applications that require parallel readout of biopolymer stretching or biomolecular binding. The magnetic tweezers exert forces on the surface-immobilized macromolecule by pulling a magnetic bead attached to the free end of the molecule in the direction of the field gradient. In a typical force spectroscopy experiment, the pulling forces can range between subpiconewton to tens of piconewtons. In order to effectively provide such forces, an understanding of the source of the magnetic field is required as the first step in the design of force spectroscopy systems. In this study, we use a numerical technique, the method of auxiliary sources, to investigate the influence of electromagnet geometry and material parameters of the magnetic core on the magnetic forces pulling the target beads in the area of interest. The close proximity of the area of interest to the magnet body results in deviations from intuitive relations between magnet size and pulling force, as well as in the force decay with distance. We discuss the benefits and drawbacks of various geometric modifications affecting the magnitude and spatial distribution of forces achievable with an electromagnet