
    Unsupervised 3D Pose Estimation with Geometric Self-Supervision

    We present an unsupervised learning approach to recover 3D human pose from 2D skeletal joints extracted from a single image. Our method does not require any multi-view image data, 3D skeletons, or correspondences between 2D-3D points, nor does it use previously learned 3D priors during training. A lifting network accepts 2D landmarks as inputs and generates a corresponding 3D skeleton estimate. During training, the recovered 3D skeleton is reprojected onto random camera viewpoints to generate new "synthetic" 2D poses. By lifting the synthetic 2D poses back to 3D and reprojecting them in the original camera view, we can define a self-consistency loss both in 3D and in 2D. The training can thus be self-supervised by exploiting the geometric self-consistency of the lift-reproject-lift process. We show that self-consistency alone is not sufficient to generate realistic skeletons; however, adding a 2D pose discriminator enables the lifter to output valid 3D poses. Additionally, to learn from 2D poses "in the wild", we train an unsupervised 2D domain adapter network to allow for an expansion of 2D data. This improves results and demonstrates the usefulness of 2D pose data for unsupervised 3D lifting. Results on the Human3.6M dataset for 3D human pose estimation demonstrate that our approach improves upon previous unsupervised methods by 30% and outperforms many weakly supervised approaches that explicitly use 3D data.
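    A minimal sketch of the lift-reproject-lift consistency loss described above, assuming PyTorch and a hypothetical `lifter` callable that maps (batch, joints, 2) landmarks to (batch, joints, 3) skeletons. The random camera is simplified here to a rotation about the vertical axis with an orthographic projection; the paper's actual camera sampling and projection model need not match this.

        import torch

        def project(joints_3d):
            # Orthographic projection: keep x and y, drop depth (a simplifying assumption).
            return joints_3d[..., :2]

        def random_rotation(batch_size, device):
            # Random rotation about the vertical axis, standing in for a random camera viewpoint.
            angles = torch.rand(batch_size, device=device) * 2 * torch.pi
            cos, sin = torch.cos(angles), torch.sin(angles)
            zeros, ones = torch.zeros_like(angles), torch.ones_like(angles)
            return torch.stack([
                torch.stack([cos, zeros, sin], dim=-1),
                torch.stack([zeros, ones, zeros], dim=-1),
                torch.stack([-sin, zeros, cos], dim=-1),
            ], dim=-2)                                                        # (B, 3, 3)

        def self_consistency_loss(lifter, pose_2d):
            # lift -> rotate -> reproject -> lift again -> rotate back, then compare in 3D and 2D.
            pose_3d = lifter(pose_2d)                                         # (B, J, 3)
            rot = random_rotation(pose_3d.shape[0], pose_3d.device)
            synth_2d = project(torch.einsum('bij,bkj->bki', rot, pose_3d))    # synthetic 2D view
            relift_3d = lifter(synth_2d)
            back_3d = torch.einsum('bji,bkj->bki', rot, relift_3d)            # inverse rotation
            loss_3d = torch.mean((back_3d - pose_3d) ** 2)
            loss_2d = torch.mean((project(back_3d) - pose_2d) ** 2)
            return loss_3d + loss_2d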

    A Theoretical Study of Inductive Biases in Contrastive Learning

    Understanding self-supervised learning is important but challenging. Previous theoretical works study the role of pretraining losses and view neural networks as general black boxes. However, the recent work of Saunshi et al. argues that the model architecture -- a component largely ignored by previous works -- also has a significant influence on the downstream performance of self-supervised learning. In this work, we provide the first theoretical analysis of self-supervised learning that incorporates the effect of inductive biases originating from the model class. In particular, we focus on contrastive learning -- a popular self-supervised learning method that is widely used in the vision domain. We show that when the model has limited capacity, contrastive representations recover certain special clustering structures that are compatible with the model architecture, but ignore many other clustering structures in the data distribution. As a result, our theory can capture the more realistic setting where contrastive representations have much lower dimensionality than the number of clusters in the data distribution. We instantiate our theory on several synthetic data distributions and provide empirical evidence to support the theory.
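    The abstract is purely theoretical and gives no implementation; for concreteness, a minimal sketch of the standard contrastive objective it analyzes (an InfoNCE/SimCLR-style loss, with the temperature `tau` as an assumed hyperparameter) looks like this:

        import torch
        import torch.nn.functional as F

        def info_nce_loss(z1, z2, tau=0.1):
            # z1, z2: (B, D) representations of two augmented views of the same B examples.
            z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
            logits = z1 @ z2.t() / tau                          # (B, B) cosine-similarity matrix
            labels = torch.arange(z1.shape[0], device=z1.device)
            # Each (i, i) pair is the positive; the other entries in row i act as negatives.
            return F.cross_entropy(logits, labels)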

    Combined self-learning based single-image super-resolution and dual-tree complex wavelet transform denoising for medical images

    In this paper, we propose a novel self-learning based single-image super-resolution (SR) method, coupled with dual-tree complex wavelet transform (DTCWT) based denoising, to better recover high-resolution (HR) medical images. Unlike previous methods, this self-learning based SR approach enables us to reconstruct HR medical images from a single low-resolution (LR) image without prior training on external HR image datasets. The relationships between the given image and its scaled-down versions are modeled using support vector regression with sparse coding and dictionary learning, without explicitly assuming reoccurrence or self-similarity across image scales. In addition, we perform DTCWT based denoising to initialize the HR images at each scale instead of simple bicubic interpolation. We evaluate our method on a variety of medical images. Both quantitative and qualitative results show that the proposed approach outperforms bicubic interpolation and state-of-the-art single-image SR methods while effectively removing noise.
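    A rough sketch of the self-learning idea, assuming NumPy, scikit-image, and scikit-learn: a training pair is built from the input image and its own downscaled copy, and a support vector regressor learned on that pair is then applied to a bicubic-upscaled version of the input. The paper's sparse coding, dictionary learning, and DTCWT-based initialization are replaced here by a plain pixel-patch SVR and bicubic interpolation, so this illustrates only the self-learning step, not the authors' full pipeline.

        import numpy as np
        from skimage.transform import resize
        from sklearn.svm import SVR

        PATCH = 5
        HALF = PATCH // 2

        def patches_and_centres(img):
            # Overlapping PATCH x PATCH patches and their centre pixels (helper for this sketch).
            feats, centres = [], []
            for i in range(HALF, img.shape[0] - HALF):
                for j in range(HALF, img.shape[1] - HALF):
                    feats.append(img[i - HALF:i + HALF + 1, j - HALF:j + HALF + 1].ravel())
                    centres.append(img[i, j])
            return np.asarray(feats), np.asarray(centres)

        def self_learned_sr(lr_img, scale=2):
            # One self-learning SR step on a float image in [0, 1]: learn the LR -> HR mapping
            # from the image's own scale-space, then apply it to an upscaled version of the input.
            h, w = lr_img.shape
            coarse = resize(resize(lr_img, (h // scale, w // scale), order=3), (h, w), order=3)
            X_train, _ = patches_and_centres(coarse)     # degraded copy plays the "LR" role
            _, y_train = patches_and_centres(lr_img)     # original image plays the "HR" role
            model = SVR(kernel='rbf', C=1.0).fit(X_train, y_train)
            up = resize(lr_img, (h * scale, w * scale), order=3)   # bicubic initialization
            X_test, _ = patches_and_centres(up)
            out = up.copy()
            out[HALF:-HALF, HALF:-HALF] = model.predict(X_test).reshape(
                up.shape[0] - 2 * HALF, up.shape[1] - 2 * HALF)
            return out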

    Block sparsity and gauge mediated weight sharing for learning dynamical laws from data

    Recent years have witnessed an increased interest in recovering dynamical laws of complex systems in a largely data-driven fashion under meaningful hypotheses. In this work, we propose a method for scalably learning dynamical laws of classical dynamical systems from data. As a novel ingredient, to achieve an efficient scaling with the system size, block-sparse tensor trains - instances of tensor networks applied to function dictionaries - are used and the self-similarity of the problem is exploited. For the latter, we propose an approach of gauge-mediated weight sharing, inspired by notions of machine learning, which significantly improves performance over previous approaches. The practical performance of the method is demonstrated numerically on three one-dimensional systems - the Fermi-Pasta-Ulam-Tsingou system, rotating magnetic dipoles and classical particles interacting via modified Lennard-Jones potentials. We highlight the ability of the method to recover these systems, requiring 1400 samples to recover the 50-particle Fermi-Pasta-Ulam-Tsingou system to a residuum of 5×10⁻⁷, 900 samples to recover the 50-particle magnetic dipole chain to a residuum of 1.5×10⁻⁴, and 7000 samples to recover the Lennard-Jones system of 10 particles to a residuum of 1.5×10⁻². The robustness against additive Gaussian noise is demonstrated for the magnetic dipole system.
    Comment: 13 pages, 6 figures
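    The paper's block-sparse tensor-train machinery is not reproduced here, but the underlying idea - regressing observed time derivatives onto a dictionary of candidate functions with a sparsity-promoting fit - can be sketched with a sequentially thresholded least-squares loop (a SINDy-style baseline, assuming NumPy; the function names and the damped-oscillator example are illustrative only):

        import numpy as np

        def learn_dynamics(x, dx, dictionary, threshold=0.1, iters=10):
            # Sequentially thresholded least squares over a function dictionary.
            # x:  (T, d) state samples; dx: (T, d) time derivatives at those samples;
            # dictionary: list of callables f(x) -> (T,) evaluating candidate terms.
            theta = np.column_stack([f(x) for f in dictionary])        # (T, m) dictionary matrix
            coeffs = np.linalg.lstsq(theta, dx, rcond=None)[0]         # (m, d) initial dense fit
            for _ in range(iters):
                small = np.abs(coeffs) < threshold
                coeffs[small] = 0.0
                for k in range(dx.shape[1]):                           # refit each coordinate on
                    keep = ~small[:, k]                                 # the surviving terms only
                    if keep.any():
                        coeffs[keep, k] = np.linalg.lstsq(theta[:, keep], dx[:, k], rcond=None)[0]
            return coeffs

        # Toy check: recover a damped oscillator dx/dt = (x2, -x1 - 0.5*x2) from sampled states.
        rng = np.random.default_rng(0)
        x = rng.normal(size=(500, 2))
        dx = np.column_stack([x[:, 1], -x[:, 0] - 0.5 * x[:, 1]])
        dictionary = [lambda s: s[:, 0], lambda s: s[:, 1], lambda s: s[:, 0] * s[:, 1]]
        print(learn_dynamics(x, dx, dictionary))   # approx [[0, -1], [1, -0.5], [0, 0]]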

    The Influence of Observational Learning on Self-reported Physical Activity, Self-efficacy for Physical Activity, and Health-related Fitness Knowledge for Physical Activity

    The obesity epidemic has placed a tremendous burden on our economy and healthcare system. Physical activity is one method that can reduce the obesity rate. However, physical activity declines in high school and does not recover. The likelihood of adolescents continuing their involvement in physical activity depends on how they navigate the highs and lows of their physical activity experiences (Feltz & Magyar, 2006). The purpose of this study is to examine the role of observational learning in physical activity behaviors in an adolescent population. Specifically, this research examines the influence of observational learning on self-reported physical activity, self-efficacy for physical activity, and health-related fitness knowledge, controlling for gender, ethnicity, and grade.