498 research outputs found

    HP-GAN: Probabilistic 3D human motion prediction via GAN

    Predicting and understanding human motion dynamics has many applications, such as motion synthesis, augmented reality, security, and autonomous vehicles. Due to the recent success of generative adversarial networks (GANs), there has been much interest in probabilistic estimation and synthetic data generation using deep neural network architectures and learning algorithms. We propose a novel sequence-to-sequence model for probabilistic human motion prediction, trained with a modified version of the improved Wasserstein generative adversarial network (WGAN-GP), in which we use a custom loss function designed for human motion prediction. Our model, which we call HP-GAN, learns a probability density function of future human poses conditioned on previous poses. It predicts multiple sequences of possible future human poses, each from the same input sequence but a different vector z drawn from a random distribution. Furthermore, to quantify the quality of the non-deterministic predictions, we simultaneously train a motion-quality-assessment model that learns the probability that a given skeleton sequence is a real human motion. We test our algorithm on two of the largest skeleton datasets: NTURGB-D and Human3.6M. We train our model on both single and multiple action types. Its predictive power for long-term motion estimation is demonstrated by generating multiple plausible futures of more than 30 frames from just 10 frames of input. We show that most sequences generated from the same input have more than a 50% probability of being judged as a real human sequence. We will release all the code used in this paper on GitHub.
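    A minimal, hypothetical sketch of the sampling mechanism described above, not the authors' released implementation: a sequence-to-sequence generator consumes the observed frames together with a random vector z, and re-running it with different z values yields multiple plausible futures for the same input. The GRU architecture, layer sizes, and pose dimension (75 joint coordinates) are illustrative assumptions.

        import torch
        import torch.nn as nn

        class PoseSeq2SeqGenerator(nn.Module):
            """Illustrative generator: encodes observed poses, then decodes a
            future sequence conditioned on a random vector z (assumed design)."""
            def __init__(self, pose_dim=75, hidden=256, z_dim=128, future_len=30):
                super().__init__()
                self.future_len = future_len
                self.encoder = nn.GRU(pose_dim, hidden, batch_first=True)
                self.decoder = nn.GRUCell(pose_dim + z_dim, hidden)
                self.out = nn.Linear(hidden, pose_dim)

            def forward(self, observed, z):
                # observed: (batch, 10, pose_dim), z: (batch, z_dim)
                _, h = self.encoder(observed)       # summarize the observed prefix
                h = h.squeeze(0)
                pose = observed[:, -1]              # start from the last seen pose
                futures = []
                for _ in range(self.future_len):
                    h = self.decoder(torch.cat([pose, z], dim=-1), h)
                    pose = pose + self.out(h)       # predict a residual pose update
                    futures.append(pose)
                return torch.stack(futures, dim=1)  # (batch, future_len, pose_dim)

        # Drawing several z vectors from the prior gives multiple plausible
        # futures for the same 10-frame input, as the abstract describes.
        gen = PoseSeq2SeqGenerator()
        observed = torch.randn(4, 10, 75)
        futures = [gen(observed, torch.randn(4, 128)) for _ in range(5)]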

    A robust assessment for invariant representations

    The performance of machine learning models can be impacted by changes in data over time. A promising approach to address this challenge is invariant learning, with a particular focus on a method known as invariant risk minimization (IRM). This technique aims to identify a stable data representation that remains effective with out-of-distribution (OOD) data. While numerous studies have developed IRM-based methods adaptive to data augmentation scenarios, there has been limited attention on directly assessing how well these representations preserve their invariant performance under varying conditions. In our paper, we propose a novel method to evaluate invariant performance, specifically tailored for IRM-based methods. We establish a bridge between the conditional expectations of an invariant predictor across different environments through the likelihood ratio. Our proposed criterion offers a robust basis for evaluating invariant performance. We validate our approach with theoretical support and demonstrate its effectiveness through extensive numerical studies. These experiments illustrate how our method can assess the invariant performance of various representation techniques.
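    For readers unfamiliar with the setup, the sketch below states the standard IRM invariance condition and the usual IRMv1 training objective, together with one way a likelihood-ratio bridge between conditional expectations in two environments can be written. The paper's precise criterion is not reproduced here, and the notation (\Phi, w, R^e, p_e) follows the broader IRM literature rather than this abstract.

        % Invariance of a representation \Phi (standard IRM formulation):
        %   for all environments e, e' and all values h of \Phi(X),
        \mathbb{E}\left[ Y \mid \Phi(X) = h,\ e \right]
            = \mathbb{E}\left[ Y \mid \Phi(X) = h,\ e' \right].

        % IRMv1 surrogate objective commonly used to search for such a \Phi:
        \min_{\Phi} \sum_{e \in \mathcal{E}_{\mathrm{tr}}}
            R^{e}(\Phi)
            + \lambda \left\| \nabla_{w \,\mid\, w = 1.0} R^{e}(w \cdot \Phi) \right\|^{2}.

        % One way to bridge conditional expectations in two environments via a
        % likelihood ratio of the conditional densities of Y given \Phi(X):
        \mathbb{E}_{e'}\left[ Y \mid \Phi(X) = h \right]
            = \mathbb{E}_{e}\left[ Y \,
              \frac{p_{e'}(Y \mid \Phi(X) = h)}{p_{e}(Y \mid \Phi(X) = h)}
              \;\middle|\; \Phi(X) = h \right].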

    Conformal Inference for Invariant Risk Minimization

    The application of machine learning models can be significantly impeded by distributional shifts, as the assumption of homogeneity between the training and testing populations common in machine learning and statistics may not hold in practical situations. One way to tackle this problem is to use invariant learning, such as invariant risk minimization (IRM), to acquire an invariant representation that aids generalization under distributional shifts. This paper develops methods for obtaining distribution-free prediction regions that describe uncertainty estimates for invariant representations, accounting for the distribution shifts of data from different environments. Our approach involves a weighted conformity score that adapts to the specific environment in which the test sample is situated. We construct an adaptive conformal interval using the weighted conformity score and prove its conditional average coverage under certain conditions. To demonstrate the effectiveness of our approach, we conduct several numerical experiments, including simulation studies and a practical example using real-world data.
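    As a rough illustration of the weighted-conformity idea, the sketch below computes a split-conformal interval from a weighted quantile of calibration scores, with a point mass at infinity for the test sample as in standard weighted conformal prediction. The absolute-residual score, the function names, and the toy weights are assumptions; the paper's exact conformity score and environment-dependent weighting scheme may differ.

        import numpy as np

        def weighted_quantile(values, weights, q):
            """Level-q quantile of a weighted empirical distribution."""
            order = np.argsort(values)
            values, weights = values[order], weights[order]
            cdf = np.cumsum(weights) / np.sum(weights)
            idx = int(np.searchsorted(cdf, q))
            return values[min(idx, len(values) - 1)]

        def weighted_conformal_interval(mu_cal, y_cal, w_cal, mu_test, w_test, alpha=0.1):
            """Split-conformal interval with environment-dependent weights.
            mu_*: point predictions, y_cal: calibration labels,
            w_*: (unnormalized) likelihood-ratio weights for each sample."""
            scores = np.abs(y_cal - mu_cal)      # absolute-residual conformity score
            vals = np.append(scores, np.inf)     # point mass at +inf for the test point
            wts = np.append(w_cal, w_test)
            qhat = weighted_quantile(vals, wts, 1.0 - alpha)
            return mu_test - qhat, mu_test + qhat

        # Toy usage with made-up numbers; in the IRM setting the weights would
        # come from a likelihood ratio between the test environment and the
        # calibration environment, evaluated on the invariant representation.
        rng = np.random.default_rng(0)
        y_cal = rng.normal(size=200)
        mu_cal = y_cal + 0.3 * rng.normal(size=200)
        w_cal = np.ones(200)
        lo, hi = weighted_conformal_interval(mu_cal, y_cal, w_cal, mu_test=0.0, w_test=1.0)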

    Silicon substrate significantly alters dipole-dipole resolution in coherent microscope

    The influence of a substrate beneath the sample on imaging performance is studied by deriving the solution of the dyadic Green's function, where the substrate is modeled as a half-space in the sample region. Theoretical and numerical analyses are then performed in terms of magnification, depth of field, and resolution. Various settings, including the positions of the dipoles, the distance of the substrate from the focal plane, and the dipole polarization, are considered. Methods to measure the resolution of z-polarized dipoles are also presented, since the modified Rayleigh limit cannot be applied directly. A silicon substrate and a glass substrate are studied with a water-immersion objective lens. The high contrast between silicon and water leads to significant disturbances in the image.
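    For context, the classical (unmodified) Rayleigh limit referenced above is the standard two-point resolution criterion given below; the modified form the authors use for z-polarized dipoles in the presence of the substrate is not reproduced here.

        % Classical Rayleigh criterion for two incoherent point sources,
        % with wavelength \lambda and numerical aperture NA:
        \Delta r = 0.61\,\frac{\lambda}{\mathrm{NA}}.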

    Hierarchical spacetime control

    Specifying the motion of an animated linked figure so that it achieves given tasks (e.g., throwing a ball into a basket) and performs them in a realistic fashion (e.g., gracefully, and following physical laws such as gravity) has been an elusive goal for computer animators. The spacetime constraints paradigm has been shown to be a valuable approach to this problem, but it suffers from growth in computational complexity as creatures and tasks approach those one would like to animate. This complexity is shown to be due, in part, to the choice of finite basis used to represent the trajectories of the generalized degrees of freedom. This paper describes new extensions to the spacetime constraints paradigm that address this problem. The functions through time of the generalized degrees of freedom are reformulated in a hierarchical wavelet representation. This provides a means to automatically add detailed motion only where it is required, thus minimizing the number of discrete variables. In addition, the wavelet basis is shown to lead to better-conditioned systems of equations and thus faster convergence.
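    A small, illustrative sketch of the hierarchical-refinement idea (not the paper's formulation): the trajectory of one generalized degree of freedom is synthesized from an unnormalized Haar wavelet hierarchy, and detail coefficients are introduced only at the levels and times where finer motion is needed, so the coarse levels stay cheap.

        import numpy as np

        def haar_reconstruct(coarse, details):
            """Reconstruct a trajectory from a coarse value plus per-level
            unnormalized Haar detail coefficients; zero details add no new
            degrees of freedom at that level, mirroring adaptive refinement."""
            signal = np.asarray(coarse, dtype=float)
            for d in details:
                up = np.repeat(signal, 2)        # upsample the previous level
                d = np.asarray(d, dtype=float)
                up[0::2] += d                    # add detail to even samples
                up[1::2] -= d                    # subtract detail from odd samples
                signal = up
            return signal

        # One degree of freedom over 8 time samples: a flat coarse motion
        # refined only at the second level (all other details stay zero).
        coarse = [1.0]
        details = [np.zeros(1), np.array([0.0, 0.3]), np.zeros(4)]
        trajectory = haar_reconstruct(coarse, details)
        print(trajectory)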