
    Zero-shot Pose Transfer for Unrigged Stylized 3D Characters

    Transferring the pose of a reference avatar to stylized 3D characters of various shapes is a fundamental task in computer graphics. Existing methods either require the stylized characters to be rigged, or they use the stylized character in the desired pose as ground truth at training. We present a zero-shot approach that requires only the widely available deformed non-stylized avatars in training, and deforms stylized characters of significantly different shapes at inference. Classical methods achieve strong generalization by deforming the mesh at the triangle level, but this requires labelled correspondences. We leverage the power of local deformation, but without requiring explicit correspondence labels. We introduce a semi-supervised shape-understanding module to bypass the need for explicit correspondences at test time, and an implicit pose deformation module that deforms individual surface points to match the target pose. Furthermore, to encourage realistic and accurate deformation of stylized characters, we introduce an efficient volume-based test-time training procedure. Because it needs neither rigging nor the deformed stylized character at training time, our model generalizes to categories with scarce annotation, such as stylized quadrupeds. Extensive experiments demonstrate the effectiveness of the proposed method compared to the state-of-the-art approaches trained with comparable or more supervision. Our project page is available at https://jiashunwang.github.io/ZPT. Comment: CVPR 202
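    The implicit pose deformation module described above can be pictured as a coordinate-based network that maps each surface point, conditioned on a target-pose encoding, to a displaced point. The sketch below is not the authors' code; the network sizes, the pose-code dimension, and all tensor names are illustrative assumptions.

```python
# Minimal sketch of an implicit per-point pose deformation module (assumed design).
import torch
import torch.nn as nn

class ImplicitPoseDeformer(nn.Module):
    def __init__(self, pose_dim=128, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + pose_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),  # per-point displacement
        )

    def forward(self, points, pose_code):
        # points: (N, 3) surface points of the stylized character
        # pose_code: (pose_dim,) encoding of the target pose (assumed representation)
        pose = pose_code.unsqueeze(0).expand(points.shape[0], -1)
        return points + self.net(torch.cat([points, pose], dim=-1))

# Usage with dummy inputs:
points = torch.rand(1024, 3)        # sampled surface points
pose_code = torch.randn(128)        # target-pose encoding
deformed = ImplicitPoseDeformer()(points, pose_code)  # (1024, 3) deformed points
```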

    CASA: Category-agnostic Skeletal Animal Reconstruction

    Recovering the skeletal shape of an animal from a monocular video is a longstanding challenge. Prevailing animal reconstruction methods often adopt a control-point driven animation model and optimize bone transforms individually without considering skeletal topology, yielding unsatisfactory shape and articulation. In contrast, humans can easily infer the articulation structure of an unknown animal by associating it with a seen articulated character in their memory. Inspired by this fact, we present CASA, a novel Category-Agnostic Skeletal Animal reconstruction method consisting of two major components: a video-to-shape retrieval process and a neural inverse graphics framework. During inference, CASA first retrieves an articulated shape from a bank of 3D character assets, choosing the asset whose rendered images score highly against the input video according to a pretrained language-vision model. CASA then integrates the retrieved character into an inverse graphics framework and jointly infers the shape deformation, skeleton structure, and skinning weights through optimization. Experiments validate the efficacy of CASA regarding shape reconstruction and articulation. We further demonstrate that the resulting skeletal-animated characters can be used for re-animation. Comment: Accepted to NeurIPS 202
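    A rough sketch of the retrieval step: embed the video frames and each asset's renders with a pretrained vision-language image encoder, score every asset by frame-render similarity, and keep the best match. The encoder itself is not shown; the embedding dimension, the scoring rule, and all names below are assumptions for illustration, not the CASA implementation.

```python
# Sketch of vision-language retrieval of the best-matching character asset (assumed scoring rule).
import torch
import torch.nn.functional as F

def retrieve_best_asset(frame_embs, asset_render_embs):
    # frame_embs: (F, D) embeddings of the input video frames
    # asset_render_embs: list of (R_i, D) embeddings of each asset's renders
    scores = []
    for renders in asset_render_embs:
        sim = F.cosine_similarity(
            frame_embs.unsqueeze(1), renders.unsqueeze(0), dim=-1
        )  # (F, R_i) pairwise frame-render similarity
        scores.append(sim.max(dim=1).values.mean())  # best render per frame, averaged
    return int(torch.stack(scores).argmax())

# Usage with dummy, already-normalized embeddings (D = 512 is an assumption):
frames = F.normalize(torch.randn(8, 512), dim=-1)
assets = [F.normalize(torch.randn(4, 512), dim=-1) for _ in range(10)]
best_asset_index = retrieve_best_asset(frames, assets)
```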

    Coping with Data Scarcity in Deep Learning and Applications for Social Good

    Recent years have seen an extremely fast evolution of the Computer Vision and Machine Learning fields: several application domains benefit from the newly developed technologies, and industries are investing a growing amount of money in Artificial Intelligence. Convolutional Neural Networks and Deep Learning have substantially contributed to the rise and diffusion of AI-based solutions, creating the potential for many disruptive new businesses. The effectiveness of Deep Learning models is grounded in the availability of a huge amount of training data. Unfortunately, data collection and labeling are extremely expensive in terms of both time and cost; moreover, they frequently require the collaboration of domain experts. In the first part of the thesis, I will investigate some methods for reducing the cost of data acquisition for Deep Learning applications in the relatively constrained industrial scenarios related to visual inspection. I will primarily assess the effectiveness of Deep Neural Networks in comparison with several classical Machine Learning algorithms that require a smaller amount of training data. I will then introduce a hardware-based data augmentation approach, which leads to a considerable performance boost by taking advantage of a novel illumination setup designed for this purpose. Finally, I will investigate the situation in which acquiring a sufficient number of training samples is not possible, in particular the most extreme case: zero-shot learning (ZSL), the problem of multi-class classification when no training data is available for some of the classes. Visual features designed for image classification and trained offline have been shown to help ZSL generalize towards classes not seen during training. Nevertheless, I will show that recognition performance on unseen classes can be sharply improved by jointly learning ad hoc semantic embeddings (the pre-defined list of present and absent attributes that represents a class) and visual features, to increase the correlation between the two geometrical spaces and ease the metric learning process for ZSL. In the second part of the thesis, I will present some successful applications of state-of-the-art Computer Vision, Data Analysis and Artificial Intelligence methods. I will illustrate solutions developed during the 2020 Coronavirus Pandemic for controlling the disease evolution and for reducing virus spreading. I will describe the first publicly available dataset for the analysis of face-touching behavior, which we annotated and distributed, and I will illustrate an extensive evaluation of several computer vision methods applied to it. Moreover, I will describe the privacy-preserving solution we developed for estimating the "Social Distance" and its violations, given a single uncalibrated image in unconstrained scenarios. I will conclude the thesis with a Computer Vision solution developed in collaboration with the Egyptian Museum of Turin for digitally unwrapping mummies by analyzing their CT scans, to support archaeologists during mummy analysis and to avoid the devastating and irreversible process of physically unwrapping the bandages to remove amulets and jewels from the body.
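    The zero-shot learning part of the thesis can be illustrated with a standard attribute-compatibility formulation: visual features are projected into the semantic (attribute) space and a test image is assigned to the unseen class whose attribute vector it is most compatible with. This is a generic sketch of that idea, not the thesis implementation; feature and attribute dimensions are assumptions.

```python
# Sketch of attribute-based zero-shot classification via a learned compatibility (assumed setup).
import torch
import torch.nn as nn

class CompatibilityZSL(nn.Module):
    def __init__(self, feat_dim=2048, attr_dim=85):
        super().__init__()
        self.proj = nn.Linear(feat_dim, attr_dim, bias=False)  # learned visual-to-semantic embedding

    def forward(self, feats, class_attrs):
        # feats: (B, feat_dim) image features; class_attrs: (C, attr_dim) per-class attribute vectors
        emb = self.proj(feats)            # (B, attr_dim) projected features
        return emb @ class_attrs.t()      # (B, C) compatibility scores

# Usage: classify images into unseen classes using only their attribute lists.
model = CompatibilityZSL()
feats = torch.randn(4, 2048)              # off-the-shelf visual features
unseen_attrs = torch.rand(10, 85)         # attribute vectors of 10 unseen classes
pred = model(feats, unseen_attrs).argmax(dim=1)  # predicted unseen class per image
```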

    Toward General-Purpose Robots via Foundation Models: A Survey and Meta-Analysis

    Building general-purpose robots that can operate seamlessly in any environment, with any object, and use various skills to complete diverse tasks has been a long-standing goal in Artificial Intelligence. Unfortunately, most existing robotic systems remain constrained: designed for specific tasks, trained on specific datasets, and deployed within specific environments. These systems usually require extensively labeled data, rely on task-specific models, have numerous generalization issues when deployed in real-world scenarios, and struggle to remain robust to distribution shifts. Motivated by the impressive open-set performance and content generation capabilities of web-scale, large-capacity pre-trained models (i.e., foundation models) in research fields such as Natural Language Processing (NLP) and Computer Vision (CV), we devote this survey to exploring (i) how these existing foundation models from NLP and CV can be applied to the field of robotics, and (ii) what a robotics-specific foundation model would look like. We begin by providing an overview of what constitutes a conventional robotic system and the fundamental barriers to making it universally applicable. Next, we establish a taxonomy to discuss current work exploring ways to leverage existing foundation models for robotics and develop ones catered to robotics. Finally, we discuss key challenges and promising future directions in using foundation models for enabling general-purpose robotic systems. We encourage readers to view our living GitHub repository of resources, including papers reviewed in this survey as well as related projects and repositories for developing foundation models for robotics.

    Training Physics-based Controllers for Articulated Characters with Deep Reinforcement Learning

    This thesis discusses two different applications of machine learning techniques for training coordinated motion controllers for arbitrary characters in the absence of motion capture data. The methods highlight the usefulness of physical simulation for generating synthetic, generic motion data that can be used to learn various targeted skills. First, we present an unsupervised method for learning locomotion skills in virtual characters from a low-dimensional latent space which captures the coordination between multiple joints. We use a technique called motor babble, wherein a character interacts with its environment by actuating its joints through uncoordinated, low-level (motor) excitation, resulting in a corpus of motion data from which a manifold latent space can be extracted. Using reinforcement learning, we then train the character to learn locomotion (such as walking or running) in the low-dimensional latent space instead of the full-dimensional joint action space. The thesis also presents an end-to-end automated framework for training physics-based characters to rhythmically dance to user-input songs. A generative adversarial network (GAN) architecture is proposed that learns to generate physically stable dance moves through repeated interactions with the environment. These moves are then used to construct a dance network that can be used for choreography. Using deep reinforcement learning (DRL), the character is then trained to perform these moves, without losing balance and rhythm, in the presence of physical forces such as gravity and friction.
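    The motor-babble idea above can be sketched as follows: excite the joints of a simulated character with uncoordinated random actions, collect the resulting joint states, and extract a low-dimensional basis in which a reinforcement-learning policy can later act. The thesis learns a manifold latent space; the sketch substitutes plain PCA, and the toy simulator, dimensions, and names are assumptions.

```python
# Sketch of motor babbling followed by latent-space extraction (PCA stands in for the learned manifold).
import numpy as np

def motor_babble(step_fn, n_joints, n_steps, scale=0.3, seed=0):
    # step_fn(action) -> joint_state; stands in for one physics-simulator step
    rng = np.random.default_rng(seed)
    states = []
    for _ in range(n_steps):
        action = rng.normal(0.0, scale, size=n_joints)  # uncoordinated low-level excitation
        states.append(step_fn(action))
    return np.stack(states)                              # (n_steps, n_joints) motion corpus

def fit_latent_space(states, k=6):
    # PCA of the babbling corpus: the top-k components capture joint coordination
    centered = states - states.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[:k]                                         # (k, n_joints) latent basis

class FakeSim:
    # Toy stand-in for a physics simulator: joints lag the commanded excitation.
    def __init__(self, n_joints, alpha=0.8):
        self.state = np.zeros(n_joints)
        self.alpha = alpha
    def step(self, action):
        self.state = self.alpha * self.state + (1 - self.alpha) * action
        return self.state.copy()

sim = FakeSim(n_joints=12)
basis = fit_latent_space(motor_babble(sim.step, n_joints=12, n_steps=2000))
# A policy would then output k latent coordinates z, decoded to joint targets as basis.T @ z.
```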

    Problems in the study of brain evolution and cognition


    Receding-horizon motion planning of quadrupedal robot locomotion

    Quadrupedal robots are designed to offer efficient and robust mobility on uneven terrain. This thesis investigates combining numerical optimization and machine learning methods to achieve interpretable kinodynamic planning of natural and agile locomotion. The proposed algorithm, called Receding-Horizon Experience-Controlled Adaptive Legged Locomotion (RHECALL), uses nonlinear programming (NLP) with learned initialization to produce long-horizon, high-fidelity, terrain-aware, whole-body trajectories. RHECALL has been implemented and validated on the ANYbotics ANYmal B and C quadrupeds on complex terrain. The proposed optimal control problem formulation uses the single-rigid-body dynamics (SRBD) model and adopts a direct collocation transcription method, which enables the discovery of aperiodic contact sequences. To generate reliable trajectories, we propose fast-to-compute analytical costs that leverage the discretization and terrain-dependent kinematic constraints. To extend the formulation to receding-horizon planning, we propose a segmentation approach with asynchronous centre of mass (COM) and end-effector timings and a heuristic initialization scheme which reuses the previous solution. We integrate real-time 2.5D perception data for online foothold selection. Additionally, we demonstrate that a learned stability criterion can be incorporated into the planning framework. To accelerate the convergence of the NLP solver to locally optimal solutions, we propose data-driven initialization schemes trained using supervised and unsupervised behaviour cloning. We demonstrate the computational advantage of the schemes and the ability to leverage the latent space to reconstruct dynamic segments of plans that are several seconds long. Finally, in order to apply RHECALL to quadrupeds with significant leg inertias, we derive the more accurate lump leg single-rigid-body dynamics (LL-SRBD) and centroidal dynamics (CD) models and their first-order partial derivatives. To facilitate intuitive usage of costs, constraints and initializations, we parameterize these models by Euclidean-space variables. We show that the models can shape the rotational inertia of the robot, which offers the potential to further improve agility.
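    The single-rigid-body dynamics (SRBD) model mentioned above can be sketched as the defect constraints a transcription of the optimal control problem would drive to zero at each knot point: integrate COM position and velocity under the sum of contact forces and gravity, and integrate the angular velocity under the torque the contact forces produce about the COM. The explicit-Euler defects below stand in for a full direct-collocation scheme; the gyroscopic term is omitted, and all symbols and numbers are illustrative assumptions, not the RHECALL formulation.

```python
# Simplified SRBD integration defects for one interval of a transcribed trajectory (assumed model).
import numpy as np

GRAVITY = np.array([0.0, 0.0, -9.81])

def srbd_defects(com, vel, omega, com_n, vel_n, omega_n, forces, feet,
                 mass, inertia, dt):
    # com, vel, omega: (3,) COM position, velocity, body angular velocity at knot k
    # *_n: the same quantities at knot k+1; forces, feet: (n_contacts, 3)
    acc = forces.sum(axis=0) / mass + GRAVITY
    torque = np.cross(feet - com, forces).sum(axis=0)   # moment of contact forces about the COM
    ang_acc = np.linalg.solve(inertia, torque)           # gyroscopic term omitted for brevity
    return np.concatenate([
        com_n - (com + vel * dt),                        # position integration defect
        vel_n - (vel + acc * dt),                        # linear-velocity integration defect
        omega_n - (omega + ang_acc * dt),                # angular-velocity integration defect
    ])

# One knot of a two-foot stance with dummy numbers:
feet = np.array([[0.3, 0.2, 0.0], [-0.3, -0.2, 0.0]])
forces = np.array([[0.0, 0.0, 150.0], [0.0, 0.0, 150.0]])
z = np.zeros(3)
defects = srbd_defects(np.array([0.0, 0.0, 0.45]), np.array([0.2, 0.0, 0.0]), z,
                       np.array([0.01, 0.0, 0.45]), np.array([0.2, 0.0, 0.0]), z,
                       forces, feet, mass=30.0,
                       inertia=np.diag([0.5, 1.0, 1.2]), dt=0.05)
```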