
    Total variation denoising in ℓ¹ anisotropy

    We aim to construct solutions to the minimization problem for the variant of the Rudin-Osher-Fatemi denoising model with rectilinear anisotropy, and to the gradient flow of its underlying anisotropic total variation functional. We consider a naturally defined class of functions piecewise constant on rectangles (PCR). This class forms a strictly dense subset of the space of functions of bounded variation with an anisotropic norm. The main result shows that if the given noisy image is a PCR function, then the solutions to both problems also have this property. For PCR data, finding the solution reduces to a finite algorithm. We discuss some implications of this result; for instance, we use it to prove that continuity is preserved by both problems. Comment: 34 pages, 9 figures.
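
    For reference, a minimal LaTeX sketch of the two problems the abstract refers to, assuming the standard ROF conventions (f the noisy image, Ω the image domain, λ > 0 a weight; none of these symbols are fixed by the abstract itself):

        \min_{u \in BV(\Omega)} \; \frac{1}{2} \int_\Omega (u - f)^2 \, dx
          + \lambda \int_\Omega \left( |\partial_{x_1} u| + |\partial_{x_2} u| \right)

        % gradient flow of the anisotropic total variation functional
        \partial_t u \in -\partial \mathrm{TV}_{\ell^1}(u),
        \qquad \mathrm{TV}_{\ell^1}(u) = \int_\Omega \left( |\partial_{x_1} u| + |\partial_{x_2} u| \right)

    Here the ℓ¹ (rectilinear) anisotropy replaces the Euclidean norm of the gradient used in the usual isotropic total variation.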

    Uplifting Leadership: How Organizations, Teams and Communities Raise Performance, by Andy Hargreaves, Alan Boyle, & Alma Harris

    UPLIFTING LEADERSHIP: HOW ORGANIZATIONS, TEAMS, AND COMMUNITIES RAISE PERFORMANCE. By Andy Hargreaves, Alan Boyle, & Alma Harris. San Francisco, CA: Jossey-Bass (2014). Hardcover, 240 pages. The purpose of the book, as stated in the introduction, is to explain and demonstrate the concept of “uplift” that encapsulates the authors’ research conducted in “fifteen organizations and systems in business, sports, and public education from 2007 to 2012” (p. 2) that experienced success in spite of disadvantages and challenges. Each of the six chapters enlists experts in the particular field addressed in that chapter to assist in communicating these major ideas, all of which have practical implications for leadership.

    Learning to Reconstruct People in Clothing from a Single RGB Camera

    We present a learning-based model to infer the personalized 3D shape of people from a few frames (1-8) of a monocular video in which the person is moving, in less than 10 seconds and with a reconstruction accuracy of 5mm. Our model learns to predict the parameters of a statistical body model and instance displacements that add clothing and hair to the shape. The model achieves fast and accurate predictions based on two key design choices. First, by predicting shape in a canonical T-pose space, the network learns to encode the images of the person into pose-invariant latent codes, where the information is fused. Second, based on the observation that feed-forward predictions are fast but do not always align with the input images, we predict using both bottom-up and top-down streams (one per view), allowing information to flow in both directions. Learning relies only on synthetic 3D data. Once learned, the model can take a variable number of frames as input, and can reconstruct shapes even from a single image with an accuracy of 6mm. Results on 3 different datasets demonstrate the efficacy and accuracy of our approach.
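
    To make the two design choices concrete, here is a minimal, hypothetical PyTorch sketch of the overall pattern (per-frame encoding into pose-invariant latent codes, fusion over a variable number of frames, and heads for body-model shape parameters and per-vertex displacements). All names, layer sizes, and the simple mean fusion are illustrative assumptions, not the authors' architecture:

        import torch
        import torch.nn as nn

        class ShapeFromFrames(nn.Module):
            def __init__(self, latent_dim=256, num_betas=10, num_vertices=6890):
                super().__init__()
                # per-frame image encoder producing one latent code per frame
                self.encoder = nn.Sequential(
                    nn.Conv2d(3, 32, 4, stride=2), nn.ReLU(),
                    nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                    nn.Linear(64, latent_dim),
                )
                self.shape_head = nn.Linear(latent_dim, num_betas)        # body shape parameters
                self.disp_head = nn.Linear(latent_dim, num_vertices * 3)  # clothing/hair offsets

            def forward(self, frames):              # frames: (N, 3, H, W), N in 1..8
                codes = self.encoder(frames)        # (N, latent_dim), pose-invariant codes
                fused = codes.mean(dim=0)           # fuse information across frames
                betas = self.shape_head(fused)
                displacements = self.disp_head(fused).view(-1, 3)
                return betas, displacements

        model = ShapeFromFrames()
        betas, disp = model(torch.randn(4, 3, 128, 128))  # e.g. 4 input frames

    Because the fusion is a mean over per-frame codes, the same model accepts anywhere from one to eight frames, mirroring the variable-input property described above.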

    Notes per a una valoració del lèxic de Ramon Llull

    Abstract not available.

    El Dr. Joaquim Carreras i Artau

    Abstract not available.

    Cartes de tema lul·lià d'En Mateu Obrador a Mossèn Alcover

    Abstract not available.

    Multi-Garment Net: Learning to Dress 3D People from Images

    We present Multi-Garment Network (MGN), a method to predict body shape and clothing, layered on top of the SMPL model, from a few frames (1-8) of a video. Several experiments demonstrate that this representation allows a higher level of control than single-mesh or voxel representations of shape. Our model can predict garment geometry, relate it to the body shape, and transfer it to new body shapes and poses. To train MGN, we leverage a digital wardrobe containing 712 digital garments in correspondence, obtained with a novel method that registers a set of clothing templates to a dataset of real 3D scans of people in different clothing and poses. Garments from the digital wardrobe, or predicted by MGN, can be used to dress any body shape in arbitrary poses. We will make publicly available the digital wardrobe, the MGN model, and code to dress SMPL with the garments.
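
    As a toy illustration of the "dress any body shape" idea, the sketch below transfers a garment between two bodies by keeping each garment vertex's offset from an associated body vertex. The vertex association and all names are hypothetical simplifications of the layered representation described above:

        import numpy as np

        def transfer_garment(body_src, body_tgt, garment_verts, vert_ids):
            # garment expressed relative to its source body ...
            offsets = garment_verts - body_src[vert_ids]
            # ... and re-applied on the corresponding target-body vertices
            return body_tgt[vert_ids] + offsets

        body_a = np.random.rand(6890, 3)              # source body vertices (SMPL has 6890)
        body_b = np.random.rand(6890, 3)              # target body vertices
        ids = np.random.randint(0, 6890, size=2000)   # garment-to-body vertex association
        garment = body_a[ids] + 0.01                  # toy garment slightly offset from the body
        garment_on_b = transfer_garment(body_a, body_b, garment, ids)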

    LoopReg: Self-supervised Learning of Implicit Surface Correspondences, Pose and Shape for 3D Human Mesh Registration

    We address the problem of fitting 3D human models to 3D scans of dressed humans. Classical methods optimize both the data-to-model correspondences and the human model parameters (pose and shape), but are reliable only when initialized close to the solution. Some methods initialize the optimization with fully supervised correspondence predictors, but such pipelines are not differentiable end-to-end and can only process a single scan at a time. Our main contribution is LoopReg, an end-to-end learning framework to register a corpus of scans to a common 3D human model. The key idea is to create a self-supervised loop. A backward map, parameterized by a neural network, predicts the correspondence from every scan point to the surface of the human model. A forward map, parameterized by the human model, transforms the corresponding points back to the scan based on the model parameters (pose and shape), thus closing the loop. Formulating this closed loop is not straightforward, because it is not trivial to force the output of the network to be on the surface of the human model: outside this surface the human model is not even defined. To this end, we propose two key innovations. First, we define the canonical surface implicitly as the zero level set of a distance field in ℝ³, which, in contrast to more common UV parameterizations, does not require cutting the surface, has no discontinuities, and does not induce distortion. Second, we diffuse the human model to the 3D domain ℝ³. This allows us to map the network predictions forward even when they slightly deviate from the zero level set. Results demonstrate that we can train LoopReg mainly self-supervised: following a supervised warm-start, the model becomes increasingly accurate as additional unlabelled raw scans are processed. Our code and pre-trained models can be downloaded for research.
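
    The self-supervised loop can be sketched in a few lines. In this hypothetical toy version the forward map is a single learnable rigid transform standing in for the full articulated human model (pose, shape, skinning), and the backward map is a small MLP; only the loop structure, not the model itself, matches the description above:

        import torch
        import torch.nn as nn

        # backward map: scan point -> point in the (diffused) canonical model space
        backward_map = nn.Sequential(
            nn.Linear(3, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, 3),
        )
        # toy "model parameters": one rigid transform instead of pose and shape
        rotation = nn.Parameter(torch.eye(3))
        translation = nn.Parameter(torch.zeros(3))

        def loop_loss(scan_points):
            canonical = backward_map(scan_points)            # backward: scan -> canonical
            reposed = canonical @ rotation.T + translation   # forward: canonical -> scan
            return ((reposed - scan_points) ** 2).sum(-1).mean()

        scan = torch.randn(1024, 3)
        loss = loop_loss(scan)
        loss.backward()   # gradients reach both the network and the model parameters

    No correspondence labels appear in the loss: supervision comes entirely from closing the loop, which is what allows training on unlabelled raw scans after the warm-start.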

    TOCH: Spatio-Temporal Object Correspondence to Hand for Motion Refinement

    We present TOCH, a method for refining incorrect 3D hand-object interaction sequences using a data prior. Existing hand trackers, especially those that rely on very few cameras, often produce visually unrealistic results with hand-object intersections or missing contacts. Although correcting such errors requires reasoning about the temporal aspects of interaction, most previous work focuses on static grasps and contacts. The core of our method is TOCH fields, a novel spatio-temporal representation for modeling correspondences between hands and objects during interaction. The key component is a point-wise, object-centric representation that encodes the hand position relative to the object. Leveraging this novel representation, we learn a latent manifold of plausible TOCH fields with a temporal denoising auto-encoder. Experiments demonstrate that TOCH outperforms state-of-the-art (SOTA) 3D hand-object interaction models, which are limited to static grasps and contacts. More importantly, our method produces smooth interactions even before and after contact. Using a single trained TOCH model, we quantitatively and qualitatively demonstrate its usefulness for 1) correcting erroneous reconstruction results from off-the-shelf RGB/RGB-D hand-object reconstruction methods, 2) denoising, and 3) grasp transfer across objects. We will release our code and trained model on our project page at http://virtualhumans.mpi-inf.mpg.de/toch
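
    As a rough illustration of the object-centric idea behind TOCH fields, the sketch below computes, for each sampled object point, the offset to the nearest hand point and its distance. The real representation is temporal and correspondence-based; this toy, with all names hypothetical, only shows "hand position encoded relative to the object":

        import numpy as np

        def toch_like_field(object_points, hand_points):
            # pairwise distances between object and hand points: (num_obj, num_hand)
            diff = object_points[:, None, :] - hand_points[None, :, :]
            dist = np.linalg.norm(diff, axis=-1)
            nearest = dist.argmin(axis=1)                   # closest hand point per object point
            offsets = hand_points[nearest] - object_points  # hand relative to object
            return np.concatenate([offsets, dist.min(axis=1, keepdims=True)], axis=1)

        obj = np.random.rand(512, 3)     # sampled object surface points
        hand = np.random.rand(778, 3)    # e.g. MANO hand meshes have 778 vertices
        field = toch_like_field(obj, hand)   # (512, 4): offset xyz + distance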
