
    Ultra high resolution stepper motors design, development, performance and application

    The design and development of stepper motors with steps in the 10 arc sec to 2 arc min range are described. Some of the problem areas, e.g. rotor suspension, tribology and environmental conditions, are covered. A summary of achieved test results, and of the employment of these motors in different mechanisms already developed and tested, is presented to give some examples of the possible uses of this device. Adaptations to military and commercial requirements are proposed and show the wide range of possible applications.

    Photoacoustic detection of stimulated emission pumping in p-difluorobenzene

    Photoacoustic detection has been used to monitor a stimulated emission pumping process in p-difluorobenzene. Using the Ã ¹B₂ᵤ 5¹ state as an intermediate, several vibrational levels of the ground electronic state were populated. The photoacoustic method is an attractive alternative to other detection techniques because of its sensitivity, its simplicity, and its ability to differentiate between stimulated emission pumping and excited-state absorption. An example of excited-state absorption in aniline is given.

    Tex2Shape: Detailed Full Human Body Geometry From a Single Image

    We present a simple yet effective method to infer detailed full human body shape from only a single photograph. Our model infers full-body shape, including face, hair, and clothing with wrinkles, at interactive frame rates. Results feature details even on parts that are occluded in the input image. Our main idea is to turn shape regression into an aligned image-to-image translation problem. The input to our method is a partial texture map of the visible region, obtained from off-the-shelf methods. From this partial texture, we estimate detailed normal and vector displacement maps, which can be applied to a low-resolution smooth body model to add detail and clothing. Despite being trained purely on synthetic data, our model generalizes well to real-world photographs. Numerous results demonstrate the versatility and robustness of our method.
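The last step described above, applying an estimated vector displacement map to a smooth body mesh, can be sketched as follows. This is a hedged illustration, not the authors' code: the function and array names are hypothetical, and a real pipeline would interpolate texels bilinearly rather than snapping to the nearest one.

```python
import numpy as np

def apply_displacement(vertices, uv_coords, displacement_map):
    """Offset each mesh vertex by the 3D displacement stored at its UV texel.

    vertices:         (N, 3) smooth body-model vertex positions
    uv_coords:        (N, 2) per-vertex texture coordinates in [0, 1]
    displacement_map: (H, W, 3) vector displacement texture
    """
    h, w, _ = displacement_map.shape
    # Nearest-texel lookup (a real pipeline would interpolate).
    u = np.clip((uv_coords[:, 0] * (w - 1)).round().astype(int), 0, w - 1)
    v = np.clip((uv_coords[:, 1] * (h - 1)).round().astype(int), 0, h - 1)
    offsets = displacement_map[v, u]          # (N, 3) per-vertex offsets
    return vertices + offsets

# Toy example: two vertices, a 2x2 displacement texture.
verts = np.zeros((2, 3))
uvs = np.array([[0.0, 0.0], [1.0, 1.0]])
disp = np.zeros((2, 2, 3))
disp[0, 0] = [0.0, 0.0, 0.01]                 # push the first vertex along z
out = apply_displacement(verts, uvs, disp)
```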

    A map on the space of rational functions

    We describe dynamical properties of a map $\mathfrak{F}$ defined on the space of rational functions. The fixed points of $\mathfrak{F}$ are classified, and the long-time behavior of a subclass is described in terms of Eulerian polynomials.
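As background for the last sentence, the Eulerian polynomials mentioned there can be generated from the classical triangle recurrence A(n, k) = (k+1) A(n-1, k) + (n-k) A(n-1, k-1). The sketch below is standard combinatorics, not the paper's map $\mathfrak{F}$.

```python
def eulerian_row(n):
    """Coefficients of the n-th Eulerian polynomial A_n(t), built row by
    row from A(n, k) = (k+1) A(n-1, k) + (n-k) A(n-1, k-1)."""
    row = [1]                                  # A_1(t) = 1
    for m in range(2, n + 1):
        prev = row
        row = []
        for k in range(m):
            left = (k + 1) * prev[k] if k < len(prev) else 0
            right = (m - k) * prev[k - 1] if 0 <= k - 1 < len(prev) else 0
            row.append(left + right)
    return row

print(eulerian_row(4))  # [1, 11, 11, 1]
```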

    Development during adolescence of the neural processing of social emotion

    In this fMRI study, we investigated the development between adolescence and adulthood of the neural processing of social emotions. Unlike basic emotions (such as disgust and fear), social emotions (such as guilt and embarrassment) require the representation of another's mental states. Nineteen adolescents (10–18 years) and 10 adults (22–32 years) were scanned while thinking about scenarios featuring either social or basic emotions. In both age groups, the anterior rostral medial prefrontal cortex (MPFC) was activated during social versus basic emotion. However, adolescents activated a lateral part of the MPFC for social versus basic emotions, whereas adults did not. Relative to adolescents, adults showed higher activity in the left temporal pole for social versus basic emotions. These results show that, although the MPFC is activated during social emotion in both adults and adolescents, adolescents recruit anterior (MPFC) regions more than do adults, and adults recruit posterior (temporal) regions more than do adolescents.

    Learning to Reconstruct People in Clothing from a Single RGB Camera

    We present a learning-based model to infer the personalized 3D shape of people from a few frames (1-8) of a monocular video in which the person is moving, in less than 10 seconds and with a reconstruction accuracy of 5 mm. Our model learns to predict the parameters of a statistical body model and instance displacements that add clothing and hair to the shape. The model achieves fast and accurate predictions based on two key design choices. First, by predicting shape in a canonical T-pose space, the network learns to encode the images of the person into pose-invariant latent codes, where the information is fused. Second, based on the observation that feed-forward predictions are fast but do not always align with the input images, we predict using both bottom-up and top-down streams (one per view), allowing information to flow in both directions. Learning relies only on synthetic 3D data. Once learned, the model can take a variable number of frames as input and is able to reconstruct shapes even from a single image, with an accuracy of 6 mm. Results on three different datasets demonstrate the efficacy and accuracy of our approach.
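The first design choice, fusing pose-invariant per-frame latent codes, is often implemented as simple mean pooling; the sketch below assumes that scheme (the paper's exact fusion operator may differ) and uses hypothetical names.

```python
import numpy as np

def fuse_frame_codes(codes):
    """Fuse per-frame latent codes from the pose-invariant (canonical
    T-pose) encoding by averaging, so any number of frames can be used."""
    codes = np.asarray(codes)        # (num_frames, latent_dim)
    return codes.mean(axis=0)        # (latent_dim,)

# Two hypothetical 4-dimensional frame codes.
codes = [np.ones(4), 3.0 * np.ones(4)]
fused = fuse_frame_codes(codes)      # averages to [2., 2., 2., 2.]
```

Mean pooling is order-invariant, which matches the requirement that the model accept a variable number of frames.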

    In the Wild Human Pose Estimation Using Explicit 2D Features and Intermediate 3D Representations

    Convolutional Neural Network based approaches for monocular 3D human pose estimation usually require a large amount of training images with 3D pose annotations. While it is feasible to provide 2D joint annotations for large corpora of in-the-wild images with humans, providing accurate 3D annotations to such in-the-wild corpora is hardly feasible in practice. Most existing 3D labelled data sets are either synthetically created or feature in-studio images. 3D pose estimation algorithms trained on such data often have limited ability to generalize to real world scene diversity. We therefore propose a new deep learning based method for monocular 3D human pose estimation that shows high accuracy and generalizes better to in-the-wild scenes. It has a network architecture that comprises a new disentangled hidden space encoding of explicit 2D and 3D features, and uses supervision by a new learned projection model from predicted 3D pose. Our algorithm can be jointly trained on image data with 3D labels and image data with only 2D labels. It achieves state-of-the-art accuracy on challenging in-the-wild data.
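Joint training on 2D-only labels usually hinges on a reprojection loss. The sketch below uses a fixed weak-perspective camera as a stand-in for the paper's learned projection model; the names and the camera model are assumptions, not the authors' implementation.

```python
import numpy as np

def weak_perspective_project(joints3d, scale, trans):
    """Project (J, 3) joints to 2D: drop depth, then scale and
    translate in the image plane (weak-perspective camera)."""
    return scale * joints3d[:, :2] + trans

def reprojection_loss(joints3d, joints2d, scale, trans):
    """Mean squared 2D error between projected 3D joints and 2D
    annotations; this is what lets 2D-only images supervise 3D."""
    proj = weak_perspective_project(joints3d, scale, trans)
    return float(np.mean((proj - joints2d) ** 2))

j3d = np.array([[0.0, 0.0, 1.0], [1.0, 2.0, 1.5]])
j2d = np.array([[0.0, 0.0], [2.0, 4.0]])
loss = reprojection_loss(j3d, j2d, scale=2.0, trans=np.zeros(2))  # 0.0
```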

    Derivation of an integral of Boros and Moll via convolution of Student t-densities

    We show that the evaluation of an integral considered by Boros and Moll is a special case of a convolution result about Student t-densities obtained by the authors in 2008.
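For context (hedged, since the abstract does not restate them), the quartic integral commonly associated with Boros and Moll and the Student t-density with $\nu$ degrees of freedom are usually written as:

```latex
% Boros-Moll quartic integral (m a nonnegative integer, a > -1):
N_{0,4}(a;m) \;=\; \int_{0}^{\infty} \frac{dx}{\left(x^{4}+2ax^{2}+1\right)^{m+1}}

% Student t-density with \nu degrees of freedom:
f_{\nu}(x) \;=\; \frac{\Gamma\!\left(\tfrac{\nu+1}{2}\right)}
                      {\sqrt{\nu\pi}\,\Gamma\!\left(\tfrac{\nu}{2}\right)}
                 \left(1+\frac{x^{2}}{\nu}\right)^{-\frac{\nu+1}{2}}
```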

    Learning to Dress 3D People in Generative Clothing

    Three-dimensional human body models are widely used in the analysis of human pose and motion. Existing models, however, are learned from minimally-clothed 3D scans and thus do not generalize to the complexity of dressed people in common images and videos. Additionally, current models lack the expressive power needed to represent the complex non-linear geometry of pose-dependent clothing shapes. To address this, we learn a generative 3D mesh model of clothed people from 3D scans with varying pose and clothing. Specifically, we train a conditional Mesh-VAE-GAN to learn the clothing deformation from the SMPL body model, making clothing an additional term in SMPL. Our model is conditioned on both pose and clothing type, giving the ability to draw samples of clothing to dress different body shapes in a variety of styles and poses. To preserve wrinkle detail, our Mesh-VAE-GAN extends patchwise discriminators to 3D meshes. Our model, named CAPE, represents global shape and fine local structure, effectively extending the SMPL body model to clothing. To our knowledge, this is the first generative model that directly dresses 3D human body meshes and generalizes to different poses. The model, code and data are available for research purposes at https://cape.is.tue.mpg.de (CVPR 2020 camera-ready).
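The phrase "making clothing an additional term in SMPL" suggests clothing represented as additive per-vertex displacements on the body mesh. The toy sketch below follows that reading; the names and shapes are illustrative, not the released CAPE API.

```python
import numpy as np

def dress_body(body_vertices, clothing_displacements):
    """CAPE-style additive clothing: the generative model predicts
    per-vertex offsets that are added to the minimally-clothed body mesh."""
    assert body_vertices.shape == clothing_displacements.shape
    return body_vertices + clothing_displacements

body = np.zeros((6890, 3))              # SMPL meshes have 6890 vertices
disp = np.full((6890, 3), 0.002)        # a uniform 2 mm "garment" offset
clothed = dress_body(body, disp)
```

Keeping clothing additive means the same displacement field can, in principle, be sampled and re-applied to different body shapes and poses, which is what the conditioning described above exploits.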