
    Functional requirements for the man-vehicle systems research facility

    The NASA Ames Research Center proposed a man-vehicle systems research facility to support the flight simulation studies needed to identify and correct the sources of human error associated with current and future air carrier operations. The organization of the research facility is reviewed, and functional requirements and related priorities for the facility are recommended based on a review of potentially critical operational scenarios. Requirements are included for the experimenter's simulation control and data acquisition functions, as well as for the visual field, motion, sound, computation, crew station, and intercommunications subsystems. The related issues of functional fidelity and level of simulation are addressed, and specific criteria for quantitative assessment of various aspects of fidelity are offered. Recommendations for facility integration, checkout, and staffing are included.

    Status of NASA/Army rotorcraft research and development piloted flight simulation

    The status of the major NASA/Army capabilities in piloted rotorcraft flight simulation is reviewed. The requirements for research and development piloted simulation are addressed, as well as the capabilities and technologies that are currently available or are being developed by NASA and the Army at Ames. The application of revolutionary advances (in visual scene, electronic cockpits, motion, and modelling of interactive mission environments and/or vehicle systems) to the NASA/Army facilities is also addressed. Particular attention is devoted to the major advances made in integrating these individual capabilities into a fully integrated simulation environment that has been or is being applied to new rotorcraft mission requirements. The specific simulators discussed are the Vertical Motion Simulator and the Crew Station Research and Development Facility.

    HeadOn: Real-time Reenactment of Human Portrait Videos

    We propose HeadOn, the first real-time source-to-target reenactment approach for complete human portrait videos that enables transfer of torso and head motion, facial expression, and eye gaze. Given a short RGB-D video of the target actor, we automatically construct a personalized geometry proxy that embeds a parametric head, eye, and kinematic torso model. A novel real-time reenactment algorithm employs this proxy to photo-realistically map the captured motion from the source actor to the target actor. On top of the coarse geometric proxy, we propose a video-based rendering technique that composites the modified target portrait video via view- and pose-dependent texturing, and creates photo-realistic imagery of the target actor under novel torso and head poses, facial expressions, and gaze directions. To this end, we propose robust tracking of the face and torso of the source actor. We extensively evaluate our approach and show significant improvements in enabling much greater flexibility in creating realistic reenacted output videos.
    Comment: Video: https://www.youtube.com/watch?v=7Dg49wv2c_g Presented at Siggraph'18
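
    The abstract's video-based rendering relies on view- and pose-dependent texturing, i.e. blending texture samples from several captured views according to how well each view matches the novel pose. Below is a minimal sketch of that idea; the function names and the exponential weighting scheme are illustrative assumptions, not the authors' code.

    ```python
    import numpy as np

    def view_blend_weights(novel_dir, capture_dirs, sharpness=8.0):
        """Weight each captured view by how closely its camera direction
        matches the novel viewing direction (higher = more influence)."""
        novel_dir = novel_dir / np.linalg.norm(novel_dir)
        dirs = capture_dirs / np.linalg.norm(capture_dirs, axis=1, keepdims=True)
        cos = dirs @ novel_dir                # cosine similarity per captured view
        w = np.exp(sharpness * (cos - 1.0))   # peaked around the best-aligned view
        return w / w.sum()

    def composite(texels, weights):
        """Blend per-view texture samples (K, H, W, 3) with per-view weights (K,)."""
        return np.tensordot(weights, texels, axes=1)

    # Usage: three captured views; the blend favours the best-aligned one.
    views = np.array([[0.0, 0.0, 1.0], [0.3, 0.0, 0.95], [-0.3, 0.0, 0.95]])
    texels = np.random.rand(3, 4, 4, 3)       # stand-in for sampled textures
    img = composite(texels, view_blend_weights(np.array([0.1, 0.0, 1.0]), views))
    ```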

    PMG: online generation of high-quality molecular pictures and storyboarded animations

    The Protein Movie Generator (PMG) is an online service able to generate high-quality pictures and animations for which one can then define simple storyboards. The PMG can therefore efficiently illustrate concepts such as molecular motion or the formation/dissociation of complexes. Emphasis is put on the simplicity of animation generation. Rendering is achieved using Dino coupled to POV-Ray. In order to produce highly informative images, the PMG can use different molecular representations at the same time to highlight particular molecular features. Moreover, sophisticated rendering concepts, including scene definition as well as modeling of light and materials, are available. The PMG accepts Protein Data Bank (PDB) files as input, which may include a series of models or molecular dynamics trajectories, and produces images or movies in various formats. The PMG can be accessed at http://bioserv.rpbs.jussieu.fr/PMG.html
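
    The storyboard concept amounts to keyframed scenes interpolated into per-frame camera settings. The sketch below illustrates that idea under stated assumptions: the Scene fields and the linear interpolation are hypothetical, since the PMG's actual storyboard format is not given in the abstract.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Scene:
        time: float        # seconds into the animation (assumed field)
        zoom: float        # camera distance to the molecule (assumed field)
        rotation: float    # rotation about the vertical axis, degrees (assumed)

    def camera_at(storyboard, t):
        """Linearly interpolate zoom/rotation between the bracketing scenes."""
        for a, b in zip(storyboard, storyboard[1:]):
            if a.time <= t <= b.time:
                f = (t - a.time) / (b.time - a.time)
                return (a.zoom + f * (b.zoom - a.zoom),
                        a.rotation + f * (b.rotation - a.rotation))
        return storyboard[-1].zoom, storyboard[-1].rotation

    # Usage: a two-scene storyboard that rotates 180 degrees while zooming in.
    board = [Scene(0.0, 50.0, 0.0), Scene(10.0, 20.0, 180.0)]
    print(camera_at(board, 2.5))   # -> (42.5, 45.0)
    ```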

    Image-Based Rendering Of Real Environments For Virtual Reality


    Inner Space Preserving Generative Pose Machine

    Image-based generative methods, such as generative adversarial networks (GANs), have already been able to generate realistic images with considerable control over context, especially when they are conditioned. However, most successful frameworks share a common procedure that performs an image-to-image translation with the pose of the figures in the image untouched. When the objective is to repose a figure in an image while preserving the rest of the image, the state of the art mainly assumes a single rigid body with a simple background and limited pose shift, which can hardly be extended to images under normal settings. In this paper, we introduce an image "inner space" preserving model that assigns an interpretable low-dimensional pose descriptor (LDPD) to an articulated figure in the image. Figure reposing is then generated by passing the LDPD and the original image through multi-stage augmented hourglass networks in a conditional GAN structure, called the inner space preserving generative pose machine (ISP-GPM). We evaluated ISP-GPM on reposing human figures, which are highly articulated with versatile variations. Testing a state-of-the-art pose estimator on our reposed dataset gave an accuracy over 80% on the PCK0.5 metric. The results also elucidated that our ISP-GPM is able to preserve the background with high accuracy while reasonably recovering the area blocked by the figure to be reposed.
    Comment: http://www.northeastern.edu/ostadabbas/2018/07/23/inner-space-preserving-generative-pose-machine
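
    The PCK@0.5 figure cited above has a concrete definition: a predicted keypoint counts as correct when it lies within 0.5 times a per-person reference length of the ground truth. A minimal sketch follows; the choice of torso length as the reference is an assumption, since normalization conventions vary across papers.

    ```python
    import numpy as np

    def pck(pred, gt, ref_len, alpha=0.5):
        """pred, gt: (N, K, 2) keypoint arrays; ref_len: (N,) reference lengths.
        Returns the fraction of keypoints within alpha * ref_len of ground truth."""
        dist = np.linalg.norm(pred - gt, axis=-1)      # (N, K) per-joint errors
        correct = dist <= alpha * ref_len[:, None]     # threshold per person
        return correct.mean()

    # Usage: two people, three keypoints each, every prediction off by 1 pixel.
    gt = np.zeros((2, 3, 2))
    pred = gt + np.array([1.0, 0.0])
    ref = np.array([4.0, 1.0])        # torso lengths: thresholds 2.0 and 0.5
    print(pck(pred, gt, ref))         # -> 0.5 (only the first person passes)
    ```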