
    The 1998 Annual Meeting of the Mid-Western Educational Research Association

    Conference Highlights from the Program Chair

    Testimony of Chai R. Feldblum

    Testimony of Chai R. Feldblum for "What an Aging Workforce Can Teach Us About Workplace Flexibility," July 18, 2005

    First Annual Report to the Avi Chai Foundation on the Progress of Its Decision to Spend Down

    First Annual Report to the AVI CHAI Foundation after its decision to spend down

    Off-vertical rotation - A convenient precise means of exposing the passive human subject to a rotating linear acceleration vector

    Disturbances of vestibular origin, including motion sickness, resulting from a rotating tilted chair

    Professional Responsibilities and Procedures Committee Minutes, September 17, 2007

    Approval of Minutes; Representation of Extension and RCDE on Faculty Senate; Academic Freedom and Professional Responsibility 403.1 and 403.3.1; Reasons for Non-Renewal 407.7.2; Ad-hoc committee to review code; Faculty Senate Supernumerary 402.3.1; Senate Standing Committees 402.12.1(2)(b); PRPC Vice Chair

    Professional Responsibilities and Procedures Committee Minutes, April 14, 2008

    Outside Reviewers; New Chair

    StyleVideoGAN: A Temporal Generative Model using a Pretrained StyleGAN

    Generative adversarial networks (GANs) continue to produce advances in the visual quality of still images, as well as in the learning of temporal correlations. However, few works manage to combine these two capabilities for the synthesis of video content: most methods require an extensive training dataset in order to learn temporal correlations, while being rather limited in the resolution and visual quality of their output frames. In this paper, we present a novel approach to the video synthesis problem that greatly improves visual quality and drastically reduces the amount of training data and resources necessary for generating video content. Our formulation separates the spatial domain, in which individual frames are synthesized, from the temporal domain, in which motion is generated. For the spatial domain we make use of a pre-trained StyleGAN network, whose latent space allows control over the appearance of the objects it was trained for. The expressive power of this model allows us to embed our training videos in the StyleGAN latent space. Our temporal architecture is then trained not on sequences of RGB frames, but on sequences of StyleGAN latent codes. The advantageous properties of the StyleGAN space simplify the discovery of temporal correlations. We demonstrate that it suffices to train our temporal architecture on only 10 minutes of footage of one subject for about 6 hours. After training, our model can generate new portrait videos not only for the training subject, but also for any random subject which can be embedded in the StyleGAN space.
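    As a rough illustration of the pipeline the abstract describes (embed frames into a pretrained StyleGAN latent space, train a temporal model on latent-code sequences, then decode generated latents back into frames), here is a minimal PyTorch sketch. The Encoder, StyleGANGenerator, and TemporalModel classes, the 64x64 resolution, the 512-dimensional latent space, and the LSTM choice are illustrative assumptions, not the paper's actual architecture or API.

    # Minimal sketch, assuming placeholder modules stand in for a real pretrained
    # StyleGAN generator and an inversion (encoder) network.
    import torch
    import torch.nn as nn

    LATENT_DIM = 512  # typical StyleGAN w-space dimensionality (assumption)

    class Encoder(nn.Module):
        """Placeholder for a StyleGAN inversion network mapping frames to latent codes."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, LATENT_DIM))
        def forward(self, frames):             # frames: (B, T, 3, 64, 64)
            b, t = frames.shape[:2]
            return self.net(frames.reshape(b * t, -1)).reshape(b, t, LATENT_DIM)

    class StyleGANGenerator(nn.Module):
        """Placeholder for a frozen, pretrained StyleGAN synthesis network."""
        def __init__(self):
            super().__init__()
            self.net = nn.Linear(LATENT_DIM, 3 * 64 * 64)
        def forward(self, latents):            # latents: (B, T, LATENT_DIM)
            b, t = latents.shape[:2]
            return self.net(latents.reshape(b * t, -1)).reshape(b, t, 3, 64, 64)

    class TemporalModel(nn.Module):
        """Autoregressive model over latent-code sequences (an LSTM stands in here)."""
        def __init__(self):
            super().__init__()
            self.rnn = nn.LSTM(LATENT_DIM, 1024, batch_first=True)
            self.head = nn.Linear(1024, LATENT_DIM)
        def forward(self, latent_seq):
            hidden, _ = self.rnn(latent_seq)
            return self.head(hidden)           # predict the next latent code per step

    encoder, generator, temporal = Encoder(), StyleGANGenerator(), TemporalModel()
    opt = torch.optim.Adam(temporal.parameters(), lr=1e-4)

    video = torch.rand(1, 16, 3, 64, 64)       # toy stand-in for training footage
    with torch.no_grad():
        latents = encoder(video)               # embed frames into the latent space

    # Train only the temporal model: predict latent code t+1 from codes up to t.
    opt.zero_grad()
    pred = temporal(latents[:, :-1])
    loss = nn.functional.mse_loss(pred, latents[:, 1:])
    loss.backward()
    opt.step()

    # Generation: roll the temporal model forward, then decode latents to frames.
    with torch.no_grad():
        seq = [latents[:, :1]]
        for _ in range(15):
            seq.append(temporal(torch.cat(seq, dim=1))[:, -1:])
        new_frames = generator(torch.cat(seq, dim=1))
    print(new_frames.shape)                    # torch.Size([1, 16, 3, 64, 64])

    In the actual method, only the temporal model would be optimized while the pretrained StyleGAN stays frozen, which is what keeps the data and compute requirements small.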