
    VRENT: a new interactive & immersive rental space experience

    According to the National Center for Education Statistics, there are more than 18.2 million college students in the United States, including both out-of-state and international students (Drew, 2013). While finding a place to live is a difficult process for anyone, it is especially painful for international students. Housing agencies, in turn, are tasked with providing accurate information to potential tenants from different cultural backgrounds, which can be challenging. The purpose of this project was to explore ways to enhance the house-hunting experience so that target audiences can easily find a rental space and gain a better understanding of its scope. The project also explored how design and technology can address painful problems users face in their daily lives. This thesis covered various aspects of the design process, from research and analysis to user surveys, as well as forms of design ideation such as wire-framing and storyboarding. The project aimed to integrate user experience (UX) methods, user interface (UI) design, and virtual reality technology to create an interactive and immersive experience.

    Learning to Generate Time-Lapse Videos Using Multi-Stage Dynamic Generative Adversarial Networks

    Given a photo taken outdoors, can we predict the immediate future, e.g., how the clouds will move across the sky? We address this problem by presenting a generative adversarial network (GAN) based two-stage approach to generating realistic, high-resolution time-lapse videos. Given the first frame, our model learns to generate long-term future frames. The first stage generates videos with realistic content for each frame. The second stage refines the generated video from the first stage by enforcing it to be closer to real videos with regard to motion dynamics. To further encourage vivid motion in the final generated video, a Gram matrix is employed to model the motion more precisely. We build a large-scale time-lapse dataset and test our approach on this new dataset. Using our model, we are able to generate realistic videos at up to 128×128 resolution for 32 frames. Quantitative and qualitative experimental results demonstrate the superiority of our model over state-of-the-art models.
    Comment: To appear in Proceedings of CVPR 201
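    The Gram matrix mentioned in the abstract is a standard way to capture pairwise correlations between feature channels, independent of spatial position (the paper applies it to motion features; the exact feature extractor is not specified here). A minimal sketch of the computation, assuming a single (C, H, W) feature map and a simple normalization choice:

    ```python
    import numpy as np

    def gram_matrix(features):
        """Compute the Gram matrix of a (C, H, W) feature map.

        Flattens the spatial dimensions and takes channel-wise inner
        products, yielding a (C, C) matrix of feature correlations.
        The normalization constant is an illustrative choice.
        """
        c, h, w = features.shape
        f = features.reshape(c, h * w)      # flatten spatial dims
        return f @ f.T / (c * h * w)        # normalized inner products

    # Example: correlations between 4 channels of an 8x8 feature map
    feats = np.random.rand(4, 8, 8)
    g = gram_matrix(feats)
    print(g.shape)  # (4, 4)
    ```

    The resulting matrix is symmetric by construction; matching Gram matrices between generated and real videos encourages similar feature statistics without requiring pixel-level alignment.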

    SuperChat: Dialogue Generation by Transfer Learning from Vision to Language using Two-dimensional Word Embedding and Pretrained ImageNet CNN Models

    The recent Super Characters method, which uses two-dimensional word embedding, achieved state-of-the-art results in text classification tasks, showcasing the promise of this new approach. This paper borrows the idea of the Super Characters method and two-dimensional embedding and proposes a method for generating conversational responses for open-domain dialogues. Experimental results on a public dataset show that the proposed SuperChat method generates high-quality responses. An interactive demo will be shown at the workshop.
    Comment: 5 pages, 2 figures, 1 table. Accepted by the CVPR 2019 Language and Vision Workshop
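    The core idea of two-dimensional word embedding is to render text as an image so that a pretrained image CNN can process it directly. The abstract does not give the rendering details, so the following is only an illustrative sketch, assuming a grid layout of characters on a 224×224 canvas (the canvas size, grid size, and function name are assumptions, not the authors' specification):

    ```python
    import numpy as np
    from PIL import Image, ImageDraw

    def text_to_image(text, size=224, grid=8):
        """Render up to grid*grid characters of `text` onto a square
        grayscale canvas, one character per grid cell, so the result
        can be fed to an ImageNet-style CNN."""
        img = Image.new("L", (size, size), color=255)   # white canvas
        draw = ImageDraw.Draw(img)
        cell = size // grid
        for i, ch in enumerate(text[: grid * grid]):
            row, col = divmod(i, grid)
            # draw each character in black at its cell's top-left corner
            draw.text((col * cell + 2, row * cell + 2), ch, fill=0)
        return img

    img = text_to_image("hello, how are you today?")
    arr = np.asarray(img)
    print(arr.shape)  # (224, 224)
    ```

    In the actual method, such an image would be passed through a CNN pretrained on ImageNet; here the sketch only shows the text-to-image step.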