
    Towards music-driven procedural animation

    We present our approach to developing a framework for creating music-driven procedural animations. We intend to explore the potential that elementary musical features hold for driving engaging audio-visual animations. To do so, we bring forward an integrated environment where real-time musical information is available and may be flexibly used to manipulate different aspects of a dynamic animation. In general terms, our approach consists of developing a virtual scene populated by controllable entities, termed actors, and using scripting to define how these actors' behaviour or appearance changes in response to musical information. Scripting operates by establishing associations, or mappings, between musical events, such as the ringing of notes or chords, or sound information, such as the frequency spectrum, and changes in the animation. The scenario we chose to explore comprises two main actors: trees and wind. Trees grow in an iterative process, and may develop leaves, while swaying in response to the wind field. The wind is represented as a vector field whose configuration and strength can be altered in real time. Scripting then allows these changes to be synchronised with musical events, providing a natural sense of harmony with the accompanying music. With real-time access to musical information, as well as control over a reactive animation, we believe we have taken a first step towards exploring a novel interdisciplinary concept with vast expressive potential. This work has been supported by national funds through FCT – Fundação para a Ciência e Tecnologia within the Project Scope: UID/CEC/00319/2019.
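    As an illustration of the scripting idea described above, the following is a minimal sketch of how musical events could be mapped onto scene actors such as the wind field and a tree. The classes and handler API are hypothetical stand-ins, not the authors' actual framework.

```python
# Minimal sketch of the event-to-animation mapping idea (hypothetical API,
# not the framework's actual scripting interface).
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class NoteEvent:
    pitch: int       # MIDI pitch of the ringing note
    velocity: float  # normalised loudness, 0.0 .. 1.0


@dataclass
class WindField:
    strength: float = 0.0


@dataclass
class Tree:
    growth: float = 0.0


class Script:
    """Associates musical events with changes to scene actors."""

    def __init__(self) -> None:
        self.handlers: List[Callable[[NoteEvent], None]] = []

    def on_note(self, handler: Callable[[NoteEvent], None]) -> None:
        self.handlers.append(handler)

    def dispatch(self, event: NoteEvent) -> None:
        for handler in self.handlers:
            handler(event)


wind = WindField()
tree = Tree()
script = Script()

# Louder notes push the wind field harder; low notes also advance tree growth.
script.on_note(lambda e: setattr(wind, "strength", wind.strength + e.velocity))
script.on_note(lambda e: setattr(tree, "growth", tree.growth + 0.1) if e.pitch < 60 else None)

# Simulated stream of incoming musical events
for event in (NoteEvent(48, 0.8), NoteEvent(72, 0.4)):
    script.dispatch(event)

print(wind.strength, tree.growth)  # wind strengthened by both notes, tree grew on the low one
```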

    Generation and Rendering of Interactive Ground Vegetation for Real-Time Testing and Validation of Computer Vision Algorithms

    During the development of new algorithms for computer vision applications, testing and evaluation in real outdoor environments is time-consuming and often difficult to realize. The use of artificial testing environments is therefore a flexible and cost-efficient alternative. As a result, the development of new techniques for simulating natural, dynamic environments is essential for real-time virtual reality applications, commonly known as Virtual Testbeds. Since the first basic use of Virtual Testbeds several years ago, the image quality of virtual environments has almost reached photorealism, even in real time, thanks to new rendering approaches and the increasing processing power of current graphics hardware. Because of this, Virtual Testbeds can now be applied in areas such as computer vision that strongly rely on realistic scene representations. The realistic rendering of natural outdoor scenes has become increasingly important in many application areas, but computer-simulated scenes often differ considerably from real-world environments, especially with regard to interactive ground vegetation. In this article, we introduce a novel ground vegetation rendering approach that is capable of generating large scenes with realistic appearance and excellent performance. Our approach features wind animation as well as object-to-grass interaction, and delivers realistic-looking grass and shrubs at all distances and from all viewing angles. This greatly improves immersion and acceptance, especially in virtual training applications. The rendered results also fulfill important requirements for the computer vision aspect, such as a plausible geometric representation of the vegetation and its consistency throughout the entire simulation. Feature detection and matching algorithms are applied to our approach in localization scenarios for mobile robots in natural outdoor environments. We show how the quality of computer vision algorithms is influenced by highly detailed, dynamic environments, such as unstructured, real-world outdoor scenes with wind and object-to-vegetation interaction.
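    As a rough illustration of the two animation ingredients named above (wind sway and object-to-grass interaction), the sketch below displaces a single grass blade tip on the CPU. The function name, constants, and the sinusoidal wind model are assumptions made for illustration; the paper's approach is a real-time GPU rendering technique.

```python
# Illustrative CPU sketch of per-blade grass displacement combining wind sway
# with object-to-grass interaction; names and constants are assumptions,
# not the authors' GPU implementation.
import math


def blade_tip_offset(blade_pos, t, wind_dir=(1.0, 0.0), wind_strength=0.3,
                     obstacle_pos=None, obstacle_radius=0.5):
    """Return a 2D offset applied to the tip of a grass blade at time t."""
    # Wind: sinusoidal sway with a per-blade phase so the field does not move in lockstep.
    phase = (blade_pos[0] * 12.9898 + blade_pos[1] * 78.233) % (2.0 * math.pi)
    sway = wind_strength * math.sin(2.0 * t + phase)
    ox, oy = wind_dir[0] * sway, wind_dir[1] * sway

    # Interaction: bend the blade away from a nearby object (e.g. a robot wheel).
    if obstacle_pos is not None:
        dx, dy = blade_pos[0] - obstacle_pos[0], blade_pos[1] - obstacle_pos[1]
        dist = math.hypot(dx, dy)
        if 1e-6 < dist < obstacle_radius:
            push = (obstacle_radius - dist) / obstacle_radius  # stronger when closer
            ox += dx / dist * push
            oy += dy / dist * push
    return ox, oy


# Example: a blade 0.3 m from an obstacle, sampled at t = 1.0 s
print(blade_tip_offset((2.0, 3.0), t=1.0, obstacle_pos=(2.3, 3.0)))
```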

    Rigid Body Dynamics of Ship Hulls via Hydrostatic Forces Calculated From FFT Ocean Height Fields

    An art tool is presented that simulates the motion of ships in response to hydrostatic forces on the hull, computed from a height-field representation of an ocean surface. Additional forces, modeled as a PID controller, help steer the ship and stabilize its motion. The algorithms described can be applied to 3D models of arbitrary shape composed of polygons, floating on height fields generated from a variety of spectra. The performance of the method is demonstrated on simple and complex ships, and on ocean surfaces with flat, medium, and large wave heights.
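    The core hydrostatic step can be sketched as follows: for each hull polygon, sample the ocean height at its centroid and apply an upward force proportional to the submerged depth and the polygon's area. The height-field function and constants below are placeholders rather than the tool's actual implementation, and the real method would also account for the polygon normal and the PID steering forces.

```python
# Hedged sketch of the hydrostatic idea: per-triangle buoyancy from an ocean
# height field. The height-field function is a stand-in, not an FFT ocean.
import math

RHO_WATER = 1025.0  # kg/m^3
G = 9.81            # m/s^2


def ocean_height(x, z, t):
    """Stand-in for an FFT-synthesised height field (here: two summed waves)."""
    return 0.5 * math.sin(0.3 * x + 0.8 * t) + 0.2 * math.sin(0.7 * z - 1.1 * t)


def hydrostatic_force(triangle, t):
    """triangle: three (x, y, z) hull vertices in world space, y pointing up."""
    (x0, y0, z0), (x1, y1, z1), (x2, y2, z2) = triangle
    cx, cy, cz = (x0 + x1 + x2) / 3.0, (y0 + y1 + y2) / 3.0, (z0 + z1 + z2) / 3.0
    depth = ocean_height(cx, cz, t) - cy      # positive when the centroid is submerged
    if depth <= 0.0:
        return (0.0, 0.0, 0.0)                # triangle is above the surface
    # Triangle area from the cross product of two edges
    ux, uy, uz = x1 - x0, y1 - y0, z1 - z0
    vx, vy, vz = x2 - x0, y2 - y0, z2 - z0
    nx, ny, nz = uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx
    area = 0.5 * math.sqrt(nx * nx + ny * ny + nz * nz)
    # Hydrostatic pressure times area, applied upward (simplified: ignores the normal direction)
    return (0.0, RHO_WATER * G * depth * area, 0.0)


print(hydrostatic_force(((0, -0.5, 0), (1, -0.5, 0), (0, -0.5, 1)), t=0.0))
```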

    A Survey of Applications and Human Motion Recognition with Microsoft Kinect

    Microsoft Kinect, a low-cost motion sensing device, enables users to interact with computers or game consoles naturally through gestures and spoken commands, without any other peripheral equipment. As such, it has attracted intense interest in research and development on Kinect technology. In this paper, we present a comprehensive survey of Kinect applications and the latest research and development on motion recognition using data captured by the Kinect sensor. On the applications front, we review the use of Kinect technology in a variety of areas, including healthcare, education and performing arts, robotics, sign language recognition, retail services, workplace safety training, and 3D reconstruction. On the technology front, we provide an overview of the main features of both versions of the Kinect sensor together with the depth sensing technologies they use, and review the literature on human motion recognition techniques used in Kinect applications. We provide a classification of motion recognition techniques to highlight the different approaches used in human motion recognition. Furthermore, we compile a list of publicly available Kinect datasets. These datasets are valuable resources for researchers investigating better methods for human motion recognition and lower-level computer vision tasks such as segmentation, object detection and human pose estimation.
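    As a toy illustration of the kind of skeleton-based motion recognition technique the survey classifies, the rule below flags a raised hand from joint positions. The joint names and threshold are assumptions for illustration and do not reflect the Kinect SDK's API.

```python
# Illustrative rule-based pose check on skeleton joint positions; joint names
# and the margin are assumptions, not the Kinect SDK.

def hand_raised(skeleton, margin=0.10):
    """skeleton: dict of joint name -> (x, y, z) in metres, y pointing up."""
    right_hand_y = skeleton["hand_right"][1]
    head_y = skeleton["head"][1]
    return right_hand_y > head_y + margin


frame = {
    "head":       (0.00, 1.60, 2.00),
    "hand_right": (0.25, 1.85, 1.90),
}
print(hand_raised(frame))  # True: the right hand is clearly above the head
```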

    Metamorphosis of recognition: representation of a fictional city

    Every person has their own city, an imaginary city that combines details of real or fictional places, of dream worlds or just personal fantasies. It's our own way of fitting into reality. We absorb a variety of stimuli and keep what has truly left an echo in our souls — cultural habits, exotic experiences, people, feelings, memories, favourite routes or just small architectural details… By connecting our impressions, we build our world. It no longer belongs to just one place or country; it is global, mixed from different cultural aspects. With every new contact we naturally change, and our minds change as well. This work is my imaginary city. With this project I want to show parts of my imagination and encourage people to share their «real» cities, which, from my point of view, will help us understand each other better. Parts of the fictional city were scattered across the streets of Lisbon, as this place has a magical influence on me. The fictional city is represented through several pieces, made using a variety of visual techniques. At the end of this project, I found that the metamorphosis of recognition can indeed make us aware of familiar aspects of our everyday lives when we look at — or travel through, even if only visually — a fictional city.

    Creating Bio-adaptive Visual Cues for a Social Virtual Reality Meditation Environment

    This thesis examines the design and implementation of adaptive visual cues for a social virtual reality meditation environment. The system described here adapts to users' bio- and neurofeedback and uses that data to drive visual cues that convey physiological and affective states during meditation exercises, supporting two simultaneous users. The thesis documents the development of different kinds of visual cues and attempts to pinpoint best practices, design principles and pitfalls for visual cue development in this context. Also examined are the criteria for selecting appropriate visual cues and how to convey information about biophysical synchronization between users. The visual cues examined here are created specifically for a virtual reality environment, which differs as a platform from traditional two-dimensional content such as user interfaces on a computer display. Points of interest are how to embed the visual cues in the virtual reality environment so that the user experience remains immersive and the cues convey information correctly and intuitively.
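    A minimal sketch of the bio-adaptive mapping described above, assuming heart rate as the physiological signal: each user's reading is normalised into a cue intensity, and the gap between the two users yields a shared synchrony cue. All names, ranges, and the linear mapping are illustrative assumptions rather than the thesis implementation.

```python
# Minimal sketch of driving visual cues from two users' physiological signals.
# Signal choice, ranges, and cue names are illustrative assumptions.

def normalise(value, low, high):
    """Clamp a raw reading into a 0..1 range for driving a visual parameter."""
    return max(0.0, min(1.0, (value - low) / (high - low)))


def visual_cues(hr_user_a, hr_user_b, rest_hr=60.0, max_hr=100.0):
    """Map two users' heart rates to cue intensities and a synchrony value."""
    a = normalise(hr_user_a, rest_hr, max_hr)
    b = normalise(hr_user_b, rest_hr, max_hr)
    # Synchrony: 1.0 when both users are in the same state, falling off with the gap.
    synchrony = 1.0 - abs(a - b)
    return {
        "glow_a": a,                # e.g. brightness of user A's aura
        "glow_b": b,                # e.g. brightness of user B's aura
        "shared_pulse": synchrony,  # e.g. amplitude of a shared ambient pulse
    }


# Two users with fairly similar arousal levels
print(visual_cues(hr_user_a=68.0, hr_user_b=72.0))
```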