
    Displacement constraints for interactive modeling and animation of articulated structures

    This paper presents an integrated set of methods for the automatic construction and interactive animation of solid systems that satisfy specified geometric constraints. Displacement constraints enable the user to design articulated bodies with various degrees of freedom in rotation or in translation at hinges, and to restrict the scope of the movement at will. The graph of constrained objects may contain closed loops. The animation is achieved by decoupling the free motion of each solid component from the action of the constraints. We do this with iterative tunings in displacements. The method is currently implemented in a dynamically based animation system and takes the physical parameters into account while re-establishing the constraints. In particular, first-order momenta are preserved during this process. The approach would be easy to extend to modeling systems or animation modules without a physical model, simply by allowing the user to control more parameters. (Source: Springer)
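
    The core idea of re-establishing constraints through iterative displacement corrections can be illustrated with a minimal sketch, not the paper's actual algorithm: a ball-joint constraint between two rigid bodies is enforced by mass-weighted position corrections, a weighting that leaves the pair's centre of mass, and hence its first-order momentum, unchanged. All names and parameters below are hypothetical.

    import numpy as np

    def project_ball_joint(x_a, x_b, m_a, m_b, r_a, r_b, iters=10):
        """Iteratively pull two anchor points together with mass-weighted
        displacement corrections; the weighting keeps the pair's centre of
        mass (hence its first-order momentum) unchanged.

        x_a, x_b : positions of the two bodies (3-vectors)
        r_a, r_b : anchor offsets of the joint, in world frame
        """
        w_a, w_b = 1.0 / m_a, 1.0 / m_b
        for _ in range(iters):
            gap = (x_a + r_a) - (x_b + r_b)        # constraint violation
            if np.linalg.norm(gap) < 1e-9:
                break
            # distribute the correction inversely to mass
            x_a = x_a - (w_a / (w_a + w_b)) * gap
            x_b = x_b + (w_b / (w_a + w_b)) * gap
        return x_a, x_b

    # toy usage: two anchor points one unit apart that should coincide
    xa, xb = project_ball_joint(np.zeros(3), np.array([1.0, 0.0, 0.0]),
                                m_a=2.0, m_b=1.0,
                                r_a=np.zeros(3), r_b=np.zeros(3))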

    Implicit Patches: An Optimized and Powerful Ray Intersection Algorithm for Implicit Surfaces

    This paper describes a new and optimized direct ray tracing algorithm for complex implicit surfaces generated by skeletons. Its main originality is its ability to avoid unwanted blending between parts of the same object, thanks to the partitioning of the surface into several pieces, the so-called Implicit Patches. Moreover, these patches make it possible to exploit the properties of local field functions and to speed up the rendering considerably. Extensive statistics on the various proposed optimizations are given and discussed. The implementation in the public-domain software RayShade is sketched.
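
    A generic sketch of direct ray intersection with a skeleton-generated implicit surface may help picture the setting; it is not the Implicit Patches algorithm itself, and all names and values are illustrative. The field is a sum of compactly supported contributions from point skeletons; the ray is marched until the field crosses the iso-value, and the hit is refined by bisection.

    import numpy as np

    def field(p, skeleton, radius=1.0):
        """Sum of compactly supported falloffs around point skeletons."""
        f = 0.0
        for s in skeleton:
            d2 = np.dot(p - s, p - s) / (radius * radius)
            if d2 < 1.0:
                f += (1.0 - d2) ** 3          # Wyvill-style soft-object falloff
        return f

    def intersect(origin, direction, skeleton, iso=0.5,
                  t_max=10.0, step=0.05, bisect_iters=30):
        """March along the ray until field - iso changes sign, then bisect."""
        d = direction / np.linalg.norm(direction)
        prev_t, prev_v = 0.0, field(origin, skeleton) - iso
        t = step
        while t <= t_max:
            v = field(origin + t * d, skeleton) - iso
            if prev_v < 0.0 <= v or prev_v >= 0.0 > v:     # sign change: surface crossed
                lo, hi = prev_t, t
                for _ in range(bisect_iters):
                    mid = 0.5 * (lo + hi)
                    if (field(origin + mid * d, skeleton) - iso) * prev_v > 0.0:
                        lo = mid
                    else:
                        hi = mid
                return origin + 0.5 * (lo + hi) * d        # hit point
            prev_t, prev_v = t, v
            t += step
        return None                                        # no intersection

    # toy usage: a single blob at the origin, ray shot from x = -3 toward +x
    hit = intersect(np.array([-3.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]),
                    skeleton=[np.zeros(3)])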

    Reactive Balance Control in Immersive Visual Flows: 2D vs. 3D Virtual Stimuli

    Poster at the 14th Annual CyberTherapy and CyberPsychology Conference (http://online.liebertpub.com/toc/cpb/12/5). The aim is to study the effects of 2D vs. 3D visual inputs on balance control. Ten subjects were fully immersed in a virtual environment, using 10 different 2D/3D motion flow conditions. Analysis of visual and postural responses shows significant differences in reactivity in 3D versus 2D.
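
    The kind of postural analysis such a study relies on can be illustrated with a purely hypothetical sketch: a sway path length is computed per trial from 2D postural samples, and the 2D and 3D conditions are compared with a paired test. The data, sample counts, and names are invented for illustration and do not reflect the study's actual processing.

    import numpy as np
    from scipy import stats

    def sway_path_length(xy):
        """Total excursion of a 2D postural trajectory (N x 2 samples)."""
        return np.sum(np.linalg.norm(np.diff(xy, axis=0), axis=1))

    rng = np.random.default_rng(0)
    # hypothetical per-subject trajectories for one 2D and one 3D condition
    path_2d = [sway_path_length(rng.normal(scale=1.0, size=(600, 2))) for _ in range(10)]
    path_3d = [sway_path_length(rng.normal(scale=1.3, size=(600, 2))) for _ in range(10)]

    t, p = stats.ttest_rel(path_3d, path_2d)   # paired comparison across subjects
    print(f"paired t = {t:.2f}, p = {p:.3g}")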

    Survey of Branch Support Methods Demonstrates Accuracy, Power, and Robustness of Fast Likelihood-based Approximation Schemes

    Phylogenetic inference and evaluating support for inferred relationships are at the core of many studies testing evolutionary hypotheses. Despite the popularity of nonparametric bootstrap frequencies and Bayesian posterior probabilities, the interpretation of these measures of tree branch support remains a source of discussion. Furthermore, both methods are computationally expensive and become prohibitive for large data sets. Recent fast approximate likelihood-based measures of branch support (approximate likelihood ratio test [aLRT] and Shimodaira-Hasegawa [SH]-aLRT) provide a compelling alternative to these slower conventional methods, offering not only speed advantages but also excellent levels of accuracy and power. Here we propose an additional method: a Bayesian-like transformation of aLRT (aBayes). Considering both probabilistic and frequentist frameworks, we compare the performance of the three fast likelihood-based methods with the standard bootstrap (SBS), the Bayesian approach, and the recently introduced rapid bootstrap. Our simulations and real data analyses show that with moderate model violations, all tests are sufficiently accurate, but aLRT and aBayes offer the highest statistical power and are very fast. With severe model violations, aLRT, aBayes, and Bayesian posteriors can produce elevated false-positive rates. With data sets for which such violation can be detected, we recommend using SH-aLRT, the nonparametric version of aLRT based on a procedure similar to the Shimodaira-Hasegawa tree selection. In general, the SBS seems to be excessively conservative and is much slower than our approximate likelihood-based methods.
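
    As described in the abstract, aBayes is a Bayesian-like transformation of the aLRT: the likelihoods of the three possible NNI configurations around a branch are turned into a posterior-style support for the best one, assuming equal priors over the three configurations. A minimal sketch of that transformation, with made-up log-likelihood values and a log-sum-exp for numerical stability, could look like this.

    import math

    def abayes_support(lnl_best, lnl_alt1, lnl_alt2):
        """Bayesian-like transformation of the aLRT statistic: posterior-style
        weight of the best NNI configuration around a branch, assuming equal
        priors over the three configurations."""
        lnls = [lnl_best, lnl_alt1, lnl_alt2]
        m = max(lnls)                                   # log-sum-exp for stability
        denom = sum(math.exp(l - m) for l in lnls)
        return math.exp(lnl_best - m) / denom

    # made-up log-likelihoods of the three NNI arrangements around one branch
    print(abayes_support(-12345.6, -12350.2, -12351.9))   # ~0.99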

    Reactive Ocular and Balance Control in Immersive Visual Flows: 2D vs. 3D Virtual Stimuli

    Short paper, Section III (Original Research); also published in Advanced Technologies in the Behavioral, Social and Neurosciences (vol. 7): http://fr.scribd.com/doc/17030552. The control of gaze and balance strongly depends on the processing of visual cues. The aim of this study is to assess the effects of dynamic 2D and 3D visual inputs on oculomotor and balance reactive control. Thirteen subjects were immersed in a virtual environment using 10 different 2D/3D visual flow conditions. Analysis of eye movements and postural adjustments shows that 2D and 3D flows induce specific, measurable behavioral responses.
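
    A hypothetical sketch of one standard way to quantify the oculomotor response in such a setting is velocity-threshold saccade detection on an eye-position trace. The sampling rate, threshold, and trace below are illustrative assumptions, not the study's actual processing pipeline.

    import numpy as np

    def detect_saccades(gaze_deg, fs=500.0, vel_threshold=30.0):
        """Return (start, end) sample indices of saccades, found where the
        angular velocity of a 1D gaze trace (in degrees) exceeds a threshold
        in degrees per second."""
        velocity = np.abs(np.gradient(gaze_deg) * fs)     # deg/s
        fast = velocity > vel_threshold
        edges = np.diff(fast.astype(int))
        starts = np.where(edges == 1)[0] + 1
        ends = np.where(edges == -1)[0] + 1
        return list(zip(starts, ends))

    # illustrative trace: fixation, a 10-degree jump, then fixation again
    trace = np.concatenate([np.zeros(200), np.linspace(0, 10, 20), 10 * np.ones(200)])
    print(detect_saccades(trace))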

    A Collaborative Augmented Reality Environment: Manipulating Real and Virtual Objects

    Augmented reality is an indispensable tool for certain collaborative situations. We propose a multi-user, multi-device environment dedicated to collaborative applications. It relies on a modular, easily configurable architecture that allows a work session to be set up quickly. This platform is used to demonstrate very intuitive interaction techniques for manipulating real and virtual objects and for multi-scale visualization. We also introduce methods for dynamically adding real and virtual objects, which are then integrated into the same space. Based on the concepts of private views and access rights to scene elements, the environment manages multi-user interaction.
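
    The notions of private views and per-element access rights suggest a simple data structure, sketched hypothetically below: each scene element carries an access-control list, and a user's private view is the subset of elements that user is allowed to see. This illustrates the concept only and is not the platform's actual API.

    from dataclasses import dataclass, field
    from enum import Enum, auto

    class Right(Enum):
        VIEW = auto()
        MANIPULATE = auto()

    @dataclass
    class SceneObject:
        name: str
        is_virtual: bool                              # real (tracked) or virtual object
        rights: dict = field(default_factory=dict)    # user -> set of Rights

        def grant(self, user, *rights):
            self.rights.setdefault(user, set()).update(rights)

        def allows(self, user, right):
            return right in self.rights.get(user, set())

    def private_view(scene, user):
        """Subset of the shared scene that a given user is allowed to see."""
        return [obj for obj in scene if obj.allows(user, Right.VIEW)]

    # toy usage: one shared real object, one annotation private to 'alice'
    scene = [SceneObject("table", is_virtual=False), SceneObject("annotation", True)]
    scene[0].grant("alice", Right.VIEW, Right.MANIPULATE)
    scene[0].grant("bob", Right.VIEW)
    scene[1].grant("alice", Right.VIEW)
    print([o.name for o in private_view(scene, "bob")])   # ['table']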

    Soundtracks for Computer Animation: Sound Rendering in Dynamic Environments with Occlusions

    With the development of virtual reality systems and multi-modal simulations, soundtrack generation is becoming an important issue in computer graphics. In the context of computer-generated animation, many more parameters than the object geometry alone, as well as specific events, can be used to generate, control, and render a soundtrack that fits the object motions. Producing a convincing soundtrack involves rendering the interactions of sound with the dynamic environment, in particular sound reflections and sound absorption due to partial occlusions, which usually imply an unacceptable computational cost. We present an integrated approach to sound and image rendering in a computer animation context, which allows the animator to recreate the process of sound recording while the "physical effects" are automatically computed. Moreover, our sound rendering process efficiently combines a sound reflection model and an attenuation model for scattering/diffraction by partial occluders, through the use of graphics hardware allowing interactive computation rates.
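
    A much-simplified, hypothetical sketch of the two ingredients combined here: inverse-distance attenuation of the direct sound path, plus an extra attenuation factor when the path is partially occluded, estimated as the fraction of blocked sample rays toward a small region around the source. The paper's graphics-hardware visibility pass is replaced by a plain geometric test against spherical occluders.

    import numpy as np

    def ray_hits_sphere(origin, target, centre, radius):
        """True if the segment origin->target passes through the sphere."""
        d = target - origin
        seg_len = np.linalg.norm(d)
        d = d / seg_len
        t = np.clip(np.dot(centre - origin, d), 0.0, seg_len)   # closest point on segment
        return np.linalg.norm(origin + t * d - centre) < radius

    def received_level(listener, source, occluders, n_samples=64, rng=None):
        """Inverse-distance attenuation times the unoccluded fraction of
        jittered rays from the listener to a small region around the source."""
        rng = rng or np.random.default_rng(0)
        dist = np.linalg.norm(source - listener)
        clear = 0
        for _ in range(n_samples):
            jittered = source + rng.normal(scale=0.1, size=3)    # approximates partial occlusion
            if not any(ray_hits_sphere(listener, jittered, c, r) for c, r in occluders):
                clear += 1
        return (1.0 / max(dist, 1.0)) * (clear / n_samples)

    # toy scene: a small sphere partially blocking the direct path
    level = received_level(np.zeros(3), np.array([10.0, 0.0, 0.0]),
                           occluders=[(np.array([5.0, 0.1, 0.0]), 0.15)])
    print(level)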

    Augmented Reality and Collaborative Environments: A Survey

    Augmented Reality (AR) is generally defined as a branch derived from Virtual Reality. More generally, the concept of augmented reality covers a multidisciplinary approach aiming at a blend of the real and the virtual. The strong potential of this connection promises a suitable framework for 3D interaction and for collaborative applications. This article presents a survey of the main work carried out to date on imagery and AR, with particular attention to the collaborative setting.

    ECHO & NarSYS - An acoustic modeler and sound renderer

    International audienceComputer graphics simulations are now widely used in the field of environmental modelling, for example to evaluate the visual impact of an architectural project on its environment and interactively change its design. Realistic sound simulation is equally important for environmental modelling. At iMAGIS, a joint project of INRIA, CNRS, Joseph Fourier University and the Institut National Polytechnique of Grenoble, we are currently developing an integrated interactive acoustic modelling and sound rendering system for virtual environments. The aim of the system is to provide an interactive simulation of global sound propagation in a given environment and an integrated sound/computer graphics rendering to obtain computer simulated movies of the environment with realistic and coherent soundtracks
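
    One classic building block of global sound-propagation simulation is the image-source method: a first-order specular reflection off a plane is modeled by mirroring the source across that plane, so the reflected path's delay and attenuation follow from the image's distance to the listener. The sketch below is a generic illustration of that method, not ECHO/NarSYS internals; all names and numbers are illustrative.

    import numpy as np

    SPEED_OF_SOUND = 343.0   # m/s

    def mirror_across_plane(point, plane_point, plane_normal):
        """Reflect a point across the plane defined by (plane_point, plane_normal)."""
        n = plane_normal / np.linalg.norm(plane_normal)
        return point - 2.0 * np.dot(point - plane_point, n) * n

    def first_order_paths(source, listener, wall_point, wall_normal, absorption=0.1):
        """Direct path plus one specular reflection, each as (delay_s, gain)."""
        paths = []
        d_direct = np.linalg.norm(listener - source)
        paths.append((d_direct / SPEED_OF_SOUND, 1.0 / max(d_direct, 1.0)))
        image = mirror_across_plane(source, wall_point, wall_normal)
        d_refl = np.linalg.norm(listener - image)      # length of the reflected path
        paths.append((d_refl / SPEED_OF_SOUND, (1.0 - absorption) / max(d_refl, 1.0)))
        return paths

    # toy room: reflective floor at y = 0, source and listener 1.5 m above it
    for delay, gain in first_order_paths(np.array([0.0, 1.5, 0.0]),
                                         np.array([4.0, 1.5, 0.0]),
                                         wall_point=np.zeros(3),
                                         wall_normal=np.array([0.0, 1.0, 0.0])):
        print(f"delay {delay * 1000:.1f} ms, gain {gain:.3f}")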