
    View-dependent dynamics of articulated bodies

    Special Issue: CASA'2008. We propose a method for view-dependent simplification of articulated-body dynamics, which enables an automatic trade-off between visual precision and computational efficiency. We begin by discussing the problem of simplifying the simulation based on visual criteria, and show that it raises a number of challenging questions. We then focus on articulated-body dynamics simulation and propose a semi-predictive approach which relies on a combination of exact, a priori error-metric computations and visibility estimations. We suggest several variants of semi-predictive metrics based on hierarchical data structures and the use of graphics hardware, and discuss their relative merits in terms of computational efficiency and precision. Finally, we present several benchmarks and demonstrate how our view-dependent articulated-body dynamics method allows an animator (or a physics engine) to finely tune the visual quality and obtain potentially significant speedups during interactive or off-line simulations.
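    The semi-predictive metrics themselves are specific to the paper; as a minimal illustrative sketch of the general view-dependent idea (not the authors' method), the Python snippet below freezes joints whose a priori motion-error bound, projected to screen space and weighted by an estimated visibility, falls below a pixel tolerance. The error model, the visibility term, and all names are assumptions.

        from dataclasses import dataclass

        @dataclass
        class Joint:
            name: str
            error_bound: float   # assumed precomputed a priori bound on world-space error if frozen
            distance: float      # distance from the camera, in metres
            visibility: float    # estimated visible fraction of the joint's region, in [0, 1]

        def projected_error_px(joint, focal_px):
            """Project a world-space error bound to an approximate screen-space size in pixels."""
            if joint.distance <= 0.0:
                return float("inf")
            return focal_px * joint.error_bound / joint.distance

        def select_active_joints(joints, focal_px=800.0, pixel_tol=0.5):
            """Simulate joints whose visible projected error exceeds the tolerance; rigidify the rest."""
            active, frozen = [], []
            for j in joints:
                err = projected_error_px(j, focal_px) * j.visibility
                (active if err > pixel_tol else frozen).append(j.name)
            return active, frozen

        # A nearby, fully visible shoulder stays simulated; a distant, mostly occluded elbow is rigidified.
        joints = [Joint("shoulder", 0.02, 2.0, 1.0), Joint("elbow", 0.01, 20.0, 0.1)]
        print(select_active_joints(joints))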

    A Framework for Dynamic Terrain with Application in Off-road Ground Vehicle Simulations

    The dissertation develops a framework for the visualization of dynamic terrains for use in interactive, real-time 3D systems. Terrain visualization techniques may be classified as either static or dynamic. Static terrain solutions simulate rigid surface types exclusively, whereas dynamic solutions can also represent non-rigid surfaces. Systems that employ a static terrain approach lack realism due to their rigid nature. Disregarding the accurate representation of terrain surface interaction is often rationalized by the inherent difficulties associated with providing runtime dynamism. Nonetheless, dynamic terrain systems are the more correct solution because they allow the terrain database to be modified at run time for the purpose of deforming the surface. Many established techniques in terrain visualization rely on invalid assumptions and weak computational models that hinder the use of dynamic terrain. Moreover, many existing techniques do not exploit the capabilities offered by current computer hardware. In this research, we present a component framework for terrain visualization that is useful in research, entertainment, and simulation systems. In addition, we present a novel method for deforming the terrain that can be used in real-time, interactive systems. The development of a component framework unifies disparate works under a single architecture. The high-level nature of the framework makes it flexible and adaptable for developing a variety of systems, independent of the static or dynamic nature of the solution. Currently, there are only a handful of documented deformation techniques and, in particular, none make explicit use of graphics hardware. The approach developed by this research offloads extra work to the graphics processing unit in an effort to alleviate the overhead associated with deforming the terrain. Off-road ground vehicle simulation is used as an application domain to demonstrate the practical nature of the framework and the deformation technique. In order to realistically simulate terrain surface interactivity with the vehicle, the solution balances visual fidelity and speed. Accurately depicting terrain surface interactivity in off-road ground vehicle simulations improves visual realism, thereby increasing the significance and worth of the application. Systems in academia, government, and commercial institutes can make use of the research findings to achieve the real-time display of interactive terrain surfaces.
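    The dissertation performs its deformation work on the graphics hardware; the CPU-side Python sketch below only illustrates the underlying heightmap-stamping idea, under an assumed data layout and a made-up bowl-shaped displacement kernel, and is not the framework's actual technique.

        import numpy as np

        def deformation_kernel(radius_cells, depth):
            """Bowl-shaped displacement: deepest at the centre, zero at the rim."""
            ax = np.arange(-radius_cells, radius_cells + 1)
            xx, yy = np.meshgrid(ax, ax)
            r = np.sqrt(xx**2 + yy**2) / max(radius_cells, 1)
            return -depth * np.clip(1.0 - r**2, 0.0, None)

        def stamp(heightmap, row, col, kernel):
            """Add the kernel to the heightmap around (row, col), clipping at the terrain edges."""
            k = kernel.shape[0] // 2
            r0, r1 = max(row - k, 0), min(row + k + 1, heightmap.shape[0])
            c0, c1 = max(col - k, 0), min(col + k + 1, heightmap.shape[1])
            kr0, kc0 = r0 - (row - k), c0 - (col - k)
            heightmap[r0:r1, c0:c1] += kernel[kr0:kr0 + (r1 - r0), kc0:kc0 + (c1 - c0)]

        # A wheel contact pressing a 5 cm rut into a flat heightfield.
        terrain = np.zeros((1024, 1024), dtype=np.float32)
        stamp(terrain, row=512, col=512, kernel=deformation_kernel(radius_cells=4, depth=0.05))
        print(terrain.min())  # about -0.05 at the contact centre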

    A phase-indexed tracking controller for interactive physical simulation of animated characters

    Ph.D. thesis by Yeuhi Abe, Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2012. In this thesis, I describe a method of animating characters using physical simulation. The main advantage of this approach, versus traditional keyframing methods, is that the animated character can react to physical interactions. These reactions can be synthesized in real time in interactive applications, such as video games, where traditional approaches can only play back pre-recorded sequences. Physically simulating a character requires a controller, but creating a controller is known to be a challenging task, especially when animation concerns about the style of the motion are taken into consideration. This thesis describes a method of generating a controller automatically and quickly from an input motion. The stylistic aspects of the controller are particularly easy to control, as they are a direct result of the input motion. In order to generate a controller from an input motion, I address two main challenges. First, the input motion must be rectified (minimally modified) to ensure that it is physically plausible. Second, a feedback strategy must be formulated to generate control forces during the simulation. The motion rectification problem is addressed by formulating a fast trajectory optimization that solves for a reference motion; the reference minimally deviates from the input motion while satisfying physical constraints. The second challenge is addressed by employing a novel phase-indexed controller that uses a combination of local and global feedback strategies to keep the character tracking the reference motion. Beyond tracking just a single reference motion, I also demonstrate how variations of an input motion can be automatically synthesized using the same trajectory optimization method used in the rectification process, and how these variations can be sequenced, using optimal control, to accomplish various goals.
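    The thesis's controller combines local and global feedback around the rectified reference; the abstract does not give those details. As a hedged sketch of the phase-indexing idea alone, the snippet below looks up the reference pose by a phase variable rather than by wall-clock time and applies joint-space PD feedback toward it. The phase source, the gains, and the data layout are illustrative assumptions, not the thesis's formulation.

        import numpy as np

        class PhaseIndexedTracker:
            """Track a reference motion indexed by a phase variable in [0, 1) rather than by time."""

            def __init__(self, ref_phases, ref_poses, kp=300.0, kd=30.0):
                self.ref_phases = np.asarray(ref_phases)  # increasing phase samples
                self.ref_poses = np.asarray(ref_poses)    # reference joint angles, one row per sample
                self.kp, self.kd = kp, kd

            def reference_pose(self, phase):
                """Interpolate the reference joint angles at the given phase, wrapping at 1."""
                phase = phase % 1.0
                return np.array([np.interp(phase, self.ref_phases, self.ref_poses[:, j])
                                 for j in range(self.ref_poses.shape[1])])

            def torques(self, phase, q, qdot):
                """PD feedback toward the phase-indexed reference pose (the local part of a tracking strategy)."""
                return self.kp * (self.reference_pose(phase) - q) - self.kd * qdot

        # A two-joint chain tracking a short cyclic reference.
        tracker = PhaseIndexedTracker(ref_phases=[0.0, 0.5, 1.0],
                                      ref_poses=[[0.0, 0.2], [0.4, -0.1], [0.0, 0.2]])
        print(tracker.torques(phase=0.25, q=np.array([0.1, 0.0]), qdot=np.zeros(2)))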

    Perceptually Driven Simulation

    This dissertation describes, implements and analyzes a comprehensive system for perceptually driven virtual reality simulation, based on algorithms which dynamically adjust level of detail (LOD) for entity simulation in order to maximize simulation realism as perceived by the viewer. First we review related work in simulation LOD, and describe the weaknesses of the analogy that has traditionally been drawn between simulation LOD and graphical LOD. We describe the process of perceptual criticality modeling for quantitatively estimating the relative importance of different entities in maintaining perceived realism and predicting the consequences of LOD transitions on perceived realism. We present heuristic cognitive models of human perception, memory, and attention to perform this modeling. We then propose the LOD Trader, a framework for perceptually driven LOD selection and an online approximation algorithm for efficiently identifying useful LOD transitions. We then describe alibi generation, a method of retroactively elaborating a human agent's behavior to maintain its realism under prolonged scrutiny from the viewer, and discuss its integration into a heterogeneous perceptually driven simulation. We then present the Marketplace simulation system and describe how perceptually driven simulation techniques were used to maximize perceived realism, and evaluate their success in doing so. Finally, we summarize the dissertation work performed and its expected contributions to real-time modeling and simulation environments.
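    The LOD Trader itself is an online approximation algorithm whose details are in the dissertation; the snippet below is only a much-simplified, assumed illustration of perceptually driven LOD selection, greedily demoting the least perceptually critical entities until the per-frame simulation cost fits a budget. The criticality scores, costs, and budget are placeholders.

        from dataclasses import dataclass

        @dataclass
        class Entity:
            name: str
            criticality: float   # estimated perceptual criticality (higher = more important to the viewer)
            cost_high: float     # per-frame cost at high simulation LOD, in ms
            cost_low: float      # per-frame cost at low simulation LOD, in ms
            lod: str = "high"

        def select_lods(entities, budget_ms):
            """Greedily demote the least critical entities to low LOD until the total cost fits the budget."""
            total = sum(e.cost_high for e in entities)
            for e in sorted(entities, key=lambda e: e.criticality):
                if total <= budget_ms:
                    break
                e.lod = "low"
                total -= e.cost_high - e.cost_low
            return total

        # Three simulated agents competing for a 2 ms per-frame budget.
        agents = [Entity("shopkeeper", 0.9, 1.5, 0.2),
                  Entity("background_pedestrian", 0.1, 1.5, 0.2),
                  Entity("nearby_customer", 0.6, 1.5, 0.2)]
        spent = select_lods(agents, budget_ms=2.0)
        print(spent, [(a.name, a.lod) for a in agents])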

    Interactive physically-based sound simulation

    The realization of interactive, immersive virtual worlds requires the ability to present a realistic audio experience that convincingly complements their visual rendering. Physical simulation is a natural way to achieve such realism, enabling deeply immersive virtual worlds. However, physically-based sound simulation is very computationally expensive owing to the high-frequency, transient oscillations underlying audible sounds. The increasing computational power of desktop computers has served to reduce the gap between required and available computation, and it has become possible to bridge this gap further by using a combination of algorithmic improvements that exploit the physical as well as perceptual properties of audible sounds. My thesis is a step in this direction. My dissertation concentrates on developing real-time techniques for both sub-problems of sound simulation: synthesis and propagation. Sound synthesis is concerned with generating the sounds produced by objects due to elastic surface vibrations upon interaction with the environment, such as collisions. I present novel techniques that exploit human auditory perception to simulate scenes with hundreds of sounding objects undergoing impact and rolling in real time. Sound propagation is the complementary problem of modeling the high-order scattering and diffraction of sound in an environment as it travels from source to listener. I discuss my work on a novel numerical acoustic simulator (ARD) that is a hundred times faster and consumes ten times less memory than a high-accuracy finite-difference technique, allowing acoustic simulations on previously intractable spaces, such as a cathedral, on a desktop computer. Lastly, I present my work on interactive sound propagation that leverages my ARD simulator to render the acoustics of arbitrary static scenes for multiple moving sources and a moving listener in real time, while accounting for scene-dependent effects such as low-pass filtering and smooth attenuation behind obstructions, reverberation, scattering from complex geometry, and sound focusing. This is enabled by a novel compact representation that takes a thousand times less memory than a direct scheme, thus reducing memory footprints to within available main memory. To the best of my knowledge, this is the only technique and system in existence to demonstrate auralization of physical wave-based effects in real time on large, complex 3D scenes.
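    The synthesis side of such systems is often built on modal synthesis, in which an impact response is a sum of exponentially damped sinusoids; the dissertation's contributions (perceptual scheduling of hundreds of sources and the ARD propagation solver) go well beyond this. The snippet below sketches only that basic modal model, with made-up mode parameters.

        import numpy as np

        def modal_impact(freqs_hz, dampings, gains, impulse, duration_s=1.0, sr=44100):
            """Synthesize an impact as a sum of exponentially damped sinusoids (basic modal synthesis)."""
            t = np.arange(int(duration_s * sr)) / sr
            out = np.zeros_like(t)
            for f, d, g in zip(freqs_hz, dampings, gains):
                out += impulse * g * np.exp(-d * t) * np.sin(2.0 * np.pi * f * t)
            return out / max(np.max(np.abs(out)), 1e-9)   # normalise to [-1, 1]

        # A rough "struck metal object" with three invented modes.
        samples = modal_impact(freqs_hz=[430.0, 1120.0, 2310.0],
                               dampings=[3.0, 6.0, 12.0],
                               gains=[1.0, 0.5, 0.25],
                               impulse=1.0)
        print(samples.shape)  # one second of audio at 44.1 kHz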