8 research outputs found

    Robust image metamorphosis immune from ghost and blur

    No full text
    In this paper, we propose a novel method for metamorphosis between two different images. With this approach, the transition sequence is generated by stitching a forward and a backward warped sequence in a three-dimensional space along a transition surface. In contrast to traditional methods, which blend two warped images at each intermediate frame, we continuously warp the images in opposite directions without blending until the two warped images match in the three-dimensional space, leading to a higher-quality transition. Furthermore, for each pixel we choose the most suitable input image, producing plausible in-between images free of ghosting and blur. In our scheme, the transition surface is computed by minimizing an energy function via graph-cut optimization. Based on the transition surface, a warp function is proposed to create a smooth and clear transformation. We demonstrate the advantage of our framework by performing transformation tests on various kinds of image pairs. © 2012 Springer-Verlag.
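The stitching idea can be illustrated with a minimal sketch (not the authors' implementation): given pre-computed forward- and backward-warped sequences and a per-pixel transition surface, each output pixel is taken from exactly one source at every frame, so nothing is ever cross-faded. The function name, array shapes, and the per-frame loop are illustrative assumptions.

```python
import numpy as np

def stitch_transition(forward_seq, backward_seq, transition_surface):
    """Build in-between frames by stitching two warped sequences.

    For each pixel (y, x), frames before the transition surface take the
    forward-warped source; frames at or after it take the backward-warped
    target. No two images are cross-faded, which avoids ghosting.

    forward_seq, backward_seq : arrays of shape (T, H, W)
    transition_surface        : array of shape (H, W), values in [0, T)
    """
    T, H, W = forward_seq.shape
    frames = np.empty_like(forward_seq)
    for t in range(T):
        use_forward = t < transition_surface          # (H, W) boolean mask
        frames[t] = np.where(use_forward, forward_seq[t], backward_seq[t])
    return frames
```

In the paper the surface itself comes from a graph-cut energy minimization; here it is simply given as input.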

    Real-time and robust hand tracking with a single depth camera

    No full text
    In this paper, we introduce a novel, real-time and robust hand tracking system capable of tracking articulated hand motion in full degrees of freedom (DOF) using a single depth camera. Unlike most previous systems, our system is able to initialize and recover from tracking loss automatically. This is achieved through an efficient two-stage k-nearest-neighbor database search method proposed in the paper, which effectively searches a pre-rendered database of small hand depth images designed to provide good initial guesses for model-based tracking. We also propose a robust objective function and improve the Particle Swarm Optimization algorithm with a resampling-based strategy for model-based tracking, providing continuous solutions in the full-DOF hand motion space more efficiently than previous methods. Our system runs at 40 fps on a GeForce GTX 580 GPU, and experimental results show that it outperforms state-of-the-art model-based hand tracking systems in both speed and accuracy. This result is significant for various applications in human-computer interaction and virtual reality. © 2013 Springer-Verlag Berlin Heidelberg.
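A two-stage nearest-neighbor search of the kind the abstract describes can be sketched as follows: a cheap low-dimensional descriptor first prunes the database, then the expensive full depth-image distance re-ranks only the survivors. All names, shapes, and the choice of Euclidean distance are assumptions for illustration.

```python
import numpy as np

def two_stage_knn(query, coarse_db, fine_db, k1=32, k2=3):
    """Two-stage nearest-neighbor search over a pre-rendered pose database.

    Stage 1 ranks all entries by a compact descriptor and keeps the k1 best;
    stage 2 re-ranks only those candidates with the full depth-image distance
    and returns the indices of the k2 best matches.
    """
    # Stage 1: coarse filtering on compact descriptors (cheap, over all rows).
    d1 = np.linalg.norm(coarse_db - query["coarse"], axis=1)
    candidates = np.argsort(d1)[:k1]
    # Stage 2: precise re-ranking on full depth images (expensive, k1 rows).
    d2 = np.linalg.norm(fine_db[candidates] - query["fine"], axis=1)
    return candidates[np.argsort(d2)[:k2]]
```

The returned poses would then seed the PSO-based model tracker, which is why the database only needs to provide good initial guesses rather than exact matches.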

    Real-time generation of smoothed-particle hydrodynamics-based special effects in character animation

    No full text
    In previous works, real-time fluid-character animation could hardly be achieved because of the intensive processing demands of the character's movement and the fluid simulation. This paper presents an effective approach to real-time generation of fluid flow driven by the motion of a character in full 3D space, based on the smoothed-particle hydrodynamics method. A novel method of conducting and constraining the fluid particles by the geometric properties of the character's motion trajectory is introduced. Furthermore, optimized algorithms for particle searching and rendering are proposed, taking advantage of graphics processing unit parallelization. Consequently, both simulation and rendering of 3D liquid effects with realistic character interactions can be implemented in our framework and performed in real time on a conventional PC. Copyright © 2013 John Wiley & Sons, Ltd.
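The "particle searching" that dominates SPH cost is commonly accelerated with a spatial hash grid of cell size equal to the kernel support radius, so each particle only tests the 27 adjacent cells. A minimal CPU sketch under that assumption (the paper's version is GPU-parallel, and all names here are illustrative):

```python
import numpy as np
from collections import defaultdict

def build_hash_grid(positions, h):
    """Bucket particle indices by grid cell of side h (the SPH support radius)."""
    grid = defaultdict(list)
    for i, p in enumerate(positions):
        grid[tuple((p // h).astype(int))].append(i)
    return grid

def neighbors(i, positions, grid, h):
    """Particles within distance h of particle i, checking only adjacent cells."""
    c = (positions[i] // h).astype(int)
    found = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dz in (-1, 0, 1):
                for j in grid.get((c[0] + dx, c[1] + dy, c[2] + dz), ()):
                    if j != i and np.linalg.norm(positions[j] - positions[i]) <= h:
                        found.append(j)
    return found
```

On a GPU the same idea is usually realized by sorting particles by cell index so each cell's particles are contiguous in memory.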

    Dynamic BFECC Characteristic Mapping method for fluid simulations

    No full text
    In this paper, we present a new numerical method for advection in fluid simulation. The method is built on the Characteristic Mapping method: advection is solved via a grid mapping function. The mapping function is maintained with the higher-order-accuracy BFECC method and is dynamically reset to the identity mapping whenever an error criterion is met. Handling the mapping function in this way yields a more accurate mapping, with which more details can be captured easily. Our error criterion also allows one to control the level of detail of the fluid simulation by adjusting a single parameter. Details of the implementation of our method are discussed, and we present several techniques for improving its efficiency. Both quantitative and visual experiments were performed to test our method. The results show that our method brings a significant improvement in accuracy and is efficient at capturing fluid details. © 2014 Springer-Verlag Berlin Heidelberg.
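The BFECC building block itself is standard: advect forward, advect the result backward, estimate the error as half the difference from the original field, and re-advect the corrected field. A 1D periodic sketch of that pattern (the semi-Lagrangian scheme, grid setup, and names are illustrative assumptions, not the paper's mapping-function variant):

```python
import numpy as np

def advect(phi, u, dt, dx):
    """First-order semi-Lagrangian advection on a periodic 1D grid."""
    n = len(phi)
    x = np.arange(n) * dx
    back = (x - u * dt) % (n * dx)            # departure points
    return np.interp(back, x, phi, period=n * dx)

def bfecc_advect(phi, u, dt, dx):
    """BFECC: forward step, backward step, then re-advect the corrected field.

    Advecting forward then backward should recover phi exactly; the residual
    phi - phi2 is (twice) the leading-order error, so subtracting half of it
    before the final advection cancels that error term.
    """
    phi1 = advect(phi, u, dt, dx)             # forward
    phi2 = advect(phi1, -u, dt, dx)           # backward; phi2 ~ phi + 2*error
    return advect(phi + 0.5 * (phi - phi2), u, dt, dx)
```

In the paper this update is applied to the mapping function rather than to the advected quantity directly, with the reset-to-identity criterion bounding the accumulated mapping error.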

    Animating turbulent water by vortex shedding in PIC/FLIP

    No full text
    In this paper, we present a hybrid method, integrating the PIC/FLIP and vortex particle methods into a unified framework, to efficiently simulate the vortex shedding that occurs when fluids flow around internal obstacles. To improve efficiency and reduce numerical dissipation, we first solve the governing equations on a coarse grid using PIC/FLIP, and then interpolate the intermediate results to a finer grid to obtain the base flow. When the regular particles in PIC/FLIP enter the boundary layer, if the specified conditions for vortex shedding are satisfied, they are selected as vortex particles by assigning additional vorticity-related attributes. The vortex particle dynamics are governed by the vorticity form of the Navier-Stokes equations, and several efficient methods are proposed to solve them on the finer grid. Finally, the obtained turbulent flow is added to the base flow. As a result, we are able to simulate turbulent water with rich wake details around internal obstacles. © 2013 Science China Press and Springer-Verlag Berlin Heidelberg.
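The PIC/FLIP half of the framework rests on a standard blend: PIC replaces each particle's velocity with the interpolated grid velocity (stable but dissipative), while FLIP adds only the grid velocity *change* (energetic but noisy). A one-particle sketch of that blend, with illustrative names and a typical high FLIP ratio:

```python
def update_particle_velocity(v_p, grid_new_at_p, grid_old_at_p, flip_ratio=0.95):
    """Blend PIC and FLIP velocity updates for one particle.

    v_p            : particle velocity before the update
    grid_new_at_p  : post-pressure-solve grid velocity interpolated at the particle
    grid_old_at_p  : pre-solve grid velocity interpolated at the particle
    """
    v_pic = grid_new_at_p                               # full replacement
    v_flip = v_p + (grid_new_at_p - grid_old_at_p)      # increment only
    return (1.0 - flip_ratio) * v_pic + flip_ratio * v_flip
```

Reducing numerical dissipation this way is what makes the coarse-grid base flow viable; the vortex particles then reintroduce the small-scale wake detail on the finer grid.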

    Multi-resolution shadow mapping using CUDA rasterizer

    No full text
    Shadow mapping is a fast and easy-to-use method for producing hard shadows. However, it introduces aliasing due to its uniform sampling strategy and limited shadow map resolution. In this paper, we propose a memory-efficient algorithm to render high-quality shadows. Our algorithm is based on a multi-resolution shadow map structure, which includes a conventional shadow map for scene regions where low resolution is sufficient, and a high-resolution patch buffer that captures scene regions susceptible to aliasing. With this data structure, we are able to capture shadow details with a far smaller memory footprint than conventional shadow mapping. To maintain performance comparable to conventional shadow mapping, we designed a customized CUDA rasterizer to render the high-resolution patches. © 2013 IEEE.
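The lookup side of such a structure can be sketched simply: a shadow query first locates its low-resolution texel, and if a high-resolution patch exists for that texel, the depth test uses the patch instead. This is a minimal CPU sketch under assumed conventions (dict-of-patches storage, a fixed depth bias, light-space coordinates in [0, 1)); the paper's structure and its CUDA rasterizer are far more involved.

```python
def shadow_test(u, v, depth, low_res_map, patches):
    """Depth test against a multi-resolution shadow map.

    `patches` maps a low-res texel (iu, iv) to a finer depth grid covering
    that texel; texels without a patch fall back to the low-res map.
    Returns True if the point at light-space (u, v, depth) is lit.
    """
    h, w = len(low_res_map), len(low_res_map[0])
    iu, iv = int(u * w), int(v * h)
    patch = patches.get((iu, iv))
    if patch is not None:
        # Local coordinates inside the texel, sampled at patch resolution.
        ph, pw = len(patch), len(patch[0])
        pu = int((u * w - iu) * pw)
        pv = int((v * h - iv) * ph)
        stored = patch[pv][pu]
    else:
        stored = low_res_map[iv][iu]
    return depth <= stored + 1e-4    # small bias against self-shadowing
```

Memory is saved because only aliasing-prone texels carry a patch; the rest of the scene pays only for the conventional low-resolution map.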

    Accurate and efficient cross-domain visual matching leveraging multiple feature representations

    No full text
    Cross-domain visual matching aims at finding visually similar images across a wide range of visual domains, and has shown practical impact in a number of applications. Unfortunately, the state-of-the-art approach, which estimates the relative importance of individual feature dimensions, still suffers from low matching accuracy and high time cost. To this end, this paper proposes a novel cross-domain visual matching framework leveraging multiple feature representations. To integrate the discriminative power of multiple features, we develop a data-driven, query-specific feature fusion model, which simultaneously estimates the relative importance of the individual feature dimensions and the weight vector among the multiple features. Moreover, to alleviate the computational burden of an exhaustive subimage search, we design a speedup scheme that employs hyperplane hashing for rapidly collecting hard negatives. Extensive experiments on various matching tasks demonstrate that the proposed approach outperforms the state of the art in both accuracy and efficiency. © 2013 Springer-Verlag Berlin Heidelberg.
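The scoring side of such a fusion model can be sketched as a two-level weighted similarity: per-dimension weights within each feature channel, and per-channel weights across channels. In the paper both sets of weights are learned per query; in this sketch they are simply given, and all names and the dot-product similarity are illustrative assumptions.

```python
import numpy as np

def fused_score(query_feats, cand_feats, dim_weights, feat_weights):
    """Score a candidate against a query using multiple feature channels.

    query_feats, cand_feats : dicts mapping channel name -> feature vector
    dim_weights[f]          : per-dimension importance within channel f
    feat_weights[f]         : relative weight of channel f in the fusion
    """
    score = 0.0
    for f in query_feats:
        sim = np.dot(dim_weights[f] * query_feats[f], cand_feats[f])
        score += feat_weights[f] * sim
    return score
```

Ranking candidates by this score is the inner loop of the subimage search; the hyperplane-hashing speedup in the paper addresses the separate problem of gathering hard negatives to fit the weights quickly.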