
    Modeling and generating moving trees from video

    We present a probabilistic approach for the automatic production of tree models with convincing 3D appearance and motion. The only input is a video of a moving tree, from which we build an initial dynamic tree model that is then used to generate new individual trees of the same type. Our approach combines global and local constraints to construct a dynamic 3D tree model from a 2D skeleton. Our modeling takes into account factors such as the shape of branches, the overall shape of the tree, and physically plausible motion. Furthermore, we provide a generative model that creates multiple trees in 3D, given a single example model. This means that users no longer have to model each tree individually or specify rules to make new trees. Results with different species are presented and compared to both reference input data and state-of-the-art alternatives.
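
    As a concrete illustration of the generative step described above, a minimal sketch follows: simple per-branch statistics (length and angle to the parent) are fitted from one example tree, and new trees of the same type are sampled recursively. All names, the Gaussian assumption, and the fixed branching factor are illustrative placeholders, not the paper's actual model.

        import numpy as np

        rng = np.random.default_rng(0)

        def fit_branch_stats(branches):
            """branches: (length, angle-to-parent) pairs measured on the example tree."""
            arr = np.asarray(branches, dtype=float)
            return arr.mean(axis=0), arr.std(axis=0)

        def sample_tree(mean, std, depth=4, children=3):
            """Recursively sample branch parameters to build a new tree."""
            length, angle = rng.normal(mean, std)
            kids = [] if depth == 1 else [sample_tree(mean, std, depth - 1, children)
                                          for _ in range(children)]
            return {"length": max(float(length), 0.0),
                    "angle": float(angle),
                    "children": kids}

        # three measured branches from the example tree (length, angle in degrees)
        mean, std = fit_branch_stats([(1.0, 30.0), (0.8, 25.0), (1.2, 35.0)])
        new_tree = sample_tree(mean, std)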

    Hierarchical retargetting of 2D motion fields to the animation of 3D plant models

    The complexity of animating trees, shrubs and foliage is an impediment to the efficient and realistic depiction of natural environments. This paper presents an algorithm to extract, from a single video sequence, motion fields of real shrubs under the influence of wind, and to transfer this motion to the animation of complex, synthetic 3D plant models. The extracted motion is retargeted without requiring physical simulation. First, feature tracking is applied to the video footage, allowing the 2D position and velocity of automatically identified features to be clustered. A key contribution of the method is that the hierarchy obtained through statistical clustering can be used to synthesize a 2D hierarchical geometric structure of branches that terminates according to the cut-off threshold of a classification algorithm. This step extracts both the shape and the motion of a hierarchy of feature groups that are identified as geometric branches. The 2D hierarchy is then extended to three dimensions using the estimated spatial distribution of the features within each group. Another key contribution is that this 3D hierarchical structure can be efficiently used as a motion controller to animate any complex 3D model of a similar but non-identical plant using a standard skinning algorithm. Thus, a single video source of a moving shrub becomes an input device for a large class of virtual shrubs. We illustrate the results on two examples of shrubs and one outdoor tree. Extensions to other outdoor plants are discussed.
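
    The clustering step lends itself to a short sketch. The following is a hedged stand-in, assuming per-feature position and velocity vectors and SciPy's standard agglomerative clustering; the feature values and the distance cut-off are placeholders, not the paper's data or tuned threshold.

        import numpy as np
        from scipy.cluster.hierarchy import linkage, fcluster

        # one row per tracked feature: x, y, vx, vy (averaged over the sequence)
        features = np.array([
            [10.0, 50.0,  0.5, 0.1],
            [12.0, 52.0,  0.6, 0.1],
            [40.0, 80.0, -0.2, 0.4],
            [42.0, 78.0, -0.3, 0.5],
        ])

        # build the full merge tree, then cut it at a distance threshold;
        # each resulting cluster plays the role of one geometric branch
        merge_tree = linkage(features, method="ward")
        branch_ids = fcluster(merge_tree, t=5.0, criterion="distance")
        print(branch_ids)    # cluster label per feature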

    Cloth in the Wind: A Case Study of Physical Measurement through Simulation

    For many of the physical phenomena around us, we have developed sophisticated models explaining their behavior. Nevertheless, measuring physical properties from visual observations is challenging due to the high number of causally underlying physical parameters, including material properties and external forces. In this paper, we propose to measure latent physical properties for cloth in the wind without ever having seen a real example before. Our solution is an iterative refinement procedure with simulation at its core. The algorithm gradually updates the physical model parameters by running a simulation of the observed phenomenon and comparing the current simulation to a real-world observation. The correspondence is measured using an embedding function that maps physically similar examples to nearby points. We consider a case study of cloth in the wind, with curling flags as our leading example: a seemingly simple phenomenon, but physically highly involved. Based on the physics of cloth and its visual manifestation, we propose an instantiation of the embedding function. For this mapping, modeled as a deep network, we introduce a spectral layer that decomposes a video volume into its temporal spectral power and corresponding frequencies. Our experiments demonstrate that the proposed method compares favorably to prior work on the task of measuring cloth material properties and external wind force from a real-world video.
    Comment: CVPR 2020. arXiv admin note: substantial text overlap with arXiv:1910.0786
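
    The spectral decomposition described above is concrete enough to sketch. The plain NumPy stand-in below computes, per pixel, the temporal spectral power of a video volume and the corresponding frequencies via an FFT along the time axis; it is an illustration of the idea, not the paper's actual network layer.

        import numpy as np

        def temporal_spectral_power(video, fps=30.0):
            """video: array of shape (T, H, W); returns per-frequency power and frequencies."""
            spectrum = np.fft.rfft(video, axis=0)                  # complex, shape (T//2 + 1, H, W)
            power = np.abs(spectrum) ** 2                          # temporal spectral power
            freqs = np.fft.rfftfreq(video.shape[0], d=1.0 / fps)   # frequency of each bin (Hz)
            return power, freqs

        video = np.random.rand(64, 32, 32)                         # synthetic 64-frame clip
        power, freqs = temporal_spectral_power(video)
        # dominant non-DC frequency over the whole volume
        dominant = freqs[1:][power[1:].reshape(len(freqs) - 1, -1).sum(axis=1).argmax()]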

    A framework for cardio-pulmonary resuscitation (CPR) scene retrieval from medical simulation videos based on object and activity detection.

    In this thesis, we propose a framework to detect and retrieve CPR activity scenes from medical simulation videos. Medical simulation is a modern training method for medical students, where an emergency patient condition is simulated on human-like mannequins and the students act on it. These simulation sessions are recorded by the physician for later debriefing. With the increasing number of simulation videos, automatic detection and retrieval of specific scenes has become necessary. The proposed framework for CPR scene retrieval eliminates the conventional approach of using shot detection and frame segmentation techniques. Firstly, our work explores the application of Histograms of Oriented Gradients in three dimensions (HOG3D) to retrieve the scenes containing CPR activity. Secondly, we investigate the use of Local Binary Patterns on Three Orthogonal Planes (LBP-TOP), the three-dimensional extension of the popular Local Binary Patterns. This technique yields a robust feature that can detect specific activities in scenes containing multiple actors and activities. Thirdly, we propose an improvement over the above-mentioned methods by combining HOG3D and LBP-TOP, using decision-level fusion techniques to combine the features. We show experimentally that the proposed techniques and their combination outperform the existing system for CPR scene retrieval. Finally, we devise a method to detect and retrieve the scenes containing breathing-bag activity from the medical simulation videos. The proposed framework is tested and validated on eight medical simulation videos, and the results are presented.
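
    Of the steps above, the decision-level fusion is simple enough to sketch. Below is a minimal weighted-sum fusion of two per-segment scores, assuming each detector (one HOG3D-based, one LBP-TOP-based) outputs a score in [0, 1]; the weights and threshold are illustrative, not the thesis's tuned values.

        def fuse_decisions(score_hog3d, score_lbptop, w=0.5, threshold=0.5):
            """Weighted-sum fusion of two per-segment CPR-activity scores in [0, 1]."""
            fused = w * score_hog3d + (1.0 - w) * score_lbptop
            return fused, fused >= threshold

        # example: HOG3D detector says 0.72, LBP-TOP says 0.61, weight HOG3D higher
        fused_score, is_cpr = fuse_decisions(0.72, 0.61, w=0.6)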

    Segmentation based variational model for accurate optical flow estimation.

    Chen, Jianing. Thesis (M.Phil.)--Chinese University of Hong Kong, 2009. Includes bibliographical references (leaves 47-54). Abstract also in Chinese.
    Chapter 1 --- Introduction
        1.1 Background
        1.2 Related Work
        1.3 Thesis Organization
    Chapter 2 --- Review on Optical Flow Estimation
        2.1 Variational Model
            2.1.1 Basic Assumptions and Constraints
            2.1.2 More General Energy Functional
        2.2 Discontinuity Preserving Techniques
            2.2.1 Data Term Robustification
            2.2.2 Diffusion Based Regularization
            2.2.3 Segmentation
        2.3 Chapter Summary
    Chapter 3 --- Segmentation Based Optical Flow Estimation
        3.1 Initial Flow
        3.2 Color-Motion Segmentation
        3.3 Parametric Flow Estimating Incorporating Segmentation
        3.4 Confidence Map Construction
            3.4.1 Occlusion detection
            3.4.2 Pixel-wise motion coherence
            3.4.3 Segment-wise model confidence
        3.5 Final Combined Variational Model
        3.6 Chapter Summary
    Chapter 4 --- Experiment Results
        4.1 Quantitative Evaluation
        4.2 Warping Results
        4.3 Chapter Summary
    Chapter 5 --- Application - Single Image Animation
        5.1 Introduction
        5.2 Approach
            5.2.1 Pre-Process Stage
            5.2.2 Coordinate Transform
            5.2.3 Motion Field Transfer
            5.2.4 Motion Editing and Apply
            5.2.5 Gradient-domain composition
        5.3 Experiments
            5.3.1 Active Motion Transfer
            5.3.2 Animate Stationary Temporal Dynamics
        5.4 Chapter Summary
    Chapter 6 --- Conclusion
    Bibliography
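
    For orientation, the variational model reviewed in Chapter 2 builds on the classic brightness-constancy energy; a standard textbook form (not necessarily the thesis's exact functional) is

        E(u, v) = \int_\Omega \big( I(x + u, y + v, t + 1) - I(x, y, t) \big)^2
                  + \alpha \left( \lVert \nabla u \rVert^2 + \lVert \nabla v \rVert^2 \right) \, dx \, dy

    where (u, v) is the flow field, the first term penalizes brightness change along the flow, and the smoothness weight \alpha controls the regularizer that the discontinuity-preserving techniques of Section 2.2 modify.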

    Video input driven animation (VIDA)

    There are many challenges associated with the integration of synthetic and real imagery. One particularly difficult problem is the automatic extraction of salient parameters of natural phenomena from real video footage for subsequent application to synthetic objects. Can we ensure that the hair and clothing of a synthetic actor placed in a meadow of swaying grass will move consistently with the wind that moved that grass? The video footage can be seen as a controller for the motion of synthetic features, a concept we call video input driven animation (VIDA). We propose a schema that analyzes an input video sequence, extracts parameters from the motion of objects in the video, and uses this information to drive the motion of synthetic objects. To validate the principles of VIDA, we approximate the inverse problem using harmonic oscillation, which we use to extract parameters of wind and of regular water waves. We observe the effect of wind on a tree in a video, estimate wind speed parameters from its motion, and then use this to make synthetic objects move. We also extract water elevation parameters from the observed motion of boats and apply the resulting water waves to synthetic boats.
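
    A minimal sketch of the inverse-oscillation idea, assuming a 1D displacement track of a branch tip extracted from the video: recover the dominant frequency and amplitude via an FFT and invert a simple harmonic model x(t) = A sin(2 pi f t). The amplitude-to-wind-speed scale factor is a hypothetical stand-in for VIDA's calibrated mapping, not the paper's actual estimator.

        import numpy as np

        def estimate_oscillation(displacement, fps=30.0):
            """Dominant frequency (Hz) and amplitude of a 1D displacement track."""
            x = displacement - displacement.mean()
            spectrum = np.abs(np.fft.rfft(x))
            freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
            k = spectrum[1:].argmax() + 1            # skip the DC bin
            return freqs[k], 2.0 * spectrum[k] / len(x)

        t = np.arange(0, 4, 1.0 / 30.0)
        track = 0.05 * np.sin(2 * np.pi * 1.5 * t)   # synthetic 1.5 Hz sway
        f, amp = estimate_oscillation(track)
        wind_speed = amp * 40.0                      # hypothetical stiffness-based scale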