5 research outputs found

    A functional for motion estimation of a deforming body

    Assuming that the pixel values in the images are proportional to some conserved quantity, a new penalty functional is defined for motion estimation of a deforming body. The theory of linear elasticity and the conservation laws of continuous media are used to propose new constraining terms. Introducing an a priori physical deformation model gives a new interpretation of Song and Leahy's solution [Song 91]. Experiments on simulated and real images of a deforming body are presented; they show that, depending on the parameter values, the method can take into account compressible or incompressible motion.
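    As an illustration of the general form such a functional can take (a sketch consistent with the abstract, not the authors' exact formulation; the weights λ and μ are illustrative), a data term built on the continuity equation of a conserved quantity can be combined with a linearized elastic strain-energy regularizer:

        \[
        E(\mathbf{v}) \;=\; \int_\Omega \Big( \tfrac{\partial I}{\partial t} + \nabla\!\cdot\!\big(I\,\mathbf{v}\big) \Big)^{2} d\mathbf{x}
        \;+\; \int_\Omega \Big[ \tfrac{\lambda}{2}\,\big(\operatorname{tr}\,\boldsymbol{\varepsilon}\big)^{2} + \mu\,\boldsymbol{\varepsilon}\!:\!\boldsymbol{\varepsilon} \Big] d\mathbf{x},
        \qquad
        \boldsymbol{\varepsilon} \;=\; \tfrac{1}{2}\big(\nabla\mathbf{v} + \nabla\mathbf{v}^{\top}\big),
        \]

    where I is the image intensity (proportional to the conserved quantity), v the displacement field and ε its linearized strain; the weight on the (tr ε)² term controls how strongly divergence is penalized, which matches the abstract's remark that the parameter values select compressible or incompressible motion.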

    Single View Modeling and View Synthesis

    This thesis develops new algorithms to produce 3D content from a single camera. Today, amateurs can use hand-held camcorders to capture and display the 3D world in 2D, using mature technologies. However, there is a strong desire to record and re-explore the 3D world in 3D. To achieve this goal, current approaches usually rely on a camera array, which suffers from tedious setup and calibration as well as a lack of portability, limiting its use to lab experiments. In this thesis, I try to produce 3D content with a single camera, making the process as simple as shooting pictures. This requires a new front-end capture device rather than a regular camcorder, as well as more sophisticated algorithms. First, in order to capture highly detailed object surfaces, I designed and developed a depth camera based on a novel technique called light fall-off stereo (LFS). The LFS depth camera outputs color+depth image sequences at 30 fps, which is necessary for capturing dynamic scenes. Based on the output color+depth images, I developed a new approach that builds 3D models of dynamic and deformable objects. While the camera can only capture part of an object at any instant, the partial surfaces are assembled into a complete 3D model by a novel warping algorithm. Inspired by the success of single-view 3D modeling, I extended my exploration to 2D-to-3D video conversion that does not use a depth camera. I developed a semi-automatic system that converts monocular videos into stereoscopic videos via view synthesis. It combines motion analysis with user interaction, aiming to transfer as much of the depth-inference work as possible from the user to the computer. I developed two new methods that analyze the optical flow in order to provide additional qualitative depth constraints, and the automatically extracted depth information is presented in the user interface to assist the user's labeling work. In summary, this thesis presents new algorithms to produce 3D content from a single camera: when depth maps are provided, they build high-fidelity 3D models of dynamic and deformable objects; otherwise, they convert video clips into stereoscopic video.
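    The light fall-off stereo idea rests on the inverse-square law: moving a point light source by a known offset changes each pixel's brightness in a way that depends only on the light-to-surface distance, so depth can be recovered per pixel from an intensity ratio. The following minimal sketch illustrates that principle under idealized assumptions (a single point light translated along the viewing direction, no noise, albedo cancelling in the ratio); the function lfs_depth and the toy inputs are illustrative, not the thesis implementation.

        import numpy as np

        def lfs_depth(img_near, img_far, delta, eps=1e-6):
            # Under the inverse-square law, intensity at a pixel scales as rho / r^2,
            # where rho is the (unknown) albedo/shading factor and r is the distance
            # from the point light to the surface.  With the light moved back by a
            # known offset `delta`, rho cancels in the ratio:
            #     img_near / img_far = ((r + delta) / r)^2
            # which can be solved per pixel for r (distance to the light).
            ratio = np.sqrt(np.maximum(img_near, eps) / np.maximum(img_far, eps))
            return delta / np.maximum(ratio - 1.0, eps)

        # Toy check: a fronto-parallel patch 1.0 m from the light, light moved back 5 cm.
        r_true, delta, albedo = 1.0, 0.05, 0.7
        img_near = albedo / r_true**2 * np.ones((4, 4))
        img_far = albedo / (r_true + delta)**2 * np.ones((4, 4))
        print(lfs_depth(img_near, img_far, delta))  # ~1.0 everywhere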

    Optical flow estimation preserving discontinuities: a survey

    Motion estimation from image sequences is based on two assumptions: the brightness conservation assumption and the assumption of spatial, temporal or spatio-temporal continuity (i.e. the smoothness constraint) of the apparent velocity field. The latter assumption holds locally, within the objects, but it results in blurring the boundaries between the projections, onto the image plane, of objects undergoing different motions. These boundaries are called motion discontinuities. The main subject of this paper is an overview of the existing techniques designed to estimate the apparent velocity field while preserving the motion discontinuities. The first part deals with the methods based on the assumption that motion discontinuities spatially coincide with image brightness boundaries. The second part reports the methods that segment the currently estimated field into regions that are homogeneous with respect to motion. The third part describes the methods which estimate the motion field while detecting the local discontinuities of the estimated field, so as to avoid smoothing in the areas likely to contain motion boundaries; these discontinuities are preserved by means of a binary line process, a robust estimator, or an anisotropic diffusion scheme. The last part is devoted to occlusions: the estimation errors in occlusion areas are due to the violation of the two basic assumptions, the continuity assumption and, above all, the conservation assumption.
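    As a concrete illustration of the third family of methods (a textbook-style sketch, not a formulation taken from the survey), the quadratic smoothness term of the classical Horn-Schunck functional can be replaced by robust penalties so that large flow gradients at motion boundaries are penalized less than under a quadratic:

        \[
        E(u, v) \;=\; \int_\Omega \rho_D\big(I_x u + I_y v + I_t\big)\, d\mathbf{x}
        \;+\; \alpha \int_\Omega \Big[ \rho_S\big(\lVert \nabla u \rVert\big) + \rho_S\big(\lVert \nabla v \rVert\big) \Big] d\mathbf{x},
        \]

    where (u, v) is the apparent velocity field, I_x, I_y, I_t are the image derivatives, and ρ_D, ρ_S are robust functions such as the Lorentzian ρ(x) = log(1 + x²/(2σ²)); a redescending ρ_S implicitly plays the role of the binary line process mentioned in the abstract, while a quadratic choice recovers the smooth estimate.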

    Estimation of visual motion in image sequences
