
    Objective View Synthesis Quality Assessment

    View synthesis introduces geometric distortions that are not handled efficiently by existing image quality assessment metrics. Despite the widespread adoption of 3D technology, notably 3D television (3DTV) and free-viewpoint television (FTV), the field of view synthesis quality assessment has not yet been widely investigated and new quality metrics are required. In this study, we propose a new full-reference objective quality assessment metric: the View Synthesis Quality Assessment (VSQA) metric. Our method is dedicated to artifact detection in synthesized viewpoints and aims to handle areas where disparity estimation may fail: thin objects, object borders, transparency, variations of illumination or color differences between left and right views, periodic objects, etc. The key feature of the proposed method is the use of three visibility maps which characterize complexity in terms of textures, diversity of gradient orientations, and presence of high contrast. Moreover, the VSQA metric can be defined as an extension of any existing 2D image quality assessment metric. Experimental tests have shown the effectiveness of the proposed method.
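    A minimal sketch of how such visibility-weighted scoring could be wired up is given below. The map definitions (local standard deviation for texture, circular variance of gradient orientations, local max-min range for contrast), the window size, the weighting formula, and the use of a plain absolute-difference distortion map in place of a full 2D metric are all assumptions for illustration, not the VSQA formulation itself.

        import numpy as np

        def _windows(img, k):
            # All k x k neighborhoods of img (reflect-padded), shape (H, W, k, k).
            pad = k // 2
            padded = np.pad(img, pad, mode="reflect")
            return np.lib.stride_tricks.sliding_window_view(padded, (k, k))

        def visibility_maps(ref, k=7):
            # Three illustrative per-pixel maps: texture complexity, gradient
            # orientation diversity, and local contrast (definitions are assumptions).
            win = _windows(ref, k)
            texture = win.std(axis=(-2, -1))            # local std as a texture proxy
            gy, gx = np.gradient(ref)
            ori_c = _windows(np.cos(np.arctan2(gy, gx)), k).mean(axis=(-2, -1))
            ori_s = _windows(np.sin(np.arctan2(gy, gx)), k).mean(axis=(-2, -1))
            orientation = 1.0 - np.hypot(ori_c, ori_s)  # circular variance of orientations
            contrast = win.max(axis=(-2, -1)) - win.min(axis=(-2, -1))
            return texture, orientation, contrast

        def vsqa_like_score(ref, syn, alpha=1.0, beta=1.0, gamma=1.0):
            # Weight a per-pixel distortion map (here a plain absolute difference,
            # standing in for any 2D metric) by the three visibility maps.
            t, o, c = visibility_maps(ref)
            weight = ((1 + alpha * t / (t.max() + 1e-8))
                      * (1 + beta * o)
                      * (1 + gamma * c / (c.max() + 1e-8)))
            distortion = np.abs(ref - syn)
            return float((weight * distortion).mean())

        # Example: score a synthesized view against its reference.
        ref = np.random.rand(64, 64)
        syn = ref + 0.05 * np.random.randn(64, 64)
        print(vsqa_like_score(ref, syn))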

    Disparity-compensated view synthesis for s3D content correction

    The production of stereoscopic 3D HD content is increasing considerably, and experience in two-view acquisition is still being gained. High-quality material must be delivered to the audience, but this is not always ensured, and correction of the stereo views may be required. This correction is performed via disparity-compensated view synthesis. A robust method has been developed that deals with the acquisition problems that introduce discomfort (e.g. hyperdivergence and hyperconvergence) as well as those that may disrupt the correction itself (vertical disparity, color differences between views, etc.). The method has three phases. A preprocessing phase corrects the stereo images and estimates features (e.g. disparity range) over the sequence. The second (main) phase then performs disparity estimation and view synthesis: dual disparity estimation based on robust block matching, discontinuity-preserving filtering, and consistency and occlusion handling has been developed, and accurate view synthesis is carried out through disparity compensation; disparity assessment has been introduced in order to detect and quantify errors. A post-processing phase deals with these errors as a fallback mode. The paper focuses on disparity estimation and view synthesis of HD images. Quality assessment of synthesized views on a large set of HD video data has proved the effectiveness of our method.
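    As a minimal illustration of the disparity-compensated synthesis step, the sketch below backward-warps one view along a horizontal disparity map and marks the pixels that cannot be filled, leaving them to a fallback/post-processing stage. The backward-warping scheme, the linear interpolation, and the hole marking are generic assumptions, not the specific pipeline described in the paper.

        import numpy as np

        def warp_with_disparity(src, disparity):
            # Backward warp: for each target pixel (y, x), sample the source view at
            # x + d(y, x) with linear interpolation along the row. Pixels whose source
            # location falls outside the image are marked as holes to be handled by a
            # fallback / post-processing step.
            h, w = src.shape
            xs = np.arange(w)[None, :] + disparity       # source x-coordinates, shape (h, w)
            x0 = np.floor(xs).astype(int)
            frac = xs - x0
            valid = (x0 >= 0) & (x0 + 1 < w)
            x0c = np.clip(x0, 0, w - 2)
            rows = np.arange(h)[:, None]
            warped = (1 - frac) * src[rows, x0c] + frac * src[rows, x0c + 1]
            warped[~valid] = np.nan                      # holes for the fallback mode
            return warped, ~valid

        # Toy usage: shift a synthetic left view by a constant 3-pixel disparity.
        left = np.tile(np.linspace(0, 1, 32), (16, 1))
        disp = np.full_like(left, 3.0)
        synth, holes = warp_with_disparity(left, disp)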

    Dense motion estimation between distant images: combinatorial multi-step integration and statistical selection

    To address the problem of dense correspondence between distant images, we propose a combinatorial multi-step integration method that builds a large set of candidate motion fields via multiple motion paths. The optimal field is then selected using, in addition to the commonly used global optimization techniques, a statistical processing that exploits the spatial density of the candidates as well as their forward-backward consistency. Experiments carried out in the context of video editing demonstrate the good performance of our method.
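    The multi-step integration described above composes elementary optical flow fields along motion paths across the sequence. The sketch below shows one plausible way to concatenate two dense flows by sampling the second field at the positions reached by the first; the bilinear sampling, boundary clamping, and channel layout (x displacement first) are illustrative assumptions rather than the authors' implementation.

        import numpy as np

        def concatenate_flows(flow_ab, flow_bc):
            # Compose two dense displacement fields: the motion from frame A to C is
            # the A->B flow plus the B->C flow sampled at the positions reached in B.
            h, w, _ = flow_ab.shape
            ys, xs = np.mgrid[0:h, 0:w].astype(float)
            xb = np.clip(xs + flow_ab[..., 0], 0, w - 1)
            yb = np.clip(ys + flow_ab[..., 1], 0, h - 1)
            x0, y0 = np.floor(xb).astype(int), np.floor(yb).astype(int)
            x1, y1 = np.minimum(x0 + 1, w - 1), np.minimum(y0 + 1, h - 1)
            wx, wy = xb - x0, yb - y0
            def sample(f):  # bilinear sampling of one flow channel at (xb, yb)
                return ((1 - wy) * ((1 - wx) * f[y0, x0] + wx * f[y0, x1])
                        + wy * ((1 - wx) * f[y1, x0] + wx * f[y1, x1]))
            flow_bc_at_b = np.stack([sample(flow_bc[..., 0]), sample(flow_bc[..., 1])], axis=-1)
            return flow_ab + flow_bc_at_b

        # Example: two constant 1-pixel rightward flows compose into a 2-pixel flow.
        f = np.zeros((8, 8, 2)); f[..., 0] = 1.0
        print(concatenate_flows(f, f)[0, 0])   # -> [2. 0.]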

    Dense long-term motion estimation via Statistical Multi-Step Flow

    We present statistical multi-step flow, a new approach for dense motion estimation in long video sequences. Towards this goal, we propose a two-step framework comprising an initial generation of dense motion candidates and a new iterative motion refinement stage. The first step performs a combinatorial integration of elementary optical flows combined with a statistical selection of candidate displacement fields and focuses especially on reducing motion inconsistency. In the second step, the initial estimates are iteratively refined by considering several motion candidates, including candidates obtained from neighboring frames. For this refinement task, we introduce a new energy formulation which relies on strong temporal smoothness constraints. Experiments compare the proposed statistical multi-step flow approach to state-of-the-art methods through both quantitative assessment using the Flag benchmark dataset and qualitative assessment in the context of video editing.
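    A common criterion behind this kind of statistical candidate selection is forward-backward consistency. The hedged sketch below measures a per-pixel round-trip error and keeps, per pixel, the candidate field with the smallest error; the nearest-neighbour sampling and the winner-takes-all selection are simplifying assumptions, not the paper's energy formulation.

        import numpy as np

        def forward_backward_error(flow_fw, flow_bw):
            # Per-pixel inconsistency: follow the forward flow, read the backward flow
            # at the reached position (nearest neighbour for brevity), and measure how
            # far the round trip lands from the starting point. Small values indicate
            # reliable correspondences; large values flag occlusions or mismatches.
            h, w, _ = flow_fw.shape
            ys, xs = np.mgrid[0:h, 0:w]
            xr = np.clip(np.rint(xs + flow_fw[..., 0]).astype(int), 0, w - 1)
            yr = np.clip(np.rint(ys + flow_fw[..., 1]).astype(int), 0, h - 1)
            round_trip = flow_fw + flow_bw[yr, xr]
            return np.hypot(round_trip[..., 0], round_trip[..., 1])

        def select_best_candidate(candidates_fw, candidates_bw):
            # Among several candidate fields, keep per pixel the displacement whose
            # forward-backward error is smallest (one simple instance of statistical
            # candidate selection).
            errors = np.stack([forward_backward_error(f, b)
                               for f, b in zip(candidates_fw, candidates_bw)])
            best = errors.argmin(axis=0)                 # (h, w) index of winning candidate
            stacked = np.stack(candidates_fw)            # (n, h, w, 2)
            h, w = best.shape
            return stacked[best, np.arange(h)[:, None], np.arange(w)[None, :]]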

    Dense motion estimation between distant frames: combinatorial multi-step integration and statistical selection

    Accurate estimation of dense point correspondences between two distant frames of a video sequence is a challenging task. To address this problem, we present a combinatorial multi-step integration procedure which allows one to obtain a large set of candidate motion fields between the two distant frames by considering multiple motion paths across the video sequence. Given this large candidate set, we propose to perform the optimal motion vector selection by combining a global optimization stage with a new statistical processing. Instead of considering a selection based only on intrinsic motion field quality and spatial regularization, the statistical processing exploits the spatial distribution of candidates and introduces an intra-candidate quality based on forward-backward consistency. Experiments evaluate the effectiveness of our method for distant motion estimation in the context of video editing.
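    The statistical processing mentioned above exploits the spatial distribution of candidate displacements at each pixel. As a hedged stand-in, the sketch below keeps, per pixel, the medoid of the candidate cloud (the displacement with the smallest total distance to the other candidates), so that isolated outlier candidates are rejected; the medoid rule is an illustrative simplification, not the paper's selection scheme.

        import numpy as np

        def density_based_selection(candidates):
            # candidates: array of shape (n, h, w, 2) with one displacement per
            # candidate field and pixel. For every pixel, keep the candidate lying
            # in the densest region of the candidate cloud, approximated here by
            # the medoid (smallest total distance to the other candidates).
            diffs = candidates[:, None] - candidates[None, :]     # (n, n, h, w, 2)
            dists = np.linalg.norm(diffs, axis=-1).sum(axis=1)    # (n, h, w)
            best = dists.argmin(axis=0)                           # (h, w)
            n, h, w, _ = candidates.shape
            return candidates[best, np.arange(h)[:, None], np.arange(w)[None, :]]

        # Toy usage: three candidate fields, two agreeing and one outlier.
        h, w = 4, 4
        agree = np.full((h, w, 2), 2.0)
        outlier = np.full((h, w, 2), 9.0)
        chosen = density_based_selection(np.stack([agree, agree + 0.1, outlier]))
        print(chosen[0, 0])   # -> [2.1 2.1]: an agreeing candidate, the outlier is rejected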

    Multi-step flow fusion: towards accurate and dense correspondences in long video shots

    The aim of this work is to estimate dense displacement fields over long video shots. Put in sequence, these fields are useful for representing point trajectories, but also for propagating (pulling) information from a reference frame to the rest of the video. Highly elaborate optical flow estimation algorithms are available, and they have previously been applied to dense point tracking by simple accumulation, albeit with unavoidable position drift. On the other hand, direct long-term point matching is more robust to such deviations, but it is very sensitive to ambiguous correspondences. Why not combine the benefits of both approaches? Following this idea, we develop a multi-step flow fusion method that optimally generates dense long-term displacement fields by first merging several candidate estimated paths and then filtering the tracks in the spatio-temporal domain. Our approach handles small and large displacements with improved accuracy and is able to recover a trajectory after temporary occlusions. With video editing applications in mind, we address the problems of graphic element insertion and video volume segmentation, together with a number of quantitative comparisons on ground-truth data against state-of-the-art approaches.
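    The second stage described above filters the merged tracks in the spatio-temporal domain. The sketch below applies a simple sliding temporal median along each point trajectory to suppress isolated outlier positions, as a hedged stand-in for that filtering stage; the window size and the choice of a median filter are assumptions, not the paper's exact scheme.

        import numpy as np

        def filter_tracks_temporal_median(tracks, window=5):
            # tracks: array of shape (T, N, 2) holding the (x, y) position of N
            # tracked points over T frames. A sliding temporal median along each
            # trajectory suppresses isolated outlier positions (e.g. caused by a
            # brief occlusion) while leaving smooth motion essentially untouched.
            T = tracks.shape[0]
            half = window // 2
            filtered = np.empty_like(tracks)
            for t in range(T):
                lo, hi = max(0, t - half), min(T, t + half + 1)
                filtered[t] = np.median(tracks[lo:hi], axis=0)
            return filtered

        # Toy usage: a point moving right at 1 px/frame with one corrupted position.
        T = 20
        tracks = np.stack([np.arange(T, dtype=float), np.zeros(T)], axis=-1)[:, None, :]
        tracks[10, 0] = (50.0, 50.0)                   # spurious jump at frame 10
        smoothed = filter_tracks_temporal_median(tracks)
        print(smoothed[10, 0])                         # -> [11.  0.], the jump is suppressed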

    Scaling laws and vortex profiles in 2D decaying turbulence

    We use high-resolution numerical simulations over several hundred turnover times to study the influence of small-scale dissipation on vortex statistics in 2D decaying turbulence. A self-similar scaling regime is detected when the scaling laws are expressed in units of mean vorticity and integral scale, as predicted by Carnevale et al., and it is observed that viscous effects spoil this scaling regime. This scaling regime shows some trends toward that of the Kirchhoff model, for which a recent theory predicts a decay exponent ξ = 1. In terms of scaled variables, the vortices have a similar profile close to a Fermi-Dirac distribution. Comment: 4 LaTeX pages and 4 figures. Submitted to Phys. Rev. Lett.
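    For reference, the quantities referred to above can be written compactly as follows; the parameterization of the Fermi-Dirac-like profile (core vorticity ω_0, vortex radius r_0, edge width δ) and the convention of defining the decay exponent through the vortex number are generic choices in this literature, not the fitted forms from the paper.

        \omega(r) \;\approx\; \frac{\omega_0}{1 + \exp\!\left[(r - r_0)/\delta\right]},
        \qquad
        N(t) \propto t^{-\xi}.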

    Ethical and Clinical Aspects of Intensive Care Unit Admission in Patients with Hematological Malignancies: Guidelines of the Ethics Commission of the French Society of Hematology

    Admission of patients with hematological malignancies to the intensive care unit (ICU) raises recurrent ethical issues for both hematology and intensive care teams. The decision to transfer to the ICU has major consequences for end-of-life care for patients and their relatives. It also has organizational, human, and economic implications for the ICU and for global health policy. In light of recent advances in hematology and critical care medicine, a wide multidisciplinary debate has been conducted, resulting in guidelines approved by consensus by both disciplines. The main aspects developed were (i) clarification of the clinical situations that could lead to a transfer to the ICU, taking into account the severity criteria of both the hematological malignancy and the clinical distress, (ii) understanding of the decision-making process in a context of regular interdisciplinary consultation involving the patient and their relatives, and (iii) organization of a collegial consultation at the time of the initial decision to transfer to the ICU, and throughout and beyond the ICU stay. The aim of this work is to propose suggestions to strengthen the collaboration between the different teams involved, to facilitate the daily decision-making process, and to allow improvement of clinical practice.

    The ASTRO-H X-ray Observatory

    The joint JAXA/NASA ASTRO-H mission is the sixth in a series of highly successful X-ray missions initiated by the Institute of Space and Astronautical Science (ISAS). ASTRO-H will investigate the physics of the high-energy universe via a suite of four instruments covering a very wide energy range, from 0.3 keV to 600 keV. These instruments include a high-resolution, high-throughput spectrometer sensitive over 0.3-2 keV with high spectral resolution of Delta E < 7 eV, enabled by a micro-calorimeter array located in the focal plane of thin-foil X-ray optics; hard X-ray imaging spectrometers covering 5-80 keV, located in the focal plane of multilayer-coated, focusing hard X-ray mirrors; a wide-field imaging spectrometer sensitive over 0.4-12 keV, with an X-ray CCD camera in the focal plane of a soft X-ray telescope; and a non-focusing Compton-camera-type soft gamma-ray detector, sensitive in the 40-600 keV band. The simultaneous broad bandpass, coupled with high spectral resolution, will enable the pursuit of a wide variety of important science themes. Comment: 22 pages, 17 figures, Proceedings of the SPIE Astronomical Instrumentation "Space Telescopes and Instrumentation 2012: Ultraviolet to Gamma Ray".