
    Pseudo-Dolly-In Video Generation Combining 3D Modeling and Image Reconstruction

    This paper proposes a pseudo-dolly-in video generation method that reproduces motion parallax by applying image reconstruction to multi-view videos. Because dolly-in video is captured by moving a camera forward, it reproduces motion parallax and conveys a sense of immersion. At a sporting event in a large-scale space, however, moving a camera is difficult. Our method instead generates dolly-in video from multi-view images captured by fixed cameras. Dolly-in video can be generated with Image-Based Modeling, but its quality is often degraded by 3D estimation errors. Bullet-Time, in contrast, provides high-quality video observation, but its virtual viewpoint cannot easily leave the capturing positions. To solve these problems, we propose a method that generates a pseudo-dolly-in image by incorporating 3D estimation and image reconstruction into Bullet-Time, and we show its effectiveness on multi-view videos captured at an actual soccer stadium. In the experiments, we compared the proposed method with digital zoom images and with dolly-in video generated by an Image-Based Modeling and Rendering method.
    Published in: 2017 IEEE International Symposium on Mixed and Augmented Reality (ISMAR-Adjunct). Date of Conference: 9-13 Oct. 2017. Conference Location: Nantes, France.
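    The qualitative difference from digital zoom is motion parallax, which is easy to illustrate. The following is a minimal NumPy sketch of that effect, not the paper's pipeline: assuming a pinhole camera and a hypothetical per-pixel depth map (the paper estimates 3D structure from the multi-view setup instead), moving the virtual camera forward by dz rescales each pixel about the principal point by Z/(Z - dz), so near content enlarges faster than far content.

```python
import numpy as np

def pseudo_dolly_in(image, depth, dz):
    """Forward-warp an image for a virtual forward camera move of dz.

    Pinhole model: a point at depth Z projects with magnification
    proportional to 1/Z, so moving the camera forward by dz rescales
    its pixel about the principal point by Z / (Z - dz). Near pixels
    move more than far ones, which is the motion parallax a uniform
    digital zoom cannot reproduce. Holes left by the forward warp are
    not filled here; a reconstruction step would address that.
    """
    h, w = depth.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    out = np.zeros_like(image)

    ys, xs = np.mgrid[0:h, 0:w]
    z_new = depth - dz
    valid = z_new > 1e-3                     # still in front of the camera
    scale = depth[valid] / z_new[valid]      # per-pixel parallax magnification

    xn = np.round(cx + (xs[valid] - cx) * scale).astype(int)
    yn = np.round(cy + (ys[valid] - cy) * scale).astype(int)
    inside = (xn >= 0) & (xn < w) & (yn >= 0) & (yn < h)

    # Paint far pixels first so nearer ones overwrite them (cheap z-buffer).
    order = np.argsort(-depth[valid][inside])
    out[yn[inside][order], xn[inside][order]] = image[valid][inside][order]
    return out
```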

    An Iterative Co-Saliency Framework for RGBD Images

    As a newly emerging and significant topic in the computer vision community, co-saliency detection aims at discovering the common salient objects in multiple related images. Existing methods often generate the co-saliency map through a direct forward pipeline based on designed cues or an initialization, but lack a refinement-cycle scheme. Moreover, they mainly focus on RGB images and ignore the depth information available in RGBD images. In this paper, we propose an iterative RGBD co-saliency framework that uses existing single-image saliency maps as the initialization and generates the final RGBD co-saliency map with a refinement-cycle model. Three schemes are employed in the framework: an addition scheme, a deletion scheme, and an iteration scheme. The addition scheme highlights salient regions through intra-image depth propagation and saliency propagation, while the deletion scheme filters the salient regions and removes non-common salient regions using an inter-image constraint. The iteration scheme yields a more homogeneous and consistent co-saliency map. Furthermore, a novel descriptor, named depth shape prior, is introduced in the addition scheme to bring depth information into the identification of co-salient objects. The proposed method can exploit any existing 2D saliency model to work well in RGBD co-saliency scenarios. Experiments on two RGBD co-saliency datasets demonstrate the effectiveness of the proposed framework.
    Comment: 13 pages, 13 figures. Accepted by IEEE Transactions on Cybernetics, 2017. Project URL: https://rmcong.github.io/proj_RGBD_cosal_tcyb.htm
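    As a rough illustration of the refinement-cycle idea, and not the paper's model, the sketch below assumes per-image saliency maps and per-pixel feature maps (e.g., color channels stacked with depth) as inputs; the thresholds and the prototype-matching rule are hypothetical stand-ins for the paper's propagation schemes.

```python
import numpy as np

def refine_cosaliency(saliency_maps, features, iters=5, tau=0.5):
    """Toy refinement cycle for RGBD co-saliency.

    saliency_maps: list of HxW arrays in [0, 1] from any single-image
        saliency model (the initialization the abstract describes).
    features: list of HxWxC arrays, e.g. color channels stacked with
        depth, so the depth cue participates in the matching.

    Each iteration builds a cross-image foreground prototype, then an
    addition-like step boosts pixels matching it and a deletion-like
    step suppresses pixels that do not (the inter-image constraint).
    """
    maps = [m.astype(float).copy() for m in saliency_maps]
    for _ in range(iters):
        fg = [f[m > tau] for m, f in zip(maps, features) if (m > tau).any()]
        if not fg:
            break
        proto = np.concatenate(fg).mean(axis=0)   # consensus foreground

        new_maps = []
        for m, f in zip(maps, features):
            d = np.linalg.norm(f - proto, axis=-1)
            sim = np.exp(-d / (d.mean() + 1e-6))  # similarity to consensus
            m = np.maximum(m, sim * (sim > tau))  # addition: highlight common
            m = m * (sim > 0.5 * tau)             # deletion: drop non-common
            new_maps.append(np.clip(m, 0.0, 1.0))
        maps = new_maps
    return maps
```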

    Robust Stereoscopic Crosstalk Prediction

    We propose a new metric that predicts perceived crosstalk from the original images alone, rather than from both the original and ghosted images. The proposed metrics are based on color information. First, we extract a disparity map, a color difference map, and a color contrast map from the original image pairs. We then use these maps to construct two new metrics, Vdispc and Vdlogc. Vdispc captures the effect of the disparity map and the color difference map, while Vdlogc addresses the influence of the color contrast map. Prediction performance is evaluated on various types of stereoscopic crosstalk images. By combining Vdispc and Vdlogc, we propose the metric Vpdlc, which achieves a higher correlation with perceived subjective crosstalk scores. Experimental results show that the new metrics outperform previous methods, which indicates that color information is a key factor in visible crosstalk prediction. Furthermore, we construct a new data set to evaluate the new metrics.
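    The abstract names the ingredients but not the formulas, so the sketch below shows only the plumbing under stated assumptions: a disparity-warped color-difference map and a gradient-based color-contrast map are computed from the original pair alone, and the three returned values are hypothetical stand-ins for Vdispc, Vdlogc, and Vpdlc.

```python
import numpy as np

def crosstalk_maps(left, right, disparity):
    """Build the cue maps the abstract names from an original stereo
    pair only (no ghosted image needed) and fuse them into one score.
    The combination rules here are illustrative placeholders, not the
    paper's Vdispc/Vdlogc/Vpdlc definitions."""
    h, w, _ = left.shape
    xs = np.arange(w)[None, :].repeat(h, axis=0)

    # Warp the right view toward the left using the disparity map.
    xr = np.clip(xs - np.round(disparity).astype(int), 0, w - 1)
    right_w = right[np.arange(h)[:, None], xr]

    # Color difference map between corresponding pixels.
    color_diff = np.abs(left.astype(float) - right_w.astype(float)).mean(-1)

    # Color contrast map: gradient magnitude of the left view.
    gray = left.astype(float).mean(-1)
    gy, gx = np.gradient(gray)
    contrast = np.hypot(gx, gy)

    # Hypothetical stand-ins for the paper's metrics.
    v_dispc = (np.abs(disparity) * color_diff).mean()
    v_dlogc = (color_diff / (1.0 + np.log1p(contrast))).mean()
    v_pdlc = 0.5 * v_dispc + 0.5 * v_dlogc   # illustrative weighted fusion
    return v_dispc, v_dlogc, v_pdlc
```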

    Video Saliency Detection Using Object Proposals

    In this paper, we introduce a novel approach to identifying salient object regions in videos via object proposals. The core idea is to solve the saliency detection problem by ranking and selecting salient proposals based on object-level saliency cues. Object proposals offer a more complete, high-level representation that naturally suits salient object detection. Besides introducing this solution for video salient object detection, we reorganize various discriminative saliency cues and traditional saliency assumptions on object proposals. Given the object candidates, a proposal ranking and voting scheme based on object-level saliency cues is designed to screen out non-salient parts, select salient object regions, and infer an initial saliency estimate. A saliency optimization process that considers temporal consistency and appearance differences between salient and non-salient regions then refines the initial estimate. Experiments on public datasets (SegTrackV2, the Freiburg-Berkeley Motion Segmentation Dataset, and Densely Annotated Video Segmentation) validate the effectiveness of the method, which yields significant improvements over state-of-the-art algorithms.
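    A minimal sketch of the ranking-and-voting step, assuming proposals arrive as (box, score) pairs whose scores already aggregate the object-level cues; computing those cues and the subsequent temporal-consistency optimization are the paper's substance and are not reproduced here.

```python
import numpy as np

def proposal_voting_saliency(shape, proposals, top_k=20):
    """Rank object proposals by their combined cue score and let the
    top-ranked ones vote into a pixel-level saliency map.

    shape: (H, W) of the frame.
    proposals: list of ((x0, y0, x1, y1), score) pairs; the score is
        assumed to already fuse the object-level saliency cues.
    """
    h, w = shape
    sal = np.zeros((h, w), dtype=float)

    # Keep the k highest-scoring proposals (the selection step).
    ranked = sorted(proposals, key=lambda p: p[1], reverse=True)[:top_k]
    for (x0, y0, x1, y1), score in ranked:
        sal[y0:y1, x0:x1] += score      # each proposal casts a weighted vote

    if sal.max() > 0:
        sal /= sal.max()                # normalize to [0, 1]
    return sal

# Hypothetical usage with two toy proposals on a 240x320 frame.
initial = proposal_voting_saliency(
    (240, 320), [((60, 40, 180, 200), 0.9), ((70, 50, 170, 190), 0.8)])
```

    Thresholding or smoothing this initial map would correspond to the refinement stage the abstract describes.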

    A deep evaluator for image retargeting quality by geometrical and contextual interaction

    An image is compressed or stretched when displayed across different devices, which strongly affects perceived quality. To address this, a variety of image retargeting methods have been proposed. However, evaluating the results of different retargeting methods remains a critical issue. Subjective evaluation cannot be applied at scale in practical systems, so we cast the problem as accurate objective quality evaluation. Most current image retargeting quality assessment algorithms use simple regression as the final step to obtain the evaluation result, which does not correspond to how perception is simulated in the human visual system (HVS). In this paper, we propose a deep quality evaluator for image retargeting based on a segmented stacked autoencoder (SAE). With the help of regularization, the deep learning framework avoids overfitting. The main contribution of the framework is to simulate the perception of retargeted images in the HVS: it trains two separate SAE models, one based on geometrical shape and one on content matching, and a weighting scheme then combines the scores obtained from the two models. Experimental results on three well-known databases show that our method outperforms traditional methods in evaluating image retargeting results.
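    To make the SAE component concrete, here is a minimal NumPy sketch of greedy layer-wise autoencoder stacking with L2 regularization, the mechanism the abstract credits with controlling overfitting. The tied weights, layer sizes, and learning rate are assumptions; the paper's two branches (geometrical shape and content matching) and its weighting scheme are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_ae_layer(X, hidden, epochs=200, lr=0.01, l2=1e-4):
    """Train one tied-weight autoencoder layer by gradient descent with
    an L2 penalty on the weights. Returns the encoder parameters and
    the encoded features so layers can be stacked greedily."""
    n, d = X.shape
    W = rng.normal(0, 0.1, (d, hidden))
    b = np.zeros(hidden)
    c = np.zeros(d)
    for _ in range(epochs):
        H = np.tanh(X @ W + b)            # encode
        R = H @ W.T + c                   # decode with tied weights
        E = R - X                         # reconstruction error
        dH = (E @ W) * (1 - H ** 2)       # backprop through tanh
        gW = X.T @ dH + E.T @ H + l2 * W  # tied W appears in both paths
        W -= lr * gW / n
        b -= lr * dH.sum(0) / n
        c -= lr * E.sum(0) / n
    return W, b, np.tanh(X @ W + b)

def stack_sae(X, sizes=(64, 32)):
    """Greedy layer-wise pretraining: each layer is trained on the
    previous layer's codes, as in a standard stacked autoencoder."""
    layers, H = [], X
    for h in sizes:
        W, b, H = train_ae_layer(H, h)
        layers.append((W, b))
    return layers, H
```

    In the paper's setting, two such stacks would each produce a quality score, which the weighting scheme (its form is not given in the abstract) fuses into the final evaluation.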

    Efficient Depth-aware Image Deformation Adaptation for Curved Screen Displays


    Real-time superpixel segmentation by DBSCAN clustering algorithm

    In this paper, we propose a real-time (50 frames/s) image superpixel segmentation method based on the density-based spatial clustering of applications with noise (DBSCAN) algorithm. To reduce the computational cost of superpixel algorithms, we adopt a fast two-step framework. In the first, clustering stage, the DBSCAN algorithm with color-similarity and geometric restrictions rapidly clusters the pixels; in the second, merging stage, small clusters are merged into superpixels through their neighborhoods using a distance measure defined on color and spatial features. A robust and simple distance function is defined to obtain better superpixels in these two steps. Experimental results demonstrate that our real-time DBSCAN-based superpixel algorithm outperforms state-of-the-art superpixel segmentation methods in both accuracy and efficiency.
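    The two-step framework can be caricatured as follows; this is a simplified sketch, not the authors' algorithm. Step 1 grows clusters over the 4-connected pixel lattice with a DBSCAN-style color-distance test against the cluster seed, and step 2 merges undersized clusters into the most similar adjacent cluster; the seed-based, color-only distance is a simplification of the paper's joint color-and-spatial measure.

```python
import numpy as np
from collections import deque

def dbscan_superpixels(lab, eps=10.0, min_size=64):
    """Simplified two-step superpixel segmentation on an HxWx3 image
    (ideally CIELAB, where Euclidean color distance is more perceptual).
    Returns an HxW integer label map."""
    lab = lab.astype(float)
    h, w, _ = lab.shape
    labels = -np.ones((h, w), dtype=int)
    nxt = 0
    for sy in range(h):
        for sx in range(w):
            if labels[sy, sx] >= 0:
                continue
            seed = lab[sy, sx]
            labels[sy, sx] = nxt
            q = deque([(sy, sx)])
            while q:                              # step 1: cluster growth
                y, x = q.popleft()
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if (0 <= ny < h and 0 <= nx < w and labels[ny, nx] < 0
                            and np.linalg.norm(lab[ny, nx] - seed) < eps):
                        labels[ny, nx] = nxt
                        q.append((ny, nx))
            nxt += 1

    # Step 2: merge small clusters into their most similar neighbor.
    sizes = np.bincount(labels.ravel(), minlength=nxt)
    means = np.array([lab[labels == k].mean(axis=0) for k in range(nxt)])
    for k in range(nxt):
        if sizes[k] >= min_size:
            continue
        ys, xs = np.nonzero(labels == k)
        best, best_d = -1, np.inf
        for y, x in zip(ys, xs):                  # scan cluster boundary
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if 0 <= ny < h and 0 <= nx < w and labels[ny, nx] != k:
                    j = labels[ny, nx]
                    d = np.linalg.norm(means[j] - means[k])
                    if d < best_d:
                        best, best_d = j, d
        if best >= 0:
            labels[labels == k] = best
    return labels
```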