
    Video anatomy: spatial-temporal video profile

    A massive number of videos are uploaded to video websites, creating demand for smooth video browsing, editing, retrieval, and summarization. Most videos employ several types of camera operations to expand the field of view, emphasize events, and express cinematic effects. To digest the heterogeneous videos found on video websites and in databases, video clips are profiled into a 2D image scroll containing both spatial and temporal information for video preview. The video profile is visually continuous, compact, scalable, and indexed to each frame. This work analyzes camera kinematics, including zoom, translation, and rotation, and categorizes camera actions as combinations of these. An automatic video summarization framework is proposed and developed. After conventional video clip segmentation and further segmentation by smooth camera operation, the global flow field under all camera actions is investigated for profiling various types of video. A new algorithm extracts the major flow direction and convergence factor using condensed images. The work then proposes a uniform scheme to segment video clips and sections, sample the video volume across the major flow, and compute the flow convergence factor, in order to obtain an intrinsic scene space less influenced by camera ego-motion. A motion blur technique renders dynamic targets in the profile. The resulting video profile can be displayed in a video track to guide access to video frames, help video editing, and facilitate applications such as surveillance, visual archiving of environments, video retrieval, and online video preview.
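    As a rough illustration of the global-flow analysis described above (not the authors' implementation), the sketch below estimates a dominant flow direction and a simple convergence proxy between two frames using OpenCV's Farneback dense optical flow; the function name and all parameter values are illustrative.

```python
# Illustrative sketch: dominant flow direction and a convergence proxy
# between two frames, via OpenCV's Farneback dense optical flow.
import cv2
import numpy as np

def major_flow_direction(frame_a, frame_b):
    """Return (dominant flow angle in radians, mean magnitude, divergence)."""
    gray_a = cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(
        gray_a, gray_b, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    dx, dy = flow[..., 0].mean(), flow[..., 1].mean()
    angle = np.arctan2(dy, dx)    # dominant translation direction
    magnitude = np.hypot(dx, dy)  # strength of the global motion
    # Convergence proxy: mean divergence of the flow field is positive for
    # expanding fields (zoom-in-like) and negative for contracting ones.
    div = (np.gradient(flow[..., 0], axis=1)
           + np.gradient(flow[..., 1], axis=0))
    return angle, magnitude, div.mean()
```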

    Deployment characterization of a floatable tidal energy converter on a tidal channel, Ria Formosa, Portugal

    This paper presents the results of a pilot experiment with an existing tidal energy converter (TEC), the Evopod 1 kW floatable prototype, in a real test-case scenario (Faro Channel, Ria Formosa, Portugal). A baseline marine geophysical, hydrodynamic, and ecological study based on the experience collected at the test site is presented. The collected data were used to validate a hydro-morphodynamic model, allowing selection of the installation area based on both operational and environmental constraints. Operational results describing power generation capacity, energy capture area, and proportion of energy flux are presented and discussed, including the failures that occurred during the experimental setup. The data are now available to the scientific community and to TEC industry developers, enhancing operational knowledge of TEC technology concerning efficiency, environmental effects, and device/environment interactions. The results can be used by developers in the licensing process, in overcoming commercial deployment barriers, in offering extra assurance and confidence to investors, who have traditionally seen environmental concerns as a barrier, and in providing the foundations whereupon similar deployment areas can be considered around the world for marine tidal energy extraction.

    Acknowledgements: The paper is a contribution to the SCORE project, funded by the Portuguese Foundation for Science and Technology (FCT, PTDC/AAG-TEC/1710/2014). Andre Pacheco was supported by the Portuguese Foundation for Science and Technology under the Portuguese Researchers' Programme 2014, "Exploring new concepts for extracting energy from tides" (IF/00286/2014/CP1234). Eduardo G. Gorbeña received funding for the OpTiCA project from the Marie Skłodowska-Curie Actions of the European Union's H2020-MSCA-IF-EF-RI-2016 under REA grant agreement no. 748747. The authors thank the Portuguese Maritime Authorities and Sofareia SA for their help with the deployment.
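    For context on the operational quantities mentioned above (energy capture area, proportion of energy flux), the sketch below applies the standard kinetic power-flux formula P = ½ρAv³; all numbers are illustrative placeholders, not SCORE project measurements.

```python
# Illustrative numbers only, not SCORE project data.
RHO_SEAWATER = 1025.0  # kg/m^3, typical seawater density

def available_power(capture_area_m2, flow_speed_ms):
    """Kinetic power flux through the capture area: P = 0.5 * rho * A * v^3."""
    return 0.5 * RHO_SEAWATER * capture_area_m2 * flow_speed_ms ** 3

# Hypothetical example: a 1.5 m^2 capture area in a 1.2 m/s tidal current.
p_avail = available_power(1.5, 1.2)  # ~1.3 kW available in the flow
p_out = 0.25 * p_avail               # assumed 25% water-to-wire efficiency
print(f"available: {p_avail:.0f} W, extracted: {p_out:.0f} W")
```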

    Video alignment to a common reference

    Handheld videos often include unintentional motion (jitter) and intentional motion (pan and/or zoom). Human viewers prefer to see jitter removed, creating a smoothly moving camera. For video analysis, in contrast, aligning to a fixed stable background is sometimes preferable. This paper presents an algorithm that removes both forms of motion using a novel and efficient way of tracking background points while ignoring moving foreground points. The approach is related to image mosaicing, but the result is a video rather than an enlarged still image. It is also related to multiple-object tracking approaches, but simpler, since moving objects need not be explicitly tracked. The algorithm takes a video as input and returns one or several stabilized videos. Videos are broken into parts when the algorithm detects a background change and it becomes necessary to fix upon a new background. We present two techniques in this thesis: one stabilizes the video with respect to the first available frame; the other stabilizes it with respect to a best frame. Our approach assumes the person holding the camera is standing in one place and that objects in motion do not dominate the image. Our algorithm performs better than previously published approaches when compared on 1,401 handheld videos from the recently released Point-and-Shoot Face Recognition Challenge (PaSC).
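    A minimal sketch of the "stabilize to the first available frame" idea, assuming OpenCV feature tracking with RANSAC to suppress moving foreground points; this is an illustrative stand-in, not the thesis' actual implementation.

```python
# Illustrative sketch: warp every frame into the coordinate system of the
# first frame; RANSAC keeps background points, rejecting moving foreground.
import cv2
import numpy as np

def stabilize_to_first(frames):
    """Yield frames aligned to frames[0] (hypothetical helper)."""
    ref = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
    pts_ref = cv2.goodFeaturesToTrack(ref, maxCorners=400,
                                      qualityLevel=0.01, minDistance=8)
    yield frames[0]
    for frame in frames[1:]:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        pts, status, _err = cv2.calcOpticalFlowPyrLK(ref, gray, pts_ref, None)
        good = status.ravel() == 1
        # Homography maps current-frame points onto reference-frame points;
        # RANSAC discards foreground points that move inconsistently.
        H, _mask = cv2.findHomography(pts[good], pts_ref[good],
                                      cv2.RANSAC, 3.0)
        yield cv2.warpPerspective(frame, H, (frame.shape[1], frame.shape[0]))
```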

    CAMHID: Camera motion histogram descriptor and its application to cinematographic shot classification

    In this paper, we propose a nonparametric camera motion descriptor for video shot classification. In the proposed method, a motion vector field (MVF) is constructed for each pair of consecutive video frames by computing the motion vector (MV) of each macroblock. The MVFs are divided into a number of local regions of equal size, and the inconsistent/noisy MVs of each local region are eliminated by a motion consistency analysis. The remaining MVs of each local region, collected over a number of consecutive frames, form a compact representation: a matrix is assembled from the MVs and decomposed using singular value decomposition to capture the dominant motion. Finally, the angle of the most variance-retaining principal component is computed and quantized to represent the motion of a local region in a histogram. The local histograms are combined to represent the global camera motion. The effectiveness of the proposed motion descriptor for video shot classification is tested using a support vector machine. First, the proposed camera motion descriptors are computed on a video data set consisting of regular camera motion patterns (e.g., pan, zoom, tilt, static). Then, we apply the camera motion descriptors, with an extended set of features, to the classification of cinematographic shots. The experimental results show that the proposed shot-level camera motion descriptor has a strong discriminative capability to classify different camera motion patterns of different videos effectively. We also show that our approach outperforms state-of-the-art methods.
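    The sketch below gives one simplified reading of the per-region computation: stack a region's motion vectors into a matrix, filter inconsistent vectors, take the SVD, and quantize the angle of the dominant direction into a histogram bin. It is not the authors' reference code; the consistency test and binning details are assumptions.

```python
# Illustrative per-region computation; the consistency test and binning are
# assumed forms, not the paper's exact procedure.
import numpy as np

def region_histogram(mvs, n_bins=8):
    """mvs: (N, 2) motion vectors of one local region over several frames."""
    # Motion consistency analysis (assumed form): drop vectors far from the
    # region's median motion vector.
    dist = np.linalg.norm(mvs - np.median(mvs, axis=0), axis=1)
    mvs = mvs[dist < 2.0 * (np.median(dist) + 1e-9)]
    # SVD of the (centered) MV matrix: the first right singular vector is
    # the most variance-retaining principal direction of the region's motion.
    _, _, vt = np.linalg.svd(mvs - mvs.mean(axis=0), full_matrices=False)
    angle = np.arctan2(vt[0, 1], vt[0, 0]) % np.pi  # direction, sign-agnostic
    hist = np.zeros(n_bins)
    hist[int(angle / np.pi * n_bins) % n_bins] = 1.0
    return hist
```

    The global descriptor then concatenates the per-region histograms, and an SVM classifies the resulting vectors into motion patterns such as pan, zoom, tilt, and static.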

    Spherical Image Processing for Immersive Visualisation and View Generation

    This research presents the study of processing panoramic spherical images for immersive visualisation of real environments and for generating in-between views from two acquired views. For visualisation based on one spherical image, the surrounding environment is modelled as a unit sphere mapped with the spherical image, and the user is then allowed to navigate within the modelled scene. For visualisation based on two spherical images, a view generation algorithm is developed for modelling an indoor man-made environment, and new views can be generated at an arbitrary position with respect to the existing two. This allows the scene to be modelled using multiple spherical images and the user to move smoothly from one sphere-mapped image to another by passing through generated in-between sphere-mapped images.
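    A minimal sketch of the basic sphere-mapping operation underlying this visualisation: converting an equirectangular panorama pixel to a unit-sphere direction. The function name and coordinate conventions are illustrative assumptions, not the thesis' code.

```python
# Illustrative mapping from an equirectangular panorama pixel to a point on
# the unit sphere; coordinate conventions are assumed.
import numpy as np

def equirect_to_sphere(u, v, width, height):
    """Map pixel (u, v) of a width x height panorama to a unit direction."""
    lon = (u / width) * 2.0 * np.pi - np.pi   # longitude in [-pi, pi)
    lat = np.pi / 2.0 - (v / height) * np.pi  # latitude in [-pi/2, pi/2]
    return np.array([np.cos(lat) * np.cos(lon),   # x
                     np.cos(lat) * np.sin(lon),   # y
                     np.sin(lat)])                # z
```

    Rendering a perspective view then amounts to casting a ray per output pixel and sampling the panorama at the inverse mapping, which is what lets the user look around inside the sphere-mapped scene.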