14 research outputs found

    Trends in Sighting Systems for Combat Vehicles

    Search and tracking in dynamic conditions, rapid re-targeting, precision pointing, and long-range engagement by day and night are core requirements of the stabilised sighting systems used in combat vehicles. The complex battlefield demands an integrated fire control system with a stabilised sighting system as its main constituent: the sight enables a quick reaction from the fire control system and provides a vital edge in the battlefield scenario. Precision gimbal design, optics design, embedded engineering, control systems, electro-optical sensors, target detection and tracking, panorama generation, auto-alerting, digital image stabilisation, image fusion, and system integration are important aspects of sighting-system development. This paper presents design considerations for a state-of-the-art stabilised sighting system, including laboratory and field evaluation methods for such systems.
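
    One of the building blocks listed above, digital image stabilisation, can be illustrated with a short sketch. The snippet below is a generic illustration rather than anything from the paper: it estimates the inter-frame translation with OpenCV's phase correlation and warps the current frame to cancel it. The function name stabilise and the translation-only motion model are assumptions; a real sighting system would also handle rotation and higher-order motion.

        import cv2
        import numpy as np

        def stabilise(prev_gray: np.ndarray, curr_gray: np.ndarray) -> np.ndarray:
            """Shift curr_gray so it aligns with prev_gray (translation only)."""
            # phaseCorrelate expects single-channel floating-point input
            (dx, dy), _response = cv2.phaseCorrelate(
                prev_gray.astype(np.float32), curr_gray.astype(np.float32))
            h, w = curr_gray.shape
            # Translate by the negative estimated shift to cancel the motion
            M = np.float32([[1, 0, -dx], [0, 1, -dy]])
            return cv2.warpAffine(curr_gray, M, (w, h))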

    Designing objective quality metrics for panoramic videos based on human perception

    The creation of high-quality panoramic videos for immersive VR content is commonly done using a rig with multiple cameras covering the required scene. Unfortunately, this setup introduces both spatial and temporal artifacts due to the differences in optical centers as well as the imperfect synchronization of the cameras. Traditional image quality metrics are inadequate for describing the geometric distortions in panoramic videos. In this paper, we propose an objective quality assessment approach that quantifies these distortions using optical flow coupled with an existing salience detection model. Our approach is validated in a human-centered study using error annotations and eye tracking. Preliminary results indicate a good correlation between the errors detected by the algorithm and the errors perceived by humans.
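
    As a rough sketch of the kind of scoring the abstract describes (dense optical flow weighted by a salience map), the snippet below uses OpenCV's Farnebäck flow and a simple spectral-residual saliency map as a stand-in for the paper's salience model. All names, window sizes, and the deviation-based distortion proxy are assumptions, not the authors' implementation.

        import cv2
        import numpy as np

        def spectral_residual_saliency(gray: np.ndarray) -> np.ndarray:
            # Spectral-residual saliency (Hou & Zhang 2007), a stand-in for
            # the salience detection model used in the paper
            f = np.fft.fft2(gray.astype(np.float32))
            log_amp = np.log1p(np.abs(f)).astype(np.float32)
            residual = log_amp - cv2.blur(log_amp, (3, 3))
            sal = np.abs(np.fft.ifft2(np.exp(residual) * np.exp(1j * np.angle(f)))) ** 2
            sal = cv2.GaussianBlur(sal.astype(np.float32), (9, 9), 2.5)
            return sal / (sal.max() + 1e-8)

        def geometric_distortion_score(prev_gray, curr_gray) -> float:
            # Dense flow between consecutive frames (8-bit grayscale input);
            # locally irregular flow in salient regions is treated as a
            # likely geometric stitching artifact
            flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                                0.5, 3, 15, 3, 5, 1.2, 0)
            mag = np.linalg.norm(flow, axis=2)
            deviation = np.abs(mag - cv2.blur(mag, (15, 15)))
            return float((deviation * spectral_residual_saliency(curr_gray)).mean())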

    An objective quality metric for panoramic videos

    The creation of high-quality panoramic videos for immersive VR content is commonly done using a rig with multiple cameras covering the required scene. Unfortunately, this setup introduces both spatial and temporal artifacts due to the differences in optical centers as well as the imperfect synchronization between the cameras. Designing quality metrics to assess such videos is therefore becoming increasingly important. Traditional image quality metrics are not directly applicable because no reference image exists; in addition, they do not capture the geometric nature of the deformations. In this paper, we present a quality metric for panoramic video frames that works by computing pair-wise quality maps prior to blending and fusing them to obtain a global map of potential errors. Our metric is based on an existing one designed for novel-view-synthesized images, a problem similar to image stitching. Results show that this quality metric offers a practical way to assess panoramic video frames that lack a reference. They also confirm the similarity between the artifacts produced by novel view synthesis algorithms and those produced in the process of image and video stitching.
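
    The structure of the metric (pair-wise quality maps computed before blending, then fused into a single global error map) can be sketched as follows. The per-pair score here is a plain photometric difference over the overlap region, standing in for the novel-view-synthesis metric the paper adapts; all function names are illustrative.

        import numpy as np

        def pairwise_quality_map(warped_a, warped_b, valid_a, valid_b):
            """Error map for one pair of registered HxWx3 views in their overlap."""
            overlap = valid_a & valid_b
            err = np.zeros(warped_a.shape[:2], np.float32)
            # Placeholder per-pixel score; the paper adapts a metric designed
            # for novel view synthesis instead of a raw photometric difference
            diff = np.abs(warped_a.astype(np.float32) - warped_b.astype(np.float32))
            err[overlap] = diff.mean(axis=2)[overlap]
            return err, overlap

        def global_quality_map(warped, masks):
            """Fuse all pair-wise maps into one map of potential stitching errors."""
            h, w = warped[0].shape[:2]
            fused = np.zeros((h, w), np.float32)
            for i in range(len(warped)):
                for j in range(i + 1, len(warped)):
                    err, overlap = pairwise_quality_map(warped[i], warped[j],
                                                        masks[i], masks[j])
                    fused[overlap] = np.maximum(fused[overlap], err[overlap])
            return fused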

    Consistent Video Filtering for Camera Arrays

    Visual formats have advanced beyond single-view images and videos: 3D movies are commonplace, researchers have developed multi-view navigation systems, and VR is helping to push light-field cameras to the mass market. However, editing tools for these media are still nascent, and even simple filtering operations like color correction or stylization are problematic: naively applying image filters per frame or per view rarely produces satisfying results due to temporal and spatial inconsistencies. Our method preserves and stabilizes filter effects while remaining agnostic to the inner workings of the filter. It captures filter effects in the gradient domain, then uses the input frame gradients as a reference to impose temporal and spatial consistency. Our least-squares formulation adds minimal overhead compared to naive data processing. Further, when the filter cost is high, we introduce a filter-transfer strategy that reduces the number of per-frame filtering computations by an order of magnitude, with only a small reduction in visual quality. We demonstrate our algorithm on several camera-array formats, including stereo videos, light fields, and wide-baseline arrays.
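
    A toy version of the consistency idea can be written in a few lines: keep each output frame close to the filtered frame while forcing its temporal differences to follow those of the input. The sequential, per-frame closed-form solve below is a simplification of the paper's global least-squares formulation, and the weight lam is an arbitrary assumption.

        import numpy as np

        def consistent_filtering(inputs, filtered, lam=4.0):
            """Minimise, frame by frame,
                ||O_t - F_t||^2 + lam * ||(O_t - O_{t-1}) - (I_t - I_{t-1})||^2,
            i.e. stay close to the filter output F while reproducing the input
            temporal gradients, as the abstract describes. Sequential closed-form
            solve; the paper solves a global least-squares problem instead."""
            out = [np.asarray(filtered[0], np.float32)]
            for t in range(1, len(inputs)):
                # Where the previous output says frame t should land, according
                # to the input's temporal gradient
                target = out[-1] + (np.asarray(inputs[t], np.float32)
                                    - np.asarray(inputs[t - 1], np.float32))
                out.append((np.asarray(filtered[t], np.float32) + lam * target)
                           / (1.0 + lam))
            return out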

    MatryODShka: Real-time 6DoF Video View Synthesis using Multi-Sphere Images

    We introduce a method to convert stereo 360° (omnidirectional stereo) imagery into a layered, multi-sphere image representation for six-degree-of-freedom (6DoF) rendering. Stereo 360° imagery can be captured from multi-camera systems for virtual reality (VR), but lacks motion parallax and correct-in-all-directions disparity cues. Together, these shortcomings can quickly lead to VR sickness when viewing content. One solution is to generate a format suitable for 6DoF rendering, such as by estimating depth, but this raises the question of how to handle disoccluded regions in dynamic scenes. Our approach is to simultaneously learn depth and disocclusions via a multi-sphere image representation, which can be rendered with correct 6DoF disparity and motion parallax in VR. This significantly improves comfort for the viewer, and can be inferred and rendered in real time on modern GPU hardware. Together, these properties move VR video towards being a more comfortable immersive medium. (25 pages, 13 figures; published at the European Conference on Computer Vision, ECCV 2020; project page: http://visual.cs.brown.edu/matryodshk)
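
    The rendering side of a multi-sphere image can be sketched compactly: each concentric sphere carries an RGBA layer, and a novel view is formed by compositing the layers back to front with the standard "over" operator. The snippet below assumes the layers have already been resampled into the target view (the ray-sphere intersection and reprojection step is omitted), which is a simplification of the real pipeline.

        import numpy as np

        def composite_msi(layers_rgba):
            """Back-to-front 'over' compositing of multi-sphere image layers.

            layers_rgba: list of HxWx4 float arrays, ordered far to near,
            already resampled into the novel view.
            """
            h, w, _ = layers_rgba[0].shape
            rgb = np.zeros((h, w, 3), np.float32)
            for layer in layers_rgba:  # far -> near
                alpha = layer[..., 3:4]
                # Standard alpha-blended 'over' operator
                rgb = layer[..., :3] * alpha + rgb * (1.0 - alpha)
            return rgb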

    Content-preserving image stitching with piecewise rectangular boundary constraints

    This paper proposes an approach to content-preserving image stitching with regular boundary constraints, which aims to stitch multiple images into a panoramic image with a piecewise rectangular boundary. Existing methods treat image stitching and rectangling as two separate steps, which may yield suboptimal results because the stitching step is unaware of the subsequent warping needed for rectangling. We address this limitation by formulating image stitching with regular boundaries as a unified optimization. Starting from the initial stitching result produced by traditional warping-based optimization, we obtain the irregular boundary from the warped meshes via polygon Boolean operations, which robustly handle arbitrary mesh compositions. By analyzing this irregular boundary, we construct a piecewise rectangular boundary. Based on this, we further incorporate line and regular-boundary preservation constraints into the image stitching framework and iterate the optimization to obtain an optimal piecewise rectangular boundary. The boundary of the stitching result is thus made as close as possible to a rectangle while unwanted distortions are reduced. We further extend our method to video stitching by integrating temporal coherence into the optimization. Experiments show that our method efficiently produces visually pleasing panoramas with regular boundaries and unnoticeable distortions.
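
    The boundary-extraction step described above (polygon Boolean operations on the warped meshes) can be sketched with an off-the-shelf geometry library. Here shapely's unary_union stands in for the paper's robust mesh composition, and multiple disconnected components are reduced to the largest one; this is purely illustrative, not the authors' code.

        from shapely.geometry import Polygon
        from shapely.ops import unary_union

        def stitched_boundary(warped_quads):
            """Union of warped mesh quads -> exterior boundary of the panorama.

            warped_quads: iterable of 4-point (x, y) tuples from the warped
            meshes. Returns the exterior ring of the (possibly irregular)
            stitched region, the input to the piecewise-rectangular analysis.
            """
            region = unary_union([Polygon(q) for q in warped_quads])
            # buffer(0) repairs small self-intersections from mesh warping
            region = region.buffer(0)
            if region.geom_type == "MultiPolygon":
                # Keep the main connected component (a simplification)
                region = max(region.geoms, key=lambda p: p.area)
            return list(region.exterior.coords)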