
    Generating a full spherical view by modeling the relation between two fisheye images

    Full spherical views provide advantages in many applications that use visual information. Dual back-to-back fisheye cameras are receiving much attention as a way to obtain this type of view. However, obtaining a high-quality full spherical view is very challenging. In this paper, we propose a correction step that models the relation between the pixels of the pair of fisheye images in polar coordinates. This correction is applied during the mapping from the unit sphere to the fisheye image using the equidistant fisheye projection. The objective is that, after the correction, the projections of the same point in the pair of images have the same position on the unit sphere. In this way, they will also have the same position in the equirectangular coordinate system, so the discontinuity between the spherical views to be blended is minimized. Throughout the manuscript, we show that the angular polar coordinates of the same scene point in the two fisheye images are related by a sine function and the radial distance coordinates by a linear function. We also propose employing a polynomial as the geometric transformation between the pair of spherical views during image alignment, since the relationship between matching points of the two spherical views is not linear, especially in the top/bottom regions. Quantitative evaluations demonstrate that the correction step improves the quality of the full spherical view, measured with MS-SSIM, by up to 7%. Similarly, using a polynomial improves MS-SSIM by up to 6.29% with respect to using an affine matrix.
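    As a rough illustration of the correction model described above, the sketch below applies a sine correction to the angular polar coordinate and a linear correction to the radial one. The parameters A, PHI, S, and B are hypothetical placeholders; in practice they would be fitted (e.g., by least squares) to matched points between the two fisheye images.

```python
import numpy as np

# Hypothetical model parameters; in practice these would be fitted
# (e.g. by least squares) to matched points between the two images.
A, PHI = 0.02, 0.5   # amplitude and phase of the sine correction (rad)
S, B = 1.01, -0.8    # slope and offset of the linear radial correction (px)

def correct_polar(theta, r):
    """Predict where a point at polar coordinates (theta, r) in fisheye
    image 1 appears in fisheye image 2: a sine relation for the angle
    and a linear relation for the radial distance, as in the paper."""
    theta2 = theta + A * np.sin(theta + PHI)
    r2 = S * r + B
    return theta2, r2

# Example: a point at 30 degrees and radius 400 px in image 1.
theta2, r2 = correct_polar(np.deg2rad(30.0), 400.0)
print(np.rad2deg(theta2), r2)
```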

    Blind Omnidirectional Image Quality Assessment with Viewport Oriented Graph Convolutional Networks

    Quality assessment of omnidirectional images has become increasingly urgent due to the rapid growth of virtual reality applications. Unlike traditional 2D images and videos, omnidirectional content provides consumers with freely changeable viewports and a larger field of view covering the 360°×180° spherical surface, which makes the objective quality assessment of omnidirectional images more challenging. In this paper, motivated by the characteristics of the human visual system (HVS) and the viewing process of omnidirectional content, we propose a novel Viewport oriented Graph Convolution Network (VGCN) for blind omnidirectional image quality assessment (IQA). Generally, observers give a subjective rating of a 360-degree image after traversing and aggregating information from different viewports while browsing the spherical scenery. Therefore, in order to model the mutual dependency of viewports in the omnidirectional image, we build a spatial viewport graph. Specifically, the graph nodes are first defined as the selected viewports with higher probabilities of being seen, inspired by the fact that the HVS is more sensitive to structural information. These nodes are then connected by spatial relations to capture the interactions among them. Finally, reasoning on the proposed graph is performed via graph convolutional networks. Moreover, we simultaneously obtain a global quality estimate from the entire omnidirectional image, without viewport sampling, to boost performance in accordance with the viewing experience. Experimental results demonstrate that our proposed model outperforms state-of-the-art full-reference and no-reference IQA metrics on two public omnidirectional IQA databases.
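    The abstract does not spell out the exact VGCN architecture, but the core operation of reasoning over a viewport graph can be sketched with the standard Kipf–Welling propagation rule. The adjacency matrix, feature sizes, and weights below are toy assumptions, not the paper's configuration.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution step, H' = ReLU(D^-1/2 (A+I) D^-1/2 H W),
    the standard Kipf-Welling propagation rule."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))    # symmetric normalization
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

# Toy viewport graph: 5 sampled viewports, edges between spatial neighbours.
rng = np.random.default_rng(0)
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0],
              [1, 1, 0, 1, 1],
              [0, 1, 1, 0, 1],
              [0, 0, 1, 1, 0]], dtype=float)
H = rng.standard_normal((5, 16))   # per-viewport features (e.g. CNN outputs)
W = rng.standard_normal((16, 8))   # learnable weights
H1 = gcn_layer(A, H, W)            # aggregated viewport representations
print(H1.shape)                    # (5, 8)
```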

    Graph-Based Detection of Seams In 360-Degree Images

    In this paper, we propose an algorithm to detect a specific kind of distortion, referred to as seams, which commonly occurs when a 360-degree image is represented in the planar domain by projecting the sphere to a polyhedron, e.g., via the Cube Map (CM) projection, and undergoes lossy compression. The proposed algorithm exploits a graph-based representation to account for the actual sampling density of the 360-degree signal in the native spherical domain. The CM image is considered as a signal lying on a graph defined on the spherical surface. The spectra of the processed and original signals, computed by applying the Graph Fourier Transform, are compared to detect the seams. To test our method, a dataset of compressed CM 360-degree images, annotated by experts, has been created. The performance of the proposed algorithm is compared to that of baseline metrics, as well as to the same spectral-comparison approach ignoring the spherical nature of the signal. The experimental results show that the proposed method performs best and can successfully detect up to approximately 90% of visible seams on our dataset.
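    A minimal sketch of the spectral-comparison idea, assuming a generic weighted graph rather than the paper's density-aware spherical graph: the Graph Fourier Transform of a signal is its projection onto the eigenvectors of the graph Laplacian, and a large spectral distance between the original and processed signals flags a candidate seam.

```python
import numpy as np

def graph_fourier_spectrum(W, signal):
    """GFT of a graph signal: project onto the eigenvectors of the
    combinatorial Laplacian L = D - W."""
    L = np.diag(W.sum(axis=1)) - W
    _, U = np.linalg.eigh(L)          # columns of U = graph Fourier basis
    return U.T @ signal

# Toy example: 4-node ring graph, original vs. processed signal.
W = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
x_orig = np.array([1.0, 1.1, 0.9, 1.0])
x_proc = np.array([1.0, 1.1, 0.2, 1.0])   # a seam-like local discontinuity
d = np.linalg.norm(graph_fourier_spectrum(W, x_orig)
                   - graph_fourier_spectrum(W, x_proc))
print(d)   # a large spectral distance flags a candidate seam
```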

    Accurate Calibration Scheme for a Multi-Camera Mobile Mapping System

    Mobile mapping systems (MMS) are increasingly used for many photogrammetric and computer vision applications, especially encouraged by fast and accurate geospatial data generation. The accuracy of point positions in an MMS depends mainly on the quality of calibration, the accuracy of sensor synchronization, the accuracy of georeferencing, and the stability of the geometric configuration of space intersections. In this study, we focus on multi-camera calibration (interior and relative orientation parameter estimation) and MMS calibration (mounting parameter estimation). The objective of this study was to develop a practical scheme for rigorous and accurate system calibration of a photogrammetric mapping station equipped with a multi-projective camera (MPC), a global navigation satellite system (GNSS), and an inertial measurement unit (IMU) for direct georeferencing. The proposed technique comprises two steps. First, the interior orientation parameters of each individual camera in the MPC and the relative orientation parameters of each camera of the MPC with respect to the first camera are estimated. In the second step, the offset and misalignment between the MPC and the GNSS/IMU are estimated. The global accuracy of the proposed method was assessed using independent check points. A correspondence map for a panorama is introduced that provides metric information. Our results highlight that the proposed calibration scheme reaches centimeter-level global accuracy for 3D point positioning. This level of global accuracy demonstrates the feasibility of the proposed technique and its potential to fit accurate mapping purposes.
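    The estimation itself is a rigorous adjustment, but the role of the mounting parameters can be illustrated by how they enter the direct-georeferencing chain. The sketch below assumes the common lever-arm/boresight composition p_map = R_imu (R_mount p_cam + t_mount) + t_gnss; all numeric values are placeholders, not the paper's calibration results.

```python
import numpy as np

def georeference(p_cam, R_mount, t_mount, R_imu, t_gnss):
    """Map a point from the camera frame to the mapping frame by chaining
    the mounting parameters (boresight R_mount, lever arm t_mount) with
    the GNSS/IMU pose: p_map = R_imu (R_mount p_cam + t_mount) + t_gnss."""
    return R_imu @ (R_mount @ p_cam + t_mount) + t_gnss

# Toy values: identity boresight, 10 cm lever arm, level platform.
p_cam   = np.array([2.0, 0.5, 10.0])              # point in camera frame (m)
R_mount = np.eye(3)                               # calibrated boresight
t_mount = np.array([0.10, 0.0, 0.0])              # calibrated lever arm (m)
R_imu   = np.eye(3)                               # attitude from IMU
t_gnss  = np.array([500000.0, 4000000.0, 120.0])  # GNSS position (map frame)
print(georeference(p_cam, R_mount, t_mount, R_imu, t_gnss))
```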

    Capturing 3D textured inner pipe surfaces for sewer inspection

    Inspection robots equipped with TV camera technology are commonly used to detect defects in sewer systems. Currently, these defects are predominantly identified by human assessors, a process that is not only time-consuming and costly but also susceptible to errors. Furthermore, existing systems primarily offer only 2D image information for damage assessment, limiting the accurate identification of certain types of damage due to the absence of 3D information. Thus, the solid quantification and characterisation of damage needed to evaluate remediation measures and the associated costs is limited on the sensor side. In this paper, we introduce a system for acquiring multimodal image data using a camera measuring head capable of capturing both color and 3D images with high accuracy and temporal availability, based on the single-shot principle. This sensor head, affixed to a carriage, continuously captures the sewer's inner wall during transit. The collected data serve as the basis for an AI-based automatic analysis of pipe damage as part of the further assessment and monitoring of sewers. Moreover, this paper focuses on the fundamental considerations in the design of the multimodal measuring head and elaborates on some application-specific implementation details. These include data pre-processing, 3D reconstruction, registration of texture and depth images, as well as 2D-3D registration and 3D image fusion.
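    The paper's registration pipeline is not detailed in the abstract; the sketch below only illustrates the standard pinhole step underlying texture-to-depth registration: transforming reconstructed 3D points into the colour-camera frame and projecting them to pixel coordinates. The calibration values R, t, and K are assumed for illustration.

```python
import numpy as np

def project_to_color(P_depth, R, t, K):
    """Project 3D points from the depth-sensor frame into the colour
    image: x ~ K (R P + t), followed by perspective division."""
    P_col = (R @ P_depth.T).T + t          # rigid transform depth -> colour
    uvw = (K @ P_col.T).T                  # pinhole projection
    return uvw[:, :2] / uvw[:, 2:3]        # pixel coordinates (u, v)

# Toy calibration: sensors 5 cm apart, simple intrinsics.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R, t = np.eye(3), np.array([0.05, 0.0, 0.0])
P = np.array([[0.1, 0.2, 1.5],             # reconstructed pipe-wall
              [-0.3, 0.0, 2.0]])           # points (m)
print(project_to_color(P, R, t, K))        # where to sample colour texture
```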

    Image-Based Rendering Of Real Environments For Virtual Reality


    3D Scene Geometry Estimation from 360° Imagery: A Survey

    This paper provides a comprehensive survey of pioneering and state-of-the-art methodologies for 3D scene geometry estimation based on single, two, or multiple images captured with omnidirectional optics. We first revisit the basic concepts of the spherical camera model and review the most common acquisition technologies and representation formats suitable for omnidirectional (also called 360°, spherical, or panoramic) images and videos. We then survey monocular layout and depth inference approaches, highlighting recent advances in learning-based solutions suited to spherical data. Classical stereo matching is then revisited in the spherical domain, where methodologies for detecting and describing sparse and dense features become crucial. The stereo matching concepts are then extrapolated to multiple-view camera setups, categorized among light fields, multi-view stereo, and structure from motion (or visual simultaneous localization and mapping). We also compile and discuss commonly adopted datasets and figures of merit indicated for each purpose and list recent results for completeness. We conclude by pointing out current and future trends. (Published in ACM Computing Surveys.)
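    As a concrete example of the spherical camera model the survey starts from, the sketch below converts an equirectangular pixel to a unit ray on the sphere. The longitude/latitude convention used here is a common one, but conventions vary between datasets, so treat the axis choices as assumptions.

```python
import numpy as np

def equirect_to_ray(u, v, width, height):
    """Convert equirectangular pixel (u, v) to a unit ray on the sphere.
    Longitude spans [-pi, pi] across the width and latitude [pi/2, -pi/2]
    down the height (a common convention; others differ in sign/origin)."""
    lon = (u / width - 0.5) * 2.0 * np.pi
    lat = (0.5 - v / height) * np.pi
    return np.array([np.cos(lat) * np.sin(lon),
                     np.sin(lat),
                     np.cos(lat) * np.cos(lon)])

# The centre pixel of a 2048x1024 panorama looks straight ahead (+z here).
print(equirect_to_ray(1024, 512, 2048, 1024))   # -> [0. 0. 1.]
```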

    A backpack-mounted omnidirectional camera with off-the-shelf navigation sensors for mobile terrestrial mapping: Development and forest application

    The use of personal mobile terrestrial systems (PMTS) has increased considerably for mobile mapping applications because these systems offer dynamic data acquisition with a ground perspective in places where wheeled platforms are unfeasible, such as forests and indoor buildings. PMTS have become more popular with emerging technologies, such as miniaturized navigation sensors and off-the-shelf omnidirectional cameras, which enable low-cost mobile mapping approaches. However, most of these sensors have not been developed for high-accuracy metric purposes and therefore require rigorous methods of data acquisition and data processing to obtain satisfactory results for some mapping applications. To contribute to the development of light, low-cost PMTS and potential applications of these off-the-shelf sensors for forest mapping, this paper presents a low-cost PMTS approach comprising an omnidirectional camera with off-the-shelf navigation systems and evaluates it in a forest environment. Experimental assessments showed that the integrated sensor orientation approach, using navigation data as the initial information, can increase trajectory accuracy, especially in covered areas. The point cloud generated with the PMTS data had accuracy consistent with the Ground Sample Distance (GSD) range of the omnidirectional images (3.5–7 cm). These results are consistent with those obtained for other PMTS approaches. Keywords: personal mobile terrestrial system; omnidirectional cameras; low-cost sensors; forest mapping; PMTS data quality
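    For context on the quoted GSD range, the sketch below shows how ground sample distance scales with object distance under a simple pinhole model, GSD = pixel pitch × distance / focal length. The sensor parameters and distances are assumed purely for illustration (they are not the paper's hardware); they are chosen so the outputs land in the reported 3.5–7 cm range.

```python
def gsd(pixel_pitch_m, focal_m, distance_m):
    """Ground sample distance: the object-space footprint of one pixel."""
    return pixel_pitch_m * distance_m / focal_m

# Assumed sensor parameters, for illustration only.
pitch, focal = 4.0e-6, 2.0e-3          # 4 um pixels, 2 mm focal length
for d in (17.5, 35.0):                 # assumed object distances (m)
    print(f"{gsd(pitch, focal, d) * 100:.1f} cm")   # -> 3.5 cm and 7.0 cm
```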