
    RGBD Datasets: Past, Present and Future

    Since the launch of the Microsoft Kinect, scores of RGBD datasets have been released. These have propelled advances in areas from reconstruction to gesture recognition. In this paper we explore the field, reviewing datasets across eight categories: semantics, object pose estimation, camera tracking, scene reconstruction, object tracking, human actions, faces and identification. By extracting relevant information in each category we help researchers to find appropriate data for their needs, and we consider which datasets have succeeded in driving computer vision forward and why. Finally, we examine the future of RGBD datasets. We identify key areas which are currently underexplored, and suggest that future directions may include synthetic data and dense reconstructions of static and dynamic scenes. Comment: 8 pages excluding references (CVPR style)

    The agricultural impact of the 2015–2016 floods in Ireland as mapped through Sentinel 1 satellite imagery

    Irish Journal of Agricultural and Food Research, Volume 58, Issue 1. R. O’Hara, S. Green and T. McCarthy. DOI: https://doi.org/10.2478/ijafr-2019-0006. Published online: 11 Oct 2019.
    The capability of Sentinel 1 C-band (5 cm wavelength) synthetic aperture radar (SAR) for flood mapping is demonstrated, and this approach is used to map the extent of the extensive floods that occurred throughout the Republic of Ireland in the winter of 2015–2016. Thirty-three Sentinel 1 images were used to map the area and duration of floods over a 6-mo period from November 2015 to April 2016. Flood maps for 11 separate dates charted the development and persistence of floods nationally. The maximum flood extent during this period was estimated to be ~24,356 ha. Rainfall depth in the preceding 5 d strongly influenced flood magnitude, and rainfall over more extended periods did so to a lesser degree. Reduced photosynthetic activity on farms affected by flooding was observed in Landsat 8 vegetation index difference images compared to the previous spring. The accuracy of the flood map was assessed against reports of flooding from affected farms, as well as other satellite-derived maps from the Copernicus Emergency Management Service and Sentinel 2. Monte Carlo simulated elevation data (20 m resolution, 2.5 m root mean square error [RMSE]) were used to estimate the flood’s depth and volume. Although the modelled flood height showed a strong correlation with the measured river heights, differences of several metres were observed. Future mapping strategies are discussed, which include high–temporal-resolution soil moisture data as part of an integrated multisensor approach to flood response over a range of spatial scales.
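
    A minimal sketch of the core mapping step described above: open water scatters specularly and appears dark in C-band SAR, so flood pixels can be flagged by thresholding calibrated backscatter. The file names and the -18 dB threshold below are assumptions for illustration, not values from the paper, which also layers in rainfall records, Landsat 8 vegetation indices, and elevation data.

```python
# Hedged sketch: threshold-based flood mapping on one Sentinel 1 backscatter scene.
# Assumes a terrain-corrected sigma0 image in dB; threshold and paths are illustrative.
import numpy as np
import rasterio

THRESHOLD_DB = -18.0  # assumed VV backscatter cut-off; tune per scene

with rasterio.open("s1_vv_sigma0_db.tif") as src:  # hypothetical input file
    sigma0_db = src.read(1).astype(np.float32)
    profile = src.profile
    transform = src.transform

# Pixels darker than the threshold are treated as open water / flood.
flood_mask = (sigma0_db < THRESHOLD_DB).astype(np.uint8)

# Flooded area in hectares: pixel area (m^2) from the affine transform / 10,000.
pixel_area_m2 = abs(transform.a * transform.e)
print(f"Estimated flooded area: {flood_mask.sum() * pixel_area_m2 / 1e4:.0f} ha")

profile.update(dtype="uint8", count=1, nodata=None)
with rasterio.open("flood_mask.tif", "w", **profile) as dst:
    dst.write(flood_mask, 1)
```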

    Robotic Cameraman for Augmented Reality based Broadcast and Demonstration

    In recent years, a number of large enterprises have gradually begun to use various Augmented Reality technologies to prominently improve the audiences' view of their products. Among them, the creation of an immersive virtual interactive scene through projection has received extensive attention; this technique is known as projection SAR, short for projection spatial augmented reality. However, as existing projection-SAR systems are immobile and have a limited working range, they are difficult to adopt in everyday settings. Therefore, this thesis proposes a technically feasible optimization scheme so that projection SAR can be practically applied to AR broadcasting and demonstrations.
    Building on the three main techniques required by state-of-the-art projection SAR applications, this thesis presents a novel mobile projection SAR cameraman for AR broadcasting and demonstration. Firstly, by combining a CNN scene parsing model with multiple contour extractors, the proposed contour extraction pipeline can detect the optimal contour information even in non-HD or blurred images. This algorithm reduces the dependency on high-quality visual sensors and addresses the low contour extraction accuracy in motion-blurred images. Secondly, a plane-based visual mapping algorithm is introduced to overcome the difficulties of visual mapping in low-texture scenarios. Finally, a complete process for designing the projection SAR cameraman robot is introduced. This part solves three main problems in mobile projection-SAR applications: (i) a new method for marking contours on the projection model is proposed to replace the model rendering process; by combining contour features and geometric features, users can easily identify objects on a colourless model. (ii) a camera initial pose estimation method is developed based on visual tracking algorithms, which can register the start pose of the robot to the whole scene in Unity3D. (iii) a novel data transmission approach is introduced to establish a link between the external robot and the robot in the Unity3D simulation workspace. This enables the robotic cameraman to simulate its trajectory in the Unity3D simulation workspace and project the correct virtual content.
    The proposed mobile projection SAR system makes notable contributions to the academic value and practicality of existing projection SAR techniques. First, it solves the problem of limited working range: when running in a large indoor scene, the system can follow the user and project dynamic interactive virtual content automatically instead of requiring additional visual sensors. Second, it creates a more immersive experience for the audience, since it supports more body gestures from the user and richer virtual-real interactive play. Lastly, a mobile system does not require up-front frameworks, is cheaper, and provides the public with an innovative choice for indoor broadcasting and exhibitions.
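
    As a simplified, hedged stand-in for the contour extraction pipeline described above (the CNN scene-parsing stage is omitted), the sketch below uses only classical OpenCV operators: a bilateral filter to suppress noise while preserving edges on blurred, non-HD frames, Canny edge detection, and selection of the largest external contour. Thresholds and the input file name are illustrative assumptions, not values from the thesis.

```python
# Simplified contour extraction for blurred / non-HD frames using OpenCV only.
# A classical stand-in for the thesis's CNN + multi-extractor pipeline.
import cv2

img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input frame

# Bilateral filtering smooths noise while keeping edges, which helps on
# motion-blurred imagery where plain Gaussian blur would erase weak contours.
smoothed = cv2.bilateralFilter(img, d=9, sigmaColor=75, sigmaSpace=75)

edges = cv2.Canny(smoothed, threshold1=50, threshold2=150)
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

if contours:
    # Keep the largest contour as the candidate object outline for projection.
    best = max(contours, key=cv2.contourArea)
    overlay = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
    cv2.drawContours(overlay, [best], -1, (0, 255, 0), 2)
    cv2.imwrite("contour_overlay.png", overlay)
```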

    Generation of a Combined Dataset of Simulated Radar and EO/IR Imagery

    In the world of remote sensing, both radar and EO/IR (electro-optical/infrared) sensors carry unique information useful to the imaging community. Radar can image through all types of weather, day or night. EO/IR produces radiance maps and frequently images at much finer resolution than radar. While each of these systems is valuable on its own, the value added by combining the best of both worlds remains largely unexplored. This work begins to explore the challenges of simulating a scene in both a radar tool called Xpatch and an EO/IR tool called DIRSIG (Digital Imaging and Remote Sensing Image Generation). The capabilities and limitations inherent to both radar and EO/IR are similar in the image simulation tools, so work done in a simulated environment will carry over to the real-world environment as well. The goal of this effort is to demonstrate an environment where EO/IR and radar images of common scenes can be simulated. Once demonstrated, this environment would be used to facilitate trade studies of various multi-sensor instrument designs and exploitation algorithm concepts. The synthetic data generated will be compared to existing measured data to demonstrate the validity of the experiment.
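
    The validation step mentioned at the end, comparing synthetic to measured data, typically reduces to image-similarity metrics once the two images are co-registered. Below is a small hedged sketch (random stand-in arrays, not actual DIRSIG or Xpatch output) computing RMSE and Pearson correlation:

```python
# Hedged sketch of a simulated-vs-measured comparison using two simple metrics.
# Real use would load calibrated, co-registered DIRSIG / Xpatch and sensor data;
# here random arrays stand in for the two images.
import numpy as np

def compare_images(simulated: np.ndarray, measured: np.ndarray) -> dict:
    """RMSE and Pearson correlation between two same-shape images."""
    sim = simulated.astype(np.float64).ravel()
    mea = measured.astype(np.float64).ravel()
    rmse = float(np.sqrt(np.mean((sim - mea) ** 2)))
    corr = float(np.corrcoef(sim, mea)[0, 1])
    return {"rmse": rmse, "pearson_r": corr}

rng = np.random.default_rng(0)
measured = rng.normal(100.0, 10.0, size=(256, 256))
simulated = measured + rng.normal(0.0, 5.0, size=(256, 256))  # sim ~= measured + noise
print(compare_images(simulated, measured))
```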

    Analysis of Polarimetric Synthetic Aperture Radar and Passive Visible Light Polarimetric Imaging Data Fusion for Remote Sensing Applications

    The recent launch of spaceborne (TerraSAR-X, RADARSAT-2, ALOS-PALSAR, RISAT) and airborne (SIRC, AIRSAR, UAVSAR, PISAR) polarimetric radar sensors, with the capability of imaging day and night in almost all weather conditions, has made polarimetric synthetic aperture radar (PolSAR) image interpretation and analysis an active area of research. PolSAR image classification is sensitive to object orientation and scattering properties. In recent years, significant work has been done in many areas including agriculture, forestry, oceanography, geology, and terrain analysis. Visible light passive polarimetric imaging has also emerged as a powerful tool in remote sensing for enhanced information extraction. The intensity image provides information on materials in the scene, while polarization measurements capture surface features, roughness, and shading, often uncorrelated with the intensity image. Advantages of visible light polarimetric imaging include the high dynamic range of polarimetric signatures and instruments that are comparatively straightforward to build and calibrate. This research concerns the characterization and analysis of the basic scattering mechanisms for information fusion between PolSAR and passive visible light polarimetric imaging. Relationships between these two modes of imaging are established using laboratory measurements and image simulations with the Digital Image and Remote Sensing Image Generation (DIRSIG) tool. A novel low-cost, laboratory-based S-band (2.4 GHz) PolSAR instrument is developed that is capable of capturing four-channel fully polarimetric SAR image data. Simple radar targets are formed and system calibration is performed in terms of radar cross-section. Experimental measurements are made using a combination of the PolSAR instrument and the visible light polarimetric imager for scenes capturing basic scattering mechanisms for phenomenology studies. The three major scattering mechanisms studied in this research are single, double and multiple bounce. Single bounce occurs from flat surfaces like lakes, rivers, bare soil, and oceans. Double bounce can be observed from two adjacent surfaces where one horizontal flat surface is near a vertical surface, such as buildings and other vertical structures. Randomly oriented scatterers in homogeneous media produce a multiple-bounce scattering effect, which occurs in forest canopies and vegetated areas. Relationships between Pauli color components from PolSAR and the Degree of Linear Polarization (DOLP) from passive visible light polarimetric imaging are established using real measurements. Results show that higher values of the red channel in the Pauli color image (|HH-VV|) correspond to high DOLP from the double-bounce effect. A novel information fusion technique is applied to combine information from the two modes. It is demonstrated that the DOLP from passive visible light polarimetric imaging can be used to separate classes in terms of scattering mechanisms in the PolSAR data. This separation has applications in land cover classification and anomaly detection. The fusion of information from these two modes of imaging, i.e. PolSAR and passive visible light polarimetric imaging, is a largely unexplored area in remote sensing, and the main challenge in this research is to identify areas and scenarios where fusion between the two modes improves class separation in terms of scattering mechanisms relative to PolSAR alone.
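
    The two quantities the abstract relates can be written down compactly. Below is a hedged sketch under the standard textbook definitions (not code from the thesis): Pauli RGB components from a quad-pol scattering matrix, with red |HH-VV| indicating double bounce, green |HV| volume scattering, and blue |HH+VV| single bounce, plus DOLP from the first three Stokes parameters.

```python
# Standard-definition sketch of Pauli RGB and Degree of Linear Polarization.
# Input arrays are stand-ins; a real PolSAR scene would be calibrated complex data.
import numpy as np

def pauli_rgb(hh: np.ndarray, hv: np.ndarray, vv: np.ndarray) -> np.ndarray:
    """Stack normalized Pauli magnitudes into an (H, W, 3) RGB image."""
    r = np.abs(hh - vv) / np.sqrt(2)  # double-bounce channel
    g = np.sqrt(2) * np.abs(hv)       # volume-scattering channel
    b = np.abs(hh + vv) / np.sqrt(2)  # single-bounce channel
    rgb = np.stack([r, g, b], axis=-1)
    return rgb / rgb.max()

def dolp(s0: np.ndarray, s1: np.ndarray, s2: np.ndarray) -> np.ndarray:
    """Degree of Linear Polarization from Stokes parameters S0, S1, S2."""
    return np.sqrt(s1**2 + s2**2) / np.clip(s0, 1e-12, None)

# Tiny synthetic example: random complex scattering channels.
rng = np.random.default_rng(1)
shape = (64, 64)
hh, hv, vv = (rng.normal(size=shape) + 1j * rng.normal(size=shape) for _ in range(3))
rgb = pauli_rgb(hh, hv, vv)
print(rgb.shape, rgb.min(), rgb.max())
```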