
    Development of Multi-Sensor Global Cloud and Radiance Composites for Earth Radiation Budget Monitoring from DSCOVR

    The Deep Space Climate Observatory (DSCOVR) enables analysis of the daytime Earth radiation budget via the onboard Earth Polychromatic Imaging Camera (EPIC) and the National Institute of Standards and Technology Advanced Radiometer (NISTAR). Radiance observations and cloud property retrievals from low Earth orbit and geostationary satellite imagers must be co-located with EPIC pixels to provide the scene identification needed to select the anisotropic directional models used to calculate shortwave and longwave fluxes. A new algorithm is proposed for optimally merging selected radiances and cloud properties derived from multiple satellite imagers to obtain seamless global hourly composites at 5 km resolution. An aggregated rating incorporates several quality factors and selects the best observation at the time nearest to the EPIC measurement. Spatial accuracy is improved by using inverse mapping with gradient search during reprojection and bicubic interpolation for pixel resampling. The composite data are subsequently remapped into the EPIC-view domain by convolving composite pixels with the EPIC point spread function (PSF), defined with half-pixel accuracy. PSF-weighted average radiances and cloud properties are computed separately for each cloud phase. The algorithm has demonstrated contiguous global coverage for any requested time of day, with a temporal lag of under 2 hours over more than 95 percent of the globe.
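
    A minimal sketch of the PSF-weighted, per-cloud-phase averaging step described in the abstract, assuming NumPy; the function name, the integer phase labels, and the toy weights are illustrative assumptions, not the operational DSCOVR/EPIC code:

```python
import numpy as np

def psf_weighted_average(values, weights, phases):
    """PSF-weighted mean of a composite quantity (radiance or a cloud
    property), computed separately for each cloud phase.

    values  : (N,) composite-pixel values falling inside one EPIC pixel
    weights : (N,) EPIC point-spread-function weights for those pixels
    phases  : (N,) integer cloud-phase labels (e.g. 0=clear, 1=liquid, 2=ice)
    """
    out = {}
    for phase in np.unique(phases):
        m = phases == phase
        w = weights[m]
        # Weighted mean restricted to pixels of this phase.
        out[int(phase)] = float(np.sum(w * values[m]) / np.sum(w))
    return out

# Toy usage: three composite pixels mapped to one EPIC footprint.
vals = np.array([102.5, 98.1, 110.3])   # e.g. shortwave radiances
psf = np.array([0.6, 0.3, 0.1])         # hypothetical PSF weights
phase = np.array([1, 1, 2])             # liquid, liquid, ice
print(psf_weighted_average(vals, psf, phase))
```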

    Building a dense surface map incrementally from semi-dense point cloud and RGB images

    Building and using maps is a fundamental issue for bionic robots in field applications. A dense surface map, which offers rich visual and geometric information, is an ideal representation of the environment for the indoor/outdoor localization, navigation, and recognition tasks of these robots. Since most bionic robots can carry only small, lightweight laser scanners and cameras, acquiring semi-dense point clouds and RGB images, we propose a method to generate a consistent, dense surface map from this kind of data. The method has two main steps: (1) generate a dense surface for every single scan of the point cloud and its corresponding image(s), and (2) incrementally fuse the dense surface of a new scan into the whole map. In step (1), edge-aware resampling is realized by segmenting the scan of a point cloud in advance and resampling each sub-cloud separately; noise within the scan is reduced and a dense surface is generated. In step (2), the average surface is estimated probabilistically and the non-coincidence of different scans is eliminated. Experiments demonstrate that our method works well in both indoor and outdoor semi-structured environments containing regularly shaped objects.
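
    A minimal sketch of the incremental fusion in step (2), assuming each map point keeps a running precision-weighted average of the surface estimates from successive scans; the class name and the 1-D depth model are illustrative assumptions, not the paper's exact formulation:

```python
class FusedPoint:
    """Map point fused across scans as a weighted running average."""

    def __init__(self, depth, weight):
        self.depth = depth    # current surface estimate along the ray/normal
        self.weight = weight  # accumulated confidence (inverse-variance style)

    def fuse(self, new_depth, new_weight):
        # Weighted averaging removes the non-coincidence between scans.
        total = self.weight + new_weight
        self.depth = (self.weight * self.depth + new_weight * new_depth) / total
        self.weight = total

# Toy usage: three scans observing the same surface point.
p = FusedPoint(depth=2.00, weight=1.0)   # first scan
p.fuse(2.06, 1.0)                        # second scan, slightly offset
p.fuse(1.98, 2.0)                        # third scan, more confident
print(round(p.depth, 3))                 # fused estimate near 2.0
```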

    Data registration and integration requirements for severe storms research

    Severe storms research is characterized by temporal scales ranging from minutes (for thunderstorms and tornadoes) to hours (for hurricanes and extratropical cyclones), and by spatial scales ranging from tens to hundreds of kilometers. Sources of observational data include a variety of ground-based and satellite systems. Requirements for the registration and intercomparison of data from these various sources are examined, and the potential for applying the resulting techniques to operational forecasting is discussed. The sensor characteristics and processing procedures relevant to the overlay and integrated analysis of satellite and surface observations for severe storms research are reviewed.

    Mesh-based 3D Textured Urban Mapping

    In the era of autonomous driving, urban mapping is a core step in letting vehicles interact with the urban context. Successful mapping algorithms proposed in the last decade build the map from data collected by a single sensor. The focus of the system presented in this paper is twofold: the joint estimation of a 3D map from lidar data and images, based on a 3D mesh, and its texturing. Indeed, even though most surveying vehicles for mapping are equipped with cameras and lidar, existing mapping algorithms usually rely on either images or lidar data; moreover, both image-based and lidar-based systems often represent the map as a point cloud, while a continuous textured mesh representation would be useful for visualization and navigation purposes. In the proposed framework, we join the accuracy of the 3D lidar data with the dense information and appearance carried by the images, estimating a visibility-consistent map upon the lidar measurements and refining it photometrically through the acquired images. We evaluate the proposed framework against the KITTI dataset and show the performance improvement with respect to two state-of-the-art urban mapping algorithms and two widely used surface reconstruction algorithms in computer graphics.
    Comment: accepted at IROS 201
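
    A minimal sketch of the photometric refinement idea, assuming NumPy; refine_vertex, the project callback (assumed to return integer pixel indices for a given camera), and the variance-based photoconsistency score are illustrative stand-ins, not the authors' implementation:

```python
import numpy as np

def refine_vertex(v, n, images, cameras, project, offsets=(-0.02, 0.0, 0.02)):
    """Nudge a lidar-derived mesh vertex v (3-vector) along its normal n to
    the candidate offset whose projections into the images agree best, i.e.
    the lowest intensity variance across views (a simple photoconsistency
    score: agreeing views imply the vertex lies on the true surface)."""
    best, best_score = v, float("inf")
    for s in offsets:
        c = v + s * n                           # candidate vertex position
        samples = [float(img[project(c, cam)])  # sampled intensity per view
                   for img, cam in zip(images, cameras)]
        score = float(np.var(samples))
        if score < best_score:
            best, best_score = c, score
    return best
```

    A full pipeline would replace the three fixed offsets with a gradient-based search and a windowed similarity measure (e.g. NCC) over all mesh vertices; the single-pixel variance above only illustrates the objective.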