Breaking new ground in mapping human settlements from space -The Global Urban Footprint-
Today 7.2 billion people inhabit the Earth, and by 2050 this number will have
risen to around nine billion, of which about 70 percent will be living in
cities. Hence, it is essential to understand the drivers, dynamics, and impacts
of human settlement development. A key component in this context is the
availability of an up-to-date and spatially consistent map of the location and
distribution of human settlements. It is here that the Global Urban Footprint
(GUF) raster map can make a valuable contribution. The new global GUF binary
settlement mask offers an unprecedented spatial resolution of 0.4 arcsec,
providing - for the first time - a complete picture of the entirety of urban
and rural settlements. The GUF has been derived by means of a
fully automated processing framework - the Urban Footprint Processor (UFP) -
that was used to analyze a global coverage of more than 180,000 TanDEM-X and
TerraSAR-X radar images with 3m ground resolution collected in 2011-2012.
Various quality assessment studies to determine the absolute GUF accuracy based
on ground truth data on the one hand and the relative accuracies compared to
established settlements maps on the other hand, clearly indicate the added
value of the new global GUF layer, in particular with respect to the
representation of rural settlement patterns. Overall, the GUF layer achieves
an absolute accuracy of about 85%, with observed minima around 65%
and maxima around 98%. The GUF will be provided open and free for any
scientific use at full resolution and for any non-profit (but also
non-scientific) use in a generalized version of 2.8 arcsec.
Thereby, the new GUF layer can be expected to break new ground with respect
to the analysis of global urbanization and peri-urbanization patterns,
population estimation, and vulnerability assessment.
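For context, the GUF grid spacings quoted in arcseconds translate into ground distances via simple spherical geometry. A minimal sketch, assuming only a mean Earth radius and the cosine-of-latitude correction for east-west spacing (the helper name is illustrative, not part of the GUF toolchain):

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in meters

def arcsec_to_meters(arcsec, latitude_deg=0.0):
    """Ground distance spanned by an angle given in arcseconds.

    For latitude_deg != 0 this approximates the east-west (longitude)
    spacing, which shrinks with the cosine of latitude.
    """
    angle_rad = math.radians(arcsec / 3600.0)
    return EARTH_RADIUS_M * angle_rad * math.cos(math.radians(latitude_deg))

print(arcsec_to_meters(0.4))       # full-resolution GUF cell, roughly 12 m at the equator
print(arcsec_to_meters(2.8))       # generalized GUF cell, roughly 86 m at the equator
print(arcsec_to_meters(0.4, 60))   # east-west spacing halves at 60 degrees latitude
```

This is why a 0.4 arcsec mask can resolve individual rural settlements while coarser global products cannot.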
Real-time food intake classification and energy expenditure estimation on a mobile device
© 2015 IEEE. Assessment of food intake has a wide range of applications in public health and lifestyle-related chronic disease management. In this paper, we propose a real-time food recognition platform combined with daily activity and energy expenditure estimation. In the proposed method, food recognition is based on hierarchical classification using multiple visual cues, supported by an efficient software implementation suitable for real-time execution on a mobile device. A Fisher Vector representation together with a set of linear classifiers is used to categorize food intake. Daily energy expenditure estimation is achieved by using the built-in inertial motion sensors of the mobile device. The performance of the vision-based food recognition algorithm is compared to the current state of the art, showing improved accuracy and high computational efficiency suitable for real-time feedback. Detailed user studies have also been performed to demonstrate the practical value of the software environment.
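The Fisher Vector encoding used above can be sketched generically in NumPy: local descriptors are soft-assigned to a diagonal-covariance GMM, and the normalized gradients with respect to the means and standard deviations are concatenated. This is a standard improved-FV sketch under synthetic placeholder parameters, not the paper's actual model or features:

```python
import numpy as np

def fisher_vector(X, weights, means, variances):
    """Improved Fisher Vector of local descriptors X (N x D) under a
    diagonal-covariance GMM with K components (weights: K, means/variances: K x D)."""
    N, D = X.shape
    sigma = np.sqrt(variances)                                 # (K, D)
    diff = X[:, None, :] - means[None, :, :]                   # (N, K, D)
    # Posterior responsibilities gamma (N, K) via log-likelihoods.
    log_p = (-0.5 * np.sum(diff**2 / variances, axis=2)
             - 0.5 * np.sum(np.log(2 * np.pi * variances), axis=1)
             + np.log(weights))
    log_p -= log_p.max(axis=1, keepdims=True)
    gamma = np.exp(log_p)
    gamma /= gamma.sum(axis=1, keepdims=True)
    # Gradients with respect to the means and standard deviations.
    u = diff / sigma                                           # (N, K, D)
    g_mu = (gamma[:, :, None] * u).sum(0) / (N * np.sqrt(weights)[:, None])
    g_sig = (gamma[:, :, None] * (u**2 - 1)).sum(0) / (N * np.sqrt(2 * weights)[:, None])
    fv = np.concatenate([g_mu.ravel(), g_sig.ravel()])         # length 2*K*D
    fv = np.sign(fv) * np.sqrt(np.abs(fv))                     # power normalization
    return fv / (np.linalg.norm(fv) + 1e-12)                   # L2 normalization

rng = np.random.default_rng(0)
K, D = 4, 8
fv = fisher_vector(rng.normal(size=(100, D)),
                   np.full(K, 1.0 / K),
                   rng.normal(size=(K, D)),
                   np.ones((K, D)))
print(fv.shape)  # 2*K*D = 64 dimensions
```

The resulting fixed-length vector is what the set of linear classifiers consumes; linear models suffice because the FV already linearizes the descriptor statistics.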
Optimizing feature extraction in image analysis using experimental designs, a case study evaluating texture algorithms for describing appearance retention in carpets
When performing image analysis, one of the most critical steps is the selection of appropriate techniques. A huge number of features can be extracted by several techniques, and the selection is commonly performed based on expert knowledge. In this paper we present the theory of experimental designs as a tool for the objective selection of techniques in the image analysis domain. We present a case study evaluating appearance retention in textile floor coverings using texture features. The use of experimental design theory permitted the selection of an optimal set of techniques for describing the texture changes due to degradation.
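As a minimal illustration of experimental design for technique selection, a full-factorial design enumerates every combination of factor levels so each analysis choice can be evaluated systematically rather than by expert intuition alone. The factors and levels below are hypothetical stand-ins, not the paper's actual design:

```python
from itertools import product

# Hypothetical design factors: each image-analysis choice becomes a factor
# with a small set of levels to be tested against ground-truth degradation labels.
factors = {
    "texture_method": ["GLCM", "LBP", "wavelet"],
    "window_size":    [16, 32, 64],
    "quantization":   [8, 16],
}

# Full-factorial design: one experimental run per combination of levels.
runs = [dict(zip(factors, levels)) for levels in product(*factors.values())]
print(len(runs))  # 3 * 3 * 2 = 18 runs
print(runs[0])
```

In practice each run's features would be scored against the degradation ratings, and an analysis of variance over the runs identifies which factors significantly affect performance; fractional designs reduce the run count when full enumeration is too costly.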
A Quantitative Assessment of Forest Cover Change in the Moulouya River Watershed (Morocco) by the Integration of a Subpixel-Based and Object-Based Analysis of Landsat Data
A quantitative assessment of forest cover change in the Moulouya River watershed (Morocco) was carried out by means of an innovative approach from atmospherically corrected reflectance Landsat images corresponding to 1984 (Landsat 5 Thematic Mapper) and 2013 (Landsat 8 Operational Land Imager). An object-based image analysis (OBIA) was undertaken to classify segmented objects as forested or non-forested within the 2013 Landsat orthomosaic. A Random Forest classifier was applied to a set of training data based on a feature vector composed of different types of object features, such as vegetation indices, mean spectral values, and pixel-based fractional cover derived from probabilistic spectral mixture analysis. The very high spatial resolution image data of Google Earth 2013 were employed to train/validate the Random Forest classifier, ranking the NDVI vegetation index and the corresponding pixel-based percentages of photosynthetic vegetation and bare soil as the most statistically significant object features for extracting forested and non-forested areas. Regarding classification accuracy, an overall accuracy of 92.34% was achieved. The previously developed classification scheme was applied to the 1984 Landsat data to extract the forest cover change between 1984 and 2013, showing a slight net increase of 5.3% (ca. 8800 ha) in forested areas for the whole region.
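The object-based classification step can be sketched with scikit-learn: each segmented object contributes a small feature vector (here NDVI, photosynthetic-vegetation fraction, bare-soil fraction, mean NIR reflectance) fed to a Random Forest. The feature values and class separations below are synthetic placeholders, not the study's data:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n = 400
# Synthetic per-object features: [NDVI, photosynthetic veg. fraction,
# bare-soil fraction, mean NIR reflectance] for forested vs non-forested objects.
forested = rng.normal([0.70, 0.80, 0.10, 0.35], 0.08, size=(n // 2, 4))
nonforest = rng.normal([0.25, 0.20, 0.60, 0.25], 0.08, size=(n // 2, 4))
X = np.vstack([forested, nonforest])
y = np.repeat([1, 0], n // 2)  # 1 = forested, 0 = non-forested

clf = RandomForestClassifier(n_estimators=200, random_state=0)
acc = cross_val_score(clf, X, y, cv=5).mean()  # cross-validated overall accuracy
print(round(acc, 2))
```

A useful by-product of Random Forests is the `feature_importances_` attribute, which mirrors the study's ranking of NDVI and fractional-cover features as the most informative.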
The Perception-Distortion Tradeoff
Image restoration algorithms are typically evaluated by some distortion
measure (e.g. PSNR, SSIM, IFC, VIF) or by human opinion scores that quantify
perceived quality. In this paper, we prove mathematically that
distortion and perceptual quality are at odds with each other. Specifically, we
study the optimal probability for correctly discriminating the outputs of an
image restoration algorithm from real images. We show that as the mean
distortion decreases, this probability must increase (indicating worse
perceptual quality). As opposed to the common belief, this result holds true
for any distortion measure, and is not only a problem of the PSNR or SSIM
criteria. We also show that generative-adversarial-nets (GANs) provide a
principled way to approach the perception-distortion bound. This constitutes
theoretical support to their observed success in low-level vision tasks. Based
on our analysis, we propose a new methodology for evaluating image restoration
methods, and use it to perform an extensive comparison between recent
super-resolution algorithms.
Comment: CVPR 2018 (long oral presentation), see talk at:
https://youtu.be/_aXbGqdEkjk?t=39m43
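The core claim can be illustrated with a toy one-dimensional restoration problem: the MMSE estimator minimizes distortion by construction, yet its outputs are trivially distinguishable from samples of the true source. The binary source and Gaussian noise below are illustrative assumptions, not the paper's general setting:

```python
import numpy as np

rng = np.random.default_rng(1)
n, sigma = 100_000, 1.0
x = rng.choice([-1.0, 1.0], size=n)       # "real images": a binary source
y = x + rng.normal(0.0, sigma, size=n)    # degraded observations

# MMSE estimator (posterior mean) for this source: E[x | y] = tanh(y / sigma^2).
x_mmse = np.tanh(y / sigma**2)
# "Perceptual" estimator: project back onto the source support {-1, +1}.
x_sign = np.sign(y)

mse_mmse = np.mean((x_mmse - x) ** 2)
mse_sign = np.mean((x_sign - x) ** 2)
print(mse_mmse < mse_sign)   # True: the MMSE output has strictly lower distortion

# But the MMSE output never lands on the support {-1, +1}, so a trivial
# discriminator separates it from real samples with near-perfect accuracy.
frac_off_support = np.mean(np.abs(np.abs(x_mmse) - 1.0) > 1e-6)
print(frac_off_support)
```

Here lower distortion comes at the cost of perfect discriminability, which is exactly the tradeoff the paper formalizes for arbitrary distortion measures.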
A no-reference optical flow-based quality evaluator for stereoscopic videos in curvelet domain
Most existing 3D video quality assessment (3D-VQA/SVQA) methods consider only spatial information, by directly applying an image quality evaluation method. In addition, a few take the motion information of adjacent frames into consideration. In practice, a single data view is unlikely to be sufficient for effectively learning video quality; therefore, the integration of multi-view information is both valuable and necessary. In this paper, we propose an effective multi-view feature learning metric for blind stereoscopic video quality assessment (BSVQA), which jointly focuses on spatial information, temporal information, and inter-frame spatio-temporal information. In our study, a set of local binary pattern (LBP) statistical features extracted from a computed frame curvelet representation is used as the spatial and spatio-temporal description, and local flow statistical features based on optical flow estimation are used to describe the temporal distortion. Subsequently, a support vector regression (SVR) is utilized to map the feature vectors of each single view to subjective quality scores. Finally, the scores of multiple views are pooled into the final score according to their contribution rates. Experimental results demonstrate that the proposed metric significantly outperforms existing metrics and achieves higher consistency with subjective quality assessment.
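The SVR mapping stage can be sketched with scikit-learn: per-view feature vectors are regressed onto subjective scores, and agreement with held-out scores is measured by rank correlation. The feature dimensions, synthetic scores, and train/test split below are placeholders, not the paper's actual data or protocol:

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(7)
n_videos, n_feats = 200, 12
# Stand-ins for per-video LBP / optical-flow statistical features.
X = rng.normal(size=(n_videos, n_feats))
w = rng.normal(size=n_feats)
# Synthetic subjective quality scores with rating noise.
mos = X @ w + rng.normal(0.0, 0.3, size=n_videos)

model = SVR(kernel="rbf", C=10.0).fit(X[:150], mos[:150])
pred = model.predict(X[150:])

# Spearman rank correlation (SROCC) via Pearson correlation of the ranks.
ranks = lambda a: np.argsort(np.argsort(a))
srocc = np.corrcoef(ranks(pred), ranks(mos[150:]))[0, 1]
print(round(srocc, 2))
```

In the multi-view setting each view (spatial, temporal, spatio-temporal) would get its own regressor, with the per-view scores combined by a weighted pooling step.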