
    Breaking new ground in mapping human settlements from space -The Global Urban Footprint-

    Today 7.2 billion people inhabit the Earth, and by 2050 this number will have risen to around nine billion, of which about 70 percent will be living in cities. Hence, it is essential to understand the drivers, dynamics, and impacts of human settlement development. A key component in this context is the availability of an up-to-date and spatially consistent map of the location and distribution of human settlements. It is here that the Global Urban Footprint (GUF) raster map can make a valuable contribution. The new global GUF binary settlement mask offers an unprecedented spatial resolution of 0.4 arcsec (~12 m), providing for the first time a complete picture of the entirety of urban and rural settlements. The GUF has been derived by means of a fully automated processing framework, the Urban Footprint Processor (UFP), which was used to analyze a global coverage of more than 180,000 TanDEM-X and TerraSAR-X radar images with 3 m ground resolution collected in 2011-2012. Various quality assessment studies, determining the absolute GUF accuracy based on ground truth data on the one hand and the relative accuracy compared with established settlement maps on the other, clearly indicate the added value of the new global GUF layer, in particular with respect to the representation of rural settlement patterns. Generally, the GUF layer achieves an overall absolute accuracy of about 85%, with observed minima around 65% and maxima around 98%. The GUF will be provided open and free for any scientific use at the full resolution, and for any non-profit (but also non-scientific) use in a generalized version of 2.8 arcsec (~84 m). The new GUF layer can therefore be expected to break new ground in the analysis of global urbanization and peri-urbanization patterns, population estimation, and vulnerability assessment.
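    The absolute accuracy figures quoted above correspond to per-pixel agreement between the GUF mask and reference settlement data. As a minimal sketch of such an overall-accuracy computation (using hypothetical toy masks, not the actual GUF validation data or workflow), one might write:

        import numpy as np

        def overall_accuracy(predicted_mask, reference_mask):
            # Fraction of pixels where the binary settlement mask agrees
            # with the reference (ground truth) mask.
            predicted = np.asarray(predicted_mask, dtype=bool)
            reference = np.asarray(reference_mask, dtype=bool)
            return np.mean(predicted == reference)

        # Toy 4x4 masks: 1 = settlement, 0 = non-settlement
        guf_tile = np.array([[1, 1, 0, 0],
                             [1, 0, 0, 0],
                             [0, 0, 1, 1],
                             [0, 0, 1, 1]])
        ground_truth = np.array([[1, 1, 0, 0],
                                 [1, 1, 0, 0],
                                 [0, 0, 1, 1],
                                 [0, 0, 0, 1]])
        print(f"Overall accuracy: {overall_accuracy(guf_tile, ground_truth):.2%}")  # 87.50%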

    An Automatic Level Set Based Liver Segmentation from MRI Data Sets

    A fast and accurate liver segmentation method is a challenging task in the medical image analysis area. Liver segmentation is an important process for computer-assisted diagnosis, pre-evaluation of liver transplantation, and therapy planning of liver tumors. Magnetic resonance imaging has several advantages, such as freedom from ionizing radiation and good contrast visualization of soft tissue. Also, innovations in recent technology and image acquisition techniques have made magnetic resonance imaging a major tool in modern medicine. However, the use of magnetic resonance images for liver segmentation has been slow compared with applications in the central nervous system and the musculoskeletal system. The reasons are the irregular shape, size, and position of the liver, contrast agent effects, and the similarity of the gray values of neighboring organs. Therefore, in this study, we present a fully automatic liver segmentation method using an approximation of level set based contour evolution on T2-weighted magnetic resonance data sets. The method avoids solving partial differential equations and applies only integer operations with a two-cycle segmentation algorithm. The efficiency of the proposed approach is achieved by applying the algorithm to all slices with a constant number of iterations and performing the contour evolution without any user-defined initial contour. The obtained results are evaluated with four different similarity measures and show that the automatic segmentation approach gives successful results.
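    The abstract mentions evaluation with four similarity measures but does not list them here; the Dice coefficient is a common choice for comparing an automatic liver segmentation against a manual reference, so a minimal sketch of that measure (assuming binary masks as NumPy arrays) is shown below. It illustrates the kind of evaluation involved, not the paper's exact measures.

        import numpy as np

        def dice_coefficient(segmentation, reference):
            # Dice similarity: 2*|A intersect B| / (|A| + |B|) for binary masks.
            seg = np.asarray(segmentation, dtype=bool)
            ref = np.asarray(reference, dtype=bool)
            intersection = np.logical_and(seg, ref).sum()
            total = seg.sum() + ref.sum()
            return 2.0 * intersection / total if total > 0 else 1.0

        # Toy 3x3 masks standing in for an automatic and a manual segmentation
        auto_seg = np.array([[0, 1, 1],
                             [0, 1, 1],
                             [0, 0, 0]], dtype=bool)
        manual_ref = np.array([[0, 0, 1],
                               [0, 1, 1],
                               [0, 1, 0]], dtype=bool)
        print(f"Dice: {dice_coefficient(auto_seg, manual_ref):.3f}")  # 0.750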

    Optimizing feature extraction in image analysis using experimental designs: a case study evaluating texture algorithms for describing appearance retention in carpets

    When performing image analysis, one of the most critical steps is the selection of appropriate techniques. A huge number of features can be extracted with several techniques, and the selection is commonly performed based on expert knowledge. In this paper we present the theory of experimental designs as a tool for the objective selection of techniques in the image analysis domain. We present a case study for evaluating appearance retention in textile floor coverings using texture features. The use of experimental design theory made it possible to select an optimal set of techniques for describing the texture changes due to degradation.
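    To make the idea concrete, the sketch below enumerates a small full-factorial design over texture-extraction settings. The factor names (algorithm, window size, gray levels) are illustrative assumptions, not the factors actually studied in the paper; each design run would extract features from the carpet images and record a response such as agreement with the assessed wear grade.

        from itertools import product

        # Hypothetical factors for a full-factorial screening design
        factors = {
            "algorithm": ["GLCM", "LBP", "Gabor"],
            "window_size": [16, 32, 64],
            "gray_levels": [8, 16],
        }

        # Enumerate every treatment combination (3 x 3 x 2 = 18 runs)
        design = [dict(zip(factors, combo)) for combo in product(*factors.values())]

        for run_id, treatment in enumerate(design, start=1):
            # In a real study each run would be evaluated against the
            # appearance-retention grades of the carpet samples.
            print(run_id, treatment)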

    A reduced-reference perceptual image and video quality metric based on edge preservation

    In image and video compression and transmission, it is important to rely on an objective image/video quality metric which accurately represents the subjective quality of processed images and video sequences. In some scenarios, it is also important to evaluate the quality of the received video sequence with minimal reference to the transmitted one. For instance, for quality improvement of video transmission through closed-loop optimisation, the video quality measure can be evaluated at the receiver and provided as feedback information to the system controller. The original image/video sequence, prior to compression and transmission, is not usually available at the receiver side, so it is important to rely there on an objective video quality metric that needs no reference, or only minimal reference, to the original video sequence. The observation that the human eye is very sensitive to edge and contour information of an image underpins the proposal of our reduced-reference (RR) quality metric, which compares edge information between the distorted and the original image. Results highlight that the metric correlates well with subjective observations, also in comparison with commonly used full-reference metrics and with a state-of-the-art RR metric. © 2012 Martini et al.
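    The published metric compares edge information between the distorted and original images in a reduced-reference setting; the sketch below is a simplified, full-reference stand-in that measures how many of the original Sobel edge pixels survive in the distorted image. It illustrates the edge-preservation idea only, not the authors' metric or its reduced-reference feature extraction.

        import numpy as np
        from scipy import ndimage

        def edge_preservation_ratio(original, distorted, threshold=0.1):
            # Fraction of edge pixels in the original that are still
            # detected as edges in the distorted image.
            def edge_map(img):
                img = np.asarray(img, dtype=float)
                magnitude = np.hypot(ndimage.sobel(img, axis=0),
                                     ndimage.sobel(img, axis=1))
                return magnitude > threshold * magnitude.max()

            ref_edges = edge_map(original)
            dist_edges = edge_map(distorted)
            return np.logical_and(ref_edges, dist_edges).sum() / max(ref_edges.sum(), 1)

        # Toy usage: a random image and a blurred (distorted) copy
        rng = np.random.default_rng(0)
        original = rng.random((64, 64))
        distorted = ndimage.gaussian_filter(original, sigma=1.0)
        print(f"Edge preservation: {edge_preservation_ratio(original, distorted):.2f}")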