
    T2* assessment of the three coronary artery territories of the left ventricular wall by different monoexponential truncation methods

    OBJECTIVES: This study aimed to evaluate left ventricular myocardial pixel-wise T2* using two truncation methods across different iron-deposition T2* ranges, and to compare segmental T2* between coronary artery territories. MATERIAL AND METHODS: Bright-blood multi-gradient-echo data from 30 patients were quantified by pixel-wise monoexponential T2* fitting with R2 and SNR truncation. T2* was analyzed across iron classifications; at the low-iron classification, T2* values were also analyzed by coronary artery territory. RESULTS: The right coronary artery territory showed a significantly higher T2* value than the other coronary artery territories. No significant difference was found between the two truncation methods in classifying severe iron in any myocardial region, whereas for moderate iron a difference was apparent only in the septal segments. The R2 truncation produced significantly higher T2* values than the SNR method when low iron was indicated. CONCLUSION: Clear T2* differentiation between the three coronary territories was demonstrated by both truncation methods. The two truncation methods can be used interchangeably for classifying severe and moderate iron deposition in the recommended septal region. However, in patients with low iron, diverging results from the two truncation methods can mislead the investigation of early iron-level progression.
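    As a concrete illustration of the fitting procedure, the sketch below implements a pixel-wise monoexponential fit with R2 truncation: late echoes are dropped until the goodness of fit reaches a threshold. This is a minimal log-linear sketch, not the authors' implementation; the function name, the R2 threshold, and the minimum number of echoes are assumptions, and the SNR-based variant is omitted.

```python
import numpy as np

def t2star_r2_truncation(te_ms, signal, r2_threshold=0.99, min_points=3):
    """Pixel-wise monoexponential T2* fit, S(TE) = S0 * exp(-TE/T2*).

    Late echoes are truncated one at a time until the log-linear
    fit's R^2 reaches the threshold (assumed values for illustration).
    Returns T2* in the same time unit as te_ms.
    """
    te = np.asarray(te_ms, dtype=float)
    log_s = np.log(np.asarray(signal, dtype=float))
    n = len(te)
    while n >= min_points:
        # log-linear least squares on the first n echoes
        slope, intercept = np.polyfit(te[:n], log_s[:n], 1)
        resid = log_s[:n] - (intercept + slope * te[:n])
        ss_tot = np.sum((log_s[:n] - log_s[:n].mean()) ** 2)
        r2 = 1.0 - np.sum(resid ** 2) / ss_tot
        if r2 >= r2_threshold:
            return -1.0 / slope
        n -= 1  # drop the latest echo and refit
    return -1.0 / slope  # fall back to the shortest allowed fit
```

    With noise-free synthetic decay the first fit already satisfies the threshold, so no echoes are truncated.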

    GETNET: A General End-to-end Two-dimensional CNN Framework for Hyperspectral Image Change Detection

    Change detection (CD) is an important application of remote sensing, providing timely change information about the large-scale Earth surface. With the emergence of hyperspectral imagery, CD technology has been greatly advanced, as hyperspectral data with high spectral resolution can detect finer changes than traditional multispectral imagery. Nevertheless, the high dimensionality of hyperspectral data makes it difficult to apply traditional CD algorithms, and endmember abundance information at the subpixel level is often not fully utilized. To better handle the high-dimensionality problem and exploit abundance information, this paper presents a General End-to-end Two-dimensional CNN (GETNET) framework for hyperspectral image change detection (HSI-CD). The main contributions of this work are threefold: 1) a mixed-affinity matrix that integrates subpixel representations is introduced to mine more cross-channel gradient features and fuse multi-source information; 2) a 2-D CNN is designed to learn discriminative features effectively from multi-source data at a higher level and to enhance the generalization ability of the proposed CD algorithm; 3) a new HSI-CD data set is designed for the objective comparison of different methods. Experimental results on real hyperspectral data sets demonstrate that the proposed method outperforms most state-of-the-art approaches.
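    The mixed-affinity matrix is described as integrating subpixel (abundance) representations with spectral information so that a 2-D CNN can mine cross-channel features. One plausible sketch, assuming the matrix is formed as an outer product of concatenated spectrum-plus-abundance vectors from the two acquisition dates (the paper's exact construction may differ):

```python
import numpy as np

def mixed_affinity(spec_t1, abund_t1, spec_t2, abund_t2):
    """Per-pixel mixed-affinity matrix (illustrative construction).

    Concatenates each date's spectrum with its subpixel abundances,
    then takes the outer product: every cross-channel pairing between
    the two dates becomes one entry of a 2-D "image" a CNN can convolve.
    """
    v1 = np.concatenate([spec_t1, abund_t1])
    v2 = np.concatenate([spec_t2, abund_t2])
    return np.outer(v1, v2)
```

    For a pixel with B spectral bands and E endmember abundances per date, the result is a (B+E) x (B+E) matrix, one per pixel, which is what makes a 2-D CNN applicable.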

    Land Cover Change Image Analysis for Assateague Island National Seashore Following Hurricane Sandy

    The assessment of storm damage is critically important if resource managers are to understand the impacts of weather-pattern changes and sea-level rise on their lands and develop management strategies to mitigate their effects. This study was performed to detect land cover change on Assateague Island as a result of Hurricane Sandy. Several single-date classifications were performed on the pre- and post-hurricane imagery using both pixel-based and object-based approaches with the Random Forest classifier. Univariate image differencing and a post-classification comparison were used to conduct the change detection. This study found that the addition of the coastal blue band to the Landsat 8 sensor did not improve classification accuracy, and there was no statistically significant improvement in classification accuracy using Landsat 8 compared to Landsat 5. Furthermore, no significant difference was found between object-based and pixel-based classification. Change totals estimated on Assateague Island following Hurricane Sandy were found to be minimal, occurring predominantly in the sections of the island most active in terms of land cover change. The post-classification comparison, however, detected significantly more change, mainly due to classification errors in the single-date maps used.
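    The two change-detection schemes used in the study can be sketched as follows. The k-sigma threshold on the difference image is an assumed convention for illustration, not a parameter taken from the study.

```python
import numpy as np

def univariate_difference(band_t1, band_t2, k=2.0):
    """Univariate image differencing: flag pixels whose band difference
    lies more than k standard deviations from the mean difference."""
    d = band_t2.astype(float) - band_t1.astype(float)
    return np.abs(d - d.mean()) > k * d.std()

def post_classification_change(labels_t1, labels_t2):
    """Post-classification comparison: change wherever the class label
    assigned by the two single-date classifications differs."""
    return labels_t1 != labels_t2
```

    The post-classification route inherits the errors of both single-date maps, which is consistent with the study's finding that it detected significantly more (spurious) change.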

    Fusion of Heterogeneous Earth Observation Data for the Classification of Local Climate Zones

    This paper proposes a novel framework for fusing multi-temporal, multispectral satellite images and OpenStreetMap (OSM) data for the classification of local climate zones (LCZs). Feature stacking is the most commonly used method of data fusion, but its main drawback is that it does not account for the heterogeneity of multimodal optical images and OSM data. The proposed framework processes the two data sources separately and then combines them at the model level through two fusion models (a land-use fusion model and a building fusion model), which fuse optical images with the land-use and building layers of OSM data, respectively. In addition, a new approach to detecting the incompleteness of OSM building data is proposed. The proposed framework was trained and tested using data from the 2017 IEEE GRSS Data Fusion Contest, and further validated on an additional test set of manually labeled samples from Munich and New York. Experimental results indicate that, compared to a feature-stacking baseline, the proposed framework is effective in fusing optical images with OSM data for LCZ classification, with high generalization capability at large scale. The classification accuracy of the proposed framework exceeds that of the baseline by more than 6% on the 2017 IEEE GRSS Data Fusion Contest test set and by more than 2% on the additional test set. In addition, the proposed framework is less sensitive to spectral diversity in optical satellite images and thus achieves more stable classification performance than state-of-the-art frameworks.
    Comment: accepted by TGR
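    The contrast between feature stacking and model-level fusion can be illustrated with a toy sketch. The fusion weights and the simple probability averaging below are assumptions for illustration only; the paper's fusion models are learned, not fixed-weight averages.

```python
import numpy as np

def feature_stacking(opt_features, osm_features):
    """Baseline: concatenate heterogeneous features into one vector,
    ignoring that the two modalities have different statistics."""
    return np.concatenate([opt_features, osm_features])

def model_level_fusion(p_optical, p_landuse, p_building,
                       weights=(0.5, 0.25, 0.25)):
    """Model-level fusion: each source is classified separately and the
    per-class probabilities are combined (weights are illustrative)."""
    p = (weights[0] * p_optical
         + weights[1] * p_landuse
         + weights[2] * p_building)
    return p / p.sum()  # renormalize to a probability vector
```

    Keeping the sources in separate models until the probability level is what lets each model handle its own modality's quirks, such as incomplete OSM building coverage.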

    FickleNet: Weakly and Semi-supervised Semantic Image Segmentation using Stochastic Inference

    The main obstacle to weakly supervised semantic image segmentation is the difficulty of obtaining pixel-level information from coarse image-level annotations. Most methods based on image-level annotations use localization maps obtained from a classifier, but these focus only on the small discriminative parts of objects and do not capture precise boundaries. FickleNet explores diverse combinations of locations on feature maps created by generic deep neural networks. It selects hidden units randomly and then uses them to obtain activation scores for image classification. FickleNet implicitly learns the coherence of each location in the feature maps, resulting in a localization map that identifies both the discriminative and the other parts of objects. Ensemble effects are obtained from a single network by selecting random hidden-unit pairs, which means that a variety of localization maps are generated from a single image. Our approach requires no additional training steps and only adds a simple layer to a standard convolutional neural network; nevertheless, it outperforms recent comparable techniques on the Pascal VOC 2012 benchmark in both weakly and semi-supervised settings.
    Comment: To appear in CVPR 201
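    The random hidden-unit selection can be mimicked with inference-time dropout on a feature map, aggregating simple class-activation maps over several random selections. This is a toy sketch under assumed names (feature_map, class_weights); FickleNet's actual layer and training setup are more involved.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_selection_maps(feature_map, class_weights,
                          n_samples=8, keep_prob=0.7):
    """Aggregate class-activation maps over random hidden-unit
    selections (dropout kept active at inference), FickleNet-style.

    feature_map:   (H, W, C) activations from a generic CNN (assumed)
    class_weights: (C,) classifier weights for one class (assumed)
    """
    h, w, c = feature_map.shape
    agg = np.zeros((h, w))
    for _ in range(n_samples):
        mask = rng.random((h, w, c)) < keep_prob  # random unit selection
        selected = feature_map * mask
        # simple CAM: channel-weighted sum for the target class
        agg += selected @ class_weights
    return agg / n_samples
```

    Because each sample masks different units, locations that activate only under some selections still contribute to the average, which is what lets the aggregated map cover non-discriminative object parts.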

    Region-based Skin Color Detection

    Skin color provides a powerful cue for complex computer vision applications. Although skin color detection has been an active research area for decades, mainstream techniques operate on individual pixels. This paper presents a new region-based technique for skin color detection that outperforms the current state-of-the-art pixel-based method on the popular Compaq dataset (Jones and Rehg, 2002). A clustering technique based on color and spatial distance is used to extract regions from the images, also known as superpixels. In the first step, our technique uses the state-of-the-art non-parametric pixel-based skin color classifier (Jones and Rehg, 2002), which we call the basic skin color classifier. The pixel-based skin color evidence is then aggregated to classify the superpixels. Finally, a Conditional Random Field (CRF) is applied to further improve the results. As the CRF operates over superpixels, the computational overhead is minimal. Our technique achieves a 91.17% true positive rate with a 13.12% false negative rate on the Compaq dataset, tested over approximately 14,000 web images.
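    Aggregating pixel-wise skin evidence over superpixels can be sketched as averaging the basic classifier's per-pixel probabilities inside each region and thresholding the mean. The threshold and function names are illustrative assumptions, and the CRF refinement step is omitted.

```python
import numpy as np

def superpixel_skin_scores(pixel_prob, segments):
    """Average the pixel-wise skin probability inside each superpixel.

    pixel_prob: (H, W) skin probabilities from the basic classifier
    segments:   (H, W) integer superpixel labels (e.g. from SLIC)
    Returns {superpixel_label: mean skin probability}.
    """
    return {int(lbl): float(pixel_prob[segments == lbl].mean())
            for lbl in np.unique(segments)}

def classify_regions(scores, threshold=0.5):
    """Label an entire superpixel as skin if its mean score clears
    the (assumed) threshold."""
    return {lbl: s >= threshold for lbl, s in scores.items()}
```

    Deciding at the region level smooths out isolated pixel misclassifications, which is the core advantage the paper claims over per-pixel decisions.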