7,179 research outputs found

    Evaluation of Sub-Selection Methods for Assessing Climate Change Impacts on Low-Flow and Hydrological Drought Conditions

    A challenge for climate impact studies is identifying a sub-set of climate model projections from the many typically available. Sub-selection has potential benefits, including making large datasets more manageable and uncovering underlying relationships. We examine the ability of seven sub-selection methods to capture the low-flow and drought characteristics simulated from a large ensemble of climate models for two catchments. The methods are Multi-Cluster Feature Selection (MCFS), Unsupervised Discriminative Feature Selection (UDFS), Diversity-Induced Self-Representation (DISR), Laplacian score (L Score), Structure Preserving Unsupervised Feature Selection (SPUFS), Non-convex Regularized Self-Representation (NRSR) and Katsavounidis–Kuo–Zhang (KKZ). We find that the sub-selection methods differ in how well they capture particular aspects of the parent ensemble, i.e. its median, lower or upper bounds. Their effectiveness also varies by catchment, flow metric and season, making it very difficult to identify a single best sub-selection method for widespread application. Rather, researchers need to judge sub-selection performance carefully against the aims of their study, the needs of adaptation decision making and the flow metrics of interest, on a catchment-by-catchment basis.
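    To make the flavour of these methods concrete, here is a minimal Python sketch of KKZ-style greedy sub-selection. The starting rule (the member farthest from the ensemble mean), the array shapes and the use of standardized flow metrics are assumptions for illustration, not details from the paper.

        import numpy as np

        def kkz_subselect(X, k):
            """KKZ-style greedy sub-selection (illustrative variant).

            X : (n_members, n_metrics) array of standardized flow metrics,
                one row per ensemble member. Starts from the member farthest
                from the ensemble mean, then repeatedly adds the member whose
                distance to its nearest already-selected member is largest,
                so the chosen subset spans the ensemble.
            """
            selected = [int(np.argmax(np.linalg.norm(X - X.mean(axis=0), axis=1)))]
            while len(selected) < k:
                # Distance of every member to its nearest already-selected member.
                d = np.linalg.norm(X[:, None, :] - X[selected][None, :, :], axis=2).min(axis=1)
                d[selected] = -np.inf  # never re-select a member
                selected.append(int(np.argmax(d)))
            return selected

        # Example: pick 5 of 30 hypothetical members described by 8 metrics.
        rng = np.random.default_rng(0)
        X = rng.standard_normal((30, 8))
        print(kkz_subselect(X, 5))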

    AutoEncoder Inspired Unsupervised Feature Selection

    High-dimensional data in areas such as computer vision and machine learning brings computational and analytical difficulties. Feature selection, which selects a subset of the observed features, is a widely used approach for improving the performance and effectiveness of machine learning models on high-dimensional data. In this paper, we propose a novel AutoEncoder Feature Selector (AEFS) for unsupervised feature selection that combines autoencoder regression with a group lasso task. Compared to traditional feature selection methods, AEFS can select the most important features by exploiting both linear and nonlinear relationships among features, making it more flexible than conventional self-representation methods for unsupervised feature selection, which rest on purely linear assumptions. Experimental results on benchmark datasets show that the proposed method is superior to state-of-the-art methods. Comment: accepted by ICASSP 2018.
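    A minimal PyTorch sketch of an AEFS-style objective follows: an autoencoder's reconstruction loss plus a group-lasso (L2,1) penalty that groups encoder weights by input feature, so that uninformative features have their whole weight group driven toward zero. The layer sizes, activation and penalty weight alpha are assumptions, not values from the paper.

        import torch
        import torch.nn as nn

        class AEFS(nn.Module):
            def __init__(self, n_features, n_hidden):
                super().__init__()
                self.encoder = nn.Linear(n_features, n_hidden)
                self.decoder = nn.Linear(n_hidden, n_features)

            def forward(self, x):
                return self.decoder(torch.relu(self.encoder(x)))

        def aefs_loss(model, x, alpha=1e-3):
            # Column j of encoder.weight holds every weight attached to input
            # feature j, so the column norms form the per-feature groups.
            l21 = model.encoder.weight.norm(dim=0).sum()
            return nn.functional.mse_loss(model(x), x) + alpha * l21

        X = torch.randn(256, 50)  # toy unlabeled data
        model = AEFS(n_features=50, n_hidden=16)
        opt = torch.optim.Adam(model.parameters(), lr=1e-2)
        for _ in range(200):
            opt.zero_grad()
            aefs_loss(model, X).backward()
            opt.step()

        # Rank features by the norm of their encoder weight group.
        scores = model.encoder.weight.norm(dim=0)
        top_features = torch.argsort(scores, descending=True)[:10]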

    Unsupervised feature learning with discriminative encoder

    In recent years, deep discriminative models have achieved extraordinary performance on supervised learning tasks, significantly outperforming their generative counterparts. However, their success relies on the presence of a large amount of labeled data. How can one use the same discriminative models to learn useful features in the absence of labels? We address this question in this paper by jointly modeling the distribution of data and latent features in a manner that explicitly assigns zero probability to unobserved data. Rather than maximizing the marginal probability of the observed data, we maximize the joint probability of the data and the latent features using a two-step EM-like procedure. To prevent the model from overfitting to our initial selection of latent features, we use adversarial regularization. Depending on the task, we allow the latent features to be one-hot or real-valued vectors and define a suitable prior on the features. For instance, one-hot features correspond to class labels and are used directly for unsupervised and semi-supervised classification tasks, whereas real-valued feature vectors are fed as input to simple classifiers for auxiliary supervised discrimination tasks. The proposed model, which we dub the discriminative encoder (or DisCoder), is flexible in the type of latent features it can capture, and it achieves state-of-the-art performance on several challenging tasks. Comment: 10 pages, 4 figures, International Conference on Data Mining, 2017.
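    The alternating structure described above can be sketched as follows for the one-hot case. This is a deliberately stripped-down illustration: the adversarial regularization the paper uses to keep latent assignments from collapsing is omitted, and the network, data and step counts are placeholders, not the paper's setup.

        import torch
        import torch.nn as nn

        encoder = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
        opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)
        X = torch.randn(512, 784)  # toy unlabeled data

        for step in range(100):
            # "E-step": for each point, pick the one-hot latent feature the
            # current model scores highest (its argmax class).
            with torch.no_grad():
                z = encoder(X).argmax(dim=1)
            # "M-step": update the discriminative model to raise the joint
            # probability of the data and the chosen latent features.
            opt.zero_grad()
            nn.functional.cross_entropy(encoder(X), z).backward()
            opt.step()

        # Without a regularizer this loop can collapse to a single cluster;
        # the paper's adversarial term exists precisely to prevent that.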

    Unsupervised spectral sub-feature learning for hyperspectral image classification

    Spectral pixel classification is one of the principal techniques used in hyperspectral image (HSI) analysis. In this article, we propose an unsupervised feature learning method for the classification of hyperspectral images. The proposed method learns a dictionary of sub-feature basis representations from the spectral domain, which allows effective use of the correlated spectral data. The learned dictionary is then used to encode convolutional samples from the hyperspectral input pixels into an expanded but sparse feature space. Expanded hyperspectral feature representations enable linear separation between the object classes present in an image. To evaluate the proposed method, we performed experiments on several commonly used HSI data sets acquired at different locations and by different sensors. Our experimental results show that the proposed method outperforms other pixel-wise classification methods that rely on unsupervised feature extraction. Additionally, even though our approach uses no prior knowledge or labelled training data to learn features, it yields classification accuracy that is either better than, or comparable to, that of recent semi-supervised methods.
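    The overall shape of such a pipeline can be sketched with scikit-learn: learn a dictionary over spectra, sparsely encode each pixel into an expanded feature space, then fit a linear classifier. This is a generic dictionary-learning baseline under assumed shapes and hyperparameters, not the authors' method; in particular, it skips the convolutional sampling of spectral sub-features.

        import numpy as np
        from sklearn.decomposition import MiniBatchDictionaryLearning
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        n_pixels, n_bands = 2000, 103                          # assumed HSI dimensions
        X = np.abs(rng.standard_normal((n_pixels, n_bands)))   # stand-in spectra
        y = rng.integers(0, 9, n_pixels)                       # stand-in labels (evaluation only)

        # Learn a 256-atom dictionary and encode each pixel as a sparse
        # combination of atoms (OMP with at most 10 nonzero coefficients),
        # expanding 103 bands into a 256-dimensional sparse representation.
        dico = MiniBatchDictionaryLearning(n_components=256, alpha=1.0,
                                           transform_algorithm="omp",
                                           transform_n_nonzero_coefs=10,
                                           random_state=0)
        codes = dico.fit(X).transform(X)

        # The expanded sparse codes are then (more) linearly separable.
        clf = LogisticRegression(max_iter=1000).fit(codes, y)
        print("train accuracy:", clf.score(codes, y))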