
    Evaluation of Sub-Selection Methods for Assessing Climate Change Impacts on Low-Flow and Hydrological Drought Conditions

    A challenge for climate impact studies is the identification of a sub-set of climate model projections from the many typically available. Sub-selection has potential benefits, including making large datasets more meaningful and uncovering underlying relationships. We examine the ability of seven sub-selection methods to capture low-flow and drought characteristics simulated from a large ensemble of climate models for two catchments. Methods include Multi-Cluster Feature Selection (MCFS), Unsupervised Discriminative Feature Selection (UDFS), Diversity-Induced Self-Representation (DISR), Laplacian score (L Score), Structure Preserving Unsupervised Feature Selection (SPUFS), Non-convex Regularized Self-Representation (NRSR) and Katsavounidis–Kuo–Zhang (KKZ). We find that sub-selection methods perform differently in capturing varying aspects of the parent ensemble, i.e. the median, lower or upper bounds. They also vary in their effectiveness by catchment, flow metric and season, making it very difficult to identify a single best sub-selection method for widespread application. Rather, researchers need to carefully judge sub-selection performance based on the aims of their study, the needs of adaptation decision making and the flow metrics of interest, on a catchment-by-catchment basis.
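Of the methods listed, KKZ is the simplest to illustrate: it greedily picks ensemble members that are maximally spread out in the metric space. The sketch below is a minimal, hypothetical illustration of that farthest-point heuristic (the function name `kkz_select` and the toy data are our own, not from the paper):

```python
import numpy as np

def kkz_select(X, k):
    """Select k representative rows of X (n_members x n_metrics) with the
    KKZ farthest-point heuristic: start from the row of largest norm, then
    repeatedly add the row farthest from the current selection."""
    selected = [int(np.argmax(np.linalg.norm(X, axis=1)))]
    while len(selected) < k:
        # distance of every row to its nearest already-selected row
        d = np.min(
            np.linalg.norm(X[:, None, :] - X[selected][None, :, :], axis=2),
            axis=1,
        )
        selected.append(int(np.argmax(d)))
    return selected

# toy ensemble: 20 "climate projections" described by 3 flow metrics
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
subset = kkz_select(X, 5)
print(subset)  # indices of 5 mutually distant ensemble members
```

Because already-selected members have distance zero to the selection, they are never chosen twice; the result is a diverse sub-set rather than one that matches, say, the ensemble median, which is consistent with the paper's finding that different methods capture different aspects of the parent ensemble.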

    Sparse Modeling for Image and Vision Processing

    In recent years, a large amount of multi-disciplinary research has been conducted on sparse models and their applications. In statistics and machine learning, the sparsity principle is used to perform model selection---that is, automatically selecting a simple model among a large collection of them. In signal processing, sparse coding consists of representing data with linear combinations of a few dictionary elements. Subsequently, the corresponding tools have been widely adopted by several scientific communities such as neuroscience, bioinformatics, or computer vision. The goal of this monograph is to offer a self-contained view of sparse modeling for visual recognition and image processing. More specifically, we focus on applications where the dictionary is learned and adapted to data, yielding a compact representation that has been successful in various contexts.
    Comment: 205 pages, to appear in Foundations and Trends in Computer Graphics and Vision
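The core operation the abstract describes, representing a signal as a linear combination of a few dictionary elements, amounts to solving an L1-regularized least-squares (lasso) problem. A minimal sketch using ISTA (iterative soft-thresholding), with a random toy dictionary of our own rather than a learned one:

```python
import numpy as np

def ista(D, x, lam=0.1, n_iter=500):
    """Sparse coding of signal x over dictionary D by solving
    min_a 0.5*||x - D a||^2 + lam*||a||_1 with ISTA."""
    a = np.zeros(D.shape[1])
    step = 1.0 / np.linalg.norm(D, 2) ** 2  # 1 / Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)            # gradient of the quadratic term
        a = a - step * grad
        # soft-thresholding = proximal operator of the L1 penalty
        a = np.sign(a) * np.maximum(np.abs(a) - step * lam, 0.0)
    return a

rng = np.random.default_rng(1)
D = rng.normal(size=(50, 100))
D /= np.linalg.norm(D, axis=0)              # unit-norm dictionary atoms
a_true = np.zeros(100)
a_true[[3, 40, 77]] = [1.5, -2.0, 1.0]      # signal uses only 3 of 100 atoms
x = D @ a_true
a = ista(D, x, lam=0.05)
print(np.count_nonzero(np.abs(a) > 1e-3))   # only a few atoms remain active
```

Dictionary learning, the monograph's focus, alternates this coding step with an update of `D` itself so the atoms adapt to the data.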

    Sparse Coding: A Deep Learning using Unlabeled Data for High-Level Representation

    Sparse coding is an unsupervised feature-learning algorithm for finding succinct, high-level representations of inputs, and it has provided a successful route into deep learning. Our objective is to use unlabeled data to learn high-level representations that support downstream learning tasks. Compared with labeled data, unlabeled data are easier to acquire because they do not need to be annotated with class labels. This makes deep learning more widely applicable to practical problems. The main limitation of standard sparse coding is its quadratic loss function and Gaussian noise model, so it performs poorly when the inputs are binary, integer-valued, or otherwise non-Gaussian. We therefore first propose an algorithm for solving the L1-regularized convex optimization problem, allowing high-level representation of unlabeled data. From this we derive an optimal solution, describing an approach to deep learning using sparse codes.
    Comment: 4 pages, 3 figures, 2014 World Congress on Computing and Communication Technologies (WCCCT)
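The abstract's point about the Gaussian noise model can be made concrete: for binary inputs, the quadratic loss can be swapped for a logistic (Bernoulli) loss while keeping the L1 penalty, solved by the same proximal-gradient pattern. This is a hypothetical sketch of that idea, not the paper's algorithm; the function names and toy data are our own:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def binary_sparse_code(D, x, lam=0.1, step=0.05, n_iter=400):
    """Proximal-gradient sparse coding with a logistic (Bernoulli) loss
    instead of the quadratic/Gaussian loss, for binary inputs x in {0,1}:
        min_a  -sum_i [x_i log p_i + (1 - x_i) log(1 - p_i)] + lam*||a||_1,
    where p = sigmoid(D a)."""
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (sigmoid(D @ a) - x)   # gradient of the logistic loss
        a = a - step * grad
        # soft-thresholding keeps the code sparse
        a = np.sign(a) * np.maximum(np.abs(a) - step * lam, 0.0)
    return a

rng = np.random.default_rng(2)
D = rng.normal(size=(30, 60))
a_true = np.zeros(60)
a_true[[5, 20]] = [3.0, -3.0]
x = (sigmoid(D @ a_true) > 0.5).astype(float)  # binary observation vector
a = binary_sparse_code(D, x)
```

The only change from Gaussian sparse coding is the residual term `sigmoid(D @ a) - x` in the gradient, which is why exponential-family generalizations of sparse coding fit naturally into the same optimization framework.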