
    Stochastic techniques for the design of robust and efficient emission trading mechanisms

    The assessment of greenhouse gases (GHGs) emitted to and removed from the atmosphere is high on both political and scientific agendas internationally. As increasing international concern and cooperation aim at policy-oriented solutions to the climate change problem, several issues have begun to arise regarding verification and compliance under both proposed and legislated schemes meant to reduce the human-induced global climate impact. The issues of concern are rooted in the level of confidence with which national emission assessments can be performed, as well as the management of uncertainty and its role in developing informed policy. The approaches to addressing uncertainty that were discussed at the 2nd International Workshop on Uncertainty in Greenhouse Gas Inventories attempt to improve national inventories or to provide a basis for the standardization of inventory estimates so that emissions and emission changes can be compared across countries. Some authors use detailed uncertainty analyses to enforce the current structure of the emissions trading system, while others attempt to internalize high levels of uncertainty by tailoring the rules of the emissions trading market. In all approaches, uncertainty analysis is regarded as a key component of national GHG inventory analyses. This presentation provides an overview of the topics discussed among scientists at the workshop to support robust decision making. These range from achieving and reporting GHG emission inventories at global, national and sub-national scales; to accounting for uncertainty of emissions and emission changes across these scales; to bottom-up versus top-down emission analyses; to detecting and analyzing emission changes vis-a-vis their underlying uncertainties; to reconciling short-term emission commitments and long-term concentration targets; to dealing with verification, compliance and emissions trading; to communicating, negotiating and effectively using uncertainty.
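
    The workshop theme of detecting emission changes against their underlying uncertainties lends itself to a short quantitative illustration. The sketch below, which is illustrative rather than any procedure prescribed at the workshop, propagates two inventory uncertainties into the uncertainty of an emission change, assuming independent, normally distributed errors; the function name, the 1.96 critical value, and the example numbers are assumptions made here for demonstration only.

```python
import numpy as np

def change_detectable(e1, u1, e2, u2, z=1.96):
    """Check whether an emission change between two inventory years is
    statistically distinguishable from zero, given relative uncertainties.

    e1, e2 : emission estimates for the base and commitment years (same units)
    u1, u2 : relative (1-sigma) uncertainties of the two estimates
    z      : critical value (1.96 ~ 95% confidence), assuming normality and
             independence of the two inventory errors (an assumption here).
    """
    change = e2 - e1
    s1, s2 = e1 * u1, e2 * u2          # absolute 1-sigma uncertainties
    s_change = np.hypot(s1, s2)        # uncertainty of the difference under independence
    return change, s_change, abs(change) > z * s_change

# Example: a reported 6% reduction with 10% inventory uncertainty in each year
change, sigma, detectable = change_detectable(100.0, 0.10, 94.0, 0.10)
print(f"change = {change:.1f} +/- {sigma:.1f} (detectable: {detectable})")
```

    Under these illustrative assumptions, the 6% reported reduction is not statistically distinguishable from zero, which is exactly the kind of verification difficulty the abstract describes.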

    Hyperspectral Unmixing Overview: Geometrical, Statistical, and Sparse Regression-Based Approaches

    Imaging spectrometers measure electromagnetic energy scattered in their instantaneous field of view in hundreds or thousands of spectral channels with higher spectral resolution than multispectral cameras. Imaging spectrometers are therefore often referred to as hyperspectral cameras (HSCs). Higher spectral resolution enables material identification via spectroscopic analysis, which facilitates countless applications that require identifying materials in scenarios unsuitable for classical spectroscopic analysis. Due to the low spatial resolution of HSCs, microscopic material mixing, and multiple scattering, spectra measured by HSCs are mixtures of the spectra of the materials in a scene. Thus, accurate estimation requires unmixing. Pixels are assumed to be mixtures of a few materials, called endmembers. Unmixing involves estimating all or some of: the number of endmembers, their spectral signatures, and their abundances at each pixel. Unmixing is a challenging, ill-posed inverse problem because of model inaccuracies, observation noise, environmental conditions, endmember variability, and data set size. Researchers have devised and investigated many models searching for robust, stable, tractable, and accurate unmixing algorithms. This paper presents an overview of unmixing methods from the time of Keshava and Mustard's unmixing tutorial [1] to the present. Mixing models are discussed first. Signal-subspace, geometrical, statistical, sparsity-based, and spatial-contextual unmixing algorithms are described. Mathematical problems and potential solutions are described. Algorithm characteristics are illustrated experimentally. Comment: This work has been accepted for publication in the IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing.
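
    Most of the surveyed algorithms start from the linear mixing model, in which each pixel spectrum is a nonnegative, roughly sum-to-one combination of endmember signatures. The following minimal sketch assumes the endmember matrix is already known and uses nonnegative least squares as one of many possible abundance-inversion strategies; the function names and the toy data are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.optimize import nnls

def unmix_pixel(endmembers, pixel):
    """Estimate abundances for one pixel under the linear mixing model.

    endmembers : (n_bands, n_endmembers) matrix of endmember spectra
    pixel      : (n_bands,) observed spectrum
    Returns abundances rescaled to sum to one (the usual physical constraint,
    imposed approximately here).
    """
    a, _residual = nnls(endmembers, pixel)      # enforces nonnegativity
    s = a.sum()
    return a / s if s > 0 else a

# Toy example: two synthetic endmembers and a noisy 60/40 mixture
rng = np.random.default_rng(0)
E = np.abs(rng.normal(size=(50, 2)))            # hypothetical signatures
y = E @ np.array([0.6, 0.4]) + 0.01 * rng.normal(size=50)
print(unmix_pixel(E, y))                        # approximately [0.6, 0.4]
```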

    Robust fuzzy clustering for object recognition and classification of relational data

    Prototype-based fuzzy clustering algorithms have the unique ability to partition data while detecting multiple clusters simultaneously. However, since real data are often contaminated with noise, clustering methods need to be made robust to be useful in practice. This dissertation focuses on the robust detection of multiple clusters from noisy range images for object recognition. Dave's noise clustering (NC) method has been shown to make prototype-based fuzzy clustering techniques robust. In this work, NC is generalized and the new NC membership is shown to be the product of the fuzzy c-means (FCM) membership and a robust M-estimator weight (or possibilistic membership). The generalized NC approach is thus shown to have the partitioning ability of FCM and the robustness of M-estimators. Since the NC (and FCM) algorithms are based on a fixed-point iteration technique, they suffer from the problem of initialization. To overcome this problem, the sampling-based robust LMS algorithm is considered and extended to a fuzzy c-LMS algorithm for detecting multiple clusters. The concept of repeated evidence is incorporated to increase the speed of the new approach. The main problem with the LMS approach is the need to order the distance data. To eliminate this problem, a novel sampling-based robust algorithm following the NC principle, called the NLS method, is proposed; it directly searches for clusters in the maximum-density region of the range data without requiring the number of clusters to be specified. The NC concept is also introduced into several fuzzy methods for robust classification of relational data for pattern recognition, and is extended to non-Euclidean relational data. The resulting algorithms are used for object recognition from range images as well as for identifying bottleneck parts while creating desegregated machine/component cells in cellular manufacturing and group technology (GT) applications.
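
    The noise clustering idea can be summarized compactly: alongside the c real prototypes there is a fictitious noise prototype lying at a constant distance delta from every point, so outliers receive low membership in all real clusters. The sketch below is a generic rendition of that formulation with Euclidean distances; the parameter values, initialization, and stopping rule are assumptions for illustration and do not reproduce the generalized NC, fuzzy c-LMS, or NLS algorithms developed in the dissertation.

```python
import numpy as np

def noise_clustering(X, c, delta=1.0, m=2.0, iters=100, seed=0):
    """Noise-clustering variant of fuzzy c-means: an extra 'noise' prototype
    sits at constant distance `delta` from every point, so outliers end up
    with low membership in every real cluster.

    X     : (n_points, n_features) data
    c     : number of real clusters
    delta : noise distance (tuning parameter; larger values reject fewer points)
    m     : fuzzifier (> 1); m = 2 is the common default
    """
    rng = np.random.default_rng(seed)
    V = X[rng.choice(len(X), c, replace=False)]           # initial prototypes
    p = 2.0 / (m - 1.0)
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - V[None, :, :], axis=2) + 1e-12
        inv = d ** (-p)
        # membership in real clusters; the deficit to 1 is the noise membership
        u = inv / (inv.sum(axis=1, keepdims=True) + delta ** (-p))
        w = u ** m                                         # fuzzified weights
        V = (w.T @ X) / w.sum(axis=0)[:, None]             # prototype update
    return V, u

# Example: two tight clusters plus one far outlier
pts = np.vstack([np.random.default_rng(1).normal(0, 0.1, (20, 2)),
                 np.random.default_rng(2).normal(3, 0.1, (20, 2)),
                 [[50.0, 50.0]]])
centers, memberships = noise_clustering(pts, c=2, delta=2.0)
print(centers)
print(memberships[-1])   # outlier: low membership in both real clusters
```

    In this formulation the memberships of a point in the real clusters sum to less than one, and the deficit (the implicit noise membership) plays the role of the robust down-weighting mentioned in the abstract.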

    A spatial-type interval-valued median for random intervals

    © 2018 Informa UK Limited, trading as Taylor & Francis Group. To estimate the central tendency or location of a sample of interval-valued data, a standard statistic is the interval-valued sample mean. Its strong sensitivity to outliers or data changes motivates the search for more robust alternatives. In this respect, a more robust location statistic is studied in this paper. This measure is inspired by the concept of the spatial median and makes use of the versatile generalized Bertoluzza metric between intervals, the so-called dθ distance. The problem of minimizing the mean dθ distance to the values the random interval takes, which defines the spatial-type dθ-median, is analysed. Existence and uniqueness of the sample version are shown. Furthermore, the robustness of this proposal is investigated by deriving its finite sample breakdown point. Finally, a real-life example from economics illustrates the robustness of the sample dθ-median, and simulation studies provide comparisons with the mean and several recently introduced robust location measures for interval-valued data.
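
    To make the optimization concrete, the sketch below works with one common parameterization of the dθ distance through interval midpoints and spreads, d_theta(A, B)^2 = (midA - midB)^2 + theta * (sprA - sprB)^2, and computes the sample spatial-type median with a generic Weiszfeld-style iteration. The parameterization, solver, and toy data are assumptions for illustration and are not claimed to be the paper's exact definition or algorithm.

```python
import numpy as np

def dtheta_median(lo, hi, theta=1.0, iters=200, tol=1e-10):
    """Spatial-type median of a sample of intervals [lo_i, hi_i].

    Embeds each interval as the point (mid, sqrt(theta) * spread) in R^2, so
    the d_theta distance becomes the Euclidean distance there, and finds the
    point minimizing the mean distance to the sample with a Weiszfeld-type
    iteration (a standard spatial-median solver).
    """
    mid = (lo + hi) / 2.0
    spr = (hi - lo) / 2.0
    P = np.column_stack([mid, np.sqrt(theta) * spr])    # embed intervals in R^2
    x = P.mean(axis=0)                                   # start from the mean
    for _ in range(iters):
        d = np.linalg.norm(P - x, axis=1)
        d = np.where(d < tol, tol, d)                    # guard against division by zero
        x_new = (P / d[:, None]).sum(axis=0) / (1.0 / d).sum()
        if np.linalg.norm(x_new - x) < tol:
            x = x_new
            break
        x = x_new
    m, s = x[0], x[1] / np.sqrt(theta)
    return np.array([m - s, m + s])                      # back to [inf, sup] form

# Example: one wildly wide outlying interval barely moves the median interval
lo = np.array([1.0, 1.2, 0.9, 1.1, -20.0])
hi = np.array([2.0, 2.1, 1.8, 2.2, 60.0])
print(dtheta_median(lo, hi, theta=1/3))
```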

    Scalar Quantization as Sparse Least Square Optimization

    Quantization can be used to form new vectors/matrices whose shared values are close to the original ones. In recent years, the popularity of scalar quantization for value-sharing applications has been soaring, as it has found wide utility in reducing the complexity of neural networks. Existing clustering-based quantization techniques, while well developed, have several drawbacks, including dependency on the random seed, empty or out-of-range clusters, and high time complexity for a large number of clusters. To overcome these problems, this paper examines scalar quantization from a new perspective, namely sparse least squares optimization. Specifically, inspired by the properties of sparse least squares regression, several quantization algorithms based on l_1 least squares are proposed. In addition, similar schemes with l_1 + l_2 and l_0 regularization are proposed. Furthermore, to compute quantization results with a given number of values/clusters, the paper designs an iterative method and a clustering-based method, both built on sparse least squares. The paper shows that the latter method is mathematically equivalent to an improved version of the k-means clustering-based quantization algorithm, although the two algorithms originated from different intuitions. The proposed algorithms were tested on three types of data, and their computational performance, including information loss, time consumption, and the distribution of the values of the sparse vectors, was compared and analyzed. The paper offers a new perspective on the area of quantization, and the proposed algorithms can outperform existing methods, especially under bit-width reduction scenarios in which the required post-quantization resolution (number of values) is not significantly lower than the original number.
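
    For context, the clustering-based formulation that the paper positions its sparse-least-squares methods against can be written in a few lines: a one-dimensional k-means that replaces every entry of a vector with one of k shared values. The sketch below shows that baseline, including the random-seed dependence and possible empty clusters criticized above; it is not an implementation of the proposed l_1-based algorithms, and the function name and example data are assumptions made here.

```python
import numpy as np

def kmeans_scalar_quantize(x, k, iters=50, seed=0):
    """Clustering-based scalar quantization: represent a vector with k shared
    values (the codebook) plus one index per entry. This is the conventional
    k-means baseline, not the paper's sparse-least-squares algorithms.
    """
    rng = np.random.default_rng(seed)
    codebook = rng.choice(x, size=k, replace=False)       # random init (seed-dependent)
    for _ in range(iters):
        # assign each value to its nearest codebook entry
        idx = np.argmin(np.abs(x[:, None] - codebook[None, :]), axis=1)
        for j in range(k):
            members = x[idx == j]
            if members.size:                              # empty clusters can occur
                codebook[j] = members.mean()
    return codebook, idx

# Quantize 10,000 weights to 2^4 = 16 shared values and measure the loss
w = np.random.default_rng(1).normal(size=10_000).astype(np.float64)
cb, idx = kmeans_scalar_quantize(w, k=16)
print("MSE:", np.mean((w - cb[idx]) ** 2))
```

    With 16 shared values the per-weight index needs only 4 bits, roughly an eightfold reduction relative to 32-bit floats (ignoring the small codebook), at the cost of the printed reconstruction error.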

    Sistemas granulares evolutivos (Evolving granular systems)

    Advisor: Fernando Antonio Campos Gomide. Doctoral thesis, Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação. Abstract: In recent years there has been increasing interest in computational modeling approaches to deal with real-world data streams. Methods and algorithms have been proposed to uncover meaningful knowledge from very large (often unbounded) data sets that in principle have no apparent value. This thesis introduces a framework for evolving granular modeling of uncertain data streams. Evolving granular systems comprise an array of online modeling approaches inspired by the way in which humans deal with complexity. These systems explore the information flow in dynamic environments and derive from it models that can be linguistically understood. In particular, information granulation is a natural technique for dispensing with unnecessary details and emphasizing the transparency, interpretability and scalability of information systems. Uncertain (granular) data arise from imprecise perception or description of the value of a variable. Broadly stated, various factors can affect one's choice of data representation such that the representing object conveys the meaning of the concept it is being used to represent. Of particular concern to this work are numerical, interval, and fuzzy types of granular data, and interval, fuzzy, and neurofuzzy modeling frameworks. Learning in evolving granular systems is based on incremental algorithms that build model structure from scratch on a per-sample basis and adapt model parameters whenever necessary. This learning paradigm is meaningful because it avoids redesigning and retraining models whenever the system changes. Application examples in classification, function approximation, time-series prediction and control using real and synthetic data illustrate the usefulness of the proposed granular approaches and framework. The behavior of nonstationary data streams with gradual and abrupt regime shifts is also analyzed within the evolving granular computing paradigm. We shed light on the role of interval, fuzzy, and neurofuzzy computing in processing uncertain data and providing high-quality approximate solutions and rule summaries of input-output data sets. The approaches and framework introduced constitute a natural extension of evolving intelligent systems over numeric data streams to evolving granular systems over granular data streams. Degree: Doutorado (Doctorate) in Electrical Engineering, concentration in Automation.
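
    The per-sample, no-retraining learning paradigm described above can be illustrated with a deliberately small sketch: interval granules that are expanded when a new sample fits within a maximum width and created otherwise. This is a generic illustration of evolving granulation under assumptions made here (the class name, the single width threshold rho, and the toy stream); it is not one of the interval, fuzzy, or neurofuzzy models proposed in the thesis.

```python
import numpy as np

class IntervalGranules:
    """Minimal evolving-granular learner: interval granules are created and
    adapted one sample at a time, with no retraining when the stream drifts.
    """
    def __init__(self, rho=0.5):
        self.rho = rho                    # maximum allowed granule width per feature
        self.lower, self.upper = [], []   # granule bounds

    def update(self, x):
        x = np.asarray(x, dtype=float)
        for i, (lo, hi) in enumerate(zip(self.lower, self.upper)):
            new_lo, new_hi = np.minimum(lo, x), np.maximum(hi, x)
            if np.all(new_hi - new_lo <= self.rho):   # sample fits: expand granule
                self.lower[i], self.upper[i] = new_lo, new_hi
                return i
        self.lower.append(x.copy())                   # otherwise: create a new granule
        self.upper.append(x.copy())
        return len(self.lower) - 1

# Streaming example with an abrupt regime shift after sample 50
stream = np.concatenate([np.linspace(0.0, 0.4, 50), np.linspace(2.0, 2.4, 50)])
model = IntervalGranules(rho=0.5)
for value in stream:
    model.update([value])
print(len(model.lower), "granules")   # the shift creates a second granule
```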