
    The equivalence of information-theoretic and likelihood-based methods for neural dimensionality reduction

    Stimulus dimensionality-reduction methods in neuroscience seek to identify a low-dimensional space of stimulus features that affect a neuron's probability of spiking. One popular method, known as maximally informative dimensions (MID), uses an information-theoretic quantity known as "single-spike information" to identify this space. Here we examine MID from a model-based perspective. We show that MID is a maximum-likelihood estimator for the parameters of a linear-nonlinear-Poisson (LNP) model, and that the empirical single-spike information corresponds to the normalized log-likelihood under a Poisson model. This equivalence implies that MID does not necessarily find maximally informative stimulus dimensions when spiking is not well described as Poisson. We provide several examples to illustrate this shortcoming, and derive a lower bound on the information lost when spiking is Bernoulli in discrete time bins. To overcome this limitation, we introduce model-based dimensionality reduction methods for neurons with non-Poisson firing statistics, and show that they can be framed equivalently in likelihood-based or information-theoretic terms. Finally, we show how to overcome practical limitations on the number of stimulus dimensions that MID can estimate by constraining the form of the non-parametric nonlinearity in an LNP model. We illustrate these methods with simulations and data from primate visual cortex.
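    The central quantity here, the empirical single-spike information of a candidate filter, can be made concrete in a few lines. The sketch below is illustrative, not the authors' code; the histogram binning scheme and all variable names are assumptions. MID amounts to maximizing this quantity over the filter, which the paper identifies with maximum likelihood for an LNP model with a nonparametric nonlinearity.

```python
# Hedged sketch: empirical single-spike information (bits/spike) of one
# candidate stimulus filter, estimated by histogramming the projected
# stimulus. Binning and names are illustrative assumptions.
import numpy as np

def single_spike_info(stimuli, spike_counts, filt, n_bins=25):
    """stimuli: (T, D) stimulus vectors; spike_counts: (T,) counts per bin;
    filt: (D,) candidate filter."""
    proj = stimuli @ filt                        # 1-D projected stimulus
    edges = np.histogram_bin_edges(proj, bins=n_bins)
    p_raw, _ = np.histogram(proj, bins=edges)                     # ~ P(x)
    p_spk, _ = np.histogram(proj, bins=edges, weights=spike_counts)  # ~ P(x|spike)
    p_raw = p_raw / p_raw.sum()
    p_spk = p_spk / p_spk.sum()
    mask = (p_raw > 0) & (p_spk > 0)             # avoid log(0) in empty bins
    return np.sum(p_spk[mask] * np.log2(p_spk[mask] / p_raw[mask]))
```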

    Improving the optimisation performance of an ensemble of radial basis functions

    In this paper we investigate surrogate-based optimisation performance using two different ensemble approaches and a novel update strategy based on the local Pearson correlation coefficient. The first ensemble is based on a selective approach, where ns RBFs are constructed and the most accurate RBF is selected for prediction at each iteration, while the others are ignored. The second ensemble uses a combined approach, which takes advantage of ns different RBFs, in the hope of reducing errors in the prediction through a weighted combination of the RBFs used. The update strategy uses the local Pearson correlation coefficient as a constraint to ignore domain areas where there is disagreement between the surrogates. In total, the performance of six different approaches is investigated, using five analytical test functions with 2 to 50 dimensions, and one engineering problem related to the frequency response of a satellite boom with 2 to 40 dimensions.
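    A minimal sketch of the two ensemble modes, using scipy's RBFInterpolator as the surrogate type. The kernel set, the leave-one-out error metric, and the inverse-error weighting are illustrative assumptions, not necessarily the paper's exact choices.

```python
# Hedged sketch of the selective vs. combined RBF ensembles described
# above. Kernels, error metric, and weighting rule are assumptions.
import numpy as np
from scipy.interpolate import RBFInterpolator

KERNELS = ("thin_plate_spline", "cubic", "gaussian")  # assumed kernel set

def build_ensemble(X, y):
    return [RBFInterpolator(X, y, kernel=k, epsilon=1.0) for k in KERNELS]

def loo_errors(X, y):
    # leave-one-out RMSE of each kernel, used to rank/weight surrogates
    errs = np.zeros(len(KERNELS))
    for i in range(len(X)):
        idx = np.arange(len(X)) != i
        for j, k in enumerate(KERNELS):
            s = RBFInterpolator(X[idx], y[idx], kernel=k, epsilon=1.0)
            errs[j] += (s(X[i:i + 1])[0] - y[i]) ** 2
    return np.sqrt(errs / len(X))

def predict_selective(ensemble, errs, x_new):
    return ensemble[np.argmin(errs)](x_new)      # most accurate RBF only

def predict_combined(ensemble, errs, x_new):
    w = 1.0 / errs
    w /= w.sum()                                  # inverse-error weights
    return sum(wi * s(x_new) for wi, s in zip(w, ensemble))
```

    The Pearson-based update strategy would additionally compare the surrogates' predictions in a local neighbourhood of each candidate point and discard regions where their correlation falls below a threshold.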

    Linear and Order Statistics Combiners for Pattern Classification

    Several researchers have experimentally shown that substantial improvements can be obtained in difficult pattern recognition problems by combining or integrating the outputs of multiple classifiers. This chapter provides an analytical framework to quantify the improvements in classification results due to combining. The results apply to both linear combiners and order statistics combiners. We first show that, to a first-order approximation, the error rate obtained over and above the Bayes error rate is directly proportional to the variance of the actual decision boundaries around the Bayes optimum boundary. Combining classifiers in output space reduces this variance, and hence reduces the "added" error. If N unbiased classifiers are combined by simple averaging, the added error rate can be reduced by a factor of N if the individual errors in approximating the decision boundaries are uncorrelated. Expressions are then derived for linear combiners which are biased or correlated, and the effect of output correlations on ensemble performance is quantified. For order statistics based non-linear combiners, we derive expressions that indicate how much the median, the maximum, and in general the i-th order statistic can improve classifier performance. The analysis presented here facilitates the understanding of the relationships among error rates, classifier boundary distributions, and combining in output space. Experimental results on several public domain data sets are provided to illustrate the benefits of combining and to support the analytical results.
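    The central variance-reduction claim is easy to check numerically: for N unbiased classifiers with equicorrelated boundary errors (correlation rho), the variance of the averaged boundary is sigma^2 (1 + (N-1) rho) / N, which reduces to sigma^2 / N when rho = 0. The equicorrelated Gaussian error model below is an assumption made purely for the demonstration.

```python
# Monte-Carlo check of the variance-reduction result quoted above.
import numpy as np

rng = np.random.default_rng(0)
N, sigma, rho, trials = 10, 1.0, 0.3, 200_000

# draw equicorrelated boundary errors for N classifiers
cov = sigma**2 * ((1 - rho) * np.eye(N) + rho * np.ones((N, N)))
errors = rng.multivariate_normal(np.zeros(N), cov, size=trials)

var_single = errors[:, 0].var()                  # ~ sigma^2
var_avg = errors.mean(axis=1).var()              # ~ sigma^2*(1+(N-1)rho)/N
print(f"single classifier : {var_single:.4f}")
print(f"N-average         : {var_avg:.4f}")
print(f"theory            : {sigma**2 * (1 + (N - 1) * rho) / N:.4f}")
```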

    Partitioning the impacts of streamflow and evaporation uncertainty on the operations of multipurpose reservoirs in arid regions

    Ongoing changes in global climate are expected to alter the hydrologic regime of many river basins worldwide, expanding historically observed variability as well as increasing the frequency and intensity of extreme events. Understanding the vulnerabilities of water systems under such uncertain and variable hydrologic conditions is key to supporting strategic planning and design adaptation options. In this paper, we contribute a multiobjective assessment of the impacts of hydrologic uncertainty on the operations of multipurpose water reservoir systems in arid climates. We focus our analysis on the Dez and Karoun river system in Iran, which is responsible for more than 20% of the country's total hydropower generation. A system of dams controls most of the water flowing to the lower part of the basin, where irrigation and domestic supply are strategic objectives, along with flood protection. We first design the optimal operations of the system using observed inflows and evaporation rates. Then, we simulate the resulting solutions over different ensembles of stochastic hydrology to partition the impacts of streamflow and evaporation uncertainty. Numerical results show that system operations are extremely sensitive to alterations of both uncertainty sources. In particular, we show that in this arid river basin, long-term objectives are mainly vulnerable to inflow uncertainty, whereas evaporation rate uncertainty mostly affects short-term objectives. Our results suggest that local water authorities should properly characterize hydrologic uncertainty in the design of future operations of the expanded network of reservoirs, possibly also investing in the improvement of the existing monitoring network to obtain more reliable data for modeling streamflow and evaporation processes.
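    The partitioning logic, re-simulating a fixed operating policy while perturbing one hydrologic input at a time, can be sketched as follows. The single-reservoir mass balance, the placeholder release rule, and the lognormal noise model are illustrative assumptions, not the Dez and Karoun model.

```python
# Hedged sketch of the uncertainty-partitioning experiment: a fixed
# policy is re-simulated under ensembles where only one input (inflow
# or evaporation) is stochastic at a time. All models are placeholders.
import numpy as np

def simulate(policy, inflow, evap, s0=500.0, s_max=1000.0):
    """Single-reservoir mass balance; returns mean supply deficit."""
    s, deficits = s0, []
    for q_t, e_t in zip(inflow, evap):
        target = policy(s)                       # desired release
        release = min(target, s + q_t)           # limited by available water
        s = np.clip(s + q_t - release - e_t, 0.0, s_max)
        deficits.append(max(0.0, target - release))
    return np.mean(deficits)

policy = lambda s: 0.1 * s                       # placeholder release rule
hist_q = np.full(365, 40.0)                      # historical inflow trace
hist_e = np.full(365, 5.0)                       # historical evaporation trace
rng = np.random.default_rng(1)

# perturb one source at a time, hold the other at its historical trace
j_inflow = [simulate(policy, hist_q * rng.lognormal(0, 0.3, 365), hist_e)
            for _ in range(100)]
j_evap = [simulate(policy, hist_q, hist_e * rng.lognormal(0, 0.3, 365))
          for _ in range(100)]
print("spread from inflow uncertainty     :", np.std(j_inflow))
print("spread from evaporation uncertainty:", np.std(j_evap))
```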

    A Simulated Annealing Based Optimization Algorithm

    In modern engineering, finding an optimal design is formulated as an optimization problem which involves evaluating a computationally expensive black-box function. To alleviate these difficulties, such problems are often solved by using a metamodel, which approximates the computer simulation and provides predicted values at a much lower computational cost. While metamodels can significantly improve the efficiency of the design process, they also introduce several challenges, such as a high evaluation cost, the need to effectively search the metamodel landscape to locate good solutions, and the selection of which metamodel is most suitable to the problem being solved. To address these challenges, this chapter proposes an algorithm that uses a hybrid simulated annealing and SQP search to effectively search the metamodel. It also uses ensembles that combine the predictions of several metamodels to improve the overall prediction accuracy. To further improve the ensemble accuracy, it adapts the ensemble topology during the search. Finally, to ensure convergence to a valid optimum in the presence of metamodel inaccuracies, the proposed algorithm operates within a trust-region framework. An extensive performance analysis based on both mathematical test functions and an engineering application shows the effectiveness of the proposed algorithm.
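    One iteration of the hybrid search can be sketched as below: simulated annealing explores the metamodel globally, SQP polishes the result, and a trust region around the incumbent is grown or shrunk by comparing predicted against actual improvement. scipy's dual_annealing and SLSQP stand in for the chapter's solvers, and the update thresholds are conventional trust-region values, not taken from the chapter.

```python
# Hedged sketch of one trust-region iteration of a hybrid SA + SQP
# search over a metamodel. Solvers and thresholds are assumptions.
import numpy as np
from scipy.optimize import dual_annealing, minimize
from scipy.interpolate import RBFInterpolator

def trust_region_step(expensive_f, X, y, x_best, radius):
    surrogate = RBFInterpolator(X, y)            # metamodel of data so far
    bounds = [(xi - radius, xi + radius) for xi in x_best]

    # global SA search on the cheap surrogate, then local SQP refinement
    sa = dual_annealing(lambda x: surrogate([x])[0], bounds, maxiter=200, seed=0)
    sqp = minimize(lambda x: surrogate([x])[0], sa.x, method="SLSQP", bounds=bounds)

    y_new = expensive_f(sqp.x)                   # one true evaluation
    predicted = y.min() - sqp.fun                # improvement the metamodel promised
    actual = y.min() - y_new                     # improvement actually obtained
    ratio = actual / predicted if predicted > 0 else 0.0
    radius *= 2.0 if ratio > 0.75 else (0.5 if ratio < 0.25 else 1.0)
    return sqp.x, y_new, radius

# toy usage on a 2-D quadratic
f = lambda x: float((x[0] - 1) ** 2 + (x[1] + 2) ** 2)
X = np.random.default_rng(3).uniform(-5, 5, (20, 2))
y = np.array([f(x) for x in X])
x_new, y_new, radius = trust_region_step(f, X, y, X[np.argmin(y)], radius=2.0)
```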

    Designing robust forecasting ensembles of Data-Driven Models with a Multi-Objective Formulation: An application to Home Energy Management Systems

    This work proposes a procedure for the multi-objective design of a robust forecasting ensemble of data-driven models. Starting with a data-selection algorithm, a multi-objective genetic algorithm (MOGA) is then executed, performing topology and feature selection as well as parameter estimation. From the set of non-dominated or preferential models, a smaller subset is chosen to form the ensemble. Prediction intervals for the ensemble are obtained using the covariance method. This procedure is illustrated in the design of four different models required for energy management systems. Excellent results were obtained by this methodology, surpassing the existing alternatives. Further research will incorporate a robustness criterion in MOGA and employ the prediction intervals in predictive control techniques.
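    One plausible reading of the final ensemble step is sketched below: forecasts from the MOGA-selected members are averaged, and a prediction interval is built from the members' spread at each time step. The Gaussian z-factor and the member-variance form of the "covariance method" are assumptions for illustration, not the authors' exact formulation.

```python
# Hedged sketch: ensemble forecast with a covariance-based prediction
# interval. The Gaussian ~95% z-factor is an assumption.
import numpy as np

def ensemble_forecast(member_preds, z=1.96):
    """member_preds: (M, T) forecasts from M selected models over T steps."""
    mean = member_preds.mean(axis=0)
    var = member_preds.var(axis=0, ddof=1)       # member spread per time step
    half = z * np.sqrt(var)
    return mean, mean - half, mean + half        # forecast and interval bounds

# toy members: 5 models forecasting 48 steps ahead
preds = np.random.default_rng(2).normal(20.0, 1.5, size=(5, 48))
y_hat, lo, hi = ensemble_forecast(preds)
```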