75,624 research outputs found

    Entropy generation analysis for the design optimization of solid oxide fuel cells

    Purpose - The aim of this paper is to investigate performance improvements of a monolithic solid oxide fuel cell geometry through an entropy generation analysis. Design/methodology/approach - The analysis of entropy generation rates makes it possible to identify the phenomena that cause the main irreversibilities in the fuel cell, to understand their causes and to propose changes in the design and operation of the system. The various contributions to entropy generation are analyzed separately in order to identify which geometrical parameters should be considered as the independent variables in the optimization procedure. The local entropy generation rates are obtained through 3D numerical calculations, which account for heat, mass, momentum, species and current transport. The system is then optimized to minimize the overall entropy generation and increase efficiency. Findings - In the optimized geometry, the power density is increased by about 10 per cent compared to typical designs. In addition, a 20 per cent reduction in the fuel cell volume can be achieved with less than a 1 per cent reduction in the power density with respect to the optimal design. Research limitations/implications - The physical model is based on a simple composition of the reactants, which also implies that no chemical reactions (water gas shift, methane steam reforming, etc.) take place in the fuel cell. Nevertheless, the entire procedure could be applied to different gas compositions. Practical implications - Entropy generation analysis allows one to identify the geometrical parameters that are expected to play important roles in the optimization process and thus to reduce the number of free independent variables that have to be considered. This information may also be used for design improvement purposes. Originality/value - In this paper, entropy generation analysis is used for a multi-physics problem that involves various irreversible terms, with a double use of this physical quantity: as a guide to select the most relevant geometrical design quantities to be modified, and as the objective function to be minimized in the optimization process.
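The core idea of the abstract above, entropy generation as the objective function of a geometry optimization, can be illustrated with a toy model. The coefficients and the single geometric parameter below are entirely hypothetical; the paper's objective comes from full 3D transport calculations, not a closed form:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical coefficients for a single geometric parameter w
# (e.g. a channel width): one irreversibility term falls as w grows,
# the other rises with it.
A_OHMIC = 2.0  # assumed ohmic/activation contribution coefficient
B_MASS = 0.5   # assumed mass-transport contribution coefficient

def total_entropy_generation(w):
    """Toy total entropy generation rate: S(w) = A/w + B*w."""
    return A_OHMIC / w + B_MASS * w

res = minimize_scalar(total_entropy_generation, bounds=(0.1, 10.0),
                      method="bounded")
# analytic minimum of A/w + B*w lies at w* = sqrt(A/B)
```

Analyzing the contributions separately, as the paper does, is what reveals which geometric parameters drive which term of such a sum.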

    Chance-Constrained Outage Scheduling using a Machine Learning Proxy

    Outage scheduling aims at defining, over a horizon of several months to years, when different components needing maintenance should be taken out of operation. Its objective is to minimize the expected operation cost while satisfying reliability-related constraints. We propose a distributed scenario-based chance-constrained optimization formulation for this problem. To tackle tractability issues arising in large networks, we use machine learning to build a proxy for predicting outcomes of power system operation processes in this context. On the IEEE-RTS79 and IEEE-RTS96 networks, our solution obtains cheaper and more reliable plans than other candidates.
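A scenario-based chance constraint of the kind described can be sketched as follows. The `proxy_predict` stand-in and the margin threshold are hypothetical; in the paper's setting the proxy is a trained machine-learning model replacing an expensive power-system operation simulation:

```python
import numpy as np

EPSILON = 0.05  # allowed probability of violating the reliability constraint

def proxy_predict(plan, scenario):
    # Hypothetical stand-in for the learned proxy: maps an outage plan and
    # an uncertainty scenario to a predicted reliability margin.
    return float(scenario @ plan)

def chance_constraint_satisfied(plan, scenarios, threshold=1.0):
    """Accept the plan if the proxy predicts a constraint violation in at
    most an EPSILON fraction of the sampled scenarios."""
    margins = np.array([proxy_predict(plan, s) for s in scenarios])
    return bool(np.mean(margins > threshold) <= EPSILON)
```

The scenario sample turns the probabilistic constraint into a countable one, which is what makes the formulation tractable once the proxy is cheap to evaluate.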

    Information Theoretical Estimators Toolbox

    We present ITE (Information Theoretical Estimators), a free and open-source, multi-platform Matlab/Octave toolbox capable of estimating many different variants of entropy, mutual information, divergence, association measures, cross quantities, and kernels on distributions. Thanks to its highly modular design, ITE additionally supports (i) combinations of the estimation techniques, (ii) the easy construction and embedding of novel information theoretical estimators, and (iii) their immediate application in information theoretical optimization problems. ITE also includes a prototype application in a central problem class of signal processing: independent subspace analysis and its extensions. ITE toolbox: https://bitbucket.org/szzoli/ite
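ITE itself is Matlab/Octave, but the flavor of the estimators it collects can be illustrated in Python with the classic Kozachenko-Leonenko k-nearest-neighbour differential entropy estimator (a standard method shown here as a sketch, not ITE's own code):

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import digamma, gammaln

def knn_entropy(samples, k=3):
    """Kozachenko-Leonenko k-NN estimate of differential entropy in nats.

    samples: (n, d) array of continuous observations.
    """
    n, d = samples.shape
    tree = cKDTree(samples)
    # query returns each point itself at index 0, so take column k
    r = tree.query(samples, k=k + 1)[0][:, k]
    log_unit_ball = (d / 2) * np.log(np.pi) - gammaln(d / 2 + 1)
    return digamma(n) - digamma(k) + log_unit_ball + d * np.mean(np.log(r))
```

For a standard normal the true entropy is 0.5*log(2*pi*e), roughly 1.419 nats, which the estimate approaches as the sample size grows.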

    Optimal statistical inference in the presence of systematic uncertainties using neural network optimization based on binned Poisson likelihoods with nuisance parameters

    Data analysis in science, e.g., high-energy particle physics, is often subject to an intractable likelihood if the observables and observations span a high-dimensional input space. Typically the problem is solved by reducing the dimensionality using feature engineering and histograms, where the latter technique allows one to build the likelihood using Poisson statistics. However, in the presence of systematic uncertainties represented by nuisance parameters in the likelihood, the optimal dimensionality reduction with a minimal loss of information about the parameters of interest is not known. This work presents a novel strategy that combines neural networks for feature engineering with a differentiable formulation of histograms, so that the full workflow can be optimized with the result of the statistical inference, e.g., the variance of a parameter of interest, as the objective. We discuss how this approach yields an estimate of the parameters of interest that is close to optimal, and we demonstrate the applicability of the technique with a simple example based on pseudo-experiments and a more complex example from high-energy particle physics.
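The differentiable histogram at the heart of such a workflow can be sketched with sigmoid-based soft bin membership feeding a binned Poisson negative log-likelihood. This is a common construction; the bin edges, temperature, and signal-strength parameterization below are illustrative assumptions, not the paper's exact setup:

```python
import numpy as np
from scipy.special import expit  # numerically stable sigmoid

def soft_histogram(x, edges, temperature=0.1):
    """Differentiable histogram: each sample contributes to a bin through a
    pair of sigmoids; as temperature -> 0 this recovers the hard counts."""
    lower = expit((x[:, None] - edges[None, :-1]) / temperature)
    upper = expit((x[:, None] - edges[None, 1:]) / temperature)
    return (lower - upper).sum(axis=0)

def binned_poisson_nll(mu, sig, bkg, observed):
    """Poisson negative log-likelihood (up to constants) of observed bin
    counts for expected counts mu * sig + bkg, with mu the parameter of
    interest."""
    lam = mu * sig + bkg
    return float(np.sum(lam - observed * np.log(lam)))
```

Because both pieces are smooth in the network outputs, a quantity derived from the inference, such as the variance of the estimated mu, can itself serve as the training objective, which is the strategy the abstract describes.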