
    Techno-economics optimization of H2 and CO2 compression for renewable energy storage and power-to-gas applications

    The decarbonization of the industrial sector is imperative to achieve a sustainable future. Carbon capture and storage technologies are the leading options, but the use of CO2 is lately also being considered as a very attractive alternative that approaches a circular economy. In this regard, power-to-gas is a promising option to take advantage of renewable H2 by converting it, together with the captured CO2, into renewable gases, in particular renewable methane. As renewable energy production, or the mismatch between renewable production and consumption, is not constant, it is essential to store renewable H2 or CO2 to properly run a methanation installation and produce renewable gas. This work analyses and optimizes the system layout and storage pressure and presents an annual cost (including CAPEX and OPEX) minimization. Results show the number of compression stages needed to reach the storage pressure that minimizes the system cost. This pressure is just below the supercritical pressure for CO2 and lower for H2, around 67 bar; the latter agrees with the pressures usually used to store and distribute natural gas. Moreover, the H2 storage costs are higher than those of CO2, even with lower mass quantities, owing to the lower density of H2 compared with CO2. Finally, it is concluded that the compressor costs are the most relevant costs for CO2 compression, whereas the storage tank costs are the most relevant in the case of H2.
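As a rough illustration of the kind of annual-cost minimization described above, the sketch below evaluates a simplified annualized cost (CAPEX annuitized with a capital recovery factor plus electricity OPEX for intercooled multistage compression) over a range of storage pressures. All cost coefficients, efficiencies and the ideal-gas compression-work model are hypothetical placeholders, storage tank costs are omitted, and the sketch will not reproduce the paper's 67 bar result.

```python
# Minimal sketch of annual-cost minimization over storage pressure.
# Cost coefficients and the ideal-gas, equal-pressure-ratio compression model
# are illustrative assumptions, not the correlations used in the study.
import numpy as np

R = 8.314          # J/(mol K)
T_IN = 298.15      # suction temperature, K
GAMMA = 1.4        # heat-capacity ratio (H2-like, assumed)
ETA = 0.75         # isentropic efficiency (assumed)
M_H2 = 2.016e-3    # kg/mol

def compression_work(p_in, p_out, n_stages):
    """Specific work (J/kg) for n intercooled stages with equal pressure ratios."""
    r = (p_out / p_in) ** (1.0 / n_stages)
    k = GAMMA / (GAMMA - 1.0)
    w_stage = k * R * T_IN * (r ** (1.0 / k) - 1.0) / ETA   # J/mol per stage
    return n_stages * w_stage / M_H2                        # J/kg

def annual_cost(p_store_bar, n_stages, mass_per_year_kg,
                capex_per_kw=1500.0, elec_price=0.08, crf=0.1, hours=4000.0):
    """Annuitized compressor CAPEX plus electricity OPEX (placeholder numbers)."""
    w = compression_work(1e5, p_store_bar * 1e5, n_stages)   # J/kg
    energy_kwh = w * mass_per_year_kg / 3.6e6                # kWh/yr
    power_kw = energy_kwh / hours                            # rated kW at assumed full-load hours
    return crf * capex_per_kw * power_kw + elec_price * energy_kwh

pressures = np.arange(20, 201, 5)
costs = [min(annual_cost(p, n, 1e5) for n in (1, 2, 3, 4)) for p in pressures]
print("cost-minimizing storage pressure (bar):", pressures[int(np.argmin(costs))])
```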

    Optimization of behaviour

    Behaviour satisfies or optimizes the acquisition of a resource as a function of its abundance and of the individual's capacity to use it. Behavioural optimization can be achieved by maximizing the net rate of resource acquisition and handling, the efficiency, or an intermediate value between the net rate and the efficiency, depending on the circumstances in which the individual performs a behavioural pattern. The types of behaviour most frequently considered in optimization analyses are diet composition, patch residence time, and any decision involving return to a central place. In all of them, the value of the resource the individual has selected must be compared with the average value of that resource in the environment through which the animal can move. Temporal and quantitative variability of the obtainable resource favours animals that are sensitive to variability, or risk, which maximize the short-term rate of resource acquisition. Depending on their reserves and their expectations of obtaining the resource, risk-sensitive individuals avoid situations with high temporal variability when their reserves are high and their expectations low. When the variability affects the amount of resource, animals with few reserves may opt for risk-prone behaviour, but this sensitivity to risk in resource amount is less frequent than sensitivity to temporal risk. Behavioural optimization in the presence of other individuals can be classified into two broad categories: economies of aggregation (the rate of resource acquisition increases at certain group sizes) and economies of dispersion (the rate of resource acquisition decreases with group size). In an economy of aggregation, the rate of resource acquisition usually peaks at an optimal group size, although groups may keep growing until they reach the stable size. Groups larger than the stable size, however, are considered to be in an unstable equilibrium because the rate of resource acquisition is lower than if the individual foraged alone. In an economy of dispersion, the presence of other individuals induces changes in the selection of the foraging site, so that under constant resource renewal and sufficient time to switch sites the individuals distribute themselves among patches until an equilibrium is reached in, for example, the maximization of the net rate of resource acquisition, which is termed an ideal free distribution. In general, behavioural optimization is rarely perfect, which translates into a suboptimal distribution of individuals, who occupy the poorer patches in a higher proportion than expected. Regardless of the circumstances in which animals optimize their behaviour and of the rules that maximize the benefit obtained, optimization models have proved to be a useful tool for investigating behaviour.
The possibility that an individual maximizes the rate of resource acquisition leads the researcher to pose that possibility as a null hypothesis, so that it becomes necessary to estimate the value of the resource the animal is trying to obtain, the acquisition and handling times and the acquisition costs, as well as its marginal value and the effect that competition, or simply interference with other members of the group, may have on behavioural optimization. Rejecting optimization as a null hypothesis can lead to new findings about constraints, limits and even new behavioural rules, which may counterintuitively reveal behavioural patterns that are suboptimal yet adaptive.
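For the patch-residence-time case mentioned above, the classical marginal value theorem states that the optimal departure time t* maximizes the long-term intake rate g(t) / (T + t), where g is the cumulative gain in a patch and T the travel time between patches. The sketch below finds t* numerically for a hypothetical diminishing-returns gain function; the gain curve and travel times are illustrative assumptions, not data from the text.

```python
# Marginal value theorem sketch: the optimal patch residence time t* maximizes
# the long-term rate g(t) / (T + t). Gain function and travel times are
# illustrative assumptions.
import numpy as np

def gain(t, g_max=10.0, tau=3.0):
    """Diminishing-returns cumulative gain within a patch (assumed form)."""
    return g_max * (1.0 - np.exp(-t / tau))

def optimal_residence_time(travel_time):
    t_grid = np.linspace(0.01, 60.0, 6000)
    rates = gain(t_grid) / (travel_time + t_grid)   # long-term intake rate
    return t_grid[np.argmax(rates)]

# Longer travel between patches predicts longer stays in each patch.
for T in (1.0, 5.0, 20.0):
    print(f"travel time {T:5.1f} -> optimal residence time {optimal_residence_time(T):.2f}")
```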

    Comparing the Min–Max–Median/IQR Approach with the Min–Max Approach, Logistic Regression and XGBoost, maximising the Youden index

    Although linearly combining multiple variables can provide adequate diagnostic performance, certain algorithms have the limitation of being computationally demanding when the number of variables is sufficiently high. Liu et al. proposed the min–max approach, which linearly combines the minimum and maximum values of the biomarkers, is computationally tractable and has been shown to be optimal in certain scenarios. We developed the Min–Max–Median/IQR algorithm under Youden index optimisation which, although more computationally intensive, is still tractable and includes more information. The aim of this work is to compare the performance of these algorithms with well-known machine learning algorithms, namely logistic regression and XGBoost, which have proven efficient in various fields of application, particularly in the health sector. This comparison is performed on a wide range of scenarios of simulated symmetric or asymmetric data, as well as on real clinical diagnosis data sets. The results provide useful guidance on which algorithm performs better for binary classification problems depending on the scenario.
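As a rough sketch of the min–max idea discussed above: each subject's biomarker vector is reduced to its maximum and minimum, these are combined linearly as max + λ·min, and λ together with the threshold is chosen to maximize the empirical Youden index (sensitivity + specificity − 1). The simulated data and the coarse grid search below are illustrative only; they do not reproduce the Min–Max–Median/IQR algorithm or the simulation design of the paper.

```python
# Min-max linear combination of biomarkers with the weight chosen to maximize
# the empirical Youden index. Simulated data and the coarse grid search are
# illustrative assumptions.
import numpy as np

def youden_index(score, label):
    """Max over thresholds of sensitivity + specificity - 1."""
    best = 0.0
    for c in np.unique(score):
        sens = np.mean(score[label == 1] > c)
        spec = np.mean(score[label == 0] <= c)
        best = max(best, sens + spec - 1.0)
    return best

def min_max_combination(X, y, lambdas=np.linspace(-2.0, 2.0, 81)):
    """Score = max_i X_i + lambda * min_i X_i; pick lambda maximizing Youden."""
    x_max, x_min = X.max(axis=1), X.min(axis=1)
    scores = {lam: x_max + lam * x_min for lam in lambdas}
    best_lam = max(scores, key=lambda lam: youden_index(scores[lam], y))
    return best_lam, youden_index(scores[best_lam], y)

rng = np.random.default_rng(0)
X0 = rng.normal(0.0, 1.0, size=(200, 4))   # non-diseased group (simulated)
X1 = rng.normal(0.7, 1.0, size=(200, 4))   # diseased group with shifted means
X, y = np.vstack([X0, X1]), np.repeat([0, 1], 200)
lam, j = min_max_combination(X, y)
print(f"best lambda = {lam:.2f}, Youden index = {j:.3f}")
```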

    Temporal variation of soil sorptivity under conventional and no-till systems determined by a simple laboratory method

    Soil water sorptivity (S) is an important property that measures the capacity of a soil to take up water rapidly under capillary forces. S is usually not included in routine soil laboratory experiments because there is no widely accepted methodology for its determination. The objectives of this work were: i) to propose a modification of the Leeds-Harrison et al. (1994) method (LH) to determine S in undisturbed soil samples; and ii) to determine the temporal variation of S and of the saturated hydraulic conductivity (K0) in a soil under conventional tillage (CT) and no-tillage (NT) treatments. Additionally, the influence of the soil pore size distribution (PoSD) on S was analyzed. Undisturbed soil samples (5 cm height, 5 cm diameter) were collected from the upper 10 cm of each plot, from each treatment, at four different times during a maize growing season (before seeding (BS), six-leaf stage (V6), physiological maturity (R5) and after harvest (AH)). PoSD was determined in a sand box apparatus. S was then determined in the same samples using a modified Leeds-Harrison approach: the difference between the initial and final water content was measured gravimetrically in each sample, rather than being taken as equal to the total porosity (TP). The proposed improvement was validated by comparing the obtained S values with those calculated using standard one-dimensional horizontal infiltration in sieved soil (0.098 vs 0.079 cm s^-1/2, respectively) and in calibrated sand (0.041 vs 0.040 cm s^-1/2, respectively); these differences were not significant. Both S and K0 were significantly affected by the sampling time in both treatments (mean values ranged between 0.022 and 0.077 cm s^-1/2 and between 1.57 and 3.75 cm s^-1, respectively). We did not find a significant dependence of S on the three pore size ranges analyzed. The proposed improvement of the Leeds-Harrison method allowed determining the temporal variation of S in representative undisturbed soil samples.
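The validation step described above rests on Philip's early-time relation for one-dimensional horizontal infiltration, I(t) ≈ S·t^(1/2), so S can be estimated as the slope of cumulative infiltration against the square root of time. The sketch below fits S that way from a (time, cumulative infiltration) series; the example data points are invented for illustration.

```python
# Estimate sorptivity S (cm s^-1/2) from one-dimensional horizontal infiltration
# data using Philip's early-time relation I(t) = S * sqrt(t): S is the slope of
# cumulative infiltration I against sqrt(t), fitted through the origin.
# The example data points are invented for illustration.
import numpy as np

def sorptivity(time_s, cum_infiltration_cm):
    """Least-squares slope of I vs sqrt(t), forced through the origin."""
    t = np.asarray(time_s, dtype=float)
    i = np.asarray(cum_infiltration_cm, dtype=float)
    sqrt_t = np.sqrt(t)
    return float(np.sum(sqrt_t * i) / np.sum(t))   # S = sum(sqrt(t)*I) / sum(t)

t = [5, 10, 20, 40, 80, 160]                   # elapsed time, s
infil = [0.17, 0.25, 0.35, 0.51, 0.72, 1.00]   # cumulative infiltration, cm
print(f"S = {sorptivity(t, infil):.3f} cm s^-1/2")
```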

    An analytical model for GMPLS control plane resilience quantification

    This paper concentrates on the resilience of the Generalized Multi-Protocol Label Switching (GMPLS) enabled control plane. To this end, the problem of control plane resilience in GMPLS-controlled networks is first stated and previous work on the topic is reviewed. Next, analytical formulae to quantify the resilience of generic meshed control plane topologies are derived. The resulting model is validated by simulation results on several reference network scenarios.
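The abstract does not reproduce the analytical formulae, but as a generic point of comparison, the resilience of a meshed control-plane topology can also be estimated by Monte Carlo simulation of independent control-link failures, checking whether the surviving topology stays connected. The sketch below does exactly that with networkx on a hypothetical 6-node mesh; it is a baseline illustration, not the analytical model derived in the paper.

```python
# Generic Monte Carlo estimate of control-plane connectivity under independent
# control-link failures. Illustrative baseline only, not the analytical GMPLS
# resilience model of the paper; topology and failure probability are assumed.
import random
import networkx as nx

def connectivity_probability(graph, link_failure_prob, trials=20000, seed=1):
    """Fraction of trials in which the surviving topology remains connected."""
    rng = random.Random(seed)
    connected = 0
    for _ in range(trials):
        survivors = [e for e in graph.edges if rng.random() > link_failure_prob]
        g = nx.Graph()
        g.add_nodes_from(graph.nodes)
        g.add_edges_from(survivors)
        connected += nx.is_connected(g)
    return connected / trials

# Hypothetical 6-node meshed control plane (ring plus two chords).
mesh = nx.cycle_graph(6)
mesh.add_edges_from([(0, 3), (1, 4)])
print("P(control plane connected) ~", connectivity_probability(mesh, link_failure_prob=0.05))
```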

    The True Utility of Predictive Models Based on Magnetic Resonance Imaging in Selecting Candidates for Prostate Biopsy

    Prostate biopsy; Predictive models; Magnetic resonance imaging

    A stepwise algorithm for linearly combining biomarkers under Youden Index maximisation

    Combining multiple biomarkers to provide predictive models with a greater discriminatory ability is a discipline that has received attention in recent years. Choosing the probability threshold that corresponds to the highest accuracy of the combined marker is key in disease diagnosis. The Youden index is a statistical metric that provides an appropriate synthetic index of diagnostic accuracy and a good criterion for choosing a cut-off point to dichotomize a biomarker. In this study, we present a new stepwise algorithm for linearly combining continuous biomarkers to maximize the Youden index. To investigate the performance of our algorithm, we analyzed a wide range of simulated scenarios and compared its performance with that of five other linear combination methods in the literature (a stepwise approach introduced by Yin and Tian, the min–max approach, logistic regression, a parametric approach under multivariate normality, and a non-parametric kernel smoothing approach). The proposed stepwise approach performed similarly to the other algorithms in normal simulated scenarios and outperformed all of them in non-normal simulated scenarios. In scenarios of biomarkers with the same means and a different covariance matrix for the diseased and non-diseased populations, the min–max approach outperforms the rest. The methods were also applied to two real data sets (for discriminating Duchenne muscular dystrophy and prostate cancer), where our algorithm also showed higher predictive ability on the prostate cancer database.
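A greatly simplified sketch of the stepwise idea described above: start from the single biomarker with the highest empirical Youden index, then greedily add, at each step, the remaining marker and coefficient (from a coarse grid) that most improve the Youden index of the running linear combination, until all markers are included. The grid, the simulated data and the absence of a stopping criterion are simplifications; this is not the exact algorithm of the paper.

```python
# Greedy stepwise linear combination of biomarkers under empirical Youden index
# maximization. Coefficient grid, simulated data and the "add every marker"
# strategy are simplifying assumptions for illustration.
import numpy as np

def youden(score, y):
    """Empirical Youden index: max over thresholds of sens + spec - 1."""
    return max(np.mean(score[y == 1] > c) + np.mean(score[y == 0] <= c) - 1.0
               for c in np.unique(score))

def stepwise_combination(X, y, grid=np.linspace(-2.0, 2.0, 41)):
    n_markers = X.shape[1]
    start = max(range(n_markers), key=lambda j: youden(X[:, j], y))   # best single marker
    score, used, coefs = X[:, start].copy(), {start}, {start: 1.0}
    while len(used) < n_markers:
        # Pick the (marker, coefficient) pair that most improves the Youden index.
        j, a = max(((j, a) for j in range(n_markers) if j not in used for a in grid),
                   key=lambda ja: youden(score + ja[1] * X[:, ja[0]], y))
        score, used, coefs[j] = score + a * X[:, j], used | {j}, a
    return coefs, youden(score, y)

rng = np.random.default_rng(42)
X = np.vstack([rng.normal(0, 1, (150, 3)), rng.normal([0.8, 0.4, 0.1], 1, (150, 3))])
y = np.repeat([0, 1], 150)
coefs, j = stepwise_combination(X, y)
print("coefficients:", coefs, "Youden index:", round(j, 3))
```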