
    Toward Machine Learned Highly Reduced Kinetic Models For Methane/Air Combustion

    Accurate low-dimensional chemical kinetic models for methane are an essential component in the design of efficient gas turbine combustors. Kinetic models coupled to computational fluid dynamics (CFD) provide a quick and efficient way to test the effect of operating conditions, fuel composition and combustor design compared to physical experiments. However, detailed chemical kinetic models are too computationally expensive for use in CFD. We propose a novel data-oriented three-step methodology to produce compact models that replicate a target set of detailed model properties with high fidelity. In the first step, a reduced kinetic model is obtained by removing all non-essential species from the 118-species detailed model using path flux analysis (PFA). It is then numerically optimised to replicate the detailed model's predictions in two rounds: first to selected species profiles (OH, H, CO and CH4) in perfectly stirred reactor (PSR) simulations, and then re-optimised to the detailed model's prediction of the laminar flame speed. This is implemented by a purpose-developed Machine Learned Optimisation of Chemical Kinetics (MLOCK) algorithm. The MLOCK algorithm systematically perturbs all three Arrhenius parameters for selected reactions and assesses the suitability of the new parameters through an objective error function which quantifies the error in the compact model's calculation of the optimisation target. This strategy is demonstrated through the production of a 19-species and a 15-species compact model for methane/air combustion. Both compact models are validated across a range of 0D and 1D calculations spanning both lean and rich conditions, and show good agreement with the parent detailed mechanism. The 15-species model is shown to outperform the current state-of-the-art models in both accuracy and the range of conditions over which it is valid. Comment: Conference Paper: ASME Turbo Expo 202
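    The perturb-and-score loop at the heart of such an optimisation can be sketched as below. This is an illustrative assumption, not the published MLOCK algorithm: the random-search strategy, the relative-L2 error definition and every numerical value are stand-ins.

```python
import numpy as np

def arrhenius(T, A, b, Ea, R=8.314):
    """Modified Arrhenius rate: k = A * T**b * exp(-Ea / (R*T))."""
    return A * T**b * np.exp(-Ea / (R * T))

def objective_error(predicted, target):
    """Relative L2 error between compact- and detailed-model profiles."""
    return float(np.sqrt(np.mean(((predicted - target) / target) ** 2)))

def perturb(params, rel_step, rng):
    """Multiplicative perturbation of all three Arrhenius parameters."""
    A, b, Ea = params
    f = 1.0 + rel_step * rng.uniform(-1, 1, size=3)
    return (A * f[0], b * f[1], Ea * f[2])

rng = np.random.default_rng(0)
T = np.linspace(1000.0, 2500.0, 50)          # temperature grid, K
true = (1.0e13, 0.0, 1.5e5)                  # stands in for the detailed model
guess = (2.0e13, 0.1, 1.4e5)                 # reduced model before optimisation
target = arrhenius(T, *true)

best, best_err = guess, objective_error(arrhenius(T, *guess), target)
for _ in range(2000):                        # simple hill-climbing random search
    cand = perturb(best, 0.05, rng)
    err = objective_error(arrhenius(T, *cand), target)
    if err < best_err:
        best, best_err = cand, err
print(best_err)
```

    In the real algorithm the objective would be evaluated by running PSR and flame-speed simulations with the candidate mechanism rather than comparing rate curves directly.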

    Development of a generalised kinetic model for the combustion of hydrocarbon fuels

    Includes abstract. Includes bibliographical references (leaves 73-76). The aim of this work is to find a generalised model for the combustion of hydrocarbons. Predicted temperature-time profiles can be obtained from detailed combustion kinetics, which can be used to derive a generalised model. If the generalised model can predict results from the detailed model, it can be applied in computational fluid dynamics codes where detailed kinetic mechanisms cannot. A generalised kinetic model is proposed, adapting the Schreiber model (Schreiber et al., 1994) to accurately predict the combustion behaviour of hydrocarbon fuels. The combustion behaviour is described through the characteristics of the temperature-time profiles and the ignition delay diagram, which include two-stage ignition and the negative temperature coefficient (NTC) region. The Schreiber model is specifically adapted to improve the description of the very low temperature rise before and between ignitions and the auto-catalytic temperature rises during ignition. A Genetic Algorithm is used to optimise the proposed model's predictions, with the pre-exponential factor Ai and the activation energy Eai as the adjustable parameters for each reaction in the model. These parameters have been optimised for three fuels: i-octane, n-heptane and methanol. The ignition delays of the pure fuels were accurately predicted. The temperature-time profiles in the instances of two-stage ignition are relatively inaccurate; they are, however, an improvement on those predicted by the Schreiber model, particularly in terms of the slow temperature rise during the ignition delay and the sharp temperature rise during ignition. The combustion of binary blends of the three fuels has been predicted using model parameters found from the rate constants of each fuel, the blend's composition and binary interaction rules.
The binary interaction parameters were also optimised using a Genetic Algorithm. The binary interaction rules are based on the Peng-Robinson mixing rules. Overall, the ignition delays of binary fuel blends were accurately predicted using binary interactions. However, when modelling blends of methanol and n-heptane, where one fuel has extreme NTC behaviour and the other has none, the predictions were less accurate. These binary interaction rules are then used to model ternary mixtures. It is shown that the combustion behaviour of ternary mixtures of the three fuels can be accurately predicted without any further regression or parameter fitting. The accuracy of the ternary prediction is dependent on the accuracy of the binary predictions.
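    A Peng-Robinson-style combining rule of the kind referred to above can be sketched as follows. The exact equations of the thesis are not reproduced here; this assumes the common quadratic form k_blend = sum_ij x_i x_j sqrt(k_i k_j) (1 - k_ij), and all numbers are hypothetical.

```python
import math

def blend_rate(x, k, kij):
    """Blend-effective rate constant from pure-fuel rate constants.

    x:   mole fractions of the fuels in the blend
    k:   pure-fuel rate constants
    kij: symmetric binary interaction parameter matrix (kii = 0)
    """
    total = 0.0
    for i in range(len(x)):
        for j in range(len(x)):
            total += x[i] * x[j] * math.sqrt(k[i] * k[j]) * (1.0 - kij[i][j])
    return total

x = [0.5, 0.5]                  # 50/50 binary blend
k = [4.0, 9.0]                  # hypothetical pure-fuel rate constants
kij = [[0.0, 0.1], [0.1, 0.0]]  # optimised binary interaction parameter

print(blend_rate(x, k, kij))    # ~5.95 for these inputs
```

    Because only pairwise terms appear, a ternary blend needs no new parameters, which mirrors the thesis's finding that ternary mixtures are predicted from the binary interaction parameters alone.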

    Marine dual fuel engines modelling and optimisation employing a novel combustion characterisation method

    Dual fuel (DF) engines have been an attractive alternative to traditional diesel engines for reducing both environmental impact and operating cost. The major challenge of DF engine design is to deal with the performance-emissions trade-off via optimisation of the operating settings. Nevertheless, determining the optimal solution requires a large number of case studies, which can be both time-consuming and costly where methods like engine tests or Computational Fluid Dynamics (CFD) simulations are used directly to perform the optimisation. This study aims at developing a novel combustion characterisation method for marine DF engines based on the combined use of three-dimensional (3D) and zero-dimensional/one-dimensional (0D/1D) simulation methods. The 3D model is developed with the CONVERGE software and validated against measured pressure and emissions. Subsequently, the validated 3D model is used to perform a parametric study to explore the engine operating settings that allow simultaneous reduction of the brake specific fuel consumption (BSFC) and NOx emissions at three engine operating conditions (1457 r/min, 1629 r/min and 1800 r/min). Furthermore, the derived heat release rate (HRR) is employed to calibrate the 0D Wiebe combustion model using Response Surface Methodology (RSM). A linear response model for the Wiebe combustion function parameters is proposed by considering each Wiebe parameter as a function of the pilot injection timing, equivalence ratio and natural gas mass. The 0D/1D model is established in the GT-ISE software and used to optimise the performance-emissions trade-off of the reference engine by employing the Nondominated Sorting Genetic Algorithm II (NSGA-II). The obtained results provide comprehensive insight into the impacts of the engine operating settings on in-cylinder combustion characteristics, engine performance and emissions of the investigated marine DF engine.
By performing the settings optimisation at three engine operating points, settings that lead to reduced BSFC are identified whilst the NOx emissions comply with the Tier III NOx emissions regulation. The proposed method is expected to support the combustion analysis and enhancement of marine DF engines during the design phase, whilst the derived optimal solutions are expected to provide guidelines for DF engine management that reduce operating cost and environmental footprint.
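    The single Wiebe function used in 0D combustion models of this kind can be sketched as below. The parameter names are the conventional ones; the study's specific linear response model mapping them to pilot injection timing, equivalence ratio and natural gas mass is not reproduced here.

```python
import math

def wiebe_mfb(theta, theta0, dtheta, a=6.908, m=2.0):
    """Cumulative mass fraction burned at crank angle theta (degrees).

    theta0: start of combustion; dtheta: combustion duration;
    a = 6.908 gives 99.9% burned at theta0 + dtheta; m shapes the curve.
    """
    if theta < theta0:
        return 0.0
    x = (theta - theta0) / dtheta
    return 1.0 - math.exp(-a * x ** (m + 1.0))

# Example: combustion starting 10 deg BTDC, lasting 40 deg.
print(wiebe_mfb(30.0, -10.0, 40.0))  # ~0.999 at the end of combustion
```

    Calibration against a CFD-derived heat release rate then amounts to fitting theta0, dtheta and m so that the Wiebe curve's derivative matches the HRR profile.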

    A multi-zone model of the CFR engine: investigating cascading autoignition and octane rating

    Includes abstract. Includes bibliographical references (p. 103-109). The CFR engine is the standardised research engine used for measuring the knock resistance of fuels through the Research Octane Number (RON) and Motor Octane Number (MON) tests. In standard production engines, knock manifests as an almost instantaneous pressure rise followed by "knock ringing" pressure oscillations of similar magnitude. However, knock in the CFR engine is characterised, and measured, by a steep but more gradual pressure rise, followed by ringing of much lesser magnitude. It has previously been proposed that a "cascading autoignition", resulting from an in-cylinder temperature gradient, is responsible for this unique pressure development.

    Numerical optimisation for model evaluation in combustion kinetics

    Numerical optimisation related to the estimation of kinetic parameters and model evaluation is playing an increasing role in combustion as well as in other areas of applied energy research. The present work presents the current probability-based approaches alongside applications to real problems of combustion chemical kinetics. The main methods related to model and parameter evaluation are set out. An in-house program for the systematic adjustment of kinetic parameters to experimental measurements is described and numerically validated. The GRI (Gas Research Institute) mechanism (version 3.0) is shown to initially lead to results which are greatly at variance with experimental data concerning the combustion of CH3 and C2H6. A thorough optimisation of all parameters has been performed with respect to these profiles. A considerable improvement could be reached, and the new predictions appear to be compatible with the measurement uncertainties. It was also found that neither GRI 3.0 nor three other reaction mechanisms considered in the present work should be employed (without prior far-reaching optimisation) for numerical simulations of combustors and engines where CH3 and C2H6 play an important role. Overall, this study illustrates the link between optimisation methods and model evaluation in the field of combustion chemical kinetics.
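    The probability-based notion of being "compatible with the measurement uncertainties" can be sketched with a chi-square objective that weights model-measurement residuals by the measurement uncertainties; a fit is compatible when chi-square per data point is of order one. All numbers below are illustrative, not data from the study.

```python
import numpy as np

def chi_square(model, measured, sigma):
    """Sum of squared residuals weighted by 1-sigma measurement uncertainty."""
    r = (model - measured) / sigma
    return float(np.sum(r * r))

measured = np.array([1.00, 0.80, 0.65])   # e.g. a species mole-fraction profile
sigma    = np.array([0.05, 0.05, 0.05])   # 1-sigma measurement uncertainty
model    = np.array([0.98, 0.83, 0.64])   # optimised-mechanism prediction

chi2 = chi_square(model, measured, sigma)
print(chi2 / len(measured))               # ~0.19: compatible with the data
```

    A systematic parameter-adjustment program would minimise this quantity over the kinetic parameters, stopping when reduced chi-square drops to order one.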

    Meta-heuristic algorithms in car engine design: a literature survey

    Meta-heuristic algorithms are often inspired by natural phenomena, including the evolution of species in Darwinian natural selection theory, ant behaviors in biology, flock behaviors of some birds, and annealing in metallurgy. Due to their great potential in solving difficult optimization problems, meta-heuristic algorithms have found their way into automobile engine design. There are different optimization problems arising in different areas of car engine management including calibration, control system, fault diagnosis, and modeling. In this paper we review the state-of-the-art applications of different meta-heuristic algorithms in engine management systems. The review covers a wide range of research, including the application of meta-heuristic algorithms in engine calibration, optimizing engine control systems, engine fault diagnosis, and optimizing different parts of engines and modeling. The meta-heuristic algorithms reviewed in this paper include evolutionary algorithms, evolution strategy, evolutionary programming, genetic programming, differential evolution, estimation of distribution algorithm, ant colony optimization, particle swarm optimization, memetic algorithms, and artificial immune system

    The Use of Numerical Methods to Interpret Polymer Decomposition Data

    Polymer decomposition is the key to understanding fire behaviour. It is a complex process involving heat transfer, breakdown of the polymer chain, volatile fuel formation and gasification occurring as a moving interface through the polymer bulk. Two techniques, chemical analysis using STA-FTIR and pyrolysis modelling, have been combined as a tool to better understand these processes. This work covers the experimental investigation of polymer decomposition using the STA-FTIR technique. Several polymers including polyacrylonitrile (PAN), polypropylene (PP) and ethylene-vinyl acetate (EVA), alone and as potential fire retardant composites, have been studied in different conditions to optimise the methodology and analysis of results. Polyacrylonitrile was used to optimise the experimental technique. Polypropylene, containing nanoclay and ammonium phosphate, was decomposed and the composition of the decomposition products analysed in order to investigate the fire retardant effects of the additives on the thermal decomposition. Ethylene-vinyl acetate copolymer containing nanoclay and/or either aluminium hydroxide or magnesium hydroxide was decomposed, with vapour-phase FTIR analysis showing a change in the initial decomposition pathway, with a shift from acetic acid evolution to acetone production. In parallel, these experimental data have been used in early attempts to validate numerical models developed with a one-dimensional pyrolysis computational tool called ThermaKin. As ThermaKin is relatively new and still not widely used for fire modelling, a detailed description of its capabilities is provided. A detailed study of heat transfer in cardboard, leading to thermal decomposition accompanied by pyrolysis and char formation, is described. Several microscale kinetics models have been proposed with different levels of complexity.
Not only do the numerical approximations reflect the experimental results of single compounds, describing the material's behaviour (expressed in terms of mass loss) when exposed to external heat, but predictive models of fire retardant mixtures have also been developed for different atmospheres and heating rates. In addition, the powerful combination of pyrolysis modelling and chemical analysis by STA-FTIR has provided new insights into the decomposition and burning behaviour of both PP protected with nanoclay and ammonium phosphate and the industrially important cable-sheathing materials based on EVA. The novelty of this work stems from the first use of pyrolysis models to study fire retardant behaviour, the first reported combination of STA-FTIR with the ThermaKin pyrolysis model, and a deep understanding of the pre-ignition behaviour of cardboard.
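    The simplest microscale kinetics model of the kind fitted to STA (thermogravimetric) data is a single first-order Arrhenius decomposition step, dm/dt = -m A exp(-Ea/(R T)), integrated under a constant heating rate. The sketch below uses illustrative parameter values, not those of any material studied in the work.

```python
import math

R = 8.314           # J/(mol K), gas constant
A = 1.0e13          # 1/s, pre-exponential factor (assumed)
Ea = 2.0e5          # J/mol, activation energy (assumed)
beta = 10.0 / 60.0  # heating rate: 10 K/min expressed in K/s

m, T, dt = 1.0, 300.0, 0.5   # normalised mass, temperature (K), time step (s)
while T < 900.0:
    k = A * math.exp(-Ea / (R * T))
    m *= math.exp(-k * dt)   # exact first-order decay over the step (frozen k)
    T += beta * dt
print(m)                     # essentially fully decomposed by 900 K
```

    A multi-step model (e.g. with an intermediate char-forming component, as needed for cardboard) stacks several such reactions; ThermaKin couples this chemistry to one-dimensional heat and mass transfer.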

    Uncertainty and complexity in pyrolysis modelling

    The use of numerical tools in fire safety engineering has become commonplace, and this tendency is expected to increase with the evolution of performance-based design. Despite the constant development of fire modelling tools, the current state of the art is still not capable of accurately predicting solid ignition, flame spread or fire growth rate from first principles. The condensed phase, which plays an important role in these phenomena, has been a large research area for several decades, resulting in an improvement of its global understanding and in the development of numerical pyrolysis models that include a large number of physical and chemical mechanisms. This growth of complexity has been justified by the implicit assumption that models with more mechanisms should be more accurate. As a direct consequence, however, the number of parameters required to perform a simulation has increased significantly. The problem arises when the uncertainty in the input parameters accumulates in the model output beyond a certain level: the global error induced by parameter uncertainty balances the improvements obtained from incorporating new mechanisms, leading to the existence of an optimum level of model complexity. While one of the first modelling tasks is to select the appropriate model to represent a physical phenomenon, this step is often subjective, and detailed justifications of the inclusion or exclusion of the different mechanisms are infrequent. How to determine the most beneficial level of model complexity is becoming a major concern, and this work presents a methodology to estimate the affordable level of complexity for polymer pyrolysis modelling prior to ignition. The study uses poly(methyl methacrylate) (PMMA), a reference material in fire dynamics due to the large number of studies available on its pyrolysis behaviour.
The methodology employed is based on a combination of sensitivity and uncertainty analyses. In the first chapter, the minimum level of complexity required to explain the delay times to ignition of black PMMA samples at high heat flux levels is obtained by exploring, one by one, the effect on the condensed phase of several mechanisms. It is found that the experimental results cannot be explained without considering the in-depth radiation absorption mechanism. In the second chapter, a large literature review of the variability associated with the main parameters encountered in pyrolysis models is performed in order to establish the current level of confidence in the predictions using simple uncertainty analyses. In the third chapter, a detailed analysis of the governing parameters (parametric sensitivity) is performed on the model obtained in chapter 1 to predict the delay time to ignition. Using the ranges obtained in chapter 2 for the input parameters, a detailed uncertainty analysis is performed, revealing a large spread of the numerical predictions outside the experimental uncertainty. While several parameters, including the attenuation coefficient (from the in-depth radiation absorption mechanism), show large sensitivity, only a few are responsible for the large spread observed. Parameter uncertainty is shown to be the limiting step in the prediction of solid ignition. In the fourth chapter, a new methodology is developed to investigate the predominant mechanisms for the prediction of the transient pyrolysis behaviour of clear PMMA (no ignition). This approach, which corresponds to a mechanism sensitivity, consists of applying step-by-step assumptions to the most complex model used in the literature to model non-charring polymer pyrolysis behaviour. This study reveals the relatively high importance of the heat transfer mechanisms, including the process of in-depth radiation.
In the fifth chapter, the uncertainty related to the calibration of pyrolysis models by inverse modelling is investigated using several levels of model complexity. Inverse modelling couples the experimental data to the model equations, and this dependency is often ignored. Varying the model complexity, this study reveals the presence of compensation effects between the different mechanisms. The phenomenon grows in importance with model complexity, leading to unrealistic values for the calibrated parameters. From the sensitivity and uncertainty analyses performed, the mechanism of in-depth absorption appears critical for some applications. In the sixth chapter, an experimental investigation of specific conditions affecting the sensitivity of this mechanism shows its large dependency on the heat source emission wavelength when comparing the two heat sources of the most widely used pyrolysis test apparatuses in fire safety engineering. More fundamental investigations presented in the seventh chapter made it possible to quantify this dependency, which needs to be considered in modelling and experimental analyses. The impact of the heat source on the radiation absorption (depth and magnitude) is shown to be predictable thanks to detailed measurements of the attenuation coefficient of PMMA and the emissive power of the heat sources. The global uncertainty associated with the input parameters, extracted either from independent studies or by inverse modelling, appears to be a limiting step in the improvement of pyrolysis modelling when a high level of complexity is implemented. A combination of numerical (sensitivity and uncertainty) analyses and experimental studies is required before increasing the level of complexity of a pyrolysis model.
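    The uncertainty analyses described here amount to propagating literature ranges of the input parameters through the model by Monte Carlo sampling and examining the spread of the predictions. The sketch below uses a toy scaling law as a stand-in for a real pyrolysis code; the model form and the parameter ranges are illustrative assumptions.

```python
import random
import statistics

def ignition_delay(k_cond, absorptivity, heat_flux=50.0):
    """Toy stand-in model: higher conductivity (in-depth losses) delays
    ignition, stronger absorption shortens it. Not a real pyrolysis model."""
    return 10.0 * (1.0 + 5.0 * k_cond) / (absorptivity * heat_flux) * 100.0

random.seed(1)
samples = []
for _ in range(5000):
    k_cond = random.uniform(0.15, 0.25)   # W/(m K), literature range (assumed)
    alpha  = random.uniform(0.85, 0.95)   # surface absorptivity (assumed)
    samples.append(ignition_delay(k_cond, alpha))

mean = statistics.mean(samples)
spread = statistics.stdev(samples)
print(mean, spread)   # the spread, compared with experimental scatter,
                      # is what limits predictive confidence
```

    Ranking each parameter's contribution to this spread (parametric sensitivity) then identifies the few inputs responsible for most of the output uncertainty.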

    Thermo-kinetic multi-zone modelling of low temperature combustion engines

    Many researchers believe multi-zone (MZ), chemical kinetics-based models are proven, essential toolchains for the development of low-temperature combustion (LTC) engines. However, such models are specific to the research groups that developed them and are not widely available on a commercial or open-source basis. Consequently, their governing assumptions vary, resulting in differences in autonomy, accuracy and simulation speed, all of which affect their applicability. Knowledge of the models' individual characteristics is scattered over the research groups' publications, making it extremely difficult to see the bigger picture. This combination of disparities and dispersed information hinders the engine research community that wants to harness the capability of multi-zone modelling. This work aims to overcome these hurdles. It is a comprehensive review of over 120 works directly related to MZ modelling of LTC, extended with an insight into primary sources covering individual submodels. It covers 16 distinctive modelling approaches, three different combustion concepts, over 60 different fuel/kinetic-mechanism combinations, and over 38 identified applications ranging from fundamental-level studies to control development. The work aims to provide sufficient detail of individual model design choices to facilitate the creation of improved, more open multi-zone toolchains and inspire new applications. It also provides a high-level vision of how multi-zone models can evolve. The review identifies a state-of-the-art multi-zone model as an onion-skin model with 10-15 zones; phenomenological heat and mass transfer submodels with predictive in-cylinder turbulence; and semi-detailed reaction kinetics encapsulating 53-199 species.
Together with submodels for heat loss, fuel injection and gas exchange, this modelling approach predicts in-cylinder pressure within cycle-to-cycle variation for a handful of combustion concepts, from homogeneous/premixed charge to reactivity-controlled compression ignition (HCCI, PCCI, RCCI). Single-core simulation time is around 30 minutes for implementations focused on accuracy; there are direct time-reduction strategies for control applications. Major tasks include a fast and predictive means to determine in-cylinder fuel stratification, and extending applicability and predictivity by coupling with commercial one-dimensional engine-modelling toolchains. There is also significant room for simulation speed-up by incorporating techniques such as tabulated chemistry and employing new solving algorithms that reduce the cost of Jacobian construction. © 2022 The Author(s). Published by Elsevier Ltd. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).

    Development and application of a genetic-algorithm-based method for the reduction and optimization of chemical kinetic mechanisms

    An automatic method for the reduction and optimization of chemical kinetic mechanisms under specific physical or thermodynamic conditions has been developed and is described in this work. The mechanism reduction method relies on a genetic algorithm (GA) search for the smallest possible subset of reactions from the detailed mechanism that still preserves the reduced mechanism's ability to describe the overall chemistry at an acceptable error. The accuracy of the reduced mechanism is determined by comparing its solution to the solution obtained with the full mechanism under the same initial and/or physical conditions. The reduction considers not only the chemical accuracy and the size of the mechanism but also the solution time, which helps to avoid stiff and slow-converging mechanisms. The (subsequent) optimization technique is based on a genetic algorithm that aims at finding new reaction rate coefficients to restore the accuracy that is usually degraded by the preceding reduction. The accuracy is defined by an objective function that covers regions of interest where the reduced mechanism may deviate from the original mechanism. The objective function directs the search towards more accurate reduced mechanisms that are valid for a given set of operating conditions. The mechanism's performance is assessed in homogeneous-reactor or laminar-flame simulations against the results obtained from a given reference. An additional term introduced into the objective function is a so-called penalty term that influences the reaction rates during the optimization. With the penalty term, the change to the reaction rates can be minimized, keeping them as close as possible to their nominal values. It is demonstrated that the penalty function can be used instead of defining uncertainty bounds from the literature for each reaction in the mechanism, which can be a tremendous effort when dealing with large or insufficiently investigated mechanisms.
The penalty term can also be used for further reduction of the mechanism by driving the reaction rates towards zero during the optimization. This approach is addressed in greater detail in the final section of the thesis, which examines the convergence behaviour of the integer-coded reduction, the real-coded optimization and reduction of the reduced mechanisms, and the real-coded optimization and reduction of the full mechanism. The convergence study shows that the real-coded optimization with the size-penalty function exhibits the fastest convergence towards one global optimum, which makes a good case for investigating and improving real-coded reduction as a direct way to optimize and reduce the full mechanism at the same time. The GA-based reduction and optimization method has proven robust, flexible, and applicable to a range of operating conditions by using multiple criteria simultaneously.
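    The integer-coded reduction with a size-penalty term can be sketched as a toy GA over 0/1 reaction masks. Everything here is a hypothetical stand-in: the "error" function mocks running the reduced mechanism in a reactor simulation, and the mutation-plus-elitism loop is a deliberately minimal GA, not the thesis's implementation.

```python
import random

N_REACTIONS = 20
ESSENTIAL = {0, 3, 7}          # reactions the (mock) chemistry truly needs

def error(mask):
    """Mock accuracy loss: large if any essential reaction is removed."""
    return sum(10.0 for r in ESSENTIAL if not mask[r])

def fitness(mask, w_size=0.1):
    """Accuracy error plus a size-penalty term favouring smaller subsets."""
    return error(mask) + w_size * sum(mask)

def mutate(mask, rng):
    """Flip one randomly chosen reaction in or out of the subset."""
    child = mask[:]
    i = rng.randrange(len(child))
    child[i] ^= 1
    return child

rng = random.Random(42)
pop = [[rng.randint(0, 1) for _ in range(N_REACTIONS)] for _ in range(20)]
for _ in range(300):           # generations: keep 10 elites, add 10 mutants
    pop.sort(key=fitness)
    pop = pop[:10] + [mutate(rng.choice(pop[:10]), rng) for _ in range(10)]

best = min(pop, key=fitness)
print(sum(best), fitness(best))
```

    In the real method the fitness evaluation dominates the cost, since each candidate subset requires solving the reduced mechanism; the size penalty plays the role described above of pushing reaction rates (here, whole reactions) towards zero.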
