
    Artificial neural networks and physical modeling for determination of baseline consumption of CHP plants

    An effective modeling technique is proposed for determining baseline energy consumption in industry. The study considers a CHP plant that was subjected to a retrofit consisting of the implementation of several energy-saving measures. The aim is to recreate the post-retrofit energy consumption and production of the system as if it were still operating in its past (pre-retrofit) configuration, i.e., the current consumption and production had no energy-saving measures been implemented. Two different modeling methodologies are applied to the CHP plant: thermodynamic modeling and artificial neural networks (ANN). Satisfactory results are obtained with both techniques. Acceptable prediction accuracy is achieved, confirming the models' ability to predict plant behavior and their suitability for determining baseline energy consumption. The ANN shows a high level of robustness against uncertainty affecting the measured values of the variables used as model inputs. The study demonstrates the great potential of ANNs for assessing baseline consumption in energy-intensive industry. The ANN technique would also help overcome the limited availability of off-the-shelf thermodynamic software for modeling every specific typology of existing industrial process.
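
    The following sketch illustrates the baseline idea with an ANN surrogate; the feature set, network size, and scikit-learn pipeline are our own assumptions for illustration, not details from the paper. The model is trained on pre-retrofit data only and then evaluated on post-retrofit operating conditions, so its predictions play the role of the baseline consumption.

```python
# Hedged sketch: an ANN surrogate for pre-retrofit (baseline) plant behavior.
# The inputs, network size, and pipeline are illustrative assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Hypothetical pre-retrofit operating data: each row is one measurement
# interval (e.g. ambient temperature, fuel flow, electrical load).
X_pre = np.random.rand(1000, 3)  # placeholder for measured input variables
y_pre = X_pre @ np.array([2.0, 5.0, 1.5]) + 0.1 * np.random.randn(1000)  # placeholder consumption

# Train the ANN on pre-retrofit data only, so it encodes the old configuration.
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0),
)
model.fit(X_pre, y_pre)

# After the retrofit, feed current operating conditions through the frozen
# model: the prediction is the consumption the plant *would* have had without
# the energy-saving measures; savings = baseline - measured post-retrofit use.
X_post = np.random.rand(200, 3)  # placeholder post-retrofit conditions
baseline = model.predict(X_post)
```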

    Dynamic modeling of mean-reverting spreads for statistical arbitrage

    Statistical arbitrage strategies, such as pairs trading and its generalizations, rely on the construction of mean-reverting spreads enjoying a certain degree of predictability. Gaussian linear state-space processes have recently been proposed as a model for such spreads under the assumption that the observed process is a noisy realization of some hidden states. Real-time estimation of the unobserved spread process can reveal temporary market inefficiencies which can then be exploited to generate excess returns. Building on previous work, we embrace the state-space framework for modeling spread processes and extend this methodology along three different directions. First, we introduce time-dependency in the model parameters, which allows for quick adaptation to changes in the data generating process. Second, we provide an on-line estimation algorithm that can be constantly run in real-time. Being computationally fast, the algorithm is particularly suitable for building aggressive trading strategies based on high-frequency data and may be used as a monitoring device for mean-reversion. Finally, our framework naturally provides informative uncertainty measures of all the estimated parameters. Experimental results based on Monte Carlo simulations and historical equity data are discussed, including a co-integration relationship involving two exchange-traded funds.

    Comment: 34 pages, 6 figures. Submitted.
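
    To make the real-time estimation idea concrete, here is a minimal Kalman-filter sketch for a noisy mean-reverting spread; the fixed AR(1) dynamics and all parameter values are illustrative assumptions (the model described above additionally lets the parameters vary over time and estimates them on-line).

```python
# Hedged sketch: on-line Kalman filtering of a noisy mean-reverting spread.
# The AR(1) state dynamics and parameter values are illustrative assumptions.
import numpy as np

def kalman_spread(y, mu=0.0, phi=0.95, q=0.01, r=0.04):
    """Filter observations y of a hidden spread x_t = mu + phi*(x_{t-1}-mu) + w_t,
    y_t = x_t + v_t, with Var(w)=q, Var(v)=r. Returns filtered means/variances."""
    n = len(y)
    m = np.empty(n)   # filtered state mean E[x_t | y_1..y_t]
    P = np.empty(n)   # filtered state variance
    m_pred, P_pred = mu, q / (1 - phi**2)    # stationary prior
    for t in range(n):
        K = P_pred / (P_pred + r)            # Kalman gain
        m[t] = m_pred + K * (y[t] - m_pred)  # update with the innovation
        P[t] = (1 - K) * P_pred
        m_pred = mu + phi * (m[t] - mu)      # one-step-ahead prediction
        P_pred = phi**2 * P[t] + q
    return m, P

# A large gap between the observed spread and the filtered estimate signals a
# temporary inefficiency that a pairs trade could try to exploit.
rng = np.random.default_rng(0)
x, obs = 0.0, []
for _ in range(500):
    x = 0.95 * x + rng.normal(scale=0.1)
    obs.append(x + rng.normal(scale=0.2))
m, P = kalman_spread(np.array(obs))
```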

    On the dialog between experimentalist and modeler in catchment hydrology

    The dialog between experimentalist and modeler in catchment hydrology has been minimal to date. The experimentalist often has a highly detailed yet highly qualitative understanding of dominant runoff processes; thus there is often much more information content on the catchment than we use for calibration of a model. While modelers often appreciate the need for 'hard data' for the model calibration process, little thought has been given to how modelers might access this 'soft' or process knowledge. We present a new method where soft data (i.e., qualitative knowledge from the experimentalist that cannot be used directly as exact numbers) are made useful through fuzzy measures of model-simulation and parameter-value acceptability. We developed a three-box lumped conceptual model for the Maimai catchment in New Zealand, a particularly well-studied process-hydrological research catchment. The boxes represent the key hydrological reservoirs that are known to have distinct groundwater dynamics, isotopic composition and solute chemistry. The model was calibrated against hard data (runoff and groundwater levels) as well as a number of criteria derived from the soft data (e.g., percent new water, reservoir volume). We achieved very good fits for the three-box model when optimizing the parameter values against runoff alone (Reff = 0.93). However, parameter sets obtained in this way generally showed a poor goodness-of-fit for other criteria, such as the simulated new-water contributions to peak runoff. Inclusion of soft-data criteria in the model calibration process resulted in lower Reff values (around 0.84 when including all criteria) but led to better overall performance, as judged by the experimentalist's view of catchment runoff dynamics. The model performance with respect to soft data (for instance, the new-water ratio) increased significantly, and parameter uncertainty was reduced by 60% on average with the introduction of the soft-data multi-criteria calibration. We argue that accepting lower model efficiencies for runoff is 'worth it' if one can develop a more 'real' model of catchment behavior. The use of soft data is an approach to formalize this exchange between experimentalist and modeler and to more fully utilize the information content from experimental catchments.
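
    A minimal sketch of the fuzzy-measure idea follows, assuming trapezoidal membership functions and a conservative min-combination across criteria; the criterion names echo the abstract, but every breakpoint and the combination rule are our own illustrative choices.

```python
# Hedged sketch: each criterion (hard or soft) maps a simulated value onto a
# [0, 1] acceptability score via a trapezoidal membership function; scores are
# combined across criteria. All breakpoints below are illustrative assumptions.
def trapezoid(x, a, b, c, d):
    """Fuzzy membership: 0 below a and above d, 1 between b and c, linear ramps."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

def acceptability(sim):
    """Combine hard and soft criteria for one simulated parameter set."""
    scores = [
        trapezoid(sim["reff"], 0.7, 0.85, 1.0, 1.01),      # hard: runoff efficiency
        trapezoid(sim["new_water_pct"], 20, 30, 40, 50),   # soft: % new water at peak
        trapezoid(sim["reservoir_mm"], 50, 80, 120, 160),  # soft: reservoir volume (mm)
    ]
    return min(scores)  # conservative combination: weakest criterion governs

# Parameter sets whose acceptability falls below a threshold are rejected,
# which is how the soft data constrain (and reduce) parameter uncertainty.
print(acceptability({"reff": 0.9, "new_water_pct": 35, "reservoir_mm": 100}))
```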

    Ensemble evaluation of hydrological model hypotheses

    It is demonstrated for the first time how model parameter, structural and data uncertainties can be accounted for explicitly and simultaneously within the Generalized Likelihood Uncertainty Estimation (GLUE) methodology. As an example application, 72 variants of a single soil moisture accounting store are tested as simplified hypotheses of runoff generation at six experimental grassland field-scale lysimeters through model rejection and a novel diagnostic scheme. The fields, designed as replicates, exhibit different hydrological behaviors which yield different model performances. For fields with low initial discharge levels at the beginning of events, the conceptual stores considered reach their limit of applicability. Conversely, one of the fields yielding more discharge than the others, but having larger data gaps, allows for greater flexibility in the choice of model structures. As a model learning exercise, the study points to a “leaking” of the fields not evident from previous field experiments. It is discussed how understanding observational uncertainties and incorporating these into model diagnostics can help appreciate the scale of model structural error.
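
    For intuition, here is a minimal sketch of the GLUE-style rejection step using a toy one-bucket store; the store equations, likelihood measure, and behavioural threshold are assumptions for illustration, not the paper's 72 model variants or its diagnostic scheme.

```python
# Hedged sketch of the GLUE idea: sample many parameter sets, keep those whose
# likelihood measure exceeds a rejection threshold, and treat the survivors as
# equally plausible hypotheses. Model and threshold are illustrative.
import numpy as np

rng = np.random.default_rng(1)

def soil_store(rain, capacity, k):
    """One-bucket soil moisture store: fill with rain, drain at linear rate k."""
    s, q = 0.0, []
    for r in rain:
        s = min(s + r, capacity)  # store rainfall up to capacity
        out = k * s               # linear outflow
        s -= out
        q.append(out)
    return np.array(q)

# Synthetic "observations" from known parameters plus measurement noise.
rain = rng.exponential(2.0, size=200)
q_obs = soil_store(rain, capacity=40.0, k=0.3) + rng.normal(0, 0.05, 200)

behavioural = []
for _ in range(5000):
    cap, k = rng.uniform(10, 80), rng.uniform(0.05, 0.6)
    q_sim = soil_store(rain, cap, k)
    nse = 1 - np.sum((q_obs - q_sim) ** 2) / np.sum((q_obs - q_obs.mean()) ** 2)
    if nse > 0.7:  # rejection threshold (assumed)
        behavioural.append((cap, k, nse))
# The spread of the retained ("behavioural") sets expresses parameter
# uncertainty; repeating this across model structures adds structural
# uncertainty to the ensemble.
```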

    A fundamental theorem of asset pricing for continuous time large financial markets in a two filtration setting

    We present a version of the fundamental theorem of asset pricing (FTAP) for continuous time large financial markets with two filtrations in an L^p-setting for 1 ≤ p < ∞. This extends the results of Yuri Kabanov and Christophe Stricker \cite{KS:06} to continuous time and to a large financial market setting, while still preserving the simplicity of the discrete time setting. On the other hand, it generalizes Stricker's L^p-version of the FTAP \cite{S:90} towards a setting with two filtrations. We assume neither that price processes are semimartingales (and this does not follow, due to trading with respect to the smaller filtration), nor that price processes have any path properties, nor any other particular property of the two filtrations in question, nor admissibility of portfolio wealth processes; rather, we aim for a completely general (and realistic) result, where trading strategies are just predictable with respect to a smaller filtration than the one generated by the price processes. Applications include modeling trading with delayed information, trading on different time grids, dealing with inaccurate price information, and randomization approaches to uncertainty.
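
    Schematically, the trading setup described above can be written as follows; the notation is ours, and gains are written for simple strategies since the price process is not assumed to be a semimartingale.

```latex
% Two-filtration setting (notation ours): prices are adapted to the large
% filtration, but strategies must be predictable w.r.t. the smaller one.
\[
  \mathcal{G}_t \subseteq \mathcal{F}_t , \qquad
  S \ \text{adapted to } (\mathcal{F}_t), \qquad
  H \ (\mathcal{G}_t)\text{-predictable and simple},
\]
\[
  V_T(H) \;=\; \sum_i H_{t_i} \bigl( S_{t_{i+1}} - S_{t_i} \bigr) \;\in\; L^p ,
  \qquad 1 \le p < \infty .
\]
```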

    Top Quark Physics at the Tevatron

    We review the field of top-quark physics with an emphasis on experimental techniques. The role of the top quark in the Standard Model of particle physics is summarized and the basic phenomenology of top-quark production and decay is introduced. We discuss how contributions from physics beyond the Standard Model could affect top-quark properties or event samples. The many measurements made at the Fermilab Tevatron, which test the Standard Model predictions or probe for direct evidence of new physics using the top-quark event samples, are reviewed here.

    Comment: 50 pages, 17 figures, 2 tables; version accepted by Review of Modern Physics.

    Addressing Uncertainty in TMDLs: Short Course at Arkansas Water Resources Center 2001 Annual Conference

    Management of a critical natural resource like water requires information on the status of that resource. The US Environmental Protection Agency (EPA) reported in the 1998 National Water Quality Inventory that more than 291,000 miles of assessed rivers and streams and 5 million acres of lakes do not meet State water quality standards. This inventory represents a compilation of State assessments of 840,000 miles of rivers and 17.4 million acres of lakes; a 22 percent increase in river miles and 4 percent increase in lake acres over the 1996 reports. Siltation, bacteria, nutrients and metals were the leading pollutants of impaired waters, according to EPA. The sources of these pollutants were presumed to be runoff from agricultural lands and urban areas. EPA suggests that the majority of Americans (over 218 million) live within ten miles of a polluted waterbody. This seems to contradict recent proclamations of the success of the Clean Water Act, the Nation's water pollution control law. EPA also claims that, while water quality is still threatened in the US, the amount of water safe for fishing and swimming has doubled since 1972, and that the number of people served by sewage treatment plants has more than doubled.