
    Generalized Force Approach to Point-to-Point Ionospheric Ray Tracing and Systematic Identification of High and Low Rays

    Post-print (author's version). A variant of the direct optimization method for point-to-point ionospheric ray tracing is presented. The method is well suited for applications where the launch direction of the radio-wave ray is unknown but the position of the receiver is specified instead. Iterative transformation of a candidate path into the sought-for ray is guided by a generalized force, whose definition depends on the ray type. For high rays, the negative gradient of the optical path functional is used. For low rays, a transformation of the gradient is applied, converting the neighbourhood of a saddle point into that of a local minimum. Knowledge about the character of the rays is used to establish a scheme for systematically identifying all relevant rays between given points, without the need to provide an accurate initial estimate for each solution. Various applications of the method to an isotropic ionosphere demonstrate its ability to resolve complex ray configurations, including three-dimensional propagation and multi-path propagation where rays are close in launch direction. Results of applying the method to ray tracing between Khabarovsk and Tory show good quantitative agreement with measured oblique ionograms. Icelandic Research Fund (Grant No. 184949-052). Peer reviewed.
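The high-ray case, steepest descent on a discretized optical path functional with both endpoints held fixed, can be sketched as follows. The Gaussian "layer" index model, discretization, step size, and function names are illustrative assumptions, not the paper's actual ionosphere model or code:

```python
import numpy as np

def refractive_index(r):
    """Hypothetical isotropic index with a Gaussian dip at height z0.

    r is an (..., 2) array of (x, z) points; all numbers are invented.
    """
    z0, width, depth = 300.0, 50.0, 0.3
    return 1.0 - depth * np.exp(-((r[..., 1] - z0) / width) ** 2)

def optical_path(path):
    # Discretized optical path functional: sum of n(midpoint) * |segment|.
    seg = np.diff(path, axis=0)
    mid = 0.5 * (path[:-1] + path[1:])
    return float(np.sum(refractive_index(mid) * np.linalg.norm(seg, axis=1)))

def relax_high_ray(path, steps=300, lr=0.1, h=1e-4):
    """Descend the negative gradient of the optical path (the 'generalized
    force' for high rays), keeping the transmitter and receiver fixed."""
    path = path.copy()
    for _ in range(steps):
        grad = np.zeros_like(path)
        for i in range(1, len(path) - 1):      # interior nodes only
            for d in range(path.shape[1]):
                plus, minus = path.copy(), path.copy()
                plus[i, d] += h
                minus[i, d] -= h
                grad[i, d] = (optical_path(plus) - optical_path(minus)) / (2 * h)
        path -= lr * grad                      # endpoint rows of grad are zero
    return path
```

In a near-homogeneous region the relaxed path straightens out, as Fermat's principle requires; for low rays the paper instead transforms the gradient so that the neighbourhood of a saddle point behaves like that of a local minimum.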

    Simulation, optimization and instrumentation of agricultural biogas plants

    During the last two decades, the production of renewable energy by anaerobic digestion (AD) in biogas plants has become increasingly popular due to its applicability to a great variety of organic material, from energy crops and animal waste to the organic fraction of Municipal Solid Waste (MSW), and to the relative simplicity of AD plant designs. Thus, a whole new biogas market has emerged in Europe, strongly supported by European and national funding and remuneration schemes. Nevertheless, stable and efficient operation and control of biogas plants can be challenging, due to the high complexity of the biochemical AD process, varying substrate quality, and a lack of reliable online instrumentation. In addition, governmental support for biogas plants will decrease in the long run, and the substrate market will become highly competitive. The principal aim of the research presented in this thesis is to achieve a substantial improvement in the operation of biogas plants. First, a methodology for substrate inflow optimization of full-scale biogas plants is developed based on commonly measured process variables, using dynamic simulation models as well as computational intelligence (CI) methods. This methodology, which is applicable to a broad range of different biogas plants, is followed by an evaluation of existing online instrumentation for biogas plants and the development of a novel UV/vis spectroscopic online measurement system for volatile fatty acids (VFA). This new measurement system, which uses powerful machine learning techniques, provides a substantial improvement in online process monitoring for biogas plants. The methodologies developed and results achieved in the areas of simulation and optimization were validated at a full-scale agricultural biogas plant, showing that global optimization of the substrate inflow based on dynamic simulation models can improve the yearly profit of a biogas plant by up to 70%. Furthermore, validation of the newly developed online VFA measurement at an industrial biogas plant showed that a measurement accuracy of 88% is achievable using UV/vis spectroscopic probes.
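As a rough illustration of the spectra-to-concentration regression such an online measurement system performs, the sketch below fits a ridge regression to synthetic absorbance "spectra". The band shapes, noise level, and penalty are invented for the example; the actual system uses measured UV/vis spectra and more powerful machine-learning models:

```python
import numpy as np

rng = np.random.default_rng(0)
wavelengths = np.linspace(200.0, 400.0, 101)

def band(center):
    # Gaussian absorbance band (shape and width are invented).
    return np.exp(-((wavelengths - center) / 15.0) ** 2)

# Each synthetic spectrum mixes a "VFA" band at 260 nm with an
# interfering band at 320 nm, plus measurement noise.
conc = rng.uniform(0.0, 5.0, size=200)        # VFA-like target, "g/L"
interferent = rng.uniform(0.0, 3.0, size=200)
X = np.outer(conc, band(260.0)) + np.outer(interferent, band(320.0))
X += 0.01 * rng.standard_normal(X.shape)

# Ridge regression from spectrum to concentration:
#   w = (X^T X + lam * I)^(-1) X^T y
lam = 1e-3
w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ conc)
pred = X @ w
r2 = 1.0 - np.sum((conc - pred) ** 2) / np.sum((conc - conc.mean()) ** 2)
```

The regularization term keeps the weight vector stable despite the strong collinearity between neighbouring wavelengths, which is the same difficulty chemometric methods such as PLS address on real spectra.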

    Developing Efficient Strategies for Automatic Calibration of Computationally Intensive Environmental Models

    Environmental simulation models have played a key role in civil and environmental engineering decision-making for decades. The utility of an environmental model depends on how well the model is structured and calibrated. Model calibration is typically automated: the simulation model is linked to a search mechanism (e.g., an optimization algorithm) that iteratively generates many parameter sets (e.g., thousands) and evaluates them by running the model, in an attempt to minimize differences between observed data and the corresponding model outputs. The challenge arises when the environmental model is computationally intensive to run (with run-times of minutes to hours, for example), as any automatic calibration attempt then imposes a large computational burden. Such a challenge may lead model users to accept sub-optimal solutions and forgo the best model performance. The objective of this thesis is to develop innovative strategies that circumvent the computational burden associated with automatic calibration of computationally intensive environmental models. The first main contribution of this thesis is a strategy called “deterministic model preemption”, which opportunistically evades unnecessary model evaluations in the course of a calibration experiment and can save a significant portion of the computational budget (as much as 90% in some cases). Model preemption monitors the intermediate simulation results while the model is running and terminates (i.e., pre-empts) the simulation early if it recognizes that running the model further would not benefit the search mechanism. This strategy is applicable to a range of automatic calibration algorithms (i.e., search mechanisms) and is deterministic in that it leads to exactly the same calibration results as when preemption is not applied.
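The preemption idea can be illustrated with a sum-of-squared-errors objective, which can only grow as the simulation advances through its time steps; the helper below and its toy model are a hypothetical sketch, not the thesis implementation:

```python
def sse_with_preemption(simulate_step, params, observed, best_so_far):
    """Accumulate the sum of squared errors one simulated time step at a
    time. Because SSE never decreases, the moment the running total
    exceeds the best objective found so far, this candidate cannot win,
    and the (expensive) simulation is terminated early."""
    total = 0.0
    for t, obs in enumerate(observed):
        total += (simulate_step(params, t) - obs) ** 2
        if total > best_so_far:
            return float("inf"), t + 1   # pre-empted after t + 1 steps
    return total, len(observed)

# Toy "model": y(t) = p * t, with observations generated using p = 2.
def step(p, t):
    return p * t

observed = [2.0 * t for t in range(100)]
```

Because a pre-empted candidate could never have beaten the incumbent, discarding it leaves the search trajectory unchanged, which is what makes the strategy deterministic.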
    Another main contribution of this thesis is developing and utilizing the concept of “surrogate data”: a small but representative subset of the full set of calibration data. This concept is inspired by existing surrogate modelling strategies, in which a surrogate model (also called a metamodel) is developed and used as a fast-to-run substitute for an original, computationally intensive model. A framework is developed to efficiently calibrate hydrologic models against the full calibration data while running the original model only on the surrogate data for the majority of candidate parameter sets, a strategy which leads to considerable computational saving. To this end, mapping relationships are developed that approximate the model performance on the full data from the model performance on the surrogate data. This framework is applicable to the calibration of any environmental model for which appropriate surrogate data and mapping relationships can be identified. As another main contribution, this thesis critically reviews and evaluates the large body of literature on surrogate modelling strategies from various disciplines, as they are the most commonly used methods to relieve the computational burden associated with computationally intensive simulation models. To evaluate these strategies reliably, a comparative assessment and benchmarking framework is developed which gives a clear, computational-budget-dependent definition of the success or failure of a surrogate modelling strategy. Two large families of surrogate modelling strategies are critically scrutinized and evaluated: “response surface surrogate” modelling, which involves statistical or data-driven function approximation techniques (e.g., kriging, radial basis functions, and neural networks), and “lower-fidelity physically-based surrogate” modelling, which develops and utilizes simplified models of the original system (e.g., a groundwater model with a coarse mesh).
    This thesis raises fundamental concerns about response surface surrogate modelling and demonstrates that, although they may be less efficient, lower-fidelity physically-based surrogates are generally more reliable because they preserve, to some extent, the physics of the original model. Five different surface water and groundwater models are used across this thesis to test the performance of the developed strategies and to support the discussion. The strategies themselves, however, are largely simulation-model-independent and can be applied to the calibration of any computationally intensive simulation model with the required characteristics. This thesis leaves the reader with a suite of strategies for efficient calibration of computationally intensive environmental models, along with guidance on how to select, implement, and evaluate the appropriate strategy for a given environmental model calibration problem.
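The surrogate-data idea can be sketched with a toy linear "model": a mapping from the objective on a 10% subset of the data to the objective on the full record is fitted from a handful of dual evaluations, after which new candidates are screened on the subset alone. The model, data sizes, and sampling ranges are all invented for the example:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for a hydrologic model: y = a * forcing + b, calibrated
# against 1000 noisy observations generated with a = 1.7, b = 0.4.
forcing = rng.uniform(0.0, 10.0, size=1000)
observed = 1.7 * forcing + 0.4 + 0.1 * rng.standard_normal(1000)

def sse(params, idx):
    a, b = params
    return float(np.sum((a * forcing[idx] + b - observed[idx]) ** 2))

full_idx = np.arange(1000)
surrogate_idx = rng.choice(1000, size=100, replace=False)  # "surrogate data"

# Fit the mapping full_sse ~ c1 * surrogate_sse + c0 from a small sample
# of parameter sets evaluated on BOTH data sets; afterwards, candidates
# need only the cheap surrogate-data evaluation.
sample = rng.uniform([0.0, -1.0], [3.0, 1.0], size=(20, 2))
s = np.array([sse(p, surrogate_idx) for p in sample])
f = np.array([sse(p, full_idx) for p in sample])
c1, c0 = np.polyfit(s, f, 1)

approx_full = c1 * sse((1.6, 0.5), surrogate_idx) + c0  # cheap screening
```

Since the subset is a random tenth of the record, the fitted slope lands near 10 and the two objectives are almost perfectly correlated; in the thesis the mapping is what permits calibrating to the full data while mostly running the model on the surrogate data.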

    MS FT-2-2 7 Orthogonal polynomials and quadrature: Theory, computation, and applications

    Quadrature rules find many applications in science and engineering. Their analysis is a classical area of applied mathematics that continues to attract considerable attention. This seminar brings together speakers with expertise in a large variety of quadrature rules. Its aim is to provide an overview of recent developments in the analysis of quadrature rules; the computation of error estimates and novel applications are also described.

    Generalized averaged Gaussian quadrature and applications

    A simple numerical method for constructing the optimal generalized averaged Gaussian quadrature formulas will be presented. These formulas exist in many cases in which real positive Gauss-Kronrod formulas do not exist, and can be used as an adequate alternative for estimating the error of a Gaussian rule. We also investigate the conditions under which the optimal averaged Gaussian quadrature formulas and their truncated variants are internal.
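For the Legendre weight, a plain averaged Gaussian rule can be sketched using Laurie's anti-Gaussian construction (double the last recurrence coefficient) together with the Golub-Welsch eigenvalue method; this is a simple stand-in for illustration, as the optimal generalized averaged formulas of the abstract refine this construction:

```python
import numpy as np

def rule_from_jacobi(offdiag):
    # Golub-Welsch: nodes are eigenvalues of the symmetric tridiagonal
    # Jacobi matrix; weights come from the first eigenvector components,
    # scaled by mu0 = 2, the total mass of the Legendre weight on [-1, 1].
    J = np.diag(np.sqrt(offdiag), 1)
    nodes, vecs = np.linalg.eigh(J + J.T)
    return nodes, 2.0 * vecs[0] ** 2

def averaged_gauss(n, f):
    # Legendre three-term recurrence coefficients beta_k = k^2/(4k^2 - 1).
    k = np.arange(1, n + 1)
    beta = k ** 2 / (4.0 * k ** 2 - 1.0)
    xg, wg = rule_from_jacobi(beta[: n - 1])  # n-point Gauss rule
    beta_anti = beta.copy()
    beta_anti[-1] *= 2.0                      # anti-Gaussian: double last beta
    xa, wa = rule_from_jacobi(beta_anti)      # (n+1)-point anti-Gauss rule
    g = float(wg @ f(xg))
    avg = 0.5 * (g + float(wa @ f(xa)))       # averaged rule value
    return g, avg, abs(avg - g)               # |avg - g| estimates the Gauss error
```

For a smooth integrand the anti-Gaussian error has, to leading order, the opposite sign of the Gauss error, so the averaged value is markedly more accurate and the difference |avg - g| serves as the error estimate, playing the role a Gauss-Kronrod extension plays when it exists.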

    Methods and application of deep-time thermochronology: Insights from slowly-cooled terranes of Mongolia and the North American craton

    Continental interiors are an underappreciated facet of plate tectonics, owing to the perception that they are often static over long timescales. Salient tectonic margins receive more attention because of their comparatively dynamic state during the creation and destruction of continents and ocean basins. I utilize low-temperature (U-Th)/He and 40Ar/39Ar thermochronology to address questions regarding the spatial and temporal thermal evolution, and by proxy the exhumation and burial histories, of these slowly-cooled terranes through deep time. Chapter One focuses on the topographic evolution of the Hangay Mountains of central Mongolia, where apatite (U-Th)/He data and thermal models suggest that the post-orogenic landscape experienced rapid relief loss of a few hundred meters in the mid-Mesozoic. The Hangay are now characterized by a relict landscape that has undergone slow exhumation on the order of ~10 m/Ma since the Cretaceous (~100 Ma), analogous to other old landscapes such as the Appalachians. The central Mongolian landscape remains in a state of topographic disequilibrium, while modest surface uplift since the Oligocene and recent glaciation have had little effect on erosion rates because tectonism has been minor and the climate very dry throughout the Cenozoic. Chapter Two confronts the problem of dispersed apatite (U-Th)/He cooling ages that often afflicts slowly-cooled terranes such as the Hangay Mountains. Conventional total-gas analysis offers little explanation or remedy for He age scatter, which has typically been attributed to factors such as isotopic zonation, crystal lattice defects, and radiation damage. Unlike conventional analysis, the continuous ramped heating (CRH) technique exploits incremental 4He release during continuous, controlled heating under static extraction line conditions.
    This approach allows measurement of the cumulative gas released from apatite grains and assessment of the characteristic sigmoidal release-curve shape as a means of distinguishing between expected (radiogenic) and anomalous volume-diffusion behavior. Screening results for multiple apatite suites show that the CRH method can discriminate between the simple, smooth release of apatites exhibiting expected behavior and well-replicated ages, and grains that do not replicate well and show more complicated 4He release patterns; it also offers a means to correct these ages. Chapter Three focuses on the assumed long-term stability of the southern Canadian Shield. Craton stability over billion-year timescales is often inferred from the lack of geologic records suggesting otherwise. For the Proterozoic (2.5-0.54 Ga) there is little or no intermediate-temperature thermal-history information for many locations; however, K-feldspar 40Ar/39Ar MDD data and modeled thermal histories, linked to published high- and low-temperature data from the Canadian Shield, suggest the southern craton experienced unroofing delayed until ~1 Ga, coeval with the formation of the supercontinent Rodinia. K-feldspar data suggest a prolonged period of near-isothermal cooling of <0.5°C/Ma in the late Proterozoic, during which rocks sat at cratonic depths in the middle crust for up to ~500 million years at temperatures of ~150-200°C before being exhumed to the surface in the Neoproterozoic. Thermal-history solutions and geophysical evidence of underplating and crustal thickening at the Mid-Continental Rift and adjacent regions suggest uplift and a previously unrecognized phase of cratonic unroofing beginning in the Neoproterozoic, which ultimately contributed to the development of the Great Unconformity of North America.

    Computational Intelligence for Modeling, Control, Optimization, Forecasting and Diagnostics in Photovoltaic Applications

    This book is a Special Issue Reprint edited by Prof. Massimo Vitelli and Dr. Luigi Costanzo. It contains original research articles covering, but not limited to, the following topics: maximum power point tracking techniques; forecasting techniques; sizing and optimization of PV components and systems; PV modeling; reconfiguration algorithms; fault diagnosis; mismatching detection; and decision processes for grid operators.