
    Modular multilevel converter losses model for HVdc applications

    Multi-terminal high voltage dc (HVdc) grids may eventually become a feasible solution for transporting energy to remote and/or distant areas, and their exploitation depends, among other things, on the performance of the converter terminals. To optimize the power transmission strategy along such a grid, it is therefore necessary to know the efficiency of all the converters at all points of operation, i.e., under the different load conditions. With this in view, the aim of this work is to provide a methodology for modeling the efficiency of the modular multilevel converter (MMC) by means of a mathematical expression that describes, over a broad range of active and reactive power flow combinations, the power losses generated by the semiconductors. Following the presented methodology, a polynomial-based model with a reduced number of coefficients is deduced, in such a way that it can be used directly in optimal power flow (OPF) studies. The accuracy of the proposed model is characterized by a worst-case absolute relative error of approximately 3%.
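
    A rough illustration of how such a polynomial loss model can be fitted and evaluated is sketched below: a least-squares fit of a second-order polynomial in P and Q to synthetic loss samples. The quadratic basis, the synthetic loss surface, and all numerical values here are illustrative assumptions, not the paper's model.

        import numpy as np

        rng = np.random.default_rng(0)

        def basis(P, Q):
            # Second-order polynomial basis in active power P and reactive power Q.
            return np.column_stack([np.ones_like(P), P, Q, P**2, Q**2, P*Q])

        def fit_loss_polynomial(P, Q, losses):
            # Least-squares fit of the coefficients of P_loss(P, Q).
            coeffs, *_ = np.linalg.lstsq(basis(P, Q), losses, rcond=None)
            return coeffs

        # Synthetic loss samples over a grid of active/reactive operating points
        # (per unit); this loss surface is made up for demonstration purposes.
        P, Q = np.meshgrid(np.linspace(-1, 1, 11), np.linspace(-1, 1, 11))
        P, Q = P.ravel(), Q.ravel()
        truth = 0.004 + 0.002*P + 0.001*Q + 0.010*P**2 + 0.009*Q**2 + 0.003*P*Q
        samples = truth + 1e-4*rng.standard_normal(P.size)

        c = fit_loss_polynomial(P, Q, samples)
        rel_err = np.abs(basis(P, Q) @ c - truth) / truth
        print(f"worst-case relative error: {rel_err.max():.3%}")

    A fixed low-order polynomial of this kind keeps the loss term smooth and cheap to evaluate, which is what makes it suitable for direct use in OPF studies, as the abstract notes.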

    A Bayesian palaeoenvironmental transfer function model for acidified lakes

    A Bayesian approach to palaeoecological environmental reconstruction, deriving from the unimodal responses generally exhibited by organisms to an environmental gradient, is described. The approach uses Bayesian model selection to calculate a collection of probability-weighted, species-specific response curves (SRCs) for each taxon within a training set, with an explicit treatment for zero abundances. These SRCs are used to reconstruct the environmental variable from sub-fossilised assemblages. The approach enables a substantial increase in computational efficiency (several orders of magnitude) over existing Bayesian methodologies. The model is developed from the Surface Water Acidification Programme (SWAP) training set and is demonstrated to exhibit predictive power comparable to existing Weighted Averaging and Maximum Likelihood methodologies, though with improvements in bias; the additional explanatory power of the Bayesian approach lies in an explicit calculation of uncertainty for each individual reconstruction. The model is applied to reconstruct the Holocene acidification history of the Round Loch of Glenhead, including a reconstruction of recent recovery derived from sediment trap data. The Bayesian reconstructions display similar trends to conventional (Weighted Averaging Partial Least Squares) reconstructions but provide a better reconstruction of extreme pH and are more sensitive to small changes in diatom assemblages. The validity of the posteriors as an apparently meaningful representation of assemblage-specific uncertainty, together with the high computational efficiency of the approach, opens up the possibility of highly constrained multiproxy reconstructions.
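
    The core idea, combining unimodal species response curves into a posterior over the environmental variable, can be sketched in a few lines. This toy version assumes Gaussian SRCs and Poisson counts evaluated on a pH grid; the SWAP calibration, the paper's explicit zero-abundance treatment, and all parameter values here are illustrative assumptions.

        import numpy as np
        from scipy.stats import poisson

        pH_grid = np.linspace(4.0, 8.0, 401)

        def src(pH, optimum, tolerance, height):
            # Unimodal (Gaussian) species response curve: expected count vs. pH.
            return height * np.exp(-0.5 * ((pH - optimum) / tolerance) ** 2)

        # Hypothetical response-curve parameters for three diatom taxa.
        taxa = [dict(optimum=5.2, tolerance=0.5, height=40),
                dict(optimum=6.1, tolerance=0.7, height=25),
                dict(optimum=7.0, tolerance=0.4, height=30)]
        observed_counts = [18, 22, 2]    # counts of each taxon in one sample

        log_post = np.zeros_like(pH_grid)           # flat prior over the grid
        for t, n in zip(taxa, observed_counts):
            log_post += poisson.logpmf(n, src(pH_grid, **t))

        post = np.exp(log_post - log_post.max())
        post /= np.trapz(post, pH_grid)             # normalized posterior
        mean_pH = np.trapz(pH_grid * post, pH_grid)
        print(f"reconstructed pH: {mean_pH:.2f}")

    The full posterior, rather than a point estimate, is what provides the per-reconstruction uncertainty highlighted in the abstract.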

    Adaptive time-stepping for incompressible flow. Part II: Navier-Stokes equations

    We outline a new class of robust and efficient methods for solving the Navier-Stokes equations. We describe a general solution strategy that has two basic building blocks: an implicit time integrator using a stabilized trapezoid rule with an explicit Adams-Bashforth method for error control, and a robust Krylov subspace solver for the spatially discretized system. We present numerical experiments illustrating the potential of our approach.
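
    A minimal scalar analogue of this building block is sketched below: each implicit trapezoid rule step is compared against an explicit Adams-Bashforth (AB2) solution, and their difference drives the step-size controller. The right-hand side, the tolerance, and the closed-form implicit solve are illustrative assumptions; the paper's stabilization and the Krylov solver for the discretized Navier-Stokes system are not reproduced here.

        import numpy as np

        def f(t, u):
            return -u + np.sin(t)                 # illustrative right-hand side

        def tr_ab2_adaptive(u0, t0, t_end, dt=1e-2, tol=1e-5):
            t, u = t0, u0
            f_old = f_prev = f(t, u)              # first step degenerates to Euler
            while t < t_end:
                dt = min(dt, t_end - t)
                # Implicit trapezoid step, solved in closed form because f is
                # linear in u (a Navier-Stokes code would call a Krylov solver
                # for the spatially discretized system at this point).
                u_tr = (u + 0.5*dt*(f_old + np.sin(t + dt))) / (1.0 + 0.5*dt)
                # Explicit AB2 comparison solution (constant-step coefficients
                # are used here for simplicity).
                u_ab2 = u + dt*(1.5*f_old - 0.5*f_prev)
                est = abs(u_tr - u_ab2) / 3.0     # classical TR-AB2 error estimate
                if est <= tol:
                    t, u = t + dt, u_tr           # accept the step
                    f_prev, f_old = f_old, f(t, u)
                # Step-size controller with safety factor and growth cap.
                dt *= min(2.0, 0.9 * (tol / max(est, 1e-16)) ** (1.0/3.0))
            return t, u

        print(tr_ab2_adaptive(u0=1.0, t0=0.0, t_end=5.0))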

    Integration of tools for the Design and Assessment of High-Performance, Highly Reliable Computing Systems (DAHPHRS), phase 1

    Systems for Strategic Defense Initiative (SDI) space applications typically require both high performance and very high reliability. These requirements present the systems engineer evaluating such systems with the extremely difficult problem of conducting performance and reliability trade-offs over large design spaces. A controlled development process, supported by appropriate automated tools, must be used to ensure that the system will meet design objectives. This report describes an investigation of the methods, tools, and techniques necessary to support performance and reliability modeling for SDI systems development. Models of the JPL Hypercube, the Encore Multimax, and the C. S. Draper Laboratory Fault-Tolerant Parallel Processor (FTPP) parallel computing architectures, using candidate SDI weapons-to-target assignment algorithms as workloads, were built and analyzed as a means of identifying the necessary system models, how the models interact, and what experiments and analyses should be performed. As a result of this effort, weaknesses in the existing methods and tools were revealed, and the capabilities that will be required for both individual tools and an integrated toolset were identified.

    Error control in simplification before generation algorithms for symbolic analysis of large analogue circuits

    Circuit reduction is a fundamental first step in the symbolic analysis of large analogue circuits. A new simplification-before-generation algorithm is presented which is very efficient in terms of both speed and the amount of circuit reduction achieved, and which solves the accuracy problems of previously reported approaches.
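
    A toy version of error-controlled simplification is sketched below: elements of a small RC network are greedily open-circuited as long as the magnitude response at sampled frequencies stays within a relative-error budget. The circuit, removal order, and tolerance are illustrative assumptions; this is not the paper's algorithm, only the general simplification-before-generation idea of discarding elements under an error bound before any symbolic expression is generated.

        import numpy as np

        N = 3                                     # internal nodes of an RC ladder
        s_vals = 2j*np.pi*np.logspace(1, 6, 50)   # s = j*omega sample points
        # Each element: (node_a, node_b, kind, value); node 0 is ground.
        elements = [(1, 2, 'R', 1e3), (2, 3, 'R', 1e3),
                    (1, 0, 'C', 1e-9), (2, 0, 'C', 1e-9), (3, 0, 'C', 1e-12)]

        def gain(elems):
            # |V(node N) / I_in| for a 1 A source into node 1, at each frequency.
            out = []
            for s in s_vals:
                Y = np.zeros((N, N), dtype=complex)
                for a, b, kind, val in elems:
                    y = 1.0/val if kind == 'R' else s*val
                    for node in (a, b):
                        if node:
                            Y[node-1, node-1] += y
                    if a and b:
                        Y[a-1, b-1] -= y
                        Y[b-1, a-1] -= y
                I = np.zeros(N, dtype=complex)
                I[0] = 1.0
                out.append(abs(np.linalg.solve(Y, I)[N-1]))
            return np.array(out)

        def max_rel_err(trial, ref):
            try:
                return np.max(np.abs(gain(trial) - ref) / ref)
            except np.linalg.LinAlgError:         # a node was left floating
                return np.inf

        ref, tol = gain(elements), 0.05
        kept = list(elements)
        for e in sorted(elements, key=lambda el: el[3]):  # try small values first
            trial = [x for x in kept if x != e]
            if len(trial) > 1 and max_rel_err(trial, ref) < tol:
                kept = trial                      # removal stays within budget
        print(f"kept {len(kept)} of {len(elements)} elements")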

    Strong experimental guarantees in ultrafast quantum random number generation

    We describe a methodology and standard of proof for experimental claims of quantum random number generation (QRNG), analogous to well-established methods from precision measurement. For appropriately constructed physical implementations, lower bounds on the quantum contribution to the average min-entropy can be derived from measurements on the QRNG output. Given these bounds, randomness extractors allow generation of nearly perfect "ε-random" bit streams. An analysis of experimental uncertainties then gives experimentally derived confidence levels on the ε-randomness of these sequences. We demonstrate the methodology by application to phase-diffusion QRNG, driven by spontaneous emission as a trusted randomness source. All other factors, including classical phase noise, amplitude fluctuations, digitization errors, and correlations due to finite detection bandwidth, are treated with paranoid caution, i.e., assuming the worst possible behaviors consistent with observations. A data-constrained numerical optimization of the distribution of untrusted parameters is used to lower-bound the average min-entropy. Under this paranoid analysis, the QRNG remains efficient, generating at least 2.3 quantum random bits per symbol with 8-bit digitization and at least 0.83 quantum random bits per symbol with binary digitization, at a confidence level of 0.99993. The result demonstrates ultrafast QRNG with strong experimental guarantees.
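
    The extraction step can be illustrated with a standard Toeplitz-hashing strong extractor: given a lower bound k on the min-entropy of an n-bit raw block, the leftover hash lemma permits roughly m = k - 2*log2(1/eps) nearly uniform output bits. The block size and security parameter below are illustrative assumptions; only the 2.3 bits per 8-bit symbol rate is taken from the abstract.

        import numpy as np
        from scipy.linalg import toeplitz

        rng = np.random.default_rng(0)

        n = 1024                          # raw bits per block (128 8-bit symbols)
        k = int(2.3 * (n // 8))           # min-entropy bound: 2.3 bits per symbol
        eps = 1e-10                       # extractor security parameter
        m = int(k - 2*np.log2(1/eps))     # leftover hash lemma output length

        # A random m x n Toeplitz matrix needs m + n - 1 seed bits: the first
        # column plus the first row excluding its shared first entry.
        seed = rng.integers(0, 2, size=m + n - 1)
        col, row = seed[:m], np.concatenate((seed[:1], seed[m:]))
        T = toeplitz(col, row)

        raw = rng.integers(0, 2, size=n)  # stand-in for digitized QRNG output
        extracted = (T @ raw) % 2         # matrix-vector product over GF(2)
        print(f"{n} raw bits -> {extracted.size} nearly uniform bits")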