
    The adequacy of the present practice in dynamic aggregated modelling of wind farm systems

    Large offshore wind farms are usually composed of several hundred individual wind turbines, each with its own complex set of dynamics. The analysis of the dynamic interaction between wind turbine generators (WTGs), interconnecting AC cables, and the voltage source converter (VSC) based High Voltage DC (HVDC) system is difficult because of the complexity and scale of the entire system. Detailed modelling and modal analysis of a representative wind farm system reveal several critical resonant modes within the system. Several of these modes have frequencies close to harmonics of the power system frequency and are poorly damped. From a computational perspective, aggregation of the physical model is necessary to reduce the complexity to a practical level. This paper focuses on present practices in the aggregation of the WTGs and the collection system, and their influence on the damping and frequency characteristics of the critical oscillatory modes. The effect of aggregation on the critical modes is discussed using modal analysis and dynamic simulation, and the adequacy of the aggregation methods is assessed.
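The modal-analysis criterion this abstract relies on can be illustrated with a minimal sketch (not from the paper): given a linearised state matrix, each oscillatory mode's frequency and damping ratio follow from its eigenvalues σ ± jω, with ζ = -σ/√(σ² + ω²). The two-state matrix below is a hypothetical oscillator chosen for illustration, not a wind farm model.

```python
import numpy as np

def modal_damping(A):
    """Return (frequency in Hz, damping ratio) for each oscillatory
    mode of a linearised state matrix A, one entry per conjugate pair."""
    modes = []
    for lam in np.linalg.eigvals(A):
        sigma, omega = lam.real, lam.imag
        if omega > 0:  # keep one eigenvalue of each conjugate pair
            freq_hz = omega / (2 * np.pi)
            zeta = -sigma / np.hypot(sigma, omega)  # damping ratio
            modes.append((freq_hz, zeta))
    return modes

# Hypothetical two-state oscillator with sigma = -5, omega = 2*pi*100 rad/s,
# i.e. a 100 Hz mode close to the 2nd harmonic of a 50 Hz system.
sigma, omega = -5.0, 2 * np.pi * 100
A = np.array([[sigma, -omega], [omega, sigma]])
print(modal_damping(A))  # one mode near 100 Hz with small positive damping
```

A mode with a small positive ζ like this one is exactly the kind of poorly damped, near-harmonic mode the abstract warns aggregation may misrepresent.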

    Singularity-sensitive gauge-based radar rainfall adjustment methods for urban hydrological applications

    Gauge-based radar rainfall adjustment techniques have been widely used to improve the applicability of radar rainfall estimates to large-scale hydrological modelling. However, their use for urban hydrological applications is limited, as they were mostly developed upon Gaussian approximations and therefore tend to smooth off so-called "singularities" (features of a non-Gaussian field) that can be observed in the fine-scale rainfall structure. Overlooking the singularities could be critical, given that their distribution is highly consistent with that of local extreme magnitudes. This deficiency may cause large errors in the subsequent urban hydrological modelling. To address this limitation and improve the applicability of adjustment techniques at urban scales, a method is proposed herein which incorporates a local singularity analysis into existing adjustment techniques and preserves the singularity structures throughout the adjustment process. In this paper the proposed singularity analysis is incorporated into the Bayesian merging technique, and the performance of the resulting singularity-sensitive method is compared with that of the original (non-singularity-sensitive) Bayesian technique and the commonly used mean field bias adjustment. The test uses four storm events observed during 2011 in the Portobello catchment (53 km²; Edinburgh, UK), for which radar estimates, dense rain gauge and sewer flow records, and a recently calibrated urban drainage model were available. The results suggest that, in general, the proposed singularity-sensitive method can effectively preserve the non-normality in the local rainfall structure, while retaining the ability of the original adjustment techniques to generate nearly unbiased estimates. Moreover, the ability of the singularity-sensitive technique to preserve the non-normality in rainfall estimates often leads to better reproduction of the urban drainage system's dynamics, particularly of peak runoff flows.
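For reference, the mean field bias adjustment used here as a baseline reduces to a single multiplicative correction factor applied to the whole radar field. The sketch below uses hypothetical rainfall values and deliberately omits the singularity analysis that is the paper's contribution.

```python
import numpy as np

def mean_field_bias_adjust(radar_field, radar_at_gauges, gauge_obs):
    """Apply a single multiplicative mean-field bias correction:
    B = sum(gauge observations) / sum(collocated radar estimates)."""
    B = np.sum(gauge_obs) / np.sum(radar_at_gauges)
    return B * radar_field, B

radar = np.array([[2.0, 4.0], [1.0, 3.0]])  # hypothetical radar grid, mm/h
radar_g = np.array([2.0, 4.0])              # radar pixels collocated with gauges
gauges = np.array([3.0, 5.0])               # hypothetical gauge readings, mm/h
adjusted, B = mean_field_bias_adjust(radar, radar_g, gauges)
print(B)  # 8/6 ≈ 1.333
```

Because every pixel is scaled by the same factor B, local extremes keep their relative shape but the fine-scale non-Gaussian structure is neither detected nor specifically preserved, which is the limitation the singularity-sensitive method targets.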

    Sacrificing Accuracy for Reduced Computation: Cascaded Inference Based on Softmax Confidence

    We study the tradeoff between computational effort and accuracy in a cascade of deep neural networks. During inference, early termination in the cascade is controlled by confidence levels derived directly from the softmax outputs of intermediate classifiers. The advantage of early termination is that classification is performed using less computation, thus adjusting the computational effort to the complexity of the input. Moreover, dynamic modification of confidence thresholds allows one to trade accuracy for computational effort without retraining. Basing early termination on softmax classifier outputs is justified by experiments demonstrating an almost linear relation between confidence levels in intermediate classifiers and accuracy. Our experiments with ResNet-based architectures obtained the following results: (i) a speedup of 1.5 that sacrifices 1.4% accuracy on the CIFAR-10 test set; (ii) a speedup of 1.19 that sacrifices 0.7% accuracy on the CIFAR-100 test set; (iii) a speedup of 2.16 that sacrifices 1.4% accuracy on the SVHN test set.
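The early-termination rule described above can be sketched as follows. The two toy "classifiers" and the 0.9 threshold are illustrative assumptions, not the paper's trained ResNet stages; the point is only the control flow: stop at the first stage whose maximum softmax probability clears the threshold.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a logit vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

def cascaded_predict(stages, x, threshold=0.9):
    """Run classifier stages in order; terminate as soon as the softmax
    confidence (max probability) reaches the threshold, or at the last
    stage. Returns (predicted class, number of stages evaluated)."""
    for k, stage in enumerate(stages, start=1):
        probs = softmax(stage(x))
        if probs.max() >= threshold or k == len(stages):
            return int(probs.argmax()), k

# Two hypothetical "classifiers": the first is unsure, the second confident.
early = lambda x: np.array([1.0, 1.1, 0.9])  # low-confidence logits
late = lambda x: np.array([0.0, 6.0, 0.0])   # high-confidence logits
print(cascaded_predict([early, late], x=None, threshold=0.9))  # (1, 2)
```

Raising or lowering `threshold` at inference time is what lets the cascade trade accuracy against computation without retraining.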

    Bioethanol from Germinated Grains.

    The most well-known way to produce bioethanol is by enzymatic hydrolysis and fermentation of starch. In a new project, "BioConcens" (2007), sponsored by DARCOF (DAnish Research Center for Organic Food and farming), one aim is to develop a combined ethanol and biogas production for use in organic farming using starch-containing biomass. Natural enzymes from cereals will be used for hydrolysis of starch to glucose, in accordance with brewing technology. Commercial enzymes are often produced from gene-modified organisms and will therefore not be used in the proposed organic context or process. A preliminary study was performed in which grains of wheat, rye, and barley were germinated using traditional methods applied in malting for beer production. During malting the amylase enzymes present in the grain are activated (autoamylolytic effect). Three steps were applied in the malting process: steeping, germination, and drying of the grains. After malting, the grains were milled and mixed with water to 13% DM, cooked at 57.5 °C for 2 hours (to activate the enzymes), and cooled to 30 °C before adding baker's yeast. The results of this study indicate that efficient hydrolysis of starch can be achieved by activation of autoamylolytic enzymes in cereal grains after a malting process. The ethanol yields obtained with autoamylolytic hydrolysis were comparable to (or slightly higher than) those of reference experiments using commercial enzymes (amylases). The highest ethanol yield was obtained with wheat (0.34 g/g DM grain), followed by barley (0.31 g/g DM grain) and rye (0.29 g/g DM grain).

    PCV33 THE IMPACT OF SWITCHING PATIENTS TO ROSUVASTATIN ON HEALTH-CARE EXPENDITURE AND PREVENTION OF CARDIOVASCULAR DISEASE: A COHORT STUDY


    A trans10-18:1 enriched fraction from beef fed a barley grain-based diet induces lipogenic gene expression and reduces viability of HepG2 cells.

    Beef fat is a natural source of trans (t) fatty acids, and is typically enriched with either t10-18:1 or t11-18:1. Little is known about the bioactivity of individual t-18:1 isomers, and the present study compared the effects of t9-18:1, cis (c)9-18:1, and t-18:1 fractions isolated from beef fat enriched with either t10-18:1 (HT10) or t11-18:1 (HT11). All 18:1 isomers reduced human liver (HepG2) cell viability relative to control. Both c9-18:1 and HT11 were the least toxic, t9-18:1 showed a dose-dependent increase in toxicity, and HT10 had the greatest toxicity (P<0.05). Incorporation of t-18:1 isomers was 1.8-2.5-fold greater in triacylglycerol (TG) than in phospholipids (PL), whereas Δ9 desaturation products were selectively incorporated into PL. Culturing HepG2 cells with t9-18:1 and HT10 increased (P<0.05) the Δ9 desaturation index (c9-16:1/16:0) compared to other fatty acid treatments. HT10 and t9-18:1 also increased expression of lipogenic genes (FAS, SCD1, HMGCR and SREBP2) compared to control (P<0.05), whereas c9-18:1 and HT11 did not affect the expression of these genes. Our results suggest that the effects of HT11 and c9-18:1 were similar to the BSA control, whereas HT10 and t9-18:1 (the predominant trans fatty acid isomer found in partially hydrogenated vegetable oils) were more cytotoxic and led to greater expression of lipogenic genes.

    The Complexity of Graph-Based Reductions for Reachability in Markov Decision Processes

    We study the never-worse relation (NWR) for Markov decision processes with an infinite-horizon reachability objective. A state q is never worse than a state p if the maximal probability of reaching the target set of states from p is at most that from q, regardless of the probabilities labelling the transitions. Extremal-probability states, end components, and essential states are all special cases of the equivalence relation induced by the NWR. Using the NWR, states in the same equivalence class can be collapsed. Then, actions leading to suboptimal states can be removed. We show that the natural decision problem associated with computing the NWR is coNP-complete. Finally, we extend a previously known incomplete polynomial-time iterative algorithm to under-approximate the NWR.
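The maximal reachability probability the NWR is defined over can be computed by standard value iteration; the sketch below illustrates that underlying objective on a hypothetical four-state MDP (it is not the paper's NWR algorithm). In the example, action b from s0 reaches the target with probability only 0.5, so it can be pruned as suboptimal once the values are known.

```python
def max_reach_prob(states, actions, P, target, iters=100):
    """Value iteration for the max probability of reaching `target`.
    P[s][a] is a list of (next_state, probability) pairs."""
    v = {s: (1.0 if s in target else 0.0) for s in states}
    for _ in range(iters):
        for s in states:
            if s in target:
                continue
            v[s] = max(
                (sum(p * v[t] for t, p in P[s][a]) for a in actions[s]),
                default=0.0,  # states with no actions keep value 0
            )
    return v

# Hypothetical MDP: from s0, action a goes to s1 surely; action b
# reaches the goal with probability 0.5 and a sink otherwise.
states = ["s0", "s1", "goal", "sink"]
actions = {"s0": ["a", "b"], "s1": ["a"], "goal": [], "sink": []}
P = {
    "s0": {"a": [("s1", 1.0)], "b": [("goal", 0.5), ("sink", 0.5)]},
    "s1": {"a": [("goal", 1.0)]},
}
v = max_reach_prob(states, actions, P, target={"goal"})
print(v["s0"], v["s1"])  # 1.0 1.0
```

Note that the actual probabilities matter here, whereas the NWR is a graph-based relation that must hold regardless of how the transitions are labelled; that robustness is what makes deciding it coNP-complete.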

    Preference for Deliberation and Perceived Usefulness of Standard- and Narrative-Style Leaflet Designs: Implications for Equitable Cancer-Screening Communication

    BACKGROUND: In the UK, cancer-screening invitations are mailed with information styled in a standard, didactic way to allow for informed choice. Information processing theory suggests this "standard style" could be more appealing to people who prefer deliberative thinking. People less likely to engage in deliberative thinking may be disenfranchised by the design of current standard-style information. PURPOSE: To examine the distribution of preference for deliberative thinking across demographic groups (Study 1) and explore associations between preference for deliberative thinking and perceived usefulness of standard- and narrative-style screening information (Study 2). METHODS: In Study 1, adults aged 45-59 (n = 4,241) were mailed a questionnaire via primary care assessing preference for deliberative thinking and demographic characteristics. In Study 2, a separate cohort of adults aged 45-59 (n = 2,058) were mailed standard- and narrative-style leaflets and a questionnaire assessing demographic characteristics, preference for deliberative thinking, and perceived leaflet usefulness. Data were analyzed using multiple regression. RESULTS: In Study 1 (n = 1,783) and Study 2 (n = 650), having lower socioeconomic status, being a woman, and being of nonwhite ethnicity were associated with lower preference for deliberative thinking. In Study 2, the standard-style leaflet was perceived as less useful among participants with lower preference for deliberative thinking, while perceived usefulness of the narrative-style leaflet did not differ by preference for deliberative thinking. CONCLUSIONS: Information leaflets using a standard style may disadvantage women and those experiencing greater socioeconomic deprivation. More work is required to identify design styles that have a greater appeal for people with low preference for deliberative thinking.

    Collapse of a Bose gas: kinetic approach

    We have analytically explored the temperature dependence of the critical number of particles for the collapse of a harmonically trapped, attractively interacting Bose gas below the condensation point by introducing a kinetic approach within the Hartree-Fock approximation. The temperature dependence obtained by this simple approach is consistent with that obtained from the scaling theory.
    Comment: Brief Report, 4 pages, 1 figure. Accepted in Pramana – Journal of Physics.