
    The contribution of grass and clover root turnover to N leaching

    Sources of inorganic and organic N leaching from grass-clover mixtures at field sites in Denmark, Germany and Iceland were investigated. Grass or clover was labelled with 15N-urea four times (autumn 2007, spring, summer and autumn 2008) prior to the leaching season in autumn and winter 2008. Soil water was sampled at 30 cm depth and analyzed for 15N-enrichment of dissolved inorganic N (DIN) and dissolved organic N (DON). Most 15N was recovered in DON for both labelled grass and clover at all sites. At the Danish site, grass and clover contributed more to the DON pool than to the DIN pool, whereas the opposite was observed at the German and Icelandic sites. The results show that both clover and grass contribute directly to N leaching from the root zone in mixtures, and that the contribution from clover is higher than that from grass. Furthermore, the present study indicates that roots active in the growth season prior to the drainage period contribute more to N leaching than roots active in the growth season of the previous year, which is consistent with estimates of root longevity at the three sites.

    Parallel Batch-Dynamic Graph Connectivity

    In this paper, we study batch parallel algorithms for the dynamic connectivity problem, a fundamental problem that has received considerable attention in the sequential setting. The most well-known sequential algorithm for dynamic connectivity is the elegant level-set algorithm of Holm, de Lichtenberg and Thorup (HDT), which achieves O(\log^2 n) amortized time per edge insertion or deletion, and O(\log n / \log\log n) time per query. We design a parallel batch-dynamic connectivity algorithm that is work-efficient with respect to the HDT algorithm for small batch sizes, and is asymptotically faster when the average batch size is sufficiently large. Given a sequence of batched updates, where \Delta is the average batch size of all deletions, our algorithm achieves O(\log n \log(1 + n / \Delta)) expected amortized work per edge insertion and deletion and O(\log^3 n) depth w.h.p. Our algorithm answers a batch of k connectivity queries in O(k \log(1 + n/k)) expected work and O(\log n) depth w.h.p. To the best of our knowledge, our algorithm is the first parallel batch-dynamic algorithm for connectivity. Comment: This is the full version of the paper appearing in the ACM Symposium on Parallelism in Algorithms and Architectures (SPAA), 201
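    A minimal sequential sketch of the query semantics, assuming a union-find structure over the current edge set; this is not the HDT level structure or the parallel batch-dynamic algorithm described above, and all names are illustrative.

    # Sequential sketch: answer a batch of connectivity queries with union-find
    # over the current edge set. Illustrates the query semantics only; it is not
    # the HDT algorithm or the parallel batch-dynamic algorithm from the paper.
    class UnionFind:
        def __init__(self, n):
            self.parent = list(range(n))

        def find(self, x):
            while self.parent[x] != x:          # path halving keeps trees shallow
                self.parent[x] = self.parent[self.parent[x]]
                x = self.parent[x]
            return x

        def union(self, a, b):
            ra, rb = self.find(a), self.find(b)
            if ra != rb:
                self.parent[ra] = rb

    def batch_connected(n, edges, queries):
        uf = UnionFind(n)
        for u, v in edges:
            uf.union(u, v)
        return [uf.find(u) == uf.find(v) for u, v in queries]

    print(batch_connected(5, [(0, 1), (1, 2), (3, 4)], [(0, 2), (0, 3)]))  # [True, False]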

    Simulation of transition dynamics to high confinement in fusion plasmas

    The transition dynamics from the low (L) to the high (H) confinement mode in magnetically confined plasmas is investigated using a first-principles four-field fluid model. Numerical results are in close agreement with measurements from the Experimental Advanced Superconducting Tokamak (EAST). In particular, the slow transition with an intermediate dithering phase is well reproduced by the numerical solutions. Additionally, the model reproduces the experimentally determined L-H transition power threshold scaling, in which the ion power threshold increases with increasing particle density. The results hold promise for developing predictive models of the transition, essential for understanding and optimizing future fusion power reactors.
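    As a hedged illustration only: the sketch below integrates a toy predator-prey model of turbulence energy and shear flow, a common cartoon of L-H-like transition dynamics; it is not the four-field fluid model used in the study, and the parameter values are arbitrary.

    # Toy cartoon of a confinement transition: turbulence energy E is driven at
    # rate gamma (a stand-in for heating power) and suppressed by a sheared flow V
    # that it feeds. NOT the paper's four-field model; parameters are arbitrary.
    from scipy.integrate import solve_ivp

    def rhs(t, y, gamma, a=1.0, b=1.0, c=1.0, d=0.5):
        E, V = y
        dE = gamma * E - a * E**2 - b * E * V   # drive, saturation, shear suppression
        dV = c * E * V - d * V                  # flow pumped by turbulence, damped
        return [dE, dV]

    for gamma in (0.4, 1.5):                    # below / above the toy threshold a*d/c
        sol = solve_ivp(rhs, (0.0, 200.0), [0.01, 0.01], args=(gamma,), rtol=1e-8)
        E_end, V_end = sol.y[:, -1]
        print(f"gamma={gamma}: turbulence E={E_end:.3f}, shear flow V={V_end:.3f}")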

    Diabetes Capabilities for the Healthcare Workforce Identified via a 3-Staged Modified Delphi Technique.

    Consumers access health professionals with varying levels of diabetes-specific knowledge and training, often resulting in conflicting advice. Conflicting health messages lead to consumer disengagement. The study aimed to identify capabilities required by health professionals to deliver diabetes education and care, in order to develop a national consensus capability-based framework to guide their training. A 3-staged modified Delphi technique was used to gain agreement from a purposefully recruited panel of Australian diabetes experts from various disciplines and work settings. The Delphi technique consisted of (Stage I) a semi-structured consultation group and pre-Delphi pilot, (Stage II) a 2-phased online Delphi survey, and (Stage III) a semi-structured focus group and appraisal by health professional regulatory and training organisations. Descriptive statistics and measures of central tendency were calculated to characterise the quantitative data and determine consensus. Content analysis using emergent coding was used for qualitative content. Eighty-four diabetes experts were recruited from the nursing and midwifery (n = 60 [71%]), allied health (n = 17 [20%]), and pharmacy (n = 7 [9%]) disciplines. Participant responses identified 7 health professional practice levels requiring differences in diabetes training and 9 capability areas to support care; 2 to 16 statements attained consensus for each capability, 259 in total. Additionally, workforce solutions were identified to expand capacity for diabetes care. The rigorous consultation process led to the design and validation of a Capability Framework for Diabetes Care that addresses workforce enablers identified by the Australian National Diabetes Strategy. It recognises diversity, creating shared understandings of diabetes across health professional disciplines. The findings will inform diabetes policy, practice, education, and research.
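    The abstract does not state the exact consensus rule used; as an assumption-laden sketch, one common way to summarise a Delphi round is by the median rating and the percentage of panellists scoring at or above an agreement threshold, as below (the 80% cut-off and the ratings are invented).

    # Sketch of a common Delphi consensus summary: median rating and percentage
    # of panellists rating a statement >= an agreement threshold. The threshold,
    # the 80% consensus cut-off and the ratings are illustrative assumptions.
    import statistics

    def summarise(ratings, agree_at=4, consensus_pct=80.0):
        pct_agree = 100.0 * sum(r >= agree_at for r in ratings) / len(ratings)
        return {
            "median": statistics.median(ratings),
            "pct_agree": pct_agree,
            "consensus": pct_agree >= consensus_pct,
        }

    # Hypothetical 5-point Likert ratings for one capability statement.
    print(summarise([5, 4, 4, 3, 5, 4, 4, 5, 2, 4]))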

    Light bullets in quadratic media with normal dispersion at the second harmonic

    Stable two- and three-dimensional spatiotemporal solitons (STSs) in second-harmonic-generating media are found in the case of normal dispersion at the second harmonic (SH). This result, surprising from the theoretical viewpoint, opens a way for experimental realization of STSs. An analytical estimate for the existence of STSs is derived, and full results, including a complete stability diagram, are obtained in a numerical form. STSs withstand not only the normal SH dispersion, but also finite walk-off between the harmonics, and readily self-trap from a Gaussian pulse launched at the fundamental frequency. Comment: 4 pages, 5 figures, accepted to Phys. Rev. Lett.
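    For orientation, a commonly used normalized model for type-I second-harmonic generation with spatiotemporal effects couples the fundamental u and the second harmonic v roughly as follows; the exact normalization, dispersion signs and walk-off term vary between papers and may differ from those used by the authors:

        i u_z + (1/2) \nabla_\perp^2 u - (\delta_1/2) u_{\tau\tau} + u^* v = 0,
        i v_z + (1/4) \nabla_\perp^2 v - (\delta_2/2) v_{\tau\tau} - i c v_\tau - q v + (1/2) u^2 = 0,

    where \nabla_\perp^2 acts on the transverse coordinates, \tau is the retarded time, \delta_1 and \delta_2 are the group-velocity-dispersion coefficients at the fundamental and the SH (normal SH dispersion being the case considered here), c is the group-velocity mismatch (walk-off) and q is the phase mismatch.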

    The Development of a Methodology to Determine the Relationship in Grip Size and Pressure to Racket Head Speed in a Tennis Forehand Stroke

    This study developed a methodology to examine the effects of grip size and grip firmness on the kinematic contribution of angular velocity (KCAV) to the generation of racket head speed during a topspin tennis forehand. The KCAV is subdivided into the kinematic contribution of joint angular velocities and the kinematic contribution of the upper-trunk translational and angular velocities. Two Babolat Pure Storm GT rackets, with grip sizes 2 and 4 respectively, were used with Tekscan 9811E pressure sensors applied to the handles to examine pressure distribution during the stroke. Upper body kinematic data from the racket arm and trunk were obtained by means of a Vicon motion capture system. One elite male tennis player was recruited. Fifty topspin forehand strokes per grip at two nominal grip pressures were performed in a laboratory environment, with balls being tossed towards the player and struck on the bounce towards a target on a net in as consistent a way as practically achievable. Processing of the results showed that the firm grip condition led to a significant (p<0.001) increase in average racket head speed compared to the normal grip condition. The normal gripping condition resulted in a significant (p<0.001) increase in average racket head speed for grip size 2 compared to grip size 4. A trend towards a negative linear relationship was found between the upper-trunk and shoulder-joint contributions to KCAV across conditions. Using the smaller grip also led to a trend towards a negative linear relationship between the shoulder-joint and wrist-joint contributions to KCAV across grip conditions. Grip pressure for grip size 2 showed the same pattern across gripping conditions. From 50-75% of the forward swing, the pressure difference due to grip firmness decreased. This feasibility study quantified the KCAV during a topspin forehand with respect to changes in grip size and grip pressure in an elite male tennis player for the first time.
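    A minimal sketch of the velocity decomposition that a KCAV-style analysis relies on: the linear velocity a joint's angular velocity contributes at the racket head is omega x r, and its share of racket head speed is the component of that vector along the head's velocity direction. The vectors below are invented for illustration, not data from the study.

    # Sketch of the decomposition behind a KCAV-style analysis: each joint's
    # angular velocity omega contributes a linear velocity omega x r at the
    # racket head (r from the joint centre to the head); its contribution to
    # racket head speed is the component along the head velocity direction.
    # All numbers are invented for illustration, not measurements from the study.
    import numpy as np

    def kcav_contribution(omega, r, head_velocity):
        v_joint = np.cross(omega, r)                 # linear velocity from this joint
        direction = head_velocity / np.linalg.norm(head_velocity)
        return float(np.dot(v_joint, direction))     # m/s along the head's path

    head_velocity = np.array([28.0, 5.0, 2.0])       # measured racket head velocity (m/s)
    joints = {
        "shoulder": (np.array([0.0, 0.0, 10.0]), np.array([0.20, -1.00, 0.10])),
        "wrist":    (np.array([0.0, 0.0, 20.0]), np.array([0.05, -0.35, 0.00])),
    }
    for name, (omega, r) in joints.items():
        print(name, round(kcav_contribution(omega, r, head_velocity), 2), "m/s")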

    A general methodology for comparing regional flood estimation models

    The discharge QT with return period T at a given site is usually estimated by fitting a statistical distribution to the annual maximum discharge data recorded at that site. However, estimation at a site with little or no hydrological data must rely on regional methods, which use the information available at sites that are hydrologically similar to the target site. This procedure involves two steps: (a) identification of hydrologically similar sites, and (b) regional estimation. For a given grouping of sites (step a), we propose three methodological approaches for comparing regional estimation methods, which are described in detail in this paper: bootstrap simulation, regression (empirical Bayes) analysis, and hierarchical Bayesian analysis.

    Estimation of design flows with a given return period is a common problem in hydrologic practice. At sites where data have been recorded during a number of years, such an estimation can be accomplished by fitting a statistical distribution to the series of annual maximum floods and then computing the (1-1/T)-quantile of the estimated distribution. However, frequently there are no, or only few, data available at the site of interest, and flood estimation must then be based on regional information. In general, regional flood frequency analysis involves two major steps:
    - determination of a set of gauging stations that are assumed to contain information pertinent to the site of interest, referred to as delineation of homogeneous regions;
    - estimation of the design flood at the target site based on information from the sites of the homogeneous region.

    The merits of regional flood frequency analysis, at ungauged sites as well as at sites where some local information is available, are increasingly being acknowledged, and many research papers have addressed the issue. New methods for delineating regions and for estimating floods based on regional information have been proposed in the last decade, but scientists tend to focus on the development of new techniques rather than on testing existing ones. The aim of this paper is to suggest methodologies for comparing different regional estimation alternatives.

    The concept of homogeneous regions has been employed for a long time in hydrology, but a rigorous definition of it has never been given. Usually, the homogeneity concerns dimensionless statistical characteristics of hydrological variables such as the coefficient of variation (Cv) and the coefficient of skewness (Cs) of annual flood series. A homogeneous region can then be thought of as a collection of stations with flood series whose statistical properties, except for scale, are not significantly different from the regional mean values. Tests based on L-moments are at present widely applied for validating the homogeneity of a given region. Early approaches to regional flood frequency analysis were based on geographical regions, but recent tendencies are to delineate homogeneous regions from the similarity of basins in the space of catchment characteristics related to hydrologic characteristics. Cluster analysis can be used to group similar sites, but has the disadvantage that a site in the vicinity of a cluster border may be closer to sites in other clusters than to those of its own group. Burn (1990a, b) has recently suggested a method where each site has its own homogeneous region (or region of influence), at whose centre of gravity it is located.

    Once a homogeneous region has been delineated, a regional estimation method must be selected. The index flood method, proposed by Dalrymple (1960), and the direct regression method are among the most commonly used procedures. Cunnane (1988) provides an overview of several other methods. The general performance of a regional estimation method depends on the amount of regional information (hydrological as well as physiographical and climatic), and on the size and homogeneity of the region considered relevant to the target site. Being strongly data-dependent, comparisons of regional models are valid on a local scale only. Hence, one cannot expect to reach a general conclusion regarding the relative performance of different models, although some insight may be gained from case studies.

    Here, we present methodologies for comparing regional flood frequency procedures (combinations of homogeneous regions and estimation methods) for ungauged sites. Hydrological, physiographical and climatic data are assumed to be available at a large number of sites, because a comparison of regional models must be based on real data. The premises of these methodologies are that, at each gauged site in the collection of stations considered, one can obtain an unbiased at-site estimate of a given flood quantile, and that the variance of this estimate is known. Regional estimators, obtained by ignoring the hydrological data at the target site, are then compared to the at-site estimate. Three different methodologies are considered in this study:

    A) Bootstrap simulation of hydrologic data. In order to preserve the spatial correlation of hydrologic data (which may have an important impact on regional flood frequency procedures), we suggest performing bootstrap simulation of vectors rather than of scalar values. Each vector corresponds to a year for which data are available at one or more sites in the considered selection of stations; the elements of the vector are the different sites. For a given generated data scenario, an at-site estimate and a regional estimate can be calculated at each site considered. As a performance index for a given regional model, one can use, for example, the average (over sites and bootstrap scenarios) relative deviation of the regional estimator from the at-site estimator.

    B) Regression analysis. The key idea in this methodology is to perform a regression analysis with a regional estimator as the explanatory variable and the unknown quantile, estimated by the at-site method, as the dependent variable. It is reasonable to assume a linear relation between the true quantiles and the regional estimators. The estimated regression coefficients express the systematic error, or bias, of a given regional procedure, and the model error, estimated for instance by the method of moments, is a measure of its variance. It is preferable that the bias and the variance be as small as possible, suggesting that these quantities be used to rank different regional procedures.

    C) Hierarchical Bayes analysis. The regression method employed in (B) can also be regarded as the result of an empirical Bayes analysis in which point estimates of the regression coefficients and the model error are obtained. For several reasons, it may be advantageous to proceed with a complete Bayesian analysis in which bias and model error are considered as uncertain quantities, described by a non-informative prior distribution. Combining the prior distribution and the likelihood function through Bayes' theorem yields the posterior distribution of bias and model error. In order to compare different regional models, one can then calculate, for example, the mean or the mode of this distribution and use these values as performance indices, or one can compute the posterior loss.
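    As a hedged illustration of approach (A), the sketch below resamples whole years (rows) with replacement so that spatial correlation between sites is preserved, and compares a crude regional stand-in against a sample-quantile at-site estimate; the data matrix and both estimators are hypothetical placeholders, not the procedures compared in the paper.

    # Sketch of the vector (year-wise) bootstrap of approach (A): resample whole
    # years so spatial correlation between sites is preserved, then compare a
    # regional estimator (a crude stand-in that ignores the target site's data)
    # with the at-site estimate. Data and estimators are illustrative only.
    import numpy as np

    rng = np.random.default_rng(42)
    n_years, n_sites, T = 40, 6, 10              # T-year flood -> (1 - 1/T) quantile
    q = 1.0 - 1.0 / T
    # Hypothetical annual maximum floods for a homogeneous region:
    # one row per year, one column per site.
    floods = rng.gumbel(loc=100.0, scale=30.0, size=(n_years, n_sites))

    def at_site(x):
        return np.quantile(x, q)                 # simple sample-quantile stand-in

    def regional(data, target):
        # Crude regional stand-in ignoring the target site's own data; a real
        # comparison would plug in an index-flood or regression estimator here.
        others = [s for s in range(data.shape[1]) if s != target]
        return np.mean([at_site(data[:, s]) for s in others])

    deviations = []
    for _ in range(200):                         # bootstrap scenarios
        years = rng.integers(0, n_years, n_years)
        sample = floods[years, :]                # resample rows (years), not scalars
        for s in range(n_sites):
            q_site = at_site(sample[:, s])
            deviations.append(abs(regional(sample, s) - q_site) / q_site)

    print(f"average relative deviation: {np.mean(deviations):.3f}")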

    Experiments on Column Base Stiffness of Long-Span Cold-Formed Steel Portal Frames Composed of Double Channels

    Cold-formed steel haunched portal frames are popular structures in industrial and housing applications. They are mostly used as sheds, garages, and shelters, and are common in rural areas. Cold-formed steel portal frames with spans of up to 30 m (100 ft) are now being constructed in Australia. As these large structures are fairly new to the market, there is limited data on their feasibility and design recommendations. An experimental program was carried out on a series of portal frame systems composed of back-to-back channels for the columns, rafters, and knee braces. The system consisted of three frames connected in parallel with purlins to simulate a free-standing structure, with an approximate span of 14 m (46 ft), column height of 5.3 m (17 ft), and apex height of 7 m (23 ft). Several configurations were tested, including variations in the knee connection, sleeve stiffeners in the columns and rafters, and loading with either vertical or combined horizontal and vertical loads. Deflections were recorded at various locations to measure global and local movements of the structural members, as well as column base reactions and base rotations. It was determined that the column bases are semi-rigid, and further column base connection tests were completed to quantify the connection stiffness for bending about the column major and minor axes, as well as in twist. Results for the column base connection stiffness are presented, together with the implications for frame design.
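    A small sketch of how a rotational base stiffness can be extracted from moment-rotation test data, as the slope of a straight line fitted to the initial, roughly elastic range; the data points below are invented and are not results from the tests described here.

    # Sketch: estimate a rotational column-base stiffness k = dM/dtheta as the
    # slope of a line fitted to the initial part of a moment-rotation curve.
    # The data points are invented for illustration, not test results.
    import numpy as np

    rotation = np.array([0.000, 0.002, 0.004, 0.006, 0.008, 0.010])   # rad
    moment = np.array([0.0, 1.9, 3.8, 5.5, 7.0, 8.2])                 # kN*m

    elastic = rotation <= 0.006                  # keep the roughly linear portion
    k, intercept = np.polyfit(rotation[elastic], moment[elastic], 1)
    print(f"semi-rigid base stiffness ~ {k:.0f} kN*m/rad")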

    Bayesian optimization of the PC algorithm for learning Gaussian Bayesian networks

    The PC algorithm is a popular method for learning the structure of Gaussian Bayesian networks. It carries out statistical tests to determine absent edges in the network. It is hence governed by two parameters: (i) the type of test, and (ii) its significance level. These parameters are usually set to values recommended by an expert. Nevertheless, such an approach can suffer from human bias, leading to suboptimal reconstruction results. In this paper we consider a more principled approach for choosing these parameters in an automatic way. For this we optimize a reconstruction score evaluated on a set of different Gaussian Bayesian networks. This objective is expensive to evaluate and lacks a closed-form expression, which means that Bayesian optimization (BO) is a natural choice. BO methods use a model to guide the search and are hence able to exploit smoothness properties of the objective surface. We show that the parameters found by a BO method outperform those found by a random search strategy and the expert recommendation. Importantly, we have found that an often overlooked statistical test provides the best overall reconstruction results.
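    A sketch of the tuning loop described above, assuming scikit-optimize's gp_minimize as the BO engine; run_pc_and_score is a hypothetical placeholder that should run the PC algorithm with a given test and significance level on a set of benchmark Gaussian Bayesian networks and return a reconstruction error (here it is replaced by a synthetic surface so the example runs end to end).

    # Sketch: Bayesian optimization over the PC algorithm's two parameters, the
    # conditional independence test and its significance level, with
    # scikit-optimize. run_pc_and_score is a hypothetical placeholder; a real
    # objective would run PC on benchmark Gaussian Bayesian networks and return
    # e.g. the average structural Hamming distance to the true graphs.
    import math
    from skopt import gp_minimize
    from skopt.space import Categorical, Real

    def run_pc_and_score(test_name, alpha):
        # Synthetic stand-in objective so the sketch runs end to end.
        bias = {"fisher_z": 0.0, "partial_corr_t": 0.3, "mutual_info": 0.6}[test_name]
        return bias + (math.log10(alpha) + 2.0) ** 2   # minimum near alpha = 0.01

    space = [
        Categorical(["fisher_z", "partial_corr_t", "mutual_info"], name="test"),
        Real(1e-4, 0.2, prior="log-uniform", name="alpha"),
    ]

    result = gp_minimize(lambda p: run_pc_and_score(*p), space, n_calls=25, random_state=0)
    print("best (test, alpha):", result.x, "score:", round(result.fun, 3))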