
    Optimal Probabilistic Forecasts for Counts

    Optimal probabilistic forecasts of integer-valued random variables are derived. The optimality is achieved by estimating the forecast distribution nonparametrically over a given broad model class and proving asymptotic efficiency in that setting. The ideas are demonstrated within the context of the integer autoregressive class of models, which is a suitable class for any count data that can be interpreted as a queue, stock, birth and death process or branching process. The theoretical proofs of asymptotic optimality are supplemented by simulation results which demonstrate the overall superiority of the nonparametric method relative to a misspecified parametric maximum likelihood estimator, in large but finite samples. The method is applied to counts of wage claim benefits, stock market iceberg orders and civilian deaths in Iraq, with bootstrap methods used to quantify sampling variation in the estimated forecast distributions. Keywords: Nonparametric Inference; Asymptotic Efficiency; Count Time Series; INAR Model Class; Bootstrap Distributions; Iceberg Stock Market Orders.
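
    As a companion to this abstract, the following is a minimal, hedged sketch of the INAR(1) data-generating process it refers to, together with a Monte Carlo approximation of the one-step-ahead forecast distribution. The parameter values and the crude moment-based estimates are illustrative assumptions, not the authors' nonparametric estimator.

```python
# Minimal INAR(1) sketch: X_t = alpha ∘ X_{t-1} + eps_t, where ∘ is binomial
# thinning and eps_t is Poisson. The one-step-ahead forecast distribution is
# approximated by Monte Carlo. All parameter values and the moment-based
# estimates below are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
alpha, lam, T = 0.5, 2.0, 500          # assumed thinning probability and innovation mean

# Simulate an INAR(1) path
x = np.zeros(T, dtype=int)
for t in range(1, T):
    x[t] = rng.binomial(x[t - 1], alpha) + rng.poisson(lam)

# Crude moment estimates (Yule-Walker style), standing in for a fitted model
alpha_hat = np.corrcoef(x[:-1], x[1:])[0, 1]
lam_hat = x.mean() * (1 - alpha_hat)

# Monte Carlo approximation of P(X_{T+1} = k | X_T)
draws = rng.binomial(x[-1], alpha_hat, 100_000) + rng.poisson(lam_hat, 100_000)
ks, counts = np.unique(draws, return_counts=True)
for k, p in zip(ks, counts / counts.sum()):
    print(f"P(X_T+1 = {k}) ≈ {p:.3f}")
```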

    A Note on the Parallelogram Method for Computing the On-Level Premium

    This paper discusses the differences appearing in the descriptions of the parallelogram method for the determination of earned premium at current rate levels given by McClenahan (1996) and Brown and Gottlieb (2001). It observes that the former is consistent with the method of extending exposures while the latter is not. An illustration is provided. The paper also discusses two other approaches to the determination of the earned premium.
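
    For readers unfamiliar with the method, the sketch below works one calendar year with a single rate change under the textbook parallelogram geometry (annual policies written uniformly over time). The dates, rate change and function name are illustrative assumptions and do not reproduce either cited description verbatim.

```python
# Illustrative parallelogram-method calculation for a single rate change with
# annual policies written uniformly. For a change effective a fraction `tau`
# into the calendar year, the area of the unit square earned at the NEW rate
# level is (1 - tau)**2 / 2. All numbers are made up.

def on_level_factor(tau, rate_increase):
    """On-level factor for one calendar year with one rate change of size
    `rate_increase` effective a fraction `tau` into that year."""
    area_new = (1 - tau) ** 2 / 2              # triangle earned at the new level
    area_old = 1 - area_new                    # remainder of the unit square
    avg_level = area_old * 1.0 + area_new * (1 + rate_increase)
    current_level = 1 + rate_increase
    return current_level / avg_level

# Example: +5% effective 1 July (tau = 0.5)
# 1.05 / (0.875 * 1.00 + 0.125 * 1.05) ≈ 1.0435
print(round(on_level_factor(0.5, 0.05), 4))
```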

    A study of factors affecting the efficiency of milking operations.

    End of Project Report. Key findings:
    • With a mid-level milking system, the milking time was reduced significantly when the teat-end vacuum was increased.
    • Vacuum losses were lower and milking time was shorter with simultaneous pulsation than with alternate pulsation.
    • Milk yield was not affected by the magnitude of teat-end vacuum.
    • Both the mean flowrate and the peak flowrate increased when the teat-end vacuum was increased.
    • New milking plants and conversions should have 16 mm bore long milk tubes (LMT) and 16 mm bore entries in the milk pipeline.
    • The omission of udder washing as a pre-milking preparation procedure did not influence milking characteristics.
    • TBC and E. coli counts were significantly reduced with full pre-milking preparation compared to no pre-milking preparation when milk was produced from cows on pasture.
    • Counts for individual bacterial species were well below the maximum numbers permitted in EU Council Directive (Anon. 1992) when no pre-milking preparation was carried out.

    The application of parameter sensitivity analysis methods to inverse simulation models

    Knowledge of the sensitivity of inverse solutions to variation of parameters of a model can be very useful in making engineering design decisions. This paper describes how parameter sensitivity analysis can be carried out for inverse simulations generated through approximate transfer function inversion methods and also by the use of feedback principles. Emphasis is placed on the use of sensitivity models and the paper includes examples and a case study involving a model of an underwater vehicle. It is shown that the use of sensitivity models can provide physical understanding of inverse simulation solutions that is not directly available using parameter sensitivity analysis methods that involve parameter perturbations and response differencing.
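
    To make the perturbation-and-response-differencing approach mentioned above concrete, here is a minimal sketch applied to a toy first-order system rather than the paper's underwater-vehicle model; the model, step sizes and parameter values are illustrative assumptions.

```python
# Toy illustration of parameter sensitivity by parameter perturbation and
# response differencing: central differences of a simulated step response of
# y' = -a*y + b*u. The first-order model and all values are illustrative.
import numpy as np

def simulate(a, b, dt=0.01, T=5.0):
    """Forward-Euler step response of y' = -a*y + b*u with u = 1."""
    n = int(T / dt)
    y = np.zeros(n)
    for k in range(1, n):
        y[k] = y[k - 1] + dt * (-a * y[k - 1] + b * 1.0)
    return y

def sensitivity(param, base={"a": 2.0, "b": 1.0}, rel_step=1e-4):
    """Central-difference sensitivity dy/dparam over the whole trajectory."""
    p0 = base[param]
    h = rel_step * p0
    y_hi = simulate(**{**base, param: p0 + h})
    y_lo = simulate(**{**base, param: p0 - h})
    return (y_hi - y_lo) / (2.0 * h)

# Steady-state checks: dy/da -> -b/a**2 = -0.25, dy/db -> 1/a = 0.5
print(sensitivity("a")[-1], sensitivity("b")[-1])
```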

    Mitosene-DNA adducts. Characterization of two major DNA monoadducts formed by 1,10-bis(acetoxy)-7-methoxymitosene upon reductive activation

    Reductive activation of racemic 1,10-bis(acetoxy)-7-methoxymitosene WV15 in the presence of DNA, followed by enzymatic digestion and HPLC analysis, revealed the formation of various DNA adducts. Reduction is a necessary event for adduct formation to occur. This reductive activation was performed under hypoxic conditions in various ways: (1) chemically, using a 2-fold excess of sodium dithionite (Na2S2O4), (2) enzymatically, using NADH-cytochrome c reductase, (3) electrochemically, on a mercury pool working electrode, and (4) catalytically, using a H2/PtO2 system. Five different mitosene-DNA adducts were detected. These adducts were also present when poly(dG-dC) was used instead of DNA, but were absent with poly(dA-dT). All were shown to be adducts of guanine. Reduction of 1,10-dihydroxymitosene WV14 in the presence of DNA did not result in detectable adduct formation, demonstrating the importance of good leaving groups for efficient adduct formation by these mitosenes. Finally, two of the adducts were isolated and their structures elucidated using mass spectrometry, 1H NMR and circular dichroism (CD). The structures were assigned as the diastereoisomers N2-(1″-n-hydroxymitosen-10″-yl)-2′-deoxyguanosine (n = α or β). These types of adducts, in which the mitosene C-10 is covalently bonded to the N-2 of a guanosyl group, are different from the well-known mitomycin C 2′-deoxyguanosine monoadducts, which are linked via the mitomycin C C-1 position, demonstrating that the order of reactivity of the C-1 and C-10 in these mitosenes is reversed as compared to mitomycin C. The 7-methoxy substituent of WV15 is a likely factor causing this switch. Evidence is presented that the 7-substituent of mitosenes also influences their DNA alkylation site. Adducts 4 and 5 represent the first isolated and structurally characterized covalent adducts of DNA and a synthetic mitosene.

    Randomizing a clinical trial in neuro-degenerative disease

    The paper studies randomization rules for a sequential two-treatment, two-site clinical trial in Parkinson’s disease. An important feature is that we have values of responses and five potential prognostic factors from a sample of 144 patients similar to those to be enrolled in the trial. Analysis of this sample provides a model for trial analysis. The comparison of allocation rules is made by simulation yielding measures of loss due to imbalance and of potential bias. A major novelty of the paper is the use of this sample, via a two-stage algorithm, to provide an empirical distribution of covariates for the simulation; sampling of a correlated multivariate normal distribution is followed by transformation to variables following the empirical marginal distributions. Six allocation rules are evaluated. The paper concludes with some comments on general aspects of the evaluation of such rules and provides a recommendation for two allocation rules, one for each site, depending on the target number of patients to be enrolled.
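
    The two-stage covariate-sampling algorithm described above can be illustrated with a small, hedged sketch: draw from a correlated multivariate normal, then map each component onto an empirical marginal distribution. The synthetic "observed" covariates below are stand-ins for the real 144-patient sample, and the variable names are assumptions.

```python
# Hedged sketch of the two-stage covariate-sampling idea in the abstract: draw a
# correlated multivariate normal, then map each component onto the empirical
# marginal of an observed sample. The "observed" covariates here are synthetic;
# the real trial used five prognostic factors from 144 patients.
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for the observed covariate sample (rows = patients)
observed = np.column_stack([
    rng.gamma(2.0, 3.0, 144),      # e.g. disease duration in years (made up)
    rng.normal(65.0, 9.0, 144),    # e.g. age (made up)
    rng.binomial(1, 0.6, 144),     # e.g. a binary prognostic factor (made up)
])

# Stage 1: multivariate normal draws with the sample's correlation structure
corr = np.corrcoef(observed, rowvar=False)
z = rng.multivariate_normal(np.zeros(corr.shape[0]), corr, size=1000)

# Stage 2: rank-transform each simulated column to (0, 1), then map it onto the
# corresponding empirical marginal via its quantile function
u = (z.argsort(axis=0).argsort(axis=0) + 0.5) / len(z)
simulated = np.column_stack([np.quantile(observed[:, j], u[:, j])
                             for j in range(observed.shape[1])])
print(simulated[:3])
```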

    On the distribution of initial masses of stellar clusters inferred from synthesis models

    The fundamental properties of stellar clusters, such as the age or the total initial mass in stars, are often inferred from population synthesis models. The predicted properties are then used to constrain the physical mechanisms involved in the formation of such clusters in a variety of environments. Population synthesis models cannot, however, be applied blindly to such systems. We show that synthesis models cannot be applied in the usual straightforward way to small-mass clusters (say, M < a few times 10^4 Mo). The reason is that the basic hypothesis underlying population synthesis (a fixed proportionality between the number of stars in the different evolutionary phases) is not fulfilled in these clusters due to their small number of stars. This incomplete sampling of the stellar mass function results in a non-Gaussian distribution of the mass-luminosity ratio for clusters that share the same evolutionary conditions (age, metallicity and initial stellar mass distribution function). We review some tests that can be carried out a priori to check whether a given cluster can be analysed with the fully-sampled standard population synthesis models or whether, on the contrary, a probabilistic framework must be used. This leads to a re-assessment of the estimation of the low-mass tail in the distribution function of initial masses of stellar clusters. Comment: 5 pages, 1 figure, to appear in "Young Massive Star Clusters - Initial Conditions and Environments", 2008, Astrophysics & Space Science, eds. E. Perez, R. de Grijs, R. M. Gonzalez Delgado
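
    The incomplete-sampling effect described above can be illustrated with a short Monte Carlo experiment: clusters of different target masses are populated star by star from a power-law IMF, and a crude luminosity proxy is summed. The Salpeter slope, the mass cap and the L ~ M^3.5 proxy are illustrative assumptions, not the synthesis models used in the paper.

```python
# Monte Carlo illustration of incomplete IMF sampling: small clusters drawn
# from the same IMF show a much broader spread in mass-to-light ratio than
# massive ones. The IMF slope and luminosity proxy are illustrative only.
import numpy as np

rng = np.random.default_rng(2)

def sample_salpeter(n, m_lo=0.1, m_hi=100.0, alpha=2.35):
    """Inverse-CDF sampling of a power-law IMF, dN/dm ~ m**(-alpha)."""
    u = rng.random(n)
    a, b = m_lo ** (1 - alpha), m_hi ** (1 - alpha)
    return (a + u * (b - a)) ** (1 / (1 - alpha))

def cluster_luminosity(target_mass):
    """Fill a cluster up to target_mass; sum a crude L ~ min(M, 20)**3.5 proxy."""
    total, lum = 0.0, 0.0
    while total < target_mass:
        m = sample_salpeter(1000)
        total += m.sum()
        lum += np.sum(np.minimum(m, 20.0) ** 3.5)
    return lum

for m_cl in (1e3, 1e5):
    ml = np.array([m_cl / cluster_luminosity(m_cl) for _ in range(200)])
    print(f"M = {m_cl:.0e} Mo: relative spread of M/L ≈ {ml.std() / ml.mean():.2f}")
```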

    Three principles for co-designing sustainability intervention strategies : Experiences from Southern Transylvania

    Transformational research frameworks provide understanding and guidance for fostering change towards sustainability. They comprise stages of system understanding, visioning and co-designing intervention strategies to foster change. Guidance and empirical examples for how to facilitate the process of co-designing intervention strategies in real-world contexts remain scarce, especially with regard to integrating local initiatives. We suggest three principles to facilitate the process of co-designing intervention strategies that integrate local initiatives: (1) Explore existing and envisioned initiatives fostering change towards the desired future; (2) Frame the intervention strategy to bridge the gap between the present state and desired future state(s), building on, strengthening and complementing existing initiatives; (3) Identify drivers, barriers and potential leverage points for how to accelerate progress towards sustainability. We illustrate our approach via a case study on sustainable development in Southern Transylvania. We conclude that our principles were useful in the case study, especially with regard to integrating initiatives, and could also be applied in other real-world contexts. Peer reviewed

    Scale Setting in QCD and the Momentum Flow in Feynman Diagrams

    We present a formalism to evaluate QCD diagrams with a single virtual gluon using a running coupling constant at the vertices. This method, which corresponds to an all-order resummation of certain terms in a perturbative series, provides a description of the momentum flow through the gluon propagator. It can be viewed as a generalization of the scale-setting prescription of Brodsky, Lepage and Mackenzie to all orders in perturbation theory. In particular, the approach can be used to investigate why in some cases the "typical" momenta in a loop diagram are different from the "natural" scale of the process. It offers an intuitive understanding of the appearance of infrared renormalons in perturbation theory and their connection to the rate of convergence of a perturbative series. Moreover, it allows one to separate short- and long-distance contributions by introducing a hard factorization scale. Several applications to one- and two-scale problems are discussed in detail. Comment: eqs. (51) and (83) corrected, minor typographic changes made
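
    As a rough aid to this abstract, the kind of representation it describes can be written schematically as an average of the running coupling over the gluon virtuality. The notation w_D and the normalisation below are illustrative assumptions, not the paper's exact formulas.

```latex
% Schematic only: a single-gluon-exchange quantity D written as the running
% coupling averaged over the gluon virtuality k^2 with a momentum-flow
% distribution w_D(k^2); a BLM-type scale is the single scale reproducing
% this average. Notation and normalisation are assumptions.
\[
  D \;\sim\; \int_0^\infty \frac{\mathrm{d}k^2}{k^2}\, w_D(k^2)\,\alpha_s(k^2),
  \qquad
  \alpha_s\!\left(\mu_{\mathrm{BLM}}^2\right) \;\approx\;
  \frac{\int_0^\infty \frac{\mathrm{d}k^2}{k^2}\, w_D(k^2)\,\alpha_s(k^2)}
       {\int_0^\infty \frac{\mathrm{d}k^2}{k^2}\, w_D(k^2)}
\]
```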

    Use of fibroscan in assessment of hepatic fibrosis in patients with chronic hepatitis B infection

    Introduction: Assessment of the stage of liver fibrosis plays a prominent role in the decision process of treatment in chronic viral hepatitis.
    Objective: To determine the stage of fibrosis in patients with chronic HBV infection using fibroscan.
    Method: This is a cross-sectional descriptive study involving patients with chronic hepatitis B (CHB) with a valid transient elastography (TE) measurement. Liver function tests and platelet counts were determined. APRI and FIB-4 were calculated, and Spearman's rank correlation coefficient was applied to correlate TE with each serum biomarker.
    Results: 190 patients were enrolled, with a mean age of 36.3 years; 64.2% were male and 89.9% were asymptomatic. TE correlated significantly with APRI and FIB-4 (r = 0.58; P < 0.001 and r = 0.42; P < 0.001, respectively). Most patients, 131 (68.9%), had no significant fibrosis (F0, F1), while those with significant fibrosis and cirrhosis were 59 (31.1%) and 23 (12.1%), respectively.
    Conclusion: The prevalence of significant fibrosis and cirrhosis is high in this population.
    Keywords: Fibroscan, Hepatic fibrosis, APRI, FIB-4
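
    The two serum indices correlated with transient elastography in this study have standard published formulas; the sketch below computes them for a single hypothetical patient. The laboratory values are made up and are not study data.

```python
# APRI and FIB-4, the two serum biomarkers named in the abstract, computed from
# their standard published formulas for one hypothetical patient. Example values
# and the chosen upper limit of normal (ULN) for AST are illustrative only.
import math

def apri(ast, ast_uln, platelets_10e9_per_L):
    """AST-to-Platelet Ratio Index: (AST/ULN) / platelets x 100."""
    return (ast / ast_uln) / platelets_10e9_per_L * 100

def fib4(age_years, ast, alt, platelets_10e9_per_L):
    """FIB-4 index: (age x AST) / (platelets x sqrt(ALT))."""
    return (age_years * ast) / (platelets_10e9_per_L * math.sqrt(alt))

# Hypothetical patient
print(round(apri(ast=48, ast_uln=40, platelets_10e9_per_L=210), 2))            # ≈ 0.57
print(round(fib4(age_years=36, ast=48, alt=52, platelets_10e9_per_L=210), 2))  # ≈ 1.14
```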