5,897 research outputs found

    Controlled Optimal Design Program for the Logit Dose Response Model

    The assessment of dose-response is an integral component of the drug development process. Parallel dose-response studies are customarily conducted for this purpose in preclinical work and in phase 1 and 2 clinical trials. Practical constraints on dose range, dose levels, and dose proportions are intrinsic to the design of dose-response studies because of drug toxicity, efficacy, FDA regulations, protocol requirements, clinical trial logistics, and marketing issues. We provide a free on-line software package, Controlled Optimal Design 2.0, for generating controlled optimal designs that can incorporate prior information and multiple objectives while meeting multiple practical constraints at the same time. Researchers can either run the web-based design program or download its stand-alone version to construct the desired multiple-objective controlled Bayesian optimal designs. Because researchers often adopt ad-hoc design schemes such as equal allocation rules without knowing how efficient such designs are for the design problem at hand, the program also evaluates the efficiency of user-supplied designs.
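    As a rough illustration of the kind of efficiency evaluation described above (a minimal Python sketch, not the Controlled Optimal Design 2.0 code itself), the snippet below computes the D-efficiency of an ad-hoc equal-allocation design relative to the locally D-optimal two-point design for a two-parameter logit dose-response model. The prior guesses alpha = -2 and beta = 1 and the five-dose grid are assumptions chosen purely for illustration.

```python
import numpy as np

def logit_info(doses, weights, alpha, beta):
    """Normalized Fisher information matrix for the two-parameter logit model
    P(response | x) = 1 / (1 + exp(-(alpha + beta * x)))."""
    doses, weights = np.asarray(doses, float), np.asarray(weights, float)
    p = 1.0 / (1.0 + np.exp(-(alpha + beta * doses)))
    w = weights * p * (1.0 - p)
    return np.array([[w.sum(), (w * doses).sum()],
                     [(w * doses).sum(), (w * doses**2).sum()]])

def d_efficiency(design, reference, alpha, beta):
    """D-efficiency of `design` relative to `reference`; each is (doses, weights)."""
    m1 = logit_info(*design, alpha, beta)
    m2 = logit_info(*reference, alpha, beta)
    return (np.linalg.det(m1) / np.linalg.det(m2)) ** 0.5  # 2 parameters

# Assumed prior guesses, for illustration only.
alpha, beta = -2.0, 1.0

# Locally D-optimal two-point design: equal weight at the doses where the
# response probability is about 0.176 and 0.824 (linear predictor = +/-1.5434).
x_opt = [(-1.5434 - alpha) / beta, (1.5434 - alpha) / beta]
optimal = (x_opt, [0.5, 0.5])

# An ad-hoc equal-allocation design over five evenly spaced doses.
equal = ([0.0, 1.0, 2.0, 3.0, 4.0], [0.2] * 5)

print(f"D-efficiency of equal allocation: {d_efficiency(equal, optimal, alpha, beta):.2f}")
```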

    Aerospace Medicine and Biology: A continuing bibliography with indexes, supplement 144

    This bibliography lists 257 reports, articles, and other documents introduced into the NASA scientific and technical information system in July 1975.

    Characterising bias in regulatory risk and decision analysis: An analysis of heuristics applied in health technology appraisal, chemicals regulation, and climate change governance

    In many environmental and public health domains, heuristic methods of risk and decision analysis must be relied upon, either because problem structures are ambiguous, reliable data are lacking, or decisions are urgent. This introduces an additional source of uncertainty beyond model and measurement error: uncertainty stemming from relying on inexact inference rules. Here we identify and analyse heuristics used to prioritise risk objects, to discriminate between signal and noise, to weight evidence, to construct models, to extrapolate beyond datasets, and to make policy. Some of these heuristics are based on causal generalisations, yet can misfire when these relationships are presumed rather than tested (e.g. surrogates in clinical trials). Others are conventions designed to confer stability on decision analysis, yet may introduce serious error when applied ritualistically (e.g. significance testing). Some heuristics can be traced back to formal justifications, but only subject to strong assumptions that are often violated in practical applications. Heuristic decision rules (e.g. feasibility rules) in principle act as surrogates for utility maximisation or distributional concerns, yet in practice may neglect costs and benefits, rest on arbitrary thresholds, and be prone to gaming. We highlight the problem of rule entrenchment, where analytical choices that are in principle contestable are arbitrarily fixed in practice, masking uncertainty and potentially introducing bias. Strategies for making risk and decision analysis more rigorous include: formalising the assumptions and scope conditions under which heuristics should be applied; testing rather than presuming their underlying empirical or theoretical justifications; using sensitivity analysis, simulations, multiple bias analysis, and deductive systems of inference (e.g. directed acyclic graphs) to characterise rule uncertainty and refine heuristics; adopting "recovery schemes" to correct for known biases; and basing decision rules on clearly articulated values and evidence rather than convention.
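    As a small illustration of how a convention can misfire when applied ritualistically (a hedged sketch, not taken from the paper), the simulation below shows that under low statistical power a fixed p < 0.05 rule selects effect estimates that substantially overstate the true effect. The true effect of 0.2 SD and the group size of 20 are assumed values.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Hypothetical low-powered study: true effect of 0.2 SD, 20 units per group.
true_effect, n, n_sims, alpha = 0.2, 20, 20_000, 0.05

significant_effects = []
for _ in range(n_sims):
    a = rng.normal(0.0, 1.0, n)
    b = rng.normal(true_effect, 1.0, n)
    t, p = stats.ttest_ind(b, a)
    if p < alpha:
        significant_effects.append(b.mean() - a.mean())

# With power around 10%, the estimates that clear the 0.05 threshold
# overstate the true effect substantially (a "winner's curse").
print(f"Power at the 0.05 threshold: {len(significant_effects) / n_sims:.2f}")
print(f"True effect:                 {true_effect:.2f}")
print(f"Mean 'significant' effect:   {np.mean(significant_effects):.2f}")
```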

    Applications of Statistical Experimental Designs to Improve Statistical Inference in Weed Management

    In a balanced design, researchers allocate the same number of experimental units to each treatment group. Balanced allocation has been treated as a rule of thumb by some researchers in agriculture, yet an unbalanced design sometimes outperforms a balanced one. Given a specific parameter of interest, researchers can design an experiment that distributes experimental units unevenly to increase the statistical information about that parameter. A further way of improving an experiment is an adaptive design (e.g., spending the total sample size in multiple steps). Some knowledge about the parameter of interest helps in designing an experiment, so in the initial phase a researcher may spend a portion of the total sample size to learn about the parameter, and in the later phase the remaining portion can be allocated to gain more information about it. Though such ideas have long existed in the statistical literature, they have not been applied broadly in agricultural studies. In this article, we used simulations to demonstrate the superiority of objective-specific experimental designs over balanced designs in three practical situations: comparing two groups, studying a dose-response relationship with right-censored data, and studying a synergistic effect of two treatments. The simulations showed that an objective-specific design provides smaller error in parameter estimation and higher statistical power in hypothesis testing than a balanced design. We also conducted an adaptive experimental design applied to a dose-response study with right-censored data to quantify the effect of ethanol on weed control, and retrospective simulations supported the benefit of this adaptive design as well. All researchers face different practical situations, and appropriate experimental designs help utilize available resources efficiently.
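    A minimal sketch of the two-group situation described above (illustrative only; the standard deviations, sample sizes, and number of simulations are assumptions, not values from the article): when one group is much noisier than the other, Neyman allocation, which assigns sample sizes in proportion to the group standard deviations, estimates the mean difference with a smaller standard error than the balanced 30/30 split.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical two-group comparison: group B is far more variable than group A.
sigma_a, sigma_b = 1.0, 4.0
n_total, n_sims = 60, 20_000

def se_of_difference(n_a, n_b):
    """Monte Carlo standard error of the estimated mean difference."""
    diffs = []
    for _ in range(n_sims):
        a = rng.normal(0.0, sigma_a, n_a)
        b = rng.normal(1.0, sigma_b, n_b)
        diffs.append(b.mean() - a.mean())
    return np.std(diffs)

# Balanced allocation: 30/30.
se_balanced = se_of_difference(30, 30)

# Neyman allocation: sample sizes proportional to the group standard deviations.
n_a = round(n_total * sigma_a / (sigma_a + sigma_b))  # 12
n_b = n_total - n_a                                   # 48
se_neyman = se_of_difference(n_a, n_b)

print(f"SE of mean difference, balanced (30/30):     {se_balanced:.3f}")
print(f"SE of mean difference, Neyman  ({n_a}/{n_b}): {se_neyman:.3f}")
```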

    Efficient Parameter Estimation in Preclinical Animal Pharmacokinetic Studies

    An estimation of the average value of pharmacokinetic parameters in a group of animals provides limited information if there is no good measure of the variability of each of the parameters. The traditional approach to analysing animal pharmacokinetic data from studies involving small laboratory animals (rats or mice), in which each animal supplies only one concentration-time point, does not provide this, nor can it assess the influence of physiology (or pathology) on pharmacokinetics. The consideration of variability within the same species during interspecies scaling has been advocated (Vocci & Farber, 1988). Thus, provision should be made for the estimation of the variability inherent in an animal population when analysing data obtained by "destructive sampling". The NONMEM approach does, however, provide estimates of both the average values of pharmacokinetic parameters and their statistical distribution within the population. In this thesis, data were generated by simulation (assuming no covariance) and analysed using the NONMEM program; the efficiency of this approach is the focus of the thesis.

    Experimental error, the number of samples taken, and the arrangement of samples in time are factors which must be taken into account in designing experiments for efficient parameter estimation. In addition, appropriate methods of data analysis must be used to extract the required information from the data. Simulated data sets were used to investigate the effect of various design features on the efficiency of parameter estimation using the one-observation-per-animal design, and the efficiency with which parameters could be estimated given a range of parameter values and variability was also investigated. Several methods were used to determine the efficiency of parameter estimation. Prediction error (bias and precision) was useful in assessing the efficiency with which individual parameters were estimated. In addition, the 99% individual and joint confidence intervals containing the true parameter 95% of the time were introduced as aids to judging the efficiency of estimation of individual parameters and of all parameters of a model considered as a set. Confidence interval tables were constructed to reveal the influence of bias and standard error on parameter estimation. The design number, a new statistic which combines the contributions of bias and precision, was introduced to complement the bias and precision and confidence interval methods of analysis; it also allowed the efficiency with which all parameters of a model were estimated as a set to be judged. The incidence of high pairwise correlations of parameter estimates was also taken into account in assessing the acceptability of estimates and the adequacy of model parameterization.

    Assuming IV bolus injection with the monoexponential pharmacokinetic model, simulation studies were carried out to investigate the influence of interanimal variability on the estimation of population pharmacokinetic parameters and their variances. The range of variability investigated was similar to that expected in real studies, and sampling was done at set times. The efficiency of estimation of the structural model parameters (Cl and V) was good, on average, irrespective of the variability in Cl and V. However, the estimation of these parameters was associated with negative bias, attributed to the nature of the NONMEM program (i.e. estimation error, since negative bias was also observed in subsequent studies in which dE was set to 0%). The variance parameters were mostly inefficiently estimated in this study and in all other studies using the one-observation-per-animal design, which was attributable to the lack of information in the data set about dE. When the effect of the arrangement of concentrations in time on parameter estimation was studied with the two-sample-point design, efficient parameter estimates were obtained when the first sample was taken as early as possible (5 min) and the second sample was located at more than 1.4 times the simulated t1/2 (84 min) of the drug. When three or four sample points were used, the exact location of the third or fourth sample was not critical to efficient parameter estimation.

    The efficiency of parameter estimation was also investigated given a range of parameter values, concentration measurement errors, and sampling schedules with the two-compartment model parameterized as A, α, B, β, assuming IV bolus injection with animals sampled at set times. The parameters, considered as a set, were efficiently estimated when α was in the range of 2.0 to 4.0 h⁻¹ and the A:B ratio was in the range of 2.5 to 30.0. These results were attributed to the distribution of data points between the distribution and elimination phases of the plasma concentration-time profile. Concentration measurement error greater than 10% yielded variance parameter estimates with a greater degree of bias and imprecision.
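    The sketch below illustrates the general setup of such a simulation study rather than the thesis's NONMEM analysis: concentrations are simulated under a one-compartment IV bolus model with between-animal variability, each animal contributes a single sample (destructive sampling), and the bias and precision of the parameter estimates are summarised over replicates. All numerical values, and the naive pooled curve fit used here in place of NONMEM's mixed-effects estimation, are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

# Hypothetical "true" population parameters for a one-compartment IV bolus model:
# C(t) = (Dose / V) * exp(-(Cl / V) * t)
DOSE, CL_POP, V_POP = 10.0, 0.5, 1.0   # dose (mg), clearance (L/h), volume (L)
OMEGA = 0.3                            # ~30% between-animal variability (lognormal)
SIGMA = 0.10                           # 10% proportional measurement error

def conc(t, cl, v):
    return (DOSE / v) * np.exp(-(cl / v) * t)

def simulate_study(sample_times, n_per_time):
    """Destructive sampling: each animal contributes a single concentration."""
    t_obs, c_obs = [], []
    for t in sample_times:
        for _ in range(n_per_time):
            cl_i = CL_POP * np.exp(rng.normal(0, OMEGA))
            v_i = V_POP * np.exp(rng.normal(0, OMEGA))
            c_obs.append(conc(t, cl_i, v_i) * (1 + rng.normal(0, SIGMA)))
            t_obs.append(t)
    return np.array(t_obs), np.array(c_obs)

# Naive pooled fit over replicate studies (illustrative only; NONMEM would fit
# a nonlinear mixed-effects model and also estimate the variance parameters).
estimates = []
for _ in range(500):
    t_obs, c_obs = simulate_study(sample_times=[0.25, 1.0, 3.0], n_per_time=4)
    popt, _ = curve_fit(conc, t_obs, c_obs, p0=[0.4, 0.8], maxfev=5000)
    estimates.append(popt)

estimates = np.array(estimates)
bias = estimates.mean(axis=0) - [CL_POP, V_POP]               # mean error
rmse = np.sqrt(((estimates - [CL_POP, V_POP]) ** 2).mean(0))  # precision
print("bias (Cl, V):", np.round(bias, 3))
print("RMSE (Cl, V):", np.round(rmse, 3))
```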

    Karl E. Peace papers

    This collection consists of the personal and research papers of Karl E. Peace, Professor of Biostatistics at Georgia Southern University and namesake of the Karl E. Peace Center for Biostatistics and Survey Research. Materials span 1941 to 2018 and include correspondence, teaching materials, published articles, and manuscripts. A small set of three photographs and artists' renderings is also included. This collection is still undergoing processing. Find this collection in the University Libraries' catalog. https://digitalcommons.georgiasouthern.edu/finding-aids/1100/thumbnail.jp