
    Parameter identifiability of fundamental pharmacodynamic models

    Get PDF
    Issues of parameter identifiability of routinely used pharmacodynamic models are considered in this paper. The structural identifiability of 16 commonly applied pharmacodynamic model structures was analyzed analytically, using the input-output approach. Both fixed-effects versions (non-population, no between-subject variability) and mixed-effects versions (population, including between-subject variability) of each model structure were analyzed. All models were found to be structurally globally identifiable, provided that either one of two particular parameters is fixed. Furthermore, an example was constructed to illustrate the importance of sufficient data quality and to show that structural identifiability is a prerequisite for, but not a guarantee of, successful parameter estimation and practical parameter identifiability. This analysis was performed by generating artificial data of varying quality from a structurally identifiable model with known true parameter values, followed by re-estimation of the parameter values. In addition, to show the benefit of including structural identifiability as part of model development, a case study was performed applying an unidentifiable model to real experimental data. This case study shows how performing such an analysis prior to parameter estimation can improve the parameter estimation process and model performance. Finally, an unidentifiable model was fitted to simulated data using multiple initial parameter values, resulting in highly different estimated uncertainties. This example shows that although the standard errors of the parameter estimates often indicate a structural identifiability issue, reasonably “good” standard errors may sometimes mask unidentifiability issues.
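    The practical-identifiability exercise described above can be sketched in a few lines: simulate data of decreasing quality from a model with known true parameters and re-estimate them. The sketch below is illustrative only, assuming a hypothetical Emax model, made-up parameter values and scipy's curve_fit as the estimator; none of these are taken from the paper.

```python
# Minimal sketch (not the paper's code): simulate noisy data from an assumed Emax
# pharmacodynamic model with known "true" parameters, then re-estimate them to see
# how data quality affects practical identifiability.
import numpy as np
from scipy.optimize import curve_fit

def emax_model(conc, e0, emax, ec50):
    """Ordinary Emax model: effect as a function of drug concentration."""
    return e0 + emax * conc / (ec50 + conc)

rng = np.random.default_rng(1)
true_params = (2.0, 10.0, 5.0)        # assumed true E0, Emax, EC50
conc = np.linspace(0.1, 50, 12)       # assumed sampled concentrations

for noise_sd in (0.1, 1.0, 3.0):      # increasing noise = worsening data quality
    effect = emax_model(conc, *true_params) + rng.normal(0, noise_sd, conc.size)
    est, cov = curve_fit(emax_model, conc, effect, p0=(1.0, 5.0, 1.0))
    se = np.sqrt(np.diag(cov))        # asymptotic standard errors
    print(f"noise={noise_sd}: estimates={est.round(2)}, SE={se.round(2)}")
```

    As the noise level grows, the estimates drift from the true values and the standard errors inflate, which is the practical-identifiability degradation the abstract describes.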

    Comparison of T-cell Receptor Diversity of people with Myalgic Encephalomyelitis versus controls

    Get PDF
    Objective: Myalgic Encephalomyelitis (ME; sometimes referred to as Chronic Fatigue Syndrome) is a chronic disease without a laboratory test, detailed aetiological understanding or effective therapy. Its symptoms are diverse, but it is distinguished from other fatiguing illnesses by the experience of post-exertional malaise, the worsening of symptoms even after minor physical or mental exertion. Its frequent onset after infection suggests autoimmune involvement or that it arises from abnormal T-cell activation. Results: To test this hypothesis, we sequenced genomic T-cell receptor (TCR) loci from 40 human blood samples from each of four groups: severely affected people with ME; mildly or moderately affected people with ME; people diagnosed with Multiple Sclerosis, as disease controls; and healthy controls. Seeking to automatically classify these individuals’ samples by their TCR repertoires, we applied P-SVM, a machine learning method. However, despite working well on a simulated data set, this approach did not allow statistically significant partitioning of samples into the four subgroups. Our findings do not support the hypothesis that blood samples from people with ME frequently contain altered T-cell receptor diversity.
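    As a rough illustration of the classification step, the sketch below substitutes scikit-learn's standard SVC for the P-SVM used in the study, and the repertoire features and labels are made up; near-chance cross-validated accuracy would correspond to the null result reported above.

```python
# Illustrative sketch, not the study's pipeline: classify samples into four groups
# from per-sample TCR-repertoire features using a plain linear SVM.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Hypothetical repertoire features (e.g. diversity summaries or clonotype
# frequencies): 160 samples = 4 groups of 40, 50 features each.
X = rng.normal(size=(160, 50))
y = np.repeat([0, 1, 2, 3], 40)   # severe ME, mild/moderate ME, MS, healthy

scores = cross_val_score(SVC(kernel="linear"), X, y, cv=5)
print("cross-validated accuracy:", scores.mean())
# Accuracy near 0.25 (chance for four balanced groups) would indicate that the
# features do not separate the groups.
```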

    Characterizing temporary hydrological regimes at a European scale

    Get PDF
    Monthly duration curves have been constructed from climate data across Europe to help estimate the relative frequency of ecologically critical low-flow stages in temporary rivers, when flow persists only in disconnected pools in the river bed. The hydrological model is based on a partitioning of precipitation to estimate water available for evapotranspiration and plant growth and for residual runoff. The duration curve for monthly flows has then been analysed to give an estimate of bankfull flow based on recurrence interval. The corresponding frequency for pools is then based on the ratio of bankfull discharge to pool flow, arguing from observed ratios of cross-sectional areas at flood and low flows to estimate pool flow as 0.1% of bankfull flow, and so estimate the frequency of the pool conditions that constrain survival of river-dwelling arthropods and fish. The methodology has been applied across Europe at 15 km resolution, and can equally be applied under future climatic scenarios.
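    A minimal sketch of the pool-frequency calculation, under stated assumptions (synthetic monthly runoff, and bankfull taken simply as the flow exceeded in about 2% of months rather than the paper's recurrence-interval estimate), with the 0.1% pool-to-bankfull ratio taken from the abstract:

```python
# Illustrative sketch, not the authors' model: build a monthly flow duration curve,
# pick a bankfull flow, set pool flow at 0.1% of bankfull, and report how often
# flow falls to or below the pool threshold.
import numpy as np

rng = np.random.default_rng(42)
monthly_flow = rng.lognormal(mean=1.0, sigma=1.2, size=1200)   # hypothetical 100-year monthly runoff series

# Flow duration curve: flows in descending order vs. exceedance probability.
flows_desc = np.sort(monthly_flow)[::-1]
exceedance = np.arange(1, flows_desc.size + 1) / flows_desc.size

# Assumption: "bankfull" approximated as the flow exceeded in ~2% of months.
bankfull = flows_desc[np.searchsorted(exceedance, 0.02)]
pool_flow = 0.001 * bankfull            # pool flow = 0.1% of bankfull, as in the abstract

pool_frequency = np.mean(monthly_flow <= pool_flow)
print(f"fraction of months at or below pool conditions: {pool_frequency:.2%}")
```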

    Systematic comparison of ranking aggregation methods for gene lists in experimental results

    Get PDF
    MOTIVATION: A common experimental output in biomedical science is a list of genes implicated in a given biological process or disease. The gene lists resulting from a group of studies answering the same, or similar, questions can be combined by ranking aggregation methods to find a consensus or a more reliable answer. Evaluating a ranking aggregation method on a specific type of data before using it is necessary to ensure reliability, since the properties of a dataset can influence the performance of an algorithm. Such evaluation on gene lists is usually based on a simulated database because of the lack of a known truth for real data. However, simulated datasets tend to be too small compared to experimental data and neglect key features, including heterogeneity of quality, relevance and the inclusion of unranked lists. RESULTS: In this study, a group of existing methods and their variations that are suitable for meta-analysis of gene lists are compared using simulated and real data. Simulated data were used to explore the performance of the aggregation methods under common scenarios emulating real genomic data, with varying heterogeneity of quality, noise levels and a mix of unranked and ranked data, using 20 000 possible entities. In addition to the evaluation with simulated data, a comparison using real genomic data on the SARS-CoV-2 virus, cancer (non-small cell lung cancer) and bacteria (macrophage apoptosis) was performed. We summarize the results of our evaluation in a simple flowchart to select a ranking aggregation method, and in an automated implementation using the meta-analysis by information content algorithm to infer heterogeneity of data quality across input datasets. AVAILABILITY AND IMPLEMENTATION: The code for simulated data generation and for running edited versions of the algorithms is available at https://github.com/baillielab/comparison_of_RA_methods. Code to perform an optimal selection of methods based on the results of this review, using the MAIC algorithm to infer the characteristics of an input dataset, can be downloaded from https://github.com/baillielab/maic. An online service for running MAIC is available at https://baillielab.net/maic. SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.
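    As a concrete example of the kind of method being compared, the following sketch aggregates a few made-up gene lists with a simple Borda count; it is illustrative only and is not the code released by the authors.

```python
# Minimal Borda-count sketch of ranking aggregation for gene lists: each gene scores
# points according to how early it appears in each list, and scores are summed to
# form a consensus ranking. List contents are invented for illustration.
from collections import defaultdict

ranked_lists = [
    ["TP53", "EGFR", "KRAS", "BRCA1"],
    ["EGFR", "TP53", "MYC"],
    ["KRAS", "TP53", "EGFR", "STAT1"],
]

scores = defaultdict(float)
for genes in ranked_lists:
    n = len(genes)
    for rank, gene in enumerate(genes):
        scores[gene] += n - rank          # earlier (better) ranks earn more points

consensus = sorted(scores, key=scores.get, reverse=True)
print(consensus)                          # consensus gene ranking, best first
```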

    Characterization, selection and micro-assembly of nanowire laser systems

    Get PDF
    Semiconductor nanowire (NW) lasers are a promising technology for the realization of coherent optical sources with ultrasmall footprint. To fully realize their potential in on-chip photonic systems, scalable methods are required for dealing with large populations of inhomogeneous devices that are typically randomly distributed on host substrates. In this work, two complementary high-throughput techniques are combined: the characterization of nanowire laser populations using automated optical microscopy, and a high-accuracy transfer-printing process with automatic device spatial registration and transfer. Here, a population of NW lasers is characterized, binned by threshold energy density, and subsequently printed in arrays onto a secondary substrate. Statistical analysis of the transferred and control devices shows that the transfer process does not incur measurable laser damage, and that the threshold binning is maintained. Analysis of the threshold and mode spectra of the device populations demonstrates the potential for using NW lasers for integrated systems fabrication.
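    The characterize-bin-select part of this workflow can be illustrated with a short sketch using hypothetical threshold measurements (distribution, units and cut-offs are assumptions, not values from the paper):

```python
# Illustrative sketch: bin a measured nanowire-laser population by lasing threshold
# energy density and keep the lowest-threshold quartile for transfer printing.
import numpy as np

rng = np.random.default_rng(7)
thresholds = rng.lognormal(mean=0.0, sigma=0.4, size=500)   # hypothetical thresholds (arbitrary units)

quartile_edges = np.quantile(thresholds, [0.25, 0.5, 0.75])
bin_index = np.digitize(thresholds, quartile_edges)         # 0 = lowest-threshold quartile
selected = np.flatnonzero(bin_index == 0)                   # devices chosen for printing
print(f"{selected.size} of {thresholds.size} devices selected for transfer printing")
```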

    The WiggleZ Dark Energy Survey: probing the epoch of radiation domination using large-scale structure

    Get PDF
    We place the most robust constraint to date on the scale of the turnover in the cosmological matter power spectrum using data from the WiggleZ Dark Energy Survey. We find this feature to lie at a scale of k_0 = 0.0160 (+0.0035, −0.0041) h Mpc^−1 (68 per cent confidence) for an effective redshift of z_eff = 0.62 and obtain from this the first ever turnover-derived distance and cosmology constraints: a measure of the cosmic distance-redshift relation in units of the horizon scale at the redshift of radiation-matter equality (r_H) of D_V(z_eff = 0.62)/r_H = 18.3 (+6.3, −3.3) and, assuming a prior on the number of extra relativistic degrees of freedom N_eff = 3, constraints on the cosmological matter density parameter Ω_M h^2 = 0.136 (+0.026, −0.052) and on the redshift of matter-radiation equality z_eq = 3274 (+631, −1260). We stress that these results are obtained within the theoretical framework of Gaussian primordial fluctuations and linear large-scale bias. With this caveat, all results are in excellent agreement with the predictions of standard ΛCDM models. Our constraints on the logarithmic slope of the power spectrum on scales larger than the turnover are bounded in the lower limit with values only as low as −1 allowed, with the prediction of P(k) ∝ k from standard ΛCDM models easily accommodated by our results. Finally, we generate forecasts to estimate the achievable precision of future surveys at constraining k_0, Ω_M h^2, z_eq and N_eff. We find that the Baryon Oscillation Spectroscopic Survey should substantially improve upon the WiggleZ turnover constraint, reaching a precision on k_0 of ±9 per cent (68 per cent confidence), translating to precisions on Ω_M h^2 and z_eq of ±10 per cent (assuming a prior N_eff = 3) and on N_eff of (+78, −56) per cent (assuming a prior Ω_M h^2 = 0.135). This represents sufficient precision to sharpen the constraints on N_eff from WMAP, particularly in its upper limit. For Euclid, we find corresponding attainable precisions on (k_0, Ω_M h^2, N_eff) of (3, 4, +17/−21) per cent. This represents a precision approaching our forecasts for the Planck Surveyor. © 2013 The Authors. Published by Oxford University Press on behalf of the Royal Astronomical Society.

    The WiggleZ Dark Energy Survey: final data release and cosmological results

    Get PDF
    This paper presents cosmological results from the final data release of the WiggleZ Dark Energy Survey. We perform full analyses of different cosmological models using the WiggleZ power spectra measured at z = 0.22, 0.41, 0.60, and 0.78, combined with other cosmological data sets. The limiting factor in this analysis is the theoretical modeling of the galaxy power spectrum, including nonlinearities, galaxy bias, and redshift-space distortions. In this paper we assess several different methods for modeling the theoretical power spectrum, testing them against the Gigaparsec WiggleZ simulations (GiggleZ). We fit for a base set of six cosmological parameters, {Ω_b h^2, Ω_CDM h^2, H_0, τ, A_s, n_s}, and five supplementary parameters {n_run, r, w, Ω_k, Σ m_ν}. In combination with the cosmic microwave background, our results are consistent with the ΛCDM concordance cosmology, with a measurement of the matter density of Ω_m = 0.29 ± 0.016 and amplitude of fluctuations σ_8 = 0.825 ± 0.017. Using WiggleZ data with cosmic microwave background and other distance and matter power spectra data, we find no evidence for any of the extension parameters being inconsistent with their ΛCDM model values. The power spectra data and theoretical modeling tools are available for use as a module for CosmoMC, which we here make publicly available at http://smp.uq.edu.au/wigglez-data. We also release the data and random catalogs used to construct the baryon acoustic oscillation correlation function.

    Can we test Dark Energy with Running Fundamental Constants ?

    Full text link
    We investigate a link between the running of the fine structure constant α and a time-evolving scalar dark energy field. Employing a versatile parameterization for the equation of state, we exhaustively cover the space of dark energy models. Under the assumption that the change in α is to first order given by the evolution of the Quintessence field, we show that current Oklo, Quasi Stellar Object and Equivalence Principle observations restrict the model parameters considerably more strongly than observations of the Cosmic Microwave Background, Large Scale Structure and Supernovae Ia combined. Comment: 6 pages, 5 figures, final version to appear in JCA