
    Spectrophotometry for cerebrospinal fluid pigment analysis

    The use of spectrophotometry for the analysis of cerebrospinal fluid (CSF) is reviewed. The clinically relevant CSF pigments--oxyhemoglobin and bilirubin--are introduced and discussed with regard to clinical differential diagnosis and potentially confounding variables (the four T's: traumatic tap, timing, total protein, and total bilirubin). The practical laboratory aspects of spectrophotometry and automated techniques are presented in the context of analytical and clinical specificity and sensitivity. The perceptual limitations of human color vision are highlighted, and the use of visual assessment of the CSF is discouraged in light of recent evidence from a national audit in the United Kingdom. Finally, future perspectives, including the need for longitudinal CSF profiling and routine spectrophotometric calibration, are outlined.

    Character analysis of oral activity: contact profiling

    The article presents the results of our observations on the syntactic, semantic, and plot peculiarities of oral language activity; we consider these parameters justified as identification criteria for discovering characterological differences between Ukrainian-speaking and Russian-speaking subjects of contact profiling. It describes the connection between mechanisms of psychological defense, as structural components of character, and agentive versus non-agentive speech constructions and internal versus external predicates. The plots of oral narratives inherent to representatives of different character types are localized and described.

    Joining Forces of Bayesian and Frequentist Methodology: A Study for Inference in the Presence of Non-Identifiability

    Increasingly complex applications involve large datasets in combination with non-linear, high-dimensional mathematical models. In this context, statistical inference is a challenging issue that calls for pragmatic approaches taking advantage of both Bayesian and frequentist methods. The elegance of Bayesian methodology is founded in the propagation of the information content of experimental data and prior assumptions to the posterior probability distribution of model predictions. However, for complex applications, experimental data and prior assumptions may constrain the posterior probability distribution insufficiently. In these situations, Bayesian Markov chain Monte Carlo sampling can be infeasible. From a frequentist point of view, insufficient experimental data and prior assumptions can be interpreted as non-identifiability. The profile likelihood approach can detect non-identifiability and resolve it iteratively through experimental design, allowing the posterior probability distribution to be constrained until Markov chain Monte Carlo sampling can be used reliably. Using an application from cell biology, we compare both methods and show that a successive application of both facilitates a realistic assessment of uncertainty in model predictions. Comment: Article to appear in Phil. Trans. Roy. Soc.
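
    The profile likelihood idea described in the abstract can be sketched on a deliberately non-identifiable toy model. Everything below (the model y = a*b*x, the noise level, the optimizer bounds) is an illustrative assumption, not the authors' implementation:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy model y = a * b * x + noise: only the product a*b is identifiable,
# so the profile likelihood of `a` alone should be essentially flat.
rng = np.random.default_rng(0)
x = np.linspace(1, 10, 50)
y = 2.0 * 3.0 * x + rng.normal(0, 0.5, size=x.size)

def nll(a, b):
    # Negative log-likelihood up to an additive constant (Gaussian errors).
    return 0.5 * np.sum((y - a * b * x) ** 2)

def profile(a):
    # Profile out b: minimize the NLL over b for each fixed value of a.
    res = minimize_scalar(lambda b: nll(a, b), bounds=(0.01, 100.0),
                          method="bounded")
    return res.fun

grid = np.linspace(0.5, 10.0, 40)
prof = np.array([profile(a) for a in grid])
# A flat profile (negligible spread) flags `a` as non-identifiable; a
# clear parabola-shaped minimum would indicate practical identifiability.
print(prof.max() - prof.min())
```

    In practice the profile is compared against a chi-squared threshold; a flat curve that never crosses it is the signature of structural non-identifiability.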

    Genomic and proteomic profiling for cancer diagnosis in dogs

    Global gene expression, whereby tumours are classified according to similar gene expression patterns or 'signatures' regardless of cell morphology or tissue characteristics, is being increasingly used in both the human and veterinary fields to assist in cancer diagnosis and prognosis. Many studies on canine tumours have focussed on RNA expression using techniques such as microarrays or next generation sequencing. However, proteomic studies combining two-dimensional polyacrylamide gel electrophoresis or two-dimensional differential gel electrophoresis with mass spectrometry have also provided a wealth of data on gene expression in tumour tissues. In addition, proteomics has been instrumental in the search for tumour biomarkers in blood and other body fluids.

    Diverse correlation structures in gene expression data and their utility in improving statistical inference

    It is well known that correlations in microarray data represent a serious nuisance deteriorating the performance of gene selection procedures. This paper is intended to demonstrate that the correlation structure of microarray data provides a rich source of useful information. We discuss distinct correlation substructures revealed in microarray gene expression data by an appropriate ordering of genes. These substructures include stochastic proportionality of expression signals in a large percentage of all gene pairs, negative correlations hidden in ordered gene triples, and a long sequence of weakly dependent random variables associated with ordered pairs of genes. The reported striking regularities are of general biological interest, and they also have far-reaching implications for the theory and practice of statistical methods of microarray data analysis. We illustrate the latter point with a method for testing differential expression of nonoverlapping gene pairs. While designed for testing a different null hypothesis, this method provides an order of magnitude more accurate control of the type I error rate compared to conventional methods of individual gene expression profiling. In addition, this method is robust to technical noise. Quantitative inference of the correlation structure has the potential to extend the analysis of microarray data far beyond currently practiced methods. Comment: Published at http://dx.doi.org/10.1214/07-AOAS120 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org).
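
    As a rough illustration of why pair-level testing can outperform gene-by-gene testing when signals are strongly correlated, the following sketch (synthetic data and names, not the authors' procedure) compares two groups on the difference of a correlated gene pair, which cancels the shared noise component:

```python
import numpy as np
from scipy import stats

# Synthetic sketch (not the paper's method): two genes share a strong
# common noise component; testing their difference cancels that component.
rng = np.random.default_rng(1)
n = 20
shared_a = rng.normal(0.0, 1.0, n)   # group A samples, shared variability
shared_b = rng.normal(0.0, 1.0, n)   # group B samples, shared variability
gene1_a = 5.0 + shared_a + rng.normal(0, 0.2, n)
gene2_a = 3.0 + shared_a + rng.normal(0, 0.2, n)
gene1_b = 5.0 + shared_b + rng.normal(0, 0.2, n)  # gene1: no group effect
gene2_b = 4.0 + shared_b + rng.normal(0, 0.2, n)  # gene2: shifted by +1

# The pairwise difference removes the shared variability entirely.
t_pair, p_pair = stats.ttest_ind(gene1_a - gene2_a, gene1_b - gene2_b)
# The single-gene test for gene2 must fight the full shared noise.
t_single, p_single = stats.ttest_ind(gene2_a, gene2_b)
print(p_pair, p_single)
```

    The pair-level p-value is orders of magnitude smaller because the variance of the difference is dominated by the small independent noise, not the large shared component.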

    Optimal Rate of Direct Estimators in Systems of Ordinary Differential Equations Linear in Functions of the Parameters

    Many processes in biology, chemistry, physics, medicine, and engineering are modeled by a system of differential equations. Such a system is usually characterized via unknown parameters, and estimating their 'true' values is thus required. In this paper we focus on the quite common systems for which the derivatives of the states may be written as sums of products of a function of the states and a function of the parameters. For such a system, linear in functions of the unknown parameters, we present a necessary and sufficient condition for identifiability of the parameters. We develop an estimation approach that bypasses the heavy computational burden of numerical integration and avoids the estimation of the derivatives of the system states, drawbacks from which many classic estimation methods suffer. We also suggest an experimental design for which smoothing can be circumvented. The optimal rate of the proposed estimators, i.e., their $\sqrt{n}$-consistency, is proved, and simulation results illustrate their excellent finite-sample performance and compare it to other estimation approaches.
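
    For the simplest case of a system linear in a function of a single parameter, the integration-free idea can be sketched as follows. The scalar model x' = theta*x and the noiseless trajectory are illustrative assumptions, not the paper's estimator:

```python
import numpy as np

# Sketch of a "direct" (integral) estimator for x'(t) = theta * x(t).
# Integrating both sides gives x(t) - x(0) = theta * \int_0^t x(s) ds,
# so theta can be fit by least squares on the observed trajectory,
# with no numerical ODE solving and no estimation of derivatives.
theta_true = 0.7
t = np.linspace(0.0, 2.0, 200)
x = np.exp(theta_true * t)          # noiseless observations for illustration

# Cumulative trapezoidal integral of x over t.
I = np.concatenate(([0.0],
                    np.cumsum(0.5 * (x[1:] + x[:-1]) * np.diff(t))))
lhs = x - x[0]
theta_hat = np.dot(I, lhs) / np.dot(I, I)   # least-squares slope through 0
print(theta_hat)
```

    With noisy data the same regression applies after smoothing the trajectory, which is exactly the step the paper's suggested experimental design aims to circumvent.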

    Determining Structurally Identifiable Parameter Combinations Using Subset Profiling

    Identifiability is a necessary condition for successful parameter estimation of dynamic system models. A major component of identifiability analysis is determining the identifiable parameter combinations, the functional forms for the dependencies between unidentifiable parameters. Identifiable combinations can help in model reparameterization and also in determining which parameters may be experimentally measured to recover model identifiability. Several numerical approaches to determining identifiability of differential equation models have been developed; however, the question of determining identifiable combinations remains incompletely addressed. In this paper, we present a new approach which uses parameter subset selection methods based on the Fisher Information Matrix, together with the profile likelihood, to effectively estimate identifiable combinations. We demonstrate this approach on several example models in pharmacokinetics, cellular biology, and physiology.
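
    A minimal sketch of the Fisher-Information side of such an analysis, on a toy model where only a product of parameters is identifiable (the model and all numbers here are illustrative assumptions, not the authors' examples):

```python
import numpy as np

# Toy model y(t; a, b, c) = (a*b) * exp(-c*t): only the product a*b is
# identifiable, so the FIM is rank-deficient along a direction that mixes
# a and b. Near-zero FIM eigenvalues flag such unidentifiable combinations.
def model(p, t):
    a, b, c = p
    return a * b * np.exp(-c * t)

t = np.linspace(0.0, 5.0, 30)
p0 = np.array([2.0, 1.5, 0.8])

# Numerical sensitivity matrix S[i, j] = dy(t_i)/dp_j (central differences).
eps = 1e-6
S = np.column_stack([
    (model(p0 + eps * e, t) - model(p0 - eps * e, t)) / (2 * eps)
    for e in np.eye(3)
])
fim = S.T @ S
eigvals, eigvecs = np.linalg.eigh(fim)
# The smallest eigenvalue is numerically zero; its eigenvector spans the
# non-identifiable direction (here proportional to (a, -b, 0)).
print(eigvals[0] / eigvals[-1])
```

    The eigenvector attached to the near-zero eigenvalue is what subset-selection methods inspect to read off which parameters enter the unidentifiable combination.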

    Optimization of miRNA-seq data preprocessing.

    The past two decades of microRNA (miRNA) research have solidified the role of these small non-coding RNAs as key regulators of many biological processes and promising biomarkers for disease. The concurrent development in high-throughput profiling technology has further advanced our understanding of the impact of their dysregulation on a global scale. Currently, next-generation sequencing is the platform of choice for the discovery and quantification of miRNAs. Despite this, there is no clear consensus on how the data should be preprocessed before conducting downstream analyses. Often overlooked, data preprocessing is an essential step in data analysis: the presence of unreliable features and noise can affect the conclusions drawn from downstream analyses. Using a spike-in dilution study, we evaluated the effects of several general-purpose aligners (BWA, Bowtie, Bowtie 2 and Novoalign) and normalization methods (counts-per-million, total count scaling, upper quartile scaling, Trimmed Mean of M-values, DESeq, linear regression, cyclic loess and quantile) with respect to the final miRNA count data distribution, variance, bias and accuracy of differential expression analysis. We make practical recommendations on the optimal preprocessing methods for the extraction and interpretation of miRNA count data from small RNA-sequencing experiments.
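
    Two of the simpler normalization schemes compared in the study, counts-per-million and upper-quartile scaling, can be sketched as follows (the count matrix is illustrative):

```python
import numpy as np

# Illustrative miRNA count matrix: rows are features, columns are samples.
counts = np.array([
    [100, 200],
    [ 10,  15],
    [  0,   5],
    [ 50, 120],
], dtype=float)

# Counts-per-million: rescale each sample so its library size is 1e6.
lib_size = counts.sum(axis=0)
cpm = counts / lib_size * 1e6

# Upper-quartile scaling: divide each sample by the 75th percentile of
# its nonzero counts, making that quartile comparable across samples.
uq = np.array([np.percentile(c[c > 0], 75) for c in counts.T])
uq_scaled = counts / uq

print(cpm.sum(axis=0))   # every column now sums to one million
```

    Methods such as TMM and DESeq instead estimate sample-specific scaling factors from ratios between samples, which is less sensitive to a few highly expressed features dominating the library size.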