
    Automated Spatial Brain Normalization and Hindbrain White Matter Reference Tissue Give Improved [F-18]-Florbetaben PET Quantitation in Alzheimer's Model Mice

    Preclinical PET studies of β-amyloid (Aβ) accumulation are of growing importance, but comparisons between research sites require standardized and optimized methods for quantitation. We therefore aimed to systematically evaluate (1) the impact of an automated algorithm for spatial brain normalization, and (2) intensity scaling methods using different reference regions for Aβ-PET in a large dataset of transgenic mice. PS2APP mice in a 6-week longitudinal setting (N = 37) and another set of PS2APP mice at a histologically assessed narrow range of Aβ burden (N = 40) were investigated by florbetaben PET. Manual spatial normalization by three readers at different training levels was performed prior to application of an automated brain spatial normalization, and inter-reader agreement was assessed by Fleiss' kappa (κ). For this method, the impact of templates at different pathology stages was investigated. Four different reference regions for intensity scaling of brain uptake were used to calculate frontal cortical standardized uptake value ratios (SUVR-CTX/REF), compared against raw SUV-CTX. Results were compared on the basis of longitudinal stability (Cohen's d) and in reference to gold-standard histopathological quantitation (Pearson's R). Application of automated brain spatial normalization resulted in nearly perfect agreement (all κ ≥ 0.99) between different readers, with constant or improved correlation with histology. Templates based on an inappropriate pathology stage resulted in up to 2.9% systematic bias for SUVR-CTX/REF. All SUVR-CTX/REF methods performed better than SUV-CTX both with regard to longitudinal stability (d ≥ 1.21 vs. d = 0.23) and histological gold-standard agreement (R ≥ 0.66 vs. R ≥ 0.31). Voxel-wise analysis suggested a physiologically implausible longitudinal decrease with global mean scaling. The hindbrain white matter reference (R-mean = 0.75) …
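The SUVR computation described in this abstract reduces to a ratio of mean regional uptakes. A minimal NumPy sketch with toy voxel values follows; the function and variable names are illustrative assumptions, not the authors' pipeline:

```python
import numpy as np

def suvr(ctx_uptake, ref_uptake):
    """Standardized uptake value ratio: mean target-region uptake scaled
    by the mean uptake of a reference region (e.g. hindbrain white matter)."""
    return float(np.mean(ctx_uptake) / np.mean(ref_uptake))

# Toy voxel values (arbitrary units); real inputs would come from
# spatially normalized PET volumes with atlas-defined regions.
frontal_ctx = np.array([1.8, 2.1, 2.0, 1.9])
hindbrain_wm = np.array([1.0, 1.1, 0.9, 1.0])
print(suvr(frontal_ctx, hindbrain_wm))  # 1.95
```

Because the reference region's mean sits in the denominator, a biased reference (e.g. one contaminated by pathology) shifts every SUVR systematically, which is why the choice of reference tissue matters here.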

    Vector-valued impact measures and generation of specific indexes for research assessment

    A mathematical structure for defining multi-valued bibliometric indices is provided, with the aim of measuring the impact of general sources of information other than articles and journals, for example repositories of datasets. The aim of the model is to use several scalar indices at the same time to measure the impact of a given source of information; that is, we construct vector-valued indices. We use the properties of these vector-valued indices to give a global answer to the problem of finding the optimal scalar index for measuring a particular aspect of the impact of an information source, depending on the criterion we want to fix for the evaluation of this impact. The main restrictions of our model are that (1) it uses finite sets of scalar impact indices (altmetrics), and (2) these indices are assumed to be additive. The optimization procedure for finding the best tool for a fixed criterion is also presented. In particular, we show how to create an impact measure completely adapted to the policy of a specific research institution. Calabuig, J. M., Ferrer Sapena, A., & Sánchez Pérez, E. A. (2016). Vector-valued impact measures and generation of specific indexes for research assessment. Scientometrics, 108(3), 1425–1443. doi:10.1007/s11192-016-2039-6
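The vector-valued construction can be illustrated with a minimal sketch: evaluate several scalar indices on the same source, then collapse the resulting vector with institution-chosen weights (the fixed evaluation criterion). The two toy indices and all names below are illustrative assumptions, not the paper's formalism:

```python
import numpy as np

def vector_index(counts, index_fns):
    """Evaluate several scalar impact indices on one source's citation/usage
    counts, returning a vector-valued impact measure."""
    return np.array([f(counts) for f in index_fns])

def scalarize(v, weights):
    """Collapse the vector index with normalized, institution-chosen weights,
    yielding one scalar tuned to a fixed evaluation criterion."""
    w = np.asarray(weights, dtype=float)
    return float(v @ (w / w.sum()))

# Two toy scalar indices (for illustration only; the paper's model
# assumes additive indices).
total_citations = lambda c: float(np.sum(c))
citations_per_item = lambda c: float(np.mean(c))

counts = np.array([3, 0, 7, 2])
v = vector_index(counts, [total_citations, citations_per_item])
print(v)                          # [12.  3.]
print(scalarize(v, [0.5, 0.5]))  # 7.5
```

Changing the weight vector changes which aspect of impact dominates, which is the sense in which the measure can be "adapted to the policy of a specific research institution."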

    Profound effect of profiling platform and normalization strategy on detection of differentially expressed microRNAs

    Adequate normalization minimizes the effects of systematic technical variation and is a prerequisite for detecting meaningful biological changes. However, recommendations on miRNA normalization are inconsistent. We therefore investigated the impact of seven different normalization methods (reference gene index (RGI), global geometric mean, quantile, invariant selection, loess, loessM, and generalized Procrustes analysis (GPA)) on the intra- and inter-platform performance of two distinct and commonly used miRNA profiling platforms. We included data from miRNA profiling analyses derived from a hybridization-based platform (Agilent Technologies, AGL) and an RT-qPCR platform (Applied Biosystems TaqMan low-density arrays, TLDA). Furthermore, we validated a subset of miRNAs by individual RT-qPCR assays. Our analyses incorporated data on the effect of differentiation and tumor necrosis factor alpha treatment on primary human skeletal muscle cells and a murine skeletal muscle cell line. The normalization methods differed in their impact on (i) standard deviations, (ii) the area under the receiver operating characteristic (ROC) curve, and (iii) the similarity of differential expression calls. Loess, loessM, and quantile normalization were most effective in minimizing standard deviations on the AGL and TLDA platforms. Moreover, loess, loessM, invariant selection, and GPA increased the area under the ROC curve, a measure of the statistical performance of a test. The Jaccard index revealed that inter-platform concordance of differential expression tended to be increased by loess, loessM, quantile, and GPA normalization of AGL and TLDA data, as well as by RGI normalization of TLDA data.
    We recommend loess, loessM, or GPA normalization for miRNA Agilent arrays and qPCR cards, as these approaches (i) effectively reduced standard deviations, (ii) increased the sensitivity and accuracy of differential miRNA expression detection, and (iii) increased inter-platform concordance. The results also demonstrate the successful adaptation of loessM and GPA to one-color miRNA profiling experiments.
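Of the normalization methods compared above, quantile normalization is the easiest to sketch: every sample (column) is forced onto the same target distribution, taken as the row-wise mean of the sorted columns. This minimal NumPy version ignores ties and is not the authors' implementation:

```python
import numpy as np

def quantile_normalize(x):
    """Quantile normalization across samples (columns): each value is
    replaced by the mean of the values sharing its within-column rank,
    so all columns end up with an identical distribution."""
    ranks = np.argsort(np.argsort(x, axis=0), axis=0)   # rank of each value per column
    mean_sorted = np.mean(np.sort(x, axis=0), axis=1)   # shared target distribution
    return mean_sorted[ranks]

# Rows = miRNAs, columns = samples (toy intensities).
x = np.array([[5., 4.],
              [2., 1.],
              [3., 6.]])
print(quantile_normalize(x))
# [[5.5 3.5]
#  [1.5 1.5]
#  [3.5 5.5]]
```

After normalization both columns contain exactly the values {1.5, 3.5, 5.5}; only the rank ordering within each sample is preserved, which is why quantile normalization suppresses between-sample technical variation so aggressively.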

    Rivals for the crown: Reply to Opthof and Leydesdorff

    We reply to the criticism by Opthof and Leydesdorff [arXiv:1002.2769] of the way in which our institute applies journal and field normalizations to citation counts. We point out why we believe most of the criticism is unjustified, but we also indicate where we think Opthof and Leydesdorff raise a valid point.

    An Integrated Impact Indicator (I3): A New Definition of "Impact" with Policy Relevance

    Allocation of research funding, as well as promotion and tenure decisions, is increasingly made using indicators and impact factors drawn from citations to published work. A debate among scientometricians about the proper normalization of citation counts has been resolved with the creation of an Integrated Impact Indicator (I3) that solves a number of problems found in previously used indicators. I3 applies non-parametric statistics using percentiles, allowing highly cited papers to be weighted more than less-cited ones. It further allows unbundling of venues (i.e., journals or databases) at the article level. Measures at the article level can then be re-aggregated in terms of the units of evaluation. At the venue level, I3 provides a properly weighted alternative to the journal impact factor. I3 has the added advantage of enabling and quantifying classifications such as the six percentile-rank classes used in the National Science Board's Science & Engineering Indicators. Comment: Research Evaluation (in press)
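A rough sketch of the percentile-class idea behind I3: rank each paper's citation count within the set, bin the percentiles into six classes, and sum the class weights. The thresholds and weights below are illustrative placeholders, not the published definition:

```python
import numpy as np

def i3(citations, thresholds=(50, 75, 90, 95, 99), weights=(1, 2, 3, 4, 5, 6)):
    """Percentile-class impact sketch: each paper contributes the weight of
    its percentile-rank class (six classes in the style of the NSB's
    Science & Engineering Indicators), and contributions are summed."""
    citations = np.asarray(citations)
    # Percentile of each paper within this set (fraction of papers <= it).
    pct = np.array([np.mean(citations <= c) * 100 for c in citations])
    classes = np.digitize(pct, thresholds)  # class index 0..5
    return int(np.sum(np.array(weights)[classes]))

papers = np.array([0, 1, 1, 2, 3, 5, 8, 20, 40, 100])
print(i3(papers))  # 23
```

Unlike a mean-based indicator, this sum grows with both the size and the skew of the publication set, which is what lets I3 integrate "quantity" and "quality" in one number.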

    A novel and universal method for microRNA RT-qPCR data normalization

    Gene expression analysis of microRNA molecules is becoming increasingly important. In this study we assess the use of the mean expression value of all expressed microRNAs in a given sample as a normalization factor for microRNA real-time quantitative PCR data, and compare its performance to the currently adopted approach. We demonstrate that the mean expression value outperforms the current normalization strategy in terms of better reduction of technical variation and more accurate appreciation of biological changes.
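The proposed normalization amounts to per-sample mean-centering of quantification cycle (Cq) values: subtract each sample's mean Cq over all expressed miRNAs instead of the Cq of a fixed reference gene. A minimal sketch follows (layout, names, and toy values are assumptions):

```python
import numpy as np

def mean_expression_normalize(cq):
    """Per-sample mean-centering of RT-qPCR Cq values: subtract each
    sample's mean Cq over all expressed miRNAs, using the global mean
    as the normalization factor. Rows = miRNAs, columns = samples."""
    return cq - cq.mean(axis=0, keepdims=True)

# Toy Cq matrix for 3 miRNAs across 2 samples.
cq = np.array([[25., 26.],
               [30., 31.],
               [20., 21.]])
print(mean_expression_normalize(cq))
# [[ 0.  0.]
#  [ 5.  5.]
#  [-5. -5.]]
```

In the toy data the second sample's uniform +1 cycle offset (a purely technical shift) vanishes after centering, while the 5-cycle differences between miRNAs, the biological signal, are preserved.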