Statistical presentation and analysis of ordinal data in nursing research.
Objectives: The aim of this study was to review the presentation and analysis of ordinal data in three international nursing journals in 2003. Method: In total, 166 full-length articles from the 2003 editions of Cancer Nursing, Scandinavian Journal of Caring Sciences and Nursing Research were reviewed for their use of ordinal data. Results: This review showed that ordinal scales were used in about a third of the articles. However, only about half of the articles that used ordinal data presented the data appropriately, and only about half of the analyses of the ordinal data were performed properly. Conclusions: Ordinal data are rather common in nursing research, but a large share of the studies do not present or analyse the results properly. Incorrect presentation and analysis of the data may lead to bias and reduced ability to detect statistical differences or effects, resulting in misleading information. This highlights the importance of knowing the measurement level of the data; the assumptions underlying the statistical tests must be considered to ensure correct presentation and analysis of data.
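As a minimal sketch of the kind of handling the review counts as appropriate, the example below presents hypothetical Likert-type responses with medians and interquartile ranges rather than means, and compares groups with a rank-based test rather than a t-test. The data are invented for illustration; SciPy is assumed available.

```python
# Hedged sketch: presenting and analysing ordinal (Likert-type) data.
# Responses below are hypothetical 1-5 ratings for two groups.
import numpy as np
from scipy import stats

group_a = np.array([2, 3, 3, 4, 4, 4, 5, 5, 3, 2])
group_b = np.array([1, 2, 2, 3, 3, 2, 4, 3, 2, 1])

# Present ordinal data with medians and interquartile ranges,
# not means and standard deviations.
for name, g in [("A", group_a), ("B", group_b)]:
    q1, med, q3 = np.percentile(g, [25, 50, 75])
    print(f"Group {name}: median={med}, IQR=({q1}, {q3})")

# Compare groups with a rank-based test (Mann-Whitney U),
# not an independent-samples t-test.
u, p = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")
print(f"Mann-Whitney U={u}, p={p:.3f}")
```

The same principle applies to presentation: a median with IQR preserves the ordinal character of the scale, while a mean implicitly assumes interval-level distances between response categories.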
Another look at anomalous J/Psi suppression in Pb+Pb collisions at P/A = 158 GeV/c
A new data presentation is proposed to consider anomalous J/Psi suppression in Pb+Pb collisions at P/A = 158 GeV/c. If the inclusive differential cross section with respect to a centrality variable is available, one can plot the yield of J/Psi events per Pb-Pb collision as a function of an estimated squared impact parameter. Both quantities are raw experimental data and have a clear physical meaning. Compared to the usual J/Psi over Drell-Yan ratio, there is a huge gain in statistical accuracy. This presentation could be applied advantageously to many processes in the field of nucleus-nucleus collisions at various energies.
Comment: 6 pages, 5 figures, submitted to The European Physical Journal C; minor revisions for final version
Are geometric morphometric analyses replicable? Evaluating landmark measurement error and its impact on extant and fossil Microtus classification.
Geometric morphometric analyses are frequently employed to quantify biological shape and shape variation. Despite the popularity of this technique, quantification of measurement error in geometric morphometric datasets and its impact on statistical results is seldom assessed in the literature. Here, we evaluate error on 2D landmark coordinate configurations of the lower first molar of five North American Microtus (vole) species. We acquired data from the same specimens several times to quantify error from four data acquisition sources: specimen presentation, imaging devices, interobserver variation, and intraobserver variation. We then evaluated the impact of those errors on linear discriminant analysis-based classifications of the five species using recent specimens of known species affinity and fossil specimens of unknown species affinity. Results indicate that data acquisition error can be substantial, sometimes explaining >30% of the total variation among datasets. Comparisons of datasets digitized by different individuals exhibit the greatest discrepancies in landmark precision, and comparison of datasets photographed from different presentation angles yields the greatest discrepancies in species classification results. All error sources impact statistical classification to some extent. For example, no two landmark dataset replicates exhibit the same predicted group memberships of recent or fossil specimens. Our findings emphasize the need to mitigate error as much as possible during geometric morphometric data collection. Though the impact of measurement error on statistical fidelity is likely analysis-specific, we recommend that all geometric morphometric studies standardize specimen imaging equipment, specimen presentations (if analyses are 2D), and landmark digitizers to reduce error and subsequent analytical misinterpretations.
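The core measurement in this kind of study, comparing replicate landmark configurations after removing nuisance differences in position, scale, and orientation, can be sketched with SciPy's Procrustes superimposition. This is not the authors' pipeline; the landmark coordinates and the jitter level standing in for intraobserver digitizing error are invented for illustration.

```python
# Hedged sketch: quantifying digitizing error between two replicate
# 2D landmark configurations of the same specimen via Procrustes
# superimposition. All coordinates are hypothetical.
import numpy as np
from scipy.spatial import procrustes

# Replicate 1 and 2: the same five landmarks digitized twice,
# with small simulated intraobserver jitter in replicate 2.
rep1 = np.array([[0.0, 0.0], [1.0, 0.1], [2.0, 0.0],
                 [1.5, 1.0], [0.5, 1.1]])
rng = np.random.default_rng(0)
rep2 = rep1 + rng.normal(scale=0.02, size=rep1.shape)

# Procrustes removes translation, scale, and rotation; the residual
# disparity (sum of squared deviations between the standardized
# configurations) reflects measurement error, not biological shape.
_, _, disparity = procrustes(rep1, rep2)
print(f"Procrustes disparity between replicates: {disparity:.5f}")
```

Repeating this across many specimens and acquisition conditions (different observers, cameras, presentation angles) would let the error variance be partitioned by source, which is the comparison the abstract describes.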
Location as a determinant of accommodation prices: managerial approach
In the presentation the authors discuss the impact of location-based factors on accommodation prices. The aim of the presentation is to compare the results of qualitative and quantitative research on location-based determinants of accommodation prices in the Lodz Metropolitan Area (Poland). The authors employ methodological triangulation (Yeung 2000), both to explore the statistical significance of location-based determinants of accommodation prices, and to present managerial opinions about the influence of location on accommodation prices. The research was financially supported by the Polish Ministry of Science and Higher Education (Subsidy for young scientists No 545/392 and 545/915). The authors are grateful to the students of the tourism and recreation program at the University of Lodz, who supported the data collection procedure and conducted the in-depth interviews.
Population Growth and Other Statistics of Middle-sized Irish Towns. General Research Series Paper No. 85, April 1976
The basic aim of the study is the presentation of tables of comparative statistical data relating to 97 towns with populations of 500-10,000 in 1971, and analyses of such data. The exclusion of the four County Boroughs and Dun Laoghaire, together with twelve other large towns and all small towns and villages, was to impart a degree of homogeneity to the inquiry as regards the function of the town. The 97 towns range from Mullingar, the largest, with a population of 9,245, to Cootehill with 1,542.
Corporate Governance as a Tool for Curbing Bank Distress in Nigeria Deposit Money Bank: Empirical Evidence
The objective of the study is to examine the relationship between corporate governance and bank distress in deposit money banks. The research design adopted in this paper is the case study method, in order to gain an intensive insight into the subject matter. Primary data were used, specifically the survey technique. Data in this study were analysed with the Statistical Package for the Social Sciences (SPSS), which contains the necessary statistical techniques for data analysis. For testing the hypothesis, correlation analysis, which measures the degree of relationship between variables, was used to analyse the results generated from the questionnaire. The evidence shows that corporate governance has no significant effect on the prevention of bank distress but has significantly improved the performance of the Nigerian banking sector. We therefore recommend that banks demonstrate strong internal policies to identify and manage conflicts of interest, and a zero-tolerance posture against cases of unsound corporate governance practices.
Teaching Data Science
We describe an introductory data science course, entitled Introduction to Data Science, offered at the University of Illinois at Urbana-Champaign. The course introduced general programming concepts by using the Python programming language with an emphasis on data preparation, processing, and presentation. The course had no prerequisites, and students were not expected to have any programming experience. This introductory course was designed to cover a wide range of topics, from the nature of data, to storage, to visualization, to probability and statistical analysis, to cloud and high performance computing, without becoming overly focused on any one subject. We conclude this article with a discussion of lessons learned and our plans to develop new data science courses.
Comment: 10 pages, 4 figures, International Conference on Computational Science (ICCS 2016)
Clarify: Software for Interpreting and Presenting Statistical Results
Clarify is a program that uses Monte Carlo simulation to convert the raw output of statistical procedures into results that are of direct interest to researchers, without changing statistical assumptions or requiring new statistical models. The program, designed for use with the Stata statistics package, offers a convenient way to implement the techniques described in: Gary King, Michael Tomz, and Jason Wittenberg (2000). "Making the Most of Statistical Analyses: Improving Interpretation and Presentation." American Journal of Political Science 44, no. 2 (April 2000): 347-61. We recommend that you read this article before using the software. Clarify simulates quantities of interest for the most commonly used statistical models, including linear regression, binary logit, binary probit, ordered logit, ordered probit, multinomial logit, Poisson regression, negative binomial regression, Weibull regression, seemingly unrelated regression equations, and the additive logistic normal model for compositional data. Clarify Version 2.1 is forthcoming (2003) in Journal of Statistical Software.
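The simulation idea behind Clarify can be sketched outside Stata: draw coefficient vectors from the estimated sampling distribution of a fitted model, then push each draw through the model to get a quantity of direct interest, such as a predicted probability with a simulation-based interval. In the sketch below (Python rather than Stata), the logit point estimates and covariance matrix are hypothetical stand-ins for real model output.

```python
# Hedged sketch of King-Tomz-Wittenberg-style simulation of a
# quantity of interest from a binary logit model.
import numpy as np

# Hypothetical logit estimates (stand-ins for fitted-model output):
beta_hat = np.array([0.5, 1.2])          # [constant, x]
V_hat = np.array([[0.010, -0.002],
                  [-0.002, 0.008]])      # estimated covariance matrix

# Draw 1000 coefficient vectors from N(beta_hat, V_hat) ...
rng = np.random.default_rng(42)
draws = rng.multivariate_normal(beta_hat, V_hat, size=1000)

# ... and convert each draw into the quantity of interest:
# the predicted probability Pr(y = 1) at the profile x = 1.
profile = np.array([1.0, 1.0])           # [constant, x]
probs = 1 / (1 + np.exp(-(draws @ profile)))
lo, mid, hi = np.percentile(probs, [2.5, 50, 97.5])
print(f"Pr(y=1 | x=1): {mid:.3f} (95% interval {lo:.3f}-{hi:.3f})")
```

The point of the technique is that the researcher reports an interpretable quantity (a probability, expected count, or first difference) with its uncertainty, instead of raw coefficients, without changing the underlying model.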