112 research outputs found
Learning Outcomes supporting the integration of Ethical Reasoning into quantitative courses: Three tasks for use in three general contexts
This paper gives a brief overview of cognitive and educational sciences'
perspectives on learning outcomes (LOs) to facilitate the integration of LOs
specific to ethical reasoning into any mathematics or quantitative course. The
target is undergraduate (adult) learners, but these LOs can be adapted for
earlier and later stages of learning. Core contents of Ethical Reasoning are:
1. its six constituent knowledge, skills, and abilities; 2. a stakeholder
analysis; and 3. ethical practice standards or guidelines. These are briefly
summarized. Five LOs are articulated at each of three levels of cognitive
complexity (low/medium/high), and a set of assignment features that can be
adapted repeatedly over a term is given to support these LOs. These features can
support authentic development of the knowledge, skills, and abilities that are
the target of ethical reasoning instruction in math and quantitative courses at
the tertiary level. Three contexts by which these can be integrated are
Assumption (what if the assumption fails?), Approximation (what if the
approximation does not hold?), and Application (is the application appropriate?
what if it is not?). One or more of the three core contents of Ethical
Reasoning can be added to any problem already utilized in a course (or new
ones) by asking learners to apply the core to the context. Engagement with
ethical reasoning can prepare students to assume their responsibilities to
promote and perpetuate the integrity of their profession across their careers
using mathematics, statistics, data science, and other quantitative methods and
technologies.
Comment: 21 pages; 8 tables, 1 Figure
Stewardship of Mathematics: Essential Training for Contributors to, and Users of, the Practice of Mathematics
A steward of the discipline was originally defined as an individual to whom “we can entrust the vigor, quality, and integrity of the field”, and more specifically, as “someone who will creatively generate new knowledge, critically conserve valuable and useful ideas, and responsibly transform those understandings through writing, teaching, and application” [8]. Originally articulated for doctoral education, in 2019 the construct of stewardship was expanded so that it can also be applied to non-academic practitioners in any field, and can be initiated earlier than doctoral education [18]. In this paper, we apply this construct to the context of mathematics, and argue that even for those early in their training in mathematics, stewardly practice of mathematics can be introduced and practiced. Postsecondary and tertiary education in mathematics — for future mathematicians as well as those who will use math at work — can include curriculum-spanning training, and documented achievement in stewardship. Even before a formal ethical practice standard for mathematics is developed and deployed to help inculcate math students with a “tacit responsibility for the quality and integrity of their own work”, higher education can begin to shape student attitudes towards stewardly professional identities. Learning objectives to accomplish this are described, to assist math instructors in facilitating the recognition and acceptance of responsibility for the quality and integrity of their own work and that of colleagues in the practice of mathematics
How does international guidance for statistical practice align with the ASA Ethical Guidelines?
Gillikin (2017) defines a 'practice standard' as a document to 'define the
way the profession's body of knowledge is ethically translated into day-to-day
activities' (Gillikin 2017, p. 1). Such documents fulfill three objectives:
they 1) define the profession; 2) communicate uniform standards to
stakeholders; and 3) reduce conflicts between personal and professional conduct
(Gillikin, 2017, p. 2). However, many guidelines exist, owing both to the
different purposes that guidance writers may have and to the different
audiences that the many guidance documents address. The existence of diverse
statements does not necessarily make their commonalities clear; and while some
statements are explicitly aspirational, professionals as well as the public
need to know that ethically trained practitioners follow accepted practice
standards. This paper applies the
methodological approach described in Tractenberg (2023) and demonstrated in
Park and Tractenberg (2023) to study alignment among international guidance for
official statistics, and between these guidance documents and the ASA Ethical
Guidelines for Statistical Practice functioning as an ethical practice standard
(Tractenberg, 2022-A, 2022-B; after Gillikin 2017). In the spirit of exchanging
experiences and lessons learned, we discuss how our findings could inform
closer examination, clarification, and, if beneficial, possible revision of
guidance in the future.
Comment: 53 pages; 28-page document with 11 summary tables; the 2022 ASA Ethical Guidelines for Statistical Practice and 10 detailed tables. arXiv admin note: text overlap with arXiv:2309.0718
How do ASA Ethical Guidelines Support U.S. Guidelines for Official Statistics?
In 2022, the American Statistical Association revised its Ethical Guidelines
for Statistical Practice. Originally issued in 1982, these Guidelines describe
responsibilities of the 'ethical statistical practitioner' to their profession,
to their research subjects, as well as to their community of practice. These
guidelines are intended as a framework to assist decision-making by
statisticians working across academic, research, and government environments.
For the first time, the 2022 Guidelines describe the ethical obligations of
organizations and institutions that use statistical practice. This paper
examines alignment between the ASA Ethical Guidelines and other
long-established normative guidelines for US official statistics: the OMB
Statistical Policy Directives 1, 2, and 2a; the NASEM Principles and Practices; and
the OMB Data Ethics Tenets. Our analyses ask how the recently updated ASA
Ethical Guidelines can support these guidelines for federal statistics and data
science. The analysis uses a form of qualitative content analysis, the
alignment model, to identify patterns of alignment, and potential for tensions,
within and across guidelines. The paper concludes with recommendations to
policy makers when using ethical guidance to establish parameters for policy
change and the administrative and technical controls that necessarily follow.
Comment: 74 pages total; 25-page manuscript with 4 summary tables, the 2022 ASA Ethical Guidelines for Statistical Practice, and 10 detailed tables in Anne
Accuracy and consistency in discovering dimensionality by correlation constraint analysis and common factor analysis
An important application of multivariate analysis is the estimation of the underlying dimensions of an instrument or set of variables. Estimation of dimensions is often pursued with the objective of finding the single factor or dimension to which each observed variable belongs or by which it is most strongly influenced. This can involve estimating the loadings of observed variables on a pre-specified number of factors, achieved by common factor analysis (CFA) of the covariance or correlational structure of the observed variables. Another method, correlation constraint analysis (CCA), operates on the determinants of all 2x2 submatrices of the covariance matrix of the variables. CCA software also determines if partialling out the effects of any observed variable affects observed correlations, the only exploratory method to specifically rule out (or identify) observed variables as being the cause of correlations among observed variables. CFA estimates the strengths of associations between factors, hypothesized to underlie or cause observed correlations, and the observed variables; CCA does not estimate factor loadings but can uncover mathematical evidence of the causal relationships hypothesized between factors and observed variables. These are philosophically and analytically diverse methods for estimating the dimensionality of a set of variables, and each can be useful in understanding the simple structure in multivariate data. This dissertation studied the performances of these methods at uncovering the dimensionality of simulated data under conditions of varying sample size and model complexity, the presence of a weak factor, and correlated vs. independent factors. CCA was sensitive (performed significantly worse) when these conditions were present in terms of omitting more factors, and omitting and mis-assigning more indicators. 
CFA was also found to be sensitive to all but one condition (whether factors were correlated or not) in terms of omitting factors; it was sensitive to all conditions in terms of omitting and mis-assigning indicators, and it also found extra factors depending on the number of factors in the population, the purity of factors, and the presence of a weak factor. This is the first study of CCA in data with these specific features of complexity, which are common in multivariate data.
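The determinant test at the heart of CCA can be illustrated with a short simulation. This is a hypothetical sketch, not the dissertation's software, and the loadings and sample size are chosen purely for illustration: under a single common factor, the population covariance of any two indicators is the product of their loadings, so every 2x2 subdeterminant ("tetrad difference") of the covariance matrix vanishes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a one-factor model: each observed variable is a loading
# times a common factor plus independent noise (illustrative values).
n = 100_000
factor = rng.normal(size=n)
loadings = np.array([0.9, 0.8, 0.7, 0.6])
X = factor[:, None] * loadings + rng.normal(scale=0.5, size=(n, 4))

S = np.cov(X, rowvar=False)  # 4x4 sample covariance matrix

# Tetrad difference for variables (i, j, k, l): the determinant of the
# 2x2 submatrix [[s_ik, s_il], [s_jk, s_jl]].  Under one common factor,
# s_ik = l_i * l_k, so the determinant is l_i*l_k*l_j*l_l - l_i*l_l*l_j*l_k = 0.
def tetrad(S, i, j, k, l):
    return S[i, k] * S[j, l] - S[i, l] * S[j, k]

t1 = tetrad(S, 0, 1, 2, 3)
t2 = tetrad(S, 0, 2, 1, 3)
print(t1, t2)  # both should be near zero for one-factor data
```

When the data are generated by two or more factors, some of these determinants remain far from zero, which is the signal CCA uses to reject a single-dimension structure.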
The Mastery Rubric for Statistics and Data Science: promoting coherence and consistency in data science education and training
Consensus-based publications of both competencies and undergraduate
curriculum guidance documents targeting data science instruction for higher
education have recently been published, but recommendations for curriculum
features from diverse sources may not result in consistent training across
programs. A Mastery Rubric was developed that prioritizes the promotion and
documentation of formal growth, as well as the development of independence,
for the 13 knowledge, skills, and abilities requisite for professional
practice in statistics and data science (SDS). A Mastery Rubric (MR) driven
curriculum can emphasize computation, statistics, or a third discipline in
which either would be deployed, or all three can be featured. The MR-SDS
supports each of these program structures while promoting consistency with
international, consensus-based curricular recommendations for statistics and
data science, and allows 'statistics', 'data science', and 'statistics and
data science' curricula to consistently educate students with a focus on
increasing learners' independence. The Mastery Rubric construct integrates
findings from the learning sciences and cognitive and educational psychology
to support teachers and students through the learning enterprise. The MR-SDS
will support higher education as well as the interests of business,
government, and academic workforce development, bringing a consistent
framework to the challenges facing a domain that is claimed to be both an
independent discipline and part of other disciplines, including computer
science, engineering, and statistics. The MR-SDS can be used for the
development or revision of an evaluable curriculum that will reliably support
the preparation of practitioners at early (e.g., undergraduate degree
programs), middle (e.g., upskilling and training programs), and late (e.g.,
doctoral-level training) stages.
Comment: 40 pages; 2 Tables; 4 Figures. Presented at the Symposium on Data Science & Statistics (SDSS) 202
Supporting Evidence-Informed Teaching in Biomedical and Health Professions Education Through Knowledge Translation: An Interdisciplinary Literature Review
PHENOMENON: The purpose of “systematic” reviews/reviewers of medical and health professions educational research is to identify best practices. This qualitative paper explores the question of whether systematic reviews can support “evidence informed” teaching, and contrasts traditional systematic reviewing with a knowledge-translation approach to this objective.
APPROACH: Degrees of Freedom Analysis is used to examine the alignment of systematic review methods with educational research and the pedagogical strategies and approaches that might be considered with a decision-making framework developed to support valid assessment. This method is also used to explore how knowledge translation can be used to inform teaching and learning.
FINDINGS: The nature of educational research is not compatible with most (11/14) methods for systematic review. The inconsistency of systematic reviewing with the nature of educational research impedes both the identification and the implementation of ‘best-evidence’ pedagogy and teaching, primarily because research questions that do support the purposes of review do not support educational decision-making. In contrast to systematic reviews of the literature, both a Degrees of Freedom Analysis (DOFA) and knowledge translation (KT) are fully compatible with informing teaching using evidence. A DOFA supports the translation of theory to a specific teaching or learning case, so it could be considered a type of KT. The DOFA results in a test of the alignment of decision options with relevant educational theory, and KT leads to interventions in teaching or learning that can be evaluated. A knowledge-translation approach yields examples of how to structure evaluable interventions that are simply not available from a systematic review.
INSIGHTS: Systematic reviewing of current empirical educational research is not suitable for deriving or supporting best practices in education. However, both “evidence-informed” and scholarly approaches to teaching can be supported as knowledge translation projects, which are inherently evaluable and can generate actionable evidence about whether the decision or intervention worked for students, instructors, and the institution. A Degrees of Freedom Analysis can also support evidence- and theory-informed teaching to develop an understanding of what works, why, and for whom. Thus, knowledge translation, but not systematic reviewing, can support decision-making around pedagogy (and pedagogical innovation) that can also inform new teaching and learning initiatives; it can also point to new avenues of empirical research in education that are informed by, and can inform, theory
Agreement of Immunoassay and Tandem Mass Spectrometry in the Analysis of Cortisol and Free T4: Interpretation and Implications for Clinicians
Objective. To quantify differences in results obtained by immunoassays (IAs) and tandem mass spectrometry (MSMS) for cortisol and free thyroxine (FT4). Design & Patients. Cortisol was measured over 60 minutes following a standard ACTH stimulation test (n = 80); FT4 was measured over time in two cohorts of pregnant (n = 57), and nonpregnant (n = 28) women. Measurements. Samples were analyzed with both IA and MSMS. Results. Results for cortisol by the two methods tended to agree, but agreement weakened over the 60-minute test and was worse for higher (more extreme) concentrations. The results for FT4 depended on the method. IA measurements tended to agree with MSMS measurements when values fell within “normal levels”, but agreement was not constant across trimester in pregnant women and was poorest for the extreme (low/high) concentrations. Correlations between MSMS measurements and the difference between MSMS and IA results were strong and positive (0.411 < r < 0.823; all P < .05). Conclusions. IA and MSMS provide different measures of cortisol and FT4 at extreme levels, where clinical decision making requires the greatest precision. Agreement between the methods is inconsistent over time, is nonlinear, and varies with the analyte and concentrations. IA-based measurements may lead to erroneous clinical decisions
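The kind of method-agreement analysis this abstract describes can be sketched in a few lines. The data below are simulated purely for illustration (a proportional bias resembling the concentration-dependent disagreement reported above), not the study's measurements; the sample size and bias slope are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative paired measurements: the second method (IA) reads
# increasingly low at higher concentrations (proportional bias).
n = 80
msms = rng.uniform(5, 50, size=n)        # reference-method values
ia = 0.85 * msms + rng.normal(0, 1, n)   # IA with proportional bias + noise

diff = msms - ia
mean_diff = diff.mean()                  # average bias between methods
loa = 1.96 * diff.std(ddof=1)            # Bland-Altman limits of agreement

# A positive correlation between reference values and the difference
# indicates proportional (concentration-dependent) bias, the pattern
# the abstract reports for cortisol and FT4.
r = np.corrcoef(msms, diff)[0, 1]
print(f"bias={mean_diff:.2f}, LoA=+/-{loa:.2f}, r={r:.2f}")
```

With a proportional bias like this, a single mean difference understates the disagreement at the extremes, which is why the abstract emphasizes that the methods diverge most where clinical decisions need precision.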
Leveraging guidelines for ethical practice of statistics and computing to develop standards for ethical mathematical practice: A White Paper
We report the results of our NSF-funded project in which we alpha- and beta-
tested a survey comprising all aspects of the ethical practice standards from
two disciplines with relevance to mathematics, the American Statistical
Association (ASA) and Association of Computing Machinery (ACM). Items were
modified so that text such as "A computing professional should..." became, "The
ethical mathematics practitioner...". We also removed elements that were
duplicates or were deemed unlikely to be considered relevant to mathematical
practice even after modification. Starting with more than 100 items, plus 10
demographic questions, the final survey included 52 items (plus demographics),
and 142 individuals responded to our invitations (through listservs and other
widespread emails and announcements) to participate in this 30-minute survey.
This white paper reports the project methods and findings regarding the
community perspective on the 52 items, specifically, which rise to the level of
ethical obligations, which do not meet this level, and what is missing from
this list of elements of ethical mathematical practice. The results suggest
that the community of mathematicians perceives a much wider range of behaviors
to be subject to ethical practice standards than is currently represented.
Comment: 48 pages including 6 Tables, plus Appendix
Willingness to Share Research Data Is Related to the Strength of the Evidence and the Quality of Reporting of Statistical Results
Background The widespread reluctance to share published research data is often hypothesized to be due to the authors' fear that reanalysis may expose errors in their work or may produce conclusions that contradict their own. However, these hypotheses have not previously been studied systematically. Methods and Findings We related the reluctance to share research data for reanalysis to 1148 statistically significant results reported in 49 papers published in two major psychology journals. We found the reluctance to share data to be associated with weaker evidence (against the null hypothesis of no effect) and a higher prevalence of apparent errors in the reporting of statistical results. The unwillingness to share data was particularly clear when reporting errors had a bearing on statistical significance. Conclusions Our findings on the basis of psychological papers suggest that statistical results are particularly hard to verify when reanalysis is more likely to lead to contrasting conclusions. This highlights the importance of establishing mandatory data archiving policies
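The "apparent errors in the reporting of statistical results" in this study were detected by recomputing p-values from the reported test statistics and comparing them with the reported p-values. A minimal version of that consistency check, in the spirit of tools such as statcheck, can be sketched for a reported z statistic; this is an illustration, not the authors' code, and the tolerance is an assumption.

```python
from statistics import NormalDist

def recomputed_p(z: float) -> float:
    """Two-sided p-value implied by a reported z statistic."""
    return 2 * (1 - NormalDist().cdf(abs(z)))

def consistent(z: float, reported_p: float, tol: float = 0.01) -> bool:
    """Flag reported p-values that do not match the test statistic."""
    return abs(recomputed_p(z) - reported_p) <= tol

# A reported "z = 2.20, p = .028" checks out; "z = 1.85, p = .03"
# would be flagged (the actual two-sided p is about .064).
print(consistent(2.20, 0.028))
print(consistent(1.85, 0.03))
```

The same idea extends to t and F statistics given their degrees of freedom; mismatches that flip a result across the .05 threshold are the "errors bearing on statistical significance" the study highlights.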