26 research outputs found
Open science and educational research: an editorial commentary
Educational technology, as a broad and applied interdisciplinary research field, faces challenges in
achieving consensus on what constitutes good-quality research. Because the field is embedded in many
other disciplines, views on what evidence matters and on the optimal methodologies for conducting
inquiry are continually evolving and maturing. Inhabiting a boundary between education and computer
science, and viewed through numerous theoretical lenses drawn from sociology, politics, psychology,
learning systems, curriculum development, the digital humanities, and beyond, the field draws on a
vast number of approaches. The validity, trustworthiness and integrity of over two decades of research
in this domain are continually questioned. Furthermore, as technology itself changes, opinions differ
on how best to explore and understand the role it plays in education. How we define, research and
evaluate our evidence is central to our understanding of how we learn and how learning is enhanced
with and through technology in various ways. Whilst scholars continue to critique and debate the
veracity of findings, educational technology journals play an important role in allowing us to
collectively peer review and publish the best-quality research studies. Changes in the open access
publishing world and in the open science movement have the potential to address some of the shortfalls
in how our understandings are evaluated, critiqued and judged in this domain.
University of Vermont Community Tobacco Use and Attitudes Survey
Introduction: Smoking remains an important public health issue at U.S. colleges. 17.3% of U.S. smokers are 18-24 years old, and 28% of U.S. college students began smoking at age 19 or older. Currently, 1,104 U.S. colleges have adopted tobacco-free policies.
Editorial: Ireland’s online learning call
The editorial board of the Irish Journal of Technology Enhanced Learning (IJTEL) would like to take this
opportunity to thank each and every one of you for working through a very challenging time over the past
twelve months of the pandemic. It is a significant event, a critical incident, that will take some time to
document and reflect upon in future journal editions.
So many words have already been written about this past year that try to capture the disruption and
change. However, to summarise even a scintilla of what has happened across Irish higher education is a
slightly daunting prospect. We have seen various terms used to describe the rapid shift to teaching and
learning online, such as milestone, pivot, and emergency remote teaching. None of these fully encompasses the
myriad ways in which those of us working in education have had to become resilient, responsive, and
supportive of colleagues during this period.
Considering the response from members of the educational technology community within Ireland, one
could argue that the term overwhelming is a good starting point. For a start, a tsunami of work ensued
that at times threatened to engulf individuals. Education ‘pivoted’ from a position where online was
generally a supplementary or complementary activity to one where, in an online mode, we became the
campus. Systems and processes were hastily altered, modified or expanded far beyond anybody’s
expectations. While some of those have creaked and groaned, we have managed to teach classes, run
meetings and carry out assessments; run on-campus labs and socially distanced teaching; in short, we have
kept going. People have been inventive, innovative and extremely hard-working. But above all else, they
have been generous; generous with their time, their expertise and generous in spirit.
Editorial: There's an AI for that
In recent months, educators and higher education institutions have responded with concern, critique, and hope to the unregulated and mounting influence of generative artificial intelligence (AI). Following the period of emergency remote teaching and the great ‘snapback’ (Jandrić et al., 2022), yet another concern has emerged, promising to revolutionise education or to threaten its existence. The gravity of the situation has reverberated across the system, as the wizardry of predictive pattern recognition fundamentally threatens the validity of long-held practices of summative assessment, including essays and online quizzes. This latest crisis shows no sign of abating as venture capitalist funding and language-modelling datasets grow, and as the technology becomes more deeply integrated into the word processing and cloud-based applications through which much of our academic labour is conducted. Understanding and conceptualising new technology within education has long been a necessity, as we wrestle with wrangling tools into our human interactions. Higher education’s relationship with edtech has always been characterised by a cyclical response to the disruptive external influences of evolving technology, whose recent developments are often underpinned by neoliberal values of competition, efficiency, market-based solutions, and the privatisation of software platforms. Recent large language model developments are proving no different, with deregulation and the free market serving as the impetus to design and create such tools. Across higher education, educators scramble to decode the GenAI black box, deciphering hallucinations, confabulations, and smooth outputs indistinguishable from original student work. Policy responses range along a continuum from ban to embrace. New AI literacies are being woven into curricula as change continues apace. 2023 marks a year of existential crisis precipitated by a global pandemic, followed by geopolitical events and fatigue from continual adaptation to a new normal. Even from within, we are constantly shaping our educational systems. That pull is in many different directions - to accredit, to certify, to help learners become, to socialise, to emancipate, to measure - to meet very diverse purposes and aims. The politics and power structures inherent in our system further affect our response (Kuhn et al., 2023). While the potential of AI chatbots based on natural language processing models is undeniable, it is crucial to discern the reality from the hype and to better understand how our actions and responses are shaping our educational systems in this evolving domain. This editorial examines this dilemma further, considering the impact on our scholarship of teaching and learning and how we, as a community of researchers and educators, respond.
Protocol: Barriers and facilitators to stakeholder engagement in health guideline development: a qualitative evidence synthesis
Background There is a need for the development of comprehensive, global, evidence-based guidance for stakeholder engagement in guideline development. Stakeholders are any individual or group who is responsible for or affected by health- and healthcare-related decisions; this includes, for example, patients, the public, providers of health care and policymakers. As part of the guidance development process, the Multi-Stakeholder Engagement (MuSE) Consortium set out to conduct four concurrent systematic reviews to summarise the evidence on: (1) existing guidance for stakeholder engagement in guideline development, (2) barriers and facilitators to stakeholder engagement in guideline development, (3) managing conflicts of interest in stakeholder engagement in guideline development and (4) measuring the impact of stakeholder engagement in guideline development. This protocol addresses the second systematic review in the series. Objectives The objective of this review is to identify and synthesise the existing evidence on barriers and facilitators to stakeholder engagement in health guideline development. We will address this objective through two research questions: (1) What are the barriers to multi-stakeholder engagement in health guideline development across any of the 18 steps of the GIN-McMaster checklist? (2) What are the facilitators to multi-stakeholder engagement in health guideline development across any of the 18 steps of the GIN-McMaster checklist? Search Methods A comprehensive search strategy will be developed and peer-reviewed in consultation with a medical librarian. We will search the following databases: MEDLINE, Cumulative Index to Nursing & Allied Health Literature (CINAHL), EMBASE, PsycInfo, Scopus, and Sociological Abstracts. To identify grey literature, we will search the websites of agencies that actively engage stakeholder groups, such as the AHRQ, the Canadian Institutes of Health Research (CIHR) Strategy for Patient-Oriented Research (SPOR), INVOLVE, the National Institute for Health and Care Excellence (NICE) and the PCORI. We will also search the websites of guideline-producing agencies, such as the American Academy of Pediatrics, Australia's National Health and Medical Research Council (NHMRC) and the WHO. We will invite members of the team to suggest grey literature sources, and we plan to broaden the search by soliciting suggestions via social media, such as Twitter. Selection Criteria We will include empirical qualitative and mixed-method primary research studies that qualitatively report on the barriers or facilitators to stakeholder engagement in health guideline development. The population of interest is stakeholders in health guideline development. Building on previous work, we have identified 13 types of stakeholders whose input can enhance the relevance and uptake of guidelines: Patients, caregivers and patient advocates; Public; Providers of health care; Payers of health services; Payers of research; Policy makers; Program managers; Product makers; Purchasers; Principal investigators and their research teams; and Peer-review editors/publishers. Eligible studies must describe stakeholder engagement at any of the 18 steps of the GIN-McMaster Checklist for Guideline Development. Data Collection and Analysis All identified citations from electronic databases will be imported into Covidence software for screening and selection. Documents identified through our grey literature search will be managed and screened using an Excel spreadsheet.
A two-part study selection process will be used for all identified citations: (1) a title and abstract review and (2) a full-text review. At each stage, teams of two review authors will independently assess all potential studies in duplicate using a priori inclusion and exclusion criteria. Data will be extracted by two review authors independently and in duplicate according to a standardised data extraction form. Main Results The results of this review will be used to inform the development of guidance for multi-stakeholder engagement in guideline development and implementation. This guidance will be official GRADE (Grading of Recommendations Assessment, Development and Evaluation) Working Group guidance. The GRADE system is internationally recognised as a standard for guideline development. The findings of this review will assist organisations that develop healthcare, public health and health policy guidelines, such as the World Health Organization, to involve multiple stakeholders in the guideline development process to ensure the development of relevant, high-quality and transparent guidelines.
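The duplicate, independent screening described in this protocol is often monitored by quantifying how well the two reviewers agree before conflicts are resolved. The short Python sketch below is purely illustrative and not part of the protocol; Cohen's kappa is one common agreement statistic, and the screening decisions shown are hypothetical.

# Illustrative only -- not part of the protocol. Cohen's kappa measures
# agreement between two reviewers' include/exclude decisions beyond chance.
from collections import Counter

def cohens_kappa(reviewer_a, reviewer_b):
    """Cohen's kappa for two parallel sequences of screening decisions."""
    n = len(reviewer_a)
    observed = sum(a == b for a, b in zip(reviewer_a, reviewer_b)) / n
    counts_a, counts_b = Counter(reviewer_a), Counter(reviewer_b)
    expected = sum(counts_a[k] * counts_b.get(k, 0) for k in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical title-and-abstract decisions (1 = include, 0 = exclude)
a = [1, 0, 0, 1, 1, 0, 0, 0, 1, 0]
b = [1, 0, 1, 1, 0, 0, 0, 0, 1, 0]
print(f"kappa = {cohens_kappa(a, b):.2f}")  # prints kappa = 0.58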
High density genetic mapping identifies new susceptibility loci for rheumatoid arthritis
Summary Using the Immunochip custom single nucleotide polymorphism (SNP) array, designed for dense genotyping of 186 genome-wide association study (GWAS) confirmed loci, we analysed 11,475 rheumatoid arthritis cases of European ancestry and 15,870 controls for 129,464 markers. The data were combined in a meta-analysis with GWAS data from additional independent cases (n=2,363) and controls (n=17,872). We identified fourteen novel loci; nine were associated with rheumatoid arthritis overall and five specifically in anti-citrullinated peptide antibody-positive disease, bringing the number of confirmed European-ancestry rheumatoid arthritis loci to 46. We refined the peak of association to a single gene for 19 loci, identified secondary independent effects at six loci and association to low-frequency variants (minor allele frequency <0.05) at four loci. Bioinformatic analysis of the data generated strong hypotheses for the causal SNP at seven loci. This study illustrates the advantages of dense SNP mapping analysis to inform subsequent functional investigations.
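The core statistical step behind a dense case-control SNP scan of this kind can be illustrated with a simple per-marker allelic association test. The Python sketch below is not the authors' pipeline; the function name and allele counts are hypothetical, chosen only to echo the case and control sample sizes quoted above.

# Illustrative sketch only: a per-marker 2x2 allelic association test,
# the basic building block of a case-control SNP scan. Counts are invented.
import numpy as np
from scipy.stats import chi2_contingency

def allelic_association(case_alt, case_ref, ctrl_alt, ctrl_ref):
    """Return the allelic odds ratio and chi-square p-value for one SNP."""
    table = np.array([[case_alt, case_ref],
                      [ctrl_alt, ctrl_ref]], dtype=float)
    chi2, p, _, _ = chi2_contingency(table, correction=False)
    odds_ratio = (case_alt * ctrl_ref) / (case_ref * ctrl_alt)
    return odds_ratio, p

# Hypothetical allele counts for one marker (two alleles per person,
# roughly matching the 11,475 cases and 15,870 controls analysed above)
or_, p = allelic_association(case_alt=5200, case_ref=17750,
                             ctrl_alt=6100, ctrl_ref=25640)
print(f"OR = {or_:.2f}, p = {p:.2e}")

In a real scan this test (or a logistic-regression equivalent with covariates) is repeated across every marker, and per-study effect estimates are then pooled in a meta-analysis.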
CfA3: 185 Type Ia Supernova Light Curves from the CfA
We present multi-band photometry of 185 type Ia supernovae (SN Ia), with over 11,500 observations. These were acquired between 2001 and 2008 at the F. L. Whipple Observatory of the Harvard-Smithsonian Center for Astrophysics (CfA). This sample contains the largest number of homogeneously observed and reduced nearby SN Ia (z < 0.08) published to date. It more than doubles the nearby sample, bringing SN Ia cosmology to the point where systematic uncertainties dominate. Our natural-system photometry has a precision of 0.02 mag or better in BVRIr'i' and roughly 0.04 mag in U for points brighter than 17.5 mag. We also estimate a systematic uncertainty of 0.03 mag in our SN Ia standard-system BVRIr'i' photometry and 0.07 mag for U. Comparisons of our standard-system photometry with published SN Ia light curves and comparison stars, where available for the same SN, reveal agreement at the level of a few hundredths of a magnitude in most cases. We find that 1991bg-like SN Ia are sufficiently distinct from other SN Ia in their color and light-curve-shape/luminosity relation that they should be treated separately in light-curve/distance fitter training samples. The CfA3 sample will contribute to the development of better light-curve/distance fitters, particularly in the few dozen cases where near-infrared photometry has been obtained and, together, can help disentangle host-galaxy reddening from intrinsic supernova color, reducing the systematic uncertainty in SN Ia distances due to dust.
Comment: Accepted to the Astrophysical Journal. Minor changes from last version. Light curves, comparison star photometry, and passband tables are available at http://www.cfa.harvard.edu/supernova/CfA3
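As a quick worked illustration of how the quoted error terms combine, the short Python sketch below adds a per-point statistical precision and a systematic uncertainty in quadrature, the conventional way to form a total photometric error budget. It simply reuses the magnitudes quoted above; it is not a calculation from the paper.

# Minimal sketch: combine statistical and systematic magnitude errors
# in quadrature (illustrative, using values quoted in the abstract).
import math

def total_mag_uncertainty(stat_mag, sys_mag):
    """Quadrature sum of statistical and systematic magnitude errors."""
    return math.sqrt(stat_mag**2 + sys_mag**2)

stat = 0.02  # mag, per-point precision quoted for BVRIr'i' (natural system)
sys_ = 0.03  # mag, estimated systematic uncertainty (standard system)
print(f"combined uncertainty ~ {total_mag_uncertainty(stat, sys_):.3f} mag")  # ~0.036 mag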
High-density genetic mapping identifies new susceptibility loci for rheumatoid arthritis.
Using the Immunochip custom SNP array, which was designed for dense genotyping of 186 loci identified through genome-wide association studies (GWAS), we analyzed 11,475 individuals with rheumatoid arthritis (cases) of European ancestry and 15,870 controls for 129,464 markers. We combined these data in a meta-analysis with GWAS data from additional independent cases (n = 2,363) and controls (n = 17,872). We identified 14 new susceptibility loci, 9 of which were associated with rheumatoid arthritis overall and 5 of which were specifically associated with disease that was positive for anticitrullinated peptide antibodies, bringing the number of confirmed rheumatoid arthritis risk loci in individuals of European ancestry to 46. We refined the peak of association to a single gene for 19 loci, identified secondary independent effects at 6 loci and identified association to low-frequency variants at 4 loci. Bioinformatic analyses generated strong hypotheses for the causal SNP at seven loci. This study illustrates the advantages of dense SNP mapping analysis to inform subsequent functional investigations.