41 research outputs found
Interventions and assessment tools addressing key concepts people need to know to appraise claims about treatment effects: a systematic mapping review
Overview of all included studies (XLSX, 133 kb)
Developing digital contact tracing tailored to haulage in East Africa to support COVID-19 surveillance: a protocol
Introduction: At the peak of Uganda's first wave of SARS-CoV-2 in May 2020, one in three COVID-19 cases was linked to the haulage sector. This triggered a mandatory requirement for a negative PCR test result at all ports of entry and exit, resulting in significant delays as haulage drivers had to wait 24–48 hours for results, which severely crippled the regional supply chain. To support public health and economic recovery, we aim to develop and test a mobile phone-based digital contact tracing (DCT) tool that both augments conventional contact tracing and increases its speed and efficiency. Methods and analysis: To test the DCT tool, we will use a sample of haulage driver journeys stratified by route type (regional and local journeys). We will include at least 65% of the ~83 200 haulage driver journeys on the network through Uganda. This allows us to capture variations in user demographics and socioeconomic characteristics that could influence the use and adoption of the DCT tool. The DCT tool will include a mobile application and a web interface to collate and intelligently process data, whose output will support decision-making and resource allocation and feed mathematical models that predict epidemic waves. The main expected result will be an open-source, tested DCT tool tailored to haulage use in developing countries. This study will inform the safe deployment of DCT technologies needed for combatting pandemics in low-income countries. Ethics and dissemination: This work has received ethics approval from the School of Public Health Higher Degrees, Research and Ethics Committee at Makerere University and the Uganda National Council for Science and Technology. This work will be disseminated through peer-reviewed publications, our website https://project-thea.org/ and the open-source code on GitHub at https://github.com/project-thea/
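The stratified sampling design described in the protocol can be sketched as follows. This is a minimal illustration, not the study's actual sampling procedure: the `route` field name, the per-stratum fraction, and the data layout are all assumptions made for the example.

```python
import random

def stratified_sample(journeys, key, fraction, seed=0):
    # Group journeys into strata by the given key (e.g. route type),
    # then draw the same fraction from each stratum so that both
    # regional and local journeys are represented in the sample.
    rng = random.Random(seed)
    strata = {}
    for journey in journeys:
        strata.setdefault(journey[key], []).append(journey)
    sample = []
    for group in strata.values():
        k = max(1, round(fraction * len(group)))
        sample.extend(rng.sample(group, k))
    return sample
```

Sampling a fixed fraction per stratum, rather than from the pooled list, guarantees that a small stratum (e.g. local journeys) is not crowded out by a large one.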
Developing digital contact tracing tailored to haulage in East Africa to support COVID-19 surveillance
Prioritising Informed Health Choices Key Concepts for those impacted by cancer: a protocol [version 1; peer review: 2 approved]
Background: Few areas of health have been as insidiously influenced by misinformation as cancer. Thus, interventions that can help people impacted by cancer reduce the extent to which they are victims of misinformation are necessary. The Informed Health Choices (IHC) initiative has developed Key Concepts that can be used in the development of interventions for evaluating the trustworthiness of claims about the effects of health treatments. We are developing an online education programme called Informed Health Choices-Cancer (IHC-C) based on the IHC Key Concepts. We will provide those impacted by cancer with the knowledge and skills necessary to think critically about the reliability of health information and claims and make informed choices. Methods: We will establish a steering group (SG) of 12 key stakeholders, including oncology specialists and academics. In addition, we will establish a patient and public involvement (PPI) panel of 20 people impacted by cancer. After training the members on the Key Concepts and the prioritisation process, we will conduct a two-round prioritisation process. In the first round, 12 SG members and four PPI panel members will prioritise Key Concepts for inclusion. In the second round, the remaining 16 PPI members will undertake the prioritisation based on the prioritised Key Concepts from the first round. Participants in both rounds will use a structured judgement form to rate the importance of the Key Concepts for inclusion in the online IHC-C programme. A consensus meeting will be held, where members will reach a consensus on the Key Concepts to be included and rank the order in which the prioritised Key Concepts will be addressed in the IHC-C programme. Conclusions: At the end of this process, we will identify which Key Concepts should be included and the order in which they should be addressed in the IHC-C programme
Establishing a library of resources to help people understand key concepts in assessing treatment claims: The 'Critical thinking and Appraisal Resource Library' (CARL)
Background
People are frequently confronted with untrustworthy claims about the effects of treatments. Uncritical acceptance of these claims can lead to poor, and sometimes dangerous, treatment decisions, and wasted time and money. Resources to help people learn to think critically about treatment claims are scarce, and they are widely scattered. Furthermore, very few learning-resources have been assessed to see if they improve knowledge and behavior.
Objectives
Our objective was to develop the Critical thinking and Appraisal Resource Library (CARL). This library was to take the form of a database of learning resources for those responsible for encouraging critical thinking about treatment claims, and was to be made available online. We wished to include resources for groups we identified as 'intermediaries' of knowledge, i.e. teachers of schoolchildren, undergraduates and graduates, for example those teaching evidence-based medicine, or those communicating treatment claims to the public. In selecting resources, we wished to draw particular attention to those that had been formally evaluated, for example, by the creators of the resource or by independent research groups.
Methods
CARL was populated with learning-resources identified from a variety of sources: two previously developed but unmaintained inventories; systematic reviews of learning-interventions; online and database searches; and recommendations by members of the project group and its advisors. The learning-resources in CARL were organised by the 'Key Concepts' needed to judge the trustworthiness of treatment claims, and were made available online by the James Lind Initiative in Testing Treatments interactive (TTi) English (www.testingtreatments.org/category/learning-resources). TTi English also incorporated the database of Key Concepts and the Claim Evaluation Tools developed through the Informed Healthcare Choices (IHC) project (informedhealthchoices.org).
Results
We have created a database of resources called CARL, which currently contains over 500 open-access learning-resources in a variety of formats: text, audio, video, webpages, cartoons, and lesson materials. These are aimed primarily at 'intermediaries', that is, 'teachers', 'communicators', 'advisors' and 'researchers', as well as at independent 'learners'. The resources included in CARL are currently accessible at www.testingtreatments.org/category/learning-resources
Conclusions
We hope that ready access to CARL will help to promote the critical thinking about treatment claims that is needed to improve healthcare choices
Reflections on experiences in doctoral training and its contribution to knowledge translation in an African environment
This report focuses on capacity-building efforts in knowledge translation (KT) in Africa. Translating research into policy is an ongoing challenge. Five doctoral students registered in the Makerere University PhD program in Uganda contributed their perspectives through focus group discussions and interviews. This detailed report discusses the challenges faced by young researchers. All of the student respondents gained skills related to research (such as qualitative and quantitative methodologies) and/or KT (writing policy briefs, plain language summaries and blogs, and facilitating stakeholder dialogues) and recognized these as major achievements of their training
Measuring ability to assess claims about treatment effects: a latent trait analysis of items from the 'Claim Evaluation Tools' database using Rasch modelling
Background: The Claim Evaluation Tools database contains multiple-choice items for measuring people's ability to apply the key concepts they need to know to be able to assess treatment claims. We assessed items from the database using Rasch analysis to develop an outcome measure to be used in two randomised trials in Uganda. Rasch analysis is a form of psychometric testing relying on Item Response Theory. It is a dynamic way of developing outcome measures that are valid and reliable.
Objectives: To assess the validity, reliability and responsiveness of 88 items addressing 22 key concepts using Rasch analysis.
Participants: We administered four sets of multiple-choice items in English to 1114 people in Uganda and Norway, of whom 685 were children and 429 were adults (including 171 health professionals). We scored all items dichotomously. We explored summary and individual fit statistics using the RUMM2030 analysis package. We used SPSS to perform distractor analysis.
Results: Most items conformed well to the Rasch model, but some items needed revision. Overall, the four item sets had satisfactory reliability. We did not identify significant response dependence between any pairs of items and, overall, the magnitude of multidimensionality in the data was acceptable. The items had a high level of difficulty.
Conclusion: Most of the items conformed well to the Rasch model's expectations. Following revision of some items, we concluded that most of the items were suitable for use in an outcome measure for evaluating the ability of children or adults to assess treatment claims
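The dichotomous Rasch model underlying this analysis can be stated compactly: the probability that a person of ability theta answers an item of difficulty b correctly is 1 / (1 + exp(-(theta - b))). A minimal sketch of that formula follows; it is an illustration of the model, not the RUMM2030 implementation used in the study.

```python
import math

def rasch_prob(theta, b):
    # Probability that a person with ability theta answers an item
    # of difficulty b correctly under the dichotomous Rasch model.
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def expected_score(theta, difficulties):
    # Expected total score for one person across a set of items.
    return sum(rasch_prob(theta, b) for b in difficulties)
```

When ability equals difficulty the probability is exactly 0.5, which is why "the items had a high level of difficulty" translates to respondents scoring below half marks on average.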
Measuring ability to assess claims about treatment effects: the development of the 'Claim Evaluation Tools'
Objectives: To describe the development of the Claim Evaluation Tools, a set of flexible items to measure people's ability to assess claims about treatment effects.
Setting: Methodologists and members of the community (including children) in Uganda, Rwanda, Kenya, Norway, the UK and Australia.
Participants: In the iterative development of the items, we used purposeful sampling of people with training in research methodology, such as teachers of evidence-based medicine, as well as patients and members of the public from low-income and high-income countries. Development consisted of 4 processes: (1) determining the scope of the Claim Evaluation Tools and development of items; (2) expert item review and feedback (n=63); (3) cognitive interviews with children and adult end-users (n=109); and (4) piloting and administrative tests (n=956).
Results: The Claim Evaluation Tools database currently includes a battery of multiple-choice items. Each item begins with a scenario which is intended to be relevant across contexts, and which can be used for children (from age 10 and above), adult members of the public and health professionals. People with expertise in research methods judged the items to have face validity, and end-users judged them relevant and acceptable in their settings. In response to feedback from methodologists and end-users, we simplified some text, explained terms where needed, and redesigned formats and instructions.
Conclusions: The Claim Evaluation Tools database is a flexible resource from which researchers, teachers and others can design measurement instruments to meet their own requirements. These evaluation tools are being managed and made freely available for non-commercial use (on request) through Testing Treatments interactive (testingtreatments.org)
What should the standard be for passing and mastery on the Critical Thinking about Health Test? A consensus study
Objective: Most health literacy measures rely on subjective self-assessment. The Critical Thinking about Health Test is an objective measure that includes two multiple-choice questions (MCQs) for each of the nine Informed Health Choices Key Concepts included in the educational resources for secondary schools. The objective of this study was to determine cut-off scores for passing (the border between having and not having a basic understanding of, and the ability to apply, the nine concepts) and mastery (the border between having and not having mastered them). Design: Using a combination of two widely used methods, Angoff's and Nedelsky's, a panel judged the likelihood that an individual on the border of passing, and another on the border of mastery, would answer each MCQ correctly. The cut-off scores were determined by summing the probabilities of answering each MCQ correctly. The panel members' independent assessments were summarised and discussed, and a nominal group technique was used to reach a consensus. Setting: The study was conducted in secondary schools in East Africa. Participants: The panel included eight individuals with five or more years' experience in the following areas: evaluation of critical thinking interventions, curriculum development, teaching of lower secondary school and evidence-informed decision-making. Results: The panel agreed that for a passing score, students had to answer 9 of the 18 questions correctly, and for a mastery score, 14 of the 18. Conclusion: There was wide variation in the judgements made by individual panel members for many of the questions, but the panel quickly reached a consensus on the cut-off scores after discussion
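The arithmetic behind an Angoff-style cut-off (each judge estimates, per question, the probability that a borderline test-taker answers correctly; the per-judge sums are then combined) can be sketched as follows. Averaging across judges is one common convention; the study instead reconciled judges through discussion and a nominal group technique, and all probability values below are illustrative.

```python
def angoff_cutoff(judgements):
    # judgements: one list per judge, each holding that judge's
    # estimated probability that a borderline test-taker answers
    # each MCQ correctly (hypothetical values for illustration).
    per_judge_totals = [sum(item_probs) for item_probs in judgements]
    # Combine judges by averaging their summed probabilities.
    return sum(per_judge_totals) / len(per_judge_totals)
```

For example, if every judge rated all 18 questions at 0.5 for a borderline passer, the cut-off would be 9 of 18, matching the passing standard the panel agreed on.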