Making collapsing pulse user-friendly
Collapsing pulse is generally elicited by elevating the patient’s arm. However, the pulse becoming stronger on arm elevation is a physiological phenomenon, which is bound to create confusion if lifting the arm in search of a collapsing pulse is practiced routinely. The name ‘collapsing pulse’ represents only the second component of this sign. It masks the more important first component: the slapping, bounding upstroke, characterised by its other name, ‘water-hammer pulse’. It is possible to elicit this sign by appreciating the slapping character on routine pulse examination. The insistence on arm lifting in medical school teaching is better avoided.
Very short answer questions : a viable alternative to multiple choice questions
Background: Multiple choice questions, used in medical school assessments for decades, have many drawbacks:
they are hard to construct, allow guessing, encourage test-wiseness, promote rote learning, provide no opportunity for examinees to express ideas, and give no information about the strengths and weaknesses of candidates.
Directly asked, directly answered questions such as Very Short Answer Questions (VSAQ) are considered a better
alternative with several advantages.
Objectives: This study aims to compare student performance in MCQ and VSAQ and obtain feedback from the stakeholders.
Methods: We conducted multiple true-false, one best answer, and VSAQ tests in two batches of medical students,
compared their scores and the psychometric indices of the tests, and sought opinions from students and academics
regarding these assessment methods.
Results: Multiple true-false and best answer test scores showed skewed results and low psychometric performance
compared to better psychometrics and more balanced student performance in VSAQ tests. The stakeholders’
opinions were significantly in favour of VSAQ.
Conclusion and recommendation: This study concludes that VSAQ is a viable alternative to multiple-choice
question tests and is widely accepted by medical students and academics in the medical faculty.
From Item Analysis to Assessment Analysis: Introducing New Formulae
Item analysis of individual multiple choice questions has been widely used for several decades, but formulae
for the analysis of manually-marked assessments are lacking. Evaluation and comparison of such assessments used
in medical schools remain guesswork. In this study we introduce new formulae, aligned with the item
analysis formulae, which can be used to analyse all assessment methods. While the existing formulae use a
binary (pass/fail) criterion, the new formulae use actual scores. While the existing formulae use the
scores of a single assessment to rank the candidates and determine the high and low scorers for calculating the
discrimination index (DISi), the new formulae use the grand total scores of the entire examination. It is
claimed that the new formulae, utilising actual scores, make the indices more realistic. Eight
examinations, each comprising nine assessments, were used to validate this claim. The DISi and Difficulty
Index (DIFi) were calculated using the binary formulae and the new formulae, and comparisons were made using
parametric and non-parametric tests. The ensuing positive correlation of indices indicated that the new formulae
are feasible, realistic and easy to apply. More replicates are required to prove their validity and reliability.
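The classical binary indices mentioned above can be sketched in a few lines. The code below is an illustration only: it implements the conventional difficulty index (proportion correct) and the conventional upper-minus-lower discrimination index, with a `ranking_scores` parameter that can be fed either the single-test scores (the classical method) or the examination grand totals (the ranking change the abstract describes). The 27% group fraction is a common convention, and the authors' actual-score formulae themselves are not reproduced here.

```python
# Illustrative sketch of classical item-analysis indices (binary marking).
# The 27% top/bottom group convention and the example data are assumptions
# for illustration; they are not taken from the study.

def difficulty_index(item_correct):
    """DIFi: proportion of candidates who answered the item correctly."""
    return sum(item_correct) / len(item_correct)

def discrimination_index(item_correct, ranking_scores, frac=0.27):
    """DISi: proportion correct in the top group minus the bottom group.

    ranking_scores: the scores used to rank candidates. Classically these
    are the scores of the same test; the paper proposes ranking by the
    grand total of the entire examination instead.
    """
    order = sorted(range(len(ranking_scores)),
                   key=lambda i: ranking_scores[i], reverse=True)
    k = max(1, int(len(order) * frac))
    top = sum(item_correct[i] for i in order[:k]) / k
    bottom = sum(item_correct[i] for i in order[-k:]) / k
    return top - bottom

item = [1, 1, 1, 0, 1, 0, 1, 0, 0, 0]                # one item, 10 candidates
grand_totals = [92, 88, 85, 80, 76, 70, 66, 60, 55, 50]
print(difficulty_index(item))                        # 0.5
print(discrimination_index(item, grand_totals))      # 1.0
```

Passing grand totals instead of single-test scores changes only the ranking used to form the high- and low-scorer groups, which is precisely the substitution the new DISi formula makes.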
Rejuvenating multiple true–false: Proposing fairer scoring methods
Introduction: Multiple true–false tests (MTF) with penalty
scoring consistently delivered low scores and many failures
for over two decades in our medical faculty. This issue
remained unaddressed, as the overall student performance
was redeemed by other assessments like Best Answer
Questions and Modified Essay Questions. The post-test item
analyses revealed that there were several items with
unacceptable difficulty index and discrimination index,
many omissions, and that the false options performed worse
than the true options in the difficulty index but better in the
discrimination index. This study aimed to evaluate some
final professional examination MTF papers to propose
possible remedial measures.
Materials and Methods: We examined 5 years’ final
professional examination MTF results, their item analysis,
the student performance in true and false items and failure
rates. We explored the impact of excluding the flawed
questions post-test based on item analysis and redoing the
scores. We also explored the effect of removing the penalty
scoring and recalculating the scores.
Results: The two new scoring methods, post-weeding
recalculation and no-penalty proportionate scoring,
showed remarkable improvement in scores and also
reduced the failure rates significantly compared to the
penalty-scoring model.
Conclusion: We propose two new scoring methods for MTF,
which would be fairer to the students and would have the
prospect of rejuvenating MTF tests.
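The contrast between penalty and no-penalty proportionate scoring can be sketched as below. The abstract does not give the exact marking schemes, so the +1/−1 penalty rule and the simple proportion-correct rule are assumptions chosen to illustrate why removing the penalty raises scores: wrong answers stop subtracting marks, and omissions cost nothing beyond the missed mark.

```python
# Hedged sketch of two MTF marking schemes. The +1/-1 penalty rule and
# proportion-correct rule are assumed for illustration; the study's exact
# schemes (including any floor at zero) are not specified in the abstract.

def penalty_score(responses, key):
    """+1 per correct response, -1 per wrong response, 0 per omission.

    Omissions are represented as None.
    """
    score = 0
    for r, k in zip(responses, key):
        if r is None:
            continue
        score += 1 if r == k else -1
    return score

def no_penalty_score(responses, key):
    """No-penalty proportionate scoring: fraction of sub-items answered
    correctly; wrong answers and omissions simply earn nothing."""
    correct = sum(1 for r, k in zip(responses, key)
                  if r is not None and r == k)
    return correct / len(key)

key = [True, False, True, True, False]
responses = [True, True, None, True, False]   # 3 correct, 1 wrong, 1 omitted
print(penalty_score(responses, key))          # 2  (3 marks gained, 1 lost)
print(no_penalty_score(responses, key))       # 0.6
```

On the same answer sheet the penalty model yields 2/5 while the proportionate model yields 3/5, which mirrors the score improvement and reduced failure rates the study reports.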
How to grade items for a question bank and rank tests based on student performance
Introduction: Not utilising post-test psychometric analyses of questions and not maintaining a question bank seemed to adversely affect the quality of tests and increase the workload of academicians, as they are required to write fresh questions for every examination. The literature review did not reveal any gold-standard method for creating a question bank. This study formulated criteria for recruiting multiple choice questions into a question bank and introduced a formula to rank whole tests.
Methods: We collected used question papers of multiple true-false questions (MTF) and one best answer questions (BAQ) and had two experienced academicians scrutinise them and identify items with flaws. The flawless items were counted in each test and their test performance index (TPi) determined. The psychometric item analysis reports of the tests were analysed to enlist bankable items, and the TPi of the tests was also calculated by this method. The TPi derived from expert opinions were compared with those obtained by objective criteria using the Pearson correlation coefficient and Spearman’s rho.
Results: Judgements by the two experts showed a positive correlation, as did expert judgements against the objective formulae. Omission rates in MTF items showed a highly significant negative correlation with the difficulty index, falling short of a perfect -1, which supported including the omission index in the triple formula. The mean number of functioning distractors (FD) per BAQ item was 1.87 (SD 1.14), which supported requiring ≥2 FD per item in the triple formula.
Conclusion: Expert judgement in question vetting is essential. However, objective post-test scrutiny of items, using the difficulty and discrimination indices enhanced with the omission index and distractor efficiency, is also required to recruit items for question banks. The Test Performance Index will be a useful metric to rank tests.
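An objective bankability screen of the kind described above could be sketched as follows. The 5% selection threshold for a functioning distractor is a widely used convention, and the DIFi/DISi cut-offs below are common rules of thumb; none of these values is taken from the study, whose exact criteria are not given in the abstract.

```python
# Hedged sketch of objective item screening for a question bank. The 5%
# functioning-distractor convention and the DIFi/DISi cut-offs are common
# rules of thumb, not necessarily the study's exact criteria.

def functioning_distractors(option_counts, key_option, threshold=0.05):
    """Count distractors chosen by at least `threshold` of candidates."""
    total = sum(option_counts.values())
    return sum(1 for opt, n in option_counts.items()
               if opt != key_option and n / total >= threshold)

def is_bankable(difi, disi, n_functioning,
                difi_range=(0.3, 0.7), min_disi=0.2, min_fd=2):
    """Assumed screening rule: acceptable difficulty, adequate
    discrimination, and at least two functioning distractors
    (the >=2 FD criterion the results support)."""
    return (difi_range[0] <= difi <= difi_range[1]
            and disi >= min_disi
            and n_functioning >= min_fd)

# One BAQ item: 100 candidates, key "A"; distractor "E" drew only 2%.
counts = {"A": 55, "B": 20, "C": 15, "D": 8, "E": 2}
fd = functioning_distractors(counts, "A")
print(fd)                           # 3 (B, C, D pass the 5% threshold)
print(is_bankable(0.55, 0.35, fd))  # True
```

A test-level metric such as the TPi could then be derived from the proportion of items passing such a screen, though the study's actual formula is not reproduced here.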
Moving from long case to scenario-based clinical examination: Proposals for making it feasible.
Introduction: Our faculty used one long case (LC) and three
short cases for the clinical component of the final
professional examinations. During the COVID-19 pandemic,
the LC had to be replaced with scenario-based clinical
examination (SBCE) due to the impracticability of using
recently hospitalised patients. While keeping the short case
component as usual, the LC was replaced with SBCE for the
first time in 2020 at short notice. This study aimed to evaluate the positive and negative aspects of SBCE and LC to determine the feasibility of replacing LC with SBCE in future examinations.
Materials and methods: We compared the LC scores of three
previous years with those of the SBCE and studied the
feedback of the three stakeholders: students, examiners,
and simulated patients (SPs), regarding their experience
with SBCE and the suitability of SBCE as an alternative for
LC in future examinations.
Results: The SBCE scores were higher than those of the LC.
Most of the examiners and students were not in favour of
SBCE replacing LC, as such. The SPs were more positive
about the proposition. The comments of the three
stakeholders brought out the plus and minus points of LC
and SBCE, which prompted our proposals to make SBCE
more practical for future examinations.
Conclusion: Having analysed the feedback of the
stakeholders, and the positive and negative aspects of LC
and SBCE, it was evident that SBCE needed improvements.
We have proposed eight modifications to SBCE to make it a
viable alternative for LC.
Making physical examination in medicine user-friendly
Physical examination (PE) techniques used in medical schools appear redundant in several aspects: unnecessarily regimental, lacking in efficiency, and lengthy. Many techniques are sustained solely because of age-old tradition. This commentary suggests a simplification of PE techniques to make them acceptable to all the stakeholders: patients, medical students, and medical teachers. This is especially relevant in an era when imaging is widely used for diagnosis, and confidence in and reliance on PE are declining. The opinions of 10 senior consultants active in medical practice, teaching, and assessment were sought to gauge their concurrence with the authors’ views. Seven of them provided their opinions, which showed considerable agreement with the authors’ views regarding PE. The items presented in this paper are mostly supported by the opinions of the senior consultants, textbooks, and the literature. We consider it worthwhile to share this work with the fraternity.
Dropping the non-core subjects from undergraduate final professional examination: How it would impact the results
Introduction: The dearth of distinctions over two decades of final professional medical examinations (FPE)
caused concern. Penalty scoring in Multiple True False (MTF)
tests pulling down the scores was considered one
reason. Another possible reason was having too many
subjects covered in the MTF and Best Answer Question
(BAQ) papers. This study aimed to explore the impact of
dropping the non-core subjects with minimal inputs from
MTF and BAQ papers and the students’ views in this regard.
Materials and Methods: We examined the students’
performance in the core and non-core subjects in MTF and
BAQ papers and the impact of dropping the non-core
subjects’ contribution to the students’ scores of the recent
four final professional examinations. We also surveyed the
opinions of the students, who took the FPE in the year 2000.
Results: The failure rates were significantly higher in non-core than core subjects (p < 0.001), except in one MTF paper. The mean scores were significantly lower in non-core than core subjects in all four FPEs (p < 0.05), except in one MTF paper. Dropping the non-core subject items from MTF
and BAQ showed an improvement in the scores of MTF,
theory total, and most grand totals resulting in two more
students reaching distinction status. A mere 3.8% of the
students could thoroughly revise the non-core subjects
before the FPE. Two-fifths of them believed that non-core
subjects had a significant impact on theory performance.
Only 31.5% favoured dropping the non-core subjects, and an
equal number preferred a status quo, while the rest
suggested a reduction in their weightage.
Conclusion: Most of the students considered the non-core subjects important for their career. However, very few of them
could revise these subjects for the professional
examination. The study demonstrated that dropping the
non-core subjects from MTF and BAQ improved the
students’ final scores and helped more students to attain
distinction status.