
    The impact of surgical delay on resectability of colorectal cancer: An international prospective cohort study

    AIM: The SARS-CoV-2 pandemic has provided a unique opportunity to explore the impact of surgical delays on cancer resectability. This study aimed to compare resectability for colorectal cancer patients undergoing delayed versus non-delayed surgery. METHODS: This was an international prospective cohort study of consecutive colorectal cancer patients with a decision for curative surgery (January-April 2020). Surgical delay was defined as an operation taking place more than 4 weeks after treatment decision, in a patient who did not receive neoadjuvant therapy. A subgroup analysis explored the effects of delay in elective patients only. The impact of longer delays was explored in a sensitivity analysis. The primary outcome was complete resection, defined as curative resection with an R0 margin. RESULTS: Overall, 5453 patients from 304 hospitals in 47 countries were included, of whom 6.6% (358/5453) did not receive their planned operation. Of the 4304 operated patients without neoadjuvant therapy, 40.5% (1744/4304) were delayed beyond 4 weeks. Delayed patients were more likely to be older, male and more comorbid, to have a higher body mass index, and to have rectal cancer and early-stage disease. Delayed patients had higher unadjusted rates of complete resection (93.7% vs. 91.9%, P = 0.032) and lower rates of emergency surgery (4.5% vs. 22.5%, P < 0.001). After adjustment, delay was not associated with a lower rate of complete resection (OR 1.18, 95% CI 0.90-1.55, P = 0.224), a finding that was consistent in elective patients only (OR 0.94, 95% CI 0.69-1.27, P = 0.672). Longer delays were not associated with poorer outcomes. CONCLUSION: One in 15 colorectal cancer patients did not receive their planned operation during the first wave of COVID-19. Surgical delay did not appear to compromise resectability, raising the hypothesis that any reduction in long-term survival attributable to delays is likely to be due to micro-metastatic disease.

    Competency Assessment

    Assessment is an essential feature of the competency-based educational model because only by means of evaluation can we verify achievement of specified learning outcomes. This is especially important in the context of health professions education, where the competencies of interest impact the well-being of patients. Therefore, just as with planning the instructional component of a curriculum, development of an assessment system must start with the specification of desired learning outcomes in the form of knowledge, skills, and attitudes expected of trainees or practitioners in order to provide safe and effective patient care. Issues to consider when judging the quality of evaluation methods include the reliability of data generated by the assessment, validity of decisions based on test results, educational impact on individuals undergoing evaluation and other stakeholders, and the feasibility of implementing the assessment system. In addition to these criteria and the particular competencies to be evaluated, the choice of testing methods from among numerous available techniques should consider multiple dimensions, such as appropriate level of assessment, stage of learner development, and, very importantly, overall purpose and context of the assessment. Ultimately, no one method can assess all aspects of professional competence, but familiarity with strengths and limitations of various modalities can guide the development of appropriate assessment systems. Strengths of simulation-based methods for evaluative purposes include the ability to assess actual performance of psychomotor skills and demonstration of nontechnical professional competencies in environments that safely and authentically mirror real practice settings. 
In addition, the programmability of simulations permits on-demand testing of rare but important clinical situations and consistent presentation of evaluation problems to multiple examinees; this reproducibility becomes especially important when high-stakes decisions are contingent upon such assessments.

    Progress testing in postgraduate medical education

    BACKGROUND: The role of knowledge in postgraduate medical education has often been discussed. However, recent insights from cognitive psychology and the study of deliberate practice recognize that expert problem solving requires a well-organized knowledge database. This implies that postgraduate assessment should include knowledge testing. Longitudinal assessment, like progress testing, seems a promising approach for postgraduate knowledge assessment. AIMS: To evaluate the validity and reliability of a national progress test in postgraduate Obstetrics and Gynaecology training. METHODS: Data from 10 years of postgraduate progress testing were analyzed for reliability with Cronbach's alpha and for construct validity using one-way ANOVA with a post hoc Scheffe test. RESULTS: Average reliability with true-false questions was 0.50, which is moderate at best. After the introduction of multiple-choice questions, average reliability improved to 0.65. Construct validity, or discriminative power, could only be demonstrated with some certainty between training year 1 and training years 2 and higher. CONCLUSION: The validity and reliability of the current progress test in postgraduate Obstetrics and Gynaecology training are unsatisfactory. Suggestions for improvement of both test construct and test content are provided.
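The two statistics named in the methods — Cronbach's alpha for reliability and a one-way ANOVA across training years for construct validity — can be sketched as follows. The score matrix and year groupings below are synthetic stand-ins, not the study's data:

```python
import numpy as np
from scipy import stats

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (examinees x items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(1)
# Hypothetical test: 200 trainees x 100 dichotomous items, with a shared
# ability factor so items intercorrelate (values are invented)
ability = rng.normal(0, 1, (200, 1))
scores = (rng.normal(0, 1, (200, 100)) + ability > 0).astype(float)
alpha = cronbach_alpha(scores)

# Construct validity: one-way ANOVA of total scores across training years
# (synthetic groups with a small upward shift per year)
year1 = scores[:60].mean(axis=1)
year2 = scores[60:130].mean(axis=1) + 0.05
year3 = scores[130:].mean(axis=1) + 0.10
f_stat, p_val = stats.f_oneway(year1, year2, year3)
print(f"alpha = {alpha:.2f}, ANOVA F = {f_stat:.2f}, p = {p_val:.3f}")
```

A significant ANOVA followed by a post hoc test (the study used Scheffe, available in statsmodels) would indicate that scores discriminate between training years.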

    An analysis of peer, self, and tutor assessment in problem-based learning tutorials

    Objective: The purpose of this study was to explore self-, peer-, and tutor assessment of performance in tutorials among first-year medical students in a problem-based learning curriculum. Methods: One hundred and twenty-five students enrolled in the first year of the Bachelor of Medicine and Bachelor of Surgery Program at the University of Queensland were recruited to participate in a study of metacognition and peer- and self-assessment. Both quantitative and qualitative data were collected from the assessment of PBL performance within the tutorial setting, which included elements such as responsibility and respect, communication, and critical analysis through presentation of a case summary. Self-, peer-, and tutor assessment took place concurrently. Results: Scores obtained from tutor assessment correlated poorly with self-assessment ratings (r = 0.31-0.41), with students consistently under-marking their own performance to a substantial degree. Students with greater self-efficacy scored their PBL performance more highly. Peer assessment was a slightly more accurate measure, with peer-averaged scores correlating moderately with tutor ratings initially (r = 0.40) and improving over time (r = 0.60). Students consistently over-marked their peers, particularly those with sceptical attitudes to the peer-assessment process. Peer over-marking led to less divergence from the tutor scoring than under-marking of one's own work. Conclusion: According to the results of this study, first-year medical students in a problem-based learning curriculum were better able to judge the performance of their peers than their own. This study has shown that self-assessment of process is not an accurate measure, in line with the majority of research in this domain. Nevertheless, it has an important role to play in supporting the development of skills in reflection and self-awareness.
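The pattern the abstract describes — self-ratings that under-mark and correlate weakly with tutor scores, peer ratings that over-mark but track the tutor more closely — can be illustrated with Pearson correlations on simulated ratings. All numbers below (scale, offsets, noise levels) are invented to mimic that pattern, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical ratings for 125 students on a 1-10 PBL performance scale
true_perf = rng.normal(7, 1, 125)
tutor = np.clip(true_perf + rng.normal(0, 0.5, 125), 1, 10)
# Self-ratings: under-marked (-1.0) and noisier; peer averages: over-marked
# (+0.6) but less noisy -- offsets chosen to mimic the reported pattern
self_ = np.clip(true_perf - 1.0 + rng.normal(0, 1.2, 125), 1, 10)
peer = np.clip(true_perf + 0.6 + rng.normal(0, 0.8, 125), 1, 10)

r_self = np.corrcoef(tutor, self_)[0, 1]   # Pearson r, tutor vs. self
r_peer = np.corrcoef(tutor, peer)[0, 1]    # Pearson r, tutor vs. peer average
bias_self = np.mean(self_ - tutor)         # mean under-marking
bias_peer = np.mean(peer - tutor)          # mean over-marking
print(f"tutor-self r = {r_self:.2f}, tutor-peer r = {r_peer:.2f}")
print(f"mean bias: self {bias_self:+.2f}, peer {bias_peer:+.2f}")
```

Separating correlation (rank agreement) from mean bias (systematic over- or under-marking) is what lets a study conclude, as this one does, that peers track tutor judgments more closely even while consistently over-marking.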