Classification of current anticancer immunotherapies
During the past decades, anticancer immunotherapy has evolved from a promising
therapeutic option to a robust clinical reality. Many immunotherapeutic regimens are
now approved by the US Food and Drug Administration and the European Medicines
Agency for use in cancer patients, and many others are being investigated as standalone
therapeutic interventions or combined with conventional treatments in clinical
studies. Immunotherapies may be subdivided into “passive” and “active” based on
their ability to engage the host immune system against cancer. Since the anticancer
activity of most passive immunotherapeutics (including tumor-targeting monoclonal
antibodies) also relies on the host immune system, this classification does not properly
reflect the complexity of the drug-host-tumor interaction. Alternatively, anticancer
immunotherapeutics can be classified according to their antigen specificity. While some
immunotherapies specifically target one (or a few) defined tumor-associated antigen(s),
others operate in a relatively non-specific manner and boost natural or therapy-elicited
anticancer immune responses of unknown and often broad specificity. Here, we propose
a critical, integrated classification of anticancer immunotherapies and discuss the clinical
relevance of these approaches.
Advantages and pitfalls in utilizing artificial intelligence for crafting medical examinations: a medical education pilot study with GPT-4
Abstract
Background: Writing multiple-choice question (MCQ) examinations for medical students is complex and time-consuming, requiring significant effort from clinical staff and faculty. Applying artificial intelligence algorithms in this field of medical education may therefore be advisable.
Methods: During March to April 2023, we used GPT-4, an OpenAI application, to write a 210-question MCQ examination based on an existing exam template. The output was thoroughly investigated by specialist physicians who were blinded to the source of the questions. Algorithm mistakes and inaccuracies identified by the specialists were classified as stemming from age, gender, or geographical insensitivities.
Results: After inputting a detailed prompt, GPT-4 produced the test rapidly and effectively. Only 1 question (0.5%) was deemed false; 15% of questions necessitated revisions. Errors in the AI-generated questions included the use of outdated or inaccurate terminology as well as age-sensitive, gender-sensitive, and geographically sensitive inaccuracies. Questions disqualified for a flawed methodological basis included elimination-based questions and questions that did not integrate knowledge with clinical reasoning.
Conclusion: GPT-4 can be used as an adjunctive tool in creating multiple-choice question medical examinations, yet rigorous inspection by specialist physicians remains pivotal.