59 research outputs found

    Immunological response to Brucella abortus strain 19 vaccination of cattle in a communal area in South Africa

    Brucellosis is of worldwide economic and public health importance. Heifer vaccination with live attenuated Brucella abortus strain 19 (S19) is the cornerstone of control in low- and middle-income countries. Antibody persistence induced by S19 is directly correlated with the number of colony-forming units (CFU) per dose. There are two vaccination methods: a ‘high’ dose (5–8 × 1010 CFU) injected subcutaneously, or one or two ‘low’ doses (5 × 109 CFU) given by the conjunctival route. This study aimed to evaluate serological reactions to the ‘high’ dose and the possible implications of the serological findings for disease control. The study included 58 female cases, vaccinated at Day 0, and 29 male controls. Serum was drawn repeatedly and tested for Brucella antibodies using the Rose Bengal Test (RBT) and an indirect enzyme-linked immunosorbent assay (iELISA). The cases showed a rapid antibody response, with peak RBT positivity (98%) at 2 weeks and peak iELISA positivity (95%) at 8 weeks; positivity then declined in an inverse logistic curve to 14% (RBT) and 32% (iELISA) at 59 weeks, and at 4.5 years 57% (4/7 cases) demonstrated a persistent immune response (RBT, iELISA or Brucellin skin test) to Brucella spp. Our study is the first of its kind documenting the persistence of antibodies in an African communal farming setting for a year or more, and in some cases several years, after ‘high’ dose S19 vaccination; such a response can be difficult to differentiate from a response to infection with wild-type B. abortus. A recommendation could be to use a ‘low’ dose or a different route of vaccination.
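The decline described above (near-complete positivity shortly after vaccination, falling along an inverse logistic curve over the following year) can be illustrated with a simple curve. This is a minimal sketch of the shape only; the parameter values (`peak`, `midpoint`, `rate`) are invented for illustration and are not fitted to the study's data.

```python
import math

def positivity(week, peak=0.98, midpoint=30.0, rate=0.12):
    """Inverse-logistic decline in test positivity after vaccination:
    stays near `peak` early on, then falls toward zero around
    `midpoint` weeks. All parameter values are illustrative only."""
    return peak / (1.0 + math.exp(rate * (week - midpoint)))
```

With these illustrative parameters the curve reproduces the qualitative pattern in the abstract: very high positivity at 2–8 weeks, a steep fall around the midpoint, and low positivity by week 59.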

    An assessment of functioning and non-functioning distractors in multiple-choice questions: a descriptive analysis

    Background: Four- or five-option multiple choice questions (MCQs) are the standard in health-science disciplines, both on certification-level examinations and on in-house developed tests. Previous research has shown, however, that few MCQs have three or four functioning distractors. The purpose of this study was to investigate non-functioning distractors in teacher-developed tests in one nursing program in an English-language university in Hong Kong. Methods: Using item-analysis data, we assessed the proportion of non-functioning distractors on a sample of seven test papers administered to undergraduate nursing students. A total of 514 items were reviewed, including 2056 options (1542 distractors and 514 correct responses). Non-functioning options were defined as ones that were chosen by fewer than 5% of examinees and those with a positive option discrimination statistic. Results: The proportion of items containing 0, 1, 2, and 3 functioning distractors was 12.3%, 34.8%, 39.1%, and 13.8% respectively. Overall, items contained an average of 1.54 (SD = 0.88) functioning distractors. Only 52.2% (n = 805) of all distractors were functioning effectively and 10.2% (n = 158) had a choice frequency of 0. Items with more functioning distractors were more difficult and more discriminating. Conclusion: The low frequency of items with three functioning distractors in the four-option items in this study suggests that teachers have difficulty developing plausible distractors for most MCQs. Test items should consist of as many options as is feasible given the item content and the number of plausible distractors; in most cases this would be three. Item analysis results can be used to identify and remove non-functioning distractors from MCQs that have been used in previous tests.
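The classification rule described above (a distractor is non-functioning if fewer than 5% of examinees chose it, or if its option discrimination statistic is positive) can be sketched as follows. The function names and the discrimination formula are illustrative assumptions, not taken from the study, which does not specify its exact discrimination statistic.

```python
def option_discrimination(chosen, scores):
    """Illustrative discrimination statistic: mean total score of
    examinees who chose this option minus the mean score of those
    who did not. Negative values mean low scorers favour the option."""
    chose = [s for c, s in zip(chosen, scores) if c]
    others = [s for c, s in zip(chosen, scores) if not c]
    if not chose or not others:
        return 0.0
    return sum(chose) / len(chose) - sum(others) / len(others)

def is_functioning(choice_freq, n_examinees, discrimination):
    """A distractor functions if at least 5% of examinees chose it
    AND its discrimination is not positive (per the study's rule)."""
    return choice_freq / n_examinees >= 0.05 and discrimination <= 0
```

Applied per item, counting how many of the three or four distractors satisfy `is_functioning` reproduces the 0/1/2/3 functioning-distractor tally reported in the Results.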

    How well do adolescents recall use of mobile telephones? Results of a validation study

    Background: In the last decade mobile telephone use has become more widespread among children. Concerns expressed about possible health risks have led to epidemiological studies investigating adverse health outcomes associated with mobile telephone use. Most epidemiological studies have relied on self reported questionnaire responses to determine individual exposure. We sought to validate the accuracy of self reported adolescent mobile telephone use. Methods: Participants were recruited from year 7 secondary school students in Melbourne, Australia. Adolescent recall of mobile telephone use was assessed using a self administered questionnaire which asked about number and average duration of calls per week. Validation of self reports was undertaken using Software Modified Phones (SMPs) which logged exposure details such as number and duration of calls. Results: A total of 59 adolescents participated (39% boys, 61% girls). Overall a modest but significant rank correlation was found between self reported and validated number of voice calls (ρ = 0.3, P = 0.04), with a sensitivity of 57% and specificity of 66%. Agreement between SMP measured and self reported duration of calls was poorer (ρ = 0.1, P = 0.37). Participants whose parents belonged to the 4th socioeconomic stratum recalled mobile phone use better than others (ρ = 0.6, P = 0.01). Conclusion: Adolescent recall of mobile telephone use was only modestly accurate. Caution is warranted in interpreting results of epidemiological studies investigating health effects of mobile phone use in this age group.
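The agreement statistics used above, Spearman's rank correlation ρ between self-reported and logged call counts, plus sensitivity and specificity of a self-reported "high use" classification against the SMP log, could be computed roughly as in this sketch. The implementation and the high/low dichotomisation are assumptions for illustration; the study does not state its exact cutoffs.

```python
def ranks(xs):
    """1-based average ranks, with ties sharing their mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1          # mean of tied rank positions
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman's rho: Pearson correlation of the two rank vectors."""
    rx, ry = ranks(x), ranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

def sens_spec(self_high, logged_high):
    """Sensitivity and specificity of self-reported 'high use'
    against the logged (SMP) classification."""
    pairs = list(zip(self_high, logged_high))
    tp = sum(s and l for s, l in pairs)
    fn = sum((not s) and l for s, l in pairs)
    tn = sum((not s) and (not l) for s, l in pairs)
    fp = sum(s and (not l) for s, l in pairs)
    return tp / (tp + fn), tn / (tn + fp)
```

In practice `scipy.stats.spearmanr` would be used instead of the hand-rolled version; it is written out here only to make the rank-then-correlate logic explicit.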

    Global quantitative indices reflecting provider process-of-care: data-base derivation

    Background: Controversy has attended the relationship between risk-adjusted mortality and process-of-care. There would be advantage in the establishment, at the data-base level, of global quantitative indices subsuming the diversity of process-of-care. Methods: A retrospective, cohort study of patients identified in the Australian and New Zealand Intensive Care Society Adult Patient Database, 1993-2003, at the level of geographic and ICU-level descriptors (n = 35), for both hospital survivors and non-survivors. Process-of-care indices were established by analysis of: (i) the smoothed time-hazard curve of individual patient discharge, determined by pharmacokinetic methods as area under the hazard-curve (AUC), reflecting the integrated experience of the discharge process, and time-to-peak-hazard (TMAX, in days), reflecting the time to maximum rate of hospital discharge; and (ii) individual patient ability to optimize output (as length-of-stay) for recorded data-base physiological inputs, estimated as a technical production-efficiency (TE, scaled [0,(maximum)1]) via the econometric technique of stochastic frontier analysis. For each descriptor, multivariate correlation-relationships between indices and summed mortality probability were determined. Results: The data-set consisted of 223,129 patients from 99 ICUs with mean (SD) age and APACHE III score of 59.2 (18.9) years and 52.7 (30.6) respectively; 41.7% were female and 45.7% were mechanically ventilated within the first 24 hours post-admission. For survivors, AUC was maximal in rural and for-profit ICUs, whereas TMAX (≥ 7.8 days) and TE (≥ 0.74) were maximal in tertiary ICUs. For non-survivors, AUC was maximal in tertiary ICUs, but TMAX (≥ 4.2 days) and TE (≥ 0.69) were maximal in for-profit ICUs. Across descriptors, significant differences in indices were demonstrated (analysis-of-variance, P ≤ 0.0001). Total explained variance, for survivors (0.89) and non-survivors (0.89), was maximized by combinations of indices demonstrating a low correlation with mortality probability. Conclusions: Global indices reflecting process-of-care may be formally established at the level of national patient databases. These indices appear orthogonal to mortality outcome.
    John L Moran, Patricia J Solomon and the Adult Database Management Committee (ADMC) of the Australian and New Zealand Intensive Care Society (ANZICS).
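Two of the indices defined above, AUC (area under the smoothed discharge-hazard curve) and TMAX (the day of peak discharge hazard), are simple summaries of one curve. On a daily grid they could be approximated as in this sketch; the hazard values in the usage comment are hypothetical, and the study's actual smoothing and pharmacokinetic estimation are not reproduced here.

```python
def auc_and_tmax(days, hazard):
    """Trapezoidal area under a discharge-hazard curve (AUC) and the
    day on which the hazard peaks (TMAX). `days` and `hazard` are
    parallel sequences sampled on the same grid."""
    auc = sum((hazard[i] + hazard[i + 1]) / 2 * (days[i + 1] - days[i])
              for i in range(len(days) - 1))
    tmax = days[max(range(len(hazard)), key=hazard.__getitem__)]
    return auc, tmax
```

A larger AUC then corresponds to a greater integrated discharge experience, and a smaller TMAX to an earlier peak rate of discharge, matching the interpretation given in the Methods.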

    Time to Surgery Following Short-Course Radiotherapy in Rectal Cancer and its Impact on Postoperative Outcomes. A Population-Based Study Across the English National Health Service, 2009–2014

    Aims: Preoperative short-course radiotherapy (SCRT) is an important treatment option for rectal cancer. The length of time between completing SCRT and surgery may influence postoperative outcomes, but the evidence available to determine the optimal interval is limited and often conflicting. Materials and methods: Information was extracted from a colorectal cancer data repository (CORECT-R) on all surgically treated rectal cancer patients who received SCRT in the English National Health Service between April 2009 and December 2014. The time from radiotherapy to surgery was described across the population. Thirty-day postoperative mortality, returns to theatre, length of stay and 1-year survival were investigated in relation to the interval between radiotherapy and surgery. Results: Within the cohort of 3469 patients, the time to surgery was 0–7 days for 76% of patients, 8–14 days for 19% and 15–27 days for 5%. The interval varied clearly in relation to patient characteristics. There was, however, no evidence of differences in postoperative outcomes in relation to interval length. Conclusions: This study suggests that the time interval between SCRT and surgery does not influence postoperative outcomes up to a year after surgery. The study provides population-level, real-world evidence to complement that from clinical trials.