76 research outputs found

    Using examination performance data and focus groups to inform teaching – a case study from final year students of veterinary medicine

    Background: Student feedback has long played an important role in maintaining quality and standards in higher education. Perhaps the most commonly used method of capturing feedback is a series of questions or statements on which students indicate their degree of satisfaction or agreement. Focus groups offer an alternative means of capturing ‘richer’ qualitative data on students’ views of course structure. Aside from student evaluations, examination performance has been used to evaluate the efficacy of curriculum changes at programme level; however, these data are used far less often at a finer level of detail to identify specific issues with the delivery of teaching. Case presentation: The purpose of this report was to outline an approach that used qualitative and quantitative data to identify problems with a specific area of teaching, to inform a new teaching approach, and to assess the impact of those changes. Following quantitative and qualitative analysis, a practical class on dairy herd fertility performance was highlighted as an area for improvement. After the introduction of the newly formatted practical class, with a greater focus on self-directed learning, there was a significant increase in the average score (p < 0.001) and a decrease in the proportion of students failing (p < 0.001) the question that assessed the analysis of dairy herd fertility data. In addition, the R-squared value between students’ performance in the fertility question and their performance in the overall examination increased from 0.06 to 0.11. Conclusions: The combination of qualitative focus group data and quantitative analysis of examination performance represents a robust method for identifying problems associated with specific aspects of veterinary teaching.

    Value of simplified lung lesions scoring systems to inform future codes for routine meat inspection in pigs

    Background: Across the European Union (EU), efforts are being made to modernise and harmonise meat inspection (MI) code systems. Lung lesions have been prioritised as important animal-based measures at slaughter, but existing standardised protocols are difficult to implement in routine MI. This study aimed to compare the informative value and feasibility of simplified lung lesion scoring systems to inform future codes for routine post-mortem MI. Results: Data on lung lesions in finisher pigs were collected at slaughter from 83 Irish pig farms, with 201 batches assessed, comprising 31,655 pairs of lungs. Lungs were scored for cranioventral pulmonary consolidations (CVPC) and pleurisy lesions using detailed scoring systems, which were considered the gold standard. Using the data collected, scenarios for possible simplified scoring systems to record CVPC (n = 4) and pleurisy (n = 4) lesions were defined. The measurable outcomes were the prevalence and (where possible) severity scoring at batch level for CVPC and pleurisy. An arbitrary threshold was set at the upper quartile (i.e., the top 25% of batches with high prevalence/severity of CVPC or pleurisy, n = 50). Each pair of measurable outcomes was compared by calculating Spearman rank correlations and by assessing whether batches above the threshold for one measurable outcome were also above it for their pairwise comparison. All scenarios showed perfect agreement (k = 1) when compared among themselves and with the gold standard for the prevalence of CVPC. Agreement among severity outcomes and the gold standard was moderate to perfect (k = [0.66, 1]). Changes in ranking were negligible for all measurable outcomes of pleurisy for scenarios 1, 2 and 3 when compared with the gold standard (rs ≥ 0.98), but amounted to 50% for scenario 4.
Conclusions: The best simplified CVPC scoring system is simply to count the number of lung lobes affected, excluding the intermediate lobe; this provides the best trade-off between value of information and feasibility by incorporating information on both CVPC prevalence and severity. For pleurisy evaluation, scenario 3 is recommended: this simplified scoring system provides information on the prevalence of cranial pleurisy and of moderate and severe dorsocaudal pleurisy. Further validation of the scoring systems at slaughter, and by private veterinarians and farmers, is needed.
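The batch-level comparison described above (a Spearman rank correlation between scoring outcomes, plus agreement on which batches fall in the top quartile) can be sketched as follows. The batch data and the noise level are invented for illustration and are not taken from the study.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(42)
n_batches = 200

# Invented batch-level CVPC prevalence: a detailed "gold standard" score
# and a simplified scenario that tracks it with some measurement noise.
gold = rng.uniform(0, 1, n_batches)
simplified = gold + rng.normal(0, 0.05, n_batches)

# Spearman rank correlation between the two measurable outcomes
rs, _ = spearmanr(gold, simplified)

# Upper-quartile threshold: the 25% of batches with the highest prevalence
top_gold = gold >= np.quantile(gold, 0.75)
top_simp = simplified >= np.quantile(simplified, 0.75)

# Proportion of gold-standard top-quartile batches also flagged by the
# simplified scenario
agreement = (top_gold & top_simp).sum() / top_gold.sum()
print(f"Spearman rs = {rs:.2f}, top-quartile agreement = {agreement:.2f}")
```

With a closely tracking simplified score, both the rank correlation and the top-quartile agreement come out high, mirroring the rs ≥ 0.98 pattern reported for scenarios 1 to 3.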

    Relative importance of herd-level risk factors for probability of infection with paratuberculosis in Irish dairy herds

    Control of paratuberculosis is challenging due to the relatively poor performance of diagnostic tests, a prolonged incubation period, and protracted environmental survival of the organism. Prioritization of herd-level interventions is not possible because putative risk factors are often not supported by risk factor studies. The objective of this study was to investigate the relative importance of risk factors for an increased probability of herd paratuberculosis infection. Risk assessment data, comprehensive animal purchase histories, and diagnostic test data were available for 936 Irish dairy herds. Both a logistic regression and a Bayesian β regression on the outcome of a latent class analysis were conducted. Population attributable fractions and the proportional reduction in variance explained were calculated for each variable in the logistic and Bayesian models, respectively. Routine use of the calving area for sick or lame cows was a significant explanatory covariate in both models. Purchasing behavior over the previous 10 yr was not found to be significant. In the logistic model, length of time calves spend in the calving pen (25%) and routine use of the calving pen for sick or lame animals (14%) had the highest attributable fractions. In the Bayesian model, the overall R2 was 16%; dry cow cleanliness (7%) and routine use of the calving area for sick or lame cows (6%) had the highest proportional reduction in variance explained. These findings support several management practices commonly recommended as part of paratuberculosis control programs; however, a large proportion of the observed variation in probability of infection remained unexplained, suggesting that other important risk factors may exist.
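As a minimal illustration of the population attributable fractions reported for the logistic model, Levin's formula can be applied to an exposure prevalence and a relative risk (the odds ratio from a logistic model is commonly used as an approximation of the relative risk when the outcome is uncommon). The exposure prevalence and relative risk below are hypothetical, not the study's estimates.

```python
def paf_levin(p_exposed: float, rr: float) -> float:
    """Levin's population attributable fraction:
    PAF = p_e (RR - 1) / (1 + p_e (RR - 1))."""
    return p_exposed * (rr - 1.0) / (1.0 + p_exposed * (rr - 1.0))

# Hypothetical example: 40% of herds routinely use the calving pen for
# sick or lame cows, with an assumed relative risk of infection of 2.0.
paf = paf_levin(0.40, 2.0)
print(f"PAF = {paf:.2f}")  # 0.4 * 1.0 / 1.4 = 0.29
```

Read as: if the assumed association were causal, about 29% of herd infections in this hypothetical population would be attributable to the exposure.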

    Risk factors for SARS-CoV-2 infection in healthcare workers following an identified nosocomial COVID-19 exposure during waves 1-3 of the pandemic in Ireland

    Healthcare workers (HCWs) have increased exposure to, and subsequent risk of infection with, severe acute respiratory syndrome coronavirus-2 (SARS-CoV-2). This case-control study was conducted to investigate the contemporaneous risks associated with confirmed SARS-CoV-2 infection amongst HCWs following in-work exposure to a confirmed coronavirus disease-2019 (COVID-19) case. We assessed the influence of demographic factors (age, sex, nationality, high-risk co-morbidities and vaccination status) and work-related factors (job role, exposure location, contact type, and personal protective equipment (PPE) use) on infection risk following nosocomial SARS-CoV-2 exposure. All contact tracing records within the hospital site during waves 1-3 of the COVID-19 pandemic in Ireland were screened to identify exposure events, cases and controls. In total, 285 cases and 1,526 controls were enrolled, arising from 1,811 in-work exposure events with 745 index cases. We demonstrate that male sex, Eastern European nationality, exposure location, PPE use and vaccination status all influence the likelihood of SARS-CoV-2 infection following nosocomial exposure. The findings draw attention to the need for continuing emphasis on PPE use and its persisting benefit in the era of COVID-19 vaccination. We suggest that non-work-related factors may influence the infection risk seen in certain ethnic groups, and that the infection risk in high-risk HCW roles (e.g. nursing) may result from repeated exposures rather than risks inherent to a single event.

    Investigation into the safety and serological responses elicited by delivery of live intranasal vaccines for bovine herpes virus type 1, bovine respiratory syncytial virus, and parainfluenza type 3 in pre-weaned calves

    Although pneumonia remains a leading cause of mortality and morbidity in pre-weaned calves, relatively little is known about the effects of concurrent administration of intranasal pneumonia virus vaccines, particularly in calves with high levels of maternally derived antibodies. The objective of this study was to use a cohort of 40 dairy and dairy-beef calves (27 females and 13 males) to determine serological responses to concurrent administration, at 3 weeks of age (22 ± 4.85 days), of two commercially available intranasal (IN) vaccines against bovine respiratory syncytial virus (BRSV), bovine herpes virus 1 (BoHV-1), and parainfluenza-3 virus (PI3-V). The study groups were as follows: (i) Bovilis IBR Marker Live® only (IO); (ii) Bovilis Intranasal RSP Live® only (RPO); (iii) concurrent vaccination with Bovilis IBR Marker Live® and Bovilis Intranasal RSP Live® (CV); and (iv) a control group of non-vaccinated calves (CONT). The calves’ serological responses post-IN vaccination, clinical health scores, rectal temperatures, and weights were measured. Data were analyzed in SAS using mixed models and logistic regression. The CV calves had an average daily weight gain (ADG) of 0.74 (±0.02) kg, similar to CONT (0.77 ± 0.02 kg). Although there were no significant differences in antibody levels between study groups 3 weeks post-IN vaccination, following the administration of subsequent parenteral injections of Bovilis Bovipast RSP® (antigens: inactivated BRSV, inactivated PI3-V, inactivated Mannheimia haemolytica) and Bovilis IBR Marker Live®, BRSV and PI3-V antibody levels increased in both the CV and RPO study groups. Concurrent vaccination resulted in no increase in fever and no difference in health scores when compared with CONT.

    Environmental Risk Factors Influence the Frequency of Coughing and Sneezing Episodes in Finisher Pigs on a Farm Free of Respiratory Disease

    Inappropriate environmental conditions in pig buildings are detrimental to the health and welfare of both pigs and farm staff. With ongoing technological developments, a variety of sensor technologies can be used to measure environmental conditions such as air temperature, relative humidity, and ammonia and dust concentrations in real time. Moreover, a tool was recently developed to give farmers an objective assessment of pigs' respiratory health by continuously measuring coughing in finisher pigs. The collection and amalgamation of data from a variety of sources related to health, welfare, and performance are important for improving the efficiency and sustainability of the pig industry. This study aimed to assess baseline levels of coughing on a farm free of respiratory disease, and to identify relationships between environmental conditions and coughing frequency in finisher pigs. Six replicates were conducted (690 pigs in total). A cross-correlation analysis was performed, and lags of the predictor variables were carried forward for multivariable regression analysis when significant and showing r > 0.25. Coughing frequency was overall low, and could be predicted by environmental conditions such as high ammonia concentrations and high ventilation rates. In the first replicate, coughing was best predicted by exposure to higher ammonia concentrations occurring with a lag of 1, 7, and 15 days (p = 0.003, p = 0.001, and p −0.70).
In conclusion, guidelines on normal coughing levels in healthy pigs, and calibration of the alarm systems of tools that measure coughing frequency (such as the cough monitor used in this study), can be extrapolated from this study. Environmental risk factors are associated with the respiratory health of finisher pigs.
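The lagged cross-correlation screen described above (carrying forward lags whose correlation exceeds 0.25) can be sketched as follows. The daily ammonia and coughing series, the built-in 7-day lag, and all coefficients are simulated for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)
days = 120

# Simulated daily ammonia concentration (ppm) and coughing frequency;
# coughing is made to respond to ammonia 7 days earlier, plus noise.
ammonia = rng.normal(10, 2, days)
lag_true = 7
cough = np.empty(days)
cough[lag_true:] = 0.8 * ammonia[:-lag_true] + rng.normal(0, 1, days - lag_true)
cough[:lag_true] = rng.normal(8, 1, lag_true)

def lagged_r(x: np.ndarray, y: np.ndarray, lag: int) -> float:
    """Pearson correlation between x shifted back by `lag` days and y."""
    return np.corrcoef(x[:-lag], y[lag:])[0, 1]

# Screen lags 1-15 days and keep those with r > 0.25, as candidate
# predictors for a subsequent multivariable regression
candidate_lags = [l for l in range(1, 16) if lagged_r(ammonia, cough, l) > 0.25]
print(candidate_lags)
```

Because the simulated coughing series is driven by ammonia 7 days earlier, the 7-day lag clears the 0.25 threshold and would be carried forward.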

    Seroprevalence of Mycoplasma bovis in bulk milk samples in Irish dairy herds and risk factors associated with herd seropositive status

    Mycoplasma bovis causes serious disease in cattle worldwide; mastitis, pneumonia, and arthritis are particularly important clinical presentations in dairy herds. Mycoplasma bovis was first identified in Ireland in 1994, and reporting of Mycoplasma-associated disease has increased substantially over the last 5 years. Despite the presumed endemic nature of M. bovis in Ireland, there is a paucity of data on the prevalence of infection and on the effect of this disease on the dairy industry. The aim of this observational study was to estimate the apparent herd prevalence of M. bovis in Irish dairy herds using routinely collected bulk milk surveillance samples, and to assess risk factors for herd seropositivity. In autumn 2018, 1,500 of the 16,858 herds that submitted bulk tank milk (BTM) samples to the Department of Agriculture testing laboratory for routine surveillance were randomly selected for further testing. A final data set of 1,313 sampled herds with a BTM ELISA result was used for the analysis. Testing was conducted using an indirect ELISA kit (ID Screen Mycoplasma bovis). Herd-level risk factors were used as explanatory variables to determine potential risk factors associated with positive herd status (reflecting past or current exposure to M. bovis). A total of 588 of the 1,313 BTM samples were positive for M. bovis, giving an apparent herd prevalence of 0.45 (95% CI: 0.42, 0.47) in Irish dairy herds in autumn 2018. Multivariable analysis was conducted using logistic regression. The final model identified herd size, the number of neighboring farms, in-degree and county as statistically significant risk factors for herd BTM seropositivity to M. bovis. The results suggest a high apparent herd prevalence of seropositivity to M. bovis, and provide evidence that M. bovis infection is now endemic in the Irish dairy sector. In addition, the risk factors identified are closely aligned with what would be expected of an infectious disease.
Awareness raising and education about this important disease are warranted given the widespread nature of exposure and likely infection in Irish herds. Further work on the validation of diagnostic tests for herd-level diagnosis should be undertaken as a matter of priority. Funding: University College Dublin; Science Foundation Ireland; Wellcome Trust; Health Research Board; UCD Wellcome Institutional Strategic Support Fund; SFI-HRB-Wellcome Biomedical Research Partnership.
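The reported apparent prevalence can be reproduced from the counts given above: 588 of 1,313 bulk-milk samples were ELISA-positive, and a simple Wald (normal-approximation) interval matches the stated 0.45 (0.42, 0.47).

```python
import math

positive, n = 588, 1313

# Apparent herd prevalence and its Wald 95% confidence interval
p_hat = positive / n
se = math.sqrt(p_hat * (1 - p_hat) / n)
lower, upper = p_hat - 1.96 * se, p_hat + 1.96 * se
print(f"prevalence = {p_hat:.2f} (95% CI: {lower:.2f}, {upper:.2f})")
# prints: prevalence = 0.45 (95% CI: 0.42, 0.47)
```

With n = 1,313 and a prevalence near 0.45, the normal approximation is well justified, which is why this simple interval reproduces the published figures.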

    Survey of farm, parlour and milking management, parlour technologies, SCC control strategies and farmer demographics on Irish dairy farms

    Background: This cross-sectional study describes a survey designed to fill knowledge gaps regarding farm management practices, parlour management practices and implemented technologies, milking management practices, somatic cell count (SCC) control strategies, farmer demographics and attitudes to SCC management on a sample of Irish dairy farms. Results: We categorised 376 complete responses by herd size quartile and calving pattern. The average respondent herd size was 131 cows, with most farms (82.2%) operating a seasonal calving system. The median monthly bulk tank somatic cell count was 137,000 cells/ml for seasonal calving systems (range 20,000 – 1,269,000 cells/ml), 170,000 cells/ml for split-calving systems (range 46,000 – 644,000 cells/ml) and 186,000 cells/ml for ‘other’ herds (range 20,000 – 664,000 cells/ml). The most common parlour types were swing-over herringbones (59.1%) and herringbones with recording jars (22.2%). Herringbone parlours had an average of 15 units, rotary parlours 49 units, and automatic milking system (AMS) farms two boxes. The most common parlour technologies were in-parlour feeding systems (84.5%), automatic washers on the bulk tank (72.8%), automatic cluster removers (57.9%), and entrance or exit gates controlled from the parlour pit (52.2%). Veterinary professionals, farming colleagues and processor milk quality advisors were the most commonly used sources of advice for SCC management (by 76.9%, 50.0% and 39.2% of respondents, respectively). Conclusions: In this study, we used a national survey to quantify farm management practices, parlour management practices and technology adoption levels, milking management practices, SCC control strategies and farmer demographics on 376 dairy farms in the Republic of Ireland. Rotary and AMS parlours had the most parlour technologies of any parlour type, and technology add-ons were generally less prevalent on farms with smaller herds.
Despite finding room for improvement in the frequency of liner changes, glove-wearing practices and engagement with bacteriology of milk samples, we also found high levels of documentation of mastitis treatments and high use of post-milking teat disinfection. Irish dairy farmers are relatively content in their careers but face pressures regarding changes to the legislation around prudent antimicrobial use in their herds.

    Prevalence of respiratory disease in Irish preweaned dairy calves using hierarchical Bayesian latent class analysis

    Introduction: Bovine respiratory disease (BRD) has a significant impact on the health and welfare of dairy calves. It can result in increased antimicrobial usage, decreased growth rate and reduced future productivity. There is no gold-standard antemortem diagnostic test for BRD in calves, and no estimates of the prevalence of respiratory disease in seasonal calving dairy herds. Methods: To estimate BRD prevalence in seasonal calving dairy herds in Ireland, 40 dairy farms were recruited and each farm was visited once during one of two calving seasons (spring 2020 and spring 2021). At that visit, the prevalence of BRD in 20 calves between 4 and 6 weeks of age was determined using a thoracic ultrasound score (≥3) and the Wisconsin respiratory scoring system (≥5). Hierarchical Bayesian latent class analysis was used to estimate the calf-level true prevalence of BRD, and the within-herd prevalence distribution, accounting for the imperfect nature of both diagnostic tests. Results: In total, 787 calves were examined, of which 58 (7.4%) had BRD as defined by a Wisconsin respiratory score ≥5 only, 37 (4.7%) had BRD as defined by a thoracic ultrasound score ≥3 only, and 14 (1.8%) had BRD on both thoracic ultrasound and clinical scoring. The primary model assumed the tests were independent and used informed priors for test characteristics. Using this model, the true prevalence of BRD was estimated as 4% (95% Bayesian credible interval (BCI): 1%, 8%). This estimate is lower than, or similar to, those found in other dairy production systems. Median within-herd prevalence varied from 0 to 22%. The prevalence estimate was not sensitive to whether the model treated the tests as conditionally dependent or independent.
When the case definition for thoracic ultrasound was changed to a score ≥2, the prevalence estimate increased to 15% (95% BCI: 6%, 27%). Discussion: The prevalence of calf respiratory disease, however defined, was low, but highly variable, in these seasonal calving dairy herds.
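The study's hierarchical Bayesian latent class model estimates true prevalence jointly with the test characteristics; the underlying principle of correcting an apparent prevalence for an imperfect test can be illustrated with the simpler closed-form Rogan-Gladen estimator. The sensitivity and specificity values below are assumptions for illustration, not the study's priors.

```python
def rogan_gladen(apparent: float, se: float, sp: float) -> float:
    """Rogan-Gladen true prevalence:
    TP = (AP + Sp - 1) / (Se + Sp - 1)."""
    return (apparent + sp - 1.0) / (se + sp - 1.0)

# Hypothetical example: 7% of calves test positive with an assumed
# sensitivity of 0.80 and specificity of 0.95.
tp = rogan_gladen(0.07, 0.80, 0.95)
print(f"true prevalence = {tp:.3f}")  # (0.07 + 0.95 - 1) / 0.75 = 0.027
```

The adjustment can pull the estimate well below the apparent prevalence when specificity is imperfect, which is one reason true-prevalence estimates differ from raw test-positive proportions.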

    Low accuracy of Bayesian latent class analysis for estimation of herd-level true prevalence under certain disease characteristics—An analysis using simulated data

    Estimation of the true prevalence of infected individuals involves applying a diagnostic test to a population and adjusting according to test performance, i.e., sensitivity and specificity. Bayesian latent class analysis for the estimation of herd-level and animal-level true prevalence has become increasingly used in veterinary epidemiology, and is particularly useful for incorporating uncertainty and variability into analyses within a flexible framework. However, the approach has not yet been evaluated using simulated data where the true prevalence is known. Furthermore, using this approach the within-herd true prevalence is often assumed to follow a beta distribution, the parameters of which may be modelled using hyperpriors to incorporate both the uncertainty and the variability associated with this parameter. Recently, however, the authors of the current study highlighted a potential issue with this approach: in particular, difficulty fitting the distributions and a tendency for the resulting distribution to invert and become clustered at zero. The objective of the present study was therefore to evaluate commonly specified models using simulated datasets where the herd-level true prevalence was known. The specific purpose was to compare findings from models using hyperpriors with those using a simple beta distribution to model within-herd prevalence. A second objective was to investigate sources of error by varying characteristics of the simulated dataset. Mycobacterium avium subspecies paratuberculosis infection was used as an example for the baseline dataset. Data were simulated for 1,000 herds across a range of herd-level true prevalence scenarios, and models were fitted using priors from recently published studies. The results demonstrated poor performance of these latent class models for diseases characterised by poor diagnostic test sensitivity and low within-herd true prevalence.
All variations of the model appeared to be sensitive to the prior and tended to overestimate herd-level true prevalence. Estimates were substantially improved in different infection scenarios by increasing test sensitivity and within-herd true prevalence. The results of this study raise questions about the accuracy of published estimates of the herd-level true prevalence of paratuberculosis based on serological testing using latent class analysis, and highlight the importance of conducting more rigorous sensitivity analyses than have been carried out in analyses published to date.
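A data simulation of the kind described above can be sketched as follows: herd infection status is drawn at a known herd-level true prevalence, within-herd prevalence for infected herds follows a beta distribution, and each animal receives a test with low sensitivity and high specificity. All parameter values are illustrative assumptions, not the study's settings.

```python
import numpy as np

rng = np.random.default_rng(7)
n_herds, herd_size = 1000, 100
herd_prev = 0.20      # known herd-level true prevalence
se, sp = 0.30, 0.99   # poor sensitivity, as is typical for MAP serology

# Infected herds get a beta-distributed within-herd prevalence
# (beta(2, 20): mean ~9%); uninfected herds have zero infected animals.
infected_herd = rng.random(n_herds) < herd_prev
within_prev = np.where(infected_herd, rng.beta(2, 20, n_herds), 0.0)

apparent_pos = np.empty(n_herds, dtype=int)
for h in range(n_herds):
    truly_pos = rng.random(herd_size) < within_prev[h]
    # Apply the imperfect test: detect true positives with prob. se,
    # and raise false positives on negatives with prob. 1 - sp.
    test_pos = np.where(truly_pos,
                        rng.random(herd_size) < se,
                        rng.random(herd_size) > sp)
    apparent_pos[h] = test_pos.sum()

# With low Se and low within-herd prevalence, infected and uninfected
# herds produce overlapping test-positive counts, which is the regime
# in which the latent class models performed poorly.
print("mean positives, infected herds:  ", apparent_pos[infected_herd].mean())
print("mean positives, uninfected herds:", apparent_pos[~infected_herd].mean())
```

Datasets simulated this way, with the true herd-level prevalence known by construction, are what allow the fitted latent class estimates to be checked against the truth.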